
Monday, June 25, 2012

Mobile Device Remote Identity Proofing Part 3 - Apples to Oranges

Download PDF of complete paper

IV. Apples to Oranges


Can a camera in a smart phone be used to capture the necessary images, including those used for biometric identification, required for the enrollment and subsequent vetting of an individual in an Identity Management System (IDMS)?  Smart phone manufacturers are equipping their newest products with cameras capable of ten or more megapixels, with Nokia’s latest offering claiming a forty-one megapixel camera.  This paper proposes using the camera to capture all of the required components to establish and vet an identity, so it is important to understand some of the terminology involved.

Contrary to popular belief, more megapixels do not make for a better image.  It is important to understand what makes up a good image and how it is defined within the multiple industries involved.  Most people base image quality on the output, or final product, the best example being print media.  So that is where we will start.

Pictures are printed in DPI, or dots per inch.  For example, a newspaper image is printed at 200 to 250 DPI, a magazine image at 400 to 600 DPI, yet a billboard is typically 30 DPI.  When you print a photo on your desktop printer, the optimal setting is 250 DPI.  Don’t be fooled by the fact that your typical desktop printer is capable of far greater resolution, typically from 720 to 1440 DPI.  The printer may be able to print very small dots, but it can only accurately reproduce colors by combining a large number of dots to emulate various tints.  That is why a 250 DPI image offers perfect output quality on a 1000+ DPI printer.

PPI is pixels per inch.  PPI is the resolution terminology used in the standards promulgated by the American National Standards Institute (ANSI) and the National Institute of Standards and Technology (NIST).  Within the context of this paper, PPI is used to define the resolution of the scanning mechanism used to capture a fingerprint.  PPI is an appropriate term to describe scanner input and it is the term used by the applicable Federal standards, but technically, samples per inch (SPI) is more accurate. “For example, if you scan at 200% at 300 PPI or if you scan at 100% at 600 PPI, the scanner [sees] the same data.  The PPI is different for each file, but the sampling of the original by the scanner is the same.  Maximum SPI of a given device is the optical resolution at 100%.” (Creamer, 2006)

How do dots per inch equate to pixels?  The term pixel is predominantly used to describe digital resolution on monitors, televisions, and smart phones.  A pixel is one dot of information in a digital photograph.  Digital photos today are made up of millions of tiny pixels, or dots (mega = million).  A digital photo made up of 15 megapixels is physically larger than a digital photo made up of 1.5 megapixels, not clearer or sharper.  The notable difference is in file size, not picture quality.  If you print a 250 DPI picture on an 8.5 by 11 inch piece of paper, you will be printing a maximum of 2125 by 2750 pixels.  Most computer screens display at 100 DPI.  A 1280 by 1024 resolution on your monitor equates to 1,310,720 pixels, or 1.3 megapixels.  This raises the question: why do you need a ten-plus megapixel camera to capture a very high quality image?  The answer is that you do not.
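The arithmetic in this comparison can be checked directly. A minimal sketch, using the page size, print DPI, and monitor resolution cited above:

```python
# Pixel arithmetic behind the DPI vs. megapixel comparison.

def pixels_for_print(width_in, height_in, dpi):
    """Pixels needed to fill a print of the given size at a given DPI."""
    return int(width_in * dpi), int(height_in * dpi)

# An 8.5 x 11 inch page at 250 DPI:
w, h = pixels_for_print(8.5, 11, 250)
print(w, h)                  # 2125 2750
print(w * h / 1e6)           # 5.84375 -- under 6 megapixels

# A 1280 x 1024 monitor:
print(1280 * 1024 / 1e6)     # 1.31072 -- about 1.3 megapixels
```

Even a full-page 250 DPI print needs fewer than six megapixels, which is why a ten-plus megapixel sensor adds file size rather than visible quality.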

V.  Camera Technology


With an explanation of some of the terminology behind us, we can explore the use of a digital camera, or a variant, for the capture of the necessary data for enrollment in an identity management system.  When the FIPS 201 standard was first published, capturing a facial image of an individual required, by standard, the use of a three-point-five megapixel camera.  This level of resolution was at the top end of the capabilities of digital cameras readily available to the public at the time.  Costs in excess of a thousand dollars for a camera meeting FIPS requirements were not uncommon.  That same camera was also unable to do anything more than capture an individual’s picture.  Today, native resolutions on smart phone integrated cameras are commonly five times the historical benchmark.  Exponential improvements in image capture hardware, firmware, and supporting software should also enable these same devices to not only capture a photo but be multipurposed for barcode reading, OCR-enabled document capture, fingerprint image capture, and even iris image capture.  4G and LTE networks now make high speed, efficient exchange of data possible, with next generation networks coming online to reinforce and bolster the capability.  Consistent with Moore’s Law, the capability of cell phones is on the steep end of the curve, with exponential growth and improvements in power, processors, and memory.

“A digital camera can capture data based on the mega-pixel ability of its CCD.  For example, a 2 megapixel digital camera shoots at approximately 1600x1200. 1600 pixels times 1200 pixels = 1,920,000 total pixels (rounded up).  Usually the camera images have no resolution assigned to them (although some cameras can do this).  When you open a file into an image editing program such as Photoshop, a resolution HAS to be assigned to the file.  Most programs, including Photoshop, use 72 PPI as a default resolution.” (Creamer, 2006)

VI. Establishing ownership


Biometrics is the science and technology of measuring and analyzing biological data.  Biometric identifiers are the distinctive, measurable characteristics used to identify individuals. (Jain, Hong, & Pankanti, 2000)  The two categories of biometric identifiers are physiological and behavioral characteristics. (Jain, Flynn, & Ross, 2008)  Physiological characteristics are related to the shape of the body and include, but are not limited to: fingerprint, face recognition, DNA, palm print, hand geometry, iris recognition (which has largely replaced retina), and odor/scent.  Behavioral characteristics are related to the behavior of a person, including but not limited to: typing rhythm, gait, and voice.

The most common biometric identifiers currently used in IdM systems are fingerprint and facial recognition.  With the current PIV and PIV-I programs, a dual approach in accordance with NIST recommendations (NIST, 2003) is used.  The capture of these biometric identifiers is easily within the scope of commonly available commercial technologies incorporated into today’s smart devices.  It is the algorithms required for image analysis and the development of minutiae for analytical and comparison purposes that pose the challenge.  Current facial recognition software is more than capable of effectively using images captured within the common 8-14 megapixel range of the average smart phone.  The technology is rapidly outpacing the market’s ability to sustain new releases and/or uses, as evidenced by Nokia’s release of a smart phone with a 41 megapixel camera sensor dubbed the 808 PureView. (Foresman, 2012)  So the specific challenge relates to the fingerprint.

 

Works Cited

Creamer, D. (2006). Understanding Resolution and the meaning of DPI, PPI, SPI, & LPI. Retrieved May 30, 2012, from http://www.ideastraining.com/PDFs/UnderstandingResolution.pdf

Foresman, C. (2012, March 2). Innovation or hype? Ars examines Nokia's 41 megapixel smartphone camera. Retrieved March 5, 2012, from Ars Technica: http://arstechnica.com/gadgets/news/2012/03/innovation-or-hype-ars-examines-nokias-41-megapixel-smartphone-camerainnovation-or-hype-ars-examines-nokias-41-megapixel-smartphone-camera.ars?clicked=related_right

Jain, A. K., Flynn, P., & Ross, A. A. (2008). Handbook of Biometrics. New York, NY, USA: Springer Publishing Company.

Jain, A., Hong, L., & Pankanti, S. (2000, February). Biometric identification. (W. Sipser, Ed.) Communications of the ACM, 43(2), 91-98.

NIST. (2003, February 11). Both Fingerprints, Facial Recognition Needed to Protect U.S. Borders. Retrieved March 5, 2012, from NIST; Public and Business Affairs: http://www.nist.gov/public_affairs/releases/n03-01.cfm

Friday, June 22, 2012

Mobile Device Remote Identity Proofing Part 2 - The requirement for ownership

Download PDF of complete paper

I.  Introduction

Although it is unlikely that development and adoption of a single ubiquitous identity will occur in the next five years, it is reasonable to assume that various manifestations of an individual’s identities are, and will continue to be, established at various and increasing levels of trust and assurance.  The challenge to be faced is to fast track the ecosystem’s ability to work at moderate and high levels of assurance.  Historical barriers to widespread use of trusted identities at a high level of assurance are predominantly based on the high cost and limited availability of “approved” identity proofing “tools” and the infrastructure requirements in the security and maintenance of the “representation” of that identity.  This concept paper explores the former challenge, the latter being a topic that deserves its own attention.

II.  Origins

Being able to establish and prove an identity, and then use that proof of identity to one’s advantage, is as old as humanity itself.  It could be argued that gender, a genotype, was first used as a biometric identifier in the Garden of Eden when Adam, on being asked if he took fruit from the tree of knowledge, said “she gave it to me”.  The story in Genesis involves the only two living humans on earth and an omnipotent creator, which makes identification straightforward.  This did not deter Adam from making a clear identification in order to shift guilt away from himself.  Traditional methods of establishing and/or confirming the identity of an unknown person have relied on secret knowledge or possession of a token of some type.  Passwords and PINs, the proverbial “what you know” used so commonly today, date back to the Roman Empire.  The Hellenistic Greek historian Polybius chronicled how passwords were used among the Roman legions.

The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword - that is a wooden tablet with the word inscribed on it - takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.  (About.com, 2012)

Tokens, what you have, date to the Bronze Age.  “A. Leo Oppenheim of the Oriental Institute of the University of Chicago reported the existence of a recording system that made use of counters, or tokens. According to the Nuzi texts, such tokens were used for accounting purposes; they were spoken of as being deposited, transferred, and removed.” (Schmandt-Besserat, 1977) 
Today the PIN, password, and token are synonymous with modern society.  There are seemingly endless requirements for passwords, from the moment you turn on your computer through the moment you click on the accept agreement or purchase icon.  Where would you be without your ATM card, PIN, and the ability to access your cash anywhere, at any time, worldwide?  The problem is that the methodology we are using in modern America has changed little since its antiquarian origins.  We are still only commonly testing for knowledge or possession, not ownership.  Enter biometrics.

III. The requirement for ownership

Testing for possession or knowledge has become the standard for commercial identity management.  In the 21st century most people have a virtual identity presence, one that resides on the World Wide Web.  This is the identity they use to move among the social networking sites, bank, pay bills, and shop.  With the massive increase in the use of the web has come a corresponding increase in identity theft.  “In 2011 identity fraud increased by 13 percent.  More than 11.6 million adults became a victim of identity fraud in the United States, while the dollar amount stolen held steady”. (Javelin Strategy & Research, 2012)  Steps have been taken to strengthen identity security, especially in the financial sector, with the addition of images, secret questions, and a plethora of additional knowledge-based steps that are far more effective at frustrating users than they are at increasing security.  Each of these additional security features is still nothing more than additional knowledge, and additional knowledge can easily be stolen.  What is required is something that is definitively tied to the identity holder, something that cannot be forged, lost, or stolen.  That something is biometrics.

Biometrics, like passwords and tokens, is not a 21st or even 20th century phenomenon.  Handprints were used for identification purposes nearly four thousand years ago, when Babylonian kings used an imprint of the hand to prove the authenticity of certain engravings and works.  Babylonia had an abundance of clay and a lack of stone, which led to the extensive use of mudbrick.  Ancient Babylonians understood that no two hands were exactly alike and used this principle as a means of identity verification.  Modern dactyloscopy, the science of fingerprints, was used as early as 1888, when Argentine police officer Juan Vucetich published the first treatise on the subject. (Ashbourn, 2000)

Biometrics can be defined as observable physical or biochemical characteristics that can typically be placed into two categories: phenotype and genotype.  The phenotype category contains the identifiers most commonly used for transactional identification today.  Fingerprints, iris, facial features, and signature patterns are all phenotype identifiers based on features or behaviors that are influenced by experience and physical development.  From the owner’s perspective these are often viewed as non-threatening and non-intrusive.  The genotype category measures genetically determined traits such as gender, blood type, and DNA, the collection of which is generally viewed as very intrusive.  DNA, the ultimate biometric signature, is generally considered the most intrusive and is often vilified in popular fiction.  In the 1997 film Gattaca, DNA determines an individual’s status in society, with each person categorized as a Valid or In-Valid.  In the 2012 blockbuster The Hunger Games, DNA serves as a signature for children entering the Reaping, a lottery culminating in a morbid death match.  Both of these examples of pop culture reflect the underlying distrust society has in the government’s possession of such an intimate identifier.

Biometrics is primarily used in two modes, each with a different purpose: identification and verification.  The term recognition is a generic one encompassing the one-to-one and one-to-many modes in which biometric systems operate.  Biometric identification is the process of associating a sample to a set of known signatures; for example, the US-VISIT program checks a presented set of fingerprints [sample] against multiple databases containing known signatures.  The results of a one-to-many search are usually displayed as a group of the most probable matches, often associated with a probability score, expressed as a percentile, that illustrates the degree of match between the sample and the matched group.  Biometric verification is the process of authenticating the sample against the record of a specific user, with the results delivered in binary fashion, yes or no.  Real world examples of this one-to-one verification include fingerprint match-on-card in the PIV program, or a third factor of authentication to an access control system where what you have and what you know need to be validated against ownership.  Most commercial systems operate in verification mode.
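The distinction between the two modes can be sketched in a few lines of code. This is a toy illustration only: the `score` function and the enrolled data are hypothetical placeholders, not a real fingerprint matcher.

```python
# Toy sketch of biometric identification (1:N) vs. verification (1:1).
# score() stands in for a real matcher; its values are invented.

def score(sample, template):
    """Hypothetical similarity in [0, 1]."""
    return 1.0 if sample == template else 0.3

enrolled = {"alice": "fp_A", "bob": "fp_B", "carol": "fp_C"}

def identify(sample, top_n=2):
    """One-to-many: rank all known signatures by degree of match."""
    ranked = sorted(enrolled.items(),
                    key=lambda kv: score(sample, kv[1]),
                    reverse=True)
    return ranked[:top_n]

def verify(sample, claimed_id, threshold=0.9):
    """One-to-one: binary yes/no against one claimed record."""
    return score(sample, enrolled[claimed_id]) >= threshold

print(identify("fp_B")[0][0])    # bob -- most probable match first
print(verify("fp_B", "bob"))     # True
print(verify("fp_B", "alice"))   # False
```

Identification returns a ranked candidate list with scores; verification collapses to a single yes or no, which is why most commercial systems operate in that mode.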

Before identification or verification can ever occur, some type of enrollment process must take place in order to establish, to some level of trust, that the biometric signature is owned by a specific individual.  Only then can varied rights and privileges (attributes) be assigned to that owner and subsequently secured by means of PKI or similar technology.  One of the primary impediments to broad scale use of biometric signatures is the expense and inconvenience of enrollment programs.  But what if it were as easy as using your mobile phone in your living room?

Using a mobile device to establish the validity of the claim of a specific identity is simple in principle but problematic in execution.  The capture of the required information can be divided into two steps: creation of a claimant’s profile, and binding a known identity to the claimant.  Creation of the profile typically includes the identification and capture of two data types: the first is biographical/descriptive data, the second is biometric data.  For the purposes of this paper, we shall refer to these combined datasets as the Individual Profile, or IP.
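As a rough sketch, the Individual Profile can be thought of as a record combining the two data types. The field names here are illustrative assumptions, not drawn from FIPS 201 or any NIST standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Individual Profile (IP): biographical
# data plus biometric data. Field names are assumptions.

@dataclass
class IndividualProfile:
    # Biographical / descriptive data
    full_name: str
    date_of_birth: str
    # Biometric data, captured with the device camera
    facial_image: bytes = b""
    fingerprint_images: list = field(default_factory=list)

ip = IndividualProfile("Jane Q. Public", "1980-01-01")
ip.fingerprint_images.append(b"<captured fingerprint image>")
print(len(ip.fingerprint_images))    # 1
```

Binding a known identity to the claimant then amounts to vetting this record against authoritative sources before any credential is issued.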

This concept is based on leveraging the rapidly increasing level of hardware technology and network availability incorporated into the worldwide wireless telecommunications system to provide a mechanism for the validation of claims to a specific identity, binding that identity to the claimant, and securing the identity for use in an environment requiring various levels of trust by a wide array of relying parties. 

Wednesday, June 13, 2012

Managed Attributes, Not Standards, Lead to Interoperability

Download Complete Paper

I.     Introduction

Managed attributes ensure essential interoperability. This is the foundation for providing the most skilled, most timely and most appropriate response to any situation, regardless of size. Emergency managers and incident commanders can make sound decisions with the additional data that comes from knowing when and where specific resources are located, what tasking assignments have been given and to whom. Not only is everyone on the scene accounted for, but tasks are given to responders with verified skills and capabilities thereby contributing to the command staff’s ability to predict the next threat and deploy resources accordingly, maintain critical situational awareness and respond to dynamic conditions quickly and effectively. Assigning responders to duty is not an issue. What’s critical is assigning the responder with the appropriate and verifiable skills to a job he/she is capable of accomplishing, ensuring a positive outcome for the situation and the responder.

II.    Setting the scene

A. Personal experience sets the stage for complete understanding

My first exposure to pre-hospital care was the mandatory “first responder” training required for firefighters by the State of California more than twenty years ago. The training program, which was taken concurrently with a CPR class, added up to more than the 120 hours of training required to be certified as a Basic EMT in the Commonwealth of Massachusetts a couple of years later. In the end it was not the hours required to complete a training program that struck me as the unusual dichotomy but the difference in skills. As a “first responder” I was trained in how to properly remove a helmet, place the electrodes from the 12-lead EKG on a patient, spike IVs, assist with medications, etc. As a “Basic EMT” in Massachusetts I was not trained in any of those skills. In fact I did not use them again until the PB waiver program was instituted. Many years later, as a regional hospital preparedness coordinator, I struggled with the concept that we could not send paramedics across regional boundaries within the same state, even within the same county, and still allow them to work as paramedics, because scope of practice and certification was regional and there was no reciprocity within the state!

Times have changed but the essential challenges in the practice of pre-hospital care have not. There may be an EMS community, but it is segregated even within its day-to-day practices, never mind responses to what can be categorized as disasters. On February 20, 2003, the fourth deadliest nightclub fire and the ninth deadliest place of public assembly fire in U.S. history took place at the Station Nightclub in Rhode Island. The multi-jurisdictional (on a very large scale) fire/EMS response was atypical when it comes to patient care, and it worked. It is conjecture, but I would hypothesize that the response was modern in capability but traditional in implementation. That is, a small state with close border ties to services in Massachusetts and Connecticut and familiarity among the services responded as needed; there were no questions of scope of practice, and patients were cared for at the level the provider was trained to, without immediate regard for local or regional regulations.

In addition to the one hundred fatalities, there were an estimated 230 casualties, 186 of whom were transported to hospitals by first responder agencies. Over five hundred firefighters, EMS personnel, and police responded, with fifty-seven public and six commercial ambulance companies providing both basic and advanced life support services. (Kuntz, June 23, 2000)

I would argue the Station Nightclub fire response was a success carried out by heroic and dedicated professionals. The brethren of these same professionals also answered the call to service for Hurricane Katrina in late August and early September of 2005. I would argue that that response was more typical of large multi-jurisdictional, multi-state responses. Some level of organization was applied to the call-out and activation of resources on a national scale. The typical American answer to a call to duty resulted in a massive response. However, many police, fire, and EMS organizations from outside the affected areas were reportedly hindered or otherwise slowed in their efforts to send help and assistance to the area. FEMA sent hundreds of firefighters who had volunteered to Atlanta for two days of training on topics including sexual harassment and the history of FEMA. (Bluestein, 2005)
               

III.   Underlying Problems

So what is the underlying problem? We can look at it from a national service perspective as well as a level-of-service perspective. Take a look at the state of the service in general. An excellent summary is contained in a recent report issued by the National Academy of Sciences.

“Each year in the United States approximately 114 million visits to EDs occur, and 16 million of these patients arrive by ambulance. The transport of patients to available emergency care facilities is often fragmented and disorganized, and the quality of emergency medical services (EMS) is highly inconsistent from one town, city, or region to the next. Multiple EMS agencies (some volunteer, some paid, some fire based, others hospital or privately operated) frequently serve within a single population center and do not act cohesively. Very little is known about the quality of care delivered by EMS services. The reason for this lack of knowledge is that there are no nationally agreed-upon measures of EMS quality, no nationwide standards for the training and certification of EMS personnel, no accreditation of institutions that educate EMS personnel, and virtually no accountability for the performance of EMS systems. While most Americans assume that their communities are served by competent EMS services, the public has no idea whether this is true, and no way to know.

The education and training requirements for the EMTs and paramedics are substantially different from one state to the next and consequently, not all EMS personnel are equally prepared. For example, while the National Standard Curricula developed by the federal government calls for paramedics to receive 1,000 - 1,200 hours of didactic training, states vary in their requirements from as little as 270 hours to as much as 2,000 hours in the classroom. In addition, the range of responsibilities afforded to EMTs and paramedics, known as their scope of practice, varies significantly across the states. National efforts to promote greater uniformity have been progressing in recent years, but significant variation remains.” (Committee on the Future of Emergency Care in the United States Health System, 2006)

My initial brief example of the differences in training between states pales in comparison to the preceding quote. We have established that we have dedicated, trained, and competent personnel working in an environment that is restrictive primarily due not to the lack of a national standard but to a lack of information. I will expound on that statement shortly. First, however, let’s take a look at the problem from a scope vs. patient care perspective. An excellent example was discussed in an article by Tori Socha published in February 2011. The article, dealing with stroke, reminded me of the initial introduction of thrombolytic drug therapy through pre-hospital providers in Massachusetts, and the personal struggle some metropolitan medics had being able to use this lifesaving tool in one region, with their big city services, but not have it available to them in the small, local, sometimes volunteer ALS services in the communities in which they resided. Ms. Socha stated:

“Stroke, with direct and indirect costs totaling $68.9 billion, is a major primary health priority in the United States. Every 40 seconds, someone in the United States experiences a stroke, and every 3 to 4 minutes, someone dies of a stroke. Administering intravenous (IV) recombinant tissue plasminogen activator (tPA) within 3 hours of onset of symptoms is associated with a 30% greater likelihood of decreased disability compared with placebo. In selected patients, IV recombinant tPA may be safely used up to 4.5 hours after symptom onset. Despite its clinical efficacy and cost-effectiveness, only 3% to 8.5% of patients with stroke receive recombinant tPA. One limitation is timely access to care. In 2000, the Brain Attack Coalition recommended establishing primary stroke centers (PSCs). Researchers recently conducted a study to determine the proportion of the population with access to Acute Cerebrovascular Care in Emergency Stroke Systems (ACCESS). The analysis found that if ground ambulances are not permitted to cross state lines, fewer than 22.3% of Americans (1 in 4) have access to a PSC within 30 minutes of symptom onset.” (Socha, 2011)

There is no doubt that lack of definition causes, at bare minimum, organizational angst and disparity in the EMS service. It can also be argued that this lack of definition can result in loss of life, not due to negligence but to the inability of available services to provide a timely response across jurisdictional boundaries, stymied by the invisible but very real wall of scope of practice limitations. This is evidenced by the research from the Socha article as well as countless additional journal articles and studies. The truly disquieting issue is that this conundrum is not unique to an incident of national consequence but can be found in day-to-day EMS operations.

IV.   Solutions

So what is the solution? I left emergency services several years ago to seek technology solutions for common operational problems faced by our nation’s first responders. Over the last ten years I have listened to a consistent theme propagated, in general, by well-meaning federal civil servants. Regardless of the problem, the solution is of course to regulate it at the federal level. The following quote from the Committee on the Future of Emergency Care starts with a rousing call to arms.
“While today’s emergency care system offers significantly more medical capability than was available in years past, it continues to suffer from severe fragmentation, an absence of system wide coordination and planning, and a lack of accountability. To overcome these challenges and chart a new direction for emergency care, the committee envisions a system in which all communities will be served by well planned and highly coordinated emergency care services that are accountable for their performance. In this new system, dispatchers, EMS personnel, medical providers, public safety officers, and public health officials will be fully interconnected and united in an effort to ensure that each patient receives the most appropriate care, at the optimal location, with the minimum delay.” (Committee on the Future of Emergency Care in the United States Health System, 2006)
All communities should be served with highly coordinated emergency care services that are accountable for their performance, and those services should be interconnected. I do, however, disagree with the manner in which the coordination, accountability, and connectivity should occur. A bit further in the report, the foundation of the proposed solution is revealed.
“The National EMS Scope of Practice Model Task Force has created a national model to aid states in developing and refining their scope-of-practice parameters and licensure requirements for EMS personnel. The committee supports this effort and recommends that state governments adopt a common scope of practice for EMS personnel, with state licensing reciprocity. In addition, to support greater professionalism and consistency among and between the states, the committee recommends that states accept national certification as a prerequisite for state licensure and local credentialing of EMS providers. Further, to improve EMS education nationally, the committee recommends that states require national accreditation of paramedic education programs. The federal government should provide technical assistance and possibly financial support to state governments to help with this transition.” (Committee on the Future of Emergency Care in the United States Health System, 2006)
There it is. Solution by national regulation. This could be effective if the United States were the size of Switzerland. It would also be quite effective if we did not have 50 different autonomous state governments, not including territories. The individual states do not want to give up their sovereignty, nor should they be forced to. It is not necessary. The solution is to allow the authority having jurisdiction the freedom to define the scope of practice. How can this premise, the perceived status quo, change things? The logical proposal is the delivery of this [scope] information in a trusted fashion, attached to a non-repudiable identity. For those familiar with the ongoing work by the federal government to leverage trusted identity for physical and logical access control, you likely have an idea where I am going with this concept. Several states have taken definitive steps to leverage the work done by the federal government to institute their own identity management (IDM) programs. One or two truly visionary early adopters are using the trusted identity as a foundation and attaching attributes. For example, some states have implemented, as part of their functional mandate, “authenticated qualifications and attributes”, by which they mean attributes trusted and validated by the authority having jurisdiction or accrediting organization, and the ability to tie first responders’ identities and attributes to authoritative sources of information (e.g. licensing, certification, and status databases for paramedics, police, licensed health care practitioners, firefighters, etc.).

Management of these attributes allows for the rapid and effective allocation of personnel resources during an operation.  Historically, management of these resources, assisted through formal and informal mutual aid compacts, was hampered by a lack of information and trust.  Further, there is often a lack of understanding of the differing individual elements that define an attribute from jurisdiction to jurisdiction.  Without any mechanism to provide a trusted and detailed definition of an attribute, the only recourse has been to compare attributes between jurisdictions at the lowest common denominator.  Categorization of resources has been limited to generalized groupings like Emergency Support Functions (ESFs) and subsets of the Critical Infrastructure and Key Resources (CI/KR) sectors.  A frequently disputed alternative has been for the federal government to dictate the attribute definitions to state and local authorities.  This lack of information is compounded by the specter of legal accountability for the jurisdiction receiving the resources, especially for those attributes which directly influence life safety.  The result is an underutilization of the available resources.

Attribute management within an identity system is similar to attribute management in a network. In network management, an “attribute” is a property of a managed object, and it has a value. Similarly, in an attribute-enhanced IDM system, an attribute is a property of the person who has enrolled, and the value is what that attribute represents. For example: Joe Smith enrolls and designates that he is a paramedic. Joe is the “managed object” and paramedic is the “attribute.” The system then associates the “value”: the skill set of a paramedic.
Also as in network management, certain mandatory initial values for attributes are specified as part of the managed object class definition. Associating the skill set of a paramedic is a mandatory initial value, but conditional values can also be added; these may be unique to the jurisdiction where a responder works at the local, regional, or state level. For a paramedic, these conditional attributes could be additional training or certifications above and beyond the mandatory initial value defined by the federal AHJ. This allows all stakeholders to have their cake and eat it too: the federal government establishes the baseline, and state and local jurisdictions are not forced into long-term, expensive programmatic changes.
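The managed object / attribute / value model described above can be sketched in a few lines of code. This is a minimal illustration, not part of any deployed system; the names (`Responder`, `FEDERAL_BASELINE`) and the sample skill sets are hypothetical stand-ins for the federal baseline and local conditional attributes.

```python
from dataclasses import dataclass, field

# Hypothetical federal baseline: the mandatory initial value (skill set)
# associated with each role attribute, as defined by the federal AHJ.
FEDERAL_BASELINE = {
    "paramedic": {"airway management", "cardiac monitoring", "IV access"},
}

@dataclass
class Responder:
    """The 'managed object': an enrolled person with a role attribute."""
    name: str
    role: str
    # Conditional values added by the local, regional, or state jurisdiction.
    conditional: set = field(default_factory=set)

    @property
    def skills(self) -> set:
        # Effective value = mandatory federal baseline + local add-ons.
        return FEDERAL_BASELINE[self.role] | self.conditional

# Joe enrolls as a paramedic; his jurisdiction adds a conditional attribute.
joe = Responder("Joe Smith", "paramedic",
                conditional={"thrombolytic administration"})
print(sorted(joe.skills))
```

The point of the split is that the baseline never changes per jurisdiction; only the conditional set does, which is what lets states keep their local training programs intact.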

When the attribute dataset is read by a computing device, the retrieved information is reported to the user in local terminology. An instant comparison is made between the individual knowledge and task statements of the sending jurisdiction's certification requirements and those of the receiving jurisdiction, and critical discrepancies are reported. For example, as part of the comparison, the paramedic's table of pharmacology is compared between the sending and receiving jurisdictions, and the receiving jurisdiction's report shows that the medic is not trained in the administration of a thrombolytic, which is part of the receiving jurisdiction's scope of care.
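The comparison step amounts to a set difference in each direction. The sketch below is a hypothetical illustration of that logic, using invented pharmacology tables for the thrombolytic example above; a real system would draw these from the authoritative licensing and certification databases.

```python
def compare_scope(sending: set, receiving: set) -> dict:
    """Diff the sending medic's scope against the receiving jurisdiction's.

    'missing' lists items in the receiving jurisdiction's scope of care that
    the incoming medic is not trained in (the critical discrepancies to flag);
    'additional' lists training the medic brings beyond the local scope.
    """
    return {
        "missing": sorted(receiving - sending),
        "additional": sorted(sending - receiving),
    }

# Invented pharmacology tables for the two jurisdictions.
sending_scope = {"epinephrine", "morphine", "naloxone"}
receiving_scope = {"epinephrine", "naloxone", "thrombolytic"}

report = compare_scope(sending_scope, receiving_scope)
print(report)  # {'missing': ['thrombolytic'], 'additional': ['morphine']}
```

Here the receiving jurisdiction's report would flag the thrombolytic gap, exactly the discrepancy a command authority needs before assigning the incoming medic.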

My example was originally designed to use national regulatory or voluntary compliance standards as a baseline. A methodology was then developed allowing local, regional, or county-based training and skill sets to be incorporated into the system. The subsequent modifications to the system provided a means of tracking these local training programs, optionally utilizing the resources that result from them, and communicating this information to disparate jurisdictions whose training has a completely different baseline but whose terminology and outcomes are similar.

Systems of this type are designed to give command authorities trusted, verified data on the skills, licenses, and certifications held by responding individuals and teams, allowing these human resources to be used at the highest common denominator, thereby making the most effective use of the resources available and providing the highest level of care and service to those in need during a disaster of any scale.
Twenty-five years ago, very little if any consideration was given to the need for instant reciprocity.  With few exceptions, emergency resources were drawn locally or regionally from immediately adjacent jurisdictions.  Today, responses to critical events can be national, leveraging the spirit and altruism that define America.  Twenty-five years ago, a piece of paper, a uniform, or a badge could serve as proof of qualification.  Today, the litigiousness of our society has prevented even the federal government from using emergency services personnel to their demonstrated capabilities.  The advent of the “Google” age of instant access to information has raised both the demand for service and the expectation that such service will be quickly and effectively delivered.

[1] Kuntz, K. (2000, June 23). Federal Advisory Committee, June 23, 2000: National Construction Safety Team Investigation, Station Nightclub Fire Emergency Response. Washington, D.C.: U.S. Fire Administration, U.S. Department of Homeland Security.
[2] Bluestein, G. (2005, September 7). Firefighters stuck in Ga. awaiting orders. USA Today.
[3] Committee on the Future of Emergency Care in the United States Health System (2006). Emergency Medical Services at the Crossroads. Institute of Medicine, National Academy of Sciences. Washington, D.C.: National Academies Press.
[4] Socha, T. (2011, February 15). Timely Access to Primary Stroke Centers in the United States. (HMP Communications LLC) Retrieved April 12, 2011, from First Report Managed Care: http://www.firstreportnow.com/articles/timely-access-primary-stroke-centers-united-states

This concept paper was first delivered as an open letter to the National EMS Advisory Council in January of 2011.  A revised version of the paper was published by the IEEE as part of a poster presentation at the annual IEEE Conference on Technologies for Homeland Security in December of 2011.