Thursday, June 28, 2012

Mobile Device Remote Identity Proofing Part 4 – Best of the Biometrics

Download PDF of complete paper

VII. Fingerprints


There are two national fingerprint specifications: the FBI's Integrated Automated Fingerprint Identification System (IAFIS) Image Quality Specifications (IQS) Appendix F and NIST's PIV-071006.  Appendix F imposes stringent image quality conditions, focusing on human fingerprint comparison and facilitating large-scale machine many-to-many matching operations.  (FBI Biometric COE, 2010)  Our focus, however, will be the PIV-071006 standard, a lower-level standard designed to support one-to-one fingerprint verification.  The class resolution requirement for fingerprint capture and use for Personal Identity Verification (PIV) at Fingerprint Application Profile (FAP) level ten or above is 500 PPI with a maximum tolerance variation of ± 2%.  Class resolution refers to the resolution required for acquisition or imaging-related use. (Wing, 2011)
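The 500 PPI ± 2% requirement cited above works out to a simple acceptance window. A minimal sketch (the helper name and values are illustrative, not drawn from the standard):

```python
# PIV-071006 class resolution requirement described above:
# 500 PPI nominal, with a maximum tolerance variation of +/- 2%.
NOMINAL_PPI = 500
TOLERANCE = 0.02  # 2%

def within_piv_tolerance(measured_ppi: float) -> bool:
    """Return True if a measured scanner resolution falls inside the
    500 PPI +/- 2% window (i.e., 490 to 510 PPI inclusive)."""
    low = NOMINAL_PPI * (1 - TOLERANCE)
    high = NOMINAL_PPI * (1 + TOLERANCE)
    return low <= measured_ppi <= high

print(within_piv_tolerance(495))  # inside the window -> True
print(within_piv_tolerance(515))  # outside the window -> False
```

In other words, a capture device measuring anywhere from 490 to 510 PPI satisfies the class resolution requirement.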

Most of the complexity related to resolution pertains to the friction ridges of the fingerprint.  A friction ridge is a raised section of the epidermis of the skin.  A fingerprint is a trace image of the ridges of a human hand or foot, including the fingers and toes.  Traditionally, fingerprints were captured by rolling the pad above the last joint of the fingers and thumbs on an ink pad and then rolling the inked pad onto a piece of smooth card stock.  Impressions of fingerprints are also left behind on various surfaces when the natural secretions of the body, or cosmetic oils and body lotions, gathered on the ridges come into deliberate or accidental contact with a smooth surface.  These are referred to as latent prints.  While not always immediately visible, these impressions can be lifted by dusting the print with specialized powders or exposing it to chemicals like silver nitrate or cyanoacrylate ester, then capturing the image by pressing it to a specialized paper or plastic media.  The latent print can then be compared to the inked print, with a reasonable chance of a match being determined by an examiner experienced in dermatoglyphics.  Although effective, the sheer volume of prints and comparisons required precludes this method's use in identity management.  In other words, it is not practically scalable.  What is needed is a means of digitally capturing the fingerprint and storing the resulting record.  Live scan is the most widely used method of accomplishing this.  A live scan involves pressing or rolling a finger onto a specially coated piece of glass, or platen, and then imaging the fingerprint using optical, ultrasonic, capacitive, or thermal imaging to capture the ridges of the finger and the valleys between them.  Optical imaging is in essence a specialized form of digital photography.  The major difference between a digital camera and an optical imager for capturing fingerprints is the presence of a light-emitting phosphor layer, which illuminates the surface of the finger, increasing the quality of the resulting image.

There are challenging problems when developing fingerprint recognition systems that use a mobile camera.  First, the contrast between the ridges and the valleys in images obtained with a mobile camera is low.  Second, because the depth of field of the camera is small, some parts of the fingerprint region are in focus while others are out of focus.  Third, the backgrounds, or non-finger regions, in mobile camera images vary greatly depending on the place and time of image capture. (Lee, Lee, & Kim, 2008)  So is there an insurmountable challenge in using a smart phone camera to capture a fingerprint?  Image quality is determined by light quality, lens quality and type, and shutter speed.  Smart phones do not fully address each of these important elements, trading function for size and ease of use.  Because of this, you will get a better picture from a low-end Digital Single Lens Reflex (DSLR) camera than you will from a high-end smart phone camera.  Shutter speed is not an applicable issue with fingerprint capture, but light and lens quality and type are.

An additional challenge is the probability that one can spoof or fool an optical camera with an image or impression of a fingerprint.  This is resolved within the industry by using various live finger detection technologies.  One means of live finger detection is accomplished “by measuring the unique electrical properties of a living finger that not only characterize the finger print but measure what is underneath it. This technology has the capability to process the acquired data, that is, characterize and classify the results in a way that enables the system to verify a living finger with a very high degree of confidence.” (Clausen & Christie, 2005)  It is unlikely that this type of fraud prevention technology can be integrated into widely available smart phones in the near future, so the risk of fraudulent fingerprints in a mobile identity management program will have to be addressed through policy or another more easily implementable technology enhancement.  Despite the obvious challenges, capture of a usable fingerprint image with a cell phone camera is not impossible.  The operator must take into account the fixed focal length of the camera lens and make sure the auto focus is disabled in order to get close enough to capture an image with prominent ridges.  Lighting also remains a challenge.  An informal test while this paper was being written used an iPhone® 4 both with a flash and without.  A distance of four inches from the camera, with no flash, in a brightly lit room resulted in the best image, with clearly defined ridges in the left index finger of the test subject.  By importing the image into an image editor and using its color inversion tool, an image as clear to the naked eye as one caught on a live scan was produced.  This test was by no means scientific, but it serves as an indicator that it is not a stretch to utilize off-the-shelf cell phone technology.
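The color inversion step from the informal test above is simple arithmetic. A minimal sketch, using a plain nested list of 8-bit grayscale values in place of real photo data (an actual editor or imaging library would do the same math per pixel):

```python
# Color inversion as used in the informal iPhone test described above:
# each 8-bit grayscale value v becomes 255 - v, so dark ridges render
# light and light valleys render dark, mimicking a live scan image.

def invert_grayscale(image):
    """Invert an 8-bit grayscale image given as rows of 0-255 values."""
    return [[255 - v for v in row] for row in image]

# Toy 2x3 sample: dark ridge, light valley, dark ridge.
ridge_sample = [
    [20, 230, 20],
    [25, 240, 25],
]
print(invert_grayscale(ridge_sample))
```

The point is only that inversion loses no information; it re-presents the same ridge detail in the polarity an examiner (or matcher) expects.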
The methodology of the image capture is not necessarily a limiting factor, even taking into account challenges with optics and lighting.  The recognition algorithms used in the associated databases can counter or resolve some of the issues.  Many fingerprint recognition algorithms perform well on databases collected with high-resolution cameras, outperforming feature-only searches by trained examiners. (Indovina, Hicklin, & Kiebuzinski, 2011)

VIII. Face

Facial recognition is considered to be the most immediate and transparent biometric modality when it comes to physical authentication applications.  Why is it that many people are inclined to give up their facial image without question while the concept of giving up a fingerprint causes them great discomfort and angst?  Facial recognition is a modality that humans have always depended on to authenticate other humans.  We are in essence hardwired for facial recognition.  Therefore the addition of facial recognition through, or enhanced by, technology is an easy one to accept.  “Whether or not faces constitute a [special] class of visual stimuli has been the subject of much debate for many years. Since the first demonstrations of the inversion effect…it has been suspected that unique cognitive and neural mechanisms may exist for face processing in the human visual system.” (Sinha, Balas, Ostrovsky, & Russell, 2006)

Facial recognition as a technology is one of the most mature of the biometric modalities.  It is also relatively simple from the image capture standpoint.  Capture of a facial image requires little or no cooperation from the subject, making it the technique of choice for passive applications like those used in airports and casinos.  On the surface it seems as though all of the issues are algorithm related, but since our concept is focused on a cell phone camera as the capture device, this is not really the case.

We previously discussed the megapixel issue, but megapixel capability has no discernible impact on the biggest challenges with facial recognition: image capture and pose correction.  Image capture is a light and optics issue.  One of the biggest drawbacks of smart phone cameras is the size of the sensor.  Camera technology has changed, but the basic principles have applied since the first tintypes were produced in the mid-19th century.  The sensor is the replacement for emulsion-based films.  The larger the sensor, the more light it can detect, resulting in better picture quality.  Smart phone cameras have a much smaller sensor than the traditional 35mm film size and as a result have a smaller angle of view when used with a lens of the same focal length.  This results in an image that is essentially cropped.  In order to adjust for this, the camera must be further back from the subject, posing problems related to lighting and detail.
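The "cropped" effect above can be made concrete with a little trigonometry. A sketch under stated assumptions: the phone sensor dimensions below (roughly a 1/3.2-inch class sensor, about 4.54 x 3.42 mm) are illustrative numbers, not the specs of any particular handset.

```python
import math

def diagonal_mm(width_mm, height_mm):
    """Sensor diagonal; full-frame 36 x 24 mm gives ~43.3 mm."""
    return math.hypot(width_mm, height_mm)

def angle_of_view_deg(sensor_dim_mm, focal_length_mm):
    """Angle of view along one sensor dimension for a simple lens:
    2 * atan(d / 2f)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

full_frame = diagonal_mm(36, 24)      # traditional 35mm film frame
phone = diagonal_mm(4.54, 3.42)       # assumed small phone sensor

# Same focal length, smaller sensor -> much narrower angle of view,
# i.e. the image is effectively cropped.
print(f"crop factor: {full_frame / phone:.1f}x")
print(f"full-frame horizontal AoV, 35mm lens: {angle_of_view_deg(36, 35):.0f} deg")
print(f"phone horizontal AoV, 35mm lens: {angle_of_view_deg(4.54, 35):.0f} deg")
```

With these assumed dimensions the phone sensor sees only a few degrees where the full frame sees over fifty, which is why the operator must back away from the subject to frame the same face.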

Facial recognition software analyzes a number of structural facial elements.  Examples of these distinctive surface features include the shape of the eyes and the eye sockets; the width, length, and structure of the nose; the thickness of the lips; and the width of the mouth.  What is common to all of these elements is that they are three dimensional.  A camera captures images in two dimensions.  The difference between a three dimensional subject and the two dimensional output of the camera is handled by the software, but pose issues including expressions, external features, background, and lighting all add variables that decrease the effectiveness of the algorithms.  In the home environment it may be difficult to deal with lighting and background issues, but this is not an insurmountable challenge.  In the same manner, external features such as beards, glasses, jewelry, and piercings can all pose problems.  The author of this paper has endured lengthy picture sittings in front of DSLR cameras for PIV credentials.  It seems his white goatee gives the capture software conniptions.  This serves to demonstrate that issues of facial capture are not necessarily specific to smart phone cameras.

Many of the issues in facial image capture would be solved if the images could be captured in 3D.  Of course this would eliminate the use of smart phones as a capture device, or would it?  Fujitsu continues to refine a way for phones with just one rear camera to shoot three-dimensional videos with the aid of a special attachment.  The attachment uses mirrors to send two different images to the camera’s sensor and is smaller than a stick of Chap Stick.  In June of 2011 Sprint released the HTC Evo 3D 4G 'Gingerbread' smartphone.  This phone had two integrated cameras capable of taking 3D pictures.  With the potential of standard 3D capture technology on the horizon, it may not be long at all before changes in lighting and camera angles become irrelevant.  Three dimensional image capture can only serve to enhance the potential of fingerprint capture as well.  Even the issue of software sensitivity to expressions, one not mitigated by 3D technology alone, could soon be eliminated.  As far back as 2004 Technion, the Israel Institute of Technology, a public research university in Haifa, researched using metric geometry to address the issue of expression sensitivity.  The approach was to use metric geometry isometrics to create an expression-invariant three dimensional face recognition solution. (Bronstein, Bronstein, & Kimmel, 2004)

IX. Why not?


There are other biometric signatures that have been the focus of research and have seen increased use and acceptance from the physical and logical access communities.  Iris scans, hand geometry, and voice recognition are no longer the purview of James Bond and Ethan Hunt.  Although not practical for this smart phone centric premise, they are worth mentioning and are potential near-future candidates.
Iris scans are based on the stability of the trabecular meshwork, an area of tissue in the eye located around the base of the cornea.  The patterns are formed by the elastic connective tissues, which give the iris the appearance of radial divisions.  These patterns are unique and are often referred to as optical fingerprints.  Iris sampling offers more reference coordinates than any other biometric, resulting in an accuracy potential higher than any other biometric.  Iris scans require a high degree of cooperation from the subject from whom the sample is being acquired.  Today specialized capture devices are required.  Despite their complexity, these capture devices are nothing more than still cameras capturing very high quality images.  It is certainly not out of the realm of possibility that a smart phone camera could one day soon be capable of the required performance.
Hand biometrics is a fairly mature technology that lends itself to applications where the size of the capture device is not a factor.  Current devices are based on charge-coupled device (CCD) optical scanning and consistently deliver better quality images than fingerprint scanners.  This is largely due to the increased sample size, a hand being many times larger than a finger pad.  Three-dimensional photography may show some promise as an alternative method of hand biometric image capture in the future.  Current technology remains expensive and not at all compatible with the proposed smart phone format.

Voice recognition is perhaps the oldest form of biometric identifier.  Not to be confused with speech recognition, which is the process of translating speech into text, voice recognition is the process of identifying someone from their voice patterns.  It is a phenotype, an observable behavior influenced by development, often with regional characteristics.  Of all of the fields of biometric research, speech processing has seen the most modern-day focus, with significant research over the last four decades.  Voice recognition has some uniquely distinct advantages over other biometric signatures in that it can be combined with pass phrases or knowledge-based verification, or can be used as a passive background tool.  Voice recognition is the least invasive and is easy on the user.  With all this it would seem that voice recognition should be the biometric of choice, but it has its disadvantages.  Voice recognition programs take the digital recording and parse it into small recognizable pieces called phonemes.  These phonemes may not be consistently reproduced, as they can be influenced by behavior and health factors and even background noise.

Works Cited


Bronstein, A. M., Bronstein, M. M., & Kimmel, R. (2004). Three-Dimensional Face Recognition. Technion, Israel Institute of Technology, Department of Computer Science. Kluwer Academic Publishers.

Clausen, S., & Christie, N. W. (2005). Live Finger Detection. IDEX ASA. Fornebu, Norway: IDEX ASA.

FBI Biometric COE. (2010, April 27). FBI Biometric Specifications FAQ. Retrieved May 31, 2012, from FBI Biometric Center of Excellence:

Indovina, M., Hicklin, R. A., & Kiebuzinski, G. I. (2011). Evaluation of Latent Fingerprint Technologies: Extended Feature Sets [Evaluation #1]. U.S. Department of Commerce, National Institute of Standards and Technology. Washington D.C.: US Government Printing Office.

Lee, S., Lee, C., & Kim, J. (2008). Image Preprocessing of Fingerprint Images. Biometrics Engineering Research Center at Yonsei University, Korea Science and Engineering Foundation, Seoul, Korea.

Sinha, P., Balas, B., Ostrovsky, Y., & Russell, R. (2006). Face Recognition by Humans: Nineteen Results All Computer Vision Researchers Should Know About. Proceedings of the IEEE, 94(11), 1957.

Wing, B. (2011). Data Format for the Interchange of Fingerprint, Facial & Other Biometric Information. US Department of Commerce, National Institute of Standards and Technology. Gaithersburg: US Government Printing Office.

Monday, June 25, 2012

Mobile Device Remote Identity Proofing Part 3 - Apples to Oranges

Download PDF of complete paper

IV. Apples to Oranges:   

Can a camera in a smart phone be used to capture the necessary images, including those used for biometric identification, required for the enrollment and subsequent vetting of an individual in an Identity Management System (IDMS)?  Smart phone manufacturers are equipping their newest products with cameras capable of ten or more megapixels, with Nokia’s latest offering claiming a forty-plus megapixel camera!  This paper proposes using the camera to capture all of the required components to establish and vet an identity, so it is important to understand some of the terminology involved.

Contrary to popular belief, more megapixels do not make for a better image.  It is important to understand what makes up a good image and how it is defined within the multiple industries involved.  Most people base image quality on the output or final product, the best example being print media.  So this is where we are going to start.

Pictures are printed in DPI, or dots per inch.  For example, a newspaper image is printed at 200 to 250 DPI, a magazine image at 400 to 600 DPI, yet a billboard is typically 30 DPI.  When you print a photo on your desktop printer, the optimal setting is 250 DPI.  Don’t be fooled by the fact that your typical desktop printer is capable of far greater resolution, typically from 720 to 1440 DPI.  The printer may be able to print very small dots, but it can only accurately reproduce colors by combining a large number of dots to emulate various tints.  That is why a 250 DPI image offers perfect output quality on a 1000+ DPI printer.

PPI is pixels per inch.  PPI is the resolution terminology used in the standards promulgated by the American National Standards Institute (ANSI) and the National Institute of Standards and Technology (NIST).  Within the context of this paper, PPI is used to define the resolution of the scanning mechanism used to capture a fingerprint.  PPI is an appropriate term to describe scanner input and it is the term used by the applicable Federal standards, but technically, samples per inch (SPI) is more accurate.  “For example, if you scan at 200% at 300 PPI or if you scan at 100% at 600 PPI, the scanner [sees] the same data.  The PPI is different for each file, but the sampling of the original by the scanner is the same.  Maximum SPI of a given device is the optical resolution at 100%.” (Creamer, 2006)

How do dots per inch equate to pixels?  The term pixel is predominantly used to describe the digital resolution of monitors, televisions, and smart phones.  A pixel is one dot of information in a digital photograph.  Digital photos today are made up of millions of tiny pixels/dots (mega = million).  A digital photo that is made up of 15 megapixels is physically larger than a digital photo made up of 1.5 megapixels, not clearer or sharper.  The notable difference is in file size, not picture quality.  If you print a 250 DPI picture on an 8.5 by 11 inch piece of paper you will be printing a maximum of 2125 by 2750 pixels.  Most computer screens display at 100 DPI.  A 1280 by 1024 resolution on your monitor equates to 1,310,720 pixels, or 1.3 megapixels.  This raises the question: why do you need a ten-plus megapixel camera to capture a very high quality image?  The answer is you do not.
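The arithmetic above can be made explicit with two small helpers (names are illustrative):

```python
# The DPI-to-pixels math from the paragraph above: a print job at a
# given DPI only needs enough pixels to cover the paper at that density.

def pixels_for_print(width_in, height_in, dpi):
    """Pixel dimensions needed to fill a page at a given DPI."""
    return int(width_in * dpi), int(height_in * dpi)

def megapixels(width_px, height_px):
    """Total pixel count expressed in millions (mega = million)."""
    return width_px * height_px / 1_000_000

# An 8.5 x 11 inch page at 250 DPI
w, h = pixels_for_print(8.5, 11, 250)
print(w, h)                        # 2125 2750
print(round(megapixels(w, h), 1))  # 5.8 (megapixels)

# A 1280 x 1024 monitor at ~100 DPI
print(round(megapixels(1280, 1024), 1))  # 1.3 (megapixels)
```

Even a full-page 250 DPI print needs under six megapixels, which is why a ten-plus megapixel capture buys file size rather than visible quality.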

V.  Camera Technology

With an explanation of some of the terminology behind us, we can explore the use of a digital camera, or variant, for the capture of the necessary data for enrollment in an identity management system.  When the FIPS 201 standard was first published, capturing a facial image of an individual required, by standard, the use of a 3.5 megapixel camera.  This level of resolution was at the top end of the capabilities of digital cameras readily available to the public at the time.  Costs in excess of a thousand dollars for a camera meeting FIPS requirements were not uncommon.  That same camera was also unable to do anything more than capture an individual’s picture.  Today native resolutions on smart phone integrated cameras are commonly five times the historical benchmark.  Exponential improvements in the image capture hardware, firmware, and supporting software should also enable these same devices not only to capture a photo but to be multi-purposed for barcode reading, OCR-enabled document capture, fingerprint image capture, and even iris image capture.  4G and LTE networks now make possible the high speed, efficient exchange of data, with next generation networks coming online reinforcing and bolstering the capability.  Consistent with Moore’s Law, the capability of cell phones is on the steep end of the climb, with exponential growth and improvements in power, processors, and memory.

“A digital camera can capture data based on the mega-pixel ability of its CCD.  For example, a 2 megapixel digital camera shoots at approximately 1600x1200. 1600 pixels times 1200 pixels = 1,920,000 total pixels (rounded up).  Usually the camera images have no resolution assigned to them (although some cameras can do this).  When you open a file into an image editing program such as Photoshop, a resolution HAS to be assigned to the file.  Most programs, including Photoshop, use 72 PPI as a default resolution.” (Creamer, 2006)

VI. Establishing ownership

Biometrics is the science and technology of measuring and analyzing biological data.  Biometric identifiers are the distinctive, measurable characteristics used to identify individuals. (Jain, Hong, & Pankanti, 2000)  The two categories of biometric identifiers are physiological and behavioral characteristics. (Jain, Flynn, & Ross, 2008)  Physiological characteristics are related to the shape of the body and include, but are not limited to: fingerprint, face recognition, DNA, palm print, hand geometry, iris recognition (which has largely replaced retina), and odor/scent.  Behavioral characteristics are related to the behavior of a person, including but not limited to: typing rhythm, gait, and voice.

The most common biometric identifiers currently used in IdM systems are fingerprint and facial recognition.  The current PIV and PIV-I programs use a dual approach in accordance with NIST recommendations. (NIST, 2003)  The capture of these biometric identifiers is easily within the scope of commonly available commercial technologies incorporated into today’s smart devices.  It is the algorithms required for image analysis and development of minutiae for analytical and comparison purposes that pose the challenge.  Current facial recognition software is more than capable of effectively using images captured within the common 8-14 megapixel range of the average smart phone.  The technology is rapidly outpacing the market’s ability to sustain new releases and/or uses, as evidenced by Nokia’s release of a smart phone with a 41 megapixel camera sensor dubbed the 808 PureView. (Foresman, 2012)  So the specific challenge relates to the fingerprint.


Works Cited

Creamer, D. (2006). Understanding Resolution and the meaning of DPI, PPI, SPI, & LPI. Retrieved May 30, 2012, from

Foresman, C. (2012, March 2). Innovation or hype? Ars examines Nokia's 41 megapixel smartphone camera. Retrieved March 5, 2012, from arc technica:

Jain, A. K., Flynn, P., & Ross, A. A. (2008). Handbook of Biometrics. New York, NY, USA: Springer Publishing Company.

Jain, A., Hong, L., & Pankanti, S. (2000, February). Biometric Identification. (W. Sipser, Ed.) Communications of the ACM, 43, pp. 91-98.

NIST. (2003, February 11). Both Fingerprints, Facial Recognition Needed to Protect U.S. Borders. Retrieved March 5, 2012, from NIST; Public and Business Affairs:

Friday, June 22, 2012

Mobile Device Remote Identity Proofing Part 2 - The requirement for ownership

Download PDF of complete paper

I.  Introduction

Although it is unlikely that development and adoption of a single ubiquitous identity will occur in the next five years, it is reasonable to assume that various manifestations of an individual's identities are, and will continue to be, established at various and increasing levels of trust and assurance.  The challenge to be faced is to fast track the ecosystem's ability to work at moderate and high levels of assurance.  Historical barriers to widespread use of trusted identities at a high level of assurance are predominantly based on the high cost and limited availability of “approved” identity proofing “tools” and the infrastructure requirements in the security and maintenance of the “representation” of that identity.  This concept paper explores the former challenge, the latter being a topic that deserves its own attention.

II.  Origins

Being able to establish and prove an identity, and then use that proof of identity to one's advantage, is as old as humanity itself.  It could be argued that gender, a genotype, was first used as a biometric identifier in the Garden of Eden when Adam, on being asked if he took fruit from the tree of knowledge, said “she gave it to me”.  The story in Genesis involves the only two living humans on earth and an omnipotent creator, which makes identification straightforward.  This did not deter Adam from making a clear identification in order to shift guilt away from himself.  Traditional methods of establishing and/or confirming the identity of an unknown person have relied on secret knowledge or possession of a token of some type.  Passwords and PINs, the proverbial what you know, used so commonly today, date back to the Roman Empire.  The Hellenistic Greek historian Polybius chronicled how passwords were used among the Roman legions.

The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword - that is a wooden tablet with the word inscribed on it - takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.  (Polybius, 2012)

Tokens, what you have, date to the Bronze Age.  “A. Leo Oppenheim of the Oriental Institute of the University of Chicago reported the existence of a recording system that made use of counters, or tokens. According to the Nuzi texts, such tokens were used for accounting purposes; they were spoken of as being deposited, transferred, and removed.” (Schmandt-Besserat, 1977) 
Today the PIN, password, and token are synonymous with modern society.  There are seemingly endless requirements for passwords, from the moment you turn on your computer through the moment you click on the accept agreement or purchase icon.  Where would you be without your ATM card, PIN, and the ability to access your cash anywhere, at any time, worldwide?  The problem is that the methodology we are using in modern America has changed little since its antiquarian origins.  We are still only commonly testing for knowledge or possession, not ownership.  Enter biometrics.

III. The requirement for ownership

Testing for possession or knowledge has become the standard for commercial identity management.  In the 21st century most people have a virtual identity presence, one that resides in the World Wide Web.  This is the identity they use to move among the social networking sites, bank, pay bills, and shop.  With the massive increase in the use of the web has come a corresponding increase in identity theft.  “In 2011 identity fraud increased by 13 percent.  More than 11.6 million adults became a victim of identity fraud in the United States, while the dollar amount stolen held steady”. (Javelin Strategy & Research, 2012)  Steps have been taken to strengthen identity security, especially in the financial sector, with the addition of images, secret questions, and a plethora of additional knowledge-based steps that are far more effective at frustrating users than they are at increasing security.  Each of these additional security features is still nothing more than additional knowledge, and additional knowledge can easily be stolen.  What is required is something that is definitively tied to the identity holder, something that cannot be forged, lost, or stolen.  That something is biometrics.

Biometrics, like passwords and tokens, is not a 21st or even 20th century phenomenon.  Handprints were used for identification purposes nearly four thousand years ago, when Babylonian kings used an imprint of the hand to prove the authenticity of certain engravings and works.  Babylonia had an abundance of clay and a lack of stone, which led to the extensive use of mudbrick.  Ancient Babylonians understood that no two hands were exactly alike and used this principle as a means of identity verification.  Modern dactyloscopy, the science of fingerprints, was used as early as 1888, when Argentinean police officer Juan Vucetich published the first treatise on the subject. (Ashbourn, 2000)

Biometrics can be defined as observable physical or biochemical characteristics that can typically be placed into two categories: phenotype and genotype.  The phenotype category contains the identifiers most commonly used for transactional identification today.  Fingerprints, iris, facial features, and signature patterns are all phenotype identifiers based on features or behaviors that are influenced by experiences and physical development.  From the owner's perspective these are often viewed as non-threatening and non-intrusive.  The genotype category measures genetically determined traits such as gender, blood type, and DNA, the collection of which is generally viewed as very intrusive.  DNA, the ultimate biometric signature, is generally considered the most intrusive and is often vilified in popular fiction.  In the 1997 film Gattaca, DNA determines an individual’s status in society, with each person categorized as a Valid or In-valid.  In the 2012 blockbuster The Hunger Games, DNA serves as a signature for children entering the Reaping, a lottery culminating in a morbid death match.  Both of these examples of pop culture reflect the underlying distrust society has in the government’s possession of such an intimate identifier.

Biometrics is primarily used in two modes, each with a different purpose: identification and verification.  The term recognition is a generic one encompassing the one-to-one and one-to-many modes in which biometric systems operate.  Biometric identification is the process of associating a sample with a set of known signatures.  For example, the US-VISIT program checks a presented set of fingerprints [sample] against multiple databases containing known signatures.  The results of a one-to-many search are usually displayed as a group of the most probable matches, often associated with a probability score, as a percentile, that illustrates the degree of match between the sample and the matched group.  Biometric verification is the process of authenticating the sample against the record of a specific user, with the results delivered in binary fashion, yes or no.  Real world examples of this one-to-one verification include fingerprint match-on-card in the PIV program, or a third factor of authentication to an access control system where what you have and what you know need to be validated against ownership.  Most commercial systems operate in verification mode.
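The two modes above can be sketched with a toy matcher. Everything here is an illustrative assumption: the enrolled names, the bit-vector "templates", and the naive agreement score stand in for the minutiae-based matchers real systems use.

```python
# Toy illustration of one-to-many identification vs. one-to-one
# verification. Templates are made-up feature vectors, and similarity
# is a naive fraction-of-agreement score, not a real matcher.

def similarity(sample, template):
    """Fraction of positions where two equal-length vectors agree."""
    return sum(a == b for a, b in zip(sample, template)) / len(sample)

enrolled = {                     # hypothetical gallery of known signatures
    "alice": [1, 0, 1, 1, 0, 1],
    "bob":   [0, 0, 1, 0, 1, 1],
    "carol": [1, 1, 0, 1, 0, 0],
}

def identify(sample, gallery, top=2):
    """One-to-many: rank the most probable matches with scores."""
    scores = {name: similarity(sample, t) for name, t in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

def verify(sample, gallery, claimed, threshold=0.8):
    """One-to-one: binary yes/no against a single claimed identity."""
    return similarity(sample, gallery[claimed]) >= threshold

probe = [1, 0, 1, 1, 0, 0]
print(identify(probe, enrolled))          # ranked candidate list w/ scores
print(verify(probe, enrolled, "alice"))   # True or False, nothing more
```

Note the difference in the return shapes: identification yields a ranked candidate list with degrees of match, while verification collapses the same score into a single yes/no against a threshold.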

Before identification or verification can occur, some type of enrollment process must take place in order to establish, to some level of trust, that the biometric signature is owned by a specific individual.  Only then can varied rights and privileges (attributes) be assigned to that owner and subsequently secured by means of PKI or similar technology.  One of the primary impediments to broad-scale use of biometric signatures is the expense and inconvenience of enrollment programs.  But what if it were as easy as using your mobile phone in your living room?

Using a mobile device to establish the validity of a claim to a specific identity is simple in principle but problematic in execution.  The capture of the required information can be divided into two steps: creation of a claimant's profile, and binding a known identity to the claimant.  Creation of the profile typically includes the identification and capture of two data types: the first is biographical/descriptive data, the second is biometric data.  For the purposes of this paper, we shall refer to these combined datasets as the Individual Profile, or IP.
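As a rough sketch, the Individual Profile might be modeled as a simple container holding both data types. The field names here are illustrative assumptions, not part of any specification.

```python
from dataclasses import dataclass, field

@dataclass
class IndividualProfile:
    # Biographical / descriptive data (illustrative fields)
    full_name: str
    date_of_birth: str
    # Biometric data, keyed by modality, e.g. "fingerprint", "face"
    biometrics: dict = field(default_factory=dict)

ip = IndividualProfile("Jane Doe", "1980-01-01")
# A captured image, as it might arrive from the phone's camera
ip.biometrics["face"] = b"\x89PNG..."
print(ip.full_name, sorted(ip.biometrics))
```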

This concept is based on leveraging the rapidly increasing level of hardware technology and network availability incorporated into the worldwide wireless telecommunications system to provide a mechanism for the validation of claims to a specific identity, binding that identity to the claimant, and securing the identity for use in an environment requiring various levels of trust by a wide array of relying parties. 

Friday, June 15, 2012

Mobile Device Remote Identity Proofing Part One

Download PDF of Complete Paper

How smart phones could change the identity management system ecosystem

Part One:

This concept paper was recently submitted for consideration for an upcoming technical conference. After receiving notification that the abstract met with positive peer review, I decided that a healthy topical discussion might be in order before I finished the final version.  Rather than posting a lengthy paper in one shot, I decided to break it up into its key components to allow you, the reader, to digest each section and focus any comments you may have accordingly.  This first post is the abstract, with which I hope to whet your appetite. I have a bit of time before the final paper must be submitted, and I rather selfishly hope that any comments you make over the next week or so, as each section is posted, will help in its refinement.

The Abstract

Questions regarding an individual’s identity are addressed millions, if not billions, of times a day.  E-commerce, healthcare, government and financial institutions, among others, must constantly address the question, “is this person who he/she claims to be?”  Each institution struggles with the results of varied “discrete multiplicities” (Deleuze, 1966) on which it must base a decision on the relying party’s pivotal question, “what rights or privileges should be granted to this individual?”  This paper addresses the persistent challenges of extending strong identity management from government-sponsored programs for government employees to privacy and security protection programs for the general population.  Among the proposed concepts is a solution based on leveraging the rapid acceleration in hardware/smart-phone sophistication and network availability incorporated into the worldwide wireless telecommunications system.  These elements provide a modality allowing validation of claims to a specific identity, binding that identity to the claimant, and securing the identity for use in an environment requiring various levels of trust by a wide array of relying parties.

Although it is unlikely that development and adoption of a single ubiquitous identity will occur in the next five years, it is reasonable to assume that various manifestations of an individual’s cyber identities are, and will continue to be, established at various and increasing levels of trust and assurance.  The challenge is to fast-track the ecosystem’s ability to work at moderate and high levels of assurance.  Historical barriers to widespread use of trusted identities at a high level of assurance are predominantly based on the high cost and limited availability of “approved” identity proofing “tools” and on the infrastructure requirements for the security and maintenance of the “representation” of that identity.

The most common biometric identifiers currently used in IdM systems are fingerprint and facial recognition.  The current PIV and PIV-I programs use a dual approach in accordance with NIST recommendations (NIST, 2003).  The capture of these biometric identifiers is easily within the scope of commonly available commercial technologies incorporated into today’s smart devices.  It is the algorithms required for image analysis and development of minutiae for analytical and comparison purposes that pose the challenge.  Obstacles include contrast, depth of field, and background, or non-finger, regions (Lee, Lee, & Kim, 2008).  Current facial recognition software is more than capable of effectively using images captured within the common 8-14 megapixel range of the average smart phone.  The technology is rapidly outpacing the market’s ability to sustain new releases and/or uses, as evidenced by Nokia’s release of a smart phone with a 41 megapixel camera sensor dubbed the 808 PureView (Foresman, 2012).  So the specific challenge relates to the fingerprint.

Deleuze, G. (1966). Bergsonism (H. Tomlinson & B. Habberjam, Trans.). New York, NY: Zone Publishing Inc.

NIST. (2003, February 11). Both Fingerprints, Facial Recognition Needed to Protect U.S. Borders. Retrieved March 5, 2012, from NIST, Public and Business Affairs:

Lee, S., Lee, C., & Kim, J. (2008). Image Preprocessing of Fingerprint Images. Biometrics Engineering Research Center at Yonsei University., Korea Science and Engineering Foundation, Seoul, Korea.

Foresman, C. (2012, March 2). Innovation or hype? Ars examines Nokia's 41 megapixel smartphone camera. Retrieved March 5, 2012, from Ars Technica:

Wednesday, June 13, 2012

Managed Attributes, Not Standards, Lead to Interoperability

Download Complete Paper

I.     Introduction

Managed attributes ensure essential interoperability. This is the foundation for providing the most skilled, most timely and most appropriate response to any situation, regardless of size. Emergency managers and incident commanders can make sound decisions with the additional data that comes from knowing when and where specific resources are located, what tasking assignments have been given and to whom. Not only is everyone on the scene accounted for, but tasks are given to responders with verified skills and capabilities thereby contributing to the command staff’s ability to predict the next threat and deploy resources accordingly, maintain critical situational awareness and respond to dynamic conditions quickly and effectively. Assigning responders to duty is not an issue. What’s critical is assigning the responder with the appropriate and verifiable skills to a job he/she is capable of accomplishing, ensuring a positive outcome for the situation and the responder.

II.    Setting the scene

A.                   Personal experience sets the stage for complete understanding

My first exposure to pre-hospital care was the mandatory “first responder” training required for firefighters by the State of California more than twenty years ago. The training program, taken concurrently with a CPR class, added up to more than the 120 hours of training required to be certified as a Basic EMT in the Commonwealth of Massachusetts a couple of years later. In the end it was not the hours required to complete a training program that struck me as the unusual dichotomy but the difference in skills. As a “first responder” I was trained in how to properly remove a helmet, place the electrodes from the 12-lead EKG on a patient, spike IVs, assist with medications, etc. As a “Basic EMT” in Massachusetts I was not trained in any of those skills. In fact I did not use them again until the PB waiver program was instituted. Many years later, as a regional hospital preparedness coordinator, I struggled with the concept that we could not send paramedics across regional boundaries within the same state, even within the same county, and still allow them to work as paramedics, because scope of practice and certification was regional and there was no reciprocity within the state!

Times have changed but the essential challenges in the practice of pre-hospital care have not. There may be an EMS community, but it is segregated even in its day-to-day practices, never mind responses to what can be categorized as disasters. On February 20, 2003 the fourth-deadliest nightclub fire and the ninth-deadliest place-of-public-assembly fire in U.S. history took place at the Station Nightclub in Rhode Island. The multi-jurisdictional (on a very large scale) fire/EMS response was atypical when it comes to patient care, and it worked. It is conjecture, but I would hypothesize that the response was modern in capability but traditional in implementation. That is, a small state with close border ties to services in Massachusetts and Connecticut, and familiarity among the services, responded as needed; there were no questions of scope of practice, and patients were cared for at the level the provider was trained to, without immediate regard for local or regional regulations.

In addition to the one hundred fatalities there were an estimated 230 casualties, 186 of whom were transported to hospitals by first responder agencies. Over five hundred firefighters, EMS personnel, and police responded, with fifty-seven public and six commercial ambulance companies providing both basic and advanced life support services. (Kuntz, 2000) [1]

I would argue the Station Nightclub fire response was a success carried out by heroic and dedicated professionals. The brethren of these same professionals also answered the call to service for Hurricane Katrina in late August and early September of 2005. I would argue that that response was more typical of large multi-jurisdictional, multi-state responses. Some level of organization was applied to the call-out and activation of resources on a national scale. The typical American answer to a call to duty resulted in a massive response. However, many police, fire, and EMS organizations from outside the affected areas were reportedly hindered or otherwise slowed in their efforts to send help and assistance to the area. FEMA sent hundreds of firefighters who had volunteered to Atlanta for two days of training on topics including sexual harassment and the history of FEMA. (Bluestein, 2005) [2]

III.   Underlying Problems

So what is the underlying problem? We can look at it from a national service perspective as well as a level-of-service perspective. Take a look at the state of the service in general. An excellent summary is contained in a recent report issued by the National Academy of Sciences.

“Each year in the United States approximately 114 million visits to EDs occur, and 16 million of these patients arrive by ambulance. The transport of patients to available emergency care facilities is often fragmented and disorganized, and the quality of emergency medical services (EMS) is highly inconsistent from one town, city, or region to the next. Multiple EMS agencies, some volunteer, some paid, some fire based, others hospital or privately operated, frequently serve within a single population center and do not act cohesively. Very little is known about the quality of care delivered by EMS services. The reason for this lack of knowledge is that there are no nationally agreed-upon measures of EMS quality, no nationwide standards for the training and certification of EMS personnel, no accreditation of institutions that educate EMS personnel, and virtually no accountability for the performance of EMS systems. While most Americans assume that their communities are served by competent EMS services, the public has no idea whether this is true, and no way to know.

The education and training requirements for the EMTs and paramedics are substantially different from one state to the next and consequently, not all EMS personnel are equally prepared. For example, while the National Standard Curricula developed by the federal government calls for paramedics to receive 1,000 - 1,200 hours of didactic training, states vary in their requirements from as little as 270 hours to as much as 2,000 hours in the classroom. In addition, the range of responsibilities afforded to EMTs and paramedics, known as their scope of practice, varies significantly across the states. National efforts to promote greater uniformity have been progressing in recent years, but significant variation remains.” (Committee on the Future of Emergency Care in the United States Health System, 2006) [3]

My initial brief example of the differences in training between states pales in comparison to the preceding quote. We have established that we have dedicated, trained, and competent personnel working in an environment that is restrictive primarily due not to the lack of a national standard but to a lack of information. I will expound on that statement shortly. First, however, let’s take a look at the problem from a scope-vs.-patient-care perspective. An excellent example was discussed in an article by Tori Socha published in February 2011. The article, dealing with stroke, reminded me of the initial introduction of thrombolytic drug therapy through pre-hospital providers in Massachusetts and the personal struggle some metropolitan medics had being able to use this lifesaving tool in one region, with their big-city services, but not have it available to them in the small local, sometimes volunteer, ALS services in the communities in which they resided. Ms. Socha stated:

“Stroke, with direct and indirect costs totaling $68.9 billion, is a major primary health priority in the United States. Every 40 seconds, someone in the United States experiences a stroke, and every 3 to 4 minutes, someone dies of a stroke. Administering intravenous (IV) recombinant tissue plasminogen activator (tPA) within 3 hours of onset of symptoms is associated with a 30% greater likelihood of decreased disability compared with placebo. In selected patients, IV recombinant tPA may be safely used up to 4.5 hours after symptom onset. Despite its clinical efficacy and cost-effectiveness, only 3% to 8.5% of patients with stroke receive recombinant tPA. One limitation is timely access to care. In 2000, the Brain Attack Coalition recommended establishing primary stroke centers (PSCs). Researchers recently conducted a study to determine the proportion of the population with access to Acute Cerebrovascular Care in Emergency Stroke Systems (ACCESS). The analysis found that if ground ambulances are not permitted to cross state lines, fewer than 22.3% of Americans (1 in 4) have access to a PSC within 30 minutes of symptom onset.” (Socha, 2011) [4]

There is no doubt that lack of definition causes, at bare minimum, organizational angst and disparity in the EMS service. It can also be argued that this lack of definition can result in loss of life, not due to negligence but to the inability of available services to provide a timely response across jurisdictional boundaries, stymied by the invisible but very real wall of scope-of-practice limitations. This is evidenced by the research from the Socha article as well as countless additional journal articles and studies. The truly disquieting issue is that this conundrum is not unique to incidents of national consequence but can be found in day-to-day EMS operations.

IV.   Solutions

So what is the solution? I left emergency services several years ago to seek technology solutions for common operational problems faced by our nation’s first responders. Over the last ten years I have listened to a consistent theme propagated in general by well-meaning federal civil servants: regardless of the problem, the solution is of course to regulate it at the federal level. The following quote from the Committee on the Future of Emergency Care starts with a rousing call to arms.
“While today’s emergency care system offers significantly more medical capability than was available in years past, it continues to suffer from severe fragmentation, an absence of system wide coordination and planning, and a lack of accountability. To overcome these challenges and chart a new direction for emergency care, the committee envisions a system in which all communities will be served by well planned and highly coordinated emergency care services that are accountable for their performance. In this new system, dispatchers, EMS personnel, medical providers, public safety officers, and public health officials will be fully interconnected and united in an effort to ensure that each patient receives the most appropriate care, at the optimal location, with the minimum delay.” (Committee on the Future of Emergency Care in the United States Health System, 2006) [3]
All communities should be served with highly coordinated emergency care services that are accountable for their performance, and those services should be interconnected. I do, however, disagree with the manner in which the coordination, accountability, and connectivity should occur. A bit further into the report the foundation of the proposed solution is revealed.
“The National EMS Scope of Practice Model Task Force has created a national model to aid states in developing and refining their scope-of-practice parameters and licensure requirements for EMS personnel. The committee supports this effort and recommends that state governments adopt a common scope of practice for EMS personnel, with state licensing reciprocity. In addition, to support greater professionalism and consistency among and between the states, the committee recommends that states accept national certification as a prerequisite for state licensure and local credentialing of EMS providers. Further, to improve EMS education nationally, the committee recommends that states require national accreditation of paramedic education programs. The federal government should provide technical assistance and possibly financial support to state governments to help with this transition.” (Committee on the Future of Emergency Care in the United States Health System, 2006) [3]
There it is. Solution by national regulation. This could be effective if the United States were the size of Switzerland. It would also be quite effective if we did not have 50 different autonomous state governments, not including territories. The individual states do not want to give up their sovereignty, nor should they be forced to. It is not necessary. The solution is to allow the authority having jurisdiction the freedom to define the scope of practice. How can this premise, the perceived status quo, change things? The logical proposal is the delivery of this [scope] information in a trusted fashion, attached to a non-repudiable identity. Those familiar with the ongoing work to leverage trusted identity by the federal government for physical and logical access control likely have an idea where I am going with this concept. Several states have taken definitive steps to leverage the work done by the federal government to institute their own identity management (IDM) programs. One or two truly visionary early adopters are using the trusted identity as a foundation and attaching attributes. For example, some states have implemented, as part of their functional mandate, “authenticated qualifications and attributes,” by which they mean attributes trusted and validated by the authority having jurisdiction or accrediting organization, and the ability to tie first responders’ identities and attributes to authoritative sources of information (e.g., licensing, certification, and status databases for paramedics, police, licensed health care practitioners, firefighters, etc.).

Management of these attributes allows for the rapid and effective allocation of personnel resources during an operation.  Historically, management of these resources, assisted through mutual aid compacts both formal and informal, was hampered by a lack of information and trust.  Further, there is often a lack of understanding of the differing individual elements that define the attribute from jurisdiction to jurisdiction.  Without any mechanism to provide a trusted and detailed definition of the attribute, the only recourse has been to compare attributes between jurisdictions at the lowest common denominator.  Categorization of resources has been limited to generalized groupings like Emergency Support Functions (ESFs) and subsets of Critical Infrastructure and Key Resources (CI/KR) sectors.  A frequently disputed alternative has been for the federal government to dictate the attribute definitions to state and local authorities.  This lack of information is compounded by the specter of legal accountability for the jurisdiction receiving the resources, especially for those attributes which directly influence life safety.  The result is an underutilization of the available resources.

Attribute management within an identity system is similar to that in network management. In a network an “attribute” is the property of a managed object that has a value. Similarly, in one example of an IDM attribute-enhanced system, an attribute is the property of the person who has enrolled, and the value is “what that attribute is.” For example: Joe Smith enrolls and designates he is a paramedic. Joe is the “managed object” and paramedic is the “attribute.” The system then associates the “value” as the skill set of a paramedic.
Also as in network management, certain mandatory initial values for attributes are specified as part of the managed object class definition. Associating the skill set of a paramedic is a mandatory initial value, but conditional values can also be added; these may be unique to the jurisdiction where a responder works at a local, regional, or state level. These paramedic conditional attributes could also be additional training or certifications above and/or beyond the initial mandatory value of a paramedic as defined by the federal AHJ. This allows all stakeholders to have their cake and eat it too: the federal government establishes the baseline, and state and local jurisdictions are not forced into long-term, expensive programmatic changes.
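A minimal sketch of this model, with a hypothetical federal baseline and jurisdictional conditional values. All class names, field names, and skill sets here are invented for illustration and are not drawn from any real IDM product or federal curriculum.

```python
from dataclasses import dataclass, field

# Hypothetical federal baseline: mandatory initial values per attribute.
FEDERAL_BASELINE = {
    "paramedic": {"12-lead EKG", "IV therapy", "advanced airway"},
}

@dataclass
class Responder:
    """The 'managed object': an enrolled person carrying an attribute."""
    name: str
    attribute: str                                  # e.g. "paramedic"
    conditional: set = field(default_factory=set)   # jurisdiction add-ons

    def skill_set(self):
        """Mandatory baseline values plus local conditional values."""
        return FEDERAL_BASELINE[self.attribute] | self.conditional

# Joe enrolls as a paramedic; his jurisdiction adds a conditional value.
joe = Responder("Joe Smith", "paramedic",
                conditional={"thrombolytic administration"})
print(sorted(joe.skill_set()))
```

The design point is in the union: the federal AHJ owns the mandatory set, the local jurisdiction owns the conditional set, and neither has to modify the other's data to express a complete skill set.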

When the attribute dataset is read by a computing device, the retrieved information is reported to the user in local terminology, an instant comparison is made between the sending jurisdiction’s certification requirements (the individual knowledge and task statements) and the receiving jurisdiction’s certification requirements, and critical discrepancies are reported. For example, as part of the comparison the table of pharmacology for a paramedic is compared between the sending and receiving jurisdictions, and the receiving jurisdiction’s report shows that the medic is not trained in the administration of a thrombolytic, part of the scope of care of the receiving jurisdiction.
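The comparison itself reduces to a set difference. A sketch, with invented pharmacology tables standing in for real certification requirements:

```python
def compare_scope(sending, receiving):
    """Return the skills the receiving jurisdiction's scope expects
    but the responder's sending jurisdiction does not certify."""
    return sorted(receiving - sending)

# Invented example drug tables for two jurisdictions.
sending_pharm = {"epinephrine", "atropine", "morphine"}
receiving_pharm = {"epinephrine", "atropine", "morphine", "tPA"}

for drug in compare_scope(sending_pharm, receiving_pharm):
    print(f"NOT TRAINED: {drug}")   # here, the thrombolytic tPA
```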

My example was originally designed to use national regulatory or voluntary compliance standards as a baseline. A methodology was developed allowing local, regional, or county-based training and skill sets to be incorporated into the system. The subsequent modifications to the system provided a means of tracking these local training programs, optionally using the resources that are the outcome of these programs, and communicating this information to disparate jurisdictions whose training has a completely different baseline but whose terminology and outcomes are similar.

Systems of this type are designed to give command authorities trusted, verified data on the skills, licenses, and certifications held by responding individuals and teams, in order to allow use of these human resources at the highest common denominator, thereby making the most effective use of the resources available and providing the highest level of care and services to those in need during a disaster of any scale.
Twenty-five years ago very little if any consideration was given to a need for instant reciprocity.  With a few exceptions, emergency resources were drawn locally or regionally from immediately adjacent jurisdictions.  Today responses to critical events can be national, leveraging the spirit and altruism that define America.  Twenty-five years ago a piece of paper, a uniform, or a badge could serve as proof of qualification.  Today the litigiousness of our society has prevented even the federal government from using emergency services personnel to their demonstrated capabilities.  The advent of the “Google” age of instant access to information has raised both the demand for service and the expectation that such service will be quickly and effectively delivered.

[1] Kuntz, K. (2000, June 23). Federal Advisory Committee, National Construction Safety Team Investigation, Station Nightclub Fire Emergency Response. Washington, D.C.: U.S. Fire Administration, U.S. Department of Homeland Security.
[2] Bluestein, G. (2005, September 7). Firefighters stuck in Ga. awaiting orders. USA Today.
[3] Committee on the Future of Emergency Care in the United States Health System (2006). Emergency Medical Services at the Crossroads. Institute of Medicine, National Academy of Sciences. Washington, DC: National Academies Press.
[4] Socha, T. (2011, February 15). Timely Access to Primary Stroke Centers in the United States. (HMP Communications LLC) Retrieved April 12, 2011, from First Report Managed Care:

This concept paper was first delivered as an open letter to the National EMS Advisory Council in January of 2011.  A revised version of the paper was published by the IEEE as part of a poster presentation at the annual IEEE Conference on Technologies for Homeland Security in December of 2011.

Friday, June 8, 2012

Sanity is not statistical: Why does it really matter if you are who you say you are?

Download PDF of complete paper

As Winston Smith, the protagonist of 1984’s Big Brother-dominated world, falls asleep, his last thought is “Sanity is not statistical” (Orwell, 1949).  Multitudes of varied analyses have accompanied this poignant quote from the George Orwell classic.  At their root they break down to a single common theme: everything is objectively true or false.  Depending on which side of the societal fence you reside on, this could mean truth is what is reported by Fox News or MSNBC, or that America is represented by the Occupy movement or the Tea Party.  The reality is that fundamental truths or untruths lie someplace between the extremes.  Things do not become true just because the majority believes in them, or false because the minority believes in them.  Ask 100 people leaving the local chain pharmacy if they need to have their loyalty card scanned, or to provide their email or phone number, to complete their purchase, and the majority will say yes.  Ask them why and you will likely be treated to some blank or puzzled stares.

The problem

If you ask 100 people on the street whether HTTPS is secure, it is likely that half of them will ask you what HTTPS is.  The majority of the remaining half will insist it is safe based on their cursory experience: HTTPS begins their bank’s URL, Amazon’s URL, etc., so of course it is safe or they would not use it.  A small minority will tell you nothing is secure, or make a statement that includes a variation on that theme.  It is true that HTTPS is a lot more secure than HTTP.  It is also true that it is possible to break into HTTPS/TLS/SSL even when websites do everything correctly.  Most people think of HTTPS as a bank vault when in fact they should equate it to the lock on the door of their house.  A locked door will keep the honest people honest and the casual thief forewarned, but it will not stop a determined attack.  Determined attacks like breaking into a CA, compromising a web site, or compromising a DNS server or a router are all paths around HTTPS security.
The United States population is one of the most open, information-centric demographics in the world.  Tens of millions of people voluntarily expose the most intimate details of their lives through the pervasive world of social networking.  More than 88% of consumers have made purchases online, spending more than 142 billion dollars in 2010, with a 14% increase continuing to trend upward through the 2nd quarter of 2011 (comScore, Inc., 2011). Within a few years this trend will represent hundreds of billions of dollars of transactions conducted with the barest of security protections.  The bulk of these transactions can be characterized as the modern equivalent of giving your checking account number, routing number, and driver’s license information to a 16-year-old supermarket customer service worker in return for a check-cashing card.  An FTC-sponsored survey estimated that the annual total loss to businesses due to ID theft approached $50 billion, with the total annual cost of identity theft to victims at $5 billion (H CMTE on Ways and Means, 2012).  This means more than a third of annual gross cyber revenue is lost to business, or, more likely, the losses are passed on to the consumers.  Yet those same hordes of consumers who willingly play this financial Russian roulette on a regular basis are also the vocal detractors of government-sponsored identity systems.  The paradox of an individual who will surrender his or her credit card, credit history, and identity to a faceless cyber organization but balk at providing a government-issued Social Security number to either a state or federal government program is astounding.
The fundamental issue is one of trust - not trusted identity but trusted government.  Winston, in 1984, represented a tacit prediction of the lack of trust people would have in their governments and the total control that governments would impose on their people in the future.  Although we have thus far escaped turning America into a totalitarian state, public trust is at an all-time low according to the Pew Research Center.  Nearly eighty percent of Americans do not trust their own government.  In fact, the only time since 1975 that government trust broke 50% was in the months following 9/11 (Thompson, 2010).  To summarize: eighty-eight percent of Americans trust the internet with their identity and their hard-earned money, while eighty percent of Americans distrust their government.  Given this situation, it is not surprising that government-sponsored identity trust models have struggled to get off the ground unless elevated by significant amounts of funding.

The best possible solution?

Granted, there are a number of security programs that offer trust to some degree; the most common of these are digital certificates.  A digital certificate is an electronic credential that establishes your identity when doing business or other transactions on the Web. It is issued by a certification authority (CA). It contains your name, a serial number, expiration dates, a copy of the certificate holder's public key (used for encrypting messages and verifying digital signatures), and the digital signature of the certificate-issuing authority so that a recipient can verify that the certificate is real.  It is not just individuals who can possess digital certificates; in fact, digital certificates are a byproduct of the Secure Sockets Layer protocol developed in 1994 by Netscape for sending information over the relatively new internet.  It is this specific solution that we have put under the magnifying glass.
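The structure just described can be sketched as follows. One loud caveat: the toy "signature" below hashes the certificate body together with a CA-held secret, which merely stands in for the asymmetric private-key signature a real X.509 CA would apply; the field names follow the description in the text rather than the actual X.509 schema.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Certificate:
    subject_name: str
    serial_number: int
    not_after: str          # expiration date
    public_key: str         # holder's public key (toy string)
    ca_signature: str = ""  # filled in by the issuing CA

    def body(self):
        """The fields the CA's signature covers."""
        return (f"{self.subject_name}|{self.serial_number}|"
                f"{self.not_after}|{self.public_key}")

def ca_sign(cert, ca_secret):
    """Toy CA: 'sign' by hashing the body with a CA-held secret."""
    cert.ca_signature = hashlib.sha256(
        (cert.body() + ca_secret).encode()).hexdigest()

def verify_cert(cert, ca_secret):
    """Recompute the signature; any tampered field breaks the match."""
    expected = hashlib.sha256(
        (cert.body() + ca_secret).encode()).hexdigest()
    return cert.ca_signature == expected

cert = Certificate("example.com", 1001, "2013-06-28", "pubkey-abc")
ca_sign(cert, "ca-secret")
print(verify_cert(cert, "ca-secret"))   # True
cert.subject_name = "evil.example"      # tampering breaks the check
print(verify_cert(cert, "ca-secret"))   # False
```

This is the property a relying party depends on: any change to the name, serial, expiration, or public key after issuance invalidates the CA's signature.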

SSL was created in the infancy of the internet and designed to prevent passive attacks.  When SSL was developed there was no such thing as e-commerce, and “credentials” were seldom if ever transmitted other than in and through government networks.  At the time the internet had fewer than five million users, but growth of nearly one hundred percent per year beginning in the late 1990s resulted in the four billion publicly facing hosts of today. (Coffman & Odlyzko, 1998)  The development of the SSL protocol did, however, recognize a potential vulnerability known as the “man-in-the-middle attack.”  A man-in-the-middle attack is carried out by an attacker making independent contact with the victims, e.g. user and host, and relaying information between them so that they appear to be communicating directly when in fact the data can be modified and/or stolen.  In order to guard against this [at the time] perceived threat, Certificate Authorities (CAs) providing public key encryption were introduced.  Public key encryption was described as follows during the development of the SSL protocol:

“Public key encryption is a technique that leverages asymmetric ciphers.  A public key system consists of two keys: a public key and a private key. Messages encrypted with the public key can only be decrypted with the associated private key. Conversely, messages encrypted with the private key can only be decrypted with the public key. Public key encryption tends to be extremely computing intensive and so is not suitable as a bulk cipher.” (Hickman, 1995)  In an interview with Moxie Marlinspike, CTO and co-founder of Whisper Systems, SSL designer Kipp Hickman said the addition of CAs was “thrown in at the end” and that “the whole CA thing was a bit of a hand wave.” (Marlinspike, 2011)
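Hickman's description can be made concrete with a toy RSA round trip using deliberately tiny primes. This is an illustration of the asymmetric-cipher idea only, never usable cryptography; real keys are 2048 bits or larger.

```python
# Toy RSA with tiny primes -- illustration only, not secure.
p, q = 61, 53
n = p * q                 # public modulus (part of both keys)
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

def apply_key(message: int, key: int) -> int:
    # Encryption and decryption are the same operation with different keys.
    return pow(message, key, n)

ciphertext = apply_key(42, e)         # anyone can encrypt with the public key
recovered = apply_key(ciphertext, d)  # only the private key recovers it
```

The modular exponentiation here is exactly the “computing intensive” step Hickman mentions, which is why SSL uses public key encryption only to exchange a symmetric session key rather than as the bulk cipher.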

In 2011, Comodo, the second largest certificate authority in the world, was hacked, resulting in nine fraudulent certificates for seven domains being issued.  Among the domains affected were Google, Yahoo, Skype, Mozilla, and Microsoft's Live.  Originally thought to be an act of “cyber terrorism” by a nation state (Iran) based on the IP address trace, it later appeared to be the work of a single individual without a great deal of technical experience. (Marlinspike, 2011)

So, given what appears to be a less than auspicious track record and questionable parentage, why would the educated consumer turn to a CA to help establish their identity and, more importantly, trust the identity of the cyber entity to which they are surrendering their financial information?  Historically, digital certificates can lay claim to nearly two decades of trust and effectiveness.  Each time you log into your bank account online or make a purchase with your Amazon account, the transactions and parties involved are authenticated using digital certificates.  As is obvious from our previous examples, the technology is not without its detractors and its very public failures.  These, however, need to be balanced against its success stories.

Why Government?

Government-sponsored PKI, more specifically US government-sponsored PKI, has not yet been compromised.  Like most of the rest of the PKI world, the US government PKI is built around the International Telecommunication Union (ITU) X.509 standard.  Program policy is overseen and managed through the Federal Public Key Infrastructure (FPKI) Policy Authority.  FPKI is an interagency body set up under the CIO Council to enforce digital certificate standards for trusted identity authentication across the federal agencies and between federal agencies and outside bodies such as universities, state and local governments, and commercial entities.  The United States adopted a Federal PKI policy and program in response to the Government Paperwork Elimination Act of 1998, which required electronic government services by October 21, 2003.  The law itself is technology agnostic, but the consensus is that PKI, combined with biometrics, multi-factor authentication, and hardware tokens, is the best available option.  In and of itself PKI is superior to the physical inked signature on a document, and when used with the previously described accoutrements it is superior to other existing electronic signatures.

The senior advisor to the chair of the Federal PKI Steering Committee sums up the US government program thusly:

“The goals of the U.S. Federal PKI are to create a cross-governmental, ubiquitous, interoperable Public Key Infrastructure and the development and use of applications which employ that PKI in support of Agency business processes. In addition, the U.S. Federal PKI must interoperate with State governments and with other national governments. Our goals recognize that the purpose of deploying a PKI is to provide secure electronic government services utilizing Internet technology, not only to satisfy the little hearts of a dedicated cadre of techno-nerds and paranoiac security gurus but to serve the citizenry.” (Alterman, 2012)

Who are you?  In Orwell’s 1984, Winston Smith was a clerk in the records department of the Ministry of Truth, where his job was to rewrite historical documents so that they matched the ever-changing party line.  This job involved removing photographs and altering documents, generally for the purpose of erasing “un-persons” who had crossed the party and were eliminated both physically and virtually.  The hesitancy of people to “share” information with the government is strongly influenced by an Orwellian fear that the more information the government has on you, the more control it will have over your life.  The purpose of this paper is not to debate the right or wrong of that statement but rather to clarify just what the government already knows and why it is necessary in the identity management world.

Who and what you are digitally is broken down into a series of attributes that define your person and lead to the rights and privileges that are based on those defining attributes.  Standardizing what these attributes are and how they are vetted leads to trust in the identities, a requirement for interoperability.  The best example of this trust model across multiple jurisdictions is Real ID.  Real ID has some controversial elements, but we are focusing only on its identity, vetting, and information-sharing elements.  These are the same elements required for you to open and use an online account, and they contain what is known as Personally Identifiable Information (PII).

“The REAL ID Act of 2005, Pub.L. 109-13, 119 Stat. 302, enacted May 11, 2005, was an Act of Congress that modified U.S. federal law pertaining to security, authentication, and issuance procedures standards for the state driver's licenses and identification (ID) cards, as well as various immigration issues pertaining to terrorism.
The law set forth certain requirements for state driver's licenses and ID cards to be accepted by the federal government for "official purposes", as defined by the Secretary of Homeland Security. The Secretary of Homeland Security has currently defined "official purposes" as presenting state driver's licenses and identification cards for boarding commercially operated airline flights and entering federal buildings and nuclear power plants”. (Wikimedia Foundation, Inc., 2012)

The American Civil Liberties Union, a strong opponent of Real ID and its variants, consistently claims that these types of programs are a severe detriment to privacy rights.  The ACLU states that there are “real security concerns with creating a federal identity document every American will need in order to fly on commercial airlines, enter government buildings, or open a bank account” and that “tens of thousands of people will have access to our information in a massive government database.  The national database could well become a one-stop shop for identity thieves.” (ACLU, 2008)  It can be argued that it is the hard sell, the phrase “required by law” that defines government programs, that causes the dissension.

Who are you really?

PII is “any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, social security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.” Examples of PII include, but are not limited to:
  • Name, such as full name, maiden name, mother’s maiden name, or alias
  • Personal identification number, such as social security number (SSN), passport number, driver’s license number, taxpayer identification number, or financial account or credit card number
  • Address information, such as street address or email address
  • Personal characteristics, including photographic image (especially of face or other identifying characteristic), fingerprints, handwriting, or other biometric data (e.g., retina scan, voice signature, facial geometry)
(McCallister, Grance, & Scarfone, 2010)
The problem with PII is that it is personally identifiable, and we live in a world where we have identities both physically and in cyberspace.  Twenty-first-century interaction requires that we have a digital identity, but the digital ecosystem has not yet balanced out.  As a result you can have multiple digital identities.  The real problem is that your identity could be stolen from you or even created without your knowledge.  Why is this?  Millions of Americans who do not trust local, state, or federal government to keep a database of PII willingly give it to any cyber entity who asks for it.  Consider all of the social networking sites, game and entertainment sites, browsers, cloud applications, and others, all requiring you to fill out a simple form, which most people do without question.  Without more than a few seconds’ consideration, many people give up their information to a faceless entity because that entity has something they want: information, a purchase, a connection, a relationship.  In goes your name, alias, address, bank or credit card information.  Once your basic information is in, you will nearly always be prompted for answers to secret questions, and in goes your mother’s maiden name, place of birth, father’s middle name, and so on.  Now that you have your account, how often do you fill in a profile with your age, gender, personal preferences, and more?  None of this data is used for making sure travel and government buildings are secure.  It is not protected in FISMA-compliant data centers or secured and encrypted with federally regulated PKI.  Rather, it is collected for the sole purpose of generating revenue, directly or indirectly, for the social networking or e-commerce web site you registered with.  The final blow comes with the social networking sites that flood you with options for sharing your information.

Who do you want to be?

The vast majority of Americans feel that the internet offers anonymity.  The old adage “On the internet, nobody knows you’re a dog” (Steiner, 1993) was published as part of a satirical cartoon in a 1993 edition of the magazine The New Yorker.  The message the cartoon was originally meant to convey was that internet users could send and receive messages in relative anonymity.  1993 was before social networking and e-commerce, a time when cyber anonymity equated to privacy.  That same anonymity is now a looming specter of privacy infringement and fraudulent identity creation, because there is no requirement to prove you are who you claim to be in order to establish a cyber identity.  Try the 20-20 experiment: spend twenty minutes and twenty dollars researching yourself on the internet.  Even a layperson is likely to develop enough information to establish a cyber identity, up to and including finding their social security number and financial history.  From this point fraudulent e-commerce is but a short step away.

Winston Smith rewrote identity history for the totalitarian government in Orwell’s 1984.  Today it is not the government that is the nameless, faceless predator stalking the dark paths of our cyber world but the opportunistic hacker or the casual yet technologically savvy cyber mugger.  Stealing your purse or wallet used to be an intimate physical act; today it is accomplished with the stroke of a keyboard.  It is time for the cyber world to recognize its inhabitants as unique individuals.  Contrary to popular belief, this uniqueness can be achieved in near complete anonymity compared to the publicly facing methods currently in use.  Moreover, the uniqueness can vastly increase the level of trust possible in a cyber identity while greatly reducing fraud and identity theft.  Your cyber identity need be nothing more than a digitally signed public and private key pair, an encrypted series of numbers that represents you.  Rather than repeatedly creating an untested, un-vetted cyber identity on every site you visit, you create a single profile with a single certificate authority.  Given the private sector’s track record, it is logical that that authority be, or be regulated and overseen by, government.  This does not require any information beyond what you have already provided to the government throughout your life in the form of birth certificates, social security card applications, tax records, vehicle registrations, and license applications of all types.  The difference is that this time the information will be cross-checked and a cyber alias, a series of numbers, will be created for and associated with that information.  The cyber alias can be tied to you through any number of unique physical identifiers, which make it virtually impossible for anyone to co-opt or use without your express permission and physical presence.  This process is in reality the exact opposite of the claims of its detractors: it locks up your cyber identity and provides you with the sole key to unlock and use it.
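The “series of numbers” idea can be sketched as a fingerprint of a public key. Hashing is used here as a simplified stand-in for the full digitally signed credential described above; the function name and key bytes are hypothetical.

```python
import hashlib

def cyber_alias(public_key: bytes) -> str:
    # Derive a stable, pseudonymous identifier from a public key.
    # The alias itself reveals no PII; only the issuing authority
    # can link it back to the vetted identity behind the key pair.
    return hashlib.sha256(public_key).hexdigest()[:16]

alias = cyber_alias(b"-----BEGIN PUBLIC KEY----- example bytes")
```

The same key always yields the same alias, and different keys yield different aliases, so relying parties can recognize a returning identity without ever learning who is behind it.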

Google your own name and ask yourself: is this really me?  Are you really willing to play the odds?  Nine million Americans were victims of identity theft in 2011.  Just how comfortable are you with that statistic?

Works Cited

ACLU. (2008, April 29). ACLU Testifies before Senate against Real ID. Retrieved May 15, 2012, from ACLU.
Alterman, P. (2012). The U.S. Federal PKI and the Federal Bridge Certification Authority. Retrieved May 15, 2012, from Federal PKI Policy Authority.
Coffman, K. G., & Odlyzko, A. M. (1998). The size and growth rate of the Internet. AT&T Labs - Research (2 Oct 1998).
comScore, Inc. (2011, August 8). comScore Reports $37.5 Billion in Q2 2011 U.S. Retail E-Commerce Spending, Up 14 Percent vs. Year Ago. Retrieved March 1, 2012, from comScore, Press & Events.
H CMTE on Ways and Means. (2012, February 29). Committee on Ways and Means Facts and Figures: Identity Theft. Retrieved March 2, 2012, from Committee on Ways and Means.
Hickman, K. E. (1995, April). The SSL Protocol. Internet Draft. CA: Netscape Communications Corp. Retrieved May 15, 2012.
Marlinspike, M. (2011). SSL And The Future Of Authenticity. Las Vegas, NV, USA. Retrieved May 15, 2012.
McCallister, E., Grance, T., & Scarfone, K. (2010, April). Special Publication 800-122: Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). Gaithersburg, MD, USA: US Dept of Commerce National Institute of Standards and Technology.
Orwell, G. (1949). 1984. (E. Fromm, Ed.) New York, New York: Harcourt.
Steiner, P. (1993, July 5). On the Internet, nobody knows you're a dog. The New Yorker. (D. Remnick, Ed.) New York City, New York, USA: Condé Nast. Retrieved May 16, 2012.
Thompson, D. (2010, April 19). 80 Percent of Americans Don't Trust the Government. Here's Why. Retrieved March 1, 2012, from The Atlantic Business Archive.
Wikimedia Foundation, Inc. (2012, May 10). The Real ID Act. Retrieved May 16, 2012, from Wikipedia.