Biometric Identification Technology - A Primer

What is Biometric Identification?

The word biometric combines two Greek roots: bio (life) and metric (measurement). In simple terms, it is about measuring the things about humans that make them different from other humans. Biometric identification is a technique that uses unique human characteristics to identify an individual. As opposed to key cards (something you have) or passwords (something you know), biometric modalities are something a person is, which makes them harder to spoof, share, or break. The biometric most people are familiar with is the fingerprint, but there are many other types of biometrics, each with its own strengths and weaknesses. In essence, biometrics is a complex, diverse field of science.

Biometric Modalities

Biometric modalities describe types of measurement. The modalities are broken into two categories: physiological and behavioral. Physiological biometrics are things like the image of an iris or face, or the pattern of the ridges on the fingers. Behavioral biometrics are things like a signature, the way an individual types, or the way a person walks. The most common biometric modality is the fingerprint, which has been used for identification since ancient Babylon.

Iris image, face image and fingerprint - the most common biometric modalities.

Other common biometrics are the iris (the colored part of the eye) and the face. These three constitute the vast majority of biometrics used in practice. Other biometrics include voice, gait (how a person walks), retina, hand geometry, ear shape, heartbeat, typing style, and even smell. It turns out that humans are full of unique things. This is the result of random variation in genetics and in development in the womb. Even identical twins have unique biometrics!

Biometric Verification vs. Identification

In a biometric system, you will typically match a sample image (called a probe) against a database of images (called a gallery). When an individual gets a background check from the FBI, the check compares their fingerprints against a gallery of 70 million criminal records. Unlocking a phone searches a gallery containing only a few enrolled examples (3-5) of the owner's finger. Although both of these processes involve biometric matching, they are fundamentally different in terms of difficulty.

One to one matching (1:1)
One to N matching (1:N)

When a phone is unlocked, the phone is checking to see that the fingerprint matches the owner's enrolled print, a process called verification. This is also called 1:1 matching because it compares a sample against one person in the gallery. With a background check or a crime scene, there is no guarantee that the sample is in the gallery, so the goal is to compare the fingerprint against all of the records to see if there is a match, a process called identification. This is also called 1:N matching because there is a comparison with every person in the gallery. Consequently, identification is harder than verification and increases in difficulty as the size of the gallery grows.
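The difference between the two processes can be sketched in a few lines of code. Everything here is illustrative: the toy similarity function, the threshold, and the tiny gallery are all assumptions, not how any real matcher works.

```python
# Illustrative sketch of verification (1:1) vs. identification (1:N).
# The similarity function, threshold, and feature vectors are all made up.

THRESHOLD = 0.80  # assumed decision threshold

def similarity(template_a, template_b):
    """Toy similarity: fraction of positions where the features agree."""
    matches = sum(1 for a, b in zip(template_a, template_b) if a == b)
    return matches / max(len(template_a), len(template_b))

def verify(probe, enrolled):
    """1:1 matching: compare the probe against one enrolled template."""
    return similarity(probe, enrolled) >= THRESHOLD

def identify(probe, gallery):
    """1:N matching: compare the probe against every record in the gallery."""
    hits = []
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= THRESHOLD:
            hits.append((person_id, score))
    return sorted(hits, key=lambda h: h[1], reverse=True)

gallery = {
    "alice": [1, 0, 1, 1, 0],
    "bob":   [0, 0, 1, 0, 1],
}
probe = [1, 0, 1, 1, 1]

print(verify(probe, gallery["alice"]))   # one comparison
print(identify(probe, gallery))          # N comparisons, one per record
```

Note how the cost of `identify` grows with the gallery: every record requires a comparison, which is why a search against millions of records is so much harder than unlocking a phone.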

All About Fingerprints

Fingerprints are the most common method for criminal identification (and other identification) today. They are useful because fingerprints are unique, don't change as people age, and can be detected as latent prints at a crime scene. No two people have ever been found to have the same fingerprints, but it can't be proven mathematically that fingerprints are unique; it's just very, very improbable that two people will have the same prints. The ridges on a finger can be worn down; builders who lay bricks or people who wash dishes will lose some detail. But the ridges grow back. People have tried to remove their fingerprints with burns or acid, and that can alter the pattern, but the new pattern is still unique. So fingerprints are a pretty handy way to identify people over time.

Fingerprint Matching

Before computers, fingerprints were stored on cards and categorized based on the pattern made by the ridges. In 1924, the Identification Division of the FBI was established by Congress, and by 1946 the FBI had processed over 100 million fingerprint cards. By 1971, the FBI stored over 200 million fingerprint cards.

To make a match, a person had to look at each card of that type and manually verify the match. Obviously, it was a slow and tedious process. Today, fingerprints are converted into a mathematical construct called a template. Specific features of the fingerprint called minutiae points are identified, and a vector is created from each point to the fingerprint core. A fingerprint template is a mathematical representation of the minutiae points and their relationship to each other. Templates are much smaller than images: a fingerprint template can be as small as 600 bytes, while a fingerprint image is around 500,000 bytes. The smaller size means templates are faster to match and less expensive to store.
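To get a feel for why templates are so compact, here is a hypothetical minutiae encoding. The field layout (6 bytes per point) is an assumption for illustration only; real template formats such as the ISO/IEC 19794-2 standard differ in detail.

```python
import struct

# Hypothetical minutiae template: each minutia point stored as
# (x, y, angle, type), packed into 6 bytes. The layout is illustrative,
# not a real standard.

minutiae = [
    # (x, y, angle_degrees, type)  type: 0 = ridge ending, 1 = bifurcation
    (120, 200, 45, 0),
    (98, 150, 310, 1),
    (200, 180, 90, 0),
]

def pack_template(points):
    """Pack each minutia as x, y (uint16) plus angle/2 and type (uint8)."""
    blob = b""
    for x, y, angle, mtype in points:
        blob += struct.pack("<HHBB", x, y, angle // 2, mtype)
    return blob

template = pack_template(minutiae)
print(len(template))  # 6 bytes per minutia -> 18 bytes for 3 points
```

At 6 bytes per point, even a full print with around 100 minutiae stays near 600 bytes, which is why templates are roughly a thousand times smaller than a raw 500 KB image.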

When automated fingerprint identification systems (AFIS) were first developed, it was decided that the computer should never make a "match" or "no-match" decision. Instead, these systems look at a latent print and return the top 20 candidate matches from the gallery. Even when nothing in the gallery truly matches, the computer still returns its top 20 candidates (all non-matches, in that case), so the computer never makes the "no match" decision either. A latent print examiner reviews all of the candidates and determines whether any is a match. Even though the computer finds the likely matches in the database, a human being always verifies the match using the images. The computer is a tool that assists the latent print examiner by reducing the number of images that must be evaluated.
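The candidate-list behavior can be sketched as a simple top-K selection. The scores and gallery here are fake; the point is only that the system returns a fixed-size candidate list whether or not a true match exists.

```python
import heapq

# Sketch of the AFIS candidate-list idea: always return the top-K scoring
# gallery records for a human examiner, never a match/no-match decision.

def candidate_list(scores, k=20):
    """Return the k highest-scoring (record_id, score) pairs.

    Returned even when every score is low: the examiner, not the
    computer, decides whether any candidate is a true match.
    """
    return heapq.nlargest(k, scores.items(), key=lambda item: item[1])

# Fake similarity scores for a 100-record gallery.
scores = {f"record_{i}": i * 0.01 for i in range(100)}

top = candidate_list(scores, k=20)
print(len(top))   # always 20, match or not
print(top[0][0])  # highest-scoring record first
```

The examiner then inspects the images behind those 20 records, so the final judgment is always human.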

How Computers See

Face recognition is easy for most people, but getting a computer to see a face is a much more difficult problem than most people imagine. That’s because humans are really good at vision. So good that we often forget how difficult it is. You don’t really see with your eyes – those are just the lenses. You see with your brain. It is ALL brain, and super-mind-blowing complex brain stuff as well. You use about 30% of your brain just processing the input from your eyes, compared to about 3% for hearing.

If you squint, it's easy to tell these are the same image but for a computer, these are two very different arrays of pixels.

Babies can recognize a facial shape (but not individuals) almost immediately after birth, and by four months a baby is able to recognize individuals at almost an adult level, even though the rest of their visual processing is not fully developed. Being able to recognize "mom" obviously has a huge survival advantage, so it is not surprising that this skill develops so early. And the magic isn't constrained to babies, either. As adults, faces are processed in a special part of the brain, different from the area where other objects are processed. That's why we sometimes see faces in abstract objects or attribute meaning to animal expressions that may be totally incorrect.

A computer can’t take advantage of millions of years of evolution in order to recognize a face. All it has is a 2-dimensional array of pixels, just like every other thing it encounters through its “eyes.” The human brain breaks down a scene into its component elements full of meaning and context. However, computers can’t do that – there is no context to anything they “see”.
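The squint-test caption above can be made concrete: two toy "images" of the same shape, shifted by a single pixel, disagree on many pixel values even though a person would call them identical. The 1-bit images below are invented for illustration.

```python
# Two tiny 1-bit "images" of the same vertical bar, shifted one pixel
# right. To a person they look like the same shape; compared
# pixel-by-pixel, many values differ.

image_a = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

image_b = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]

differing = sum(
    1
    for row_a, row_b in zip(image_a, image_b)
    for a, b in zip(row_a, row_b)
    if a != b
)
print(differing)  # 6 of 16 pixels differ, though the shape is the same
```

This is why naive pixel comparison is useless for recognition: the algorithm has to extract something invariant to position, lighting, and pose before it can compare two faces.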

Facial Recognition

Face Finding

When a computer looks at a scene to match faces, it first has to find the faces themselves, since it doesn't see the scene as individual objects the way we do. This process is called face detection, and it is what a modern cell phone is doing when it draws a small square box around a face on the camera screen. Face detection is a mature area and can be done extremely fast. The basic process is that an algorithm is trained on examples of faces and non-faces and learns to distinguish between them.
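One classic way to apply such a trained classifier is a sliding window: ask "face or not?" at every position in the image. The classifier below is a stub (it just checks mean brightness), standing in for a real trained model; the image values are made up.

```python
# Minimal sketch of the sliding-window idea behind face detection.
# classify_window is a stub standing in for a trained face/non-face
# classifier; real detectors use learned features, not brightness.

def classify_window(window):
    """Stub classifier: call a window a "face" if it is mostly bright."""
    flat = [px for row in window for px in row]
    return sum(flat) / len(flat) > 0.5

def detect_faces(image, win=2):
    """Slide a win x win window over the image and collect hit positions."""
    hits = []
    for top in range(len(image) - win + 1):
        for left in range(len(image[0]) - win + 1):
            window = [row[left:left + win] for row in image[top:top + win]]
            if classify_window(window):
                hits.append((top, left))
    return hits

image = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
]
print(detect_faces(image))  # the bright 2x2 region is flagged as a "face"
```

Real detectors make this fast by rejecting most windows with very cheap tests first and running expensive checks only on the survivors, which is how a phone can draw that box in real time.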

Face Matching

Once faces are found, the computer turns them into templates – mathematical representations of the face itself. A template is typically much smaller than the original image – anywhere from 1 KB to 20 KB – because it contains only the key information needed to match the face. Interestingly, the region the computer examines is much smaller than the region humans look at when recognizing faces, mainly because this central region doesn't change the way hair and beards do. Face matching is also done in grayscale, so the color information is gone by the time matching starts. Another interesting property is that a photograph can be turned into a template that will reliably match the same person, but a template cannot be turned back into an image.
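A common way to compare face templates is to treat each one as a feature vector and measure how close two vectors are. The vectors and threshold below are invented for illustration; real systems use learned features with hundreds of dimensions.

```python
import math

# Sketch of template-based face matching: each face becomes a fixed-length
# feature vector, and two faces "match" if the vectors are similar enough.
# The vectors and threshold are made up for illustration.

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend templates: two photos of the same person, one of someone else.
same_person_a = [0.9, 0.1, 0.4, 0.7]
same_person_b = [0.85, 0.15, 0.35, 0.75]
other_person  = [0.1, 0.9, 0.8, 0.1]

MATCH_THRESHOLD = 0.95  # assumed

print(cosine_similarity(same_person_a, same_person_b) > MATCH_THRESHOLD)
print(cosine_similarity(same_person_a, other_person) > MATCH_THRESHOLD)
```

Note that the template is one-way: the vector preserves enough information to compare faces, but not enough to reconstruct the photograph it came from.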

For high-security applications, face recognition is rarely used by itself. Most systems suffer from a high failure-to-acquire rate (the sensors don’t capture usable images) because of lighting variations and the many different ways people can hold their head. And when a match is made, it is not as certain as a fingerprint or iris match primarily because of the “fuzziness” of the information used to match compared with those two modalities. Face matches are typically going to have a higher error rate than other modalities because of difficulties in the presentation.

Iris versus Retina Recognition

When an optometrist looks through the lens of the eye, they are examining the retina. Retina recognition is rarely used anymore because the process can cause discomfort and requires multiple attempts. For this reason, it has fallen out of favor and has been replaced with iris recognition. If you hear anyone talking about an identification method that uses the retina, they almost certainly mean the iris.

An image of the retina of the eye and the iris of an eye.
On the left is a retina image. On the right is an iris image.

Iris recognition uses an image of the iris – the colored part of the eye surrounding the pupil. As opposed to retina recognition, iris recognition is easy, fast and painless. Images of the iris are captured with an ordinary camera while being illuminated with near infrared (NIR) light. This allows the detailed texture of even dark brown eyes to be captured.

Accuracy & Errors

In biometrics, one of the first questions that comes up is about accuracy. It's a complicated issue because a biometric match is never a sure thing – it is a matter of probability. When a phone owner unlocks their phone with a fingerprint, the phone is not 100% sure that the fingerprints match. And that is a problem, because we want important things to be 100% true. It's just not possible with biometrics (blame math).

The lower you set one type of error, the higher the other type becomes.

There are two types of errors a biometric system can make. A system can erroneously match someone it shouldn't; this type of error is called a false match. The second type of error is when the system does not match someone who should match, called a false non-match. To understand the accuracy of a biometric system, it is important to understand both of these probabilities. Consider a typical biometric system used to control access to a facility. A false match means admitting someone who shouldn't be allowed in. A false non-match means blocking someone who should be allowed. False non-matches are annoying; false matches are dangerous. Biometric systems can be tuned to optimize one factor over the other, but it is a trade-off: if false matches are decreased, false non-matches automatically increase, and vice versa.
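The trade-off comes from a single decision threshold. Sweeping that threshold over some made-up match scores shows one error rate falling as the other rises; the score samples below are invented for illustration.

```python
# Illustration of the false match / false non-match trade-off: sweep a
# decision threshold over made-up similarity scores and watch one error
# rate fall as the other rises.

genuine_scores  = [0.91, 0.85, 0.78, 0.95, 0.70, 0.88]  # same person
impostor_scores = [0.30, 0.55, 0.62, 0.45, 0.20, 0.75]  # different people

def error_rates(threshold):
    """Return (false match rate, false non-match rate) at this threshold."""
    false_non_matches = sum(1 for s in genuine_scores if s < threshold)
    false_matches = sum(1 for s in impostor_scores if s >= threshold)
    fnmr = false_non_matches / len(genuine_scores)
    fmr = false_matches / len(impostor_scores)
    return fmr, fnmr

for t in (0.5, 0.65, 0.8):
    fmr, fnmr = error_rates(t)
    print(f"threshold={t:.2f}  FMR={fmr:.2f}  FNMR={fnmr:.2f}")
```

A low threshold lets impostors through (high false match rate); a high threshold rejects legitimate users (high false non-match rate). Tuning a system means picking the point on that curve that matches the application: a phone unlock tolerates more false matches than a vault door.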

Identity & Privacy

Biometric technologies are still new and they are advancing rapidly. Laws are just starting to be created to address issues of privacy and ownership of biometric data. A few states have created laws and generally they say that a person must be notified if their biometrics are being collected and told how the biometrics are being used. The biometrics cannot be sold or used for any other purpose and they must be disposed of when there is no longer a need for them. But most states do not have any laws covering biometrics and companies are largely free to do what they like.

Neither these laws nor any existing laws prohibit government agencies from collecting or using biometric information in connection with law enforcement, immigration, border security, or national security. Like most technology, biometrics can be used for good or evil. However, for security, forensics, fraud prevention, and a host of other applications biometrics are an invaluable tool. To protect privacy, the best option is to continue to push for government and commercial transparency and accountability.
