One of the Problems with Mass Surveillance

When talking about biometric identification, it is important to understand the difference between verification and identification. If you use a mobile phone, you are probably familiar with verification - you present your finger, face, or iris to your phone and it verifies that you are who you claim to be. The phone already knows who you are claiming to be; it just has to compare the biometric data you give it to what is stored and decide whether it is close enough to match and unlock your phone. In contrast, with identification, biometric data is compared against a database of known people and the best match (if any) is found.

With identification, we are mostly worried about a type of error called a false match - erroneously saying a person matches someone in the database when they don’t. As we increase the size of the database, a false match becomes more likely, so the problem gets harder and harder. As an analogy, think of the old video games from the 1980s, where the pixels were very large - you could point to a particular pixel with your finger:

“Dragon” from Atari’s Adventure, 1980

However, with a modern game, you would need a microscope to point at a particular pixel. In other words, better precision requires better tools.

In physical spaces, identification is used in one of two ways - a whitelist or a watchlist. Basically, a whitelist is people you want to allow in, and a watchlist is people you want to keep out. For example, a whitelist could be employees in an office while a watchlist could be known shoplifters at a store. It may sound like these are two sides of the same coin, with the only difference being what happens when you make a match. But math tells a different story.

The interesting thing about math is that there is no arguing with it - if you don’t like how it works you are stuck. In this case, the math challenge comes from something called conditional probability. Here is a really easy way to understand the basic concept behind conditional probability (Bayes’ Rule):

“If you live in the United States, you probably speak English. If you speak English, you probably don’t live in the United States.”

That sounds contradictory, but if you think about it, it is clearly true. The US is obviously English-centric, but the population of English speakers living outside the US is much larger than the population of the US itself. Something can be more or less likely depending on something else, but the relationship is directional.
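To make that concrete, here is a minimal sketch of the arithmetic. The population figures are loose assumptions for illustration, not authoritative statistics:

```python
# Bayes' Rule with the English-speaker example.
# All population figures below are rough, assumed numbers.

us_population = 330e6           # assumed: ~330 million US residents
english_speakers_world = 1.5e9  # assumed: ~1.5 billion English speakers worldwide
p_english_given_us = 0.90       # assumed: ~90% of US residents speak English

# People who both live in the US and speak English:
us_english_speakers = us_population * p_english_given_us

# Bayes' Rule: P(US | English) = P(English | US) * P(US) / P(English).
# Here it reduces to the US share of the world's English speakers:
p_us_given_english = us_english_speakers / english_speakers_world

print(f"P(speaks English | lives in US) = {p_english_given_us:.0%}")  # ~90%
print(f"P(lives in US | speaks English) = {p_us_given_english:.0%}")  # ~20%
```

The direction of the conditioning is everything: the same two facts give a 90% probability one way and about 20% the other.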

Here is why this matters. Let’s assume our biometric matching system has a false match rate of 0.1%. That means that if we send 1,000 imposters through, on average 1 will get in. Furthermore, let’s assume we have a true match rate of 99%. That means if we send 100 people through who are in our database, we will match 99 of them. We will assume we are using this same system for both a watchlist and a whitelist. Both of these rates are measurable for an existing system.
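As a quick sanity check, here is a sketch turning those quoted rates into expected counts:

```python
# Turning the quoted error rates into expected counts.

false_match_rate = 0.001  # 0.1%: chance an imposter is wrongly matched
true_match_rate = 0.99    # 99%: chance an enrolled person is correctly matched

imposters = 1_000
enrolled = 100

print(f"Of {imposters} imposters, about {imposters * false_match_rate:.0f} will get in")
print(f"Of {enrolled} enrolled people, about {enrolled * true_match_rate:.0f} will be matched")
```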

Now we have to make an assumption about the prior probabilities - what is the likelihood that an imposter will try to fool the system? Well, an imposter in a whitelist system is obvious - it is anyone who isn’t in the database. With a watchlist, though, most people are “imposters” - they aren’t on the watchlist. And that is what makes all the difference.

Let’s assume a whitelist office scenario where 5 out of every 100 people walking by will try their luck and see if they can get in. That means our prior probability that a person is authorized is 95%. For the watchlist, however, let’s assume we are scanning a crowd of 100,000 at a public location and we think maybe 10 people (1 in 10,000) are on our watchlist, giving us a prior probability of 0.01%.

Run those numbers through Bayes’ Rule and the result is sobering. Each of the 99,990 innocent people in the crowd is compared against all 10 watchlist entries, so roughly 1 in 100 of them will falsely match someone - about 1,000 false matches next to roughly 10 true ones. A match on the watchlist therefore means there is less than a 1% chance that the match is correct, even though the overall accuracy of the system is high. Put another way, for every person we catch on the watchlist, we are going to detain 99 innocent people. Feel free to plug in your own numbers and see how things work out.
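Here is a sketch of that calculation. One assumption is made explicit: the 0.1% false match rate applies per comparison, so in a one-to-many search each passerby is checked against every entry on the 10-person watchlist (the whitelist is treated as a single yes/no check, as in the office example):

```python
# Whitelist vs. watchlist precision via Bayes' Rule.
# Assumption: the false match rate is per comparison, so searching a
# database of db_size entries multiplies an innocent person's chance
# of falsely matching someone.

def match_precision(prior, true_match_rate, false_match_rate, db_size=1):
    """P(person really is on the list | the system reports a match)."""
    # Chance an innocent person falsely matches at least one entry:
    p_false = 1 - (1 - false_match_rate) ** db_size
    true_positives = prior * true_match_rate
    false_positives = (1 - prior) * p_false
    return true_positives / (true_positives + false_positives)

# Whitelist office: 95 of every 100 people are authorized.
print(f"Whitelist: {match_precision(0.95, 0.99, 0.001):.2%}")  # ~99.99%

# Watchlist: 10 of 100,000 people are on a 10-entry watchlist.
print(f"Watchlist: {match_precision(10 / 100_000, 0.99, 0.001, db_size=10):.2%}")  # ~0.98%
```

Change the priors or the error rates and rerun it - the whitelist stays robust while the watchlist precision collapses.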

This is because of something called the Paradox of the False Positive. When we are looking for things that are unlikely to happen, most of our positives will be false positives unless we have a very accurate system. In fact, to get something that performs close to the whitelist scenario requires a false match probability of 0.00001%!
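To see how demanding that is, here is the same sketch sweeping the per-comparison false match rate downward, keeping the crowd assumptions from above:

```python
# How accurate must the matcher be before a watchlist match is trustworthy?
# Same assumed scenario: 10 of 100,000 people on a 10-entry watchlist.

def match_precision(prior, true_match_rate, false_match_rate, db_size=1):
    p_false = 1 - (1 - false_match_rate) ** db_size
    true_positives = prior * true_match_rate
    false_positives = (1 - prior) * p_false
    return true_positives / (true_positives + false_positives)

for fmr in (1e-3, 1e-4, 1e-5, 1e-6, 1e-7):
    p = match_precision(10 / 100_000, 0.99, fmr, db_size=10)
    print(f"false match rate {fmr:.5%}: a match is correct {p:.1%} of the time")
```

Only down around a 0.00001% false match rate does a watchlist match become roughly as believable as a whitelist match.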

I’ve described this problem in a biometric identity scenario, but it actually isn’t specific to biometrics at all - it is just basic probability, it occurs everywhere, and it is especially problematic in medicine. If you get a positive test for a rare disease, you probably don’t have the disease. In medicine, we are OK with stressing out tens of thousands of people in order to catch the one person with the disease. The stakes are that high. However, can we say we are OK with stressing out thousands of people trying to have a good time in order to catch one criminal?
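The medical version of the arithmetic looks like this - a sketch with illustrative numbers: a disease affecting 1 in 10,000 people, and a test with 99% sensitivity and a 1% false positive rate:

```python
# The false positive paradox with a rare disease. The prevalence and
# test accuracy below are illustrative assumptions.

prevalence = 1 / 10_000
sensitivity = 0.99          # P(positive test | sick)
false_positive_rate = 0.01  # P(positive test | healthy)

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"P(sick | positive test) = {p_sick_given_positive:.1%}")  # ~1%
```

Even with a 99%-accurate test, a positive result for this disease is wrong about 99 times out of 100.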