Face it: Facial recognition technology isn’t close to ready for prime time

Last week brought the latest setback in the fight to see how the NYPD looks at all of us, and specifically how it uses facial recognition. A state Supreme Court judge ordered privacy researchers to return to the police department 20 pages of confidential information detailing how it uses artificial intelligence to analyze photos of millions of New Yorkers. The ruling came after the NYPD failed to redact sensitive information from the 3,700-page trove of documents it was forced to turn over in litigation with the Georgetown Center on Privacy and Technology.

Before the lawsuit, the NYPD refused to share even the most basic details of how these tools work, when they’re used, and how often they get it wrong.

So what do we know? We know that facial recognition software is flawed, very flawed. This technology can be an incredibly powerful and accurate tool, but it’s only as good as the photos we feed it. Have a high-resolution photo of the suspect looking straight at the camera, eyes wide open and mouth shut? Great, you can probably get a good match with a prior mugshot. But how often are those the photos we get from crime scenes?

Often, police try to match a blurry, black-and-white image, taken at an angle, with the face partially hidden. So what do officers do when only part of the face is visible? Do they feed it into a magic algorithm that reconstructs the suspect’s appearance? No, they typically turn to Photoshop. Officers alter the image to create a guesstimate of what the suspect looks like. Given these low-quality photos, it’s no wonder that the NYPD typically gets more than 200 “potential matches.”

The situation is even worse for New Yorkers of color, especially women. Why? Facial recognition is racist. Software that does a good job identifying white men has a much worse track record with women and people of color. According to an MIT study from earlier this year, Amazon’s facial recognition correctly identified white men 100% of the time, but it was wrong 31% of the time for women with darker skin tones.

And here’s where we go from computer bias to human bias. For decades, we’ve known that people who view a lineup often get it wrong. In 1984, psychology researcher Gary Wells found that witnesses “have a natural propensity to identify the person in the lineup who looks most like the perpetrator relative to the others.” In other words, “if the real perp’s not there, there’s still somebody who looks more like the perpetrator than others.”

Officers reviewing a sheet of hundreds of potential matches will naturally choose the person who looks most like the suspect, not the person who is actually a “match.” And since the NYPD database returns more than 200 New Yorkers for every photo, at most one of those candidates can be the actual suspect, which means at least 199 of every 200, or 99.5%, of those “matches” are wrong. Not exactly an open-and-shut case.

As of 2017, the department boasted of more than 2,700 arrests made with the help of facial recognition. How many of those were the real perpetrators? Until the NYPD can show us how its facial recognition system works, and until it can prove that the system is not biased, it should turn a blind eye to facial recognition.

Cahn is the executive director of The Surveillance Technology Oversight Project, a New York–based civil rights and police accountability organization. On Twitter @cahnlawny.