For the last few years, police forces around China have invested heavily to build the world’s largest video surveillance and facial recognition system, incorporating more than 170 million cameras so far. In a December test of the dragnet in Guiyang, a city of 4.3 million people in southwest China, a BBC reporter was flagged for arrest within seven minutes of police adding his headshot to a facial recognition database. And in the southeast city of Nanchang, Chinese police say that last month they arrested a suspect wanted for “economic crimes” after a facial recognition system spotted him at a pop concert amidst 60,000 other attendees.
These types of stories, combined with reports that computer vision recognizes some types of images more accurately than humans, make it seem like the Panopticon has officially arrived. In the US alone, 117 million Americans, or roughly one in two US adults, have their picture in a law enforcement facial recognition database.
But the technology’s accuracy and reliability are, at this point, much more modest than advertised, and those imperfections make law enforcement’s use of it potentially sinister in a different way. These systems are prone to both false positives, in which a program incorrectly identifies Lisa as Ann, and false negatives, in which a person goes unidentified even though they’re in the database.
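The distinction can be sketched with a hypothetical similarity-score threshold. The scores, names, and `THRESHOLD` value below are purely illustrative, not drawn from any real system:

```python
# Illustrative sketch of false positives vs. false negatives in face matching.
# All scores and the threshold are made up for the example.

THRESHOLD = 0.8  # hypothetical similarity cutoff for declaring a "match"

def classify(similarity, same_person, threshold=THRESHOLD):
    """Label one face comparison as a true/false positive/negative."""
    predicted_match = similarity >= threshold
    if predicted_match and not same_person:
        return "false positive"   # e.g. Lisa misidentified as Ann
    if not predicted_match and same_person:
        return "false negative"   # a person in the database goes unmatched
    return "true positive" if predicted_match else "true negative"

# (probe, database entry, similarity score, actually the same person?)
comparisons = [
    ("Lisa", "Ann",  0.85, False),  # high score, different people
    ("Lisa", "Lisa", 0.90, True),   # high score, same person
    ("Bob",  "Bob",  0.60, True),   # low score, same person
]

for probe, entry, score, same in comparisons:
    print(probe, "vs", entry, "->", classify(score, same))
```

Raising the threshold trades false positives for false negatives, and vice versa; no setting eliminates both.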
For an extreme example of what can go wrong, take data recently released in response to a Freedom of Information request and posted by South Wales Police. It shows that at the Champions League final game in Cardiff last year, South Wales police logged 173 true face matches and wrongly identified a whopping 2,297 people as suspicious—a 92 percent false positive rate.
“From a government’s point of view a dragnet that catches a lot of extra people from which they then filter out what they’re interested in might be considered as working and might not cost them too much,” says Suresh Venkatasubramanian, a professor of computer science at the University of Utah who studies discrimination and bias in automated decision making. “But from your point of view if you’re caught up in one of these false positive dragnets, that might not seem like it’s working to you.”
The South Wales police department says it has refined its algorithms and improved the quality of images in its databases since then, but it still describes that early deployment as successful. “The past 10 months have been a resounding success in terms of validating the technology, building confidence amongst our officers and the public whilst offering a potential area for growth for us with the technology in the future,” the department wrote in a defense of its facial recognition program. Meanwhile, according to its own data, the South Wales system had an 87.5 percent false positive rate at an Anthony Joshua boxing match in Cardiff at the end of March.
“Normal error rates would suggest that you’re going to get a lot of hits if you just indiscriminately take a lot of people’s faces and run them against your database, versus the other way around where you target your search for a specific person and try to match that person’s face to the crowd,” Venkatasubramanian notes. “There are subtleties in how a system is deployed versus how it was trained. We often see a failure mode for the use of algorithms in these systems where they’re trained for one thing, but they’re being used a slightly different way and that causes problems.”
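The dragnet effect Venkatasubramanian describes follows from base rates: even a small per-face error rate, multiplied across a huge crowd, swamps the handful of genuine matches. A back-of-the-envelope sketch, with every number hypothetical and chosen only for illustration:

```python
# Why indiscriminate crowd scanning yields mostly false alerts.
# All figures below are hypothetical, for illustration only.

crowd_size = 60_000          # a stadium-sized crowd
suspects_present = 1         # actual watchlist members in the crowd
per_face_fp_rate = 0.001     # assume 0.1% of innocent faces trigger a match
true_positive_rate = 0.9     # assume a real suspect is flagged 90% of the time

expected_false_alerts = (crowd_size - suspects_present) * per_face_fp_rate
expected_true_alerts = suspects_present * true_positive_rate
share_false = expected_false_alerts / (expected_false_alerts + expected_true_alerts)

print(f"expected false alerts: {expected_false_alerts:.0f}")
print(f"share of alerts that are false: {share_false:.0%}")
```

Under these assumptions, roughly 60 innocent people are flagged for every real suspect, so the overwhelming majority of alerts are wrong even though the per-face error rate sounds tiny.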
Those worried about facial recognition surveillance’s impact on their personal privacy might view these flaws as a potential advantage: a fallible system could be easier to hide from. But in practice, these deficiencies can cause innocent people to be flagged as suspicious and can even lead to wrongful arrests. In one US example, a Denver man was arrested on two separate occasions in connection with two bank robberies after CCTV footage showed a perpetrator who resembled him; the actual robber was later determined to be someone else. Researchers have also found that societal biases, such as racial prejudice, are reflected both in the data used to train facial recognition models and in the algorithms themselves.
“The issue we have is there’s no transparency into all the misfires, and the reason there’s no transparency is that there’s no law,” says Alvaro Bedoya, the executive director of the Center for Privacy & Technology, which has extensively studied law enforcement use of facial recognition. “It’s very easy to write a glowing report about real-time face recognition if your only source is the police department bragging about the guy they caught.”
Machine learning researchers note that some inaccuracies in facial recognition systems are inevitable no matter how refined the technology becomes. And privacy advocates argue that this reality underscores the need for system audits and legislation that manages facial recognition deployment, crucial measures for protecting individual privacy and the ability to remain anonymous.
Reining It In
So far, lawmakers worldwide have been slow to codify parameters for the technology. Even the United Kingdom, which has experimented with facial recognition tools in law enforcement since the 1990s, lacks a regulatory framework for it. In the US, representatives Jim Jordan from Ohio and Ted Lieu from California have expressed a desire to introduce legislation addressing the government’s use of facial recognition. In the meantime, though, the technology has proliferated unchecked.
“Customs and Border Protection is already starting to use face recognition at the borders in airports and also at a couple of land border crossings,” says Jennifer Lynch, a senior staff attorney at the Electronic Frontier Foundation who in February published an extensive report on law enforcement’s use of facial recognition tools. “I think what I am most concerned about is what they are starting to do in airports, because they’re partnering with some of the private airlines in doing face recognition screening of people getting on international flights. That includes US citizens and they’re retaining the data on US citizens and I don’t think they have any legal authority to do that at all.”
Research also indicates that at least a quarter of local and state police departments in the US either maintain their own facial recognition database or can access another agency’s to use as they see fit in investigations and policing. Law enforcement in the majority of states can also access photos taken for identification documents like driver’s licenses. The Department of Justice operates broad facial recognition programs as well.
Meanwhile, Government Accountability Office audits have surfaced serious reliability and privacy concerns. One audit, in May 2016, found that the FBI was not fully complying with existing privacy laws and was not publishing “privacy impact assessments” in a timely way. The evaluation also concluded that the FBI’s system accuracy tests were inadequate and not comprehensive. In March 2017, the GAO said that the FBI had responded to some of the findings and initiated changes, but that others were still outstanding.
If there’s an upside to the unease around facial recognition technology—both when it works and when it doesn’t—it’s that people intuitively understand the technology’s privacy implications. “People on the street get how creepy face recognition is in a way they don’t really get it for cookies on your browser or other technologies. They understand this,” says Center for Privacy & Technology’s Bedoya.
That visceral connection may help privacy advocates encourage oversight to ensure that the technology is used responsibly. But for the public, promises of a better-regulated tomorrow don’t help protect people today.