AI, Protests, and Justice

By Mike Loukides
July 21, 2020

Largely because of the Black Lives Matter movement, the public’s response to the murder of George Floyd, and the subsequent demonstrations, we’ve seen increased concern about the use of face recognition in policing.

First, in a highly publicized wave of announcements, IBM, Microsoft, and Amazon said that they will not sell face recognition technology to police forces. IBM’s announcement went the furthest; they’re withdrawing from face recognition research and product development entirely. Amazon’s statement was much more limited: a one-year moratorium on police use of their Rekognition product, with the hope that Congress will pass regulation on the use of face recognition in the meantime.


These statements are fine, as far as they go. As many have pointed out, Amazon and Microsoft are just passing the buck to Congress, which isn’t likely to do anything substantial. (As far as I know, Amazon is still partnering with local police forces on its Ring video doorbell, which includes a camera.) And, as others have pointed out, IBM, Microsoft, and Amazon are not the most important suppliers of face recognition technology to law enforcement. That market is dominated by a number of less prominent companies, of which the most visible are Palantir and Clearview AI. I suspect the executives at those companies are smiling: IBM, Microsoft, and Amazon may not be the most important players, but their departure (even if only temporary) means there’s less competition.

So, much as I approve of companies pulling back from products that are used unethically, we also have to be clear about what these announcements actually accomplish: not much. Other companies that are less concerned about ethics will fill the gap.

Another response is increased effort within cities to ban the use of face recognition technology by police. That trend, of course, isn’t new; San Francisco, Oakland, Boston, and a number of other cities have instituted such bans. Accuracy is an issue—not just for people of color, but for anyone. London’s police chief is on record as saying that he’s “completely comfortable” with the use of face recognition technology, despite his department’s 98% false positive rate. I’ve seen similar statements, and similar false positive rates, from other departments.

We’ve also seen the first known case of a person falsely arrested because of face recognition. “First known case” is extremely important in this context; the victim only found out that he was targeted by face recognition because he overheard a conversation between police officers. We need to ask: how many people have already been arrested, imprisoned, and even convicted on the basis of incorrect face recognition? I am sure that number isn’t zero, and I suspect it is shockingly large.

City-wide bans on the use of face recognition by police are a step in the right direction; statewide and national legislation would be better; but I think we have to ask the harder question. Given that police response to the protests over George Floyd’s murder has revealed that, in many cities, law enforcement is essentially lawless, will these regulations have any effect? Or will they just be ignored? My guess is “ignored.”

That brings me to my point: given that corporate pullbacks from face recognition sales and local regulation of these products are praiseworthy but unlikely to be effective, what other responses are possible? How do we shift the balance of power between the surveillors and the surveilled? What can be done to subvert these systems?

There are two kinds of responses. The first is extreme fashion. CV Dazzle is one site that shows how fashion can be used to defeat face detection; Juggalo makeup is another approach. If you don’t like these rather extreme looks, remember that researchers have shown that altering even a few pixels can defeat image recognition, turning a stop sign into something else. Could a simple “birthmark,” applied with a felt-tip pen or lipstick, defeat face recognition? I haven’t read anything about this specifically, but I would bet that it could. Face masks themselves provide good protection from face ID, and COVID-19 isn’t going away any time soon.

The problem with these techniques (particularly my birthmark suggestion) is that you don’t know what technology is being used for face recognition, and useful adversarial techniques depend heavily on the specific face recognition model. The CV Dazzle site states clearly that its designs have only been tested against one algorithm (and one that is now relatively old). Juggalo makeup doesn’t alter basic facial structure. Fake birthmarks would depend on very specific vulnerabilities in the face recognition algorithm. Even face masks may not be enough: there has been research on reconstructing images of faces from an image of the ears alone.
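
To make that model dependence concrete, here’s a minimal sketch of one well-known gradient-based attack, the fast gradient sign method (FGSM), in Python with PyTorch. Note what it requires: white-box access to the target model’s gradients—exactly the access a protester wouldn’t have. The model choice and epsilon below are illustrative stand-ins, not a real attack on any deployed system.

```python
# Minimal sketch of the fast gradient sign method (FGSM) in PyTorch.
# Illustrative only: it assumes white-box access to the target model,
# and the model and epsilon below are stand-ins, not a real attack.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image, true_label, epsilon=0.03):
    # image: (1, 3, H, W) tensor, normalized the way the model expects
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss;
    # the result usually looks unchanged to a human but can fool the model.
    return (image + epsilon * image.grad.sign()).detach()
```

A perturbation computed this way against one model often fails against a different one, which is exactly why fake birthmarks and dazzle makeup are fragile defenses in practice.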

Many vendors (including Adobe and YouTube) have provided tools for blurring faces in photos and videos. Stanford has just released a new web app that detects all the faces in the picture and blocks them out. Anyone who is at a demonstration and wants to take photographs should use these.
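
If you’d rather roll your own than trust a web app, the building blocks are freely available. Here’s a minimal sketch using OpenCV’s bundled Haar cascade face detector; production tools likely use stronger detectors, and the file names here are hypothetical.

```python
# Minimal sketch: detect and blur faces in a photo with OpenCV's
# bundled Haar cascade. Production tools likely use stronger detectors.
import cv2

def blur_faces(in_path, out_path):
    img = cv2.imread(in_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy Gaussian blur.
        img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 30)
    cv2.imwrite(out_path, img)

# Hypothetical file names, for illustration.
blur_faces("protest_photo.jpg", "protest_photo_blurred.jpg")
```

One caveat: Haar cascades miss faces at odd angles, so anything you publish should still be checked by eye.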

But we shouldn’t limit ourselves to defense. In many cities, police refused to identify themselves; in Washington, DC, an army of federal agents appeared wearing no identification or insignia, and similar incognito armies have recently appeared in Portland, Oregon, and other cities. Face recognition works both ways, and I’d bet that most of the software you’d need to construct a face recognition platform is open source. Would it be possible to create a tool for identifying violent police officers and bringing them to justice? Human rights groups are already using AI: there’s an important initiative to use AI to document war crimes in Yemen. If it’s difficult or impossible to limit the use of face recognition by those in power, the answer may well be to give these tools to the public to increase accountability, much as David Brin suggested many years ago in his prescient book about privacy, The Transparent Society.
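
As a rough proof of that “mostly open source” claim: the widely used face_recognition library (an open source wrapper around dlib) handles detection, encoding, and matching in a few lines. A minimal sketch, with hypothetical file names:

```python
# Minimal sketch of face matching with the open-source `face_recognition`
# library (a dlib wrapper). File names are hypothetical; assumes each
# image contains at least one detectable face.
import face_recognition

# Encode a reference face and a face from a frame of protest footage.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("officer_reference.jpg"))[0]
unknown = face_recognition.face_encodings(
    face_recognition.load_image_file("frame_from_video.jpg"))[0]

# compare_faces returns one boolean per known encoding.
match = face_recognition.compare_faces([known], unknown, tolerance=0.6)[0]
print("Same person?", match)
```

That’s the whole pipeline; scaling it up is an engineering problem, not a research problem, which is precisely why the technology won’t stay in anyone’s hands exclusively.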

Technology “solutionism” won’t solve the problem of abuse—whether that’s abuse of technology itself, or plain old physical abuse. But we shouldn’t naively think that regulation will put technology back into some mythical “box.” Face recognition isn’t going away. That being the case, people interested in justice need to understand it, experiment with ways to deflect it, and perhaps even start using it themselves.
