Face recognition. (source: Steven Lilley on Flickr)

A few weeks ago, I wrote about subverting technologies like face recognition, focusing specifically on the CV-Dazzle project to design cosmetics that defeat computer vision.

More recently, I've read about less dramatic approaches to subverting face recognition. Accessorize to a Crime, a paper by researchers at Carnegie Mellon and the University of North Carolina, also discusses how minor changes to images can fool face recognition systems. These changes include small (and nearly undetectable) distortions of the image itself, but the authors recognize that, in many situations, people wanting to avoid identification won't have access to the images. So, they demonstrate that simply wearing a pair of eyeglass frames can confuse face recognition: in one case, causing the system to mislabel a picture of Reese Witherspoon as Russell Crowe. As the authors point out, eyeglass frames are widely available, and even 3D-printable. And glasses are nowhere near as conspicuous as CV-Dazzle's makeup.

In Fooling Image Classification Networks Is Really Easy, Michael Byrne writes about a technique that distorts a picture in ways that are almost imperceptible to humans but foil current image classification algorithms. The changes aren't dramatic; they look more like a watermark (remember watermarks on high-quality paper?) than anything else. While you can't alter images other people take of you, you can at least ID-proof images you take of your friends. How long will it be before we see tools like this as Instagram filters?
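
To get a feel for why such changes are invisible, here's a toy sketch of the idea in NumPy. A real attack computes the perturbation from the classifier's gradient; this sketch substitutes a random sign pattern (an assumption, purely for illustration) so you can see how small the per-pixel change is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image: 64x64 grayscale, values in [0, 1].
image = rng.random((64, 64))

# A real attack derives this direction from the model's gradient;
# a random sign pattern stands in for it here.
perturbation_direction = np.sign(rng.standard_normal((64, 64)))

epsilon = 2 / 255  # each pixel moves by at most 2 intensity levels
adversarial = np.clip(image + epsilon * perturbation_direction, 0.0, 1.0)

# The change is bounded, and essentially invisible to a human viewer.
max_change = np.abs(adversarial - image).max()
print(f"largest per-pixel change: {max_change:.4f}")
```

A shift of 2 intensity levels out of 255 is well below what the eye notices, yet perturbations on this scale, when aimed along a model's gradient, are enough to flip a classifier's label.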

These are all examples of "defensive computing," even CV-Dazzle, which relies on makeup artists rather than algorithms. Defensive computing is about preserving some sense of privacy and invisibility in the face of surveillance: the use and abuse of information about you, without your consent. It's easy to fixate on face recognition: it's in the news, it's hot, it's creepy. But computer vision isn't the only area where we need defensive countermeasures. We're all familiar with ad blockers; they're an example of defensive computing that's already widely accepted.

The U.S. Congress recently gave ISPs a huge gift: they can sell your data to whomever they want, without your permission. Writing on Cathy O'Neil's blog, Angela Grammatas has a good explanation of what's happening: it's all fodder for targeted marketing, for ads that follow you around wherever you go, even long after you've bought (or not bought) the product. An editorial in BloombergView cheerily opines that we should let the market do its magic: privacy at a price, for those who can afford it. If you can't pay, you should shut up and let the advertisers subsidize your internet habit. But that argument not only misses the point, it's ill-informed about the protection offered by our current technology and our current laws.

The Bloomberg author suggests that growing use of the HTTPS protocol, which encrypts data sent to and from a website, will protect us. HTTPS is important; you should never do business with an insecure website. But it's incorrect to assume that HTTPS hides everything from your ISP. Because of how DNS lookups and the TLS handshake work, hostnames are still sent in the clear. So, your internet service provider can see every site you visit (if not the precise page), and sell that. Do our current laws protect you? Bloomberg points to the privacy protections of 47 U.S.C. § 222, but doesn't realize that when this law says "telecommunications carrier," it means a telephone system, not an ISP. And legally, there is a huge difference.
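
The split between what HTTPS hides and what it doesn't falls neatly along the parts of a URL. A small sketch (the URL is a made-up example) shows the boundary:

```python
from urllib.parse import urlsplit

# What an ISP can and cannot see for a single HTTPS request.
url = "https://example.com/account/settings?token=secret"
parts = urlsplit(url)

# The hostname leaks in cleartext: once via the DNS lookup,
# and again in the TLS handshake's SNI extension.
visible_to_isp = parts.hostname

# The path and query string travel inside the encrypted channel.
hidden_by_tls = parts.path + ("?" + parts.query if parts.query else "")

print("visible to ISP:", visible_to_isp)
print("hidden by TLS: ", hidden_by_tls)
```

So your ISP learns that you visited example.com, and when, and how often; it just can't read which page you loaded or what you typed into it.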

In her post, Grammatas describes a browser plugin she's created that makes your browsing history a lot less valuable. Noiszy constantly visits random web sites in the background, drowning out any meaningful data in a lot of noise. How much random data does it take to make any information gathering meaningless? It's hard to say, and even harder to tell whether an ISP could filter out the noise. A naive approach to obscuring your browsing history (for example, visiting misleading sites in a fixed order) could easily be detected and filtered out if an ISP wanted to expend the effort. Noiszy only visits a list of sites you approve, but even that provides a kind of information. Still, what's important about Noiszy isn't that it's perfect, but that it exists, and that people are thinking about countermeasures against intrusive computing.
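
The core idea behind a tool like Noiszy can be sketched in a few lines. This is not Noiszy's actual code; it's a minimal illustration, with a hypothetical allow-list, of generating decoy visits at random intervals so the traffic has no fixed, filterable pattern:

```python
import random

# Hypothetical allow-list: like Noiszy, only visit sites the user approves.
APPROVED_SITES = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

def noise_plan(n_visits, seed=None):
    """Build a random sequence of decoy visits with random pauses.

    Randomizing both the site and the interval avoids the fixed
    ordering that an ISP could easily detect and filter out.
    """
    rng = random.Random(seed)
    plan = []
    for _ in range(n_visits):
        site = rng.choice(APPROVED_SITES)
        pause = rng.uniform(5.0, 60.0)  # seconds between visits
        plan.append((site, pause))
    return plan

for site, pause in noise_plan(5, seed=42):
    print(f"visit {site}, then wait {pause:.0f}s")
    # a real plugin would actually fetch the page here
```

Even this sketch exposes the limitation noted above: the decoys are drawn from a known list, and that list is itself a kind of information.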

It's hard for someone who has watched the internet from its infancy to admit that it has become a hostile place—even for someone who has written about the computing of distrust. It's one thing to write about a disenchanted world, another thing to live it. But that's where we are: the tools of defensive computing, whether they involve mascara and face paint or random autonomous web browsing, belong to that harsh reality we've built. What other defensive tools will we see? I don't know, but I'll be watching.
