Learning from adversaries

Adversarial images aren’t a problem—they’re an opportunity to explore new ways of interacting with AI.

By Mike Loukides
July 31, 2019
Stereogram (source: fdecomite on Flickr)

A recent paper, Natural Adversarial Examples, pointed out many real-world images that machine learning (ML) systems identify incorrectly: squirrels classified as sea lions or frogs, eagles classified as limousines, mushrooms classified as pretzels or nails, and so on. If you’re involved with machine learning, you’ve probably already seen a number of these images, along with others like them.

It’s important to realize that machine learning makes mistakes. Not only does it make mistakes, it always will make mistakes; unlike traditional programming, it’s impossible for any ML system to be perfect. I’ve called that “the paradox of machine learning.” And that fact requires us to treat ML with a certain amount of caution.


But we need to remember three other things:

  1. Humans make mistakes, too. It’s easy to look at a picture of a mushroom and say “that’s so obviously not a nail.” But all of us have been fooled at one time or another, possibly many times, by some visual object. And some of those ML misidentifications are mistakes I’d make. Find a picture of a harvestman (daddy longlegs) and ask yourself whether it couldn’t pass for a ladybug. Or ask whether you could really blame anyone for misidentifying the squirrel hidden in the grass as a frog. It’s important to approach ML’s limitations with at least some humility.
  2. ML mistakes are often completely different from human mistakes. When ML is wrong, it’s “really” wrong—really? Or do ML mistakes seem outlandish only because they’re different from the ones we’d make? ML mistakes often occur because systems lack context. When humans see a picture, we usually know what we’re supposed to look at. ML systems often mistake the background for the thing itself, so a bird feeder is identified as a bird, even if there isn’t a bird on it. Humans are very good at ignoring extraneous information.
  3. Humans are good at correcting mistakes. This is perhaps where ML systems and humans are most different. ML systems have trouble saying, “I don’t know.” More than that, they can’t say, “Oh, I see, that really isn’t a nail; it’s a mushroom.” They rarely get second chances.

The last point bears some thought. I don’t see why software can’t admit it made a mistake, particularly if it has a live video feed rather than a static image. As a system looks at something from different perspectives, it should be able to say “that thing I thought was a nail is really a mushroom.” It’s possible there are already systems that do this; the ability to correct mistaken judgments would be essential for an autonomous vehicle.
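One simple way a system could revise an earlier judgment over a video feed is to accumulate per-frame class probabilities and notice when the running favorite changes. The sketch below is a toy illustration, not any real system’s method; the frame probabilities and labels are invented, and a real pipeline would take them from a model’s softmax output on each frame.

```python
# Sketch: letting a classifier revise its judgment across video frames.
# The per-frame probabilities below are made up for illustration; a real
# system would get them from a model's output on each frame.

def fuse(frames):
    """Sum per-frame class probabilities and track when the top-ranked
    label changes, i.e., when the system 'changes its mind'."""
    totals = {}
    history = []
    for i, probs in enumerate(frames, start=1):
        for label, p in probs.items():
            totals[label] = totals.get(label, 0.0) + p
        best = max(totals, key=totals.get)
        if history and best != history[-1]:
            print(f"frame {i}: revising {history[-1]!r} -> {best!r}")
        history.append(best)
    return history[-1]

# Three views of a mushroom: the first frame is ambiguous,
# later frames make "mushroom" dominant.
frames = [
    {"nail": 0.6, "mushroom": 0.4},
    {"nail": 0.3, "mushroom": 0.7},
    {"nail": 0.2, "mushroom": 0.8},
]
final = fuse(frames)  # the fused verdict ends up as "mushroom"
```

Real systems would need something more careful than naive summing (tracking the object across frames, discounting stale evidence), but even this much gives the classifier a vocabulary for “I was wrong.”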

That ability is also essential for collaboration between humans and machine learning systems. Collaboration isn’t possible when one member of the team (or both) is an oracle that is never wrong, or can never admit it is wrong. When a system only presents a single answer that can’t be discussed, humans respond predictably. If they agree with the machine, the machine is useless because it didn’t tell them anything they didn’t already know. If they disagree, the machine is just wrong. And if the human decides what course of action to take (for example, a patient’s diagnosis and treatment), no one may ever find out whether the machine was right.

Part of the solution is exposing other possibilities: if the machine is classifying images, what classifications had high probabilities but were rejected? A machine might get bonus points for saying “why.” Explainability may never be one of ML’s strengths, but even neural networks can build a list of alternatives weighted by probabilities. How do we build an interface that exposes these alternatives and lets a human evaluate them? What kind of interface would let a human hold a discussion with an AI? An argument? I don’t mean a silly chatbot, like Siri; I mean a reasoned discussion about a situation that demands a decision. What would it mean for a human to convince a machine learning system that it’s wrong?
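Exposing the rejected alternatives is straightforward in principle: most classifiers already produce a score for every class, and a softmax turns those scores into a ranked, probability-weighted list. A minimal sketch, with invented labels and scores standing in for a real model’s output:

```python
# Sketch: surfacing a classifier's rejected alternatives instead of only
# its single top answer. Labels and logits are invented for illustration.
import math

def top_k(logits, labels, k=3):
    """Convert raw scores to probabilities (softmax) and return the k
    most likely labels with their probabilities."""
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda lp: -lp[1])
    return ranked[:k]

labels = ["pretzel", "mushroom", "nail", "frog"]
logits = [2.0, 1.6, 0.4, -1.0]
for label, p in top_k(logits, labels):
    print(f"{label}: {p:.2f}")
# An interface could show all three candidates and let the human
# pick "mushroom" even though "pretzel" scored highest.
```

The hard part isn’t computing this list; it’s designing an interface where a human can interrogate it.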

Adversarial images aren’t a problem; they’re an opportunity—and not just an opportunity to fix our classifiers. They’re an opportunity to explore new ways of interacting with AI. We need to move beyond the interfaces and experiences that have informed desktop apps, web apps, and mobile apps. We need to design for collaboration between machines and humans. That’s the big challenge facing AI designers.
