Artificial intelligence: summoning the demon

We need to understand what our own intelligence is, in comparison to our artificial, not-quite intelligences.

By Mike Loukides
November 3, 2014
HAL9000 (source: Mike Haller, Flickr)

A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.

There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.

The problem with AI right now is that its achievements are greatly over-hyped. That’s not to say those achievements aren’t real, but they don’t mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I’m skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking “is there a face in this picture?” or “where is the face in this picture?” is very different from asking “what is in this picture?”) That’s a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that’s an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%.

What kinds of applications can you build from technologies that are only accurate 80% of the time, or even 97.5% of the time? Quite a few. You might build an application that creates dynamic travel guides from online photos. Or you might build an application that measures how long diners stay in a restaurant, how long it takes them to be served, whether they’re smiling, and other statistics. You might build an application that tries to identify who appears in your photos, as Facebook has. In all of these cases, an occasional error (or even a frequent error) isn’t a big deal. But you wouldn’t build, say, a face-recognition-based car alarm that was wrong 20% of the time — or even 2% of the time.
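To put rough numbers on that car-alarm example, here’s the back-of-the-envelope arithmetic in Python. The traffic figures are invented purely for illustration; the point is only how quickly even a small error rate compounds when a system makes many judgments a day.

```python
# Hypothetical figures, purely for illustration.
passersby_per_day = 200        # people who walk past the parked car each day
false_positive_rate = 0.02     # a system that is wrong "even 2% of the time"

false_alarms_per_day = passersby_per_day * false_positive_rate
false_alarms_per_year = false_alarms_per_day * 365

print(f"{false_alarms_per_day:.0f} false alarms per day")    # -> 4
print(f"{false_alarms_per_year:.0f} false alarms per year")  # -> 1460
```

Four spurious alarms a day is a non-event for a photo-tagging feature and disqualifying for a car alarm; the cost of each error, not the headline accuracy number, is what decides which applications are viable.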

Similarly, much has been made of Google’s self-driving cars. That’s a huge technological achievement. But Google has always made it very clear that their cars rely on the accuracy of their highly detailed street view. As Peter Norvig has said, it’s a hard problem to pick a traffic light out of a scene and determine whether it is red, yellow, or green; it is trivially easy to recognize the color of a traffic light that you already know is there. But keeping Google’s street view up to date isn’t simple. While the roads themselves change infrequently, towns frequently add stop signs and traffic lights. Dealing with these changes to the map is extremely difficult, and it’s only one of many challenges that remain to be solved: humans know how to interpret traffic cones, how to react to cars or humans behaving erratically, and what to do when the lane markings are covered by snow; the car doesn’t, yet. That ability to think like a human when something unexpected happens is what makes a self-driving car a “moonshot” project. Humans certainly don’t perform perfectly when the unexpected happens, but we’re surprisingly good at it.
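Norvig’s point about known locations can be made concrete with a small sketch. Everything below is hypothetical: the frame is synthetic, the bounding box stands in for what a detailed prior map would supply, and the color test is a crude channel comparison rather than anything like a production perception stack. Given the box, reading the color takes a few lines; finding the light without the box is the research problem.

```python
import numpy as np

def light_color_at_known_position(frame, box):
    """frame: H x W x 3 RGB array; box: (top, left, bottom, right) from the map."""
    top, left, bottom, right = box
    patch = frame[top:bottom, left:right].astype(float)
    r = patch[..., 0].mean()   # average red in the mapped region
    g = patch[..., 1].mean()   # average green in the mapped region
    if r > 1.5 * g:
        return "red"
    if g > 1.5 * r:
        return "green"
    return "yellow"            # red and green roughly balanced

# Synthetic frame: mostly dark, with a bright green patch exactly where the
# (hypothetical) map says the light is. Locating that patch *without* the map
# is the hard perception problem; reading its color once you know where to
# look is not.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100:120, 300:310] = (10, 250, 10)

print(light_color_at_known_position(frame, (100, 300, 120, 310)))  # -> green
```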

So, AI systems can do, with difficulty and partial accuracy, some of what humans do all the time without even thinking about it. I’d guess that we’re 20 to 50 years away from anything that’s more than a crude approximation to human intelligence. It’s not just that we need bigger and faster computers, which will be here sooner than we think. We don’t understand how human intelligence works at a fundamental level. (Though I wouldn’t assume that understanding the brain is a prerequisite for artificial intelligence.) That’s not a problem or a criticism; it’s just a statement of how difficult the problems are. And let’s not misunderstand the importance of what we’ve accomplished: this level of intelligence is already extremely useful. Computers don’t get tired, don’t get distracted, and don’t panic. (Well, not often.) They’re great for assisting or augmenting human intelligence, precisely because, as an assistant, 100% accuracy isn’t required. We’ve had cars with computer-assisted parking for more than a decade, and they’ve gotten quite good. Larry Page has talked about wanting Google search to be like the Star Trek computer, which can understand context and anticipate what the human wants. The humans remain firmly in control, though, whether we’re talking to the Star Trek computer or Google Now.

I’m not without concerns about the application of AI. First, I’m concerned about what happens when humans start relying on AI systems that really aren’t all that intelligent. AI researchers, in my experience, are fully aware of the limitations of their systems. But their customers aren’t. I’ve written about what happens when HR departments trust computer systems to screen resumes: you get some crude pattern matching that ends up rejecting many good candidates. Cathy O’Neil has written on several occasions about machine learning’s potential for dressing up prejudice as “science.”
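Here’s a toy version of the crude pattern matching described above. The job requirements and resumes are made up, and no real screening product is implied; the sketch only shows how exact keyword matching goes wrong.

```python
# Hypothetical job requirements for an illustrative keyword screen.
required_keywords = {"machine learning", "python", "kubernetes"}

def crude_screen(resume_text):
    """Reject the resume unless every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(kw in text for kw in required_keywords)

resumes = {
    "candidate_a": "10 years building ML pipelines; deep learning in PyTorch; ran k8s clusters",
    "candidate_b": "Machine learning, Python, Kubernetes",  # keyword-stuffed
}

for name, text in resumes.items():
    print(name, "passes" if crude_screen(text) else "rejected")

# candidate_a rejected -- "ML", "PyTorch", and "k8s" don't literally match
# candidate_b passes   -- exactly the failure mode described above
```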

The problem isn’t machine learning itself, but users who uncritically expect a machine to provide an oracular “answer,” and faulty models that are hidden from public view. In a not-yet-published paper, DJ Patil and Hilary Mason suggest that you search Google for “GPS” and “cliff”; you might be surprised at the number of people who have driven their cars off cliffs because the GPS told them to. I’m not surprised; a friend of mine owns a company that makes propellers for high-performance boats, and he’s told me similar stories about replacing the propellers of clients who have run their boats into islands.

David Ferrucci and the other IBMers who built Watson understand that Watson’s potential in medical diagnosis isn’t to have the last word, or to replace a human doctor. It’s to be part of the conversation, offering diagnostic possibilities that the doctor hasn’t considered, and the reasons one might accept (or reject) those diagnoses. That’s a healthy and potentially important step forward in medical treatment, but do the doctors using an automated service to help make diagnoses understand that? Does our profit-crazed health system understand that? When will your health insurance policy say “you can only consult a doctor after the AI has failed”? Or “Doctors are a thing of the past, and if the AI is wrong 10% of the time, that’s acceptable; after all, your doctor wasn’t right all the time, anyway”? The problem isn’t the tool; it’s the application of the tool. More specifically, the problem is forgetting that an assistive technology is assistive, and assuming that it can be a complete stand-in for a human.

Second, I’m concerned about what happens if consumer-facing researchers get discouraged and leave the field. Although that’s not likely now, it wouldn’t be the first time that AI was abandoned after a wave of hype. If Google, Facebook, and IBM give up on their “moonshot” AI projects, what will be left? I have a thesis (which may eventually become a Radar post) that a technology’s future has a lot to do with its origins. Nuclear reactors were developed to build bombs, and as a consequence, promising technologies like thorium reactors were abandoned. If you can’t make a bomb from it, what good is it?

If I’m right, what are the implications for AI? I’m thrilled that Google and Facebook are experimenting with deep learning, that Google is building autonomous vehicles, and that IBM is experimenting with Watson. I’m thrilled because I have no doubt that similar work is going on in other labs, in other places, that we know nothing about. I don’t want the future of AI to be shortchanged because researchers hidden in government labs choose not to investigate ideas that don’t have military potential. And we do need a discussion about the role of AI in our lives: what its limits are, which applications are OK, which are unnecessarily intrusive, and which are just creepy. That conversation will never happen when the research takes place behind locked doors.

At the end of a long, glowing report about the state of AI, Kevin Kelly makes the point that with every advance in AI, every time computers achieve something new (playing chess, playing Jeopardy, inventing new recipes, maybe next year playing Go), we redefine the meaning of our own human intelligence. That sounds funny; I’m certainly suspicious when the rules of the game are changed every time it appears to have been “won.” But who really wants to define human intelligence in terms of chess-playing ability? That definition leaves out most of what’s important in humanness.

Perhaps we need to understand what our own intelligence is, in comparison to our artificial, not-quite intelligences. And perhaps we will, as Kelly suggests, realize that we don’t really want “artificial intelligence.” After all, human intelligence includes the ability to be wrong, or to be evasive, as Turing himself recognized. We want “artificial smartness”: something to assist and extend our cognitive capabilities, rather than replace them.

That brings us back to “summoning the demon,” and the one story that’s an exception to the rule. In Goethe’s Faust, Faust is admitted to heaven: not because he was a good person, but because he never ceased striving, never became complacent, never stopped trying to figure out what it means to be human. At the start, Faust mocks Mephistopheles, saying “What can a poor devil give me? When has your kind ever understood a Human Spirit in its highest striving?” (lines 1176-7, my translation). When he makes the deal, it isn’t the typical “give me everything I want, and you can take my soul”; it’s “When I lie on my bed satisfied, let it be over…when I say to the Moment ‘Stay! You are so beautiful,’ you can haul me off in chains” (1691-1702). At the end of this massive play, Faust is almost satisfied; he’s building an earthly paradise for those who strive for freedom every day, and dies saying “In anticipation of that blessedness, I now enjoy that highest Moment” (11585-6), even quoting the terms of his deal.

So, who’s won the bet? The demon or the summoner? Mephistopheles certainly thinks he has, but the angels differ, and take Faust’s soul to heaven, saying “Whoever spends himself striving, him we can save” (11936-7). Faust may be enjoying the moment, but it’s still in anticipation of a paradise that he hasn’t built. Mephistopheles fails at luring Faust into complacency; instead, he becomes the driving force behind Faust’s striving, a comic figure who never understands that by trying to drag Faust to hell, he was pushing him toward humanity. If AI, even in its underdeveloped state, can serve this function for us, calling up that demon will be well worth it.
