(Excerpted from the report What Is Artificial Intelligence?)
Defining artificial intelligence isn’t just difficult: it’s impossible, not least because we don’t really understand human intelligence. Paradoxically, advances in AI will do more to define what human intelligence isn’t than what artificial intelligence is.
What we mean by “intelligence” is a fundamental question. In a Radar post from 2014, Beau Cronin did an excellent job of summarizing the many definitions of AI. What we expect from AI depends critically on what we want the AI to do.
If we assume that AI must be embodied in hardware that’s capable of motion, such as a robot or an autonomous vehicle, we get a different set of criteria. We’re asking the computer to perform a poorly defined task (like driving to the store) under its own control. We can already build AI systems that do a better job of planning a route and driving than most humans do. The one accident in which one of Google’s autonomous vehicles was at fault occurred because the algorithms had been modified to drive more like a human, and to take risks that the AI system would not normally have taken.
We can define AI more simply by dispensing with the intricacies of conversational systems or autonomous robotic systems and saying that AI is solely about building systems that answer questions and solve problems. Systems that can answer questions and reason about complex logic are the “expert systems” that we’ve ...