Why we need to think differently about AI

General intelligence or creativity can only be properly imagined if we peel away the layers of abstraction.

By Mike Loukides
August 8, 2018

While AI researchers are racking up ever more impressive successes, in many respects AI seems to be stalled. Significant as the successes are, they’re all around solving specific, well-defined problems. It isn’t surprising that AI or ML can solve such problems. What would be more surprising, and what seems no closer than it was a decade ago, would be steps in the direction of general intelligence or creativity.

How do we get to the next step? When a problem can’t be solved, it’s often because you’re trying to solve the wrong problem. If that’s the situation we find ourselves in, what’s the right problem? How do we set about asking a different set of questions?


I am impressed by Ellen Ullman’s critique of abstraction in her book Life in Code, and by the way that abstraction, such a critical concept in computing, has permeated our notion of artificial intelligence. One way to summarize the current state of artificial intelligence and machine learning is as a technique for automating abstraction. The myriad parameters in a model represent a particular abstraction, a particular way of compressing the complexity of meatspace into rules that a machine can follow.
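
To make that point concrete, here is a minimal sketch of "automating abstraction." It assumes scikit-learn, an off-the-shelf dataset, and a small decision tree, none of which appear in the original argument; the point is only that a few learned thresholds come to stand in for the messy variety of real flowers.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# The fitted model is the abstraction: a handful of learned thresholds
# standing in for the variety of actual flowers in the training data.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(iris.data, iris.target)

# Print the "rules that a machine can follow" that the training data
# has been compressed into.
print(export_text(model, feature_names=list(iris.feature_names)))
```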

Those abstractions have led us to think about AI in inappropriate ways. We congratulate ourselves when we build systems that can identify images with greater accuracy than a human, but we often forget that those systems give incorrect, and possibly even harmful, answers. Unlike humans, they don’t know how to say, “I don’t know what that is.” Yes, humans can misidentify things, too. But we can also say, “I don’t know,” as Pete Warden has pointed out. And frequently, we do. Whether it represents cluelessness or wonder, “I don’t know” is what abstraction leaves out—at least in part because it’s difficult to represent.
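
A toy sketch of the "I don't know" problem, with labels, numbers, and cutoff invented purely for illustration: a softmax classifier produces a label for any input, even random noise, and the confidence threshold bolted on afterward is a crude workaround rather than genuine uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = ["cat", "dog", "parrot"]
logits = rng.normal(size=len(labels))          # scores for an input of pure noise
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: always sums to 1

prediction = labels[int(np.argmax(probs))]     # a label comes out no matter what
print("forced answer:", prediction)

THRESHOLD = 0.9                                # arbitrary cutoff, not real uncertainty
answer = prediction if probs.max() > THRESHOLD else "I don't know"
print("with abstention:", answer)
```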

In the Meaningness blog, David Chapman suggests that one problem with AI is that we don’t know how to evaluate progress. AI falls into the gap between five disciplines—science, engineering, math, philosophy, and design—plus an ill-defined sixth: spectacle. Briefly, “spectacle” means “gives good demos.” Science means developing accurate, predictive models; engineering means developing useful applications of those models.

Chapman points out that we only have very fuzzy ideas about what our own “intelligence” means. We don’t have the abstractions—either as scientific models or pragmatic engineering results, and certainly not as mathematical theorems—to understand intelligence. And since we don’t really know what intelligence is, we tend to say, “well, that sort of looks intelligent. We don’t know exactly what it’s doing or why, but we don’t understand our own intelligence, either. So, good enough.” And that isn’t good enough. That’s just good enough to deceive ourselves and, in the process, prevent us from understanding what we can really do. But if we don’t have appropriate abstractions, what else can we do but watch the demos?

So, how do we make progress? After all, our metrics for progress are, in essence, abstractions of abstractions. If our abstractions are the problem, then second-order abstractions to measure our progress are not going to point us in the right direction. I’m making a difficult problem more difficult, rather than less. But also: when you’re up against a wall, the most difficult path may be the only one to take. Is there a way to conceive of AI that is not an abstraction engine?

Kathryn Hume, in “Artificial Intelligence and the Fall of Eve,” writes about the concept of the “Great Chain of Being”: the Aristotelian idea that all beings, from rocks up to God, are linked in a hierarchy of intelligence. It’s also a hierarchy of abstraction. In Paradise Lost, John Milton’s Eve is committing a sin against that order, but in doing so, she’s also committing a sin against abstraction. The serpent tempts her to move up on the chain, to usurp a higher level of rationality (and hence abstraction) than she was given.

Hume’s radical suggestion is that Eve didn’t go far enough: the opportunity isn’t to move up in the Great Chain of Being, but to move beyond it. “Why not forget gods and angels and, instead, recognize these abstract precipitates as the byproducts of cognition?” What kinds of AI could we build if we weren’t obsessed with abstraction, and could (going back to Ullman) think about the organic, the live, the squishy? From the standpoint of science, engineering, and mathematics (and maybe also philosophy and design), that’s heresy. As it was for Eve. But where will that heresy lead us?

The phrase “Big data is people” has gained currency in the discussion of data ethics. In this context, it’s important because it’s a warning against abstraction. Data abstracts people: it becomes a proxy for you and your concerns. Forgetting that data is people is dangerous in many ways, not least because these abstractions are often the basis for decisions, ranging from loans and mortgages to bail and prison sentences. Training data is inherently historical and reflects a history of discrimination and bias. That history is inevitably reflected when our automated abstraction engines do their job. Building systems that are fair requires transcending, or rejecting, the way data abstracts people. And it requires thinking forward about a future we would like to build—not just a future that’s a projection of our abstracted and unjust past.

I’ve written a couple of times about AI, art, and music. On one level, music theory is a massive set of very formal abstractions for understanding Western music, but great advances in music take place when a musician breaks all those abstractions. If an AI system can compose significant music, as opposed to “elevator music,” it doesn’t just have to learn the abstractions; it has to learn how to struggle against them and break them. You can train an AI on the body of music written before 2000, but you can’t then expect it to come up with something new. It’s significant that this is exactly the same problem we have with training an AI to produce results that are “fair.”

The same is arguably true for literature. Is a poem just a cleverly arranged sequence of words, or is poetry important to us because of the human sensibility behind it? But the problem goes deeper. The notion that a poem is somehow about an author’s feelings only dates back to the late 18th century. We don’t hesitate to read it into much earlier works, though it’s not something those authors would have recognized. So we have to ask: what makes Chaucer a great storyteller, when most of his stories were already floating around in the literary tradition? What makes Shakespeare a great playwright, when most of his plays relied heavily on other sources? It’s not the creation of the stories themselves, but something to do with the way they play against tradition, politics, and received ideas. Can that be represented as a set of abstractions? Maybe, but my instinct tells me that abstraction will give us little insight, though it may give us great poetry for our greeting cards.

I’ve talked with people recently who are doing amazing work using AI to attempt to understand animal cognition and language. There’s no shortage of animals that demonstrate “intelligence”: Alex the parrot, Koko the gorilla, and so on. One problem with our search for animal intelligence is that we ask animals to be intelligent about things that humans are intelligent about. We ask them to be competent humans, but humans would almost certainly be terribly incompetent parrots or gorillas. We have no idea what a parrot thinks about the feel of air on its wings. We do know that they’re capable of abstraction, but do our abstractions map to theirs? Probably not—and if not, is our drive to abstraction helping, or getting in the way?

What else could we imagine if our desire for abstraction didn’t get in the way? The human mind’s ability to form abstractions is certainly one of our defining characteristics. For better or worse (and often for worse), we’re great at sorting things into categories. But we also do other things that aren’t modeled by our AI abstractions. We forget, and “forgetting” may be much more important to our mental abilities than we think; we form models with minimal data; we change our minds, sometimes freely, sometimes with great reluctance; we get bored; we find things “interesting” or “beautiful” for no reason other than that they please us. And we make mistakes—as does AI, though for different reasons. When we think about a future AI, should we be thinking about these capabilities?

If you go back to the grandparent of all AI metrics, Turing’s “Imitation Game,” and read his paper carefully, you find that Turing’s computer, playing a game with a human interlocutor, doesn’t always give the correct answer. A human talking to an oracle that is always right would have every reason to believe that the oracle wasn’t human. Even at the beginning of computing and AI, there’s a recognition that our abstractions (like “an artificial intelligence answers questions correctly”) are heading us in the wrong direction.

Perhaps those errors are where the real value lies in our quest for a richer AI. It’s fundamental that any trained system, whether you call it machine learning or artificial intelligence, will make errors. If you’ve trained a system so that it performs perfectly on training data, you have almost certainly overfitted, and it likely will perform very badly on real-world data. But if we don’t expect the system to be perfect on the training data, we clearly can’t expect it to perform better on real-world data. Let’s ask a more provocative question: what if the erroneous results are more important than the correct ones? That wouldn’t be the case if the AI is designed to detect disease in tomatoes. But if the AI is attempting to create interesting music, those might be precisely the results we need to look at. Not “here’s another piano piece that sounds sort of like Mozart, Beethoven, and Brahms,” but “here’s a piece that breaks the rules and makes you rethink how music is structured.” “Here’s a story that’s told in radical ways that Shakespeare could never have imagined.”
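
For readers who want the overfitting point spelled out, here is a hedged sketch. It assumes scikit-learn and a synthetic, deliberately noisy dataset, neither of which comes from the article: an unconstrained model scores perfectly on its training data and noticeably worse on data it has never seen.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Deliberately noisy synthetic data: 20% of labels are flipped at random.
X, y = make_classification(n_samples=400, n_features=20, flip_y=0.2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree is allowed to memorize the training set.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))  # essentially 1.0
print("held-out accuracy:", model.score(X_test, y_test))    # noticeably lower
```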

I’m afraid I’ve made Chapman’s compelling problem of evaluating progress worse. But this is the question we need to ask: what could we build if we stepped away from abstraction? As Hume suggests, Eve could have done so much more than aspire to a higher place on the hierarchy; she could have rejected the hierarchy altogether. If we tried to build our AIs along those lines, would it amaze us or scare us? Do we even know how to think about such software?

No one said it would be easy.
