The working relationship between AIs and humans isn’t master/slave

We need a new model for how AI systems and humans interact.

By Mike Loukides
January 23, 2018

A recent article claimed that humans don’t trust results from AI systems—specifically, IBM’s Watson. It gave the example of a cancer doctor planning a course of treatment. If the AI agrees with the doctor’s plan, well, that’s to be expected. If the AI disagrees with the doctor’s treatment, the doctor assumes that it’s “incompetent.” Whether their analysis was based on careful reasoning or on instinct, the doctors went with their own assumptions rather than with an opinion coming from outside. It seems like a situation where the AI can’t win.

This isn’t terribly surprising, for better or for worse. And, at least as the problem is stated, I don’t think it will change. Perhaps it shouldn’t, because we’re expecting the wrong thing from the AI.


Several years ago—shortly after Watson beat the Jeopardy champions—IBM invited me to an event where they showed off Watson’s capabilities. What impressed me at the demo wasn’t its ability to beat humans, but the fact that it could tell you why it came to a conclusion. While IBM hadn’t yet developed the user interface (which was irrelevant to Jeopardy), Watson could show the probability that each potential answer (sorry, each potential question) was correct, based on the facts that supported it. To me, that’s really where the magic happened. Seeing the rationale behind the result raised the possibility of having an intelligent conversation with an AI.

I don’t know whether IBM continued to develop this feature. Watson in 2017 certainly differs from the Watson that won Jeopardy. But the ability to expose the rationale behind a recommendation is certainly a key to the problem of trust, whether the application is medicine, agriculture, finance, or something else.
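To make “exposing the rationale” a little more concrete, here is a minimal sketch in Python of what such a recommendation could look like. This is not IBM’s API; the class names and the medical example are hypothetical. The point is only the shape of the output: ranked candidate answers, each carrying a confidence score and the facts that support it, rather than a single bare verdict.

    from dataclasses import dataclass, field

    @dataclass
    class Candidate:
        """One possible answer, plus the evidence behind it."""
        answer: str
        confidence: float  # estimated probability that this answer is correct
        evidence: list[str] = field(default_factory=list)  # supporting facts

    @dataclass
    class Recommendation:
        """A recommendation that can be discussed, not just obeyed."""
        candidates: list[Candidate]

        def explain(self) -> str:
            """List every candidate, best first, with its supporting facts."""
            lines = []
            for c in sorted(self.candidates, key=lambda c: c.confidence, reverse=True):
                lines.append(f"{c.answer}: {c.confidence:.0%}")
                lines.extend(f"  - {fact}" for fact in c.evidence)
            return "\n".join(lines)

    # Hypothetical example: two candidate treatment plans with their rationale.
    rec = Recommendation(candidates=[
        Candidate("Treatment A", 0.62,
                  ["matches the standard protocol for this stage",
                   "no interaction with the patient's current medication"]),
        Candidate("Treatment B", 0.38,
                  ["a recent trial showed benefit for this tumor subtype"]),
    ])
    print(rec.explain())

A recommendation structured this way gives the doctor something to argue with: the evidence list is the starting point for a conversation, not a take-it-or-leave-it answer.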

Let’s change the doctor’s problem slightly. Instead of an AI, imagine another doctor came in on a consult and offered a differing opinion. Would that doctor say, “You’re wrong; here’s what I think; I’m not going to tell you why; if you disagree, that’s up to you”? I don’t think so—and if that did happen, I wouldn’t be at all surprised if the first doctor stuck with the original treatment plan and dismissed the consulting doctor as incompetent. (And as a jerk.)

But that’s precisely the position in which we put our AI systems. We expect them to be oracles. We expect them to make a recommendation, and we expect the doctor to implement that recommendation. That’s not how it works with human doctors, and that’s not how it should work with AI.

In real life, I would expect the doctors to have a discussion about what the treatment should be, comparing their thought processes in arriving at their recommendations. And I’d expect them to arrive at a better result. The irony is that, while you can’t spend a day without reading several articles about the problem of the inscrutability of AI, IBM had this problem solved back in the days of Jeopardy. (And yes, it’s an easier problem to solve with a system like the original Watson, and a much tougher problem for deep learning.)

The issue in medicine isn’t whether treatment A is better than treatment B; it’s all about the underlying rationale. Did the second doctor take into account factors that the first didn’t notice? Does the first doctor have a better understanding of the patient’s history? Does something in the patient suggest that the problem isn’t what it seems, and that a completely different diagnosis might be correct? That’s a discussion that human doctors can have with each other, and that they might be able to have with a machine.

I’m not writing another article saying, “we need AI that can explain itself.” We already know that. We need something different: we need a new model for how AI systems and humans interact. Whether we’re talking about doctors, lawyers, engineers, Go players, or taxi drivers, we shouldn’t expect AI systems to give us unchallengeable answers ex silico. We shouldn’t be told that we need to “trust AI.” What’s important is the conversation. AI that “explains itself” is only the first step. Humans also need to understand that the AI system’s answer isn’t absolute or final; it’s just another opinion, another consultant, and that the best outcomes are most likely to come from thinking about the answers and the reasoning behind them. The working relationship between AIs and humans isn’t master/slave; it’s between collaborators. Understanding that relationship properly may be more difficult than the technical challenge of exposing an AI system’s reasoning.

AI isn’t about supplanting humans; it’s about assisting us, making us more capable. That will happen through conversation. That conversation can only happen when we reject the sci-fi notion that AI systems are oracles, and when we learn to challenge them and accept them as collaborators, as we would any other human.

That’s not a technical problem. It’s a cultural one.
