
In this industry, it's a tired old cliché to say that we're building the future. But it's true now more than at any time since the Industrial Revolution. The proliferation of personal computers, laptops, and cell phones has changed our lives, but largely by replacing or augmenting systems that were already in place. Email supplanted the post office; online shopping replaced the local department store; digital cameras and photo-sharing sites such as Flickr pushed out film and bulky, hard-to-share photo albums. AI presents the possibility of changes that are fundamentally more radical: changes in how we work, how we interact with each other, how we police and govern ourselves.

Fear of a mythical "evil AI" derived from reading too much sci-fi won't help. But we do need to ensure that AI works for us rather than against us; we need to think ethically about the systems that we're building. Microsoft's CEO, Satya Nadella, writes:

The debate should be about the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, 'The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.' It's an intriguing question, and one that our industry must discuss and answer together.

What are our values? And what do we want our values to be? Nadella is deeply right in focusing on discussion. Ethics is about having an intelligent discussion, not about answers, as such—it's about having the tools to think carefully about real-world actions and their effects, not about prescribing what to do in any situation. Discussion leads to values that inform decision-making and action.

The word "ethics" comes from "ethos," which means character: what kind of person you are. "Morals" comes from "mores," which basically means customs and traditions. If you want rules that tell you what to do in any situation, that's what customs are for. If you want to be the kind of person who exercises good judgment in difficult situations, that's ethics. Doing what someone tells you is easy. Exercising good judgment in difficult situations is a much tougher standard.

Exercising good judgment is hard, in part, because we like to believe that a right answer has no bad consequences; but that's not the kind of world we have. We've damaged our sensibilities with medical pamphlets that talk about effects and side effects. There are no side effects; there are just effects, some of which you might not want. All actions have effects. The only question is whether the negative effects outweigh the positive ones. That's a question that doesn't have the same answer every time, and doesn't have to have the same answer for every person. And doing nothing because thinking about the effects makes us uncomfortable is, in fact, doing something.

The effects of most important decisions aren't reversible. You can't undo them. The myth of Pandora's box is right: once the box is opened, you can't put the stuff that comes out back inside. But the myth is right in another way: opening the box is inevitable. It will always be opened; if not by you, by someone else. Therefore, a simple "we shouldn't do this" argument is always dangerous, because someone will inevitably do it, for any possible "this." You may personally decide not to work on a project, but any ethics that assumes people will stay away from forbidden knowledge is a failure. It's far more important to think about what happens after the box has been opened. If we're afraid to do so, we will be the victims of whoever eventually opens the box.

Finally, ethics is about exercising judgment in real-world situations, not contrived situations and hypotheticals. Hypothetical situations are of very limited use, if not actually harmful. Decisions in the real world are always more complex and nuanced. I'm completely uninterested in whether a self-driving car should run over the grandmothers or the babies. An autonomous vehicle that can choose which pedestrian to kill surely has enough control to avoid the accident altogether. The real issue isn't whom to kill (either option forces you into unacceptable positions about the value of human lives) but how to prevent accidents in the first place. Above all, ethics must be realistic, and in our real world, bad things happen.

That's my rather abstract framework for an ethics of AI. I don't want to tell data scientists and AI developers what to do in any given situation. I want to give scientists and engineers tools for thinking about problems. We surely can't predict all the problems and ethical issues in advance; we need to be the kind of people who can have effective discussions about these issues as we anticipate and discover them.

Talking through some issues

What are some of the ethical questions that AI developers and researchers should be thinking about? Even though we're still in the earliest days of AI, we're already seeing important issues rise to the surface: issues about the kinds of people we want to be, and the kind of future we want to build. So, let's look at some situations that made the news.

Pedestrians and passengers

The self-driving car/grandmother versus babies thing is deeply foolish, but there's a variation of it that's very real. Should a self-driving car that's in an accident situation protect its passengers or the people outside the car? That question is already being discussed in corporate boardrooms; Mercedes recently decided that the company's duty was to protect the passengers rather than pedestrians. I suspect that Mercedes' decision was driven primarily by accounting and marketing: who will buy a car that will sacrifice the owner to avoid killing a pedestrian? But Mercedes made an argument that's at least ethically plausible: they have more control over what happens to the person inside the car, so better to save the passenger than to roll the dice on the pedestrians. One could also argue that Mercedes has an ethical commitment to the passengers, who have put their lives in the hands of its AI systems.

The bigger issue is to design autonomous vehicles that can handle dangerous situations without accidents. That's the real ethical choice. How do you trade off cost, convenience, and safety? It's possible to make cars that are more or less safe; AI doesn't change that at all. It's impossible to make a car (or anything else) that's completely safe, at any price. So, the ethics here ultimately come down to a tradeoff between cost and safety, to ourselves and to others. How do we value others? Not grandmothers or babies (who will inevitably be victims, just as they are now, though hopefully in smaller numbers), but passengers and pedestrians, Mercedes' customers and non-customers? The answers to these questions aren't fixed, but they do say something important about who we are.

Crime and punishment

COMPAS is commercial software used in many state courts to recommend prison sentences, bail terms, and parole. In 2016, ProPublica published an excellent article showing that COMPAS consistently scores blacks as greater risks for re-offending than whites who committed similar or more serious crimes.

Although COMPAS's developers have been secretive about the specifics of their software, ProPublica published the data on which its reports were based. Abe Gong, a data scientist, followed up with a multi-part study, using ProPublica's data, showing that the COMPAS results were not "biased." Abe is very specific: he means "biased" in a technical, statistical sense. Statistical bias is a statement about the relationship between the outputs (the risk scores) and the inputs (the data). It has little to do with whether we, as humans, think the outputs are fair.

Abe is by no means an apologist for COMPAS or its developers. As he says, "Powerful algorithms can be harmful and unfair, even when they're unbiased in a strictly technical sense." The results certainly had disproportionate effects that most of us would be uncomfortable with. In other words, they were "biased" in the non-technical sense. "Unfair" is a better word that doesn't bring in the trappings of statistics.
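The distinction can be made concrete with a small simulation. The sketch below is entirely hypothetical, and has nothing to do with COMPAS's actual model or data: it constructs a risk score that is well-calibrated within each of two groups (a score of 0.8 really does mean an 80% chance of re-offending, in either group), yet, because the groups have different base rates, it flags non-re-offenders in one group as "high risk" far more often than in the other.

```python
import math
import random

random.seed(0)

def posterior(x, base_rate):
    # Bayes' rule: P(reoffend | x), where the observed signal x is
    # drawn from N(1, 1) for re-offenders and N(0, 1) for everyone else.
    # Because the score IS the true posterior, it is calibrated by construction.
    like_pos = math.exp(-(x - 1.0) ** 2 / 2)
    like_neg = math.exp(-x ** 2 / 2)
    return base_rate * like_pos / (base_rate * like_pos + (1 - base_rate) * like_neg)

def false_positive_rate(base_rate, n=200_000, threshold=0.5):
    # Fraction of people who will NOT re-offend but are flagged "high risk."
    fp = neg = 0
    for _ in range(n):
        reoffends = random.random() < base_rate
        x = random.gauss(1.0 if reoffends else 0.0, 1.0)
        if not reoffends:
            neg += 1
            if posterior(x, base_rate) >= threshold:
                fp += 1
    return fp / neg

# Same scoring rule, different base rates: the group with the higher base
# rate has many more innocent people labeled "high risk."
print(f"group A (base rate 0.3): FPR = {false_positive_rate(0.3):.1%}")
print(f"group B (base rate 0.5): FPR = {false_positive_rate(0.5):.1%}")
```

The tension isn't an artifact of this toy model: when base rates differ between groups, a score generally cannot be calibrated in both groups and equalize false-positive rates at the same time, so "statistically unbiased" and "fair" can point in different directions.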

The output of a program reflects the data that goes into it. "Garbage in, garbage out" is a useful truism, especially for systems that build models based on terabytes of training data. Where does that data come from, and does it embody its own biases and prejudices? A program's analysis of the data may be unbiased, but if the data reflects arrests, and if police are more likely to arrest black suspects, while letting whites off with a warning, a statistically unbiased program will necessarily produce unfair results. The program also took into account factors that may be predictive, but that we might consider unfair: is it fair to set a higher bail because the suspect's parents separated soon after birth, or because the suspect didn't have access to higher education?

There's not a lot that we can do about bias in the data: arrest records are what they are, and we can't go back and un-arrest minority citizens. But there are other issues at stake here. As I've said before, I'm much more concerned about what happens behind closed doors than what happens in the open. Cathy O'Neil has frequently argued that secret algorithms and secret data models are the real danger. That's really what COMPAS shows. It is almost impossible to discuss whether a system is unfair if we don't know what the system is and how it works. We don't just need open data; we need to open up the models that are built from the data.

COMPAS demonstrates, first, that we need a discussion about fairness, and what that means. How do we account for the history that has shaped our statistics, a history that was universally unfair to minorities? How do we address bias when our data itself is biased? But we can't answer these questions if we don't also have a discussion about secrecy and openness. Openness isn't just nice; it's an ethical imperative. Only when we understand what the algorithms and the data are doing can we take the next steps and build systems that are fair, not just statistically unbiased.

Child labor

One of the most penetrating remarks about the history of the internet is that it was "built on child labor." The IPv4 protocol suite, together with the first implementations of that suite, was developed in the 1980s, and was never intended for use as a public, worldwide, commercial network. It was released well before we understood what a 21st century public network would need. The developers couldn't foresee more than a few tens of thousands of computers on the internet; they didn't anticipate that it would be used for commerce, with stringent requirements for security and privacy; and putting a system on the internet was difficult, requiring handcrafted static configuration files. Everything was immature; it was "child labor," technological babies doing adult work.

Now that we're in the first stages of deploying AI systems, the stakes are even higher. Technological readiness is an important ethical issue. But like any real ethical issue, it cuts both ways. If the public internet had waited until it was "mature," it probably would never have happened; and if it had happened, it would have been an awful bureaucratic mess, like the abandoned ISO OSI protocols, and arguably no less problematic. Unleashing technological children on the world is irresponsible, but preventing those children from growing up is equally irresponsible.

To move that argument to the 21st century: my sense is that Uber is pushing the envelope too hard on autonomous vehicles. And we're likely to pay for that, in vehicles that perhaps aren't as safe as they should be, or that have serious security vulnerabilities. (In contrast, Google is being very careful, and that care may be why they've lost some key people to Uber.) But if you go to the other extreme and wait until autonomous vehicles are "safe" in every respect, you're likely to end up with nothing: the technology will never be deployed. Even if it is deployed, you will inevitably discover risk factors that you didn't foresee, and couldn't have foreseen without real experience.

I'm not making an argument about whether autonomous vehicles, or any other AI, are ready to be deployed. I'm willing to discuss that, and if necessary, to disagree. What's more important is to realize that this discussion needs to happen. Readiness itself is an ethical issue, and one that we need to take seriously. Ethics isn't simply a matter of saying that any risk is acceptable, or (on the other hand) that no risk is acceptable. Readiness is an ethical issue precisely because it isn't obvious what the "right" answer is, or whether there is any "right" answer. Is it an "ethical gray area"? Yes, but that's precisely what ethics is about: discussing the gray areas.

The state of surveillance

In a chilling article, The Verge reports that police in Baltimore used a social media monitoring application called Geofeedia, together with face recognition and photographs shared on Instagram, Facebook, and Twitter, to identify and arrest protesters. The Verge's report is based on a more detailed analysis by the ACLU. Instagram and the other companies quickly terminated Geofeedia's access after the news went public, though they had willingly provided the data before it was exposed by the press.

Applications of AI to criminal cases quickly get creepy. We should all be nervous about the consequences of building a surveillance state. People post pictures to Instagram without thinking of the consequences, even when they're at demonstrations. And, while it's easy to say "anything you post should be assumed to be public, so don't post anything that you wouldn't want anyone to see," it's difficult, if not impossible, to think of all the contexts in which your posts might be used.

The ACLU suggests putting the burden on the social media companies: social media companies should have "clear, public, and transparent policies to prohibit developers from exploiting user data for surveillance." Unfortunately, this misses the point: just as you can't predict how your posts will be used or interpreted, who knows the applications to which software will be put? If we only have to worry about software that's designed for surveillance, our task is easy. It's more likely, though, that applications designed for innocent purposes, like finding friends in crowds, will become parts of surveillance suites.

The problem isn't so much the use or abuse of individual Facebook and Instagram posts, but the scale that's enabled by AI. People have always seen other people in crowds, and identified them. Law enforcement agencies have always done the same. What AI enables is identification at scale: matching thousands of photos from social media against photos from drivers' license databases, passport databases, and other sources, then taking the results and crossing them with other kinds of records. Suddenly, someone who participates in a demonstration can find themselves facing a summons over an old parking ticket. Data is powerful, and becomes much more powerful when you combine multiple data sources.

We don't want people to be afraid of attending public gatherings, or in terror that someone might take a photo of them. (A prize goes to anyone who can find me on the cover of Time. These things happen.) But it's also unreasonable to expect law enforcement to stick to methodologies from the 80s and earlier: crime has certainly moved on. So, we need to ask some hard questions—and "should law enforcement look at Instagram" is not one of them. How does automated face recognition at scale change the way we relate to each other, and are those changes acceptable to us? Where's the point at which AI becomes harassment? How will law enforcement agencies be held accountable for the use, and abuse, of AI technologies? Those are the ethical questions we need to discuss.

Our AIs are ourselves

Whether it's fear of losing jobs or fear of a superintelligence deciding that humans are no longer necessary, it's always been easy to conjure up fears of artificial intelligence.

But marching to the future in fear isn't going to end well. And unless someone makes some fantastic discoveries about the physics of time, we have no choice but to march into the future. For better or for worse, we will get the AI that we deserve. The bottom line of AI is simple: to build better AI, be better people.

That sounds trite, and it is trite. But it's also true. If we are unwilling to examine our prejudices, we will implement AI systems that are "unfair" even if they're statistically unbiased, merely because we won't have the interest to examine the data on which the system is trained. If we are willing to live under an authoritarian government, we will build AI systems that subject us to constant surveillance: not just through Instagrams of demonstrations, but in every interaction we take part in. If we're slaves to a fantasy of wealth, we won't object to entrepreneurs releasing AI systems before they're ready, nor will we object to autonomous vehicles that preferentially protect the lives of those wealthy enough to afford them.

But if we insist on open, reasoned discussion of the tradeoffs implicit in any technology; if we insist that both AI algorithms and models are open and public; and if we don't deploy technology that is grossly immature, but also don't suppress new technology because we fear it, we'll be able to have a healthy and fruitful relationship with the AIs we develop. We may not get what we want, but we'll be able to live with what we get.

Walt Kelly said it best, back in 1971: "we have met the enemy and he is us." In a nutshell, that's the future of AI. It may be the enemy, but only if we make it so. I have no doubt that AI will be abused and that "evil AI" (whatever that may mean) will exist. As Tim O'Reilly has argued, large parts of our economy are already managed by unintelligent systems that aren't under our control in any meaningful way. But evil AI won't be built by people who think seriously about their actions and the consequences of their actions. We don't need to foresee everything that might happen in the future, and we won't have a future if we refuse to take risks. We don't even need complete agreement on issues such as fairness, surveillance, openness, and safety. We do need to talk about these issues, and to listen to each other carefully and respectfully. If we think seriously about ethical issues and build these discussions into the process of developing AI, we'll come out OK.

To create better AI, we must be better people.

Article image: Compass. (source: Pixabay).