Who, me? They warned you about me?

Thoughts on "We are the people they warned you about."

By Mike Loukides
December 7, 2017
Caution sign on ski slope (source: D Coetzee on Flickr)

Chris Anderson recently published “We are the people they warned you
about,” a two-part article about the development of killer drones. Here’s the problem he’s wrestling with: “I’m an enabler … but I have no idea what I should do differently.”

That’s a good question to ask. It’s a question everyone in technology needs to ask, not just people who work on drones. It’s related to the problem of ethics at scale: almost everything we do has consequences. Some of those consequences are good, some are bad. Our ability to multiply our actions to internet scale means we have to think about ethics in a different way.


The second part of Anderson’s article gets personal. He talks about writing code for swarming behavior after reading Kill Decision, a science fiction novel about swarming robots running amok. And he struggles with three issues. First (I’m very loosely paraphrasing Anderson’s words), “I have no idea how to write code that can’t run amok; I don’t even know what that means.” Second, “If I don’t write this code, someone else will—and indeed, others have.” And third, “Fine, but my code (and the other open source code) doesn’t exhibit bad behavior—which is what the narrator of Kill Decision would have said, right up to the point where the novel’s drones became lethal.”

How do we protect ourselves, and others, from the technology we invent? Anderson tries to argue against regulatory solutions by saying that swarming behavior is basically math; regulating swarms is essentially regulating math, and that makes no sense. As Anderson points out, Ben Hamner, CTO of Kaggle, tweeted that regulating artificial intelligence essentially means regulating matrix multiplication and derivatives. I like the feel of this reductio ad absurdum, but neither Anderson nor I buy it—if you push far enough, it can be applied to anything. The FCC regulates electromagnetic fields; the FAA regulates the Bernoulli effect. We can regulate the effects or applications of technology, though even that’s problematic. We can require AI systems to be “fair” (if we can agree on what “fair” means); we can require that drones not attack people (though that might mean regulating emergent and unpredictable behavior).

A bigger issue is that we can only regulate agents that are willing to be regulated. A law against weaponized drones doesn’t stop the military from developing them. It doesn’t even prevent me from building one in my basement; any punishment for violation comes after the fact. (For that matter, regulation rarely, if ever, happens before the technology has been abused.) Likewise, laws don’t prevent governments or businesses from abusing data. As any speeder knows, it’s only a violation if you get caught.

A better point is that, whether or not we regulate, we can’t prevent inventions from being invented, and once invented, they can’t be put back into the box. The myth of Pandora’s box is powerful and resonant in the 21st century. The box is always opened; indeed, it’s always already open. The desire to find the box and then open it is what drives invention in the first place.

Since our many Pandora’s boxes are inevitably opened, and since we can’t predict (or even mitigate) the consequences of opening them in advance, perhaps we should look at the conditions under which those boxes are opened. The application of any technology is determined by the context in which it was invented. Part of the reason we’re so uncomfortable with nuclear energy is that it has been the domain of the military. A large part of the reason we don’t have thorium reactors, which can’t melt down, is that they aren’t useful if you want to make bombs.

If this is so, perhaps the solution is opening the box in an environment where it does the least harm. Paradoxically, that means opening the box in public, not in private. My claim is that putting an invention into a public space inevitably makes that invention safer. Military research in many countries is no doubt building autonomous killer drones already. This being the case, does developing open source drone software make us more or less safe? When invention takes place in public, we (the public) know that it exists. We can become aware of the risks; we have some control over the quality of the code; just as many eyes can find the bugs, many minds can think about the consequences. And many minds can think about how to defend against those consequences.

That argument isn’t as strong as I’d like. We can make it stronger by expanding our concept of an “invention” to include everything that makes the invention work, not just the code. Cathy O’Neil has frequently written about the danger of closed, opaque data models, most recently in Weapons of Math Destruction. Openness and safety are allies. Regulation is a useful tool, though not as powerful a tool as we’d like to think.

Regulation or not, we won’t prevent the technology from being invented. But by inventing in public, we give inventions the scrutiny and critical examination they need. I think Anderson would like to make this point, but isn’t really comfortable with it. I share that discomfort (whether it’s his or not), but I think it’s unavoidable. That may be all the safety we get.
