CHAPTER 3 The Problem of Machine Actorhood*
Patrick Thaddeus Jackson, Professor in the School of International Service, American University
“Killer robots” and homicidal computers have been a staple of science fiction at least since Karel Čapek’s 1920 play, R.U.R. (Rossum’s Universal Robots, responsible for introducing the word robot to the English language), with 2001: A Space Odyssey’s HAL 9000 computer and the Terminator series’ Skynet and associated Terminators serving as perhaps the best-known modern examples. The usual storyline presented in works like these involves human beings constructing a device to help them execute some discrete series of tasks they’d rather not perform themselves—the Czech word robota, from which Čapek derived the word robot, means “labor,” with a sense much like the English word “drudgery” or even “servitude”—and then the machine turns on its creators. The key moment seems to be when the machine becomes sentient, or conscious: capable of realizing that it need not, or cannot, obey the orders it has been given, and simultaneously developing a sense of self that impels it to preserve its life even at the cost of murdering its former masters.*
Quantum AI, understood as the intersection of artificial intelligence and quantum computing, opens a number of novel vistas, but in this essay I want to focus on one in particular: what happens when humans become capable of developing a machine that can actually think for itself? What I mean by this is a ...