Chapter 6. Rules and Rationality
Christof Wolf Brenner
In Isaac Asimov’s famous science fiction stories, a hierarchical set of laws serves as the centerpiece for ensuring the ethical behavior of artificial moral agents. These robots, part computer and part machine, can efficiently handle complex tasks that would otherwise require human-level minds.
Asimov argues that his ruleset is the only suitable foundation for interaction between rational human beings and robots that adapt and flexibly choose their own course of action. Today, almost 80 years after the first iteration of the laws was devised in 1942, die-hard fans still argue that Asimov’s laws are sufficient to guide moral decision-making. However, a look at the ruleset as Asimov finalized it in 1985 makes clear that these laws, applied on their own, might not produce what we would call “good decisions”:
- Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
- First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
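To make the hierarchy concrete, here is a minimal Python sketch of how a strictly prioritized ruleset like this might be applied by a machine. Everything in it, the `Action` class, its attributes, and the choose-the-least-severe-violation rule, is an illustrative assumption rather than anything Asimov specifies; in particular, it simply presumes the robot can already tell whether an action “harms” anyone, which is the hard part.

```python
# Hypothetical sketch: choosing among candidate actions under a strictly
# prioritized ruleset, in the spirit of Asimov's hierarchy. Class names,
# attributes, and the severity scheme are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # would violate the Zeroth Law
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_robot: bool = False  # would violate the Third Law


# Laws ordered from highest to lowest priority (Zeroth Law first).
LAWS = [
    ("Zeroth Law", lambda a: a.harms_humanity),
    ("First Law", lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law", lambda a: a.endangers_robot),
]


def worst_violation(action: Action) -> int:
    """Index of the highest-priority law the action violates.

    A lower index means a more serious violation; len(LAWS) means the
    action violates no law at all.
    """
    return min(
        (i for i, (_, violated) in enumerate(LAWS) if violated(action)),
        default=len(LAWS),
    )


def choose(actions: list[Action]) -> Action:
    """Pick the action whose most serious violation is least severe.

    Ties are broken arbitrarily, which is exactly the kind of gap a
    purely rule-based agent cannot resolve on its own.
    """
    return max(actions, key=worst_violation)


if __name__ == "__main__":
    options = [
        Action("ignore the order and stay safe", disobeys_order=True),
        Action("obey the order and risk damage", endangers_robot=True),
    ]
    print(choose(options).name)  # -> "obey the order and risk damage"
```

Even in this toy form, the gaps show: the ruleset says nothing about how to compare two actions that violate the same law, how to weigh a small harm against a large one, or what counts as harm in the first place.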
Asimov’s autonomous ethical agents can ...