7 CAN A MACHINE BE A MORAL AGENT? SHOULD ANY MACHINES BE MORAL AGENTS?
7.1 Machine Ethics
7.1 In the previous chapter, we saw that Christian List claims that "it should be clear that, while there are significant technological challenges here, conceptually, there is no reason why an AI system could not qualify as a moral agent" (List 2021: p. 1229). As List sees things, AI systems might in principle even be morally responsible for their decisions. The kinds of examples List has in mind are technologies operating in what he calls "high-stakes settings." That is to say, List is talking about technologies, such as self-driving cars or military robots, that have to make decisions that might have good or bad effects on human beings. We can also imagine machines making decisions that might have good or bad effects on animals or on the natural environment. Technologies operating in such high-stakes settings, List argues, need to make their decisions in a way that is sensitive to moral considerations. This general idea, the idea of technologies as moral agents, is the topic we will explore in this chapter.
7.2 This is by no means an idea unique to List. In fact, there is a whole interdisciplinary field of research called "machine ethics."1 The aim of machine ethics is to create what some authors call "artificial moral agents," or AMAs for short. This field of research is spearheaded by the computer scientist Michael Anderson and the philosopher ...