Chapter 3. The Ways AI Goes Wrong, and the Legal Implications
Coauthor: William Goodrum
If we want to understand why cases of “AI gone wrong” occur, we must first understand that they do not occur at random. Rather, harms caused by AI can arise either as the intended result of a model accomplishing a harmful objective or as an unintended or unanticipated consequence of a model accomplishing an otherwise useful objective.
In Chapter 1, “Why Data Science Should Be Ethical,” and Chapter 2, “Background—Modeling and the Black-Box Algorithm,” we discussed a brief history of ethical concerns about the use of statistics and introduced the technical topics necessary for the remainder of the book.
Now, we delve deeper into the quagmire of irresponsible AI in the present day. In this chapter, we present illustrations of the harms arising from both intentionally malicious uses of AI and honest uses of AI that nonetheless end up causing harm. We develop an understanding of the various contexts and forms in which these harms occur, as well as who experiences them. We then transition to a discussion of how these harms are viewed internationally in legal and regulatory contexts. We do not attempt to cover the law on the subject comprehensively; instead, we focus on providing some familiarity with the legal considerations that should inform our quest to better understand and control our own use of such powerful algorithms.
AI and Intentional Consequences