Chapter 55. Responsible Design and Use of AI: Managing Safety, Risk, and Transparency

Pamela Passman

AI is having a growing impact on markets and business practices around the world, and its potential is even greater. In September 2019, IDC forecast that "spending on AI systems will reach $97.9 billion in 2023, more than two and one half times the $37.5 billion that will be spent in 2019." According to the McKinsey Global Institute, AI could deliver additional global economic output of $13 trillion per year by 2030.

Yet even as it unleashes business potential and broader societal benefits, the use of AI can also result in a host of unwanted and sometimes serious consequences. These considerations have given rise to no fewer than 32 different industry, NGO, and government AI ethics codes, which outline steps that organizations should take to develop, implement, and use AI in ways that support societal values and manage risks.

Many forward-thinking companies—some with firsthand experience in dealing with unintended consequences of AI—have also developed their own codes of ethical AI. While these codes can vary quite a bit, nine common responsibilities have been identified. These responsibilities can be divided into three groups: responsible design and use, lawful use, and ethical use. Here we take ...
