CHAPTER 6
Ethical and Responsible Use
INTRODUCTION
In a 2017 survey, 55% of US human resource managers said AI would be a regular part of their work within five years. There was no corresponding question about how these managers would ensure their smart tech actively counters bias and engages and empowers all people fairly and equally.1 The rapid commercialization of smart tech products is outpacing our collective understanding of their ethical and responsible use. As Shalini Kantayya, the director of Coded Bias, said to us, “We could roll back fifty years in civil rights advances if we blindly trust the algorithms.”2
Spider-Man's motto, “With great power comes great responsibility,” applies to users of smart tech.3 Whether you are an executive, team member, consultant, or customer, using smart tech ethically is one of your essential responsibilities.
When machines make decisions for people in ways that are largely invisible, they create opportunities for intentional and unintentional misuse that need to be carefully watched and mitigated. For instance, bankruptcy claims gathered from the web by financial companies can then be used to target people with predatory offers.
In this chapter, we will examine three critically important areas: responsible use, bias, and privacy. We will discuss real-life circumstances where these concerns have created ethical dilemmas. Finally, we will provide guidance to ensure that smart tech does far more good than harm in your organization ...