Chapter 6. Ethics, Safety, and Security Concerns
Who drives policymaking in artificial intelligence (AI), and how can we ensure that it is ethical? This chapter aims to answer those questions.
The development and implementation of AI-related policies are a complex process that requires collaboration among industry, government, and other stakeholders. Because AI has the potential to transform our lives in ways both positive and negative, it is essential that those who create, deploy, and regulate AI ensure that it is developed and used ethically.
At the international level, the OECD's Recommendation of the Council on Artificial Intelligence is a major source of guidance in this area. The United States has been an advocate for this approach: in 2019, it joined like‐minded democracies in adopting the Recommendation, which sets out a series of intergovernmental principles for trustworthy AI, including inclusive growth, human‐centered values, transparency, safety and security, and accountability.
The Biden–Harris Administration's Office of Science and Technology Policy also released its Blueprint for an AI Bill of Rights in October 2022, providing guidance on the design, development, and deployment of AI and other automated systems so that they protect the rights of the American public. At the state level, legislation relating to AI was introduced in ...