14 Artificial Intelligence Governance Standards
Sam De Silva, LLB, BMS, MBS, DBA, FCIPS, CITP, FBCS, FRSA
Partner, CMS Cameron McKenna Nabarro Olswang LLP
Barbara Zapisetskaya, BA, LLB
Senior Associate, CMS Cameron McKenna Nabarro Olswang LLP
Introduction
Artificial Intelligence (AI) is evolving at an ever-quickening pace, with the technology being introduced across the world to solve problems, streamline processes, and improve performance. The global AI market is projected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion in 2029, a compound annual growth rate of 20.1 percent.1 With the rapid growth in the use of foundation AI models (such as those underpinning ChatGPT and Bard), the spotlight on AI as a whole has intensified. According to a recent Gartner, Inc. survey, 56 percent of marketing leaders see greater reward than risk in generative AI.2 Further, 55 percent of organizations that have previously deployed AI now consider AI for every new use case they evaluate, and more than half of organizations (52 percent) report that risk factors are a critical consideration when evaluating new AI use cases.3
Against this backdrop, in this chapter we will discuss common issues associated with AI implementation by organizations, ISO standards as an effective means of addressing AI risks within an organization, and practical steps organizations can take to ensure that their AI governance processes are effective and robust.
How Is Regulation of AI Developing? ...