Chapter 8. The Culture of Ethics: Responsible AI
As a business or technical leader, you will play a significant role in the sustainable development of AI. Establishing a strong culture of responsibility in your organization will be critical to developing AI that your employees and your customers can trust, and ultimately to shaping the overall future impact of AI on society.
The first step in this journey is to understand and acknowledge the challenges associated with AI, and define the principles that will guide how your company will address those challenges.
Responsible AI Principles
We went through this same journey at Microsoft, which led us to define the six principles that we believe should guide the development of AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We defined those principles very early in our AI transformation journey, which has allowed us to gain valuable insights into the process and develop technology that we also offer to our customers to help them design and operate AI responsibly. Let's examine those six principles in more detail.
Fairness
AI is designed by humans and trained on data coming from the real world, and unfortunately the real world can be unfair and biased. If an AI model for a loan assessment is trained on historical data where credit agents were biased in their decisions, the AI model will also carry that bias. If another AI model that identifies sales opportunities ...
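The bias-carryover described above can be made measurable. One common starting point is a demographic-parity check: comparing approval rates across groups in the historical decisions a model would learn from. The sketch below is a minimal, self-contained illustration; the group labels and decision data are invented for the example, not drawn from any real lending dataset.

```python
# Hypothetical illustration: if historical loan decisions favored one
# group, a model trained on them will tend to reproduce that gap.
# A simple audit is the demographic-parity gap: the difference in
# approval rates between groups.

def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs, approved in {0, 1}.
    Returns the absolute difference in approval rate between groups."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [a for g, a in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    lo, hi = sorted(rates.values())
    return hi - lo

# Invented historical data where group "B" was approved far less often:
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

print(f"approval-rate gap: {demographic_parity_gap(history):.2f}")
# Group A: 0.75, group B: 0.25 -> gap of 0.50
```

A large gap in the training data is a warning sign to investigate before training, since a model fit to these labels would inherit the same disparity.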