Chapter 3. Responsible AI

As noted in Chapter 1, sustaining AI in production involves coordination with a variety of key stakeholders (e.g., end users, risk managers) to ensure it is accountable to their interests.

Additionally, sustainable AI development should seek to automate this coordination, minimizing the burden on developers and ensuring that stakeholders and developers share responsibility.

It becomes increasingly important to prioritize building systems in a responsible manner as you operationalize AI to solve problems at an enterprise scale. Failure to do so can lead to unanticipated risks, implications, and consequences for your organization.

Today’s responsible AI conversation highlights issues that tend to land companies on a newspaper’s front page. But these broad conversations rarely offer specific recommendations for addressing the underlying responsibility challenges. Adding to the difficulty, there are no well-defined metrics for measuring responsible AI. To build this foundation, developers, managers, and senior leaders alike should understand their unique roles and contributions to the process.

In short, accountability for responsible AI belongs to everyone in the organization. This is further demonstrated by a growing body of guidance, including the recent release by the US deputy secretary of defense, “Implementing Responsible Artificial Intelligence in the Department of Defense.”1 The memo included provisions for implementing ...
