© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
T. Duke, Building Responsible AI Algorithms
https://doi.org/10.1007/978-1-4842-9306-5_7

7. Explainability

Toju Duke
London, UK

While the previous chapter covered Human-in-the-Loop (HITL) and its importance in building responsible AI algorithms, it is also paramount to ensure the transparency and explainability of ML models that follow HITL processes. This chapter reviews "explainability," also known as explainable AI (XAI), and its implementation.
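As a brief, hypothetical illustration of what implementing explainability can look like in practice (the library, dataset, and model below are assumptions for this sketch, not examples from the book), the open-source SHAP package can attribute a model's predictions to its input features:

# Minimal XAI sketch: feature attributions with SHAP (illustrative only).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature SHAP values: how much each feature
# pushed a given prediction above or below the model's average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features drive the model's behavior overall.
shap.summary_plot(shap_values, X)

Outputs like this give reviewers a human-readable account of which inputs a model relies on, which is the practical goal of explainability techniques.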

With the steady advancement of AI over recent years and decades, we've seen a gradual progression from logical, knowledge-based approaches to algorithms built on neural networks. Introduced by two researchers from the University of Chicago, ...
