Chapter 6. Explainability, Bias Detection, and AI Responsibility in Causal AI

In this chapter, we will focus on the critical AI issues of explaining what a model is doing and detecting problems such as bias. Given the far-reaching and growing impact of causal AI, and of AI generally, it is critical that what these systems are doing, and why, be understandable. If models are difficult to understand, they cannot be trusted. Beyond understanding what these systems are doing and why, businesses must also be able to demonstrate that the results are unbiased, responsible, and fair. The importance of these issues will only increase as time goes on.


Google Trends shows that interest in the term explainable AI rose from a popularity index of 0 in February 2011 to its peak of 100 in February 2020 (and remained at 88 as of March 2023). It is no wonder, then, that understanding AI-based models is growing in importance. How do machine learning (ML) models make decisions, and can they be trusted, or are they biased? One of the biggest challenges facing businesses is that managers increasingly rely on AI systems to automate decision-making. Some of those decisions rest on straightforward inputs, such as a loan applicant's income or whether they have provided proof of ownership. Many other decisions these AI applications make, however, involve complex algorithms and data. The ability to understand how decisions are made ...
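To make the idea of explainability concrete, here is a minimal sketch using permutation feature importance from scikit-learn, one common model-agnostic technique: it measures how much a model's accuracy drops when each feature is shuffled. The loan-style feature names and the synthetic data are illustrative assumptions, not an example from this chapter.

```python
# A minimal sketch of model explainability via permutation feature
# importance. Feature names and data are synthetic/illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50_000, 15_000, n)
debt = rng.normal(20_000, 8_000, n)
age = rng.integers(21, 70, n).astype(float)
X = np.column_stack([income, debt, age])

# Synthetic ground truth: approval depends on income minus debt;
# age is deliberately irrelevant.
y = (income - debt + rng.normal(0, 5_000, n) > 25_000).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Shuffling an important feature degrades accuracy; an irrelevant
# one barely moves it, which is what a reviewer can inspect.
for name, imp in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

A manager reviewing this output can confirm that the model leans on income and debt rather than on a feature that should not matter, which is exactly the kind of check explainability tools enable.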
