© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
P. Mishra, Practical Explainable AI Using Python, https://doi.org/10.1007/978-1-4842-7158-2_8

8. AI Model Fairness Using a What-If Scenario

Pradeepta Mishra1  
(1)
Sobha Silicon Oasis, Bangalore, Karnataka, India
 

This chapter explains the use of the What-If Tool (WIT) to examine biases in AI models, such as machine learning-based regression models, classification models, and multi-class classification models. As a data scientist, you are responsible not only for developing a machine learning model but also for ensuring that the model is not biased and that new observations are treated fairly. It is imperative to probe the model's decisions and verify algorithmic fairness. ...
