8. AI Model Fairness Using a What-If Scenario

This chapter explains how to use the What-If Tool (WIT) to examine bias in AI models, such as machine learning-based regression, binary classification, and multi-class classification models. As a data scientist, you are responsible not only for developing a machine learning model, but also for ensuring that the model is not biased and that new observations are treated fairly. It is imperative to probe the model's decisions and verify its algorithmic fairness. ...
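Before turning to WIT itself, it helps to see the kind of check it visualizes. One common fairness probe is the demographic parity difference: the gap in positive-prediction rates between groups. The sketch below is a minimal, self-contained illustration of that metric, not part of the WIT API; the function name and the toy predictions are hypothetical.

```python
# Hypothetical sketch: demographic parity difference, the kind of
# group-level comparison the What-If Tool lets you explore visually.
# All names and data here are illustrative, not from the WIT library.

def demographic_parity_difference(predictions, groups, positive=1):
    """Gap between the highest and lowest positive-prediction
    rates across the groups present in `groups`."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(1 for p in preds if p == positive) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy model outputs for applicants in two groups, "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A value near 0 means both groups receive positive predictions at similar rates; a large gap, like the 0.5 above, is the sort of disparity you would want WIT to help you investigate.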