Hands-On Explainable AI (XAI) with Python

Book description

Open up the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools needed to deploy Explainable AI (XAI) in your apps and reporting interfaces.

Key Features

  • Learn explainable AI tools and techniques to produce trustworthy AI results
  • Understand how to detect, handle, and avoid common issues with AI ethics and bias
  • Integrate fair AI into popular apps and reporting tools to deliver business value using Python and associated tools

Book Description

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe.

Hands-On Explainable AI (XAI) with Python will see you work through hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.

You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.

You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions using Python, while supporting the visualization of machine learning models in user-explainable interfaces.

By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.
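To give a flavor of the kind of workflow the book covers, here is a minimal, illustrative sketch (not taken from the book's chapters): train a simple scikit-learn model, then use the open-source shap package to rank the features that most influenced a single prediction. The dataset and model choices below are placeholders, not the book's examples.

    # Minimal sketch (illustrative only): explaining one prediction of a
    # scikit-learn model with SHAP. Dataset and model are placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    import shap

    # Load a small tabular dataset and fit a linear classifier
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # Build a SHAP explainer for the linear model and compute Shapley values
    explainer = shap.LinearExplainer(model, X_train)
    shap_values = explainer.shap_values(X_test)

    # Rank the features that most influenced the first test prediction
    influence = sorted(zip(X.columns, shap_values[0]),
                       key=lambda pair: abs(pair[1]), reverse=True)
    for feature, value in influence[:5]:
        print(f"{feature}: {value:+.4f}")

The chapters walk through comparable steps with their own datasets, models, and notebooks, and extend them with LIME, Facets, the What-If Tool, counterfactual, contrastive, and anchors-based explanations.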

What you will learn

  • Plan for XAI through the different stages of the machine learning life cycle
  • Evaluate the strengths and weaknesses of popular open-source XAI applications
  • Examine how to detect and handle bias issues in machine learning data
  • Review ethics considerations and tools to address common problems in machine learning data
  • Share XAI design and visualization best practices
  • Integrate explainable AI results using Python models
  • Use XAI toolkits for Python in machine learning life cycles to solve business problems

Who this book is for

This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.

Some of the potential readers of this book include:

  1. Professionals who already use Python for data science, machine learning, research, and analysis
  2. Data analysts and data scientists who want an introduction to explainable AI tools and techniques
  3. AI project managers who must meet the contractual and legal obligations of AI explainability during the acceptance phase of their applications

Table of contents

  1. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Get in touch
  2. Explaining Artificial Intelligence with Python
    1. Defining explainable AI
      1. Going from black box models to XAI white box models
      2. Explaining and interpreting
    2. Designing and extracting
      1. The XAI executive function
    3. The XAI medical diagnosis timeline
      1. The standard AI program used by a general practitioner
        1. Definition of a KNN algorithm
        2. A KNN in Python
      2. West Nile virus – a case of life or death
        1. How can a lethal mosquito bite go unnoticed?
        2. What is the West Nile virus?
        3. How did the West Nile virus get to Chicago?
      3. XAI can save lives using Google Location History
      4. Downloading Google Location History
        1. Google's Location History extraction tool
      5. Reading and displaying Google Location History data
        1. Installation of the basemap packages
        2. The import instructions
        3. Importing the data
        4. Processing the data for XAI and basemap
        5. Setting up the plotting options to display the map
      6. Enhancing the AI diagnosis with XAI
        1. Enhanced KNN
      7. XAI applied to the medical diagnosis experimental program
        1. Displaying the KNN plot
        2. Natural language explanations
        3. Displaying the Location History map
        4. Showing mosquito detection data and natural language explanations
        5. A critical diagnosis is reached with XAI
    4. Summary
    5. Questions
    6. References
    7. Further reading
  3. White Box XAI for AI Bias and Ethics
    1. Moral AI bias in self-driving cars
      1. Life and death autopilot decision making
      2. The trolley problem
      3. The MIT Moral Machine experiment
      4. Real life and death situations
      5. Explaining the moral limits of ethical AI
    2. Standard explanation of autopilot decision trees
      1. The SDC autopilot dilemma
      2. Importing the modules
      3. Retrieving the dataset
      4. Reading and splitting the data
      5. Theoretical description of decision tree classifiers
      6. Creating the default decision tree classifier
      7. Training, measuring, and saving the model
      8. Displaying a decision tree
    3. XAI applied to an autopilot decision tree
      1. Structure of a decision tree
        1. The default output of the default structure of a decision tree
        2. The customized output of a customized structure of a decision tree
        3. The output of a customized structure of a decision tree
    4. Using XAI and ethics to control a decision tree
      1. Loading the model
      2. Accuracy measurements
      3. Simulating real-time cases
      4. Introducing ML bias due to noise
      5. Introducing ML ethics and laws
        1. Case 1 – not overriding traffic regulations to save four pedestrians
        2. Case 2 – overriding traffic regulations
        3. Case 3 – introducing emotional intelligence in the autopilot
    5. Summary
    6. Questions
    7. References
    8. Further reading
  4. Explaining Machine Learning with Facets
    1. Getting started with Facets
      1. Installing Facets on Google Colaboratory
      2. Retrieving the datasets
      3. Reading the data files
    2. Facets Overview
      1. Creating feature statistics for the datasets
        1. Implementing the feature statistics code
        2. Implementing the HTML code to display feature statistics
    3. Sorting the Facets statistics overview
      1. Sorting data by feature order
        1. XAI motivation for sorting features
      2. Sorting by non-uniformity
      3. Sorting by alphabetical order
      4. Sorting by amount missing/zero
      5. Sorting by distribution distance
    4. Facets Dive
      1. Building the Facets Dive display code
      2. Defining the labels of the data points
      3. Defining the color of the data points
      4. Defining the binning of the x axis and y axis
      5. Defining the scatter plot of the x axis and the y axis
    5. Summary
    6. Questions
    7. References
    8. Further reading
  5. Microsoft Azure Machine Learning Model Interpretability with SHAP
    1. Introduction to SHAP
      1. Key SHAP principles
        1. Symmetry
        2. Null player
        3. Additivity
      2. A mathematical expression of the Shapley value
      3. Sentiment analysis example
        1. Shapley value for the first feature, "good"
        2. Shapley value for the second feature, "excellent"
        3. Verifying the Shapley values
    2. Getting started with SHAP
      1. Installing SHAP
        1. Importing the modules
      2. Importing the data
        1. Intercepting the dataset
      3. Vectorizing the datasets
    3. Linear models and logistic regression
      1. Creating, training, and visualizing the output of a linear model
      2. Defining a linear model
      3. Agnostic model explaining with SHAP
      4. Creating the linear model explainer
      5. Creating the plot function
      6. Explaining the output of the model's prediction
        1. Explaining intercepted dataset reviews with SHAP
        2. Explaining the original IMDb reviews with SHAP
    4. Summary
    5. Questions
    6. References
    7. Further reading
      1. Additional publications
  6. Building an Explainable AI Solution from Scratch
    1. Moral, ethical, and legal perspectives
    2. The U.S. census data problem
      1. Using pandas to display the data
      2. Moral and ethical perspectives
        1. The moral perspective
        2. The ethical perspective
        3. The legal perspective
    3. The machine learning perspective
      1. Displaying the training data with Facets Dive
      2. Analyzing the training data with Facets
      3. Verifying the anticipated outputs
        1. Using KMC to verify the anticipated results
        2. Analyzing the output of the KMC algorithm
        3. Conclusion of the analysis
      4. Transforming the input data
    4. WIT applied to a transformed dataset
    5. Summary
    6. Questions
    7. References
    8. Further reading
  7. AI Fairness with Google's What-If Tool (WIT)
    1. Interpretability and explainability from an ethical AI perspective
      1. The ethical perspective
      2. The legal perspective
      3. Explaining and interpreting
      4. Preparing an ethical dataset
    2. Getting started with WIT
      1. Importing the dataset
      2. Preprocessing the data
      3. Creating data structures to train and test the model
    3. Creating a DNN model
      1. Training the model
    4. Creating a SHAP explainer
      1. The plot of Shapley values
    5. Model outputs and SHAP values
    6. The WIT datapoint explorer and editor
      1. Creating WIT
      2. The datapoint editor
      3. Features
      4. Performance and fairness
        1. Ground truth
        2. Cost ratio
        3. Slicing
        4. Fairness
        5. The ROC curve and AUC
        6. The PR curve
        7. The confusion matrix
    7. Summary
    8. Questions
    9. References
    10. Further reading
  8. A Python Client for Explainable AI Chatbots
    1. The Python client for Dialogflow
      1. Installing the Python client for Google Dialogflow
      2. Creating a Google Dialogflow agent
      3. Enabling APIs and services
      4. The Google Dialogflow Python client
    2. Enhancing the Google Dialogflow Python client
      1. Creating a dialog function
      2. The constraints of an XAI implementation on Dialogflow
      3. Creating an intent in Dialogflow
        1. The training phrases of the intent
        2. The response of an intent
        3. Defining a follow-up intent for an intent
      4. The XAI Python client
        1. Inserting interactions in the MDP
        2. Interacting with Dialogflow with the Python client
    3. A CUI XAI dialog using Google Dialogflow
      1. Dialogflow integration for a website
      2. A Jupyter Notebook XAI agent manager
      3. Google Assistant
    4. Summary
    5. Questions
    6. Further reading
  9. Local Interpretable Model-Agnostic Explanations (LIME)
    1. Introducing LIME
      1. A mathematical representation of LIME
    2. Getting started with LIME
      1. Installing LIME on Google Colaboratory
      2. Retrieving the datasets and vectorizing the dataset
    3. An experimental AutoML module
      1. Creating an agnostic AutoML template
      2. Bagging classifiers
      3. Gradient boosting classifiers
      4. Decision tree classifiers
      5. Extra trees classifiers
    4. Interpreting the scores
    5. Training the model and making predictions
      1. The interactive choice of classifier
      2. Finalizing the prediction process
        1. Interception functions
    6. The LIME explainer
      1. Creating the LIME explainer
      2. Interpreting LIME explanations
        1. Explaining the predictions as a list
        2. Explaining with a plot
        3. Conclusions of the LIME explanation process
    7. Summary
    8. Questions
    9. References
    10. Further reading
  10. The Counterfactual Explanations Method
    1. The counterfactual explanations method
      1. Dataset and motivations
      2. Visualizing counterfactual distances in WIT
      3. Exploring data point distances with the default view
      4. The logic of counterfactual explanations
        1. Belief
        2. Truth
        3. Justification
        4. Sensitivity
    2. The choice of distance functions
      1. The L1 norm
      2. The L2 norm
      3. Custom distance functions
    3. The architecture of the deep learning model
      1. Invoking WIT
      2. The custom prediction function for WIT
      3. Loading a Keras model
      4. Retrieving the dataset and model
    4. Summary
    5. Questions
    6. References
    7. Further reading
  11. Contrastive XAI
    1. The contrastive explanations method
    2. Getting started with the CEM applied to MNIST
      1. Installing Alibi and importing the modules
      2. Importing the modules and the dataset
        1. Importing the modules
        2. Importing the dataset
        3. Preparing the data
    3. Defining and training the CNN model
      1. Creating the CNN model
      2. Training the CNN model
      3. Loading and testing the accuracy of the model
    4. Defining and training the autoencoder
      1. Creating the autoencoder
      2. Training and saving the autoencoder
      3. Comparing the original images with the decoded images
    5. Pertinent negatives
      1. CEM parameters
      2. Initializing the CEM explainer
      3. Pertinent negative explanations
    6. Summary
    7. Questions
    8. References
    9. Further reading
  12. Anchors XAI
    1. Anchors AI explanations
      1. Predicting income
      2. Classifying newsgroup discussions
    2. Anchor explanations for ImageNet
      1. Installing Alibi and importing the modules
      2. Loading an InceptionV3 model
      3. Downloading an image
      4. Processing the image and making predictions
      5. Building the anchor image explainer
      6. Explaining other categories
      7. Other images and difficulties
    3. Summary
    4. Questions
    5. References
    6. Further reading
  13. Cognitive XAI
    1. Cognitive rule-based explanations
      1. From XAI tools to XAI concepts
      2. Defining cognitive XAI explanations
      3. A cognitive XAI method
        1. Importing the modules and the data
        2. The dictionaries
        3. The global parameters
        4. The cognitive explanation function
      4. The marginal contribution of a feature
        1. A mathematical perspective
        2. The Python marginal cognitive contribution function
    2. A cognitive approach to vectorizers
      1. Explaining the vectorizer for LIME
      2. Explaining the IMDb vectorizer for SHAP
    3. Human cognitive input for the CEM
      1. Rule-based perspectives
    4. Summary
    5. Questions
    6. Further reading
  14. Answers to the Questions
    1. Chapter 1, Explaining Artificial Intelligence with Python
    2. Chapter 2, White Box XAI for AI Bias and Ethics
    3. Chapter 3, Explaining Machine Learning with Facets
    4. Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP
    5. Chapter 5, Building an Explainable AI Solution from Scratch
    6. Chapter 6, AI Fairness with Google's What-If Tool (WIT)
    7. Chapter 7, A Python Client for Explainable AI Chatbots
    8. Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME)
    9. Chapter 9, The Counterfactual Explanations Method
    10. Chapter 10, Contrastive XAI
    11. Chapter 11, Anchors XAI
    12. Chapter 12, Cognitive XAI
  15. Other Books You May Enjoy
  16. Index

Product information

  • Title: Hands-On Explainable AI (XAI) with Python
  • Author(s): Denis Rothman
  • Release date: July 2020
  • Publisher(s): Packt Publishing
  • ISBN: 9781800208131