Explainable AI for Practitioners

Book description

Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error. But this approach often overlooks the importance of understanding why and how your ML model makes the predictions that it does.

Explainability methods provide an essential toolkit for better understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability. Experienced machine learning engineers and data scientists will learn hands-on how these techniques work so that they can apply these tools more easily in their daily workflows.

This essential book provides:

  • A detailed look at some of the most useful and commonly used explainability techniques, highlighting pros and cons to help you choose the best tool for your needs
  • Tips and best practices for implementing these techniques
  • A guide to interacting with explainability and avoiding common pitfalls
  • The knowledge you need to incorporate explainability in your ML workflow to help build more robust ML systems
  • Advice about explainable AI techniques, including how to apply them to models that consume tabular, image, or text data
  • Example implementation code in Python using well-known explainability libraries for models built in Keras and TensorFlow 2.0, PyTorch, and HuggingFace
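
To give a flavor of the hands-on material described above, here is a minimal, hypothetical sketch (not taken from the book) of one technique the book covers, permutation feature importance, using scikit-learn's permutation_importance with a placeholder dataset and model:

    # Minimal, hypothetical sketch of permutation feature importance with
    # scikit-learn; the dataset and model are placeholders, not book examples.
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the held-out score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, score in sorted(zip(X.columns, result.importances_mean),
                              key=lambda pair: pair[1], reverse=True):
        print(f"{name}: {score:.4f}")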


Table of contents

  1. Foreword
  2. Preface
    1. Who Should Read This Book?
    2. What Is and What Is Not in This Book?
    3. Code Samples
    4. Navigating This Book
    5. Conventions Used in This Book
    6. O’Reilly Online Learning
    7. How to Contact Us
    8. Acknowledgments
  3. 1. Introduction
    1. Why Explainable AI
    2. What Is Explainable AI?
    3. Who Needs Explainability?
    4. Challenges in Explainability
    5. Evaluating Explainability
    6. How Has Explainability Been Used?
      1. How LinkedIn Uses Explainable AI
      2. PwC Uses Explainable AI for Auto Insurance Claims
      3. Accenture Labs Explains Loan Decisions
      4. DARPA Uses Explainable AI to Build “Third-Wave AI”
    7. Summary
  4. 2. An Overview of Explainability
    1. What Are Explanations?
    2. Interpretability and Explainability
    3. Explainability Consumers
      1. Practitioners—Data Scientists and ML Engineers
      2. Observers—Business Stakeholders and Regulators
      3. End Users—Domain Experts and Affected Users
    4. Types of Explanations
      1. Premodeling Explainability
      2. Intrinsic Versus Post Hoc Explainability
      3. Local, Cohort, and Global Explanations
      4. Attributions, Counterfactual, and Example-Based Explanations
    5. Themes Throughout Explainability
      1. Feature Attributions
      2. Surrogate Models
      3. Activation
    6. Putting It All Together
    7. Summary
  5. 3. Explainability for Tabular Data
    1. Permutation Feature Importance
      1. Permutation Feature Importance from Scratch
      2. Permutation Feature Importance in scikit-learn
    2. Shapley Values
      1. SHAP (SHapley Additive exPlanations)
      2. Visualizing Local Feature Attributions
      3. Visualizing Global Feature Attributions
      4. Interpreting Feature Attributions from Shapley Values
      5. Managed Shapley Values
    3. Explaining Tree-Based Models
      1. From Decision Trees to Tree Ensembles
      2. SHAP’s TreeExplainer
    4. Partial Dependence Plots and Related Plots
      1. Partial Dependence Plots (PDPs)
      2. Individual Conditional Expectation Plots (ICEs)
      3. Accumulated Local Effects (ALE)
    5. Summary
  6. 4. Explainability for Image Data
    1. Integrated Gradients (IG)
      1. Choosing a Baseline
      2. Accumulating Gradients
      3. Improvements on Integrated Gradients
    2. XRAI
      1. How XRAI Works
      2. Implementing XRAI
    3. Grad-CAM
      1. How Grad-CAM Works
      2. Implementing Grad-CAM
      3. Improving Grad-CAM
    4. LIME
      1. How LIME Works
      2. Implementing LIME
    5. Guided Backpropagation and Guided Grad-CAM
      1. Guided Backprop and DeConvNets
      2. Guided Grad-CAM
    6. Summary
  7. 5. Explainability for Text Data
    1. Overview of Building Models with Text
      1. Tokenization
      2. Word Embeddings and Pretrained Embeddings
    2. LIME
      1. How LIME Works with Text
    3. Gradient x Input
      1. Intuition from Linear Models
      2. From Linear to Nonlinear and Text Models
      3. Grad L2-norm
    4. Layer Integrated Gradients
      1. A Variation on Integrated Gradients
    5. Layer-Wise Relevance Propagation (LRP)
      1. How LRP Works
      2. Deriving Explanations from Attention
    6. Which Method to Use?
      1. Language Interpretability Tool
    7. Summary
  8. 6. Advanced and Emerging Topics
    1. Alternative Explainability Techniques
      1. Alternate Input Attribution
      2. Explainability by Design
    2. Other Modalities
      1. Time-Series Data
      2. Multimodal Data
    3. Evaluation of Explainability Techniques
      1. A Theoretical Approach
      2. Empirical Approaches
    4. Summary
  9. 7. Interacting with Explainable AI
    1. Who Uses Explainability?
    2. How to Effectively Present Explanations
      1. Clarify What, How, and Why the ML Performed the Way It Did
      2. Accurately Represent the Explanations
      3. Build on the ML Consumer’s Existing Understanding
    3. Common Pitfalls in Using Explainability
      1. Assuming Causality
      2. Overfitting Intent to a Model
      3. Overreaching for Additional Explanations
    4. Summary
  10. 8. Putting It All Together
    1. Building with Explainability in Mind
      1. The ML Life Cycle
    2. AI Regulations and Explainability
    3. What to Look Forward To in Explainable AI
      1. Natural and Semantic Explanations
      2. Interrogative Explanations
      3. Targeted Explanations
    4. Summary
  11. A. Taxonomy, Techniques, and Further Reading
    1. ML Consumers
    2. Taxonomy of Explainability
    3. XAI Techniques
      1. Tabular Models
      2. Image Models
      3. Text Models
      4. Advanced and Emerging Techniques
    4. Interacting with Explainability
    5. Putting It All Together
    6. Further Reading
      1. Explainable AI
      2. Interacting with Explainability
      3. Technical Accuracy of XAI Techniques
      4. Brittleness of XAI Techniques
      5. XAI for DNNs
  12. Index
  13. About the Authors

Product information

  • Title: Explainable AI for Practitioners
  • Author(s): Michael Munn, David Pitman
  • Release date: October 2022
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781098119133