Video description
In Video Editions the narrator reads the book while the content, figures, code listings, diagrams, and text appear on the screen. Like an audiobook that you can also watch as a video.
AI doesn’t have to be a black box. These practical techniques help shine a light on your model’s mysterious inner workings. Make your AI more transparent, and you’ll improve trust in your results, combat data leakage and bias, and ensure compliance with legal requirements.
In Interpretable AI, you will learn:
- Why AI models are hard to interpret
- Interpreting white-box models such as linear regression, decision trees, and generalized additive models
- Model-agnostic techniques such as partial dependence plots, LIME, SHAP, and Anchors, along with saliency mapping, network dissection, and representational learning
- What fairness is and how to mitigate bias in AI systems
- Implementing robust AI systems that are GDPR compliant
Interpretable AI opens up the black box of your AI models. It teaches cutting-edge techniques and best practices that can make even complex AI systems interpretable. Each method is easy to implement with just Python and open source libraries. You’ll learn to identify when you can use inherently transparent models, and how to mitigate opacity when your problem demands the power of a hard-to-interpret deep learning model.
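As a flavor of the white-box techniques mentioned above, consider how a linear regression model is interpretable by construction: its learned weights directly state each feature’s effect on the prediction. The sketch below is illustrative only (toy data and plain Python, not code from the book); it fits a one-feature least-squares line and reads the slope as the feature’s contribution.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data generated roughly as y = 2x + 1 with a little noise
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]

slope, intercept = fit_line(xs, ys)
# The slope is the interpretation: each unit increase in x
# raises the prediction by about `slope` (≈ 2.01 here).
```

Black-box models such as deep neural networks offer no equivalent weight-reading shortcut, which is why the post hoc methods listed above (LIME, SHAP, saliency maps, and the rest) exist.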
About the Technology
It’s often difficult to explain how deep learning models work, even for the data scientists who create them. Improving transparency and interpretability in machine learning models minimizes errors, reduces unintended bias, and increases trust in the outcomes. This unique book contains techniques for looking inside “black box” models, designing accountable algorithms, and understanding the factors that cause skewed results.
About the Book
Interpretable AI teaches you to identify the patterns your model has learned and why it produces its results. As you read, you’ll pick up algorithm-specific approaches, like interpreting regression and generalized additive models, along with tips to improve performance during training. You’ll also explore methods for interpreting complex deep learning models where some processes are not easily observable. AI transparency is a fast-moving field, and this book simplifies cutting-edge research into practical methods you can implement with Python.
What's Inside
- Techniques for interpreting AI models
- Counteracting errors from bias, data leakage, and concept drift
- Measuring fairness and mitigating bias
- Building GDPR-compliant AI systems
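To illustrate the kind of fairness measurement covered in the book’s final part, one common notion is demographic parity: the rate of favorable outcomes should be similar across demographic groups. The sketch below is a minimal illustration under our own assumptions (hypothetical decision data, plain Python), not code from the book.

```python
def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups; 0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical model decisions (1 = approved) for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved

gap = demographic_parity_difference(group_a, group_b)  # 0.25
```

A gap of 0.25 would flag the model for further investigation; the book goes on to discuss other fairness notions and mitigation strategies, since no single metric captures fairness on its own.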
About the Reader
For data scientists and engineers familiar with Python and machine learning.
About the Author
Ajay Thampi is a machine learning engineer focused on responsible AI and fairness.
Quotes
A sound introduction for practitioners to the exciting field of interpretable AI.
- Pablo Roccatagliata, Torcuato Di Tella University
Ajay Thampi explains in an easy-to-understand way the importance of interpretability in machine learning.
- Ariel Gamiño, Athenahealth
Effectively demystifies interpretable AI for novice and pro alike.
- Vijayant Singh, Razorpay
Concrete examples help the understanding and building of interpretable AI systems.
- Izhar Haq, Long Island University
Table of contents
- Part 1. Interpretability basics
- Chapter 1. Introduction
- Chapter 1. Types of machine learning systems
- Chapter 1. Building Diagnostics+ AI
- Chapter 1. Gaps in Diagnostics+ AI
- Chapter 1. Building a robust Diagnostics+ AI system
- Chapter 1. Interpretability vs. explainability
- Chapter 1. What will I learn in this book?
- Chapter 1. Summary
- Chapter 2. White-box models
- Chapter 2. Diagnostics+—diabetes progression
- Chapter 2. Linear regression
- Chapter 2. Decision trees
- Chapter 2. Generalized additive models (GAMs)
- Chapter 2. Looking ahead to black-box models
- Chapter 2. Summary
- Part 2. Interpreting model processing
- Chapter 3. Model-agnostic methods: Global interpretability
- Chapter 3. Tree ensembles
- Chapter 3. Interpreting a random forest
- Chapter 3. Model-agnostic methods: Global interpretability
- Chapter 3. Summary
- Chapter 4. Model-agnostic methods: Local interpretability
- Chapter 4. Exploratory data analysis
- Chapter 4. Deep neural networks
- Chapter 4. Interpreting DNNs
- Chapter 4. LIME
- Chapter 4. SHAP
- Chapter 4. Anchors
- Chapter 4. Summary
- Chapter 5. Saliency mapping
- Chapter 5. Exploratory data analysis
- Chapter 5. Convolutional neural networks
- Chapter 5. Interpreting CNNs
- Chapter 5. Vanilla backpropagation
- Chapter 5. Guided backpropagation
- Chapter 5. Other gradient-based methods
- Chapter 5. Grad-CAM and guided Grad-CAM
- Chapter 5. Which attribution method should I use?
- Chapter 5. Summary
- Part 3. Interpreting model representations
- Chapter 6. Understanding layers and units
- Chapter 6. Convolutional neural networks: A recap
- Chapter 6. Network dissection framework
- Chapter 6. Interpreting layers and units
- Chapter 6. Summary
- Chapter 7. Understanding semantic similarity
- Chapter 7. Exploratory data analysis
- Chapter 7. Neural word embeddings
- Chapter 7. Interpreting semantic similarity
- Chapter 7. Summary
- Part 4. Fairness and bias
- Chapter 8. Fairness and mitigating bias
- Chapter 8. Fairness notions
- Chapter 8. Interpretability and fairness
- Chapter 8. Mitigating bias
- Chapter 8. Datasheets for datasets
- Chapter 8. Summary
- Chapter 9. Path to explainable AI
- Chapter 9. Counterfactual explanations
- Chapter 9. Summary
- Appendix A. Getting set up
- Appendix A. Git code repository
- Appendix A. Conda environment
- Appendix A. Jupyter notebooks
- Appendix A. Docker
- Appendix B. PyTorch
- Appendix B. Installing PyTorch
- Appendix B. Tensors
- Appendix B. Dataset and DataLoader
- Appendix B. Modeling
Product information
- Title: Interpretable AI, Video Edition
- Author(s): Ajay Thampi
- Release date: July 2022
- Publisher(s): Manning Publications