Deep Learning and XAI Techniques for Anomaly Detection

Create interpretable AI models for transparent and explainable anomaly detection with this hands-on guide. Purchase of the print or Kindle book includes a free PDF eBook.

Key Features

  • Build auditable XAI models for replicability and regulatory compliance
  • Derive critical insights from transparent anomaly detection models
  • Strike the right balance between model accuracy and interpretability

Book Description

Despite promising advances, the opaque nature of deep learning models makes them difficult to interpret, which hinders their practical deployment and regulatory compliance.

Deep Learning and XAI Techniques for Anomaly Detection shows you state-of-the-art methods that’ll help you understand and address these challenges. By leveraging the Explainable AI (XAI) and deep learning techniques described in this book, you’ll discover how to extract business-critical insights while ensuring fair and ethical analysis.

This practical guide will provide you with tools and best practices for achieving transparency and interpretability with deep learning models, ultimately establishing trust in your anomaly detection applications. Throughout the chapters, you’ll be equipped with the XAI and anomaly detection knowledge you need to embark on a series of real-world projects. Whether you are building computer vision, natural language processing, or time series models, you’ll learn how to quantify and assess their explainability.

By the end of this deep learning book, you’ll be able to build a variety of deep learning XAI models and perform validation to assess their explainability.

What you will learn

  • Explore deep learning frameworks for anomaly detection
  • Mitigate bias to ensure fair and ethical analysis
  • Increase your privacy and regulatory compliance awareness
  • Build deep learning anomaly detectors in several domains
  • Compare intrinsic and post hoc explainability methods
  • Examine backpropagation and perturbation methods
  • Apply model-agnostic and model-specific explainability techniques
  • Evaluate the explainability of your deep learning models

Who this book is for

This book is for anyone who aspires to explore explainable deep learning anomaly detection, tenured data scientists or ML practitioners looking for Explainable AI (XAI) best practices, or business leaders looking to make decisions on the trade-off between performance and interpretability of anomaly detection applications. A basic understanding of deep learning and anomaly detection–related topics using Python is recommended to get the most out of this book.

Table of contents

  1. Deep Learning and XAI Techniques for Anomaly Detection
  2. Foreword
  3. Contributors
  4. About the author
  5. About the reviewers
  6. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Download the example code files
    5. Download the color images
    6. Conventions used
    7. Get in touch
    8. Share Your Thoughts
    9. Download a free PDF copy of this book
  7. Part 1 – Introduction to Explainable Deep Learning Anomaly Detection
  8. Chapter 1: Understanding Deep Learning Anomaly Detection
    1. Technical requirements
    2. Exploring types of anomalies
    3. Discovering real-world use cases
      1. Detecting fraud
      2. Predicting industrial maintenance
      3. Diagnosing medical conditions
      4. Monitoring cybersecurity threats
      5. Reducing environmental impact
      6. Recommending financial strategies
    4. Considering when to use deep learning and what for
    5. Understanding challenges and opportunities
    6. Summary
  9. Chapter 2: Understanding Explainable AI
    1. Understanding the basics of XAI
      1. Differentiating explainability versus interpretability
      2. Contextualizing stakeholder needs
      3. Implementing XAI
    2. Reviewing XAI significance
      1. Considering the right to explanation
      2. Driving inclusion with XAI
      3. Mitigating business risks
    3. Choosing XAI techniques
    4. Summary
  10. Part 2 – Building an Explainable Deep Learning Anomaly Detector
  11. Chapter 3: Natural Language Processing Anomaly Explainability
    1. Technical requirements
    2. Understanding natural language processing
      1. Reviewing AutoGluon
    3. Problem
    4. Solution walkthrough
    5. Exercise
  12. Chapter 4: Time Series Anomaly Explainability
    1. Understanding time series
    2. Understanding explainable deep anomaly detection for time series
    3. Technical requirements
    4. The problem
    5. Solution walkthrough
    6. Exercise
    7. Summary
  13. Chapter 5: Computer Vision Anomaly Explainability
    1. Reviewing visual anomaly detection
      1. Reviewing image-level visual anomaly detection
      2. Reviewing pixel-level visual anomaly detection
    2. Integrating deep visual anomaly detection with XAI
    3. Technical requirements
    4. Problem
    5. Solution walkthrough
    6. Exercise
    7. Summary
  14. Part 3 – Evaluating an Explainable Deep Learning Anomaly Detector
  15. Chapter 6: Differentiating Intrinsic and Post Hoc Explainability
    1. Technical requirements
    2. Understanding intrinsic explainability
      1. Intrinsic global explainability
      2. Intrinsic local explainability
    3. Understanding post hoc explainability
      1. Post hoc global explainability
      2. Post hoc local explainability
    4. Considering intrinsic versus post hoc explainability
    5. Summary
  16. Chapter 7: Backpropagation versus Perturbation Explainability
    1. Reviewing backpropagation explainability
      1. Saliency maps
    2. Reviewing perturbation explainability
      1. LIME
    3. Comparing backpropagation and perturbation XAI
    4. Summary
  17. Chapter 8: Model-Agnostic versus Model-Specific Explainability
    1. Technical requirements
    2. Reviewing model-agnostic explainability
      1. Explaining AutoGluon with Kernel SHAP
    3. Reviewing model-specific explainability
      1. Interpreting saliency with Guided IG
    4. Choosing an XAI method
    5. Summary
  18. Chapter 9: Explainability Evaluation Schemes
    1. Reviewing the System Causability Scale (SCS)
    2. Exploring Benchmarking Attribution Methods (BAM)
    3. Understanding faithfulness and monotonicity
    4. Human-grounded evaluation framework
    5. Summary
  19. Index
    1. Why subscribe?
  20. Other Books You May Enjoy
    1. Packt is searching for authors like you
    2. Share Your Thoughts
    3. Download a free PDF copy of this book

Product information

  • Title: Deep Learning and XAI Techniques for Anomaly Detection
  • Author(s): Cher Simon
  • Release date: January 2023
  • Publisher(s): Packt Publishing
  • ISBN: 9781804617755