
Model Debugging for Machine Learning Systems

Published by O'Reilly Media, Inc.

Content level: Beginner to intermediate

Systematic methods to test and fix machine learning models

Anything that goes wrong with a model in production ultimately affects the organization behind it, costing it money or even its reputation. Model debugging, an emerging discipline focused on finding and fixing problems in ML systems, should therefore be a critical part of any data science pipeline. Model debugging treats ML models like code and tests them accordingly (because they usually are code), probing complex ML response functions and decision boundaries to systematically detect and correct accuracy, fairness, and security problems in ML systems.
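
To make "testing ML models like code" concrete, here is a minimal sketch of a unit-test-style quality gate on a trained classifier. The synthetic dataset, the model choice, and the accuracy threshold are all illustrative assumptions, not part of the course materials.

```python
# A minimal sketch of treating a model like code under test: hard
# assertions that fail loudly if behavior regresses. All names here
# (synthetic data, threshold) are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Quality gates, exactly like unit tests for ordinary code.
acc = accuracy_score(y_test, model.predict(X_test))
assert acc > 0.8, f"Accuracy regression: {acc:.3f}"        # performance gate
assert model.predict(X_test[:1]).shape == (1,), "Bad output shape"
```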

Join expert Navdeep Gill to explore common strategies involved in model debugging. You’ll get hands-on experience applying key techniques in Python that you can put into practice immediately. Plus, you’ll learn how to better communicate your findings to stakeholders.

Hands-on learning with Jupyter notebooks

All exercises and labs are provided as Jupyter notebooks—interactive documents that combine live code, equations, visualizations, and narrative text. There's nothing to install or configure; just click a link and get started! And you can revisit them anytime after class ends to practice and refine your skills.

What you’ll learn and how you can apply it

By the end of this live online course, you’ll understand:

  • Model debugging techniques and how to apply them in Python
  • Methods to promote trust in ML systems

And you’ll be able to:

  • Develop model debugging techniques in Python
  • Communicate model debugging findings in a practical, concise manner

This live event is for you because...

  • You’re an ML engineer or data scientist, or you work with machine learning models and data science pipelines.
  • You want to become well-versed in the latest techniques to test and fix machine learning models.

Prerequisites

  • No preparation or local installation needed—all exercises will be provided using Jupyter notebooks
  • Familiarity with Python (e.g., loops, if and else statements, and functions)
  • Experience building and evaluating supervised machine learning models (e.g., model performance)


Schedule

The time frames are only estimates and may vary according to how the class is progressing.

Introduction and environment setup (15 minutes)

  • Presentation: Introduction to machine learning; Jupyter Notebook environment walk-through
  • Jupyter notebook: Set Up and Explore the Environment
  • Q&A

Introduction to model debugging (30 minutes)

  • Presentation: Introduction to model debugging, including a note on trust in and understanding of ML models, both of which play key roles in debugging
  • Jupyter notebook: Prepare Data and Build the Machine Learning Model for Further Exercises
  • Q&A
  • Break

Sensitivity (what-if) analysis (25 minutes)

  • Presentation: Sensitivity analysis (see the sketch after this list)
  • Jupyter notebook: Conduct Sensitivity Analysis on Your Model
  • Q&A
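
As a preview of what the notebook covers, here is a minimal sensitivity (what-if) analysis sketch: sweep a single feature across its observed range while holding the others fixed, and watch how the predicted probability responds. The dataset, model, and choice of feature are illustrative assumptions; any fitted scikit-learn-style classifier works the same way.

```python
# A minimal what-if sweep over one feature of a single instance.
# Data, model, and the probed feature are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

row = X[0].copy()                                     # instance to probe
grid = np.linspace(X[:, 3].min(), X[:, 3].max(), 25)  # sweep feature 3

for value in grid:
    what_if = row.copy()
    what_if[3] = value
    p = model.predict_proba(what_if.reshape(1, -1))[0, 1]
    print(f"feature_3={value:+.2f} -> P(y=1)={p:.3f}")
# Abrupt jumps or implausible reversals in this curve are debugging leads.
```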

Residual analysis (35 minutes)

  • Presentation: Residual analysis (see the sketch after this list)
  • Jupyter notebook: Conduct Residual Analysis on Your Model
  • Q&A
  • Break
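
The idea behind residual analysis is to study where the model is wrong, not just how often. A minimal sketch, under assumed synthetic data and an assumed binning choice: compute per-row log-loss residuals and slice them by a feature to find segments the model serves poorly.

```python
# A minimal residual-analysis sketch: per-row log-loss residuals, sliced
# by quartiles of one feature. Data, model, and the binning choice are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4_000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

p = np.clip(model.predict_proba(X_te)[:, 1], 1e-12, 1 - 1e-12)
logloss = -(y_te * np.log(p) + (1 - y_te) * np.log(1 - p))  # per-row residual

# Group residuals by quartile of feature 0; large gaps flag weak segments.
bins = np.digitize(X_te[:, 0], np.quantile(X_te[:, 0], [0.25, 0.5, 0.75]))
for b in range(4):
    print(f"feature_0 quartile {b}: mean logloss = {logloss[bins == b].mean():.3f}")
```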

Benchmark models (20 minutes)

  • Presentation: Benchmark models (see the sketch after this list)
  • Jupyter notebook: Build Benchmark Models and Compare with Previous Models
  • Q&A
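
A benchmark model is a simple, transparent baseline that a complex model must clearly beat; failing to do so is itself a debugging finding. A minimal sketch, with illustrative synthetic data and model choices that are assumptions rather than the course's own:

```python
# A minimal benchmark comparison: gradient boosting vs. a transparent
# logistic regression baseline. Data and models are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4_000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
benchmark = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)

for name, m in [("complex", complex_model), ("benchmark", benchmark)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name:9s} AUC = {auc:.3f}")
# Rows where the two models disagree sharply are also worth inspecting.
```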

Security auditing for ML attacks (30 minutes)

  • Presentation: ML security; auditing ML models (see the sketch after this list)
  • Jupyter notebook: Implement ML Security and Audit Techniques on Your Model
  • Q&A
  • Break
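
One common security audit is searching for adversarial examples: small perturbations of a correctly classified row that flip the model's prediction. A minimal sketch follows; real audits use stronger attacks, and the random search, step size, and data here are illustrative assumptions.

```python
# A minimal adversarial-example probe via random perturbation.
# The perturbation scale and search budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
row = X[0]
base = model.predict(row.reshape(1, -1))[0]

for _ in range(1_000):
    noise = rng.normal(scale=0.3, size=row.shape)      # small perturbation
    if model.predict((row + noise).reshape(1, -1))[0] != base:
        print("Prediction flipped by perturbation of L2 norm",
              round(float(np.linalg.norm(noise)), 3))
        break
else:
    print("No flip found at this perturbation scale.")
```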

Remediation strategies (20 minutes)

  • Presentation: Common remediation strategies to apply when model debugging surfaces adverse or unwanted results (see the sketch after this list)
  • Jupyter notebook: Implement Remediation Strategies
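
One remediation pattern, when debugging reveals an implausible relationship (say, predicted risk falling as a feature rises), is to retrain under a monotonicity constraint. A minimal sketch, with synthetic data and the constrained feature chosen as assumptions; `monotonic_cst` is a real `HistGradientBoostingClassifier` parameter in scikit-learn.

```python
# A minimal remediation sketch: retrain with a monotonic constraint so
# predictions cannot decrease as feature 0 increases. Data and the choice
# of constrained feature are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=4_000, n_features=10, random_state=0)

# +1 = non-decreasing in that feature, 0 = unconstrained.
constraints = [1] + [0] * 9
model = HistGradientBoostingClassifier(monotonic_cst=constraints,
                                       random_state=0).fit(X, y)
print("Constrained model accuracy:", model.score(X, y))
```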

Wrap-up and Q&A (5 minutes)

Your Instructor

  • Navdeep Gill


Skill covered

Machine Learning