Live Online Training

Building Explainable Machine Learning Models

Creating equitable models and data science workflows

Topic: Data
Ayodele Odubela

A rash of recent public AI incidents has cost companies millions of dollars, harmed members of the general public, and eroded users' trust in AI technology. Regaining that trust will require more explainable and equitable models. By building models that humans can readily interpret, and by designing for equity from the start, you can foster a more responsible AI culture in your organization and help ensure a degree of fairness in your model outcomes.

Join expert Ayodele Odubela to learn how to make the models you create more interpretable and equitable. As you assess a past ML project, you’ll discover how to incorporate fairness and explainability into your workflow and analyze the kind of harm your models can cause. Understanding how to gauge harm will go a long way toward building an appropriate harm mitigation strategy into your workflow.

What you'll learn and how you can apply it

By the end of this live online course, you’ll understand:

  • How to analyze a new machine learning project for its impact and equity opportunities
  • Steps you can take to measure equity and perform an impact analysis
  • How to redesign the data science lifecycle to account for fairness, explainability, and bias mitigation

And you’ll be able to:

  • Assess how equitably a past project was designed and suggest improvements
  • Create an interface to offer human-readable explanations
  • Create a plan to mitigate harm caused by machine learning models

This training course is for you because...

  • You’re a data scientist or researcher trying to make your work more explainable and ethical.
  • You work with machine learning code and large datasets.
  • You want to become an ethical advocate for accountable machine learning in your organization.

Prerequisites

  • A basic understanding of machine learning modeling and data science development workflows
  • Experience with a recent machine learning project


About your instructor

  • Ayodele Odubela is a Data Science Advocate for Comet ML. She combines her background in marketing with her passion for data and analytics to educate data scientists on model reproducibility and experiment tracking. She earned her master's degree in data science from Regis University after working in various digital marketing roles. She's passionate about data justice, kayaking, and hockey.

Schedule

The time frames are estimates only and may vary according to how the class is progressing.

Introduction to ML explainability (55 minutes)

  • Presentation: ML explainability
  • Group discussion: What’s your data science workflow?
  • Jupyter Notebook exercise: Redesign your DS workflow

Break (5 minutes)

Building explainable ML (55 minutes)

  • Presentation: Getting started with explainability tools
  • Jupyter Notebook exercise: Follow the code to create a model explanation
  • Group discussion: How can you create ML explainability interfaces?
  • Q&A
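The course's notebooks aren't reproduced here, but as a minimal sketch of what a human-readable model explanation can look like, the example below uses scikit-learn's permutation importance to report how much each feature contributes to a model's accuracy. This is illustrative only; the tools covered in the session may differ.

```python
# Illustrative only: one way to produce a human-readable model
# explanation, using scikit-learn's permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public example dataset; the course exercises may use different data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)

# Report the five most influential features in plain language.
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: accuracy drops by {score:.3f} when this feature is shuffled")
```

Explanations phrased this way ("accuracy drops by X when feature Y is shuffled") are often a useful starting point for an explanation interface aimed at non-technical stakeholders.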

Break (5 minutes)

Measuring equity (55 minutes)

  • Presentation: Product equity checklist
  • Jupyter Notebook exercise: Apply the product equity checklist to a product
  • Group discussion: What does equity mean in your vertical?
  • Q&A
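The product equity checklist itself isn't published here, but one common quantitative starting point for measuring equity is the demographic parity difference: the gap in positive-prediction rates between groups defined by a sensitive attribute. The sketch below uses made-up data purely for illustration; it is not the course's checklist or metric.

```python
import numpy as np

# Hypothetical model predictions (1 = positive outcome) and a
# binary sensitive attribute (e.g. two demographic groups).
# Values are made up for illustration.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Positive-prediction rate within each group.
rate_a = y_pred[group == 0].mean()
rate_b = y_pred[group == 1].mean()

# Demographic parity difference: 0 means both groups receive
# positive predictions at the same rate.
dpd = abs(rate_a - rate_b)
print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}, "
      f"demographic parity difference: {dpd:.2f}")
```

A nonzero difference doesn't automatically mean the model is unfair, but it flags a disparity worth investigating as part of an impact analysis.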

Break (5 minutes)

Workflow and productizing (55 minutes)

  • Presentation: Ethical workflow recommendations
  • Jupyter Notebook exercise: Make changes to your technical workflow

Wrap-up and Q&A (5 minutes)