Video description
ML methods have been causing a revolution in several fields, including science, technology, finance, healthcare, and cybersecurity. For instance, ML can identify objects in images, perform language translation, enable web search, perform medical diagnosis, and classify fraudulent transactions—all with surprising accuracy. Unfortunately, much of this progress has come with ML models, especially ones based on deep neural networks, growing more complex and opaque. An overarching question that arises is why the model made its prediction. This question matters to developers in debugging (mis-)predictions, to evaluators in assessing the robustness and fairness of the model, and to end users in deciding whether they can trust the model.
Ankur Taly (Fiddler) explores the problem of understanding individual predictions by attributing them to input features—a problem that's received a lot of attention in the last couple of years. Ankur details an attribution method called integrated gradients that's applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification, and he covers applications of the method to debug model predictions, increase model transparency, and assess model robustness. He also dives into a classic result from cooperative game theory, the Shapley value, which has recently been extensively applied to explaining predictions made by nondifferentiable models such as decision trees, random forests, and gradient-boosted trees. Time permitting, you'll get a sneak peek of the Fiddler platform and how it incorporates several of these techniques to demystify models.
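To make the integrated gradients idea concrete, here is a minimal sketch on a toy differentiable function: attributions are the elementwise product of (input − baseline) with a Riemann-sum approximation of the path integral of gradients from the baseline to the input. The function `model`, its analytic `grad`, and the step count are illustrative assumptions, not the talk's actual implementation.

```python
def model(x):
    # Toy differentiable "model": sum of squares plus the product of all inputs.
    prod = 1.0
    for v in x:
        prod *= v
    return sum(v * v for v in x) + prod

def grad(x):
    # Analytic gradient of the toy model above.
    g = []
    for i, v in enumerate(x):
        prod_others = 1.0
        for j, w in enumerate(x):
            if j != i:
                prod_others *= w
        g.append(2 * v + prod_others)
    return g

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann-sum approximation of the path integral of gradients
    # along the straight line from baseline to x, scaled by (x - baseline).
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = grad(point)
        for i in range(n):
            total[i] += g[i]
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(n)]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
attr = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to model(x) - model(baseline).
print(attr, sum(attr))
```

The final print illustrates the completeness axiom from the method's axiomatic justification: the per-feature attributions add up to the difference between the model's output at the input and at the baseline.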
Prerequisite knowledge
- A basic understanding of machine learning
What you'll learn
- Understand the risks of black box machine learning models
- Learn techniques to mitigate some of these risks
This session is from the 2019 O'Reilly Artificial Intelligence Conference in San Jose, CA.
Product information
- Title: Executive Briefing: Explaining machine learning models
- Author(s): Ankur Taly
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920370826