Video description
ML methods have been driving a revolution across fields including science, technology, finance, healthcare, and cybersecurity. For instance, ML can identify objects in images, translate between languages, power web search, assist medical diagnosis, and flag fraudulent transactions, all with surprising accuracy. Unfortunately, much of this progress has come at the cost of ML models, especially those based on deep neural networks, growing more complex and opaque. An overarching question that arises is why a model made a particular prediction. This question matters to developers debugging (mis-)predictions, to evaluators assessing the robustness and fairness of a model, and to end users deciding whether they can trust it.
Ankur Taly (Fiddler) explores the problem of understanding individual predictions by attributing them to input features, a problem that's received a lot of attention in the last couple of years. Ankur details an attribution method called integrated gradients that's applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification, and he covers applications of the method to debugging model predictions, increasing model transparency, and assessing model robustness. He also dives into a classic result from cooperative game theory, the Shapley value, which has recently been extensively applied to explaining predictions made by nondifferentiable models such as decision trees, random forests, and gradient-boosted trees. Time permitting, you'll get a sneak peek of the Fiddler platform and how it incorporates several of these techniques to demystify models.
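For intuition ahead of the session, here is a minimal NumPy sketch of the integrated gradients computation described above: gradients are averaged along a straight-line path from a baseline to the input, then scaled by the input delta. The `model_grad` callable is a hypothetical stand-in for the gradient of the model's output with respect to its input, which in practice would come from an autodiff framework; the step count and Riemann-sum approximation are standard choices, not specifics from the talk.

```python
import numpy as np

def integrated_gradients(model_grad, x, baseline, steps=50):
    """Approximate integrated gradients for input x relative to a baseline.

    model_grad: hypothetical callable returning the gradient of the model's
    scalar output with respect to its input (e.g., from an autodiff library).
    """
    # Evenly spaced interpolation coefficients along the straight-line
    # path from the baseline to the input.
    alphas = np.linspace(0.0, 1.0, steps + 1)
    # Accumulate gradients at each interpolated point (Riemann sum
    # approximation of the path integral).
    grad_sum = np.zeros_like(x, dtype=float)
    for alpha in alphas:
        grad_sum += model_grad(baseline + alpha * (x - baseline))
    avg_grad = grad_sum / len(alphas)
    # Per-feature attributions: averaged gradient scaled by the input delta.
    return (x - baseline) * avg_grad

# Toy usage: for a linear model f(x) = w . x the gradient is constant (w),
# so the attribution for feature i reduces to w[i] * x[i].
w = np.array([0.5, -1.0, 2.0])
attributions = integrated_gradients(lambda z: w,
                                    x=np.array([1.0, 2.0, 3.0]),
                                    baseline=np.zeros(3))
```

A useful sanity check, following the method's completeness axiom, is that the attributions sum approximately to the difference between the model's output at the input and at the baseline.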
Prerequisite knowledge
- A basic understanding of machine learning
What you'll learn
- Understand the risks of black box machine learning models
- Learn techniques to mitigate some of those risks
This session is from the 2019 O'Reilly Artificial Intelligence Conference in San Jose, CA.
Product information
- Title: Executive Briefing: Explaining machine learning models
- Author(s): Ankur Taly
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920370826