Video description
The concepts of “undesired bias” and “black box models” in machine learning have become highly discussed topics, thanks to numerous high-profile incidents covered by the media. It’s a challenging subject: it could even be said that the concept of societal bias is itself biased, depending on an individual’s (or group’s) perspective.
Alejandro Saucedo (The Institute for Ethical AI & Machine Learning) doesn’t reinvent the wheel; he simplifies the issue of AI explainability so it can be tackled with traditional methods. He covers high-level definitions of bias in machine learning to remove ambiguity, then demystifies the topic through a hands-on example: automating a company’s loan-approval process with machine learning. Working through this challenge step by step, you’ll apply key tools and techniques from the latest research, together with domain expert knowledge at the right points, to explain decisions and mitigate undesired bias in machine learning models.
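To make the hands-on example concrete, here is a minimal sketch (not Alejandro’s actual notebook) of the kind of loan-approval classifier being automated; the dataset path and column names such as `approved` are illustrative assumptions.

```python
# Minimal sketch of a loan-approval classifier (illustrative only; the
# dataset path and column names are assumptions, not from the talk).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical applicant table with numeric/encoded features and an
# "approved" label (1 = loan approved, 0 = rejected).
df = pd.read_csv("loan_applications.csv")
X = df.drop(columns=["approved"])
y = df["approved"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```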
Alejandro breaks undesired bias down into two constituent parts, a priori societal bias and a posteriori statistical bias, with tangible examples of how each is introduced at every step, along with some very interesting research findings on the topic. Spoiler alert: Alejandro takes a pragmatic approach, showing that any nontrivial system will always carry some inherent bias, so the objective is not to remove bias altogether but to get your system as close as possible to your objectives and your objectives as close as possible to the ideal solution.
You’ll assess bias in machine learning through three key steps, each with a real-life example: data analysis, inference result analysis, and production metrics analysis. Automating the loan-approval process will surface bias that affects results in negative ways, and you’ll learn techniques for performing a reasonable analysis, along with the key touchpoints and metrics that ensure the right domain experts are involved. By the time you leave, you’ll have covered fundamental data science topics such as feature importance analysis, class imbalance assessment, model evaluation metrics, partial dependence, and feature correlation. More importantly, you’ll understand how these fundamentals interact at different touchpoints with the right domain experts to ensure undesired bias is identified and documented. Everything is covered with a hands-on example through a practical Jupyter notebook experience.
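As a rough illustration of three of the fundamentals listed above (class imbalance assessment, feature importance analysis, and partial dependence), the sketch below applies standard scikit-learn tooling to the hypothetical model from the previous snippet; the `income` column is an assumption, not a feature from the talk’s dataset.

```python
# Illustrative bias-assessment checks on the hypothetical loan model above.
from sklearn.inspection import permutation_importance, partial_dependence

# 1. Class imbalance: how skewed is the approved/rejected label?
print(y_train.value_counts(normalize=True))

# 2. Feature importance: which inputs drive the approval decision most?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X_test.columns, imp.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

# 3. Partial dependence: how does the predicted approval vary with a single
#    feature (a hypothetical "income" column), averaging out the others?
pdp = partial_dependence(model, X_train, features=["income"])
print(pdp["average"][0])
```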
Prerequisite knowledge
- Experience with a machine learning project
What you'll learn
- Gain a high-level overview of bias in machine learning
- Learn key tools and techniques to assess, identify, and mitigate the risks that arise from unavoidable bias
This session is from the 2019 O'Reilly Strata Conference in New York, NY.
Product information
- Title: A practical guide to algorithmic bias and explainability in machine learning
- Author(s): Alejandro Saucedo
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920372356