Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that automating decisions with data makes them fair, but that isn't the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline.
Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You’ll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias.
In this report, you’ll explore:
- Legal, ethical, and trust factors you need to consider when defining fairness for your use case
- Different ways to measure and remove unfair bias, using the most relevant metrics for the particular use case
- How to define acceptable thresholds for model accuracy and unfair model bias
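As a flavor of the kind of metrics the report covers, the sketch below computes two common group-fairness measures, statistical parity difference and disparate impact, on a hypothetical toy dataset. The data, group labels, and helper function are invented for illustration; this is a minimal sketch of the underlying arithmetic, not the AI Fairness 360 API itself.

```python
# Hypothetical toy data: (group, favorable_outcome) pairs, where group "A"
# is treated as privileged and "B" as unprivileged in this example.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def favorable_rate(records, group):
    """Fraction of records in `group` that received the favorable outcome."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

# Statistical parity difference:
#   P(favorable | unprivileged) - P(favorable | privileged); 0 means parity.
spd = favorable_rate(records, "B") - favorable_rate(records, "A")

# Disparate impact: the ratio of the two rates; the informal "80% rule"
# flags values below 0.8 as potentially discriminatory.
di = favorable_rate(records, "B") / favorable_rate(records, "A")

print(f"statistical parity difference: {spd:.2f}")  # -0.50 for this toy data
print(f"disparate impact: {di:.2f}")                #  0.33 for this toy data
```

AI Fairness 360 provides many more metrics than these two, along with the bias-mitigation algorithms covered in chapter 2.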
Table of contents
- 1. Understanding and Measuring Bias with AIF360
- 2. Algorithms for Bias Mitigation
- 3. Python Tutorial
- 4. Conclusion
- Title: AI Fairness
- Release date: April 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781492077657