Why it’s hard to design fair machine learning models

The O’Reilly Data Show Podcast: Sharad Goel and Sam Corbett-Davies on the limitations of popular mathematical formalizations of fairness.

By Ben Lorica
September 27, 2018
Lady Justice (source: Pixabay)

In this episode of the Data Show, I spoke with Sharad Goel, assistant professor at Stanford, and his student Sam Corbett-Davies. They recently wrote a survey paper, “A Critical Review of Fair Machine Learning,” where they carefully examined the standard statistical tools used to check for fairness in machine learning models. It turns out that each of the standard approaches (anti-classification, classification parity, and calibration) has limitations, and their paper is a must-read tour through recent research in designing fair algorithms. We talked about their key findings, and, most importantly, I pressed them to list a few best practices that analysts and industrial data scientists might want to consider.

Here are some highlights from our conversation:


Calibration and other standard metrics

Sam Corbett-Davies: The problem with many of the standard metrics is that they fail to take into account how different groups might have different distributions of risk. In particular, if there are people who are very low risk or very high risk, then it can throw off these measures in a way that doesn’t actually change what the fair decision should be. … The upshot is that if you end up enforcing or trying to enforce one of these measures, if you try to equalize false positive rates, or you try to equalize some other classification parity metric, you can end up hurting both the group you’re trying to protect and any other groups for which you might be changing the policy.
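As a rough illustration of the kind of classification-parity metric Corbett-Davies mentions, here is a minimal Python sketch that computes false positive rates separately for each group. It is not code from the paper; the column names `group`, `label`, and `pred` are assumptions for the example.

```python
# Minimal sketch (illustrative, not from the paper): compare one
# classification-parity metric -- the false positive rate -- across groups.
import pandas as pd


def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = predicted positives among actual negatives."""
    negatives = df[df["label"] == 0]
    if len(negatives) == 0:
        return float("nan")
    return (negatives["pred"] == 1).mean()


def fpr_by_group(df: pd.DataFrame) -> pd.Series:
    """False positive rate for each value of the protected attribute."""
    return pd.Series({g: false_positive_rate(sub) for g, sub in df.groupby("group")})


# Toy usage: group "a" has FPR 0.5, group "b" has FPR 0.0.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "label": [0, 0, 1, 0, 0, 1],
    "pred":  [1, 0, 1, 0, 0, 1],
})
print(fpr_by_group(df))
```

The point of the quote is that equal (or unequal) numbers here do not by themselves settle whether the underlying policy is fair, since the groups may have different distributions of risk.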

… A layman’s definition of calibration would be, if an algorithm gives a risk score—maybe it gives a score from one to 10, and one is very low risk and 10 is very high risk—calibration says the scores should mean the same thing for different groups (where the groups are defined based on some protected variable like gender, age, or race). We basically say in our paper that calibration is necessary for fairness, but it’s not good enough. Just because your scores are calibrated doesn’t mean you aren’t doing something funny that could be harming certain groups.
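To make the layman's definition concrete, here is a hedged sketch of a group-wise calibration check: bin the risk scores and compare the observed outcome rate for each group within each bin. The column names `score`, `outcome`, and `group` are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (an assumption, not the authors' method): if scores are
# calibrated across groups, rows in the same score bin should show similar
# observed outcome rates for every group.
import pandas as pd


def calibration_table(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Observed outcome rate and count per (score bin, group) cell."""
    binned = df.assign(score_bin=pd.cut(df["score"], bins=n_bins))
    return (
        binned.groupby(["score_bin", "group"], observed=True)["outcome"]
        .agg(rate="mean", n="size")
        .reset_index()
    )


# Toy usage with two score bins:
df = pd.DataFrame({
    "score":   [0.1, 0.2, 0.8, 0.9, 0.15, 0.85],
    "outcome": [0, 0, 1, 1, 0, 1],
    "group":   ["a", "a", "a", "b", "b", "b"],
})
print(calibration_table(df, n_bins=2))
```

As the quote notes, passing a check like this is necessary but not sufficient: calibrated scores can still be produced by a model that harms particular groups.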

The need to interrogate data

Sharad Goel: One way to operationalize this: if you have a set of reasonable measures that could serve as your label, you can see how much your algorithm changes when you use different measures. If your algorithm changes a lot under these different measures, then you really have to worry about determining the right measure—what is the right thing to predict? If, under a variety of reasonable measures, everything looks fairly stable, maybe it’s less of an issue. This is very hard to carry out in practice, but I do think it’s one of the most important things to understand and to be aware of when designing these types of algorithms.
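One way this suggestion might look in code, under stated assumptions: fit the same simple model against several candidate label definitions and measure how much the resulting risk scores move. The model choice (logistic regression) and the pairwise mean-absolute-difference summary are illustrative, not a prescription from the paper.

```python
# Hedged sketch of a label-sensitivity check: refit on each candidate label
# and compare the resulting risk scores pairwise.
import numpy as np
from sklearn.linear_model import LogisticRegression


def score_under_label(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit a simple model on one candidate label and return risk scores."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.predict_proba(X)[:, 1]


def label_sensitivity(X: np.ndarray, candidate_labels: dict) -> dict:
    """Pairwise mean absolute difference in scores across label choices.

    Large differences suggest the choice of label matters and needs
    scrutiny; small differences suggest the predictions are stable.
    """
    scores = {name: score_under_label(X, y) for name, y in candidate_labels.items()}
    names = list(scores)
    return {
        (a, b): float(np.mean(np.abs(scores[a] - scores[b])))
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }
```

For example, in a pretrial-risk setting the candidate labels might be "rearrest", "failure to appear", or "rearrest for a violent offense"; the check asks how much the scores depend on which of these you choose to predict.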

… There are a lot of subtleties to these different types of metrics that are important to be aware of when designing these algorithms in an equitable way. … But fundamentally, these are hard problems. It’s not particularly surprising that we don’t have an algorithm to help us make all of these algorithms fair. … What is most important is that we really interrogate the data.


Post topics: AI & ML, Data, O'Reilly Data Show Podcast
Post tags: Podcast
Share:

Get the O’Reilly Radar Trends to Watch newsletter