Chapter 6. Fairness Post-Processing

Post-processing methods are fairness interventions applied at the last stage of the modeling pipeline. The data has already been selected and pre-processed, and a model has already been trained. You may know nothing about that model; perhaps it has come to you as a black box, and your job is to make it fairer according to some metric before deploying it.
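To make this concrete, here is a minimal sketch of one common post-processing move: choosing a per-group decision threshold on a black-box model's scores so that each group receives positive decisions at the same rate (a demographic-parity criterion). The function names, the `target_rate` value, and the synthetic scores are all illustrative assumptions, not code from any particular library.

```python
import numpy as np

def parity_thresholds(scores, groups, target_rate=0.3):
    """Pick a per-group score threshold so that each group's positive
    rate is roughly target_rate. Illustrative only: real post-processing
    methods also weigh accuracy, error rates, and legal constraints."""
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # Approving everything at or above the (1 - target_rate) quantile
        # approves roughly a target_rate fraction of this group.
        thresholds[g] = np.quantile(g_scores, 1 - target_rate)
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    """Turn black-box scores into decisions using group-specific cutoffs."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Toy scores from an opaque model, with a deliberate score gap
# between two groups (hypothetical data).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
groups = np.array(["a"] * 500 + ["b"] * 500)

decisions = apply_thresholds(scores, groups,
                             parity_thresholds(scores, groups))
for g in ("a", "b"):
    print(g, decisions[groups == g].mean())
```

Note that the intervention never opens the model: it only remaps the model's outputs, which is exactly the constraint post-processing methods are designed for.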

If this sounds like an unrealistic limitation, you have likely not worked in the real world for long enough. Proprietary software is deployed even in governmental applications. Consider the COMPAS algorithm used by many states to predict criminal recidivism. This prediction has real consequences, as it may decide whether a defendant can receive bail to be free while awaiting trial, whether a convicted defendant will receive a lighter or harsher sentence, and whether an imprisoned defendant is eligible for parole.

What’s more, while proprietary models are often criticized, there are many justifications offered for them. Though this is not the main point of this chapter, it’s important to be aware of these justifications, partly so you can judge whether they apply to a model you receive, and partly so you can anticipate the rebuttals your objections to a black-box model are likely to meet. That doesn’t mean a black-box model is always right, but it’s good to understand the logic. Here are some of the justifications:

Intellectual ...
