Chapter 56. Blatantly Discriminatory Algorithms
Eric Siegel
Imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask how the decision process works, you inform them, “For one thing, our algorithm penalized your score by seven points because you’re black.”
We are headed in that direction. Distinguished experts are now campaigning for discriminatory algorithms in law enforcement and beyond. They argue that computers should be authorized to make life-altering decisions based directly on race and other protected classes. This would mean that computers could explicitly penalize black defendants for being black.
In most cases, data scientists intentionally design algorithms to be blind to protected classes. This is accomplished by excluding such factors from the model’s inputs. Doing so does not eliminate machine bias, the well-known phenomenon wherein models falsely flag one group more than another via “surrogate” variables (discussed in my article in Part VII of this book, “To Fight Bias in Predictive Policing, Justice Can’t Be Color-Blind”). But suppressing such model inputs is a fundamental first step, without which models are discriminatory.
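To make that first step concrete, here is a minimal sketch of what “blinding” a model to protected classes can look like in practice. This is not code from the chapter; the file name, the column names (race, gender, religion, defaulted), and the use of scikit-learn are all illustrative assumptions.

```python
# A minimal sketch (hypothetical names throughout) of withholding
# protected-class attributes from a predictive model's inputs.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("applicants.csv")          # hypothetical historical decision data
PROTECTED = ["race", "gender", "religion"]  # attributes the model may not see

X = df.drop(columns=PROTECTED + ["defaulted"])  # features, minus protected classes
y = df["defaulted"]                             # hypothetical outcome label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Note: as the text explains, dropping these columns does NOT eliminate
# machine bias; "surrogate" variables (ZIP code, for example) can still
# correlate strongly with the excluded attributes.
print(model.score(X_test, y_test))
```

The design choice here is simply that the protected attributes never reach the training features at all; as the paragraph above stresses, this is a necessary baseline, not a cure for bias introduced through correlated surrogates.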
I use the term “discriminatory” for decisions that are based in part on a protected class, such as when profiling by race or religion to determine police ...