Chapter 89. To Fight Bias in Predictive Policing, Justice Can’t Be Color-Blind

Eric Siegel

Crime-predicting models are caught in a quagmire doomed to controversy because, on their own, they cannot realize racial equity. It’s an intrinsically unsolvable problem. It turns out that although such models succeed in flagging (i.e., assigning higher probabilities to) both black and white defendants with equal precision, they also falsely flag black defendants more often than white ones; because the recorded rates of rearrest differ between the two groups, equalizing precision mathematically forces the false positive rates apart.1
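To see why, consider a few hypothetical numbers (illustrative assumptions, not figures from this chapter): if two groups have different base rates of recorded rearrest, then holding the model to the same precision and the same recall for both groups forces their false positive rates to differ. A minimal sketch in Python, assuming 60% precision and 60% recall for both groups:

```python
# Hypothetical numbers only: with equal precision (PPV) and equal recall
# (TPR) across two groups, different base rates force different false
# positive rates.

def false_positive_rate(base_rate, ppv, tpr):
    # Confusion-matrix identity:
    #   FPR = (base_rate / (1 - base_rate)) * ((1 - PPV) / PPV) * TPR
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

PPV, TPR = 0.6, 0.6  # same precision and recall imposed on both groups
for group, base_rate in [("higher base rate", 0.5), ("lower base rate", 0.3)]:
    fpr = false_positive_rate(base_rate, PPV, TPR)
    print(f"{group}: base rate {base_rate:.0%}, FPR {fpr:.0%}")

# Output:
#   higher base rate: base rate 50%, FPR 40%
#   lower base rate: base rate 30%, FPR 17%
```

Under these assumed values, the group with the higher base rate ends up falsely flagged more than twice as often, even though the model treats both groups identically by the precision and recall measures.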

But despite this seemingly paradoxical predicament, we are witnessing an unprecedented opportunity to advance social justice by turning predictive policing around to actively effect more fairness, rather than passively reinforcing today’s inequities.

Predictive policing introduces a quantitative element to weighty law enforcement decisions made by humans, such as whether to investigate or detain, how long a sentence to set, and whether to parole. When making such decisions, judges and officers take into consideration the calculated probability that a suspect or defendant will be convicted of a crime in the future. Calculating predictive probabilities from data is the job of predictive modeling (a.k.a. machine learning) software. It automatically establishes patterns by combing historical conviction records, and in turn these patterns—together, ...
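As a rough illustration of that modeling step, here is a minimal sketch, assuming entirely hypothetical features and records (prior convictions, age at first arrest) rather than any real risk-assessment tool; the model learns patterns from historical outcomes and outputs a probability for a new case:

```python
# A minimal sketch with made-up data, not a real risk-assessment tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [prior_convictions, age_at_first_arrest]
X_history = np.array([[0, 30], [1, 25], [3, 19], [5, 17], [0, 40], [2, 22]])
y_history = np.array([0, 0, 1, 1, 0, 1])  # 1 = later convicted again

model = LogisticRegression().fit(X_history, y_history)

# Probability the fitted model assigns to a new defendant being convicted again
new_case = np.array([[2, 21]])
print(model.predict_proba(new_case)[0, 1])
```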
