CHAPTER 73
Is AI Ready for Morality?
By Samiran Ghosh
Independent Consultant
The modern world increasingly runs on intelligent algorithms created by humans. The data-hungry, self-improving computer programs that underlie the AI revolution already determine Google search results, Facebook news feeds and online shopping recommendations. Increasingly, they also decide how easily we get a mortgage or a job interview, the chances we will be stopped and searched by the police on our way home, and what penalties we face if we commit a crime. So, these systems would have to be beyond reproach in their decision-making, correct? Wrong. Bad input data, skewed logic or simply the prejudices of their creators mean AI systems all too easily reproduce and even amplify human biases – as the following examples show.
COMPAS
COMPAS is an algorithm widely used in the US to guide sentencing by predicting the likelihood of a criminal reoffending. In perhaps the most notorious case of AI prejudice, the US news organization ProPublica reported in May 2016 that COMPAS is racially biased. According to the analysis, the system systematically overestimated the reoffending risk of black defendants and underestimated that of white defendants.
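To make the finding concrete, here is a minimal, illustrative Python sketch of the kind of group-wise error-rate comparison ProPublica performed. The records, group labels and field names below are entirely hypothetical – this is not ProPublica's dataset or methodology – but it shows how a "high risk" label can be wrong far more often for one group than another.

# Illustrative only: hypothetical data, not ProPublica's actual COMPAS analysis.
# Compares how often each group's non-reoffenders were wrongly flagged high risk.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` that the model flagged as high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = sum(1 for r in non_reoffenders if r["high_risk"])
    return flagged / len(non_reoffenders) if non_reoffenders else 0.0

# Hypothetical scored records: group label, model's risk flag, actual outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for g in ("A", "B"):
    print(f"Group {g} false positive rate: {false_positive_rate(records, g):.2f}")

On ProPublica's real data, this style of comparison showed that black defendants who did not go on to reoffend were almost twice as likely to have been labelled high risk as white defendants who did not reoffend.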
PredPol
Another algorithm, PredPol, was designed to predict when and where crimes will take place (a real-world Minority Report), with the aim of helping to reduce human bias in policing. But in 2016, the Human Rights Data Analysis Group found that the software could ...