Chapter 5. Bias, Fairness, and Vulnerability in Decision-Making
“…you do not really understand a topic until you can teach it to a mechanical robot.”
—Judea Pearl and Dana Mackenzie
In recent years, the ethical use of artificial intelligence (AI) and machine learning has been widely debated by industry practitioners, regulators, and the public. It is now well understood that one unintended consequence of AI and machine learning is the risk of biased decisions that lead to unfair outcomes. These biases can stem from historical and societal biases in the training data, or they can be introduced during the model development process itself. Biased models and decisions can, for example, create disparities in, or restrict access to, credit and insurance for minority groups.
One of the promises of AI and machine learning is that automation can reduce bias and unfairness in financial decision-making. Despite this promise, however, we do not need to look far to find examples where AI and machine learning have instead reinforced bias and amplified unfairness: job application screening that prioritizes candidates by gender, computer vision that discriminates by race, and credit limit approvals that are gender biased.1
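To make the idea of a biased decision more concrete, the short Python sketch below computes group-level approval rates on synthetic data and compares them using a disparate impact ratio. This is an illustration of ours, not code or data from any real lending system: the group labels, the approval probabilities, and the commonly cited 0.8 ("four-fifths rule") threshold are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic approval decisions (True = approved) for two hypothetical
# applicant groups, A and B, with deliberately different approval rates.
group = rng.choice(["A", "B"], size=1_000, p=[0.7, 0.3])
approved = np.where(group == "A",
                    rng.random(1_000) < 0.60,  # group A approved ~60% of the time
                    rng.random(1_000) < 0.42)  # group B approved ~42% of the time

rate_a = approved[group == "A"].mean()  # selection rate for group A
rate_b = approved[group == "B"].mean()  # selection rate for group B

# Disparate impact ratio: the lower selection rate relative to the higher.
# Ratios below ~0.8 are often flagged for review under the four-fifths rule.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"rate A = {rate_a:.2f}, rate B = {rate_b:.2f}, ratio = {ratio:.2f}")

A full fairness assessment would go further than this simple rate comparison, but a gap of this kind between groups is often the first signal that a model's decisions warrant scrutiny.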
Data input and data capture by people, source systems, and applications that vary across different geographic localities can result in inaccuracies due to mislabeling, misinterpretation, ...