Summary of Part 3
Chapter 14: AI Won't Save Us (Unless We Save It First)
As our society becomes increasingly dependent on AI, a pernicious assumption is spreading: that software will allow us to escape the influence of bias. Often, it does not. If we code fairness in, though, AI could help us build more equity-centered workplaces; if we fail to, we can embed bias into these systems for years or decades.
To ensure that automated intelligence is inclusive and its equations are equitable, developers should answer three questions that determine whether their code meets standards of fairness:
- Who, or what, is missing? The categories an algorithm allows for can cause certain demographics to go statistically missing (a rough sketch of this check follows the list).
- Who is creating the model? Diverse teams are more likely to provide diverse insights, which decreases the odds of exclusive designs. In addition, to avoid building a model based on feedback tainted by the interviewer effect, match interviewers' demographics to those of your participants.
- Are you evaluating impact through an equity lens? Be aware of your model's point of view, and don't rely only on randomized controlled trials.
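As a rough illustration of the first question, the sketch below flags groups that are absent or badly underrepresented in a training set, since a model can't learn much about people it barely sees. The group labels, field name, and 5% threshold are hypothetical placeholders, not anything prescribed in the book.

```python
from collections import Counter

# Hypothetical group labels we expect to see recorded in a training set.
EXPECTED_GROUPS = {"woman", "man", "nonbinary", "undisclosed"}
MIN_SHARE = 0.05  # flag groups below 5% representation; threshold is illustrative


def missing_or_underrepresented(records: list[dict]) -> dict:
    """Return the share of each expected group that falls below MIN_SHARE."""
    counts = Counter(r.get("gender", "undisclosed") for r in records)
    total = len(records)
    return {
        group: counts[group] / total
        for group in EXPECTED_GROUPS
        if counts[group] / total < MIN_SHARE
    }


# Usage: any group returned here is one the model will barely "see" in training.
sample = [{"gender": "woman"}, {"gender": "man"}, {"gender": "man"}, {"gender": "woman"}]
print(missing_or_underrepresented(sample))  # e.g. {'nonbinary': 0.0, 'undisclosed': 0.0}
```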
Even when you find a source of potential bias in your model, you can't always change the data set or circumstances you're working with. Two mitigations can help counter the bias that remains:
- “Blind” applications or resumes by removing biasing characteristics before an algorithm is applied to them (a minimal sketch follows this list).
- Instead of using a proxy for a missing ...
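As a rough illustration of the first mitigation, the sketch below strips fields that can reveal, or serve as proxies for, protected characteristics before a screening algorithm ever sees a record. The field names and the screening_model call are hypothetical placeholders, not anything prescribed in the book.

```python
# Fields that can directly reveal or proxy for demographic characteristics.
# This list is illustrative; the right set depends on your own data.
FIELDS_TO_BLIND = {"name", "photo_url", "date_of_birth", "home_address", "graduation_year"}


def blind_application(application: dict) -> dict:
    """Return a copy of the application with biasing fields removed."""
    return {k: v for k, v in application.items() if k not in FIELDS_TO_BLIND}


# Usage: blind each record before the screening algorithm sees it.
applicants = [
    {"name": "A. Example", "skills": ["python", "sql"], "graduation_year": 1998},
]
blinded = [blind_application(a) for a in applicants]
print(blinded[0])  # {'skills': ['python', 'sql']}
# scores = [screening_model.score(a) for a in blinded]  # hypothetical model call
```

Stripping these fields before scoring keeps the biasing signal out of the model entirely, rather than trying to correct for its influence afterward.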