13
Exploring Bias and Fairness
A biased machine learning model produces, and can amplify, unfair or discriminatory predictions against certain groups, leading to negative consequences such as social or economic inequality. Fortunately, many countries have discrimination and equality laws that protect minority groups against unfavorable treatment. One of the worst scenarios a machine learning practitioner, or anyone who deploys a biased model, can face is receiving a legal notice imposing a heavy fine, or a letter from a lawyer announcing a lawsuit that forces the deployed model to be shut down. Here are a few examples of such situations:
- The ride-hailing app Uber faced legal action from two unions ...
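Before a biased model ever reaches a courtroom, the group-level disparity in its predictions can be measured directly. As a minimal illustrative sketch (the data, group labels, and function names here are hypothetical, not from this chapter), the demographic parity difference compares the positive-prediction rate between two groups:

```python
def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that received a positive prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups.
    A value near 0 suggests parity; a large gap signals potential bias."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Hypothetical binary predictions (1 = favorable outcome) for two groups
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups, "A", "B")
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.60 for this toy data
```

In this toy example, group A receives a favorable outcome 80% of the time versus 20% for group B, a gap large enough that a deployed model would warrant scrutiny under the kinds of equality laws mentioned above.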