Chapter 12. Laws for Machine Learning
I start with a provocation: law is the ultimate arbiter of what matters for fairness in machine learning, and that’s a good thing.
We have had ample opportunity to see that fairness is not a problem that will be solved by pure market competition or reputational mechanisms, for a variety of reasons. Some important reasons that economic or social pressures fail to enhance fairness in our digital environment are as follows:
- People make buying decisions for economic reasons, not fairness reasons.
- People don’t have the information or technical training they would need to make fairness-aware decisions even if they wanted to.
- The people most affected by unfairness in digital products may not even be the people most influential in that particular market.
- Digital products make and influence markets and culture as much as they are, in turn, influenced by these forces.
I now elaborate on each of these factors.
First, people simply don’t behave in the market as their ideological values would predict. Much behavioral research has shown that while people may hold core ideological values, bottom-line economic factors tend to drive their day-to-day decision making. One well-known example is the privacy paradox: people tend to state that they care deeply about privacy, but in lab and field experiments, they generally don’t demonstrate a willingness to pay for their privacy when asked to spend even small amounts of ...