Overview
Presented by Manojit Nand – Senior Data Scientist at JPMorgan Chase & Co.
Understanding how algorithms can reinforce societal biases has become an important topic in data science. Recent methods for auditing models for fairness often require access to potentially sensitive demographic information, placing algorithmic fairness in conflict with individual privacy.
For example, gender recognition technology often fails to correctly classify transgender individuals. Developing more accurate models would require collecting information that could "out" these individuals, putting their social, psychological, and physical safety at risk.
We will discuss social science perspectives on privacy and how these paradigms can be incorporated into statistical measures of anonymity. I will emphasize the importance of ensuring the safety and privacy of all individuals represented in our data, even at the cost of model fairness.
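As a concrete illustration of the kind of statistical measure of anonymity the talk refers to (this sketch is not from the talk itself), k-anonymity asks whether every combination of quasi-identifying attributes, such as age band and partial ZIP code, is shared by at least k records, so no individual can be singled out by those attributes alone. The record fields below are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (a basic k-anonymity check)."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in groups.values())

# Hypothetical dataset: coarsened age and truncated ZIP as quasi-identifiers.
records = [
    {"age_band": "30-39", "zip3": "100", "diagnosis": "A"},
    {"age_band": "30-39", "zip3": "100", "diagnosis": "B"},
    {"age_band": "40-49", "zip3": "112", "diagnosis": "C"},
]

# The lone 40-49/112 record makes the dataset fail 2-anonymity.
print(is_k_anonymous(records, ["age_band", "zip3"], 2))  # False
```

Measures like this quantify re-identification risk without revealing any one person's sensitive attribute, which is one way privacy considerations can be built into a fairness audit.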