Presented by Manojit Nand – Senior Data Scientist at JPMorgan Chase & Co.
Understanding how algorithms can reinforce societal biases has become an important topic in data science. Recent methods for auditing models for fairness often require access to potentially sensitive demographic information, placing algorithmic fairness in conflict with individual privacy.
For example, gender recognition technology struggles to recognize the gender of transgender individuals. Developing more accurate models would require information that could “out” these individuals, putting their social, psychological, and physical safety at risk.
We will discuss social science perspectives on privacy and how these paradigms can be incorporated into statistical measures of anonymity. I will emphasize the importance of ensuring the safety and privacy of all individuals represented in our data, even at the cost of model fairness.
Table of contents
- Privacy and Algorithmic Fairness (00:29:39)
- Title: Privacy and Algorithmic Fairness
- Release date: September 2019
- Publisher(s): Data Science Salon