Chapter 8. Auditing for Neural Networks
We now return our focus to one of the frontiers of data science: the use of neural networks in image and text processing. As we have seen in other examples throughout this book, the promise of neural network approaches cannot be separated from the dangers they can pose when misused. Such misuse happens easily, and often despite the absence of any malicious intent. As a reminder, we have already seen one example of how deep learning models produced résumé-review technology for Amazon that, in effect, judged applications on the basis of whether the applicant was male or female.
In the field of image processing, some of the examples of harm include the following:
- Use of image processing by authoritarian governments to surveil populations (e.g., with the goal of suppressing dissent or minority movements)
- Flawed facial recognition approaches employed by law enforcement that falsely identify individuals as wanted criminals
- Insufficiently trained computer vision systems for automated driving that failed to react appropriately in a given situation because they could not recognize certain objects or environmental conditions
- Generation and distribution of “deepfaked” images with the intent to cause harm (e.g., revenge porn)
- Generation of images and videos that are difficult to distinguish from the real thing (e.g., to steal identities or to create believable fake online personas for misleading or trolling other individuals)
- Harvesting of ...