4
Poisoning Attacks
In the previous chapter, we explored AI security and the limitations of traditional cybersecurity when defending against adversarial AI. We staged our first adversarial attack against a deployed model and surveyed adversarial AI and its main types of attacks.
In this chapter, we will delve deeper into adversarial AI, focusing on attacks that take place during the development of an ML model. These are known as poisoning attacks, and they aim to compromise the model's integrity (a brief illustrative sketch follows the topic list below). We will cover the following topics:
- The basics of poisoning attacks
- Staging a simple poisoning attack
- Backdoor poisoning attacks
- Hidden-trigger backdoor attacks
- Clean-label attacks
- Advanced poisoning attacks
- Mitigation and defenses
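To make the core idea concrete before we begin, the following is a minimal sketch of the simplest form of data poisoning: flipping the labels of a fraction of the training set so that the resulting model's integrity is degraded. The dataset, model, and 20% flip ratio are illustrative assumptions for this sketch, not the chapter's actual examples, which we build up later.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions: toy dataset,
# logistic regression, 20% of training labels flipped).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Generate a toy binary classification dataset and split it
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Baseline: train on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))

# Poison: flip the labels of 20% of the training samples
rng = np.random.default_rng(42)
flip_idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# Retrain on poisoned labels and evaluate on the same clean test set
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("Poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Comparing the two accuracy figures shows how corrupting only part of the training data, without touching the model or the test data, is enough to harm the model's behavior; the rest of the chapter examines more targeted and stealthier variants of this idea.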
By the end of this chapter, ...