July 2024
Intermediate to advanced
602 pages
16h 31m
English
In the previous chapter, we looked at adversarial AI poisoning attacks, which tamper with training data in order to compromise the model’s output at inference time. We saw how an attacker could mislabel samples, inject perturbations that create backdoors triggered at inference time, or inject subtle perturbations without changing labels or being detected.
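To make the first of these attacks concrete, here is a minimal sketch of label-flipping poisoning. The function name `flip_labels` and its parameters are illustrative, not from the chapter: it relabels a random fraction of training samples to an attacker-chosen target class, which is enough to degrade or bias a model trained on the poisoned set.

```python
import numpy as np

def flip_labels(y, flip_fraction=0.1, target_label=1, seed=0):
    """Label-flipping poisoning sketch (illustrative):
    relabel a random subset of training samples to a target label."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_fraction)
    # Pick distinct victim indices and overwrite their labels
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = target_label
    return y_poisoned, idx

# Example: poison 10% of a binary-labeled dataset of all-zero labels
y = np.zeros(100, dtype=int)
y_poisoned, idx = flip_labels(y, flip_fraction=0.1)
print((y_poisoned != y).sum())  # → 10
```

Backdoor and clean-label attacks follow the same data-tampering pattern but perturb the inputs rather than (or in addition to) the labels, which makes them far harder to spot by inspecting labels alone.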
So far, we have assumed that these attacks originate inside our own data science environment. In an increasingly interconnected digital landscape, however, they will not be confined to it.
Supply chain risks are a critical concern for staging poisoning attacks, and for adversarial AI in general. While supply chain vulnerabilities in software ...