Chapter 6. Supply Chain Attacks and Adversarial AI

In the previous chapter, we looked at adversarial AI poisoning attacks, which tamper with training data in order to compromise the model's output at inference time. We saw how an attacker could mislabel samples, inject perturbations that create backdoors triggered at inference time, or inject subtle perturbations that leave labels unchanged and evade detection.
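To make that recap concrete, the following is a minimal sketch of the two poisoning techniques mentioned above, label flipping and backdoor trigger injection, applied to a toy NumPy dataset. The dataset, the fractions, the patch pattern, and the function names (flip_labels, add_backdoor_trigger) are illustrative assumptions, not code from this book.

import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in for training data: 100 grayscale 8x8 "images" with binary labels.
X = rng.random((100, 8, 8)).astype(np.float32)
y = rng.integers(0, 2, size=100)

def flip_labels(y, fraction=0.1, target_label=1, rng=rng):
    """Mislabel a fraction of samples by forcing them to target_label."""
    y_poisoned = y.copy()
    n_poison = int(len(y) * fraction)
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = target_label
    return y_poisoned, idx

def add_backdoor_trigger(X, y, fraction=0.1, target_label=1, rng=rng):
    """Stamp a small pixel-pattern trigger on a fraction of samples and
    relabel them so a model learns to associate the trigger with target_label."""
    X_poisoned, y_poisoned = X.copy(), y.copy()
    n_poison = int(len(X) * fraction)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_poisoned[idx, -2:, -2:] = 1.0   # 2x2 bright patch in one corner acts as the trigger
    y_poisoned[idx] = target_label
    return X_poisoned, y_poisoned, idx

y_flipped, flipped_idx = flip_labels(y)
X_bd, y_bd, bd_idx = add_backdoor_trigger(X, y)
print(f"Flipped {len(flipped_idx)} labels; backdoored {len(bd_idx)} samples")

At inference time, the same trigger patch stamped onto any clean input would steer a model trained on the backdoored data toward the attacker's target label, which is what makes these attacks hard to spot through ordinary accuracy metrics.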

So far, we have assumed that these attacks take place within our own environment. In an increasingly interconnected digital landscape, however, they are not confined to our data science environment.

Supply chain risks are a critical concern for staging poisoning attacks, and for adversarial AI in general. While supply chain vulnerabilities in software ...
