Part 3: Attacks on Deployed AI
In this part, you will learn how to attack AI systems after they have been developed and deployed. We cover evasion attacks: what they are, the role of carefully crafted inputs called perturbations in evading a model, and popular techniques for generating them. You will use ART to stage evasion attacks against image recognition models and TextAttack against NLP models. We also cover privacy attacks: you will learn how adversaries steal models by building close approximations with model extraction attacks, reconstruct training data from model outputs, and use advanced adversarial techniques to infer sensitive data from model responses. Finally, we look at mitigations and defenses, and you will learn both basic and advanced techniques to protect ...
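To give a flavor of the perturbation-based evasion attacks discussed above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the popular perturbation techniques, applied to a toy logistic-regression "model". It uses plain NumPy rather than ART, and the weights and inputs are illustrative values, not from any trained model:

```python
import numpy as np

# Toy logistic-regression model: illustrative weights, not a trained model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability the model assigns to class 1."""
    return sigmoid(x @ w + b)

def fgsm_perturb(x, y, eps):
    """One FGSM step: nudge each feature by eps in the direction
    that increases the loss. For logistic loss, dL/dx = (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.2, -0.1, 0.4])  # a benign input the model classifies as 1
x_adv = fgsm_perturb(x, y=1.0, eps=0.3)

print(predict(x))      # ~0.67: classified as class 1
print(predict(x_adv))  # ~0.41: the small perturbation flips the prediction
```

Libraries like ART wrap the same idea (e.g. its `FastGradientMethod` attack) for real image classifiers, where the per-pixel perturbation is small enough to be imperceptible to a human.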