July 2024
Intermediate to advanced
602 pages
16h 31m
English
In this part, we will cover adversarial attacks that target model development in AI. You will learn the basics of poisoning attacks, which change model behavior and create backdoors, and how to use the Adversarial Robustness Toolbox (ART) to implement different poisoning attacks and their defenses. We will also look at other approaches to affecting a model, such as tampering with it via Trojan horses, and we will build an Android app to demonstrate this in action. Finally, we will look at how attackers can use packages, pre-trained models, pickle serialization, and public datasets to attack model integrity without having direct access to our development environment. You will learn how to mitigate these threats ...
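To illustrate the pickle-serialization risk mentioned above, here is a minimal sketch of why loading an untrusted pickled "model" is dangerous. The `MaliciousModel` class is a hypothetical stand-in, not code from the book: Python's `__reduce__` hook lets a pickled object specify an arbitrary callable that runs during deserialization.

```python
import pickle

class MaliciousModel:
    """Hypothetical stand-in for a tampered model file."""
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object on load;
        # an attacker can return ANY callable (e.g. os.system).
        # A harmless eval is used here to show code runs at load time.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # executes eval("6 * 7") during deserialization
print(result)                   # → 42: attacker-chosen code ran, not a model
```

This is why safetensors-style formats or restricted unpicklers are preferred for sharing pre-trained models.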