Chapter 3. Mitigating Inference Risk by Avoiding Adversarial Machine Learning Attacks
Many adversarial attacks don’t occur directly through data, as described in Chapter 2. Instead, they target the machine learning (ML) algorithms or, more often, the resulting models. Such an attack is termed adversarial ML because it involves someone deliberately attacking the software. In other words, unlike data attacks, where accidental damage, inappropriate selection of models or algorithms, or human mistakes come into play, this form of attack is all about someone intentionally causing damage to achieve a goal.
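To make the idea concrete, here is a minimal sketch of one well-known model-level attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic classifier. The weights, bias, and input below are illustrative values chosen for this sketch, not anything from the book; the point is only to show how an attacker who can query gradients can nudge an input until the model's answer flips.

```python
import numpy as np

# Hypothetical linear (logistic) classifier; weights and bias are
# made-up illustrative values, not a trained model.
w = np.array([2.0, -3.0, 1.0])   # model weights
b = 0.5                          # model bias

def predict(x):
    """Return the model's probability for the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input that the model confidently labels positive.
x = np.array([1.0, 0.2, 0.3])

# FGSM: for a logistic model with true label y, the gradient of the
# log loss with respect to the input is (p - y) * w.
y = 1.0
p = predict(x)
grad = (p - y) * w

# Step a small amount in the direction that increases the loss.
epsilon = 0.6
x_adv = x + epsilon * np.sign(grad)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

Even though the perturbation is bounded per feature by epsilon, the model's score drops from well above 0.5 to well below it, which is exactly the "particular result" an adversary is after.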
Attacking an ML algorithm or model is meant to elicit a particular result. The result isn’t always achieved, but there is a ...