14
Analyzing Adversarial Performance
An adversary, in the context of machine learning models, is an entity or system that actively seeks to exploit or undermine the performance, integrity, or security of those models. Adversaries can be malicious actors, algorithms, or systems designed to target vulnerabilities within machine learning models. They carry out adversarial attacks, intentionally feeding the model misleading or carefully crafted data to deceive it into making incorrect or unintended predictions.
Adversarial attacks can range from subtle perturbations of input data to sophisticated methods that exploit the vulnerabilities of specific algorithms. The objectives of adversaries can vary depending on the context. They ...
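The idea of a subtle input perturbation can be made concrete with a small sketch in the style of the Fast Gradient Sign Method (FGSM), one well-known attack of this kind. The tiny logistic-regression "model", its weights, and the epsilon value below are all illustrative assumptions, not details from the chapter:

```python
# Minimal FGSM-style sketch: nudge the input in the direction that
# increases the model's loss, flipping an otherwise correct prediction.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # small, targeted perturbation

# Hypothetical model and input, chosen purely for illustration.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                 # clean input, true label 1
clean_pred = sigmoid(w @ x + b) > 0.5    # model classifies the clean input correctly

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.8)
adv_pred = sigmoid(w @ x_adv + b) > 0.5  # the perturbed input is now misclassified
```

Even this toy example shows the core mechanic: the perturbation is computed from the model's own gradient, so a modest change to the input can be aimed precisely at the decision boundary.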