Chapter 2. Attack Motivations
DNN technology is now part of our lives. For example, digital assistants (such as Amazon Alexa, Apple’s Siri, Google Home, and Microsoft’s Cortana) use deep learning models to extract meaning from speech audio. Many algorithms that enable and curate online interactions (such as web searching) exploit DNNs to understand the data being managed. Increasingly, deep learning models are being used in safety-critical applications, such as autonomous vehicles.
Many AI technologies take data directly from the physical world (from cameras, for example) or from digital representations of that data intended for human consumption (such as images uploaded to social media sites). This is potentially problematic: whenever a computer system processes data from an untrusted source, it may open a vulnerability. Motivations for creating adversarial input to exploit these vulnerabilities are diverse, but we can divide them into the following broad categories:
- Evasion: Hiding content from automated digital analysis. For example, see "Circumventing Web Filters", "Camouflage from Surveillance", or "Personal Privacy Online".
- Influence: Affecting automated decisions for personal, commercial, or organizational gain. See, for example, "Online Reputation and Brand Management".
- Confusion: Creating chaos to discredit or disrupt an organization. See, for example, "Autonomous Vehicle Confusion" or "Voice Controlled Devices".
This chapter presents some possible motivations for creating adversarial input.