Chapter 8. Physical-World Attacks

The previous chapters focused on how adversarial input might be generated through, for example, perturbation of a digital image or distortion of digital audio data. However, there are many occasions when an attacker does not have access to a digital format of the data; the attacker may only be able to affect the physical world from which the data will be generated. The distinction depends on whether the target processing system takes its input from the outside world in the form of digital content (uploads to a social media site, for example) or directly from a sensor (such as a surveillance camera). The resulting threat in the physical-world scenario is quite different from the digital scenarios previously discussed.

Generating adversarial examples in the physical world poses a new set of challenges to the adversary. Now the attacker needs to create, or alter, something that exists in real life so that it incorporates a physical manifestation of an adversarial perturbation or patch. In the case of adversarial data received via a camera, the thing being altered may be a 2D print or a 3D object. Similarly, a microphone might receive adversarial distortion from crafted audio samples that are played in the environment, perhaps through a digital device such as a computer or television. How, for example, does the attacker ensure that an adversarial object remains robust to lighting conditions or camera position? Or how could it be possible to fool a ...
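The robustness question raised above is commonly addressed by optimizing the perturbation over a distribution of transformations rather than a single view. The following is a minimal toy sketch of that idea (Expectation over Transformation) using NumPy: the "classifier" is a hypothetical linear score `w·x`, and the "physical" transform is a random brightness scale plus sensor noise; a real attack would use a neural network and richer transforms such as rotation, perspective, and printing distortion.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # stand-in for model weights (assumed, not a real model)
x = rng.normal(size=16)          # stand-in for the captured image of the object

def transform(img):
    """Sample a random 'physical-world' transform: brightness scale + noise."""
    brightness = rng.uniform(0.5, 1.5)
    noise = rng.normal(scale=0.05, size=img.shape)
    return brightness * img + noise

def eot_attack(x, w, steps=100, lr=0.1, samples=8):
    """Optimize a perturbation that lowers the score in expectation
    over sampled transforms, not just for one fixed view."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(samples):
            b = rng.uniform(0.5, 1.5)
            # score = w . (b * (x + delta) + noise), so d(score)/d(delta) = b * w
            grad += b * w
        grad /= samples
        delta -= lr * grad       # gradient descent pushes the expected score down
    return delta

delta = eot_attack(x, w)
base = np.mean([w @ transform(x) for _ in range(100)])
adv = np.mean([w @ transform(x + delta) for _ in range(100)])
print(adv < base)  # the perturbation lowers the expected score across transforms
```

The key point is the averaging step: because the gradient is taken in expectation over the transform distribution, the resulting perturbation does not depend on any one brightness level or viewing condition, which is exactly what a physical adversarial object needs.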
