We begin by importing a pre-trained ABS model (Step 1). In Steps 2 and 3, we define a convenience function to predict a batch of MNIST images and use it to verify that the model is working properly. Next, we wrap the model with Foolbox in preparation for testing its adversarial robustness (Step 4). Note that, once a model is wrapped, Foolbox lets you attack either TensorFlow or PyTorch models through the same API. Nice! In Step 5, we select an MNIST image to serve as the medium for our attack. To clarify, this image gets tweaked and mutated until the result fools the model. In Step 6, we choose the attack type we want to implement: a boundary attack, which is a decision-based attack that starts from a large adversarial perturbation and then gradually shrinks it while keeping the image adversarial.
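
The following is a minimal sketch of how Steps 2 through 6 might look, assuming the pre-trained PyTorch ABS classifier from Step 1 is available as model, that images and labels hold a small MNIST batch as NumPy arrays (shape (N, 1, 28, 28), values in [0, 1]), and that the Foolbox 2.x API that was current at the time of writing is installed; the variable names are illustrative rather than the recipe's own:

import numpy as np
import foolbox

# Steps 2 and 3: a convenience function to predict a batch of MNIST images,
# used here to confirm that the model classifies clean digits correctly.
def predict_batch(fmodel, batch):
    return fmodel.forward(batch).argmax(axis=-1)

# Step 4: wrap the PyTorch model; a TensorFlow model would be wrapped the same
# way (foolbox.models.TensorFlowModel) and attacked through the identical API.
fmodel = foolbox.models.PyTorchModel(model, bounds=(0, 1), num_classes=10)
print("clean accuracy:", np.mean(predict_batch(fmodel, images) == labels))

# Step 5: pick a single MNIST image (kept as a batch of one) as the attack medium.
image, label = images[:1], labels[:1]

# Step 6: run the decision-based boundary attack against the wrapped model.
attack = foolbox.attacks.BoundaryAttack(fmodel)
adversarial = attack(image, label)   # a perturbed copy of image that fools the model

print("prediction on adversarial image:", predict_batch(fmodel, adversarial))

Because the boundary attack is decision-based, it only needs the wrapped model's predicted labels rather than gradients, which is part of why the same Foolbox wrapper works across frameworks.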