The following are some of the most commonly used methods to develop adversarial attacks:
- Fast gradient sign method (FGSM): This method generates adversarial examples by perturbing the input in the direction of the sign of the loss gradient with respect to that input, computed via backpropagation through the victim DNN.
- Jacobian-based saliency map attack (JSMA): This attack iteratively modifies the input features with the greatest influence on the output (such as the most salient pixels of an image) to create adversarial examples, guided by a Jacobian-based saliency map that characterizes the relationship between the target network's inputs and outputs.
- Carlini and Wagner (C&W): This adversarial attack methodology is perhaps the most reliable, and the most difficult to detect. The ...
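To make the first method above concrete, here is a minimal sketch of FGSM against a toy logistic-regression "victim". The weights, bias, input, and epsilon below are illustrative assumptions, and the loss gradient is derived analytically for binary cross-entropy rather than with an autodiff framework; real attacks target a trained DNN and obtain the gradient via backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM on a logistic-regression victim.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w, so no
    autodiff library is needed in this toy setting.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w                  # dL/dx for BCE loss
    return x + eps * np.sign(grad_x)      # step in the gradient's sign direction

# Toy example (all values hypothetical): a point classified as
# class 1 is pushed across the decision boundary toward class 0.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                  # clean input: w.x + b = 1.5 > 0 -> class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.6)
# w.x_adv + b = -0.3 < 0 -> the perturbed input is now classified as class 0
```

Note that the perturbation is bounded: every coordinate of `x_adv` differs from `x` by at most `eps`, which is why FGSM examples can flip the prediction while remaining visually close to the original input.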