Strengthening Deep Neural Networks

Book description

As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately “fool” them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much of AI—are used daily to process image, audio, and video data.

Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you’re a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.

  • Delve into DNNs and discover how they could be tricked by adversarial input
  • Investigate methods used to generate adversarial input capable of fooling DNNs (a minimal sketch follows this list)
  • Explore real-world scenarios and model the adversarial threat
  • Evaluate neural network robustness and learn methods to increase the resilience of AI systems to adversarial data
  • Examine some ways in which AI might become better at mimicking human perception in years to come
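To give a flavor of the perturbation attacks the book covers (for example, the “Exploiting Model Linearity” discussion in Chapter 6), the sketch below implements the one-step fast gradient sign method. It is illustrative only, not code from the book: it assumes a trained Keras classifier whose image inputs are scaled to [0, 1] and whose outputs are softmax probabilities, and the epsilon value is an arbitrary placeholder.

    import tensorflow as tf

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """One-step FGSM: nudge every pixel by epsilon in the direction
        that most increases the classification loss."""
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
        image = tf.convert_to_tensor(image)
        with tf.GradientTape() as tape:
            tape.watch(image)                    # differentiate w.r.t. the input, not the weights
            loss = loss_fn(label, model(image))  # label holds the true class index, shape (batch,)
        signed_grad = tf.sign(tape.gradient(loss, image))
        # Keep the perturbed image in the valid pixel range (assumed here to be [0, 1]).
        return tf.clip_by_value(image + epsilon * signed_grad, 0.0, 1.0)

Because the gradient is taken with respect to the input rather than the model weights, the attack reuses the same backpropagation machinery as training, which is one reason white box perturbation attacks are so cheap to compute.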

Table of contents

  Preface
    Who Should Read This Book
    How This Book Is Organized
    Conventions Used in This Book
    Using Code Examples
    The Mathematics in This Book
    O’Reilly Online Learning
    How to Contact Us
    Acknowledgments
  I. An Introduction to Fooling AI
  1. Introduction
    A Shallow Introduction to Deep Learning
    A Very Brief History of Deep Learning
    AI “Optical Illusions”: A Surprising Revelation
    What Is “Adversarial Input”?
      Adversarial Perturbation
      Unnatural Adversarial Input
      Adversarial Patches
      Adversarial Examples in the Physical World
    The Broader Field of “Adversarial Machine Learning”
    Implications of Adversarial Input
  2. Attack Motivations
    Circumventing Web Filters
    Online Reputation and Brand Management
    Camouflage from Surveillance
    Personal Privacy Online
    Autonomous Vehicle Confusion
    Voice Controlled Devices
  3. Deep Neural Network (DNN) Fundamentals
    Machine Learning
    A Conceptual Introduction to Deep Learning
    DNN Models as Mathematical Functions
      DNN Inputs and Outputs
      DNN Internals and Feed-Forward Processing
      How a DNN Learns
    Creating a Simple Image Classifier
  4. DNN Processing for Image, Audio, and Video
    Image
      Digital Representation of Images
      DNNs for Image Processing
      Introducing CNNs
    Audio
      Digital Representation of Audio
      DNNs for Audio Processing
      Introducing RNNs
      Speech Processing
    Video
      Digital Representation of Video
      DNNs for Video Processing
    Adversarial Considerations
    Image Classification Using ResNet50
  II. Generating Adversarial Input
  5. The Principles of Adversarial Input
    The Input Space
      Generalizations from Training Data
      Experimenting with Out-of-Distribution Data
    What’s the DNN Thinking?
    Perturbation Attack: Minimum Change, Maximum Impact
    Adversarial Patch: Maximum Distraction
    Measuring Detectability
      A Mathematical Approach to Measuring Perturbation
      Considering Human Perception
    Summary
  6. Methods for Generating Adversarial Perturbation
    White Box Methods
      Searching the Input Space
      Exploiting Model Linearity
      Adversarial Saliency
      Increasing Adversarial Confidence
      Variations on White Box Approaches
    Limited Black Box Methods
    Score-Based Black Box Methods
    Summary
  III. Understanding the Real-World Threat
  7. Attack Patterns for Real-World Systems
    Attack Patterns
      Direct Attack
      Replica Attack
      Transfer Attack
      Universal Transfer Attack
    Reusable Patches and Reusable Perturbation
    Bringing It Together: Hybrid Approaches and Trade-offs
  8. Physical-World Attacks
    Adversarial Objects
      Object Fabrication and Camera Capabilities
      Viewing Angles and Environment
    Adversarial Sound
      Audio Reproduction and Microphone Capabilities
      Audio Positioning and Environment
    The Feasibility of Physical-World Adversarial Examples
  IV. Defense
  9. Evaluating Model Robustness to Adversarial Inputs
    Adversarial Goals, Capabilities, Constraints, and Knowledge
      Goals
      Capabilities, Knowledge, and Access
    Model Evaluation
      Empirically Derived Robustness Metrics
      Theoretically Derived Robustness Metrics
    Summary
  10. Defending Against Adversarial Inputs
    Improving the Model
      Gradient Masking
      Adversarial Training
      Out-of-Distribution Confidence Training
      Randomized Dropout Uncertainty Measurements
    Data Preprocessing
      Preprocessing in the Broader Processing Chain
      Intelligently Removing Adversarial Content
    Concealing the Target
    Building Strong Defenses Against Adversarial Input
      Open Projects
      Taking a Holistic View
  11. Future Trends: Toward Robust AI
    Increasing Robustness Through Outline Recognition
    Multisensory Input
    Object Composition and Hierarchy
    Finally…
  A. Mathematics Terminology Reference
  Index

Product information

  • Title: Strengthening Deep Neural Networks
  • Author(s): Katy Warr
  • Release date: July 2019
  • Publisher(s): O’Reilly Media, Inc.
  • ISBN: 9781492044956