Book description
We're in the midst of an AI research explosion. Deep learning has unlocked superhuman perception, powering the push toward self-driving vehicles, defeating human experts at a variety of difficult games including Go, and even generating essays with shockingly coherent prose. But deciphering these breakthroughs often takes a PhD in machine learning and mathematics.
The updated second edition of this book describes the intuition behind these innovations without jargon or complexity. Python-proficient programmers, software engineering professionals, and computer science majors will be able to reimplement these breakthroughs on their own and reason about them with a level of sophistication that rivals some of the best developers in the field.
 Learn the mathematics behind machine learning jargon
 Examine the foundations of machine learning and neural networks
 Manage problems that arise as you begin to make networks deeper
 Build neural networks that analyze complex images
 Perform effective dimensionality reduction using autoencoders
 Dive deep into sequence analysis to examine language
 Explore methods in interpreting complex machine learning models
 Gain theoretical and practical knowledge on generative modeling
 Understand the fundamentals of reinforcement learning
Table of contents
 Preface
 1. Fundamentals of Linear Algebra for Deep Learning
 2. Fundamentals of Probability
 3. The Neural Network
 4. Training Feed-Forward Neural Networks
 5. Implementing Neural Networks in PyTorch

6. Beyond Gradient Descent
 The Challenges with Gradient Descent
 Local Minima in the Error Surfaces of Deep Networks
 Model Identifiability
 How Pesky Are Spurious Local Minima in Deep Networks?
 Flat Regions in the Error Surface
 When the Gradient Points in the Wrong Direction
 Momentum-Based Optimization
 A Brief View of Second-Order Methods
 Learning Rate Adaptation
 The Philosophy Behind Optimizer Selection
 Summary

7. Convolutional Neural Networks
 Neurons in Human Vision
 The Shortcomings of Feature Selection
 Vanilla Deep Neural Networks Don't Scale
 Filters and Feature Maps
 Full Description of the Convolutional Layer
 Max Pooling
 Full Architectural Description of Convolutional Networks
 Closing the Loop on MNIST with Convolutional Networks
 Image Preprocessing Pipelines Enable More Robust Models
 Accelerating Training with Batch Normalization
 Group Normalization for Memory-Constrained Learning Tasks
 Building a Convolutional Network for CIFAR-10
 Visualizing Learning in Convolutional Networks
 Residual Learning and Skip Connections for Very Deep Networks
 Building a Residual Network with Superhuman Vision
 Leveraging Convolutional Filters to Replicate Artistic Styles
 Learning Convolutional Filters for Other Problem Domains
 Summary

8. Embedding and Representation Learning
 Learning Lower-Dimensional Representations
 Principal Component Analysis
 Motivating the Autoencoder Architecture
 Implementing an Autoencoder in PyTorch
 Denoising to Force Robust Representations
 Sparsity in Autoencoders
 When Context Is More Informative than the Input Vector
 The Word2Vec Framework
 Implementing the Skip-Gram Architecture
 Summary

9. Models for Sequence Analysis
 Analyzing VariableLength Inputs
 Tackling seq2seq with Neural N-Grams
 Implementing a Part-of-Speech Tagger
 Dependency Parsing and SyntaxNet
 Beam Search and Global Normalization
 A Case for Stateful Deep Learning Models
 Recurrent Neural Networks
 The Challenges with Vanishing Gradients
 Long Short-Term Memory Units
 PyTorch Primitives for RNN Models
 Implementing a Sentiment Analysis Model
 Solving seq2seq Tasks with Recurrent Neural Networks
 Augmenting Recurrent Networks with Attention
 Dissecting a Neural Translation Network
 Self-Attention and Transformers
 Summary
 10. Generative Models
 11. Methods in Interpretability

12. Memory Augmented Neural Networks
 Neural Turing Machines
 AttentionBased Memory Access
 NTM Memory Addressing Mechanisms
 Differentiable Neural Computers
 Interference-Free Writing in DNCs
 DNC Memory Reuse
 Temporal Linking of DNC Writes
 Understanding the DNC Read Head
 The DNC Controller Network
 Visualizing the DNC in Action
 Implementing the DNC in PyTorch
 Teaching a DNC to Read and Comprehend
 Summary

13. Deep Reinforcement Learning
 Deep Reinforcement Learning Masters Atari Games
 What Is Reinforcement Learning?
 Markov Decision Processes
 Explore Versus Exploit
 Policy Versus Value Learning
 Pole-Cart with Policy Gradients
 Trust-Region Policy Optimization
 Proximal Policy Optimization

Q-Learning and Deep Q-Networks
 The Bellman Equation
 Issues with Value Iteration
 Approximating the Q-Function
 Deep Q-Network
 Training DQN
 Learning Stability
 Target Q-Network
 Experience Replay
 From Q-Function to Policy
 DQN and the Markov Assumption
 DQN's Solution to the Markov Assumption
 Playing Breakout with DQN
 Building Our Architecture
 Stacking Frames
 Setting Up Training Operations
 Updating Our Target Q-Network
 Implementing Experience Replay
 DQN Main Loop
 DQNAgent Results on Breakout
 Improving and Moving Beyond DQN
 Summary
 Index
 About the Authors
Product information
 Title: Fundamentals of Deep Learning, 2nd Edition
 Author(s):
 Release date: May 2022
 Publisher(s): O'Reilly Media, Inc.
 ISBN: 9781492082187