Book Description
Your one-stop guide to effectively learning and implementing artificial neural networks with Keras
Key Features
 Design and create neural network architectures on different domains using Keras
 Integrate neural network models in your applications using this highly practical guide
 Get ready for the future of neural networks through transfer learning and multi-network prediction models
Book Description
Neural networks are used to solve a wide range of problems in different areas of AI and deep learning.
Hands-On Neural Networks with Keras will start by teaching you the core concepts of neural networks. You will delve into combining different neural network models and work with real-world use cases, including computer vision, natural language understanding, synthetic data generation, and many more. Moving on, you will become well versed with convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, autoencoders, and generative adversarial networks (GANs) using real-world training datasets. We will examine how to use CNNs for image recognition, how to deploy reinforcement learning agents, and more. We will dive into the specific architectures of various networks and then implement each of them in a hands-on manner using industry-grade frameworks.
By the end of this book, you will be highly familiar with all prominent deep learning models and frameworks, and the options you have when applying deep learning to real-world scenarios and embedding artificial intelligence as the core fabric of your organization.
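Every architecture covered in the book, from CNNs to GANs, is ultimately built from layers that transform tensors. As a hedged, library-free illustration (plain NumPy rather than Keras itself, so the function name `dense_forward` is ours, not the book's), here is the computation a single fully connected layer performs:

```python
import numpy as np

def dense_forward(x, weights, bias, activation=np.tanh):
    # One fully connected (Dense) layer: output = activation(x @ W + b)
    return activation(x @ weights + bias)

rng = np.random.default_rng(seed=42)
x = rng.normal(size=(4, 8))   # a batch of 4 samples with 8 features each
W = rng.normal(size=(8, 3))   # weight matrix mapping 8 inputs to 3 units
b = np.zeros(3)               # one bias per unit

out = dense_forward(x, W, b)
print(out.shape)              # (4, 3): one 3-dimensional activation per sample
```

Stacking such layers, and letting backpropagation fit the weights, is exactly what Keras automates for you.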
What you will learn
 Understand the fundamental nature and workflow of predictive data modeling
 Explore how different types of visual and linguistic signals are processed by neural networks
 Dive into the mathematical and statistical ideas behind how networks learn from data
 Design and implement various neural networks such as CNNs, LSTMs, and GANs
 Use different architectures to tackle cognitive tasks and embed intelligence in systems
 Learn how to generate synthetic data and use augmentation strategies to improve your models
 Stay on top of the latest academic and commercial developments in the field of AI
Who this book is for
This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts who are looking to get well versed with different neural network architectures using Keras. Working knowledge of the Python programming language is mandatory.
Table of Contents
 Title Page
 Copyright and Credits
 About Packt
 Contributors
 Preface
 Section 1: Fundamentals of Neural Networks
 Overview of Neural Networks
 A Deeper Dive into Neural Networks

Signal Processing - Data Analysis with Neural Networks
 Processing signals
 Images as numbers
 Feeding a neural network
 Examples of tensors
 Building a model
 Compiling the model
 Evaluating model performance
 Implementing weight regularization in Keras
 Weight regularization experiments
 Implementing dropout regularization in Keras
 Language processing
 The internet movie reviews dataset
 Plotting a single training instance
 One-hot encoding
 Vectorizing features
 Vectorizing labels
 Building a network
 Callbacks
 Accessing model predictions
 Probing the predictions
 Feature-wise normalization
 Cross-validation with the scikit-learn API
 Summary
 Exercises
 Section 2: Advanced Neural Network Architectures

Convolutional Neural Networks
 Why CNNs?
 The birth of vision
 Understanding biological vision
 Conceptualizing spatial invariance
 Defining receptive fields of neurons
 Implementing a hierarchy of neurons
 The birth of the modern CNN
 Designing a CNN
 The convolution operation
 Visualizing feature extraction with filters
 Looking at complex filters
 Summarizing the convolution operation
 Understanding pooling layers
 Implementing CNNs in Keras
 Convolutional layer
 Leveraging a fully connected layer for classification
 Summarizing our model
 Checking model accuracy
 The problem with detecting smiles
 Introducing Keras's functional API
 Verifying the number of channels per layer
 Understanding saliency
 Visualizing saliency maps with ResNet50
 Loading pictures from a local directory
 Using Keras's visualization module
 Searching through layers
 Exercise
 Gradient-weighted class activation mapping
 Visualizing class activations with Keras-vis
 Using the pretrained model for prediction
 Visualizing maximal activations per output class
 Converging a model
 Using multiple filter indices to hallucinate
 Problems with CNNs
 Neural network pareidolia
 Summary

Recurrent Neural Networks
 Modeling sequences
 Using RNNs for sequential modeling
 Summarizing different types of sequence processing tasks
 Predicting an output per time step
 Backpropagation through time
 Exploding and vanishing gradients
 GRUs
 Building character-level language models in Keras
 Statistics of character modeling
 The purpose of controlling stochasticity
 Testing different RNN models
 Building a SimpleRNN
 Building GRUs
 On processing reality sequentially
 Bidirectional layer in Keras
 Visualizing output values
 Summary
 Further reading
 Exercise

Long Short-Term Memory Networks
 On processing complex sequences
 The LSTM network
 Dissecting the LSTM
 LSTM memory block
 Visualizing the flow of information
 Computing contender memory
 Computing activations per timestep
 Variations of LSTM and performance
 Understanding peephole connections
 Importance of timing and counting
 Putting our knowledge to use
 On modeling stock market data
 Denoising the data
 Implementing exponential smoothing
 The problem with one-step-ahead predictions
 Creating sequences of observations
 Building LSTMs
 Closing comments
 Summary
 Exercises

Reinforcement Learning with Deep Q-Networks
 On reward and gratification
 Conditioning machines with reinforcement learning
 The explore-exploit dilemma
 Path to artificial general intelligence
 Simulating environments
 A selfdriving taxi cab
 Tradeoff between immediate and future rewards
 Discounting future rewards
 Markov decision process
 Understanding policy functions
 Assessing the value of a state
 Assessing the quality of an action
 Using the Bellman equation
 Updating the Bellman equation iteratively
 Why use neural networks?
 Performing a forward pass in Q-learning
 Performing a backward pass in Q-learning
 Deep Q-learning in Keras
 Balancing exploration with exploitation
 Initializing the deep Q-learning agent
 Double Q-learning
 Dueling network architecture
 Exercise
 Summary
 Section 3: Hybrid Model Architecture

Autoencoders
 Why autoencoders?
 Automatically encoding information
 Understanding the limitations of autoencoders
 Breaking down the autoencoder
 Training an autoencoder
 Overviewing autoencoder archetypes
 Network size and representational power
 Understanding regularization in autoencoders
 Regularization with sparse autoencoders
 Probing the data
 Building the verification model
 Designing a deep autoencoder
 Using functional API to design autoencoders
 Deep convolutional autoencoder
 Compiling and training the model
 Testing and visualizing the results
 Denoising autoencoders
 Training the denoising network
 Summary
 Exercise

Generative Networks
 Replicating versus generating content
 Understanding the notion of latent space
 Diving deeper into generative networks
 Using randomness to augment outputs
 Sampling from the latent space
 Understanding types of generative networks
 Understanding VAEs
 Designing a VAE in Keras
 Building the encoding module in a VAE
 Building the decoder module
 Visualizing the latent space
 Latent space sampling and output generation
 Exploring GANs
 Diving deeper into GANs
 Designing a GAN in Keras
 Designing the generator module
 Designing the discriminator module
 Putting the GAN together
 The training function
 Defining the discriminator labels
 Training the generator per batch
 Executing the training session
 Conclusion
 Summary
 Section 4: Road Ahead

Contemplating Present and Future Developments
 Sharing representations with transfer learning
 Concluding our experiments
 Learning representations
 Limits of current neural networks
 Encouraging sparse representation learning
 Tuning hyperparameters
 Automatic optimization and evolutionary algorithms
 Multi-network predictions and ensemble models
 The future of AI and neural networks
 The road ahead
 Problems with classical computing
 The advent of quantum computing
 Quantum neural networks
 Technology and society
 Contemplating our future
 Summary 
 Other Books You May Enjoy
Product Information
 Title: Hands-On Neural Networks with Keras
 Author(s):
 Release date: March 2019
 Publisher(s): Packt Publishing
 ISBN: 9781789536089