Book description
Your one-stop guide to learning and implementing artificial neural networks with Keras effectively
Key Features
- Design and create neural network architectures on different domains using Keras
- Integrate neural network models in your applications using this highly practical guide
- Get ready for the future of neural networks through transfer learning and multi-network ensemble models
Neural networks are used to solve a wide range of problems in different areas of AI and deep learning.
Hands-On Neural Networks with Keras starts by teaching you the core concepts of neural networks. You will delve into combining different neural network models and work with real-world use cases, including computer vision, natural language understanding, and synthetic data generation. Moving on, you will become well versed with convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, autoencoders, and generative adversarial networks (GANs) using real-world training datasets. You will examine how to use CNNs for image recognition, how to build reinforcement learning agents, and more. We will dive into the specific architectures of various networks and then implement each of them in a hands-on manner using industry-grade frameworks.
By the end of this book, you will be highly familiar with all prominent deep learning models and frameworks, and the options you have when applying deep learning to real-world scenarios and embedding artificial intelligence as the core fabric of your organization.
What you will learn
- Understand the fundamental nature and workflow of predictive data modeling
- Explore how different types of visual and linguistic signals are processed by neural networks
- Dive into the mathematical and statistical ideas behind how networks learn from data
- Design and implement various neural networks such as CNNs, LSTMs, and GANs
- Use different architectures to tackle cognitive tasks and embed intelligence in systems
- Learn how to generate synthetic data and use augmentation strategies to improve your models
- Stay on top of the latest academic and commercial developments in the field of AI
Who this book is for
This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts who want to become well versed with different neural network architectures using Keras. Working knowledge of the Python programming language is mandatory.
Table of contents
- Title Page
- Copyright and Credits
- About Packt
- Contributors
- Preface
- Section 1: Fundamentals of Neural Networks
- Overview of Neural Networks
- A Deeper Dive into Neural Networks
- Signal Processing - Data Analysis with Neural Networks
- Processing signals
- Images as numbers
- Feeding a neural network
- Examples of tensors
- Building a model
- Compiling the model
- Evaluating model performance
- Implementing weight regularization in Keras
- Weight regularization experiments
- Implementing dropout regularization in Keras
- Language processing
- The internet movie reviews dataset
- Plotting a single training instance
- One-hot encoding
- Vectorizing features
- Vectorizing labels
- Building a network
- Callbacks
- Accessing model predictions
- Probing the predictions
- Feature-wise normalization
- Cross validation with scikit-learn API
- Summary
- Exercises
- Section 2: Advanced Neural Network Architectures
- Convolutional Neural Networks
- Why CNNs?
- The birth of vision
- Understanding biological vision
- Conceptualizing spatial invariance
- Defining receptive fields of neurons
- Implementing a hierarchy of neurons
- The birth of the modern CNN
- Designing a CNN
- The convolution operation
- Visualizing feature extraction with filters
- Looking at complex filters
- Summarizing the convolution operation
- Understanding pooling layers
- Implementing CNNs in Keras
- Convolutional layer
- Leveraging a fully connected layer for classification
- Summarizing our model
- Checking model accuracy
- The problem with detecting smiles
- Introducing Keras's functional API
- Verifying the number of channels per layer
- Understanding saliency
- Visualizing saliency maps with ResNet50
- Loading pictures from a local directory
- Using Keras's visualization module
- Searching through layers
- Exercise
- Gradient weighted class activation mapping
- Visualizing class activations with Keras-vis
- Using the pretrained model for prediction
- Visualizing maximal activations per output class
- Converging a model
- Using multiple filter indices to hallucinate
- Problems with CNNs
- Neural network pareidolia
- Summary
- Recurrent Neural Networks
- Modeling sequences
- Using RNNs for sequential modeling
- Summarizing different types of sequence processing tasks
- Predicting an output per time step
- Backpropagation through time
- Exploding and vanishing gradients
- GRUs
- Building character-level language models in Keras
- Statistics of character modeling
- The purpose of controlling stochasticity
- Testing different RNN models
- Building a SimpleRNN
- Building GRUs
- On processing reality sequentially
- Bi-directional layer in Keras
- Visualizing output values
- Summary
- Further reading
- Exercise
- Long Short-Term Memory Networks
- On processing complex sequences
- The LSTM network
- Dissecting the LSTM
- LSTM memory block
- Visualizing the flow of information
- Computing contender memory
- Computing activations per timestep
- Variations of LSTM and performance
- Understanding peephole connections
- Importance of timing and counting
- Putting our knowledge to use
- On modeling stock market data
- Denoising the data
- Implementing exponential smoothing
- The problem with one-step-ahead predictions
- Creating sequences of observations
- Building LSTMs
- Closing comments
- Summary
- Exercises
- Reinforcement Learning with Deep Q-Networks
- On reward and gratification
- Conditioning machines with reinforcement learning
- The explore-exploit dilemma
- Path to artificial general intelligence
- Simulating environments
- A self-driving taxi cab
- Trade-off between immediate and future rewards
- Discounting future rewards
- Markov decision process
- Understanding policy functions
- Assessing the value of a state
- Assessing the quality of an action
- Using the Bellman equation
- Updating the Bellman equation iteratively
- Why use neural networks?
- Performing a forward pass in Q-learning
- Performing a backward pass in Q-learning
- Deep Q-learning in Keras
- Balancing exploration with exploitation
- Initializing the deep Q-learning agent
- Double Q-learning
- Dueling network architecture
- Exercise
- Summary
- Section 3: Hybrid Model Architecture
- Autoencoders
- Why autoencoders?
- Automatically encoding information
- Understanding the limitations of autoencoders
- Breaking down the autoencoder
- Training an autoencoder
- Overviewing autoencoder archetypes
- Network size and representational power
- Understanding regularization in autoencoders
- Regularization with sparse autoencoders
- Probing the data
- Building the verification model
- Designing a deep autoencoder
- Using functional API to design autoencoders
- Deep convolutional autoencoder
- Compiling and training the model
- Testing and visualizing the results
- Denoising autoencoders
- Training the denoising network
- Summary
- Exercise
- Generative Networks
- Replicating versus generating content
- Understanding the notion of latent space
- Diving deeper into generative networks
- Using randomness to augment outputs
- Sampling from the latent space
- Understanding types of generative networks
- Understanding VAEs
- Designing a VAE in Keras
- Building the encoding module in a VAE
- Building the decoder module
- Visualizing the latent space
- Latent space sampling and output generation
- Exploring GANs
- Diving deeper into GANs
- Designing a GAN in Keras
- Designing the generator module
- Designing the discriminator module
- Putting the GAN together
- The training function
- Defining the discriminator labels
- Training the generator per batch
- Executing the training session
- Conclusion
- Summary
- Section 4: Road Ahead
- Contemplating Present and Future Developments
- Sharing representations with transfer learning
- Concluding our experiments
- Learning representations
- Limits of current neural networks
- Encouraging sparse representation learning
- Tuning hyperparameters
- Automatic optimization and evolutionary algorithms
- Multi-network predictions and ensemble models
- The future of AI and neural networks
- The road ahead
- Problems with classical computing
- The advent of quantum computing
- Quantum neural networks
- Technology and society
- Contemplating our future
- Summary
- Other Books You May Enjoy
Product information
- Title: Hands-On Neural Networks with Keras
- Author(s):
- Release date: March 2019
- Publisher(s): Packt Publishing
- ISBN: 9781789536089