A practical tour of prediction and control in Reinforcement Learning using OpenAI Gym, Python, and TensorFlow
About This Video
- Learn how to solve Reinforcement Learning problems with a variety of strategies.
- Use Python, TensorFlow, NumPy, and OpenAI Gym to understand Reinforcement Learning theory.
- A fast-paced introduction to RL concepts, frameworks, and algorithms, with hands-on implementation of Reinforcement Learning models.
Reinforcement learning (RL) is hot! This branch of machine learning powers AlphaGo and DeepMind's Atari AI. It allows programmers to create software agents that learn to take optimal actions to maximize reward by trying out different strategies in a given environment.
This course will take you through all the core concepts in Reinforcement Learning, transforming a theoretical subject into tangible Python coding exercises with the help of OpenAI Gym. The videos will first guide you through the Gym environment, solving the CartPole-v0 toy robotics problem, before moving on to coding up and solving a multi-armed bandit problem in Python. As the course ramps up, it shows you how to use dynamic programming and TensorFlow-based neural networks to solve GridWorld, another OpenAI Gym challenge. Lastly, we take on the Blackjack challenge and deploy model-free algorithms that leverage Monte Carlo methods and Temporal Difference (TD, more specifically SARSA) techniques.
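As a small taste of the multi-armed bandit material, here is a minimal sketch of the epsilon-greedy strategy in plain Python. The arm payout probabilities and parameter values are illustrative, not taken from the course:

```python
import random

def epsilon_greedy_bandit(probs, n_steps=5000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli multi-armed bandit.

    probs: true (hidden) payout probability of each arm.
    Returns (estimates, counts): per-arm value estimates and pull counts.
    """
    rng = random.Random(seed)
    n_arms = len(probs)
    estimates = [0.0] * n_arms   # sample-average reward estimate per arm
    counts = [0] * n_arms        # how many times each arm was pulled

    for _ in range(n_steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        # incremental mean update: Q <- Q + (r - Q) / n
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts

# With enough pulls, the estimates converge toward the hidden probabilities
# and the agent pulls the best arm most of the time.
estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

The course builds this idea up properly, including the contextual-bandit variant covered in Chapter 4.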
The scope of Reinforcement Learning applications outside toy examples is immense. Reinforcement Learning can optimize agricultural yield in IoT powered greenhouses, and reduce power consumption in data centers. It's grown in demand to the point where its applications range from controlling robots to extracting insights from images and natural language data. By the end of this course, you will not only be able to solve these problems but will also be able to use Reinforcement Learning as a problem-solving strategy and use different algorithms to solve these problems.
This course uses Python 3.6. While not the latest version available, it remains relevant and informative for legacy users of Python.
Table of Contents
- Chapter 1 : Getting Started With Reinforcement Learning Using OpenAI Gym
- Chapter 2 : Lights, Camera, Action – Building Blocks of Reinforcement Learning
- Chapter 3 : The Multi-Armed Bandit
- Chapter 4 : The Contextual Bandit
- Chapter 5 : Dynamic Programming – Prediction, Control, and Value Approximation
- Visualizing Dynamic Programming in GridWorld in Your Browser 00:11:40
- Understanding Prediction Through Building a Policy Evaluation Algorithm 00:11:07
- Understanding Control Through Building a Policy Iteration Algorithm 00:11:07
- Building a Value Iteration Algorithm 00:09:45
- Linking It All Together in the Web-Based GridWorld Visualization 00:05:49
- Chapter 6 : Markov Decision Processes and Neural Networks
- Understanding Markov Decision Process and Dynamic Programming in CartPole-v0 00:07:00
- Crafting a Neural Network Using TensorFlow 00:09:33
- Crafting a Neural Network to Predict the Value of Being in Different Environment States 00:08:22
- Training the Agent in CartPole-v0 00:11:45
- Visualizing and Understanding How Your Software Agent Has Performed 00:06:14
- Chapter 7 : Model-Free Prediction and Control With Monte Carlo (MC)
- Running the Blackjack Environment From the OpenAI Gym 00:04:36
- Tallying Every Outcome of an Agent Playing Blackjack Using MC 00:08:58
- Visualizing the Outcomes of a Simple Blackjack Strategy 00:08:22
- Control – Building a Very Simple Epsilon-Greedy Policy 00:08:17
- Visualizing the Outcomes of the Epsilon-Greedy Policy 00:04:59
- Chapter 8 : Model-Free Prediction and Control with Temporal Difference (TD)
- Title: Hands-On Reinforcement Learning with Python
- Release date: March 2018
- Publisher(s): Packt Publishing
- ISBN: 9781788392402