Chapter 7. DQN Extensions

In the previous chapter, we implemented the Deep Q-Network (DQN) model published by DeepMind in 2015. That paper had a significant effect on the Reinforcement Learning (RL) field by demonstrating that, contrary to common belief, it is possible to use nonlinear approximators in RL. This proof of concept stimulated great interest in deep Q-learning in particular and in deep RL in general.

Since then, many improvements and tweaks to the basic architecture have been proposed, which significantly improve the convergence, stability, and sample efficiency of the original DQN. In this chapter, we'll take a deeper look at some of those ideas. Very conveniently, in October 2017, DeepMind published a paper ...
