Double DQN
The next fruitful idea on how to improve the basic DQN came from DeepMind researchers in the paper Deep Reinforcement Learning with Double Q-Learning ([3] van Hasselt, Guez, and Silver, 2015). In it, the authors demonstrated that the basic DQN tends to overestimate Q-values, which can hurt training performance and sometimes lead to suboptimal policies. The root cause of this is the max operation in the Bellman equation, but the rigorous proof is too long to include here. As a solution to this problem, the authors proposed a small modification of the Bellman update.
In the basic DQN, our target value for Q looked like this:
Q(s_t, a_t) = r_t + \gamma \max_a Q'(s_{t+1}, a)

Here, Q'(s_{t+1}, a) are the Q-values calculated using our target network, a copy of the trained network that we synchronize periodically. The authors of the paper proposed choosing the action for the next state using the trained network, but taking the Q-value of that action from the target network, which gives the modified target:

Q(s_t, a_t) = r_t + \gamma Q'(s_{t+1}, \arg\max_a Q(s_{t+1}, a))
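To make the difference concrete, below is a minimal PyTorch sketch (not the book's exact code) of computing target Q-values for a batch of transitions both ways; the names calc_target_q, net, tgt_net, rewards, dones, and next_states are illustrative assumptions rather than identifiers from the original.

import torch

def calc_target_q(net, tgt_net, rewards, dones, next_states, gamma=0.99, double=True):
    # rewards: float tensor of shape (batch,); dones: bool tensor of shape (batch,)
    # net, tgt_net: networks mapping a batch of states to per-action Q-values
    with torch.no_grad():
        if double:
            # Double DQN: select the best next action with the trained network...
            next_actions = net(next_states).argmax(dim=1)
            # ...but take that action's value from the target network
            next_q = tgt_net(next_states).gather(1, next_actions.unsqueeze(-1)).squeeze(-1)
        else:
            # Basic DQN: both selection and evaluation use the target network
            next_q = tgt_net(next_states).max(dim=1).values
        # Terminal transitions contribute no discounted future value
        next_q[dones] = 0.0
    return rewards + gamma * next_q

The only change from the basic DQN is which network picks the argmax action; decoupling action selection from action evaluation is what reduces the overestimation.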