The next fruitful idea on how to improve a basic DQN came from DeepMind researchers in a paper titled Deep Reinforcement Learning with Double Q-Learning (van Hasselt, Guez, and Silver, 2015). In the paper, the authors demonstrated that the basic DQN has a tendency to overestimate Q-values, which may be harmful to training performance and can sometimes lead to suboptimal policies. The root cause of this is the max operation in the Bellman equation, but the strict proof is too complicated to write down here. As a solution to this problem, the authors proposed modifying the Bellman update a bit.
In the basic DQN, our target value for Q looked like this:

Q(s_t, a_t) = r_t + γ max_a Q'(s_{t+1}, a)

Q'(s_{t+1}, a) was the Q-values calculated using our target network, so we update with the maximum of the target network's Q-values over the actions available in the next state.
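The difference between the two targets can be illustrated with a small numeric sketch. The Q-value arrays below are made up for illustration: the basic DQN takes the max over the target network's own values, while Double DQN selects the action with the trained network and evaluates it with the target network.

```python
import numpy as np

# Hypothetical Q-values for the next state s_{t+1} (illustrative numbers):
q_next = np.array([1.0, 3.0, 2.0])         # trained network: Q(s_{t+1}, a)
q_next_target = np.array([1.5, 2.0, 4.0])  # target network: Q'(s_{t+1}, a)
reward, gamma = 1.0, 0.99

# Basic DQN target: max over the target network's values,
# which is the source of the overestimation bias.
basic_target = reward + gamma * q_next_target.max()

# Double DQN target: pick the action using the trained network,
# then take that action's value from the target network.
best_action = q_next.argmax()
double_target = reward + gamma * q_next_target[best_action]
```

With these numbers, the basic target uses the target network's largest value (4.0), while the Double DQN target uses the value of the trained network's preferred action (2.0), giving a smaller, less optimistic estimate.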