A simple technique, in which the Q-network is frozen for a fixed number of steps and the frozen copy is used to generate the Q-learning targets while the parameters of the online deep Q-network are updated, was shown to be considerably effective in reducing oscillations and stabilizing Q-learning with neural network approximation. Although the technique is relatively simple, it turns out to be very helpful for stable learning.
The implementation is straightforward. We need to make two changes to our existing deep Q-learner class:
- Create a target Q-network and periodically sync/update it with the online Q-network
- Use the target Q-network to generate the Q-learning targets ...
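The two changes above can be sketched as follows. This is a minimal illustration, not the full learner: the class and names (`DQNAgent`, `sync_every`, `maybe_sync_target`, and the single linear layer standing in for the deep Q-network) are assumptions made for the example, and a real agent would use a neural network library and an experience-replay buffer.

```python
import numpy as np

class DQNAgent:
    """Minimal sketch of a Q-learner with a frozen target network.

    A single linear layer stands in for the deep Q-network; the names
    used here are illustrative, not part of any specific library.
    """

    def __init__(self, obs_dim, n_actions, gamma=0.99, sync_every=1000, seed=0):
        rng = np.random.default_rng(seed)
        # Online Q-network weights (updated every training step).
        self.q_weights = rng.normal(scale=0.1, size=(obs_dim, n_actions))
        # Change 1: the target network starts as an exact copy of the
        # online network and stays frozen between periodic syncs.
        self.target_weights = self.q_weights.copy()
        self.gamma = gamma
        self.sync_every = sync_every
        self.step_count = 0

    def q_values(self, obs, weights):
        # Q(s, a) for all actions under the given weights.
        return obs @ weights

    def td_target(self, reward, next_obs, done):
        # Change 2: targets are generated by the *frozen* target network,
        # not by the online network that is currently being updated.
        if done:
            return reward
        next_q = self.q_values(next_obs, self.target_weights)
        return reward + self.gamma * np.max(next_q)

    def maybe_sync_target(self):
        # Every `sync_every` steps, copy the online weights into the
        # target network; between syncs the targets stay fixed.
        self.step_count += 1
        if self.step_count % self.sync_every == 0:
            self.target_weights = self.q_weights.copy()
```

In a training loop, `td_target` would supply the regression target for each transition, and `maybe_sync_target` would be called once per step, so the target network lags the online network by up to `sync_every` updates.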