Using the target Q-network to stabilize an agent's learning

Freezing the Q-network for a fixed number of steps and using that frozen copy to generate the Q-learning targets for updating the parameters of the deep Q-network was shown to be considerably effective at reducing oscillations and stabilizing Q-learning with neural network approximation. The technique is relatively simple, but it turns out to be very helpful for stable learning.
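
Concretely, if we write the frozen target-network parameters as θ⁻ (this notation is an assumption here, borrowed from the standard DQN formulation), the Q-learning target for a transition (s, a, r, s′) becomes:

$$y = r + \gamma \max_{a'} Q(s', a'; \theta^{-})$$

where θ⁻ is copied from the online parameters θ only once every fixed number of steps, so the targets stay stationary between syncs instead of shifting after every gradient update.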

The implementation is straightforward. We need to make two changes to our existing deep Q-learner class:

  1. Create a target Q-network and sync/update it with the original Q-network periodically
  2. Use the target Q-network to generate the Q-learning targets (a sketch of both changes follows this list)
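
The following is a minimal sketch of both changes, assuming a PyTorch Q-network; the class and attribute names (`DeepQLearner`, `self.Q`, `target_sync_freq`, and so on) are illustrative placeholders, not the book's exact implementation:

```python
import copy
import torch


class DeepQLearner:
    def __init__(self, q_network, gamma=0.99, target_sync_freq=1000):
        self.Q = q_network  # online Q-network, updated every learning step
        # Change 1: create the target Q-network as a frozen copy
        self.Q_target = copy.deepcopy(q_network)
        self.Q_target.eval()
        self.gamma = gamma
        self.target_sync_freq = target_sync_freq
        self.step_count = 0

    def sync_target_network(self):
        # Copy the online network's parameters into the target network
        self.Q_target.load_state_dict(self.Q.state_dict())

    def compute_td_target(self, reward, next_obs, done):
        # Change 2: use the frozen target network for the max over
        # next-state action values, so targets stay fixed between syncs
        with torch.no_grad():
            max_next_q = self.Q_target(next_obs).max(dim=1).values
        return reward + self.gamma * max_next_q * (1 - done.float())

    def learn_step(self, batch):
        self.step_count += 1
        # ... compute the loss against compute_td_target(...) and
        # take a gradient step on self.Q (elided in this sketch) ...
        # Change 1 (continued): periodically sync the target network
        if self.step_count % self.target_sync_freq == 0:
            self.sync_target_network()
```

Note that the target network is never trained directly; it only ever receives a full copy of the online network's weights at the sync interval, which is what keeps the targets fixed in between.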
