Prioritized replay buffer

The next very useful idea for improving DQN training was proposed in 2015 in the paper Prioritized Experience Replay ([7] Schaul and others, 2015). This method improves the sample efficiency of the replay buffer by prioritizing samples according to their training loss.
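Concretely, in the proportional variant described in the paper, the probability of sampling transition i is P(i) = p_i^α / Σ_k p_k^α, where p_i is the transition's priority (usually its last absolute TD error plus a small constant to keep it nonzero) and α controls how strongly prioritization is applied; α = 0 recovers plain uniform sampling. Because sampling is no longer uniform, the resulting bias is compensated with importance-sampling weights during the loss calculation.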

The basic DQN used the replay buffer to break the correlation between immediate transitions in our episodes. As we discussed in Chapter 6, Deep Q-Networks, the examples we experience during an episode are highly correlated, as most of the time the environment is "smooth" and doesn't change much as a result of our actions. However, the SGD method assumes that the data we use for training is independent and identically distributed (i.i.d.). To solve this problem, the classic DQN method stores transitions in a large buffer and samples training batches from it uniformly at random, which breaks the correlation between consecutive transitions.
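To make the prioritized version concrete, below is a minimal sketch of a proportional prioritized replay buffer, not the implementation used later in the chapter. The class name PrioritizedReplayBuffer, the alpha and beta arguments, and the update_priorities() method are illustrative choices; a production version would normally use a sum-tree (segment tree) so that sampling costs O(log N) rather than the O(N) scan shown here.

import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay buffer (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float32)
        self.pos = 0

    def append(self, transition):
        # New transitions get the current maximum priority, so every
        # sample is seen at least once before being down-weighted.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)]
        # P(i) = p_i^alpha / sum_k p_k^alpha
        probs = prios ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        samples = [self.buffer[i] for i in indices]
        # Importance-sampling weights compensate for non-uniform sampling.
        weights = (len(self.buffer) * probs[indices]) ** (-beta)
        weights /= weights.max()
        return samples, indices, weights.astype(np.float32)

    def update_priorities(self, indices, new_prios):
        # Called after the training step with |TD error| + small epsilon.
        for idx, prio in zip(indices, new_prios):
            self.priorities[idx] = prio

In a training loop, the weights returned by sample() would be multiplied into the per-sample loss, and after the optimization step the batch's absolute TD errors (plus a small epsilon) would be fed back through update_priorities().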
