
Deep Reinforcement Learning Hands-On

by Oleg Vasilev, Maxim Lapan, Martijn van Otterlo, Mikhail Yurushkin, Basem O. F. Alijla
June 2018
Intermediate to advanced
546 pages
13h 30m
English
Packt Publishing
Content preview from Deep Reinforcement Learning Hands-On

Prioritized replay buffer

The next very useful idea for improving DQN training was proposed in 2015 in the paper Prioritized Experience Replay ([7] Schaul et al., 2015). This method tries to improve the sample efficiency of the replay buffer by prioritizing samples according to their training loss.
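As a rough illustration of this idea, here is a minimal sketch of a proportional prioritized replay buffer in the spirit of Schaul et al. [7]: each transition gets a priority p_i = |delta_i| + eps (the magnitude of its TD error), samples are drawn with probability P(i) proportional to p_i^alpha, and importance-sampling weights w_i = (N * P(i))^(-beta) compensate for the non-uniform sampling. The class and method names below are illustrative assumptions, not the book's code.

import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha          # how strongly prioritization is applied (0 = uniform)
        self.eps = eps              # keeps priorities strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float32)
        self.pos = 0

    def append(self, transition):
        # New transitions get the current maximum priority so that each
        # is sampled at least once before its actual loss is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        samples = [self.buffer[i] for i in indices]
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling; normalizing by the maximum keeps them <= 1.
        weights = (len(self.buffer) * probs[indices]) ** (-beta)
        weights /= weights.max()
        return samples, indices, weights.astype(np.float32)

    def update_priorities(self, indices, td_errors):
        # Priority is proportional to the magnitude of the TD error.
        for idx, err in zip(indices, td_errors):
            self.priorities[idx] = abs(err) + self.eps

In practice, beta is annealed from its initial value toward 1.0 over the course of training, and update_priorities is called with the per-sample TD errors after every training batch.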

The basic DQN used the replay buffer to break the correlation between immediate transitions in our episodes. As we discussed in Chapter 6, Deep Q-Networks, the examples we experience during an episode are highly correlated, as most of the time the environment is "smooth" and doesn't change much in response to our actions. However, the SGD method assumes that the data we use for training is i.i.d. (independent and identically distributed). To solve this problem, ...
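For reference, a plain uniform replay buffer of the kind described above can be sketched in a few lines; the Transition layout and names are illustrative assumptions, not the book's code.

import collections
import random

Transition = collections.namedtuple(
    "Transition", ["state", "action", "reward", "done", "next_state"])

class ReplayBuffer:
    def __init__(self, capacity):
        # A deque silently drops the oldest transition once capacity is reached.
        self.buffer = collections.deque(maxlen=capacity)

    def append(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling breaks up consecutive, correlated steps,
        # approximating the i.i.d. data that SGD expects.
        return random.sample(self.buffer, batch_size)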


ISBN: 9781788834247