Deep Reinforcement Learning Hands-On

by Oleg Vasilev, Maxim Lapan, Martijn van Otterlo, Mikhail Yurushkin, Basem O. F. Alijla
Packt Publishing, June 2018
546 pages · Intermediate to advanced · English
ISBN: 9781788834247

Content preview from Deep Reinforcement Learning Hands-On

Dueling DQN

This improvement to DQN was proposed in 2015, in the paper Dueling Network Architectures for Deep Reinforcement Learning ([8] Wang et al., 2015). The core observation of this paper is that the Q-values Q(s, a) our network is trying to approximate can be divided into two quantities: the value of the state V(s) and the advantage of actions in this state A(s, a). We've seen the quantity V(s) before, as it was the core of the value iteration method from Chapter 5, Tabular Learning and the Bellman Equation. It equals the discounted expected reward achievable from this state. The advantage A(s, a) is supposed to bridge the gap from V(s) to Q(s, a), as, by definition: Q(s, a) = V(s) + A(s, a). In other words, the advantage ...
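The book's examples are written in PyTorch, so below is a minimal sketch (not the book's own code) of how this decomposition typically looks as a network with a shared body and two heads. The names obs_size, n_actions, and hidden, and the fully connected feature extractor, are illustrative assumptions; for Atari inputs a convolutional body would be used instead. Subtracting the mean advantage is the identifiability constraint proposed in the Wang et al. paper, since otherwise V(s) and A(s, a) can only be recovered up to a constant shift.

```python
import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    """Sketch of a dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, obs_size: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor (illustrative; a conv stack would be used for frames)
        self.features = nn.Sequential(
            nn.Linear(obs_size, hidden),
            nn.ReLU(),
        )
        # Value stream: a single scalar V(s)
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Advantage stream: one A(s, a) per action
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)
        value = self.value(feats)        # shape: (batch, 1)
        adv = self.advantage(feats)      # shape: (batch, n_actions)
        # Mean-subtraction keeps the V/A split identifiable (Wang et al., 2015)
        return value + adv - adv.mean(dim=1, keepdim=True)


if __name__ == "__main__":
    net = DuelingDQN(obs_size=4, n_actions=2)
    q_values = net(torch.randn(8, 4))
    print(q_values.shape)  # torch.Size([8, 2])
```

The rest of the training loop is unchanged from a regular DQN; only the network architecture differs, so the dueling head can be dropped into an existing agent without touching the loss or replay buffer.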

