
Deep Reinforcement Learning Hands-On by Maxim Lapan


Dueling DQN

This improvement to DQN was proposed in 2015, in the paper Dueling Network Architectures for Deep Reinforcement Learning ([8] Wang et al., 2015). The core observation of this paper is that the Q-values Q(s, a) our network is trying to approximate can be divided into two quantities: the value of the state V(s) and the advantage of actions in this state A(s, a). We have seen the quantity V(s) before, as it was at the core of the value iteration method from Chapter 5, Tabular Learning and the Bellman Equation. It equals the discounted expected reward achievable from this state. The advantage A(s, a) is supposed to bridge the gap from V(s) to Q(s, a), as, by definition, Q(s, a) = V(s) + A(s, a). In other words, the advantage tells us how much extra reward a particular action brings on top of the value of the state.
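To make this concrete, here is a minimal sketch of a dueling network in PyTorch. The convolutional feature extractor and layer sizes are illustrative assumptions rather than the book's exact model; the essential part is the two separate heads for V(s) and A(s, a), combined as Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a), which is the aggregation the paper uses to keep the decomposition identifiable.

import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    # Sketch of a dueling architecture for an Atari-style image observation.
    # Layer sizes here are illustrative assumptions, not the book's exact model.
    def __init__(self, input_shape, n_actions):
        super(DuelingDQN, self).__init__()
        # Shared convolutional feature extractor
        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
        )
        conv_out_size = self._get_conv_out(input_shape)
        # Head estimating the single state value V(s)
        self.fc_val = nn.Sequential(
            nn.Linear(conv_out_size, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )
        # Head estimating the advantage A(s, a) for every action
        self.fc_adv = nn.Sequential(
            nn.Linear(conv_out_size, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def _get_conv_out(self, shape):
        # Pass a dummy tensor through the conv stack to get its flattened size
        o = self.conv(torch.zeros(1, *shape))
        return int(o.view(1, -1).size(1))

    def forward(self, x):
        conv_out = self.conv(x).view(x.size(0), -1)
        val = self.fc_val(conv_out)   # shape: (batch, 1)
        adv = self.fc_adv(conv_out)   # shape: (batch, n_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return val + adv - adv.mean(dim=1, keepdim=True)

For example, DuelingDQN((4, 84, 84), n_actions=6) builds a network for a stack of four 84x84 frames with six discrete actions. Subtracting the mean advantage prevents V(s) and A(s, a) from drifting arbitrarily, since only their sum is constrained by the training targets.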
