Categorical DQN

The last and most complicated method in our DQN improvements toolbox comes from a recent paper published by DeepMind in June 2017, called A Distributional Perspective on Reinforcement Learning ([9] Bellemare, Dabney, and Munos, 2017).

In the paper, the authors questioned a fundamental piece of Q-learning, the Q-value, and tried to replace it with a more generic probability distribution over Q-values. Let's try to understand the idea. Both the Q-learning and value iteration methods work with the values of actions or states represented as simple numbers, showing how much total reward we can achieve from a state or an action. However, is it practical to squeeze all possible future reward into one number? In complicated environments, ...
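To make the contrast concrete, here is a minimal sketch of the core idea, assuming the categorical representation used in the paper's C51 algorithm: instead of a single scalar Q-value per action, we keep a probability distribution over a fixed support of possible returns, and only reduce it to a scalar (its expectation) when we need one. The particular support range and the example distribution below are illustrative assumptions, not values from the text.

```python
import numpy as np

N_ATOMS = 51                 # number of support points ("atoms"), as in C51
V_MIN, V_MAX = -10.0, 10.0   # assumed range of possible returns

# Fixed support: evenly spaced return values between V_MIN and V_MAX
support = np.linspace(V_MIN, V_MAX, N_ATOMS)

# A distributional "Q-value" for one action: a probability assigned to
# every atom of the support. Here, a hypothetical distribution peaked
# around a return of 2.0 (softmax of a quadratic score).
logits = -((support - 2.0) ** 2)
probs = np.exp(logits) / np.exp(logits).sum()

# The scalar Q-value used for action selection is just the expectation
# of the distribution over the support.
q_value = (probs * support).sum()
print(round(q_value, 2))
```

The point of keeping the whole distribution rather than only `q_value` is that two actions with the same expectation can have very different risk profiles, and that information is lost the moment we collapse to a single number.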
