4. Deep Q-Networks (DQN)
This chapter introduces the Deep Q-Networks (DQN) algorithm proposed by Mnih et al. [88] in 2013. Like SARSA, DQN is a value-based temporal difference (TD) algorithm that approximates the Q-function. The learned Q-function is then used by an agent to select actions. DQN is only applicable to environments with discrete action spaces. However, DQN learns a different Q-function than SARSA: the optimal Q-function instead of the Q-function for the current policy. This small but crucial change improves the stability and speed of learning.
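To make this distinction concrete, it helps to compare the TD targets the two algorithms regress toward. The sketch below uses standard notation (reward r, discount factor γ, next state s′, next action a′) rather than the book's exact equations, which are developed in Section 4.1.

```latex
% SARSA bootstraps from the action a' that the current policy \pi
% actually takes in s', so it estimates Q^{\pi}:
y_{\text{SARSA}} = r + \gamma \, Q^{\pi}(s', a'), \qquad a' \sim \pi(\cdot \mid s')

% DQN bootstraps from the best available action in s', independent
% of the policy that generated the data, so it estimates Q^{*}:
y_{\text{DQN}} = r + \gamma \max_{a'} Q(s', a')
```

The max over a′ inside the DQN target is what replaces "the Q-function for the current policy" with the optimal Q-function.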
In Section 4.1, we first discuss why DQN learns the optimal Q-function by looking at the Bellman equation that underlies DQN. An important implication is that DQN is an off-policy algorithm. ...
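To illustrate the off-policy point in code, here is a minimal, hypothetical sketch of how a DQN-style TD target can be computed from a batch of stored transitions. The names q_net and the batch dictionary keys are illustrative assumptions, and PyTorch is assumed; this is not the book's implementation.

```python
import torch

def dqn_td_targets(q_net, batch, gamma=0.99):
    """Compute DQN TD targets for a batch of transitions.

    Because the max over next actions is taken inside the target,
    the policy that collected the transitions never appears in the
    update. This is what makes DQN off-policy.
    """
    with torch.no_grad():
        # Q-values for every discrete action in the next states
        next_q = q_net(batch["next_states"])       # shape: (N, num_actions)
        # Greedy backup: value of the best next action,
        # not of the action the behavior policy actually took
        max_next_q = next_q.max(dim=1).values      # shape: (N,)
        # Terminal transitions contribute no future value
        targets = batch["rewards"] + gamma * (1.0 - batch["dones"]) * max_next_q
    return targets
```

Because the behavior policy plays no role in the target, the transitions in the batch can come from an older or more exploratory policy, which is precisely the off-policy property noted above.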