Planning

The preceding network forms part of a Deep Q-Network (DQN), which takes state information as input and stores its experiences in an experience buffer. Minibatches sampled from this buffer are used to train the deep neural network in the DQN, which in turn predicts state-action values. These state-action values are then used to derive the optimal policy, that is, to plan the best action for a given state.
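To make this loop concrete, here is a minimal sketch of a DQN with an experience buffer, written against TensorFlow's Keras API. The problem sizes, buffer capacity, and hyperparameters (STATE_DIM, NUM_ACTIONS, GAMMA) are illustrative assumptions, not the book's exact implementation:

```python
import random
from collections import deque

import numpy as np
import tensorflow as tf

# Hypothetical problem sizes and discount factor, for illustration only.
STATE_DIM, NUM_ACTIONS, GAMMA = 4, 2, 0.99

# Q-network: maps a state to one predicted Q-value per discrete action.
q_network = tf.keras.Sequential([
    tf.keras.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS),
])
q_network.compile(optimizer="adam", loss="mse")

# Experience buffer: stores (state, action, reward, next_state, done) tuples.
replay_buffer = deque(maxlen=10_000)

def train_step(batch_size=32):
    """Sample a minibatch from the buffer and fit the Q-network on TD targets."""
    batch = random.sample(list(replay_buffer), batch_size)
    states = np.array([t[0] for t in batch])
    next_states = np.array([t[3] for t in batch])
    targets = q_network.predict(states, verbose=0)
    next_q = q_network.predict(next_states, verbose=0)
    for i, (_, action, reward, _, done) in enumerate(batch):
        targets[i, action] = reward if done else reward + GAMMA * next_q[i].max()
    q_network.fit(states, targets, verbose=0)

def greedy_action(state):
    """Derive the policy: pick the action with the highest predicted Q-value."""
    return int(np.argmax(q_network.predict(state[None, :], verbose=0)[0]))
```

Sampling randomly from the buffer breaks the correlation between consecutive transitions, which is what stabilizes the training of the network.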

The DQN-based approach is suitable for continuous state spaces, but it requires the action space to be discrete. Therefore, in the case of a continuous action space, actor-critic algorithms are preferred. Recalling the actor-critic algorithm from Chapter 4, Policy Gradients, the following is a diagram of the actor-critic algorithm:

[Figure: the actor-critic algorithm]

An actor-critic architecture maintains two components: an actor, which maps states to actions (or to a distribution over actions), and a critic, which estimates a value function and feeds its evaluation back to the actor to improve the policy.
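As a sketch of how these two pieces interact, the following one-step actor-critic update uses a Gaussian policy, which is what makes it applicable to continuous action spaces. The dimensions, learning rates, and helper names here are again illustrative assumptions rather than the chapter's exact code:

```python
import numpy as np
import tensorflow as tf

# Hypothetical problem sizes and discount factor, for illustration only.
STATE_DIM, ACTION_DIM, GAMMA = 3, 1, 0.99

# Actor: outputs the mean of a Gaussian policy over continuous actions.
actor = tf.keras.Sequential([
    tf.keras.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(ACTION_DIM),
])
log_std = tf.Variable(tf.zeros(ACTION_DIM))  # learned exploration scale

# Critic: estimates the state value V(s).
critic = tf.keras.Sequential([
    tf.keras.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
actor_opt = tf.keras.optimizers.Adam(1e-3)
critic_opt = tf.keras.optimizers.Adam(1e-3)

def gaussian_log_prob(action, mean):
    """Log-density of `action` under the actor's diagonal Gaussian policy."""
    std = tf.exp(log_std)
    return tf.reduce_sum(
        -0.5 * tf.square((action - mean) / std)
        - log_std - 0.5 * np.log(2.0 * np.pi))

def update(state, action, reward, next_state, done):
    """One-step actor-critic update: the TD error plays the advantage role."""
    state = tf.constant(state[None, :], dtype=tf.float32)
    next_state = tf.constant(next_state[None, :], dtype=tf.float32)
    action = tf.constant(action, dtype=tf.float32)
    with tf.GradientTape(persistent=True) as tape:
        value = critic(state)[0, 0]
        next_value = 0.0 if done else critic(next_state)[0, 0]
        td_error = reward + GAMMA * next_value - value  # critic's evaluation
        critic_loss = tf.square(td_error)
        # Actor ascends log pi(a|s), weighted by the (detached) TD error.
        actor_loss = -tf.stop_gradient(td_error) * gaussian_log_prob(
            action, actor(state)[0])
    critic_opt.apply_gradients(zip(
        tape.gradient(critic_loss, critic.trainable_variables),
        critic.trainable_variables))
    actor_vars = actor.trainable_variables + [log_std]
    actor_opt.apply_gradients(zip(
        tape.gradient(actor_loss, actor_vars), actor_vars))
    del tape
```

The tf.stop_gradient call ensures that the TD error acts only as a weight on the actor's log-probability, so the actor's update does not backpropagate into the critic.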
