Asynchronous advantage actor-critic

In the asynchronous advantage actor-critic (A3C) architecture, each learning agent contains an actor-critic learner that combines the benefits of both value-based and policy-based methods. The actor network takes the state as input and outputs a probability distribution over the actions available in that state, while the critic network takes the state as input and outputs a value estimate, from which an advantage is computed to quantify how good the chosen action is for that state. The actor network updates its weight parameters using policy gradients, while the critic network updates its weight parameters using TD(0), that is, the difference between the value estimates at two consecutive time steps, as discussed in Chapter 4, Policy Gradients.
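
To make the two updates concrete, the following is a minimal single-worker sketch in TensorFlow 2. The network sizes, learning rates, and the train_step helper are illustrative assumptions rather than code from the book; the asynchronous part of A3C would run several copies of this learner in parallel against shared parameters.

import tensorflow as tf

# Illustrative sizes (e.g. CartPole: 4 state variables, 2 actions);
# a real agent would take these from its environment.
STATE_DIM, NUM_ACTIONS = 4, 2

# Actor: state -> probability distribution over actions (the policy).
actor = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),
])

# Critic: state -> scalar state-value estimate V(s).
critic = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(1),
])

actor_opt = tf.keras.optimizers.Adam(1e-3)
critic_opt = tf.keras.optimizers.Adam(1e-3)

def train_step(state, action, reward, next_state, done, gamma=0.99):
    # One actor-critic update for a single transition (s, a, r, s').
    state = tf.reshape(tf.convert_to_tensor(state, tf.float32), (1, -1))
    next_state = tf.reshape(tf.convert_to_tensor(next_state, tf.float32), (1, -1))
    with tf.GradientTape(persistent=True) as tape:
        value = critic(state)[0, 0]
        next_value = critic(next_state)[0, 0]
        # TD(0) error: the difference between the value estimates at two
        # consecutive time steps; it doubles as the advantage estimate.
        td_target = reward + gamma * next_value * (1.0 - float(done))
        td_error = td_target - value
        critic_loss = tf.square(td_error)  # critic: squared TD error
        probs = actor(state)[0]
        log_prob = tf.math.log(probs[action] + 1e-8)
        # Actor: policy-gradient loss, advantage-weighted negative log-prob.
        actor_loss = -log_prob * tf.stop_gradient(td_error)
    actor_opt.apply_gradients(zip(
        tape.gradient(actor_loss, actor.trainable_variables),
        actor.trainable_variables))
    critic_opt.apply_gradients(zip(
        tape.gradient(critic_loss, critic.trainable_variables),
        critic.trainable_variables))
    del tape

A worker would call train_step(s, a, r, s_next, done) after each environment step; in full A3C, several such workers run in parallel and apply their gradients asynchronously to globally shared actor and critic parameters.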

In Chapter 4, Policy Gradients, we ...
