In the asynchronous advantage actor-critic architecture, each learning agent contains an actor-critic learner that combines the benefits of both value- and policy-based methods. The actor network takes the state as input and predicts the best action for that state, while the critic network takes the state and action as inputs and outputs a score that quantifies how good the action is in that state. The actor network updates its weight parameters using policy gradients, while the critic network updates its weight parameters using TD(0), that is, the difference between value estimates at two consecutive time steps, as discussed in Chapter 4, Policy Gradients.
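To make the split between the two networks concrete, here is a minimal PyTorch sketch of a single actor-critic update on one transition. The network sizes, learning rates, and the use of a state-value critic (whose TD(0) error serves as the score for the chosen action) are illustrative assumptions for this sketch, not the exact implementation built later in this chapter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, n_actions, gamma = 4, 2, 0.99

# Actor: maps the state to action probabilities (the policy)
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
# Critic: maps the state to a value estimate; its TD error scores the action
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(state, action, reward, next_state, done):
    """One TD(0) actor-critic update for a single transition (illustrative)."""
    # TD(0) target and error: the difference between value estimates
    # at two consecutive time steps
    value = critic(state)
    next_value = critic(next_state).detach()
    td_target = reward + gamma * next_value * (1 - done)
    td_error = td_target - value

    # Critic update: reduce the squared TD error
    critic_loss = td_error.pow(2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: policy gradient weighted by the TD error
    log_prob = F.log_softmax(actor(state), dim=-1)[0, action]
    actor_loss = -(log_prob * td_error.detach()).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

# Example call on a dummy transition
s, s_next = torch.randn(1, state_dim), torch.randn(1, state_dim)
update(s, action=1, reward=torch.tensor([[1.0]]),
       next_state=s_next, done=torch.tensor([[0.0]]))
```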
In Chapter 4, Policy Gradients, we ...