Actor-critic (AC) is a family of policy gradient algorithms related to the temporal difference (TD) methods (Chapter 8, Reinforcement Learning Theory). That is, unlike Monte Carlo methods, an AC method doesn't have to play whole episodes before updating the policy parameters θ. AC has two components:
- The actor, which is the parameterized policy π_θ(a|s). The actor (agent) uses the policy to decide which action to take next.
- The critic, which is the state- or action-value function approximation, v(s) or q(s, a) (we introduced ...
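To make the two roles concrete, here is a minimal sketch of a one-step actor-critic loop on a hypothetical one-state, two-action problem (action 0 always yields reward 1, action 1 yields 0, and the episode ends immediately). The problem, the learning rates, and all variable names are illustrative assumptions, not part of the text; the structure is what matters: the critic maintains a value estimate and computes a TD error, and the actor takes a policy-gradient step scaled by that error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: one state, two actions.
# Action 0 yields reward 1.0, action 1 yields 0.0; the episode then ends.
theta = np.zeros(2)          # actor parameters (action preferences)
v = 0.0                      # critic's value estimate for the single state
alpha_actor, alpha_critic = 0.1, 0.1   # assumed learning rates

def policy(theta):
    """Softmax policy over action preferences (numerically stable)."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

for step in range(500):
    probs = policy(theta)
    a = rng.choice(2, p=probs)            # actor picks an action
    r = 1.0 if a == 0 else 0.0            # environment returns a reward

    # Critic: TD error. The transition is terminal, so the target is just r.
    delta = r - v
    v += alpha_critic * delta

    # Actor: policy-gradient step on log pi(a), scaled by the TD error.
    grad_log = -probs
    grad_log[a] += 1.0                    # gradient of log softmax w.r.t. theta
    theta += alpha_actor * delta * grad_log

print(policy(theta))   # probability mass shifts toward the rewarding action
```

Note how the critic's TD error `delta` plays the role that the full episode return plays in Monte Carlo policy gradient: the actor can update after every single step, which is exactly the property the text highlights.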