One of the drawbacks of AC (and policy-based methods in general) is the high variance of the episode returns used in the updates. To understand this, note that we update the policy parameters, θ, by observing the rewards of multiple episodes. Let's focus on a single episode. The agent starts at an initial state, s, and then takes a series of actions following the policy, π. These actions lead to new states and their corresponding rewards. When the terminal state is reached, the episode has accumulated some total reward. We'll use these rewards to update the policy parameters, θ.
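To make the source of this variance concrete, the following is a minimal sketch (not from the text) that computes the Monte Carlo discounted returns of complete episodes; the function name, the discount factor of 0.99, and the sample reward sequences are assumptions chosen purely for illustration:

import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # Accumulate the discounted return G_t backward from the terminal state.
    returns = np.zeros(len(rewards))
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Two hypothetical episodes under the same policy can yield very
# different returns; this episode-to-episode spread is the variance
# that motivates the advantage formulation introduced next.
print(discounted_returns([1.0, 0.0, 0.0, 1.0]))
print(discounted_returns([0.0, 0.0, 5.0]))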
Actor-Critic with advantage
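As a brief reminder of the standard formulation (the notation here follows common usage rather than this text): the advantage of taking action a_t in state s_t measures how much better that action is than the policy's average behavior in the same state:

A(s_t, a_t) = Q(s_t, a_t) − V(s_t)

Subtracting the state value V(s_t) acts as a baseline: it lowers the variance of the policy updates without biasing their expected direction, which addresses exactly the problem described above.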