Actor-Critic with advantage

One of the drawbacks of AC (and policy-based methods in general) is the high variance of the total return. To understand this, note that we update the policy parameters θ by observing the rewards of multiple episodes. Let's focus on a single episode. The agent starts at an initial state, s, and then takes a series of actions following the policy, π_θ. These actions lead to new states and their corresponding rewards. When the terminal state is reached, the episode has accumulated some total reward. We'll use these rewards to update the policy ...
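To make the episode bookkeeping concrete, here is a minimal sketch of how the per-step discounted return G_t is computed from one episode's rewards, and how subtracting a state-value baseline V(s_t) from it yields the advantage A_t = G_t − V(s_t), which is the quantity A2C uses to reduce variance. The reward and value numbers below are made up for illustration, and the critic estimates are assumed to come from some trained value network:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for every step of one episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):       # accumulate from the terminal step backward
        g = r + gamma * g
        returns.append(g)
    returns.reverse()                 # restore chronological order
    return returns

# Made-up rewards observed during one episode
rewards = [1.0, 0.0, 0.0, 1.0]
# Hypothetical critic estimates V(s_t) for the states visited
values = [1.5, 1.2, 0.9, 0.8]

returns = discounted_returns(rewards)
advantages = [g - v for g, v in zip(returns, values)]
print(returns)
print(advantages)
```

The advantage tells us how much better (or worse) the observed return was than the critic's expectation for that state, so the policy-gradient update pushes up only actions that outperformed the baseline.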
