One of the drawbacks of AC (and policy-based methods in general) is the high variance of the gradient estimate, ∇θJ(θ). To understand this, note that we update the policy parameters θ by observing the rewards of multiple episodes. Let's focus on a single episode. The agent starts at an initial state, s, and then takes a series of actions following the policy, πθ(a|s). These actions lead to new states and their corresponding rewards. When the terminal state is reached, the episode has accumulated some total reward. We'll use these rewards to update the policy ...
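To make the mechanics concrete, here is a minimal REINFORCE-style sketch in plain NumPy. It is an illustrative assumption on my part, not the book's code: the five-state corridor environment, the tabular softmax policy, and the constants are all hypothetical. Each episode is rolled out to the terminal state, the sampled return Gt is accumulated backward, and θ is moved by gradient ascent along Gt · ∇θ log πθ(a|s). Because Gt depends on the entire sampled trajectory, the update can swing widely from episode to episode, which is exactly the variance problem described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2            # states 0..4; actions: 0 = left, 1 = right
GAMMA, ALPHA = 0.99, 0.1              # illustrative discount and learning rate
theta = np.zeros((N_STATES, N_ACTIONS))   # tabular policy parameters

def policy(s):
    """Softmax action probabilities pi(a|s) for state s."""
    z = theta[s] - theta[s].max()     # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def run_episode():
    """Roll out one episode; reward +1 only on reaching the terminal state."""
    s, trajectory = 0, []
    while s != N_STATES - 1:
        p = policy(s)
        a = rng.choice(N_ACTIONS, p=p)
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        trajectory.append((s, a, r))
        s = s_next
    return trajectory

for episode in range(500):
    trajectory = run_episode()
    G = 0.0
    # Walk backward to accumulate the discounted return G_t at each step,
    # then apply the REINFORCE update for a tabular softmax policy:
    # grad of log pi(a|s) w.r.t. theta[s] is onehot(a) - pi(.|s)
    for s, a, r in reversed(trajectory):
        G = r + GAMMA * G
        grad_log_pi = -policy(s)
        grad_log_pi[a] += 1.0
        theta[s] += ALPHA * G * grad_log_pi   # gradient ascent on J(theta)

print(policy(0))  # after training, action 1 (right) should dominate
```

Running the sketch, the learned policy at state 0 strongly prefers moving right; printing G across early episodes would show the episode-to-episode spread that motivates the variance-reduction techniques (such as subtracting a baseline) that follow.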