REINFORCE has the nice property of being unbiased, because the MC return provides the true return of a full trajectory. However, this unbiasedness comes at the cost of variance, which increases with the length of the trajectory. Why? This effect is due to the stochasticity of the policy. By executing a full trajectory, you know its actual return. However, the value assigned to each state-action pair may not be correct, since the policy is stochastic, and executing it another time may lead to a different state and, consequently, a different reward. Moreover, the more actions there are in a trajectory, the more stochasticity is introduced into the system, and therefore, the higher the variance of the return estimate.
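To make this concrete, here is a minimal sketch (not taken from the book's code) that illustrates the effect. It uses a hypothetical toy setting where each step's reward depends on a 50/50 stochastic action choice, and the helper `rollout_return` simply accumulates the discounted Monte Carlo return of one rollout. Repeating the rollout many times and measuring the spread of the returns shows that the variance of the MC return grows with the horizon:

```python
import numpy as np

# A minimal sketch, assuming a toy chain-like setting with a stochastic
# 50/50 policy. The MC return of a full rollout is unbiased, but its
# spread across repeated rollouts grows with the trajectory length.
rng = np.random.default_rng(0)

def rollout_return(horizon, gamma=0.99):
    """Run one stochastic rollout and return its discounted MC return."""
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        action = rng.integers(2)              # stochastic policy: 50/50 action choice
        reward = 1.0 if action == 1 else -1.0 # reward depends on the sampled action
        ret += discount * reward
        discount *= gamma
    return ret

for horizon in (5, 20, 100):
    returns = [rollout_return(horizon) for _ in range(2000)]
    print(f"horizon={horizon:>3}  mean={np.mean(returns):+.3f}  "
          f"variance={np.var(returns):.3f}")
```

The mean return stays roughly the same (the estimate is unbiased), but the empirical variance increases as the horizon gets longer, which is exactly the problem the baseline in the next section is meant to mitigate.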
REINFORCE with baseline