TD methods rely on experience for policy evaluation but, unlike MC, they don't have to wait until the end of an episode. Instead, they can update the value function after each step of the episode. In its most basic form, a TD algorithm uses the following formula to perform a state-value update:
V(s_t) ← V(s_t) + α · [r_{t+1} + γ · V(s_{t+1}) − V(s_t)]
Where α is called the step size (learning rate) and lies in the range [0, 1]. Let's analyze this equation. We're updating the value of the state s_t, and we're following a policy, π, that has led the agent to transition from state s_t to state s_{t+1}. During the transition, the agent has received a reward, r_{t+1}.
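To make the update concrete, here is a minimal sketch of tabular TD(0) policy evaluation in Python. The environment interface (env.reset() returning a state, env.step() returning a (next_state, reward, done) tuple) and the policy callable are assumptions made for illustration, not code from this book.

```python
from collections import defaultdict

def td0_policy_evaluation(env, policy, num_episodes=1000, alpha=0.1, gamma=0.99):
    """Tabular TD(0): update V(s_t) after every step, bootstrapping from V(s_t+1)."""
    V = defaultdict(float)  # state-value estimates, initialized to 0.0
    for _ in range(num_episodes):
        state = env.reset()               # assumed interface: returns the initial state
        done = False
        while not done:
            action = policy(state)        # follow the policy pi being evaluated
            next_state, reward, done = env.step(action)  # assumed interface: one transition
            # TD target: r_{t+1} + gamma * V(s_{t+1}); the next-state value is 0 at episode end
            td_target = reward + gamma * V[next_state] * (not done)
            # Move V(s_t) a fraction alpha toward the TD target
            V[state] += alpha * (td_target - V[state])
            state = next_state
    return V
```

Because the target bootstraps from the current estimate of V(s_{t+1}), each update can happen immediately after the transition, which is exactly what distinguishes TD from MC.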