December 2018
Beginner to intermediate
684 pages
21h 9m
English
The solution to an RL problem is a policy that maximizes the cumulative reward. Policies and value functions are closely connected: an optimal policy yields a value for each state, v_π(s), or state-action pair, q_π(s, a), that is at least as high as under any other policy, since the value is the expected cumulative reward obtained by following that policy. Hence, the optimal value functions, v_*(s) = max_π v_π(s) and q_*(s, a) = max_π q_π(s, a), implicitly define optimal policies and thereby solve the MDP.
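To make the "implicitly define optimal policies" point concrete, here is a minimal sketch: given an optimal action-value table q_*, acting greedily with respect to it is an optimal policy, π_*(s) = argmax_a q_*(s, a). The numbers in the table below are purely illustrative assumptions, not from the book.

```python
import numpy as np

# Hypothetical optimal action values q_*(s, a) for a toy MDP
# with 3 states and 2 actions (values are made up for illustration).
q_star = np.array([
    [1.0, 2.5],   # state 0
    [0.3, 0.1],   # state 1
    [4.0, 4.0],   # state 2 (a tie: either greedy action is optimal)
])

# An optimal policy acts greedily with respect to q_*:
#   pi_*(s) = argmax_a q_*(s, a)
pi_star = q_star.argmax(axis=1)
print(pi_star)  # -> [1 0 0]
```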
The optimal value functions, v_* and q_*, also satisfy the Bellman equations from the previous section. In these Bellman optimality equations the explicit reference to a policy can be omitted, since it is implied by v_* and q_*.
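As a hedged illustration of the Bellman optimality equation in action, the sketch below runs value iteration on a small, made-up MDP: it repeatedly applies the backup v(s) ← max_a [R(s, a) + γ Σ_s' p(s'|s, a) v(s')] until the values converge to v_*. The transition tensor P, reward matrix R, and discount factor are assumptions chosen for the example.

```python
import numpy as np

# Toy MDP (all numbers are assumptions for the example):
# 2 states, 2 actions. P[s, a, s'] is the transition probability,
# R[s, a] the expected immediate reward, gamma the discount factor.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
    [[0.0, 1.0], [0.5, 0.5]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9

# Value iteration: apply the Bellman optimality backup
#   v(s) <- max_a [ R(s, a) + gamma * sum_s' P(s, a, s') * v(s') ]
# until the values stop changing. Note that no policy appears anywhere;
# the max over actions takes its place.
v = np.zeros(2)
for _ in range(1000):
    q = R + gamma * (P @ v)      # q[s, a] = R(s, a) + gamma * E[v(s')]
    v_new = q.max(axis=1)        # greedy backup over actions
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

print(v)  # approximate v_*(s) for both states
```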