Policy iteration requires evaluating the policy for all states after each iteration, and this evaluation can be costly, for example, for search-tree-based policies.
Value iteration simplifies the process by collapsing the policy evaluation and improvement steps. In each iteration, it sweeps over all states and selects the best greedy action based on the current value estimate for the next state. Then, it uses the one-step lookahead implied by the Bellman optimality equation to update the value function for the current state.
The corresponding update rule for the value function, $v_{k+1}(s)$, is almost identical to the policy evaluation update; it just adds a maximization over the available actions.
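Written in the standard Bellman-optimality notation, with transition dynamics $p(s', r \mid s, a)$ and discount factor $\gamma$ (symbols assumed here, as they are not defined in this excerpt), the update takes the form:

$$v_{k+1}(s) = \max_{a} \sum_{s',\, r} p(s', r \mid s, a)\,\bigl[r + \gamma\, v_k(s')\bigr]$$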
The algorithm stops when the value function converges, that is, when the largest change in the state values between successive iterations falls below a small threshold.
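As a minimal sketch of how this could look in code, assume a small finite MDP given as NumPy arrays of transition probabilities P and expected rewards R (hypothetical names, not from the text); value iteration then reduces to a few lines:

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, theta=1e-8):
    """Value iteration for a small finite MDP.

    P: transition probabilities, shape (n_actions, n_states, n_states)
    R: expected immediate rewards, shape (n_actions, n_states)
    gamma: discount factor; theta: convergence threshold
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # One-step lookahead: expected return of each (action, state) pair
        Q = R + gamma * (P @ V)        # shape (n_actions, n_states)
        V_new = Q.max(axis=0)          # maximize over available actions
        delta = np.abs(V_new - V).max()
        V = V_new
        if delta < theta:              # largest update below threshold -> stop
            break
    return V, Q.argmax(axis=0)         # optimal values and greedy policy
```

The greedy policy extracted from the converged value estimates is then optimal for the given MDP.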