Policy iteration involves separate evaluation and improvement steps. We define the improvement part by selecting, for each state, the action that maximizes the sum of the expected reward and the next-state value. Note that we temporarily fill in the rewards for the terminal states so that we don't ignore actions that would lead us there:
def policy_improvement(value, transitions):
    # Fill in the rewards for the terminal states so that actions
    # leading into them are not ignored
    for state, reward in absorbing_states.items():
        value[state] = reward
    # For every state, pick the action that maximizes the
    # expected value of the next state
    return np.argmax(np.sum(transitions * value, 2), 0)
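To see the function in action, here is a minimal sketch on a hypothetical three-state, two-action MDP; the transition layout (num_actions, num_states, num_states) and the contents of the absorbing_states dictionary are assumptions for illustration, not the book's exact setup:

import numpy as np

num_states, num_actions = 3, 2
absorbing_states = {2: 1.0}  # hypothetical: state 2 is terminal with reward 1

# transitions[a, s, s_next]: probability of moving from s to s_next under action a
transitions = np.zeros((num_actions, num_states, num_states))
transitions[0, 0, 1] = 1.0   # action 0 walks right: 0 -> 1 -> 2
transitions[0, 1, 2] = 1.0
transitions[0, 2, 2] = 1.0
transitions[1] = np.eye(num_states)  # action 1 stays in place

value = np.zeros(num_states)
print(policy_improvement(value, transitions))  # greedy action for each state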
We initialize the value function as mentioned previously, and include a random starting policy:
V = np.random.rand(num_states)
V[skip_states] = 0
pi = np.random.choice(list(range(num_actions)), size=num_states)
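Since only the improvement half is shown above, here is a minimal sketch of what the evaluation half could look like under the same conventions (transitions indexed as [action, state, next_state], terminal rewards pinned into the value function); the name policy_evaluation and the gamma and tol parameters are assumptions for illustration, not the book's exact code:

def policy_evaluation(pi, value, transitions, gamma=0.9, tol=1e-6):
    value = value.copy()
    while True:
        # Pin the terminal states to their rewards, as in policy_improvement
        for state, reward in absorbing_states.items():
            value[state] = reward
        # One Bellman backup: expected discounted next-state value under pi
        new_value = gamma * np.sum(transitions[pi, np.arange(len(value))] * value, 1)
        for state, reward in absorbing_states.items():
            new_value[state] = reward
        # Stop once the value function has stopped changing
        if np.max(np.abs(new_value - value)) < tol:
            return new_value
        value = new_value

With V and pi initialized as above, one round of the loop sketched here would then be V = policy_evaluation(pi, V, transitions) followed by pi = policy_improvement(V, transitions).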
The algorithm alternates between policy evaluation ...