With policy evaluation under our belt, it is time to move on to improving the policy by looking ahead. Recall that we do this by looking one step ahead from the current state and evaluating every possible action. Let's look at how this works in code. Open up the Chapter_2_6.py example and follow the exercise:
- For brevity, the following code excerpt from Chapter_2_6.py shows only the new sections of code added since the last example:
```python
def evaluate(V, action_values, env, gamma, state):
    # Accumulate the expected return of each action from the given state
    for action in range(env.nA):
        for prob, next_state, reward, terminated in env.P[state][action]:
            action_values[action] += prob * (reward + gamma * V[next_state])
    return action_values

def lookahead(env, state, V, gamma):
    action_values = np.zeros(env.nA)  # one value slot per action
    return evaluate(V, action_values, env, gamma, state)
```
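To see the one-step lookahead in isolation, here is a minimal, self-contained sketch. The `ToyEnv` class, its transition table, and the discount value are hypothetical stand-ins for the Gym-style environment the chapter uses; only the `env.P[state][action] -> [(prob, next_state, reward, terminated), ...]` format is assumed from the excerpt above:

```python
import numpy as np

class ToyEnv:
    """Hypothetical two-state, two-action MDP in the Gym-style
    env.P[state][action] -> [(prob, next_state, reward, terminated), ...] format."""
    nS, nA = 2, 2
    P = {
        0: {0: [(1.0, 0, 0.0, False)],                          # stay put, no reward
            1: [(0.8, 1, 1.0, True), (0.2, 0, 0.0, False)]},    # mostly reach the goal
        1: {0: [(1.0, 1, 0.0, True)],
            1: [(1.0, 1, 0.0, True)]},
    }

def lookahead(env, state, V, gamma):
    # One-step lookahead: expected discounted return of each action from `state`
    action_values = np.zeros(env.nA)
    for action in range(env.nA):
        for prob, next_state, reward, terminated in env.P[state][action]:
            action_values[action] += prob * (reward + gamma * V[next_state])
    return action_values

env = ToyEnv()
V = np.zeros(env.nS)               # value estimates, e.g. from policy evaluation
values = lookahead(env, 0, V, gamma=0.9)
print(values)                      # [0.0, 0.8] -- action 1 has the higher expected return
best_action = int(np.argmax(values))
print(best_action)                 # 1
```

Picking `np.argmax` over these action values is exactly the greedy improvement step: the policy at this state is updated to the action with the highest one-step expected return.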