Exploring starts policy improvement

MC policy improvement follows the same general pattern as DP: we alternate evaluation and improvement steps until convergence. However, because we don't have a model of the environment, it is better to estimate the action-value function, q(s, a), defined over state-action pairs, instead of the state-value function. If we had a model, we could simply act greedily by looking one step ahead and choosing the action whose combination of immediate reward and next-state value has the highest expected return (similar to DP). Without a model, state values alone are not enough to rank actions, so action-value estimates are the better basis for choosing the new policy. Therefore, to find the optimal policy, we have to estimate the optimal action-value function, q*(s, a). With MC, we estimate it by averaging the returns observed after visits to each state-action pair.
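
To make the procedure concrete, here is a minimal sketch of first-visit MC control with exploring starts. The `env_step` sampling interface, the `mc_exploring_starts` function, and the toy corridor environment are illustrative assumptions, not code from the book:

```python
import random
from collections import defaultdict

def mc_exploring_starts(env_step, states, actions, episodes=10_000,
                        gamma=1.0, max_steps=100):
    """First-visit MC control with exploring starts.

    env_step(state, action) -> (reward, next_state, done) is an assumed
    sampling interface: we only need to simulate the environment, not to
    know its transition model.
    """
    q = defaultdict(float)                    # action-value estimates q(s, a)
    n = defaultdict(int)                      # visit counts for incremental means
    policy = {s: actions[0] for s in states}  # arbitrary initial policy

    for _ in range(episodes):
        # Exploring start: pick a random (state, action) pair so that every
        # pair keeps a nonzero probability of being evaluated.
        s = random.choice(states)
        a = random.choice(actions)

        # Generate one episode, following the current policy after the
        # randomized first step.
        episode = []
        for _ in range(max_steps):
            reward, s_next, done = env_step(s, a)
            episode.append((s, a, reward))
            if done:
                break
            s, a = s_next, policy[s_next]

        # Accumulate discounted returns from the end of the episode.
        g, returns = 0.0, []
        for s_t, a_t, r_t in reversed(episode):
            g = gamma * g + r_t
            returns.append((s_t, a_t, g))
        returns.reverse()

        # Policy evaluation plus improvement, on first visits only.
        seen = set()
        for s_t, a_t, g in returns:
            if (s_t, a_t) in seen:
                continue
            seen.add((s_t, a_t))
            n[(s_t, a_t)] += 1
            q[(s_t, a_t)] += (g - q[(s_t, a_t)]) / n[(s_t, a_t)]
            # Greedy improvement with respect to the current q estimates.
            policy[s_t] = max(actions, key=lambda act: q[(s_t, act)])

    return policy, q

# Toy usage (hypothetical environment): a 5-cell corridor where action 1
# moves right, action 0 moves left, and reaching cell 4 ends the episode
# with reward +1.
def corridor_step(s, a):
    s_next = min(max(s + (1 if a == 1 else -1), 0), 4)
    done = s_next == 4
    return (1.0 if done else 0.0), s_next, done

policy, _ = mc_exploring_starts(corridor_step, states=list(range(4)),
                                actions=[0, 1], episodes=2000, gamma=0.9)
print(policy)  # every non-terminal cell should learn to move right
```

Note the discount factor below 1 in the usage example: it makes shorter paths to the goal strictly more valuable, so the greedy step has an unambiguous best action in every state.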
