Exploring starts policy improvement

MC policy improvement follows the same general pattern as DP: we alternate evaluation and improvement steps until convergence. But because we don't have a model of the environment, it is better to estimate the action-value function, q(s, a) (the value of state-action pairs), instead of the state-value function, v(s). If we had a model, we could simply follow a greedy policy and pick the action whose combination of immediate reward and next-state value has the highest expected return (similar to DP). Without a model, the action values are a better basis for choosing the new policy. Therefore, to find the optimal policy, we have to estimate the optimal action-value function, q*(s, a). With MC, ...
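
As a rough illustrative sketch (not the book's own code), the following Python function outlines Monte Carlo Exploring Starts policy iteration for a generic tabular environment. The env object and its states, actions, reset(state, action), and step(action) members are assumptions made for the example, not part of any standard API:

import random
from collections import defaultdict

def mc_exploring_starts(env, num_episodes, gamma=1.0):
    """Monte Carlo Exploring Starts: estimate q(s, a) from sampled
    episodes and improve the policy greedily with respect to it.

    Assumes a hypothetical tabular `env` exposing:
      - env.states, env.actions: lists of discrete states/actions
      - env.reset(state, action): start an episode from (state, action),
        returning (next_state, reward, done)
      - env.step(action) -> (next_state, reward, done)
    """
    q = defaultdict(float)            # action-value estimates q(s, a)
    returns_count = defaultdict(int)  # visit counts for incremental averaging
    # Start from an arbitrary (here: random) deterministic policy
    policy = {s: random.choice(env.actions) for s in env.states}

    for _ in range(num_episodes):
        # Exploring start: every (state, action) pair can begin an episode,
        # which guarantees that all pairs keep being visited
        state = random.choice(env.states)
        action = random.choice(env.actions)
        episode = []
        next_state, reward, done = env.reset(state, action)
        episode.append((state, action, reward))
        state = next_state
        while not done:
            action = policy[state]
            next_state, reward, done = env.step(action)
            episode.append((state, action, reward))
            state = next_state

        # Record the first time step at which each (s, a) pair occurs
        first_visit_time = {}
        for t, (s, a, _) in enumerate(episode):
            if (s, a) not in first_visit_time:
                first_visit_time[(s, a)] = t

        # Policy evaluation: first-visit MC update of q(s, a),
        # working backwards to accumulate the discounted return
        g = 0.0
        for t in reversed(range(len(episode))):
            s, a, r = episode[t]
            g = gamma * g + r
            if first_visit_time[(s, a)] == t:
                returns_count[(s, a)] += 1
                q[(s, a)] += (g - q[(s, a)]) / returns_count[(s, a)]
                # Policy improvement: act greedily with respect to current q
                policy[s] = max(env.actions, key=lambda act: q[(s, act)])

    return policy, q

Each episode thus performs a small evaluation step (updating q for the visited state-action pairs) followed by an improvement step (making the policy greedy with respect to the updated q), mirroring the alternation described above.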
