17 Forward ADP II: Policy Optimization
We are now ready to tackle the problem of searching for good policies while simultaneously trying to produce good value function approximations. The guiding principle in this chapter is that we can find good policies if we can find good value function approximations. The problem is that finding good value function approximations requires that we simulate “good” policies (using the methods of chapter 16). It is the interaction between the two that creates all the complications.
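As a stylized illustration of this interaction (and only that; the algorithms developed in this chapter are considerably more careful), the sketch below uses a lookup-table value function approximation: the current approximation defines a one-step lookahead policy, simulating that policy produces sampled value observations, and those observations are smoothed back into the approximation. The state space, actions, transition function, contribution function, and stepsize are all placeholder assumptions, not taken from the text.

```python
import numpy as np

# Stylized sketch of the policy/value interaction described above.  All of the
# model ingredients (states, actions, transition, contribution, stepsize) are
# placeholders chosen for illustration only.
rng = np.random.default_rng(1)

states = np.arange(5)            # tiny discrete state space
actions = np.array([-1, 0, 1])   # tiny action set
gamma = 0.9                      # discount factor

def transition(s, x):
    """Placeholder transition: next state S' = S^M(s, x, W) with noise W."""
    return int(np.clip(s + x + rng.integers(-1, 2), 0, 4))

def contribution(s, x):
    """Placeholder one-period contribution C(s, x)."""
    return float(-(s - 2) ** 2 - abs(x))

v_bar = np.zeros(len(states))    # lookup-table value function approximation

for n in range(200):             # each pass simulates the policy implied by v_bar
    s = int(rng.integers(0, 5))
    for t in range(20):
        # The policy is defined by the current approximation: choose the action
        # that looks best in a one-step lookahead (one Monte Carlo sample each).
        samples = {int(a): transition(s, int(a)) for a in actions}
        x = max(samples, key=lambda a: contribution(s, a) + gamma * v_bar[samples[a]])
        s_next = samples[x]
        # The simulated trajectory, in turn, updates the approximation.
        v_hat = contribution(s, x) + gamma * v_bar[s_next]
        v_bar[s] = (1 - 0.1) * v_bar[s] + 0.1 * v_hat
        s = s_next
```

The circularity is visible in the inner loop: the quality of the sampled observations v_hat depends on the policy, while the policy is derived from the very approximation those observations are used to update.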
The algorithmic strategies presented in this chapter are all based on algorithms we first presented in chapter 14, with two notable exceptions:
- We never take expectations – Random variables are always handled through Monte Carlo simulation, historical trajectories, or direct field observations (see the sketch following this list).
- We use machine learning to approximate functions – This means we have to deal with estimation errors due to noise, errors due to biased observations, and structural errors from the chosen approximating architecture.
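The sketch below illustrates both points on a toy problem: the expectation over the next state is replaced by a single Monte Carlo sample, and the value of a policy is approximated by a linear model in hand-chosen basis functions, updated by a stochastic gradient step toward each noisy sampled observation. The dynamics, contribution, basis functions, fixed policy, and stepsize are illustrative assumptions rather than anything specified in the text.

```python
import numpy as np

# Sketch only: the dynamics, contribution, basis functions phi(s), the fixed
# policy x = -0.5 s, and the stepsize are all illustrative assumptions.
rng = np.random.default_rng(0)

def phi(s):
    """Hand-chosen basis functions for a scalar state."""
    return np.array([1.0, s, s ** 2])

def sample_next_state(s, x):
    """One Monte Carlo draw of the next state; no expectation is ever computed."""
    return s + x + rng.normal(0.0, 0.5)

def contribution(s, x):
    """Placeholder one-period contribution."""
    return -(s ** 2) - 0.1 * x ** 2

theta = np.zeros(3)          # parameters of the value function approximation
gamma, alpha = 0.95, 0.05    # discount factor and (constant) stepsize

s = 1.0
for n in range(1000):
    x = -0.5 * s                                               # a fixed, arbitrary policy
    s_next = sample_next_state(s, x)                           # sampled transition
    v_hat = contribution(s, x) + gamma * phi(s_next) @ theta   # noisy (and biased) observation
    theta += alpha * (v_hat - phi(s) @ theta) * phi(s)         # stochastic gradient step
    s = s_next
```

Note that the observation v_hat is built from the current parameter vector theta, which is exactly why the observations are not only noisy but also biased, in addition to any structural error introduced by the choice of basis functions.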
The statistical tools presented in chapter 3 focused on finding the best fit of a function that we can only observe with noise, but where we assumed that the observations were unbiased. In chapter 16, we saw that the sampled estimate of the value of being in a state could be biased for several reasons:
- If we are using approximate value iteration, the value functions have to steadily accumulate downstream values (recall the slow convergence ...
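The slow accumulation of downstream values is easy to see in a deterministic toy example (not taken from the chapter): a single recurring state earning a reward of 1 per period with discount factor gamma has true value 1/(1 - gamma), but an estimate that bootstraps off the previous estimate reaches only (1 - gamma^n)/(1 - gamma) after n iterations, so the bias decays geometrically, and more slowly still once stepsizes and noise are introduced.

```python
# Deterministic toy example of how bootstrapped value estimates accumulate
# downstream rewards: with gamma = 0.95 the true value is 1/(1 - gamma) = 20,
# but the estimate after n updates is only (1 - gamma**n)/(1 - gamma).
gamma = 0.95
true_value = 1.0 / (1.0 - gamma)

v_bar = 0.0                       # initial estimate of the value of the state
for n in range(1, 101):
    v_hat = 1.0 + gamma * v_bar   # sampled observation built from the biased estimate
    v_bar = v_hat                 # update with stepsize 1; smaller stepsizes are slower
    if n in (1, 10, 50, 100):
        print(f"n = {n:3d}   v_bar = {v_bar:7.3f}   bias = {v_bar - true_value:8.3f}")
```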