TRPO was proposed in 2015 by Berkeley researchers in the paper Trust Region Policy Optimization by John Schulman et al. (arXiv:1502.05477). The paper was a step toward improving the stability and consistency of stochastic policy gradient optimization, and it showed good results on various control tasks.
Unfortunately, both the paper and the method are quite math-heavy, so it can be hard to understand the details. The same can be said about the implementation, which uses the conjugate gradient method to efficiently solve the constrained optimization problem.
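To give a feel for that last point, here is a minimal sketch of the conjugate gradient method, assuming only that the matrix A (in TRPO, the Fisher information matrix of the policy) is accessible through a matrix-vector product function. The names conjugate_gradient and mat_vec_product are illustrative, not taken from any particular TRPO implementation.

```python
import numpy as np

def conjugate_gradient(mat_vec_product, b, iters=10, residual_tol=1e-10):
    """Approximately solve A x = b, given only a function computing A @ v.

    In TRPO, A is the Fisher information matrix of the policy; it is never
    formed explicitly, only Fisher-vector products are needed.
    """
    x = np.zeros_like(b)
    r = b.copy()            # residual b - A @ x (x starts at zero)
    p = r.copy()            # current search direction
    r_dot_r = r @ r
    for _ in range(iters):
        Ap = mat_vec_product(p)
        alpha = r_dot_r / (p @ Ap)    # step size along direction p
        x += alpha * p
        r -= alpha * Ap
        new_r_dot_r = r @ r
        if new_r_dot_r < residual_tol:
            break
        p = r + (new_r_dot_r / r_dot_r) * p   # conjugate direction update
        r_dot_r = new_r_dot_r
    return x

# Toy usage: a small symmetric positive-definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: A @ v, b)
print(x)   # close to np.linalg.solve(A, b)
```

The reason for accessing A only through mat_vec_product is that forming the full Fisher matrix would be prohibitively expensive for a neural network policy; a handful of conjugate gradient iterations with cheap Fisher-vector products is enough to get a usable search direction.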
As the first step, the TRPO method defines the discounted visitation frequencies of the state: $\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \cdots$. In this equation, $P(s_i = s)$ equals the sampled probability of state $s$ being met at position $i$ of the sampled trajectories.
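As a quick, hedged illustration of this definition, the sketch below estimates $\rho_\pi(s)$ from a batch of sampled trajectories by averaging discounted indicator counts over the batch; the function name estimate_visitation_frequencies is hypothetical.

```python
from collections import defaultdict

def estimate_visitation_frequencies(trajectories, gamma=0.99):
    """Monte Carlo estimate of rho_pi(s) = sum_t gamma^t * P(s_t = s).

    trajectories is a list of state sequences sampled under the policy;
    P(s_t = s) is approximated by the fraction of trajectories whose
    state at step t equals s.
    """
    rho = defaultdict(float)
    for states in trajectories:
        discount = 1.0
        for s in states:
            rho[s] += discount / len(trajectories)
            discount *= gamma
    return dict(rho)

# Toy usage with discrete states labelled by strings
trajs = [["s0", "s1", "s1"], ["s0", "s2"]]
print(estimate_visitation_frequencies(trajs, gamma=0.9))
```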