Trust Region Policy Optimization
TRPO was proposed in 2015 by Berkeley researchers in the paper Trust Region Policy Optimization by John Schulman et al. (arXiv:1502.05477). This paper was a step toward improving the stability and consistency of stochastic policy gradient optimization, and the method showed good results on various control tasks.
Unfortunately, both the paper and the method are quite math-heavy, so the details can be hard to grasp. The same applies to the implementation, which uses the conjugate gradient method to efficiently solve the constrained optimization problem.
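To make the role of conjugate gradients concrete, below is a minimal sketch of the classic conjugate gradient solver, assuming only NumPy; this is not the book's implementation. In TRPO, the linear operator is the Fisher information matrix of the policy, which is accessed only through matrix-vector products (the matvec callback here), so the full matrix never has to be formed:

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=10, tol=1e-10):
    # Solve A x = b given only the product v -> A v, for a
    # symmetric positive-definite A.
    x = np.zeros_like(b)
    r = b.copy()            # residual b - A x (x starts at zero)
    p = r.copy()            # current search direction
    rs_old = r.dot(r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs_old / p.dot(Ap)
        x += alpha * p      # step along the search direction
        r -= alpha * Ap     # update the residual
        rs_new = r.dot(r)
        if rs_new < tol:    # converged: residual is near zero
            break
        p = r + (rs_new / rs_old) * p   # next A-conjugate direction
        rs_old = rs_new
    return x
```

A handful of iterations is usually enough in practice, which is what makes the constrained update tractable for policy networks with many parameters.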
As the first step, the TRPO method defines the discounted visitation frequencies of states: $\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \dots$ In this equation, $P(s_i = s)$ equals the probability of state $s$ being met at position $i$ of the sampled trajectories.
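To make the definition concrete, here is a minimal sketch (not code from the book) of a Monte Carlo estimate of $\rho_\pi$ for discrete states; the function name estimate_visitation and its arguments are hypothetical:

```python
from collections import defaultdict

def estimate_visitation(trajectories, gamma=0.99):
    # trajectories: list of state sequences [s_0, s_1, ...] sampled
    # by running the current policy; states must be hashable.
    # The weight gamma**i matches the gamma^i * P(s_i = s) term
    # in the formula above.
    rho = defaultdict(float)
    for states in trajectories:
        for i, s in enumerate(states):
            rho[s] += gamma ** i
    # Averaging over trajectories turns discounted visit counts
    # into estimates of the probabilities P(s_i = s).
    n = len(trajectories)
    return {s: v / n for s, v in rho.items()}
```

For example, estimate_visitation([[0, 1, 2], [0, 2, 2]], gamma=0.9) assigns state 0 a weight of 1.0, since it starts every trajectory, and discounts later visits by powers of 0.9.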