7. Proximal Policy Optimization (PPO)
One challenge when training agents with policy gradient algorithms is that they are susceptible to performance collapse, in which an agent suddenly starts to perform badly. This can be hard to recover from because the agent will begin to generate poor trajectories, which are then used to further train the policy. We have also seen that on-policy algorithms are sample-inefficient because they cannot reuse data.
Proximal Policy Optimization (PPO) by Schulman et al. [124] is a class of optimization algorithms that addresses these two issues. The main idea behind PPO is to introduce a surrogate objective which avoids performance collapse by guaranteeing monotonic policy improvement. This objective also makes it possible to reuse off-policy data, improving sample efficiency.
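As a preview, the sketch below shows the clipped variant of PPO's surrogate objective from [124]: the probability ratio between the current and data-collecting policies is clipped to [1 - ε, 1 + ε], which bounds how far a single update can move the policy. This is a minimal illustrative implementation in PyTorch; the function and variable names are our own, not the book's companion code.

import torch

def ppo_clipped_loss(log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate objective (minimal sketch, after Schulman et al. [124]).

    log_probs: log pi_theta(a_t | s_t) under the current policy
    old_log_probs: log probabilities under the policy that collected the data
    advantages: advantage estimates A_t
    eps: clipping parameter epsilon
    """
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratios = torch.exp(log_probs - old_log_probs)
    # Unclipped and clipped surrogate terms
    surr = ratios * advantages
    surr_clipped = torch.clamp(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # Elementwise minimum gives a pessimistic bound; negate to obtain a
    # loss suitable for gradient descent
    return -torch.min(surr, surr_clipped).mean()

Because the ratio compares the current policy to the one that gathered the trajectories, the same batch of data can be reused for several gradient steps before it becomes too stale, which is the source of PPO's improved sample efficiency.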