On-policy

In the case of on-policy learning, the behavior policy and the target policy of the agent are one and the same. So, naturally, the current policy (before the update) that the agent is using to collect samples is going to be \pi_{\theta_{old}}, which is the behavior policy, and therefore the objective function becomes this:

maximize_\theta \; E_t \left[ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)} A_t \right]

The change compared to the previous (off-policy) equation is that the generic behavior policy in the denominator has been replaced by \pi_{\theta_{old}}.

TRPO optimizes the preceding objective function subject to a trust-region constraint, which is expressed using the KL divergence metric as follows:

E_t \left[ D_{KL}\left( \pi_{\theta_{old}}(\cdot|s_t) \,\|\, \pi_\theta(\cdot|s_t) \right) \right] \le \delta
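The two quantities above, the importance-sampling surrogate objective and the mean KL divergence between the old and new policies, can be sketched numerically. This is a minimal illustration in NumPy, not the book's implementation; the function names and the assumption of a discrete action space are ours:

```python
import numpy as np

def surrogate_objective(logp_new, logp_old, advantages):
    """Estimate E_t[(pi_theta / pi_theta_old) * A_t] from sampled actions.

    logp_new / logp_old: log-probabilities of the sampled actions under the
    new and old policies; advantages: the estimated A_t for each sample.
    """
    # Importance-sampling ratio pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratio = np.exp(logp_new - logp_old)
    return np.mean(ratio * advantages)

def mean_kl(p_old, p_new, eps=1e-8):
    """Mean KL(pi_theta_old || pi_theta) over states, for discrete actions.

    p_old, p_new: arrays of shape (num_states, num_actions) holding the
    action probabilities of each policy; eps guards against log(0).
    """
    per_state_kl = np.sum(
        p_old * (np.log(p_old + eps) - np.log(p_new + eps)), axis=1
    )
    return np.mean(per_state_kl)
```

Before the first update step, the new policy equals the old one, so every ratio is 1 (the objective reduces to the mean advantage) and the KL divergence is 0; TRPO then searches for the \theta that maximizes the surrogate while keeping the mean KL below \delta.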
