ES on HalfCheetah

In the next example, we'll go beyond the simplest ES implementation and look at how this method can be parallelized efficiently using the shared seed strategy proposed in the paper [1]. To demonstrate this approach, we'll use an environment from the roboschool library that we already experimented with in Chapter 15, Trust Regions – TRPO, PPO, and ACKTR: HalfCheetah, a continuous action problem in which a weird two-legged creature gains reward by running forward without injuring itself.

First, let's discuss the idea of shared seeds. The performance of the ES algorithm is mostly determined by the speed at which we can gather our training batch, which consists of sampling the noise and checking the total reward of the perturbed policy. ...
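The core of the shared seed trick is that a worker never needs to transmit a full noise vector: it sends only the integer seed it used, and any peer can regenerate the identical noise locally from that seed. Below is a minimal, self-contained sketch of this idea; the function names, the toy parameter shape, and the hyperparameters are illustrative assumptions, not the book's actual source code.

```python
import numpy as np

PARAM_SHAPE = (10,)  # assumed toy parameter vector, not from the book


def sample_noise(seed, shape=PARAM_SHAPE):
    # The same seed produces the same noise on every machine,
    # so only the seed needs to be communicated.
    rng = np.random.RandomState(seed)
    return rng.normal(size=shape)


def worker_evaluate(params, seed, reward_fn, sigma=0.05):
    # Worker side: evaluate one perturbed policy and return
    # only the (seed, reward) pair -- not the noise itself.
    noise = sample_noise(seed)
    return seed, reward_fn(params + sigma * noise)


def master_update(params, results, lr=0.01, sigma=0.05):
    # Master side: rebuild each worker's noise from its seed alone
    # and form the standard ES gradient estimate.
    rewards = np.array([r for _, r in results])
    # Normalize rewards so the update is invariant to reward scale
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = np.zeros_like(params)
    for (seed, _), r in zip(results, rewards):
        grad += r * sample_noise(seed)
    return params + lr / (len(results) * sigma) * grad
```

In a real distributed setup, each worker would draw many seeds per iteration and evaluate episodes in the actual environment; the point here is only that the communication cost per evaluation drops from the size of the parameter vector to a single integer.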
