In the previous chapter, we saw a (not very successful) attempt to solve our favorite Pong environment with PG. Let's try it again with the actor-critic method at hand.
GAMMA = 0.99
LEARNING_RATE = 0.001
ENTROPY_BETA = 0.01
BATCH_SIZE = 128
NUM_ENVS = 50
REWARD_STEPS = 4
CLIP_GRAD = 0.1
As usual, we start by defining hyperparameters (imports are omitted). These values are not tuned; we will do that in the next section of this chapter. There is one new value here:
CLIP_GRAD. This hyperparameter specifies the threshold for gradient clipping, which basically prevents our gradients from becoming too large during the optimization step and pushing our policy too far. Clipping is implemented using the PyTorch functionality, but the idea ...