Updating the actor-critic model

After we have calculated the losses for the actor and the critic, the next and final step in the learning process is to update the actor and critic parameters based on their respective losses. Because PyTorch takes care of partial differentiation, error back-propagation, and gradient calculation automatically, the implementation is simple and straightforward given the results from the previous steps, as shown in the following code sample:

    def learn(self, n_th_observation, done):
        # Compute the n-step TD targets from the collected rewards
        td_targets = self.calculate_n_step_return(self.rewards, n_th_observation, done, self.gamma)
        # Compute the actor and critic losses for the stored trajectory
        actor_loss, critic_loss = self.calculate_loss(self.trajectory, td_targets)
        # Update the actor: clear old gradients, then back-propagate the actor loss
        self.actor_optimizer.zero_grad()
        actor_loss.backward(retain_graph=True)
        ...
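The sample above is cut off after the actor's backward pass. As a rough sketch only (not the book's exact code), the remainder of the update typically steps the actor optimizer and then repeats the zero_grad/backward/step sequence for the critic, reusing the optimizer attributes shown in the snippet:

        # Continuation sketch, assuming the two optimizers from the snippet above.
        self.actor_optimizer.step()          # apply the actor's gradient update

        self.critic_optimizer.zero_grad()    # clear stale critic gradients
        critic_loss.backward()               # back-propagate the critic loss
        self.critic_optimizer.step()         # apply the critic's gradient update

Note that the first backward call uses retain_graph=True: because the actor and critic losses can share parts of the same computation graph, PyTorch must keep the graph alive after the actor's backward pass so that the critic's backward pass does not fail.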
