Building actor-critic with experience replay

We have now covered all the major concepts of DRL in this book. We will introduce more tools in later chapters, but if you have made it this far, you should consider yourself knowledgeable about DRL. As such, consider building your own tools or enhancements, not unlike the one we'll show in this section. If you are wondering whether you need to have the math worked out first, the answer is no. It is often more intuitive to build these models in code first and then understand the math later.

Actor-critic with experience replay (ACER) provides a further advantage: rather than learning only from the most recent transition, it stores past experiences in a replay buffer and samples from them during training, letting the agent reuse off-policy data. The sketch below illustrates the buffer at the heart of this idea. ...
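To make the replay idea concrete, here is a minimal sketch of an experience replay buffer in Python. This is not the book's implementation or the full ACER algorithm; the class and method names (`ReplayBuffer`, `push`, `sample`) are our own illustration, and the capacity and batch size are arbitrary choices.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions for off-policy reuse (illustrative sketch)."""

    def __init__(self, capacity=50000):
        # Oldest transitions are discarded automatically once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Record one transition observed while the actor interacts with the environment."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        """Draw a random mini-batch of stored transitions for a training update."""
        batch = random.sample(self.buffer, batch_size)
        # Regroup the batch into separate arrays of states, actions, and so on
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

During training, the actor-critic update would sample mini-batches from this buffer instead of using only the latest step. Note that full ACER does more than this sketch shows: because replayed transitions were generated by older versions of the policy, it also corrects for the mismatch using truncated importance sampling with a bias-correction term.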
