9
Ways to Speed up RL
In Chapter 8, DQN Extensions, you saw several practical tricks to make the deep Q-network (DQN) method more stable and converge faster. Those tricks modified the basic DQN method itself (for example, by injecting noise into the network or unrolling the Bellman equation) to get a better policy with less time spent on training. But there is another way: tweaking the implementation details of the method to improve training speed. This is a pure engineering approach, but it is also important in practice.
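As a quick reminder (this is the standard n-step formulation from Chapter 8, not new material in this chapter), "unrolling the Bellman equation" means replacing the one-step target with an n-step return:

$$Q(s_t, a_t) \approx \sum_{k=0}^{n-1} \gamma^{k} r_{t+k} + \gamma^{n} \max_{a'} Q(s_{t+n}, a')$$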
In this chapter, we will:
- Take the Pong environment from Chapter 8 and try to get it solved as fast as possible
- Get Pong solved 3.5 times faster, step by step, on exactly the same commodity hardware (one such engineering tweak is sketched after this list)
- Discuss fancier ...
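To make the engineering angle concrete, here is a minimal sketch (not the book's code) of one common implementation-level speedup: running several environment copies in parallel so that experience collection stops bottlenecking the network. It assumes the gymnasium library; the environment ID and the number of copies are illustrative, and for Pong you would pass a factory building the wrapped Atari environment instead.

```python
# A minimal sketch of parallel experience collection, assuming gymnasium
# is installed. Env id and N_ENVS are illustrative, not from the chapter.
import gymnasium as gym

N_ENVS = 8  # assumed number of parallel environment copies

# AsyncVectorEnv steps every copy in its own worker process; for Pong
# you would pass a factory building the wrapped Atari environment.
envs = gym.vector.AsyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(N_ENVS)]
)

obs, _ = envs.reset(seed=0)
for _ in range(1_000):
    # A real agent would compute all N_ENVS actions in a single batched
    # forward pass; random sampling keeps the sketch self-contained.
    actions = envs.action_space.sample()
    obs, rewards, terminations, truncations, infos = envs.step(actions)
envs.close()
```

The payoff is that every network forward pass can now produce N_ENVS transitions instead of one, without touching the RL method itself. That is exactly the kind of implementation detail this chapter explores.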