Working with a DQN on Atari

Now that we've seen the filters that CNNs produce as output, the best way to understand how they work is to look at the code that constructs them. Before we get to that, though, let's begin a new exercise in which we use a new form of DQN to solve Atari:

  1. Open this chapter's sample code, which can be found in the Chapter_7_DQN_CNN.py file. The code is fairly similar to Chapter_6_lunar.py, but with some critical differences. We will focus only on those differences in this exercise. If you need a fuller explanation of the code, review Chapter 6, Going Deep with DQN:
from wrappers import *
  2. Starting at the top, the only change is a new import from a local file called wrappers.py. We will examine what this does ...
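The contents of wrappers.py are not shown here, but files like it typically bundle environment wrappers used for Atari DQN preprocessing, such as frame stacking. As a hedged sketch, the following is a minimal, self-contained frame-stacking wrapper in the usual Gym style; the class names, the `DummyEnv` stand-in, and the `k=4` default are illustrative assumptions, not the book's actual code:

```python
from collections import deque


class FrameStack:
    """Stack the last k observations so the network can infer motion.

    Hypothetical sketch of one wrapper commonly found in an Atari
    wrappers.py; the book's actual file may differ.
    """

    def __init__(self, env, k=4):
        self.env = env
        self.k = k
        self.frames = deque(maxlen=k)  # keeps only the newest k frames

    def reset(self):
        obs = self.env.reset()
        # Fill the buffer with the first frame so the stack is full.
        for _ in range(self.k):
            self.frames.append(obs)
        return list(self.frames)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.frames.append(obs)  # oldest frame falls off automatically
        return list(self.frames), reward, done, info


class DummyEnv:
    """Stand-in environment producing integer 'frames' for demonstration."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 0.0, False, {}


env = FrameStack(DummyEnv(), k=4)
print(env.reset())        # [0, 0, 0, 0]
obs, _, _, _ = env.step(0)
print(obs)                # [0, 0, 0, 1]
```

In a real Atari setup, each observation would be a preprocessed image rather than an integer, and the stacked frames would be fed to the CNN as channels.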
