Putting everything together

Now that we have implemented the two main components of AlphaGo Zero, the PolicyValueNetwork and the MCTS algorithm, we can build the controller that handles training. At the very beginning of the training procedure, we initialize a model with random weights and generate 100 self-play games. Five percent of those games and their results are held out for validation; the rest are kept for training the network. After this first initialization and self-play pass, the controller loops through the following steps, sketched in code after the list:

  1. Generate self-play data
  2. Collate self-play data to create TFRecords
  3. Train network using collated self-play data
  4. Validate on holdout dataset
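
Step 2, the collation of self-play data into TFRecords, is worth a concrete sketch. The snippet below is a minimal illustration, assuming each self-play position is stored as a (state, search probabilities, outcome) triple and that whole games are split roughly 95/5 between training and holdout files; the feature names and the `collate_game_data` helper are our own illustrative choices, and we use the TF 2.x `tf.io.TFRecordWriter` API rather than whatever serialization the book's code actually uses.

```python
import random

import numpy as np
import tensorflow as tf


def make_example(state, pi, z):
    """Serialize one self-play position as a tf.train.Example."""
    feature = {
        # Board state, flattened to raw float32 bytes.
        'state': tf.train.Feature(bytes_list=tf.train.BytesList(
            value=[np.asarray(state, np.float32).tobytes()])),
        # MCTS visit-count distribution over moves (the policy target).
        'pi': tf.train.Feature(float_list=tf.train.FloatList(
            value=np.asarray(pi, np.float32).tolist())),
        # Game outcome from the current player's perspective (the value target).
        'z': tf.train.Feature(float_list=tf.train.FloatList(value=[float(z)])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))


def collate_game_data(games, train_path, holdout_path, holdout_frac=0.05):
    """Write ~95% of games to a training TFRecord and ~5% to a holdout file."""
    with tf.io.TFRecordWriter(train_path) as train_writer, \
         tf.io.TFRecordWriter(holdout_path) as holdout_writer:
        for game in games:
            # Hold out whole games rather than individual positions, so
            # correlated positions from one game never span both sets.
            writer = holdout_writer if random.random() < holdout_frac else train_writer
            for state, pi, z in game:
                writer.write(make_example(state, pi, z).SerializeToString())
```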

After every execution of step 3, the resulting model is stored in a directory ...
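
Tying the steps together, the following is a hedged sketch of the controller loop under the assumptions above; `generate_selfplay_games`, `train_network`, `validate`, and `network.save` are hypothetical placeholder names standing in for the components built earlier in the chapter, not the book's actual API.

```python
import os

GAMES_PER_ITERATION = 100  # matches the 100 self-play games mentioned above
MODEL_DIR = 'models'       # illustrative directory for storing trained models


def training_pipeline(network, num_iterations):
    """Repeat the self-play -> collate -> train -> validate cycle."""
    os.makedirs(MODEL_DIR, exist_ok=True)
    for i in range(num_iterations):
        # Step 1: generate self-play data with the current network and MCTS.
        games = generate_selfplay_games(network, GAMES_PER_ITERATION)
        # Step 2: collate self-play data into train/holdout TFRecords.
        collate_game_data(games, 'train.tfrecord', 'holdout.tfrecord')
        # Step 3: train the network on the collated self-play data.
        train_network(network, 'train.tfrecord')
        # Store the resulting model after every step 3 (placeholder call).
        network.save(os.path.join(MODEL_DIR, 'model_{:05d}'.format(i)))
        # Step 4: validate the updated network on the holdout dataset.
        validate(network, 'holdout.tfrecord')
```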
