Now that we have implemented the two main components of AlphaGo Zero—the PolicyValueNetwork and the MCTS algorithm—we can build the controller that handles training. At the start of the training procedure, we initialize a model with random weights. Next, we generate 100 self-play games. Five percent of those games and their results are held out for validation; the rest are kept for training the network. After this initial setup, we loop through the following steps:
1. Generate self-play data
2. Collate the self-play data to create TFRecords
3. Train the network using the collated self-play data
4. Validate on the holdout dataset
After each execution of step 3, the resulting model is stored in a directory ...
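The loop above can be sketched as a small controller. This is a minimal illustration only: the function names (`generate_self_play`, `collate`, `train_network`, `validate`) and the dictionary-based model stand in for the real PolicyValueNetwork, MCTS self-play, and TFRecord machinery, which are assumed rather than shown here.

```python
# Hypothetical sketch of the training controller loop described above.
# All helper names below are illustrative stand-ins, not an actual API.

def generate_self_play(model, n_games=100):
    """Step 1: play n_games self-play games with the current model (stub)."""
    return [{"moves": [], "result": 1 if i % 2 == 0 else -1}
            for i in range(n_games)]

def collate(games, holdout_frac=0.05):
    """Step 2: split games into training data and a 5% validation holdout.
    The real pipeline would also serialize the training games to TFRecords."""
    n_holdout = max(1, int(len(games) * holdout_frac))
    return games[n_holdout:], games[:n_holdout]

def train_network(model, train_games):
    """Step 3: update the model on the collated training data (stub)."""
    return {"version": model["version"] + 1}

def validate(model, holdout_games):
    """Step 4: evaluate the freshly trained model on the holdout set (stub)."""
    return len(holdout_games)

def training_loop(iterations=3):
    model = {"version": 0}                          # start from random weights
    saved_versions = []
    for _ in range(iterations):
        games = generate_self_play(model)           # step 1
        train_games, holdout = collate(games)       # step 2
        model = train_network(model, train_games)   # step 3
        saved_versions.append(model["version"])     # store model after step 3
        validate(model, holdout)                    # step 4
    return model, saved_versions
```

Each pass through the loop produces a new model version, mirroring how the real controller stores a checkpoint after every training step before validating it.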