Monte Carlo Tree Search

In Monte Carlo Tree Search (MCTS), the environment model is represented by a search tree. Let's say that the agent is at some state, s. Our immediate goal is to select the next action (and our main goal is to maximize the total reward). To do this, we'll create a new search tree with a single node (root): the state s. Then, we'll gradually build it node by node by playing simulated episodes. The edges of the tree will represent actions and the nodes will represent the states where the agent ends up. In the process of tree building (that is, playing simulations), we'll assign a performance value to each action (edge). Once we finish building it, we'll be able to select the action (starting from the root node, s) with the best performance value.
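
The following is a minimal sketch of this process, not the book's implementation. It assumes a hypothetical environment model, env, with legal_actions(state) and step(state, action) methods, and it uses the common UCB1 formula as the per-edge performance value during selection: the search repeatedly selects a path down the tree, expands a leaf, plays a random simulated episode (rollout), and propagates the collected reward back up to the root.

import math
import random

class Node:
    """One search-tree node: a state, with edges (actions) leading to child nodes."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}      # action -> child Node
        self.visit_count = 0
        self.total_reward = 0.0

    def ucb_score(self, c=1.4):
        """UCB1: average reward plus an exploration bonus for rarely visited nodes."""
        if self.visit_count == 0:
            return float("inf")
        exploit = self.total_reward / self.visit_count
        explore = c * math.sqrt(math.log(self.parent.visit_count) / self.visit_count)
        return exploit + explore

def rollout(env, state, max_steps=100):
    """Play a random simulated episode from state and return the accumulated reward."""
    total = 0.0
    for _ in range(max_steps):
        actions = env.legal_actions(state)
        if not actions:
            break
        state, reward, done = env.step(state, random.choice(actions))
        total += reward
        if done:
            break
    return total

def mcts(root_state, env, num_simulations=1000):
    """Build a search tree from root_state and return the most promising action."""
    root = Node(root_state)
    for _ in range(num_simulations):
        node = root
        # 1. Selection: descend by always taking the child with the highest UCB score.
        while node.children:
            _, node = max(node.children.items(), key=lambda kv: kv[1].ucb_score())
        # 2. Expansion: add one child node per legal action from the selected leaf.
        for action in env.legal_actions(node.state):
            next_state, _, _ = env.step(node.state, action)
            node.children[action] = Node(next_state, parent=node)
        if node.children:
            node = random.choice(list(node.children.values()))
        # 3. Simulation: estimate the value of the new node with a random rollout.
        reward = rollout(env, node.state)
        # 4. Backpropagation: update statistics on the path back to the root.
        while node is not None:
            node.visit_count += 1
            node.total_reward += reward
            node = node.parent
    # Select the root action whose subtree was explored the most.
    return max(root.children.items(), key=lambda kv: kv[1].visit_count)[0]

In this sketch, the final action is chosen by visit count rather than by average reward, which is a common, more robust convention; either criterion fits the description above.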
