June 2018 · Intermediate to advanced · 546 pages · 13h 30m · English
The first method that we'll apply to our walking robot problem is A2C, which we experimented with in part three of the book. This choice of method is quite natural, as A2C is easy to adapt to the continuous action domain. As a quick refresher, A2C's idea is to estimate the gradient of our policy as

$\nabla J \approx \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a|s)\,(R - V_\theta(s))\big]$

The policy $\pi_\theta(a|s)$ is supposed to provide us with the probability distribution of actions given the observed state. The quantity $V_\theta(s)$ is called the critic; it equals the value of the state and is trained using the mean squared error ...
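To make the two pieces concrete, here is a minimal NumPy sketch of the A2C loss terms for a continuous action space, assuming a diagonal Gaussian policy (a common choice, though the book's implementation details may differ): the actor loss scales the action's log-probability by the advantage $R - V(s)$, and the critic loss is the mean squared error between $V(s)$ and the observed return. The function names and array shapes are illustrative assumptions, not the book's code.

```python
import numpy as np

def gaussian_log_prob(actions, mu, sigma):
    # Log-density of a diagonal Gaussian policy pi(a|s) = N(mu(s), sigma(s)),
    # summed over action dimensions (last axis).
    return -0.5 * (((actions - mu) / sigma) ** 2
                   + 2.0 * np.log(sigma)
                   + np.log(2.0 * np.pi)).sum(axis=-1)

def a2c_losses(actions, mu, sigma, values, returns):
    # Advantage A(s,a) = R - V(s): how much better the sampled action
    # did compared to the critic's baseline estimate of the state value.
    adv = returns - values
    # Actor loss: negative advantage-scaled log-probability, so that
    # minimizing it performs gradient ascent on the policy objective.
    policy_loss = -np.mean(gaussian_log_prob(actions, mu, sigma) * adv)
    # Critic loss: mean squared error between V(s) and the return R.
    value_loss = np.mean((values - returns) ** 2)
    return policy_loss, value_loss
```

In a real training loop, `mu`, `sigma`, and `values` would come from the network's heads and the losses would be backpropagated by an autograd framework; the arithmetic above is only the scalar objective being optimized.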