Reinforcement Learning with TensorFlow
book


by Sayon Dutta
April 2018
Content level: Intermediate to advanced
334 pages
10h 18m
English
Packt Publishing
Content preview from Reinforcement Learning with TensorFlow

Temporal difference rule

First, the temporal difference (TD) is the difference between the value estimates at two successive time steps. This differs from the outcome-based Monte Carlo approach, which requires a full look ahead to the end of the episode before the learning parameters can be updated. In temporal difference learning, only a one-step look ahead is performed: the value estimate of the state at the next step is used to update the current state's value estimate, so the learning parameters are updated along the way. The different rules for approaching temporal difference learning are the TD(1), TD(0), and TD(λ) rules. The basic notion in all the approaches ...
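The one-step update described above can be sketched in plain Python. This is a minimal illustration, not code from the book; the state indices, reward, step size (`alpha`), and discount factor (`gamma`) are made-up example values:

```python
def td0_update(V, s, r, s_next, alpha, gamma):
    """One TD(0) step: move V[s] toward the one-step target r + gamma * V[s_next]."""
    td_error = r + gamma * V[s_next] - V[s]  # temporal difference between two time steps
    V[s] += alpha * td_error                 # update immediately, no full episode needed
    return V

# Toy example: a value table for three states, observing the
# transition state 1 -> state 2 with reward 1.0.
V = [0.0, 0.0, 0.0]
V = td0_update(V, s=1, r=1.0, s_next=2, alpha=0.5, gamma=0.9)
print(V[1])  # 0.5
```

Unlike a Monte Carlo update, nothing after `s_next` is needed: the next state's current value estimate stands in for the rest of the return.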


Publisher Resources

ISBN: 9781788835725