
Deep Learning Quick Reference

by Mike Bernico
March 2018
Intermediate to advanced
272 pages
English
Packt Publishing

Infinite state space

This discussion of Q functions brings us to an important limitation of traditional reinforcement learning. As you may recall, it assumes a finite, discrete set of states. Unfortunately, that isn't the world we live in, nor is it the environment our agents will find themselves in much of the time. Consider an agent that plays ping pong. One important part of its state space would be the velocity of the ping pong ball, which is certainly not discrete. An agent that can see, like one we will cover shortly, would be presented with an image, which is a large, continuous space.
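For reference, the tabular Q-learning update derived from the Bellman equation is conventionally written as follows. This is the standard textbook form, shown here as a recap rather than a quote from the earlier chapter:

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]
```

Here s_t and a_t index into a table of Q values, which is exactly why every state must belong to a finite, enumerable set.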

The Bellman equation we discussed would require us to keep a big matrix of experienced rewards as we move from state to state. But, ...
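To make that matrix concrete, here is a minimal sketch of tabular Q-learning. The sizes, constant names, and the q_update helper are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical sizes, purely for illustration: a tabular agent has to
# enumerate every state and every action up front.
N_STATES = 500       # only possible if the state space is finite and discrete
N_ACTIONS = 4
ALPHA = 0.1          # learning rate
GAMMA = 0.99         # discount factor

# The "big matrix" of Q values: one row per state, one column per action.
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(state, action, reward, next_state):
    """One tabular Q-learning step; state and next_state must be integer indices."""
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```

This only works because each state is an integer row index. A continuous observation such as the ping pong ball's velocity (say, 3.217 m/s) has no row in Q, so the tabular approach breaks down.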


