Deep Reinforcement Learning Hands-On

by Maxim Lapan
June 2018
Content level: Intermediate to advanced
546 pages
13h 30m
English
Packt Publishing
Content preview from Deep Reinforcement Learning Hands-On

Chapter 5. Tabular Learning and the Bellman Equation

In the previous chapter, we became acquainted with our first Reinforcement Learning (RL) method, cross-entropy, and saw its strengths and weaknesses. In this new part of the book, we'll look at another group of methods, called Q-learning, which offer much more flexibility and power than cross-entropy.

This chapter will establish the required background shared by those methods. We'll also revisit the FrozenLake environment and show how these new concepts fit with it and help us address the environment's uncertainty.
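
As a reminder of that setup, here is a minimal sketch of loading FrozenLake with OpenAI Gym. The "FrozenLake-v0" id and the reset/step signatures below match the Gym releases from around the time the book was written; newer gymnasium versions rename the environment to FrozenLake-v1 and use a slightly different API.

import gym

# The 4x4 FrozenLake grid world: 16 discrete states and 4 discrete actions.
# Transitions are "slippery" by default, which is exactly the source of
# uncertainty this chapter sets out to handle.
env = gym.make("FrozenLake-v0")

obs = env.reset()                 # start in the top-left state, index 0
print(env.observation_space)      # Discrete(16)
print(env.action_space)           # Discrete(4)

# One random step returns the classic Gym tuple:
# new state, reward, episode-done flag, and an info dict.
new_obs, reward, done, info = env.step(env.action_space.sample())
print(new_obs, reward, done, info)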

Value, state, and optimality

You may remember our definition of the value of the state in Chapter 1, What is Reinforcement Learning?. This is a very important notion and the time ...
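For reference, a standard formulation consistent with that definition expresses the value of a state s as the expected discounted sum of rewards collected from s onward, where gamma is the discount factor and r_t the reward at step t:

V(s) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} \, r_{t+1} \;\middle|\; s_0 = s \right]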


ISBN: 9781788834247