Content preview from Hands-On Machine Learning with TensorFlow.js
- Come up with a situation around you that can be formulated as an MDP.
- Do you think we can use the state-value function to solve the MDP problem in the same way as we use the action-value function?
- Explore the action-value function result by changing the following hyperparameters for the four-state MDP introduced here:
- Discount rate
- Learning rate
- Reward in the transition from state 2 to 3
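To experiment with these hyperparameters, something like the following minimal tabular Q-learning sketch can serve as a starting point. The environment here is an assumed four-state chain (states 0 through 3, two actions, a reward of 1 on the transition from state 2 to state 3), not necessarily the book's exact MDP; `discountRate`, `learningRate`, and the reward value are the knobs the exercise asks you to vary.

```javascript
// Tabular Q-learning on a hypothetical four-state chain MDP (illustrative
// assumption, not the book's exact environment). States: 0..3. Actions:
// 0 = stay, 1 = advance. Reward: 1 only on the 2 -> 3 transition.
const NUM_STATES = 4;
const NUM_ACTIONS = 2;

// Hyperparameters to experiment with, as the exercise suggests.
const discountRate = 0.9;   // gamma
const learningRate = 0.1;   // alpha

// Q-table initialized to zero: q[state][action].
const q = Array.from({ length: NUM_STATES },
                     () => new Array(NUM_ACTIONS).fill(0));

// Environment step: action 1 advances toward state 3, action 0 stays put.
function step(state, action) {
  const next = action === 1 ? Math.min(state + 1, NUM_STATES - 1) : state;
  const reward = state === 2 && next === 3 ? 1 : 0;  // reward to tweak
  return { next, reward };
}

// One Q-learning update:
// Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
function update(state, action) {
  const { next, reward } = step(state, action);
  const target = reward + discountRate * Math.max(...q[next]);
  q[state][action] += learningRate * (target - q[state][action]);
  return next;
}

// Train for a few episodes with a random behavior policy.
for (let episode = 0; episode < 500; episode++) {
  let state = 0;
  for (let t = 0; t < 10 && state !== NUM_STATES - 1; t++) {
    state = update(state, Math.floor(Math.random() * NUM_ACTIONS));
  }
}
```

After training, the learned values should reflect the discounted reward structure: Q(2, advance) approaches 1, and values for earlier states decay by roughly one factor of the discount rate per step.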
- Use the following policies for the four-state MDP introduced in the chapter:
- Always choose action 1.
- Always choose action 2.
- Choose the action that maximizes the action value.
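The three policies above can be sketched as plain functions mapping a state to an action. The Q-table shape (an array indexed by state, then action) is an assumed representation for illustration; the exercise's "action 1" and "action 2" are encoded here as indices 0 and 1.

```javascript
// Policy 1: always choose action 1 (index 0), regardless of state.
const alwaysAction1 = (state) => 0;

// Policy 2: always choose action 2 (index 1), regardless of state.
const alwaysAction2 = (state) => 1;

// Policy 3: greedy — choose the action maximizing Q(s, a).
// qTable[state] is an array of action values for that state.
const greedy = (qTable) => (state) =>
  qTable[state].indexOf(Math.max(...qTable[state]));
```

Running each policy through the MDP and comparing the returns shows why the greedy policy dominates the two fixed ones once the action values have converged.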
- Run the CartPole example in the example code and observe how its behavior changes.
Publisher Resources
ISBN: 9781838821739