
Hands-On Machine Learning for Algorithmic Trading

by Stefan Jansen
December 2018
Beginner to intermediate
684 pages
English
Packt Publishing

From a value function to an optimal policy

The solution to an RL problem is a policy that maximizes the cumulative reward. Policies and value functions are closely connected: every policy $\pi$ induces a value for each state, $v_\pi(s)$, and each state-action pair, $q_\pi(s, a)$, namely the expected cumulative reward when following that policy, and an optimal policy achieves values at least as high as any other policy in every state. Hence, the optimal value functions, $v_*(s) = \max_\pi v_\pi(s)$ and $q_*(s, a) = \max_\pi q_\pi(s, a)$, implicitly define optimal policies and solve the MDP.
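As a minimal illustration (not taken from the book), the following sketch runs value iteration on a small synthetic MDP; the transition tensor P and reward matrix R are hypothetical stand-ins for an actual environment model. Once the value estimates converge to $v_*$, acting greedily with respect to the corresponding $q_*$ yields an optimal policy:

```python
# Minimal sketch (assumed example, not from the book): value iteration on a
# toy MDP, showing how the optimal value function implicitly defines a policy.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

# Hypothetical MDP: random transition probabilities P[s, a, s'] and
# expected rewards R[s, a].
rng = np.random.default_rng(42)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

v = np.zeros(n_states)
for _ in range(1000):
    # Backup: q(s, a) = r(s, a) + gamma * sum_s' P(s'|s, a) v(s'),
    # then v(s) <- max_a q(s, a).
    q = R + gamma * P @ v
    v_new = q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new

# The policy that acts greedily with respect to q* is optimal.
policy = q.argmax(axis=1)
print("v*:", v.round(3), "optimal policy:", policy)
```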

The optimal value functions, $v_*$ and $q_*$, also satisfy the Bellman equations from the previous section. These Bellman optimality equations can omit the explicit reference to a policy because it is implied by $v_*$ and $q_*$.
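In standard MDP notation, with dynamics $p(s', r \mid s, a)$ and discount factor $\gamma$ (the book's previous section may use slightly different symbols), the Bellman optimality equations can be written as:

$$v_*(s) = \max_a \sum_{s', r} p(s', r \mid s, a)\,\bigl[r + \gamma\, v_*(s')\bigr]$$

$$q_*(s, a) = \sum_{s', r} p(s', r \mid s, a)\,\Bigl[r + \gamma \max_{a'} q_*(s', a')\Bigr]$$

The $\max$ over actions replaces the expectation over a policy's action choices, which is why no explicit policy appears in these equations.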


ISBN: 9781789346411