Deep Reinforcement Learning Hands-On

The trading environment

As we have lots of code that is supposed to work with OpenAI Gym, we’ll implement the trading functionality following Gym’s Env class API, which should be familiar to you. Our environment is implemented in the StocksEnv class in the Chapter08/lib/environ.py module. It uses several internal classes to keep its state and encode observations. Let’s first look at the public API class.

import enum

class Actions(enum.Enum):
    Skip = 0
    Buy = 1
    Close = 2

We encode all available actions as an enumerator’s fields. We support a very simple set of actions with only three options: do nothing, buy a single share, and close the existing position.
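
As a quick illustration (a sketch, not the book's code, assuming the Actions class above is in scope), an agent's raw integer action can be mapped back to the enum before dispatching on it; the random index below is only a stand-in for a real policy's output:

import random

# Stand-in for an agent's output: an integer index into the action space.
action_idx = random.randrange(len(Actions))

# Map the raw index back to the enum for readable dispatch.
action = Actions(action_idx)
if action == Actions.Buy:
    print("open a long position of one share")
elif action == Actions.Close:
    print("close the existing position")
else:
    print("do nothing on this step")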

import gym

class StocksEnv(gym.Env):
    metadata = {'render.modes': ['human']}

This metadata field is required for gym.Env compatibility.
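
To show the Gym contract that StocksEnv has to fulfil, here is a minimal, hypothetical environment skeleton (an illustrative sketch, not the book's implementation): an Env subclass exposes action_space and observation_space and implements reset() and step(), with step() returning the usual (observation, reward, done, info) tuple.

import gym
import gym.spaces
import numpy as np

class MinimalTradingEnv(gym.Env):
    # Hypothetical skeleton of the classic Gym API; StocksEnv is far richer.
    metadata = {'render.modes': ['human']}

    def __init__(self, n_features=4):
        # Three discrete actions: Skip, Buy, Close.
        self.action_space = gym.spaces.Discrete(3)
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(n_features,), dtype=np.float32)
        self._step_idx = 0

    def reset(self):
        # Return the initial observation.
        self._step_idx = 0
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def step(self, action_idx):
        # Apply the action and return (observation, reward, done, info).
        self._step_idx += 1
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        reward = 0.0
        done = self._step_idx >= 100
        return obs, reward, done, {}

With such a class, the familiar obs = env.reset() / env.step(action) loop used in earlier chapters works unchanged.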
