RNNs with an LSTM architecture have more complex units that maintain an internal state. Each unit contains gates that keep track of dependencies between elements of the input sequence and regulate the cell state accordingly. These gates recurrently connect to each other, rather than to the usual hidden units we have encountered previously. They aim to address the problem of vanishing and exploding gradients by creating a path through which gradients can pass largely unchanged.
The following diagram shows unrolled LSTM units and outlines their typical gating mechanism (for more information, refer to the GitHub repository at https://github.com/PacktPublishing/Hands-On-Machine-Learning-for-Algorithmic-Trading):
A typical LSTM unit combines four parameterized layers ...
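To make the gating mechanism concrete, the following is a minimal NumPy sketch of a single LSTM time step, unrolled over a toy sequence. The `lstm_step` function, the weight dictionaries `W` and `b` keyed by gate name, and the toy dimensions are illustrative assumptions, not the book's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """Advance one LSTM unit by a single time step (illustrative sketch)."""
    z = np.concatenate([h_prev, x_t])     # stacked [h_{t-1}, x_t] input to all gates
    f = sigmoid(W["f"] @ z + b["f"])      # forget gate: what to erase from the cell state
    i = sigmoid(W["i"] @ z + b["i"])      # input gate: how much new information to admit
    g = np.tanh(W["g"] @ z + b["g"])      # candidate cell state
    c_t = f * c_prev + i * g              # additive update: the path gradients flow through
    o = sigmoid(W["o"] @ z + b["o"])      # output gate: what to expose as the hidden state
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Hypothetical toy dimensions: 3 input features, 4 hidden units.
n_in, n_hid = 3, 4
W = {k: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for k in "figo"}
b = {k: np.zeros(n_hid) for k in "figo"}

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.standard_normal((5, n_in)):  # unroll across a 5-step input sequence
    h, c = lstm_step(x_t, h, c, W, b)
print(h.round(3))
```

The additive cell-state update `c_t = f * c_prev + i * g` is the design choice behind the gradient-flow claim above: because the cell state is updated by element-wise addition rather than repeated multiplication by a weight matrix, gradients can propagate across many time steps without vanishing or exploding as quickly as in a plain RNN.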