GRUs simplify LSTM units by omitting the output gate. They have been shown to achieve similar performance on certain language modeling tasks, and to do better on smaller datasets.
GRUs aim to allow each recurrent unit to adaptively capture dependencies over different time scales. Like the LSTM unit, the GRU has gating units that modulate the flow of information inside the unit, but it discards the separate memory cell (for additional details, refer to the GitHub repository at https://github.com/PacktPublishing/Hands-On-Machine-Learning-for-Algorithmic-Trading).
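To make the gating mechanism concrete, the following is a minimal sketch of a single GRU step in NumPy, following the standard formulation (update gate, reset gate, candidate state, and interpolation into a single hidden state). The parameter names (`W_z`, `U_z`, and so on) and the toy dimensions are illustrative assumptions, not taken from the book's code repository:

```python
# Illustrative sketch only: one forward step of a GRU cell in NumPy.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """Compute the next hidden state from input x_t and previous state h_prev."""
    W_z, U_z, b_z = params["z"]   # update gate parameters
    W_r, U_r, b_r = params["r"]   # reset gate parameters
    W_h, U_h, b_h = params["h"]   # candidate state parameters

    # Update gate: how much of the new candidate state to let in.
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)
    # Reset gate: how much of the previous state to expose to the candidate.
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)
    # Candidate hidden state, computed on the reset-gated previous state.
    h_tilde = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev) + b_h)
    # Interpolate between the old state and the candidate; note there is
    # no separate memory cell and no output gate, unlike the LSTM.
    return (1.0 - z_t) * h_prev + z_t * h_tilde

# Toy usage: 4-dimensional inputs, 3-dimensional hidden state.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = {k: (rng.normal(size=(n_hid, n_in)),
              rng.normal(size=(n_hid, n_hid)),
              np.zeros(n_hid)) for k in ("z", "r", "h")}
h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # a short sequence of 5 time steps
    h = gru_step(x, h, params)
print(h)
```

The single interpolation step at the end is what replaces the LSTM's separate cell state and output gate: the update gate alone decides how much of the previous state is carried forward versus overwritten by the candidate.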