Appendix H
Gated Recurrent Units
This appendix is related to Chapter 10, “Long Short-Term Memory.”
In Chapter 10, we described long short-term memory (LSTM), introduced by Hochreiter and Schmidhuber in 1997. In 2014, Cho and colleagues (2014b) introduced the gated recurrent unit (GRU), which they described as “motivated by the LSTM unit but is much simpler to compute and implement.” Both LSTM and GRU are frequently used in modern recurrent neural networks (RNNs). To refresh your memory, we start with Figure H-1, which shows an LSTM-based layer and was previously shown as Figure 10-6 in Chapter 10.
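To make “simpler to compute” concrete, the following sketch builds two otherwise identical Keras models, one with an LSTM layer and one with a GRU layer, and compares their parameter counts. This is a minimal illustration, not code from the book’s examples; the layer size (64 units) and input shape (variable-length sequences of 16-dimensional feature vectors) are assumptions chosen for the comparison.

import tensorflow as tf
from tensorflow import keras

# LSTM-based model.
lstm_model = keras.Sequential([
    keras.Input(shape=(None, 16)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),
])

# GRU-based model: the GRU layer is a drop-in replacement for the LSTM layer.
gru_model = keras.Sequential([
    keras.Input(shape=(None, 16)),
    keras.layers.GRU(64),
    keras.layers.Dense(1),
])

# For the same number of units, the GRU layer has fewer parameters than the
# LSTM layer (two gates instead of three, and no separate cell state), which
# is one sense in which it is simpler to compute.
print(lstm_model.count_params(), gru_model.count_params())

Running this prints a smaller parameter count for the GRU model, roughly three-quarters that of the LSTM model, reflecting the GRU’s reduced gating structure.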