Introducing gated recurrent units

A Gated Recurrent Unit (GRU) is a type of recurrent block introduced in 2014 (see Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, https://arxiv.org/abs/1406.1078, and Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, https://arxiv.org/abs/1412.3555) as an improvement over the LSTM. A GRU cell usually matches or exceeds the performance of an LSTM cell, but it does so with fewer parameters and operations:

Figure: A GRU cell
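The parameter savings are easy to verify directly. As a rough sketch, the following snippet (assuming PyTorch and arbitrary example sizes of input_size=128 and hidden_size=256) counts the learnable parameters of single-layer LSTM and GRU modules. The GRU needs exactly three quarters of the LSTM's parameters, because it learns three sets of gate weights instead of four:

import torch.nn as nn

def num_params(module: nn.Module) -> int:
    """Count all learnable parameters of a module."""
    return sum(p.numel() for p in module.parameters())

# Arbitrary example sizes; the 3:4 ratio holds for any choice
lstm = nn.LSTM(input_size=128, hidden_size=256)
gru = nn.GRU(input_size=128, hidden_size=256)

print(num_params(lstm))  # 395264 -> 4 weight sets (input, forget, output gates, cell candidate)
print(num_params(gru))   # 296448 -> 3 weight sets (update, reset gates, candidate state)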

Like the classic RNN, a GRU cell has a single hidden state, h_t. You can think of it as a combination of the hidden and cell states of an LSTM.
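To make the cell's internals concrete, here is a minimal sketch of one GRU step in NumPy, following the standard formulation from the papers cited above. The parameter names (W_z, U_z, b_z, and so on) and the gru_cell helper are illustrative choices, not from the book:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, p):
    """One GRU step: compute h_t from the input x_t and the previous state.

    p holds the parameters: input weights W_z, W_r, W_h of shape
    (hidden, input), recurrent weights U_z, U_r, U_h of shape
    (hidden, hidden), and biases b_z, b_r, b_h of shape (hidden,).
    """
    # Update gate: how much of the old state to carry over
    z_t = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])
    # Reset gate: how much of the old state feeds the candidate
    r_t = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])
    # Candidate state, built from the input and the reset-gated old state
    h_cand = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r_t * h_prev) + p["b_h"])
    # Interpolate; some references swap the roles of z_t and (1 - z_t)
    return z_t * h_prev + (1.0 - z_t) * h_cand

# One step with random parameters (illustrative sizes)
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
p = {f"W_{g}": rng.standard_normal((n_h, n_in)) for g in "zrh"}
p |= {f"U_{g}": rng.standard_normal((n_h, n_h)) for g in "zrh"}
p |= {f"b_{g}": np.zeros(n_h) for g in "zrh"}
h_t = gru_cell(rng.standard_normal(n_in), np.zeros(n_h), p)

Because the update gate interpolates directly between h_prev and the candidate, the GRU needs no separate cell state, which is where its savings over the LSTM come from.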
