RNNs (see Chapter 18, Recurrent Neural Networks) were developed to capture the dynamics and potentially long-range dependencies found in sequential data. Similarly, sequence-to-sequence autoencoders aim to learn representations attuned to the sequential nature of the data.
Sequence-to-sequence autoencoders are built from RNN components such as Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) cells. They learn a compressed representation of sequential data and have been applied to video, text, audio, and time-series data.
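To make this concrete, the following is a minimal sketch of an LSTM-based sequence-to-sequence autoencoder in Keras. The sequence length, feature count, latent dimension, and dummy training data are illustrative assumptions, not the book's own implementation: the encoder compresses each sequence into a single latent vector, and the decoder unrolls that vector back into a sequence of the original length.

```python
# A minimal sketch, assuming fixed-length inputs of shape
# (timesteps, n_features) and a hypothetical latent dimension of 16.
import numpy as np
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

timesteps, n_features, latent_dim = 30, 4, 16  # illustrative values

inputs = Input(shape=(timesteps, n_features))
# Encoder: compress the whole sequence into one latent vector
encoded = LSTM(latent_dim)(inputs)
# Decoder: repeat the latent vector and unroll it back into a sequence
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(latent_dim, return_sequences=True)(decoded)
outputs = TimeDistributed(Dense(n_features))(decoded)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')

# Train the model to reconstruct its own inputs from the compressed code
X = np.random.rand(100, timesteps, n_features).astype('float32')  # dummy data
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)
```

After training, the encoder half (a `Model` from `inputs` to `encoded`) can be used on its own to extract the compressed representation of new sequences.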
As mentioned in the last chapter, encoder-decoder architectures allow RNNs to process input and output sequences of variable lengths. These architectures ...