The architectures discussed so far assume that the input and output sequences have equal length. Encoder-decoder architectures, also called sequence-to-sequence (seq2seq) architectures, relax this assumption and have become very popular for machine translation and seq2seq prediction in general.
The encoder is an RNN that maps the input space to a different space, also called the latent space, and the decoder is a complementary RNN that maps the encoded input to the target space, as the sketch below illustrates. In the next chapter, we will cover autoencoders, which learn a feature representation in an unsupervised setting using a variety of deep learning architectures.
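To make the encoder-decoder mapping concrete, the following is a minimal sketch of an LSTM-based seq2seq model in Keras, assuming one-hot encoded input and target sequences; the vocabulary sizes, latent dimensionality, and variable names are illustrative assumptions, not the book's own code:

```python
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model

num_encoder_tokens = 50   # assumed input vocabulary size (illustrative)
num_decoder_tokens = 60   # assumed target vocabulary size (illustrative)
latent_dim = 128          # dimensionality of the latent space (illustrative)

# Encoder: an LSTM reads the variable-length input sequence and keeps only
# its final hidden and cell states, which form the latent representation.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: a second LSTM is initialized with the encoder's final states and
# unrolls over the target sequence, so the output length is independent of
# the input length.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm_out = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens,
                        activation='softmax')(decoder_lstm_out)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
```

In this kind of setup, training typically uses teacher forcing: the decoder receives the target sequence shifted by one step as input and is trained to predict the unshifted target, while the only connection between the two RNNs is the latent state passed from encoder to decoder.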
Encoder-decoder ...