Introducing seq2seq models

Seq2seq, or encoder-decoder (see Sequence to Sequence Learning with Neural Networks at https://arxiv.org/abs/1409.3215), models use RNNs in a way that's especially suited for tasks with indirect many-to-many relationships between the input and the output, that is, where the input and output sequences may have different lengths. A similar model was also proposed in another pioneering paper, Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (see https://arxiv.org/abs/1406.1078 for more information). The following is a diagram of the seq2seq model. The input sequence [A, B, C, <EOS>] is decoded into the output sequence [W, X, Y, Z, <EOS>]:

A seq2seq model, based on https://arxiv.org/abs/1409.3215

The model consists of two parts: an encoder, which reads the input sequence and compresses it into a fixed-length hidden state (sometimes called the thought vector), and a decoder, which generates the output sequence from that state, one token at a time.
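To make the encoder/decoder split concrete, here is a minimal NumPy sketch of this idea with untrained, randomly initialized vanilla-RNN cells. All names, dimensions, and the greedy decoding loop are illustrative assumptions for this sketch, not code from the paper or the book; a real implementation would use a deep learning framework and trained weights.

```python
import numpy as np

# Illustrative sketch of a seq2seq (encoder-decoder) model with vanilla RNN cells.
# The weights are random and untrained, so the decoded output is arbitrary;
# the point is the data flow: input sequence -> thought vector -> output sequence.
rng = np.random.default_rng(0)

VOCAB, HIDDEN, EMBED = 10, 16, 8
EOS = 0  # assume token id 0 is the <EOS> marker

# Shared embedding table and RNN/output weights (hypothetical shapes).
E = rng.normal(0, 0.1, (VOCAB, EMBED))
W_xh = rng.normal(0, 0.1, (EMBED, HIDDEN))
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_hy = rng.normal(0, 0.1, (HIDDEN, VOCAB))

def rnn_step(x_emb, h):
    # One vanilla-RNN step: h' = tanh(x @ W_xh + h @ W_hh)
    return np.tanh(x_emb @ W_xh + h @ W_hh)

def encode(input_ids):
    # The encoder reads the whole input sequence and returns only
    # its final hidden state -- the fixed-length thought vector.
    h = np.zeros(HIDDEN)
    for token in input_ids:
        h = rnn_step(E[token], h)
    return h

def decode(h, max_len=20):
    # The decoder starts from the encoder's final state and greedily
    # emits one token per step until it produces <EOS> (or hits max_len).
    out, token = [], EOS  # feed <EOS> as the first decoder input
    for _ in range(max_len):
        h = rnn_step(E[token], h)
        token = int(np.argmax(h @ W_hy))
        out.append(token)
        if token == EOS:
            break
    return out

thought = encode([3, 1, 4, EOS])  # e.g. the sequence [A, B, C, <EOS>]
output = decode(thought)
```

Note that the thought vector has the same size regardless of the input length, and the decoder is free to emit an output sequence of a different length than the input, which is exactly the indirect many-to-many setup described above.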
