The decoder has to generate the entire output sequence based solely on the thought vector. For this to work, the thought vector has to encode all of the information of the input sequence. However, the encoder is an RNN, and we can expect its hidden state to carry more information about the most recent sequence elements than about the earliest ones.
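To make the role of the thought vector concrete, here is a minimal sketch of such an encoder-decoder, assuming PyTorch; the class, parameter, and variable names are illustrative only and not taken from any particular implementation:

```python
# A minimal seq2seq sketch: the decoder's only view of the input is the
# encoder's final (h, c) state -- the thought vector.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, embed_size=64, hidden_size=128):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed_size)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed_size)
        # Encoder and decoder share the same hidden size so the encoder's
        # final state can directly initialize the decoder.
        self.encoder = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the whole input sequence; keep only the final (h, c) state.
        _, thought_vector = self.encoder(self.src_embed(src))
        # The decoder receives nothing from the input except this fixed-size
        # state (plus the teacher-forced target tokens).
        dec_out, _ = self.decoder(self.tgt_embed(tgt), thought_vector)
        return self.out(dec_out)

model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
logits = model(torch.randint(0, 1000, (2, 7)),   # batch of source sequences
               torch.randint(0, 1000, (2, 5)))   # batch of target sequences
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Whatever the length of the input, the decoder only ever sees that one fixed-size (h, c) pair.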
Using LSTM cells and reversing the input sequence helps, but cannot prevent this information loss entirely. Because of this, the thought vector becomes something of a bottleneck: the seq2seq model works well for short sentences, but its performance deteriorates for longer ones. To solve this problem, Bahdanau et al. (https://arxiv.org/abs/1409.0473) proposed a seq2seq extension called ...
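The linked paper introduces the attention mechanism. As a rough illustration of its core idea (an additive alignment score computed over all encoder states, rather than just the final one), here is a minimal sketch, again assuming PyTorch, with purely illustrative names:

```python
# A sketch of additive (Bahdanau-style) attention: the decoder attends to
# every encoder hidden state instead of relying on a single thought vector.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.W_dec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.W_enc = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state:   (batch, hidden)          -- current decoder state
        # enc_outputs: (batch, src_len, hidden) -- all encoder hidden states
        scores = self.v(torch.tanh(
            self.W_dec(dec_state).unsqueeze(1) + self.W_enc(enc_outputs)
        )).squeeze(-1)                           # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)  # alignment weights
        # The context vector is a weighted sum over *all* encoder states,
        # so long inputs no longer have to fit into one fixed-size vector.
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights
```

At each decoding step, the context vector produced here would be combined with the decoder state to predict the next output token.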