December 2019
Intermediate to advanced
468 pages
14h 28m
English
The decoder has to generate the entire output sequence based solely on the thought vector. For this to work, the thought vector has to encode all of the information of the input sequence; however, the encoder is an RNN, and we can expect that its hidden state will carry more information about the latest sequence elements than the earliest. Using LSTM cells and reversing the input sequence helps, but cannot prevent this information loss entirely. Because of this, the thought vector becomes something of a bottleneck. As a result, the seq2seq model works well for short sentences, but its performance deteriorates for longer ones.
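A minimal NumPy sketch of this bottleneck (the weight names and dimensions here are illustrative, not from any particular library): a plain tanh RNN encoder compresses an input of any length into a single fixed-size final hidden state, which is all the decoder ever sees.

```python
import numpy as np

def encode(seq, W_x, W_h):
    """Run a simple tanh RNN over `seq` and return the final
    hidden state -- the fixed-size "thought vector"."""
    h = np.zeros(W_h.shape[0])
    for x in seq:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8  # illustrative sizes
W_x = rng.normal(scale=0.5, size=(d_hid, d_in))
W_h = rng.normal(scale=0.5, size=(d_hid, d_hid))

short = rng.normal(size=(3, d_in))   # a 3-token input
long = rng.normal(size=(50, d_in))   # a 50-token input

# Both inputs are squeezed into the same 8-dimensional vector;
# the longer one must discard proportionally more information.
print(encode(short, W_x, W_h).shape)  # (8,)
print(encode(long, W_x, W_h).shape)   # (8,)
```

Because the capacity of the thought vector is fixed while the input length is not, longer inputs are compressed more aggressively, which is exactly why translation quality degrades on long sentences.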