October 2018
Intermediate to advanced
252 pages
6h 49m
English
We will develop a basic character-level seq2seq model for text summarization. A word-level model is also quite common in the domain of text processing, but for our recipe we will use a character-level model. As mentioned earlier, the encoder-decoder architecture is a way of structuring RNNs for sequence prediction. The encoder reads the entire input sequence and encodes it into an internal representation, usually a fixed-length vector called the context vector. The decoder, in turn, reads this encoded representation from the encoder and produces the output sequence.
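To make the idea concrete, here is a minimal NumPy sketch of the two halves (names and dimensions are illustrative, and the weights are random rather than trained): the encoder folds every input character into a single fixed-length context vector, and the decoder unrolls from that vector, feeding each predicted character back in as the next input.

```python
import numpy as np

np.random.seed(0)

chars = list("abcdefgh ")                       # toy character vocabulary
char_to_id = {c: i for i, c in enumerate(chars)}
vocab, hidden = len(chars), 16

# Randomly initialised RNN weights; training is omitted in this sketch.
Wx_enc = np.random.randn(hidden, vocab) * 0.1
Wh_enc = np.random.randn(hidden, hidden) * 0.1
Wx_dec = np.random.randn(hidden, vocab) * 0.1
Wh_dec = np.random.randn(hidden, hidden) * 0.1
Wy     = np.random.randn(vocab, hidden) * 0.1

def one_hot(i):
    v = np.zeros(vocab)
    v[i] = 1.0
    return v

def encode(text):
    """Read every input character; the final hidden state is the context vector."""
    h = np.zeros(hidden)
    for c in text:
        h = np.tanh(Wx_enc @ one_hot(char_to_id[c]) + Wh_enc @ h)
    return h                                    # fixed length, whatever the input length

def decode(context, steps):
    """Unroll from the context vector, feeding each prediction back as input."""
    h, x, out = context, np.zeros(vocab), []
    for _ in range(steps):
        h = np.tanh(Wx_dec @ x + Wh_dec @ h)
        idx = int(np.argmax(Wy @ h))            # greedy choice of the next character
        out.append(chars[idx])
        x = one_hot(idx)
    return "".join(out)

context = encode("abc def gh")
summary = decode(context, steps=4)
print(context.shape, summary)
```

Note the key property the prose describes: `encode` returns a vector of the same size for any input length, and `decode` needs nothing but that vector to start generating. In the recipes that follow, the same shape is realized with trained LSTM layers instead of this random single-layer RNN.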
The encoder-decoder architecture consists of two primary models: one reads the input sequence and encodes it to a fixed-length vector, ...