We can solve this problem with the help of the attention mechanism (see Neural Machine Translation by Jointly Learning to Align and Translate at https://arxiv.org/abs/1409.0473), an extension of the seq2seq model that provides a way for the decoder to work with all of the encoder's hidden states, not just the last one.
Besides solving the bottleneck problem, the attention mechanism has some other advantages. For one, the immediate access to all previous states helps to prevent the vanishing gradients problem. It also allows for some interpretability of the results, because we can see which parts of the input the decoder attended to when producing each element of the output.
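To make the idea concrete, here is a minimal NumPy sketch of a single Bahdanau-style additive attention step, in the spirit of the paper cited above. The names and shapes (hidden_size, attn_size, the random toy weights) are illustrative assumptions rather than code from a trained model: the current decoder state is scored against every encoder hidden state, the scores are normalized with softmax, and the resulting weights produce the context vector; the same weights are what we can inspect for interpretability.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def additive_attention(decoder_state, encoder_states, W_dec, W_enc, v):
    """One step of Bahdanau-style additive attention.

    decoder_state:  (hidden_size,)           current decoder hidden state s_{t-1}
    encoder_states: (seq_len, hidden_size)   all encoder hidden states h_1..h_T
    W_dec, W_enc:   (attn_size, hidden_size) learned projection matrices
    v:              (attn_size,)             learned scoring vector

    Returns the context vector (a weighted sum of all encoder states)
    and the attention weights over the input positions.
    """
    # Alignment scores: e_{t,i} = v^T tanh(W_dec s_{t-1} + W_enc h_i)
    scores = np.tanh(encoder_states @ W_enc.T + decoder_state @ W_dec.T) @ v
    weights = softmax(scores)          # alpha_{t,i}, sums to 1 over the input
    context = weights @ encoder_states  # weighted sum of all encoder states
    return context, weights

# Toy example with randomly initialized (untrained) weights
rng = np.random.default_rng(0)
seq_len, hidden_size, attn_size = 5, 8, 16
encoder_states = rng.standard_normal((seq_len, hidden_size))
decoder_state = rng.standard_normal(hidden_size)
W_dec = rng.standard_normal((attn_size, hidden_size))
W_enc = rng.standard_normal((attn_size, hidden_size))
v = rng.standard_normal(attn_size)

context, weights = additive_attention(decoder_state, encoder_states, W_dec, W_enc, v)
print("attention weights:", np.round(weights, 3))  # which input positions the decoder focuses on
print("context vector shape:", context.shape)
```

In a full model, these weights would be recomputed at every decoding step, so plotting them as a matrix (output steps versus input positions) shows which input tokens influenced each output token.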