Traditional many-to-many architecture

In this architecture, we embed each input word into a 128-dimensional vector, so the embedded input has a shape of (batch_size, 17, 128). We choose this setup because, in this version, we want to test the scenario where the input data has 17 time steps and the output data also has 17 time steps.
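The embedding shape can be verified with a minimal sketch using Keras's Embedding layer; the vocabulary size of 1,000 and the batch of random word IDs below are assumptions for illustration only:

```python
import numpy as np
import tensorflow as tf

vocab_size = 1000   # hypothetical vocabulary size (an assumption)
embedding_dim = 128
num_steps = 17

# A batch of 4 sentences, each a sequence of 17 word IDs
word_ids = np.random.randint(0, vocab_size, size=(4, num_steps))

# The Embedding layer maps each word ID to a 128-dimensional vector,
# so the output shape is (batch_size, time_steps, embedding_dim)
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)
embedded = embedding(word_ids)
print(embedded.shape)  # (4, 17, 128)
```

Note that the time-step axis comes before the embedding axis, which is the layout the LSTM layer expects.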

We connect each input time step to the corresponding output time step through an LSTM, and then apply a softmax on top of the predictions:

  1. Create the input and output datasets. Note that we have decoder_input_data and decoder_target_data. For now, let us create decoder_input_data as the word IDs corresponding to the target sentence words. decoder_target_data is the one-hot-encoded version of the target data ...
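The step above can be sketched in NumPy; the vocabulary size and the random word IDs standing in for the target sentences are assumptions for illustration:

```python
import numpy as np

vocab_size = 10   # hypothetical vocabulary size (assumption for illustration)
num_steps = 17
batch_size = 2

# decoder_input_data: word IDs of the target sentences
# (random stand-ins here; in the recipe these come from the target corpus)
rng = np.random.default_rng(0)
decoder_input_data = rng.integers(0, vocab_size, size=(batch_size, num_steps))

# decoder_target_data: one-hot-encoded version of the target word IDs,
# with shape (batch_size, num_steps, vocab_size)
decoder_target_data = np.zeros((batch_size, num_steps, vocab_size), dtype="float32")
for i, sentence in enumerate(decoder_input_data):
    for t, word_id in enumerate(sentence):
        decoder_target_data[i, t, word_id] = 1.0
```

Each time step of decoder_target_data is then a valid probability target for the per-step softmax output of the LSTM.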
