With this in place, let us build the model on the input and output datasets we prepared in the previous section (step 1 of the many-to-hidden-to-many architecture from the previous section remains the same). The code file is available as Machine_translation.ipynb on GitHub.
- Build the model, as follows:
# We shall convert each word into a 128-sized vector
embedding_size = 128
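To make the effect of this choice concrete, here is a minimal, self-contained sketch (the vocabulary size of 500 and the names demo_vocab_size, demo_model, and tokens are made up for illustration) showing how a Keras Embedding layer maps integer token indices to 128-dimensional vectors:

import numpy as np
from keras.layers import Input, Embedding
from keras.models import Model

embedding_size = 128
demo_vocab_size = 500  # hypothetical vocabulary size, for this demo only

# A toy model that embeds a batch of token-index sequences
inp = Input(shape=(None,))
emb = Embedding(demo_vocab_size, embedding_size)(inp)
demo_model = Model(inp, emb)

# A batch of 2 sentences, each 4 tokens long (random indices)
tokens = np.random.randint(0, demo_vocab_size, size=(2, 4))
print(demo_model.predict(tokens).shape)  # -> (2, 4, 128)

Each integer index is thus replaced by a learned 128-dimensional vector, which is exactly what the encoder's LSTM will consume in the next step.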
- Prepare the encoder model:
from keras.layers import Input, Embedding, LSTM

encoder_inputs = Input(shape=(None,))
en_x = Embedding(num_encoder_tokens+1, embedding_size)(encoder_inputs)
encoder = LSTM(256, return_state=True)
encoder_outputs, state_h, state_c = encoder(en_x)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
Note that we are using a ...
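As a quick sanity check on the encoder wiring, the sketch below (assuming num_encoder_tokens is already defined from the data-preparation step; the name probe is made up for illustration) wraps the encoder inputs and states in a temporary Model so we can inspect the shapes:

from keras.models import Model

# Temporary model exposing the encoder's hidden and cell states
probe = Model(encoder_inputs, encoder_states)
probe.summary()
# Feeding a batch of padded token sequences to probe.predict would return
# two arrays, state_h and state_c, each of shape (batch_size, 256),
# since the LSTM above was created with 256 units.

These two state vectors summarize the entire input sentence and are what we will pass to the decoder as its initial state.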