Now we can define our RNN architecture. The first layer learns the word embeddings. As before, we define the embedding dimension, using the input_dim keyword to set the number of tokens that we need to embed, the output_dim keyword to set the size of each embedding, and the input_length keyword to set how long each input sequence is going to be.
Note that we are using GRUs this time, which train faster and perform better on smaller amounts of data. We are also using dropout for regularization, as follows:
from keras.models import Sequential
from keras.layers import Embedding, GRU, Dense

embedding_size = 100

rnn = Sequential([
    # Learn an embedding for each of the vocab_size tokens
    Embedding(input_dim=vocab_size, output_dim=embedding_size,
              input_length=maxlen),
    # GRU layer with dropout on inputs and recurrent connections
    GRU(units=32, dropout=0.2, recurrent_dropout=0.2),
    # Single sigmoid unit for binary classification
    Dense(1, activation='sigmoid')
])
rnn.summary()
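To put the model to use, it would then be compiled and fit in the usual Keras way. The following is a minimal sketch, assuming a binary classification target to match the sigmoid output above; X_train, y_train, X_test, and y_test are hypothetical arrays of padded sequences and labels that are not defined in this excerpt, and the optimizer, batch size, and epoch count are illustrative choices only:

# Sketch only: the loss follows from the sigmoid output above;
# X_train, y_train, X_test, y_test are placeholders for padded
# sequence data and binary labels prepared earlier.
rnn.compile(loss='binary_crossentropy',
            optimizer='rmsprop',
            metrics=['accuracy'])

rnn.fit(X_train, y_train,
        batch_size=32,
        epochs=25,
        validation_data=(X_test, y_test))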
The resulting ...