Our final strategy is to look up the embeddings from a pre-trained network. The simplest way to do this with the current examples is to set the trainable parameter of the Embedding layer to False. This ensures that backpropagation will not update the weights of the embedding layer:
model.add(Embedding(vocab_sz, EMBED_SIZE, input_length=maxlen,
                    weights=[embedding_weights], trainable=False))
model.add(SpatialDropout1D(0.2))
Setting this value with the word2vec and GloVe examples gave us accuracies of 98.7% and 98.9%, respectively, after 10 epochs of training.
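If the model has already been built with a trainable embedding layer, the same effect can be achieved afterwards through the layer's trainable attribute. The following is a minimal sketch, assuming the Embedding layer is the first layer added, as in the snippet above; the loss and optimizer shown are placeholders rather than the settings used in our example:

# Minimal sketch: freeze the embedding weights after the model has been built.
# Assumes the Embedding layer was added first, as in the snippet above.
model.layers[0].trainable = False

# Recompile so the change takes effect before (re)training; the loss and
# optimizer here are placeholders.
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])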
However, this is not how you would generally use pre-trained embeddings in your code. Typically, the workflow involves preprocessing your dataset to create word vectors ...
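As a rough illustration of that workflow, the sketch below builds an embedding weight matrix from a pre-trained GloVe file and hands it to a frozen Embedding layer. The file name glove.6B.100d.txt, the word2index mapping, and the toy vocabulary are illustrative assumptions, not part of the example above:

import numpy as np

EMBED_SIZE = 100

# Illustrative vocabulary mapping (in practice this comes from tokenizing
# the training corpus); index 0 is reserved for padding/unknown words.
word2index = {"the": 1, "movie": 2, "was": 3, "great": 4}
vocab_sz = len(word2index) + 1

# 1. Load the pre-trained vectors into a dictionary (assumes the standard
#    GloVe text format: a word followed by its vector components).
glove_vectors = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        glove_vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")

# 2. Build the weight matrix, one row per word in the vocabulary; words not
#    found in GloVe keep a row of zeros.
embedding_weights = np.zeros((vocab_sz, EMBED_SIZE))
for word, index in word2index.items():
    vector = glove_vectors.get(word)
    if vector is not None:
        embedding_weights[index] = vector

# 3. The matrix is then passed to the frozen Embedding layer exactly as in
#    the earlier snippet, via weights=[embedding_weights] and trainable=False.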