With embeddings

In the previous recipe, we did not use any embeddings, such as Global Vectors for Word Representation (GloVe) or Word2vec; we will now add word embeddings using Keras's Embedding layer. Let's reuse the documents and labels from the preceding recipe. The code is as follows:

from numpy import array

# define documents
documents = ['Well done!',
             'Good work',
             'Great effort',
             'nice work',
             'Excellent!',
             'Weak',
             'Poor effort!',
             'not good',
             'poor work',
             'Could have done better.']

# define class labels
labels = array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

Keras provides a Tokenizer API for preparing text; once fit, a tokenizer can be reused to encode multiple sets of documents. A Tokenizer is constructed and then fit on the text documents, after which it can integer encode them. Here, words are assigned integer indices based on how frequently they appear in the fitted documents.
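
The following is a minimal sketch of the tokenizer-plus-embedding workflow just described, using the classic Keras preprocessing API. The vocabulary size, the padded length of 4, the 8-dimensional embedding, and the small classifier on top are illustrative choices, not fixed by the recipe:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

# fit the tokenizer on the documents; each word receives a unique integer index
tokenizer = Tokenizer()
tokenizer.fit_on_texts(documents)
vocab_size = len(tokenizer.word_index) + 1  # +1 because index 0 is reserved for padding

# integer encode the documents and pad them to a common length
encoded_docs = tokenizer.texts_to_sequences(documents)
max_length = 4  # illustrative choice: longest document here has four tokens
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')

# a small classifier: the Embedding layer learns an 8-dimensional vector per word
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(padded_docs, labels, epochs=50, verbose=0)
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy * 100))

If pre-trained vectors such as GloVe were used instead, the Embedding layer's weights could be initialized from a pre-built embedding matrix (via the weights argument) and frozen with trainable=False, rather than learned from scratch as above.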
