Training the embedding model

In the first example, we'll train a word2vec model on the classic novel War and Peace by Leo Tolstoy. The novel is stored as a regular text file in the code repository. Let's start:

  1. As the tradition goes, we'll do the imports:
import logging
import pprint  # beautify prints
import gensim
import nltk
  2. Then, we'll set the logging level to INFO so we can track the training progress:
logging.basicConfig(level=logging.INFO)
  3. Next, we'll implement the text tokenization pipeline. Tokenization refers to breaking up a text sequence into pieces (or tokens) such as words, keywords, phrases, symbols, and other elements. Tokens can be individual words, phrases, or even whole sentences. We'll implement two-level tokenization; ...
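To make the two-level idea concrete, here is a minimal sketch of such a pipeline: the text is first split into sentences, then each sentence into lowercase word tokens. For simplicity this sketch uses regular expressions as a stand-in for nltk's trained tokenizers (e.g. nltk.sent_tokenize, which requires downloading the punkt model); the function name and regexes are illustrative, not the book's exact implementation:

```python
import re

def tokenize_text(text):
    """Two-level tokenization: split text into sentences,
    then split each sentence into lowercase word tokens."""
    # Level 1: naive sentence split on ., !, or ? followed by whitespace
    # (a simplified stand-in for nltk.sent_tokenize)
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Level 2: extract alphabetic word tokens from each sentence
    return [re.findall(r"[a-z']+", s.lower()) for s in sentences if s]

sample = "The count smiled. Natasha danced! Did Pierre see it?"
for sentence_tokens in tokenize_text(sample):
    print(sentence_tokens)
```

The resulting list of token lists (one per sentence) is the format gensim's word2vec training expects as its sentences input.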
