Instead of training a new Word2Vec model from scratch, Google's pre-trained news word-vector model can be used. It was built with the word2vec tool, which provides an efficient implementation of the CBOW and skip-gram architectures for computing vector representations of words. These representations can subsequently be used in many NLP applications and in further research.
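As a minimal sketch of how the pre-trained vectors might be loaded, the following assumes the gensim library (not named in the text) and that the archive has already been downloaded and unpacked; the file name GoogleNews-vectors-negative300.bin is the standard name of Google's release, but the local path is an assumption:

```python
from gensim.models import KeyedVectors

# Load the pre-trained Google News vectors (path is an assumption;
# binary=True because the file is in word2vec's binary format).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Query the loaded vectors: nearest neighbors and a raw embedding.
print(vectors.most_similar("king", topn=3))
print(vectors["king"].shape)  # 300-dimensional vector
```

Loading the full 3-million-word model is memory-intensive; KeyedVectors holds only the trained vectors, without the training machinery, which keeps the footprint as small as possible.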
The model can be downloaded manually from https://code.google.com/p/word2vec/. The Word2Vec model takes a text corpus as input and produces word vectors as output. It first constructs a vocabulary from the training text data and then learns vector representations of the words.
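To illustrate that two-phase process (vocabulary construction, then training), here is a hedged sketch using gensim's Word2Vec class with its modern (4.x) API; the toy corpus and all hyperparameter values are assumptions for demonstration only:

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (assumption, for illustration).
sentences = [
    ["the", "quick", "brown", "fox", "jumps"],
    ["the", "lazy", "dog", "sleeps"],
]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(vector_size=100, window=5, min_count=1, sg=1)

# Phase 1: construct the vocabulary from the training text data.
model.build_vocab(sentences)

# Phase 2: learn vector representations of the words.
model.train(sentences,
            total_examples=model.corpus_count,
            epochs=model.epochs)

print(model.wv["fox"][:5])  # first few dimensions of a learned vector
```

Passing the corpus directly to the Word2Vec constructor runs both phases in one step; they are separated here only to mirror the description above.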