Word2Vec from Google News

The Word2Vec model that Google trained on the Google News dataset produces word vectors with 300 features. The number of features is a hyperparameter, and in your own applications you can, and perhaps should, experiment with it to see which setting yields the best results.
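The following is a minimal sketch of loading these vectors and confirming their dimensionality. It assumes the gensim library (version 4.x) is installed and that the published GoogleNews-vectors-negative300.bin.gz file has already been downloaded; adjust the file path to your own setup.

```python
from gensim.models import KeyedVectors

# Load the pretrained Google News vectors (the path is an assumption;
# point it at wherever you saved the downloaded file).
model = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True
)

# Each word in the vocabulary maps to a 300-dimensional vector.
print(model.vector_size)        # 300
print(model["computer"].shape)  # (300,)
```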

In this pretrained model, some stop words such as a, and, and of are excluded, while others such as the, also, and should are included. Some misspelled words are also present; for example, both mispelled and misspelled appear, the latter being the correct spelling.
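You can verify this kind of vocabulary membership yourself. Continuing the sketch above (and assuming the same loaded model), the snippet below checks a few example tokens against the vocabulary:

```python
# Check which tokens appear in the pretrained vocabulary.
for word in ["a", "and", "of", "the", "also", "should",
             "mispelled", "misspelled"]:
    print(f"{word!r:14} in vocabulary: {word in model.key_to_index}")
```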

You can also use open source tools such as https://github.com/chrisjmccormick/inspect_word2vec to inspect the word embeddings in the pretrained model.
