This was the first chapter devoted to NLP. Appropriately, we started with the basic building blocks of most NLP algorithms today: words and their context-based vector representations. We covered n-grams and the motivation for representing words as vectors, and then discussed the word2vec, fastText, and GloVe models. Finally, we implemented a simple pipeline to train an embedding model and visualized the resulting word vectors with t-SNE.
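For concreteness, here is a minimal sketch of that kind of pipeline, assuming gensim (4.x) for skip-gram word2vec training and scikit-learn for the t-SNE projection; the toy corpus and all hyperparameter values are illustrative choices, not the chapter's actual code:

```python
from gensim.models import Word2Vec
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Toy corpus: a list of tokenized sentences (hypothetical data).
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "animals"],
]

# Train a skip-gram (sg=1) word2vec model; vector_size, window, and
# min_count are small values chosen only to suit this tiny corpus.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

words = model.wv.index_to_key     # vocabulary, most frequent first
vectors = model.wv[words]         # array of shape (len(words), 50)

# Project the embeddings to 2-D with t-SNE. perplexity must be smaller
# than the number of samples, and the vocabulary here is tiny.
coords = TSNE(n_components=2, perplexity=5,
              random_state=0).fit_transform(vectors)

# Scatter-plot the 2-D coordinates and label each point with its word.
plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))
plt.show()
```

On a real corpus, semantically related words (for example, cat and dog) tend to land near each other in the 2-D projection, which is what makes t-SNE a convenient sanity check for a freshly trained embedding model.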
In the next chapter, we'll discuss RNNs—a neural network architecture that naturally lends itself to NLP tasks.