So far, we have covered how to apply dimensionality reduction and clustering to textual data. There is another type of unsupervised learning that is specific to text: word embeddings. You have probably heard of Word2Vec, which is one such algorithm.
The problem word embeddings try to solve is how to embed words into a low-dimensional vector space such that semantically similar words end up close together in this space, and unrelated words end up far apart.
For example, cat and dog should be rather close in this space, but laptop and sky should be quite far apart.
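To make "close" concrete, similarity between word vectors is usually measured as the cosine of the angle between them. Here is a minimal sketch in Java; the three-dimensional vectors for cat, dog, and laptop are made-up values for illustration only, not real embeddings.

```java
public class CosineSimilarity {

    // Cosine similarity: dot(a, b) / (||a|| * ||b||)
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Hypothetical 3-dimensional embeddings, for illustration only
        double[] cat    = {0.9, 0.8, 0.1};
        double[] dog    = {0.8, 0.9, 0.2};
        double[] laptop = {0.1, 0.2, 0.9};

        System.out.printf("cat vs dog:    %.3f%n", cosine(cat, dog));    // high similarity
        System.out.printf("cat vs laptop: %.3f%n", cosine(cat, laptop)); // low similarity
    }
}
```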
Here, we will implement a word embedding algorithm based on a co-occurrence matrix. It builds upon the ideas of LSA: there, we represented terms by the documents in which they appear. ...
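As a rough sketch of the first step, counting co-occurrences, the following Java snippet builds a word-by-context count map from a toy corpus using a fixed-size sliding window. The two-sentence corpus, the window size of 2, and the nested-map representation are all illustrative assumptions; a full implementation would turn these counts into a matrix and factorize it (for example with SVD, as in LSA) to obtain dense word vectors.

```java
import java.util.*;

public class CooccurrenceCounts {
    public static void main(String[] args) {
        // Toy corpus (assumption); in practice these would be tokenized documents
        List<String[]> sentences = List.of(
            new String[]{"the", "cat", "sat", "on", "the", "mat"},
            new String[]{"the", "dog", "sat", "on", "the", "rug"});

        int window = 2; // context window size on each side (assumption)

        // counts.get(word).get(context) = number of times context appears near word
        Map<String, Map<String, Integer>> counts = new HashMap<>();

        for (String[] tokens : sentences) {
            for (int i = 0; i < tokens.length; i++) {
                int lo = Math.max(0, i - window);
                int hi = Math.min(tokens.length - 1, i + window);
                for (int j = lo; j <= hi; j++) {
                    if (j == i) continue; // skip the word itself
                    counts.computeIfAbsent(tokens[i], k -> new HashMap<>())
                          .merge(tokens[j], 1, Integer::sum);
                }
            }
        }

        // Row of the co-occurrence matrix for "cat"
        System.out.println(counts.get("cat"));
    }
}
```

With real corpora, the rows of this matrix are already crude word vectors: words that occur in similar contexts get similar rows, which is exactly the property the low-dimensional embedding preserves after factorization.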