January 2019
Intermediate to advanced
386 pages
11h 13m
English
In the following diagram, we can see a 2D projection of some word embeddings (source: http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/). Words that are semantically close are also close to each other in the embedding space:
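To make "close in the embedding space" concrete, here is a minimal sketch using hand-picked toy 2D vectors (not learned embeddings) and cosine similarity, a common closeness measure for embeddings:

```python
import numpy as np

# Toy 2D "embeddings", hand-picked for illustration only;
# real embeddings are learned and have hundreds of dimensions.
embeddings = {
    "cat": np.array([0.9, 0.8]),
    "dog": np.array([0.85, 0.75]),
    "car": np.array([-0.7, 0.2]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words should score higher than unrelated ones
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
print(cosine_similarity(embeddings["cat"], embeddings["car"]))
```

With real embeddings such as word2vec or GloVe, the same comparison holds approximately: "cat" and "dog" score much higher than "cat" and "car".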

A surprising result is that these word embeddings can capture analogies between words as differences (as shown in the following diagram; source: https://www.aclweb.org/anthology/N/N13/N13-1090.pdf). For example, it might capture that the difference between the embedding of "woman" and "man" ...
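The analogy-as-difference idea can be sketched with toy vectors deliberately constructed so the "gender" offset is consistent; in real learned embeddings this relationship holds only approximately:

```python
import numpy as np

# Toy vectors constructed so that woman - man == queen - king.
# Real embeddings (e.g. word2vec) exhibit this only approximately.
vocab = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([2.0, 0.0]),
    "queen": np.array([2.0, 1.0]),
    "boy":   np.array([0.5, 0.0]),
    "girl":  np.array([0.5, 1.0]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    # Exclude the query words themselves, then pick the nearest neighbor
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

print(analogy("man", "woman", "king"))  # → queen
```

Excluding the query words from the candidate set matters in practice: with real embeddings, the nearest vector to `king - man + woman` is often "king" itself.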