Understanding word embedding

The BoW models that we discussed in the earlier section have a key limitation: they do not capture information about a word's meaning or context. As a result, potential relationships across a collection of words, such as contextual closeness, are lost. For example, the approach cannot determine that the words "cars" and "buses" both refer to vehicles that are often discussed in the context of transportation. Word embedding overcomes this limitation of the BoW approach by mapping semantically similar words to similar representations.
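To make the limitation concrete, here is a minimal sketch (in Python, for illustration only; the vocabulary and encoding are made up) showing that in a BoW-style representation every word occupies its own dimension, so two related words share nothing:

```python
# Tiny illustrative vocabulary (hypothetical, not from the book).
vocab = ["cars", "buses", "apple"]

def one_hot(word):
    # In a BoW-style encoding, each word gets a 1 in its own
    # position and 0 everywhere else.
    return [1.0 if w == word else 0.0 for w in vocab]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# "cars" and "buses" are orthogonal vectors: their dot product is 0,
# so this representation sees no relationship between the two vehicle words.
print(dot(one_hot("cars"), one_hot("buses")))  # 0.0
```

However similar two words are in meaning, their BoW vectors remain orthogonal, which is exactly the gap that word embedding fills.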

Word vectors represent words as multidimensional, continuous floating-point vectors, where semantically similar words are mapped to nearby points in the embedding space.
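The notion of "nearby points" is usually measured with cosine similarity. The following sketch uses hypothetical, hand-picked 3-dimensional vectors (real embeddings are learned from data and typically have 50-300 dimensions) to show how semantic closeness shows up as geometric closeness:

```python
import math

# Hypothetical word vectors with made-up values for illustration;
# in practice these would be learned by an embedding model.
vectors = {
    "cars":  [0.90, 0.80, 0.10],
    "buses": [0.85, 0.75, 0.20],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# The two vehicle words are closer to each other than to "apple".
print(cosine(vectors["cars"], vectors["buses"]))  # high, near 1.0
print(cosine(vectors["cars"], vectors["apple"]))  # much lower
```

Because similarity is now a continuous quantity rather than an exact-match test, relationships such as "cars is more like buses than like apple" fall out of the geometry directly.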
