Computers need to be taught to deal with context. Take, for example, the sentence "I like eating an apple." The computer needs to understand that here, apple is a fruit and not a company. We want words with the same meaning to have the same representation, or at least a similar one, so that machines can recognize that those words mean the same thing. The main objective of word embedding is to capture as much contextual, hierarchical, and morphological information about a word as possible.
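A minimal sketch of this idea, assuming gensim and its downloadable "glove-wiki-gigaword-50" pretrained vectors are available: words that appear in similar contexts end up with similar vectors, which we can check with cosine similarity.

```python
# Sketch: similar words get similar vectors (assumes gensim is installed and
# the "glove-wiki-gigaword-50" model can be downloaded on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# "apple" and "fruit" occur in similar contexts, so their similarity is
# relatively high; "apple" and "carburetor" much less so.
print(vectors.similarity("apple", "fruit"))
print(vectors.similarity("apple", "carburetor"))

# Nearest neighbours of "apple" reflect the contexts it appears in.
print(vectors.most_similar("apple", topn=5))
```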
Word embeddings can be broadly categorized into two types:
- Frequency-based embedding
- Prediction-based embedding
From the names, it is clear that frequency-based embedding uses a counting mechanism, whereas prediction-based embedding uses a predictive model, such as a neural network that learns word vectors by predicting a word from its surrounding context (or the context from the word).
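A minimal sketch contrasting the two families on a made-up toy corpus, assuming scikit-learn and gensim are installed; the corpus and parameter values are illustrative only, not a definitive recipe.

```python
# Frequency-based vs. prediction-based embeddings on a toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from gensim.models import Word2Vec

corpus = [
    "i like eating apple",
    "apple released a new phone",
    "i like eating banana",
]

# Frequency-based: each document becomes a vector of raw term counts.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names_out())  # learned vocabulary
print(counts.toarray())                    # term counts per document

# Prediction-based: a small skip-gram Word2Vec model learns dense word
# vectors by trying to predict surrounding words from each target word.
tokenized = [doc.split() for doc in corpus]
model = Word2Vec(tokenized, vector_size=20, window=2, min_count=1, sg=1, epochs=50)
print(model.wv["apple"])  # dense, learned representation of "apple"
```

On a corpus this small the learned vectors are not meaningful; the point is only to show that one approach counts occurrences while the other trains a model to predict them.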