Skip-gram

Given an input word, the Skip-gram model predicts its context (the opposite of CBOW). For example, in the sentence The quick brown fox jumps, the word brown will predict the words The quick fox jumps. Unlike CBOW, the input is a single one-hot-encoded word. But how do we represent the context words in the output? Instead of trying to predict the whole context (all surrounding words) simultaneously, Skip-gram transforms the context into multiple training pairs, such as (brown, the), (brown, quick), (brown, fox), and (brown, jumps). Once again, we can train the model with a simple one-layer network:

[Figure: A Skip-gram model network]
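To make the pair construction concrete, here is a minimal sketch in plain Python; the skipgram_pairs name, the sample sentence, and the window size of 2 are illustrative assumptions rather than the book's code:

```python
# Illustrative sketch: turn a token list into Skip-gram (input, context) pairs.
# The function name, sentence, and window size are assumptions for this example.

def skipgram_pairs(tokens, window=2):
    """Generate (center word, context word) training pairs."""
    pairs = []
    for i, center in enumerate(tokens):
        # Context = up to `window` words on each side of the center word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs("the quick brown fox jumps".split()))
# Among the output: (brown, the), (brown, quick), (brown, fox), (brown, jumps)
```

Note that each pair is a separate training example, so a single sentence yields many (input, context) samples for the network.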

As with CBOW, the output is a softmax, which represents the one-hot-encoded word most likely to appear in the context of the input word.
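The following toy training loop puts the pieces together: a one-layer network with softmax cross-entropy, trained on one-hot (input, context) pairs. Everything here, the vocabulary, the embedding size D, the learning rate, and the NumPy implementation, is an illustrative assumption, not the book's implementation:

```python
import numpy as np

# Toy corpus and (center, context) pairs with a window of 2 (assumed values)
tokens = "the quick brown fox jumps".split()
pairs = [(tokens[i], tokens[j])
         for i in range(len(tokens))
         for j in range(max(0, i - 2), min(len(tokens), i + 3))
         if i != j]

vocab = sorted(set(tokens))
word_to_idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                  # vocabulary size and (arbitrary) embedding size

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))     # input-to-hidden weights: the word embeddings
W_out = rng.normal(0, 0.1, (D, V))    # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for epoch in range(200):
    for center, context in pairs:
        c, t = word_to_idx[center], word_to_idx[context]
        h = W_in[c]                   # hidden layer: embedding of the input word
        p = softmax(h @ W_out)        # predicted distribution over context words
        grad = p.copy()
        grad[t] -= 1.0                # gradient of cross-entropy w.r.t. the logits
        W_in[c] -= lr * (W_out @ grad)
        W_out -= lr * np.outer(h, grad)

# After training, the rows of W_in are the learned word vectors; as a sanity
# check, print the context word the model rates most likely for "brown"
print(vocab[np.argmax(softmax(W_in[word_to_idx["brown"]] @ W_out))])
```

On a real corpus the full softmax over the vocabulary is expensive, which is why practical word2vec implementations replace it with approximations such as negative sampling or hierarchical softmax.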
