Hierarchical softmax

The denominator of the usual softmax is a sum over every word in the vocabulary. Recomputing this normalization at each update during training is a very expensive operation.
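To make the cost concrete, here is a minimal sketch (not taken from the book) of the softmax computed the straightforward way; the denominator sums the exponentiated score of every word, so each call pays for the full vocabulary size:

import numpy as np

def naive_softmax(scores):
    # 'scores' holds one score per word in the vocabulary, so both
    # the exponentiation and the sum below cost O(|V|)
    exp_scores = np.exp(scores - scores.max())  # shift for numerical stability
    return exp_scores / exp_scores.sum()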

Instead, we can break this down into a short sequence of much simpler calculations, which saves us from computing the expensive normalization over all words. In effect, for each word we use an approximation of the full softmax, as sketched below.
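One way to picture that sequence of calculations: the vocabulary is arranged as a binary tree (word2vec uses a Huffman tree), and a word's probability is the product of sigmoid decisions along the path from the root to that word's leaf, roughly log2(|V|) steps instead of |V|. A toy sketch, where names such as word_path and node_vectors are illustrative rather than gensim's internals:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hierarchical_softmax_prob(word_path, hidden, node_vectors):
    # word_path: (node_id, direction) pairs from the root to the word's leaf,
    #            where direction is +1 for a left branch and -1 for a right branch
    # hidden: the hidden-layer vector for the input/context word
    # node_vectors: one learned vector per internal node of the tree
    prob = 1.0
    for node_id, direction in word_path:
        # one cheap binary decision per internal node on the path
        prob *= sigmoid(direction * np.dot(node_vectors[node_id], hidden))
    return prob

Because the left and right branch probabilities at each node sum to one, the leaf probabilities still sum to one over the whole vocabulary; we have only replaced one large normalization with a handful of binary decisions.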

In practice, this approximation has worked so well that some systems use it at both training and inference time. For training, it can give a speedup of up to 50x (as per Sebastian Ruder, an NLP research blogger). In my own experiments, I have seen speed gains of around 15-25x.

# build the vocabulary for our model over the TED-talk documents
model.build_vocab(ted_talk_docs)
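In gensim, the hs flag controls this behaviour. A minimal sketch of enabling hierarchical softmax, assuming ted_talk_docs is the list of tagged TED-talk documents used in this chapter and that the model is a Doc2Vec model (the exact parameter values below are illustrative, not the book's settings):

from gensim.models.doc2vec import Doc2Vec

# hs=1 switches on hierarchical softmax; negative=0 turns off negative sampling
model = Doc2Vec(vector_size=100, hs=1, negative=0, epochs=10)
model.build_vocab(ted_talk_docs)
model.train(ted_talk_docs, total_examples=model.corpus_count, epochs=model.epochs)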
