Exploring Sentence-, Document-, and Character-Level Embeddings

In Chapter 5, Word Embeddings and Distance Measurements for Text, we looked at how the ordering of words, along with their semantics, can be taken into account when building embeddings to represent words. This chapter extends the idea of building embeddings. We will explore techniques for building embeddings for documents and sentences, as well as for words based on their constituent characters. We will start by looking at an algorithm called Doc2Vec, which, as the name suggests, provides document- or paragraph-level contextual embeddings. A sentence can essentially be treated as a paragraph, so embeddings for individual sentences can also be obtained ...
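
As a preview of the kind of workflow we will follow, here is a minimal sketch of training a Doc2Vec model with the gensim library. The corpus, tags, and hyperparameters below are purely illustrative assumptions (and the gensim 4.x attribute names are assumed); they are not tied to the chapter's later examples.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus: each document is tokenized and tagged with an integer id
corpus = [
    "the cat sat on the mat",
    "dogs and cats are popular pets",
    "document embeddings extend word embeddings",
]
tagged_docs = [TaggedDocument(words=text.split(), tags=[i])
               for i, text in enumerate(corpus)]

# Train a small Doc2Vec model (hyperparameters chosen only for illustration)
model = Doc2Vec(tagged_docs, vector_size=50, min_count=1, epochs=40)

# Embedding learned for the first training document, looked up by its tag
print(model.dv[0])

# A sentence is just a short document: infer an embedding for unseen text
print(model.infer_vector("cats like mats".split()))

The key idea the sketch illustrates is that each document gets its own learned vector alongside the word vectors, and infer_vector lets us produce an embedding for text the model has never seen, whether a full paragraph or a single sentence.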
