Chapter 6. Making Sentiment Programmable Using Embeddings
In Chapter 5 you saw how to take words and encode them into tokens. You then saw how to encode sentences full of words into sequences full of tokens, padding or truncating them as appropriate to end up with a well-shaped set of data that can be used to train a neural network. None of that, however, modeled the meaning of a word in any way. While it’s true that there’s no absolute numeric encoding that could encapsulate meaning, there are relative ones. In this chapter you’ll learn about them, and in particular the concept of embeddings, where vectors in high-dimensional space are created to represent words. The directions of these vectors can be learned over time from how the words are used in the corpus. Then, given a sentence, you can look at the directions of its word vectors, sum them up, and from the overall direction of that sum establish the sentiment of the sentence as a product of its words.
In this chapter we’ll explore how that works. Using the Sarcasm dataset from Chapter 5, you’ll build embeddings to help a model detect sarcasm in a sentence. You’ll also see some cool visualization tools that help you understand how words in a corpus get mapped to vectors so you can see which words determine the overall classification.
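As a rough preview of where this is heading, the sketch below shows how a Keras model can learn word vectors with an Embedding layer and pool them into a single sentence-level direction that a sigmoid output turns into a sarcasm score. The vocabulary size, sequence length, and embedding dimension here are placeholder values, not the chapter’s actual settings; the full model is built step by step later in the chapter.

```python
import tensorflow as tf

# Illustrative hyperparameters -- placeholder values so the sketch runs,
# not the values used later in the chapter:
vocab_size = 10000     # number of distinct tokens in the corpus
max_length = 100       # padded sequence length, as produced in Chapter 5
embedding_dim = 16     # dimensionality of each word vector

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_length,)),
    # Each token is looked up in a trainable table of word vectors
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    # Average the word vectors of a sentence into one overall "direction"
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    # Single sigmoid output: probability that the sentence is sarcastic
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
```

The key idea is that the Embedding layer’s vectors are trainable weights: as the network learns to separate sarcastic from non-sarcastic headlines, the word vectors drift toward directions that make that separation easy.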
Establishing Meaning from Words
Before we get into the higher-dimensional vectors for embeddings, let’s try to visualize how meaning can be derived from numerics ...