Chapter 10. Topic Modeling
In the previous chapter we covered some of the techniques used to extract information from text. These techniques can be complicated to implement, and they can also be slow. If the application requires the extracted information to be readable by users, these techniques are a good fit. If we are extracting information as part of an intermediate processing step—for instance, building features for a classifier—then we don’t need the information to be readable. As we saw in Chapters 5 and 7, simply using our vocabulary as features creates an unwieldy number of dimensions. Therefore, we want to reduce the dimensionality of our data. This is where distributional semantics comes in.
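To make the dimensionality problem concrete, here is a minimal sketch using scikit-learn rather than Spark NLP, with a made-up three-document corpus. Every distinct word in the vocabulary becomes its own feature column, which is exactly what gets unwieldy at corpus scale.

```python
# A minimal sketch (scikit-learn, not Spark NLP) of why raw vocabulary
# features get unwieldy: every distinct word becomes its own dimension.
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical toy corpus, purely for illustration
docs = [
    "the doctor examined the patient",
    "the banker approved the loan",
    "the patient thanked the doctor",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# One column per vocabulary word; real corpora easily reach
# hundreds of thousands of columns.
print(X.shape)
print(vectorizer.get_feature_names_out())
```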
Distributional semantics is the study of how the statistical distributions of elements of language can be used to characterize similarities between documents (e.g., emails), speech acts (e.g., spoken or written sentences), or elements thereof (e.g., phrases, words). The idea for this field comes from John R. Firth, a linguist working in the first half of the 20th century. He observed that semantics depends on context and coined the oft-repeated maxim, “You shall know a word by the company it keeps.”
The idea is that you can represent a word as a probability distribution over the contexts in which it appears. The words then exist in a vector space whose dimensions correspond to these contexts. For example, “doctor” will have a larger value along the medical dimension than along the financial dimension. However, ...
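Here is a minimal sketch of this idea in plain Python (not Spark NLP), using a hypothetical toy corpus and a window size of 2, both chosen purely for illustration. Each word is turned into a probability distribution over the words that co-occur with it.

```python
# A minimal sketch of representing words as probability distributions
# over their contexts. Corpus and window size are hypothetical.
from collections import Counter, defaultdict

corpus = [
    "the doctor examined the patient in the clinic",
    "the doctor reviewed the patient chart",
    "the banker reviewed the loan in the bank",
    "the banker examined the account balance",
]

window = 2  # context = words within 2 positions on either side
contexts = defaultdict(Counter)

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                contexts[word][tokens[j]] += 1

def distribution(word):
    """Normalize a word's context counts into a probability distribution."""
    counts = contexts[word]
    total = sum(counts.values())
    return {ctx: n / total for ctx, n in counts.items()}

# "doctor" and "banker" share some contexts ("the", "examined", "reviewed")
# but diverge on domain-specific ones ("patient" vs. "loan").
print(distribution("doctor"))
print(distribution("banker"))
```

In this picture, the keys of each distribution are the context dimensions, and words that keep similar company end up with similar distributions, which is precisely Firth’s intuition made computable.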