16. Traditional Natural Language Processing
Natural language processing (NLP) is a collection of techniques for working with human language. Examples include flagging e‐mails as spam, using Twitter to assess public sentiment, and finding which text documents are about similar topics. NLP is an area that many data scientists never actually need to touch. But enough of them end up needing it – and it is sufficiently different from other subjects – that it deserves a chapter in this book.
This chapter will start with several generic sections about NLP datasets and discussions of big‐picture concepts. Then, I will switch gears to specific NLP concepts, discussing them roughly in order of increasing sophistication.
I want to emphasize that NLP techniques are not strictly limited to language. I’ve also seen them used to parse computer log files, figuring out what “sentences” the computer generates. Personally, I first learned many of the statistical techniques while working with bioinformatics.
This chapter will mostly limit itself to what you could call “traditional” NLP techniques – those that predate recent advances in deep learning. They will not give you the near‐magical performance that can be had with Large Language Models (LLMs), but they are dramatically easier to understand, reason about, and hack.
The central concept I will discuss is what’s sometimes called a “bag of words,” a heavy‐handed way to condense a piece of text down into a vector suitable for numerical algorithms. ...
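To make the idea concrete, here is a minimal bag‐of‐words sketch in plain Python. The function name and the tiny example vocabulary are my own illustrations, not anything from this chapter: each document becomes a vector of word counts, one entry per vocabulary word, with all word order discarded.

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Condense a piece of text into a vector of word counts.

    Word order is thrown away; only how often each vocabulary
    word occurs survives into the vector.
    """
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["cat", "dog", "the"]
vec = bag_of_words("The cat sat near the dog", vocab)
# vec == [1, 1, 2]: one "cat", one "dog", two "the"
```

In practice you would use a library vectorizer (such as scikit-learn's `CountVectorizer`) rather than rolling your own, but the underlying operation is exactly this counting step.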