Building the corpus with tokenization and data cleaning

The first thing we need to do when working with text data is to extract the tokens that will make up our corpus. Simply put, these tokens are all the terms found across every text in our data, gathered together with their ordering and grammatical context removed. To create them, we use the tokens() function and related functions from the quanteda package. As you can imagine, our data will contain not only words, but also punctuation marks, numbers, symbols, and other characters such as hyphens. Depending on the problem you're working on, you may find it quite useful to remove all of them, as we do here (see the sketch below). However, keep in mind that in some contexts some of these special characters may carry meaning worth preserving, so remove them only after considering your use case.
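The following is a minimal sketch of this step. The quanteda functions and arguments shown (corpus(), tokens() and its remove_* and split_hyphens options) are real, but the sample texts and variable names are illustrative assumptions, not taken from the book's dataset.

    library(quanteda)

    # Illustrative raw documents; in practice these come from your own data
    texts <- c(
      "The 1st example text, with numbers & symbols!",
      "A second text with hyphenated-words included."
    )

    # Build a corpus, then tokenize while stripping punctuation,
    # numbers, and symbols, and splitting hyphenated terms
    corp <- corpus(texts)
    toks <- tokens(
      corp,
      remove_punct   = TRUE,
      remove_numbers = TRUE,
      remove_symbols = TRUE,
      split_hyphens  = TRUE
    )
    print(toks)

If your texts contain meaningful special characters, such as hashtags, you would leave the corresponding remove_* argument at its default of FALSE instead.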
