The importance of data cleansing

If you follow the preceding workflow, stopping from time to time to inspect the results (which you absolutely should, by the way), you will notice that there is a lot of garbage around: words in mixed upper and lower case, punctuation, and so on. What happens if you improve this workflow by properly parsing the words? You can use the tokenizers library instead of the space_tokenizer function from text2vec to lowercase the text and remove stopwords and punctuation in a single line:

library(tokenizers)
library(stopwords)   # provides stopwords(); current versions of tokenizers no longer export it

# tokenize_words() lowercases and strips punctuation by default
tokens <- tokenize_words(imdb$review, stopwords = stopwords())
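To see what this buys you, compare the two tokenizers on a toy sentence (an illustrative example, not from the book's dataset):

library(text2vec)
library(tokenizers)
library(stopwords)

# space_tokenizer only splits on whitespace, keeping case and punctuation
raw_tokens <- space_tokenizer("It was GREAT, truly great!")
raw_tokens[[1]]    # "It" "was" "GREAT," "truly" "great!"

# tokenize_words lowercases, drops punctuation, and filters stopwords
clean_tokens <- tokenize_words("It was GREAT, truly great!", stopwords = stopwords())
clean_tokens[[1]]  # "great" "truly" "great"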

The full code is now:

library(plyr)
library(dplyr)
library(text2vec)
library(tidytext)
library(caret)

# the Kaggle labeledTrainData.tsv file is tab-separated;
# keep review as character so it can be tokenized
imdb <- read.csv("./data/labeledTrainData.tsv"
                 , encoding = "utf-8"
                 , quote = ""
                 , sep = "\t"
                 , stringsAsFactors = FALSE)
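From here, the cleaned tokens plug straight back into the text2vec vocabulary pipeline from the preceding workflow. As a minimal sketch (the function names are from text2vec; the book's exact downstream steps may differ):

# iterate over the pre-tokenized reviews
it <- itoken(tokens)

# build the vocabulary from the clean tokens and vectorize
vocab <- create_vocabulary(it)
vectorizer <- vocab_vectorizer(vocab)

# document-term matrix ready for modeling
dtm <- create_dtm(it, vectorizer)

Because punctuation, casing variants, and stopwords are gone, the resulting vocabulary is much smaller and the document-term matrix much less noisy than the one produced from raw space-split tokens.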
