The importance of data cleansing

If you follow the preceding workflow and stop from time to time to inspect the results (which you absolutely should, by the way), you will notice that there is a lot of garbage around: words in mixed upper and lower case, punctuation, and so on. What happens if you improve this workflow by properly parsing the words? You can use the tokenizers library instead of the space_tokenizer function from text2vec to remove stopwords and punctuation in a single line:

library(tokenizers)

# tokenize_words lowercases, strips punctuation, and drops the supplied stopwords
tokens <- tokenize_words(imdb$review, stopwords = stopwords())
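
To see what this buys you, here is a minimal sketch, using a made-up review string and assuming that stopwords() resolves to the default English stopword list (for example, from the stopwords package), that compares text2vec's whitespace splitting with tokenize_words:

library(text2vec)
library(tokenizers)
library(stopwords)

review <- "This movie was GREAT!!! The acting, however, was not."

# space_tokenizer splits on whitespace only; case and punctuation survive
space_tokenizer(review)
# [[1]]
# [1] "This"     "movie"    "was"      "GREAT!!!" "The"      "acting,"
# [7] "however," "was"      "not."

# tokenize_words lowercases, strips punctuation, and removes stopwords
tokenize_words(review, stopwords = stopwords())
# [[1]]
# [1] "movie"   "great"   "acting"  "however"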

The full code is now:

library(plyr)
library(dplyr)
library(text2vec)
library(tidytext)
library(caret)
library(tokenizers)

imdb <- read.csv("./data/labeledTrainData.tsv",
                 encoding = "utf-8",
                 quote = "",
                 sep = "\t",                # the file is tab-separated
                 stringsAsFactors = FALSE)  # keep the reviews as character strings
tokens <- tokenize_words(imdb$review, stopwords = stopwords())
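
As a quick sanity check, peek at the first review's tokens; everything should now be lowercase, with punctuation and stopwords gone:

# Inspect the first few tokens of the first cleaned review
head(tokens[[1]], 10)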
