If you follow the preceding workflow, and stop from time to time to inspect the results (which you absolutely should, by the way), you will notice there is a lot of noise left over: mixed-case words, punctuation, and so on. What happens if you improve this workflow by properly parsing the words? You can use the tokenizers library instead of the space_tokenizer function from text2vec to remove stopwords and punctuation in a single line:
library(tokenizers)
tokens <- tokenize_words(imdb$review, stopwords = stopwords())
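To see what this buys you, here is a minimal sketch on a toy review string (it assumes the stopwords package, which tokenizers uses for its stopword lists, is installed):

```r
library(tokenizers)

# tokenize_words lowercases and strips punctuation by default;
# passing a stopword list additionally drops common function words
# in the same pass
tokenize_words("It was GREAT, truly great!",
               stopwords = stopwords::stopwords("en"))
# each input document yields one character vector of clean,
# lowercase tokens with stopwords and punctuation removed
```

Because tokenize_words is vectorized over its input, passing the whole imdb$review column returns one such token vector per review.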
The full code is now:
library(plyr)
library(dplyr)
library(text2vec)
library(tokenizers)
library(tidytext)
library(caret)

imdb <- read.csv("./data/labeledTrainData.tsv",
                 encoding = "utf-8",
                 quote = "",
                 sep = "\t")  # the file is tab-separated
tokens <- tokenize_words(imdb$review, stopwords = stopwords())