Improving our tokenization

The preceding simple approach results in a lot of tokens and does not filter out many non-word characters (such as punctuation). Most tokenization schemes will remove these characters. We can do this by splitting each raw document on non-word characters, using a regular expression pattern:

// Split each document on runs of non-word characters (\W+ matches anything
// outside [A-Za-z0-9_]) and lowercase the resulting tokens.
val nonWordSplit = text.flatMap(t =>
  t.split("""\W+""").map(_.toLowerCase))
println(nonWordSplit.distinct.count)

This reduces the number of unique tokens significantly:

130126
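To see why this split is effective, note that in Java and Scala regular expressions, \W matches any character outside [A-Za-z0-9_], so splitting on \W+ treats every run of punctuation or whitespace as a single token boundary. The following is a minimal sketch of the same split in plain Scala (no Spark required; the sample string is our own, not from the dataset):

// \W+ matches one or more non-word characters, so punctuation and
// whitespace both act as token delimiters.
val sampleDoc = "Hello, world! E-mail me: test@example.com"
println(sampleDoc.split("""\W+""").map(_.toLowerCase).mkString(","))
// prints: hello,world,e,mail,me,test,example,com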

If we inspect a random sample of the tokens, we will see that we have eliminated most of the less useful characters in the text:

// sample(withReplacement = true, fraction = 0.3, seed = 50) draws a random
// sample of the distinct tokens; take(100) collects the first 100 of them.
println(nonWordSplit.distinct.sample(true, 0.3, 50)
  .take(100).mkString(","))

You will see the following result displayed:

jejones,ml5,w1w3s1,k29p,nothin,42b,beleive,robin,believiing,749, ...
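Notice that tokens containing digits, such as 42b and 749, survive the split: digits are word characters under \w, so numbers are not treated as delimiters. If we wanted to gauge how many such tokens remain, a quick check might look like the following sketch, which reuses the nonWordSplit RDD from above:

// Count the distinct tokens that still contain at least one digit.
// Digits are word characters, so purely numeric tokens survive \W+ splits.
val tokensWithDigits = nonWordSplit.distinct.filter(t => t.exists(_.isDigit))
println(tokensWithDigits.count)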
