Implementing Stanford NLP lemmatization over Spark

Lemmatization is a pre-processing step that methodically reduces all grammatical/inflected forms of a word to its root form. It uses context and part of speech to identify the inflected form of a word and applies part-of-speech-specific normalization rules to obtain the base form, or lemma; for example, "was" and "were" both map to the lemma "be". In this recipe, we'll lemmatize text using the Stanford CoreNLP API.

Getting ready

To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html ...
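Before walking through the detailed steps, here is a minimal sketch of the idea, assuming the stanford-corenlp JAR (together with its models artifact) is on the Spark classpath; the object name, helper names, and sample sentences are illustrative and not from the book:

import java.util.Properties

import scala.collection.JavaConverters._

import edu.stanford.nlp.ling.CoreAnnotations.{LemmaAnnotation, SentencesAnnotation, TokensAnnotation}
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import org.apache.spark.sql.SparkSession

object StanfordLemmatizationSketch {

  // The CoreNLP pipeline is heavyweight, so build it once per partition rather than per record.
  private def createPipeline(): StanfordCoreNLP = {
    val props = new Properties()
    props.setProperty("annotators", "tokenize, ssplit, pos, lemma")
    new StanfordCoreNLP(props)
  }

  // Annotate one document and collect the lemma of every token in every sentence.
  private def lemmatize(text: String, pipeline: StanfordCoreNLP): Seq[String] = {
    val doc = new Annotation(text)
    pipeline.annotate(doc)
    doc.get(classOf[SentencesAnnotation]).asScala.flatMap { sentence =>
      sentence.get(classOf[TokensAnnotation]).asScala.map(_.get(classOf[LemmaAnnotation]))
    }.toSeq
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("StanfordLemmatization").getOrCreate()

    // Hypothetical sample documents; in practice these would come from a file or table.
    val docs = spark.sparkContext.parallelize(Seq(
      "The cats were running faster than the dogs",
      "She has written several well-received papers"
    ))

    // mapPartitions lets each partition share a single pipeline instance.
    val lemmas = docs.mapPartitions { iter =>
      val pipeline = createPipeline()
      iter.map(text => lemmatize(text, pipeline))
    }

    lemmas.collect().foreach(println)
    spark.stop()
  }
}

Creating the pipeline inside mapPartitions matters because StanfordCoreNLP is not serializable and is expensive to initialize, so building it on the driver or once per record would either fail or be prohibitively slow.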
