In this recipe, we'll see how to identify the variables required for analysis and understand their descriptions.
To step through this recipe, you will need a running Spark cluster in any one of the modes: local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.
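A minimal build.sbt sketch for pulling in Spark MLlib might look like the following; the project name and the Spark and Scala version numbers here are assumptions, so match them to your installed Spark distribution:

```scala
name := "sales-analysis"

scalaVersion := "2.11.8"

// spark-mllib transitively pulls in spark-core and spark-sql;
// keep all Spark artifacts on the same version.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.1.0",
  "org.apache.spark" %% "spark-mllib" % "2.1.0"
)
```

If you submit the application to a cluster with spark-submit, you can mark the Spark dependencies as "provided" so they are not bundled into your assembly jar.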
Let's look at an example of sales data. It contains 2013 sales data for nearly 1600 products across 10 stores in different cities. The data contains product and store ...
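Before loading such a file into Spark, it helps to pin down the record layout as a case class. The column names below are illustrative assumptions about a product/store sales dataset, not the actual schema of the file; a minimal parsing sketch:

```scala
// Hypothetical schema for the 2013 sales data; the field names are
// illustrative assumptions, not taken from the actual dataset.
case class SaleRecord(
  productId: String,
  storeId: String,
  productType: String,
  mrp: Double,        // maximum retail price of the product
  salesAmount: Double // total 2013 sales for this product at this store
)

object SalesSchema {
  // Parse one comma-separated line into a SaleRecord.
  def parse(line: String): SaleRecord = {
    val f = line.split(",").map(_.trim)
    SaleRecord(f(0), f(1), f(2), f(3).toDouble, f(4).toDouble)
  }

  def main(args: Array[String]): Unit = {
    val rec = parse("P001,S01,Dairy,249.50,3735.14")
    println(rec)
  }
}
```

Once the layout is fixed, the same parse function can be mapped over an RDD of lines (for example, `sc.textFile("sales.csv").map(SalesSchema.parse)`), giving a typed dataset whose variables we can then describe one by one.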