
Fast Data Processing with Spark 2 - Third Edition by Krishna Sankar


Basic statistics

Let's read the car mileage data and then compute some basic statistics. In Spark 2.0.0, DataFrameReader can read CSV files directly and create Datasets, and a Dataset's describe() function calculates the count, mean, standard deviation, min, and max values. For correlation and covariance, we use the stat.corr() and stat.cov() methods. Spark 2.0.0 Datasets have made our statistics work a lot easier.
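As a minimal, self-contained sketch of these calls (the temporary CSV and its column names, mpg and hp, are illustrative stand-ins for the real file in fdps-v3/data):

```scala
import java.nio.file.Files
import org.apache.spark.sql.SparkSession

object BasicStatsSketch extends App {
  // Write a tiny sample CSV so the example runs anywhere;
  // the book's actual data file and columns may differ.
  val csv = Files.createTempFile("car-milage", ".csv")
  Files.write(csv, "mpg,hp\n21.0,110\n18.7,175\n14.3,245\n".getBytes)

  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("BasicStatistics")
    .getOrCreate()

  // DataFrameReader: read the CSV, inferring numeric column types
  val cars = spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(csv.toString)

  // count, mean, stddev, min, max for every numeric column
  cars.describe().show()

  val corr = cars.stat.corr("mpg", "hp") // Pearson correlation
  val cov  = cars.stat.cov("mpg", "hp")  // sample covariance
  println(s"corr(mpg, hp) = $corr, cov(mpg, hp) = $cov")

  spark.stop()
}
```

In this toy sample, mileage drops as horsepower rises, so the correlation comes out negative.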

Now let's run the program, walk through the code, and examine the results.

The code files are in fdps-v3/code and the data files are in fdps-v3/data. You can run the code either from a Scala IDE or directly from the Spark shell.

Start the Spark shell from the bin directory where you have installed Spark:

/Volumes/sdxc-01/spark-2.0.0/bin/spark-shell ...
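Once the shell is up, you can paste the code in directly, or run a script file with the Scala REPL's `:load` command (the script name below is illustrative, not the book's actual file name):

```scala
// Inside the Spark shell, `spark` (SparkSession) and `sc` (SparkContext)
// are already defined, so scripts can use them without any setup.
:load fdps-v3/code/BasicStatistics.scala
```

This is convenient for iterating: edit the file, then re-run `:load` without restarting the shell.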
