Let's read the car mileage data and then compute some basic statistics. In Spark 2.0.0, DataFrameReader can read CSV files directly and create Datasets. The Dataset has the describe() function, which calculates the count, mean, standard deviation, min, and max of the specified columns. For correlation and covariance, we use the stat.corr() and stat.cov() methods. Spark 2.0.0 Datasets have made our statistics work a lot easier.
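As a quick sketch of this workflow, the following standalone program reads a CSV into a Dataset and applies the statistics functions mentioned above. The file name car-mileage.csv and the column names mpg and hp are assumptions for illustration; substitute the actual file and columns from the data set.

```scala
import org.apache.spark.sql.SparkSession

object MileageStatsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CarMileageStats")
      .master("local[*]")
      .getOrCreate()

    // DataFrameReader.csv (new in Spark 2.0.0) reads the file into a Dataset of rows
    val cars = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("fdps-v3/data/car-mileage.csv") // hypothetical file name

    // count, mean, stddev, min, and max for the named numeric columns
    cars.describe("mpg", "hp").show()

    // pairwise statistics via the stat functions
    println(s"corr(mpg, hp) = ${cars.stat.corr("mpg", "hp")}")
    println(s"cov(mpg, hp)  = ${cars.stat.cov("mpg", "hp")}")

    spark.stop()
  }
}
```

Note that inferSchema makes the reader scan the data once to assign numeric types; without it every column is read as a string and describe() has nothing numeric to summarize.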
Now let's run the program, walk through the code, and examine the results.
The code files are in fdps-v3/code and the data files in fdps-v3/data. You can run the code either from a Scala IDE or directly from the Spark shell. Start the Spark shell from the bin directory of your Spark installation: