How it works...

After loading and processing the data, the core of the work for PCA is done via the following code:

val pca = new PCA()
  .setInputCol("features")
  .setOutputCol("pcaFeatures")
  .setK(4)
  .fit(df)

The PCA() call builds the estimator, and setK(4) selects how many principal components to keep; in this recipe, we keep the first four. The fit(df) call then estimates those components from the data.
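As a hedged sketch (not part of the original recipe), the fitted Spark 2.x PCAModel exposes an explainedVariance vector reporting the proportion of variance each component captures, which can guide the choice passed to setK(...). The toy data and local SparkSession below are illustrative stand-ins for the recipe's data:

```scala
import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.SparkSession

object ExplainedVarianceSketch {
  /** Fit PCA with k components on toy data and return the per-component
    * explained-variance ratios (a sketch; the real recipe uses its own df). */
  def explained(k: Int): Vector = {
    val spark = SparkSession.builder.master("local[*]").appName("pca-k").getOrCreate()
    import spark.implicits._
    val df = Seq(                       // toy stand-in for the recipe's data
      Vectors.dense(2.0, 0.0, 3.0, 4.0),
      Vectors.dense(4.0, 0.0, 0.0, 6.0),
      Vectors.dense(6.0, 1.0, 3.0, 8.0),
      Vectors.dense(8.0, 1.0, 1.0, 9.0)
    ).map(Tuple1.apply).toDF("features")
    val model = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(k)
      .fit(df)
    val ev = model.explainedVariance    // variance ratio per component
    spark.stop()
    ev
  }

  def main(args: Array[String]): Unit =
    println(s"explained variance per component: ${explained(2)}")
}
```

If the first few ratios already sum close to 1.0, a smaller k loses little information.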

The goal is to find a lower-dimensional space (the reduced PCA space) for the original high-dimensional data while preserving its structural properties, namely the variance of the data along the principal component axes. This lets the labeled data be discriminated nearly as well as before, without the cost of the original high-dimensional representation.
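The variance-preservation idea can be sketched without Spark. In the following illustrative plain-Scala example, nearly collinear 2-D points are summarized by the leading eigenvector of their sample covariance matrix; the variance along that axis (the leading eigenvalue) is almost the total variance, so one component suffices. All names here are hypothetical:

```scala
object PcaSketch {
  /** Fraction of total variance kept when 2-D points are projected onto
    * the leading eigenvector of their sample covariance matrix (PC1). */
  def varianceRetained(xs: Seq[Double], ys: Seq[Double]): Double = {
    val n = xs.length
    val mx = xs.sum / n
    val my = ys.sum / n
    def cov(u: Seq[Double], mu: Double, v: Seq[Double], mv: Double): Double =
      u.zip(v).map { case (p, q) => (p - mu) * (q - mv) }.sum / (n - 1)
    val a = cov(xs, mx, xs, mx) // var(x)
    val b = cov(xs, mx, ys, my) // cov(x, y)
    val c = cov(ys, my, ys, my) // var(y)
    // leading eigenvalue of the 2x2 covariance matrix [[a, b], [b, c]]
    val lambda = (a + c) / 2 + math.sqrt(math.pow((a - c) / 2, 2) + b * b)
    lambda / (a + c) // eigenvalue = variance along PC1; a + c = total variance
  }

  def main(args: Array[String]): Unit = {
    val xs = Seq(1.0, 2.0, 3.0, 4.0, 5.0)
    val ys = xs.map(x => 2.0 * x + 0.1 * math.sin(x)) // almost collinear points
    println(f"variance retained by PC1: ${varianceRetained(xs, ys) * 100}%.1f%%")
  }
}
```

In the recipe's setting the same reasoning applies in more dimensions: the first four components were chosen because they retain most of the data's variance.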

A sample PCA chart is shown in the following figure. After dimension reduction, ...
