Apache Spark 2.x Machine Learning Cookbook by Shuen Mei, Broderick Hall, Meenakshi Rajendran, Siamak Amirghodsi


How it works...

After loading and processing the data, the core of the work for PCA is done via the following code:

val pca = new PCA()
  .setInputCol("features")
  .setOutputCol("pcaFeatures")
  .setK(4)
  .fit(df)

The PCA() estimator lets us choose how many principal components to keep via setK(). In this recipe, we keep the first four components (setK(4)).
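Since fit() returns a fitted PCAModel, the projection itself is applied with transform(). A sketch of that next step, continuing with the recipe's pca model and df DataFrame (the variable name pcaDF is our own):

```scala
// Project the original feature vectors into the 4-dimensional PCA space.
// `pca` is the fitted PCAModel and `df` the DataFrame from the recipe.
val pcaDF = pca.transform(df).select("pcaFeatures")
pcaDF.show(truncate = false)
```

Each row of the pcaFeatures column now holds a 4-element vector, one coordinate per retained component.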

The goal is to find a lower-dimensional space (a reduced PCA space) that preserves the structural properties of the original high-dimensional data, namely the variance along the principal component axes, so that labeled data can still be discriminated as well as possible without the cost of the original high-dimensional representation.
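To make the variance-preservation idea concrete, here is a small pure-Scala sketch (no Spark required, and not how Spark computes PCA internally) that recovers the first principal component of a toy dataset by power iteration on the covariance matrix. All names and the toy data are our own illustration:

```scala
// Minimal PCA sketch: the first principal component is the dominant
// eigenvector of the data's covariance matrix.
object PcaSketch {
  // Column means of a data matrix (rows = samples, columns = features).
  def mean(data: Array[Array[Double]]): Array[Double] = {
    val d = data.head.length
    val m = new Array[Double](d)
    for (row <- data; j <- 0 until d) m(j) += row(j) / data.length
    m
  }

  // Sample covariance matrix of the mean-centered data.
  def covariance(data: Array[Array[Double]]): Array[Array[Double]] = {
    val m = mean(data)
    val d = m.length
    val n = data.length
    val cov = Array.ofDim[Double](d, d)
    for (row <- data; i <- 0 until d; j <- 0 until d)
      cov(i)(j) += (row(i) - m(i)) * (row(j) - m(j)) / (n - 1)
    cov
  }

  // Power iteration: repeatedly multiply a vector by the matrix and
  // renormalize; the vector converges to the dominant eigenvector,
  // i.e. the direction of maximum variance.
  def firstComponent(cov: Array[Array[Double]], iters: Int = 100): Array[Double] = {
    var v = Array.fill(cov.length)(1.0)
    for (_ <- 0 until iters) {
      val w = cov.map(row => row.zip(v).map { case (a, b) => a * b }.sum)
      val norm = math.sqrt(w.map(x => x * x).sum)
      v = w.map(_ / norm)
    }
    v
  }
}
```

For points lying almost on the line y = x, the recovered direction is close to (0.707, 0.707): most of the variance lives along that single axis, which is exactly the structure PCA preserves when it discards the remaining dimensions.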

A sample PCA chart is shown in the following figure. After dimension reduction, ...
