Spark provides some built-in functions for feature scaling and standardization in its machine learning library. These include StandardScaler, which applies the standard normal transformation (rescaling features to zero mean and unit variance), and Normalizer, which applies the same feature vector normalization we showed you in our preceding example code.
We will explore the use of these methods in the upcoming chapters, but for now, let's simply compare the results of using MLlib's Normalizer to our own results:
from pyspark.mllib.feature import Normalizer
normalizer = Normalizer()
vector = sc.parallelize([x])
After importing the required class, we ...
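For reference, the default behavior of Normalizer is L2 normalization: each feature vector is divided by its Euclidean norm so that the result has unit length. The following is a minimal NumPy sketch of that computation, using a hypothetical vector `x` standing in for the feature vector from the earlier example:

```python
import numpy as np

# Hypothetical feature vector standing in for the `x` used above.
x = np.array([1.0, 2.0, 2.0])

# L2 normalization: divide the vector by its Euclidean norm
# so the resulting vector has unit length.
l2_norm = np.sqrt(np.sum(x ** 2))   # sqrt(1 + 4 + 4) = 3.0
normalized = x / l2_norm

print(normalized)                   # [0.333..., 0.666..., 0.666...]
print(np.linalg.norm(normalized))   # 1.0
```

Applying MLlib's Normalizer with its default settings to the same vector should produce an equivalent result, which is the comparison we make next.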