Pipelines

The Pipeline API is part of the spark.ml package. A pipeline consists of a sequence of stages such as data cleaning, feature extraction, model training, model validation, model testing, and so on, and it lets you chain these stages together to build a complete machine learning workflow (see the sketches after the following list). First, let's get acquainted with the common terminology used in an ML workflow built with pipelines:

  • DataFrame: Unlike the spark.mllib package, where the underlying datatype was the RDD, in spark.ml the datatype is the DataFrame. Every pipeline expects its input data as a DataFrame, which can be created from the various data sources discussed in Chapter 8, Working with Spark SQL, such as filesystems, external databases, and other data generation ...
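
To illustrate the DataFrame point above, here is a minimal sketch of loading a DataFrame (a Dataset<Row> in Java) from a filesystem source. The file name people.json is a hypothetical example used only for illustration:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DataFrameFromSource {
    public static void main(String[] args) {
        // Create a SparkSession, the entry point for the DataFrame API
        SparkSession spark = SparkSession.builder()
            .appName("DataFrameFromSource")
            .master("local[*]")
            .getOrCreate();

        // Load a DataFrame from a filesystem source;
        // "people.json" is a hypothetical file used only for illustration
        Dataset<Row> df = spark.read().json("people.json");

        df.printSchema();
        df.show();

        spark.stop();
    }
}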
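
The next sketch shows how stages can be chained with the Pipeline API, as described earlier in this section. The column names (text, words, features), the file training.csv, and the choice of Tokenizer, HashingTF, and LogisticRegression as stages are assumptions made for illustration, not taken from the text:

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.feature.HashingTF;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PipelineChainingSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("PipelineChainingSketch")
            .master("local[*]")
            .getOrCreate();

        // Hypothetical training data with "text" and "label" columns
        Dataset<Row> training = spark.read()
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("training.csv");

        // Stage 1: split raw text into words
        Tokenizer tokenizer = new Tokenizer()
            .setInputCol("text")
            .setOutputCol("words");

        // Stage 2: turn the words into term-frequency feature vectors
        HashingTF hashingTF = new HashingTF()
            .setInputCol("words")
            .setOutputCol("features");

        // Stage 3: train a classifier on the features and label
        LogisticRegression lr = new LogisticRegression().setMaxIter(10);

        // Chain the stages; fitting the pipeline runs them in sequence
        Pipeline pipeline = new Pipeline()
            .setStages(new PipelineStage[]{tokenizer, hashingTF, lr});
        PipelineModel model = pipeline.fit(training);

        spark.stop();
    }
}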
