Pipeline components
Pipelines consist of a set of components joined together such that the DataFrame produced by one component is used as input for the next component. The components available are split into two classes: transformers and estimators.
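To make the chaining concrete, here is a minimal sketch of a pipeline assembled from two transformers and an estimator. The specific stages and the column names ("text", "words", "features") are placeholder assumptions; the actual components for our spam classifier are built up over the following sections.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Two transformers followed by an estimator; the DataFrame produced by
// each stage becomes the input of the next when the pipeline runs.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr = new LogisticRegression() // placeholder estimator for the final stage

val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))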
Transformers
Transformers transform one DataFrame into another, normally by appending one or more columns.
The first step in our spam classification algorithm is to split each message into an array of words. This is called tokenization. We can use the Tokenizer transformer, provided by MLlib:
scala> import org.apache.spark.ml.feature._
import org.apache.spark.ml.feature._

scala> val tokenizer = new Tokenizer()
tokenizer: org.apache.spark.ml.feature.Tokenizer = tok_75559f60e8cf
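As a sketch of how the tokenizer might be configured and applied, suppose we have a DataFrame named messages with a string column "text" (both hypothetical names chosen for illustration). The transformer appends a new "words" column rather than modifying any existing column:

// Hypothetical example data; in the shell, `spark` is the SparkSession.
val messages = spark.createDataFrame(Seq(
  (0L, "win a free prize now"),
  (1L, "see you at the meeting tomorrow")
)).toDF("id", "text")

// Assumed column names: read from "text", append "words".
tokenizer.setInputCol("text").setOutputCol("words")

// transform() returns a new DataFrame with all original columns plus "words".
val tokenized = tokenizer.transform(messages)
tokenized.show()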
The behavior of transformers ...