Tuning tree depth and impurity

We will illustrate the impact of tree depth in a similar manner to what we did for our logistic regression model.

First, we will need to create another helper function in the Spark shell as follows:

import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Impurity
import org.apache.spark.mllib.tree.impurity.Entropy
import org.apache.spark.mllib.tree.impurity.Gini

// Train a decision tree classifier with the given maximum depth and impurity measure
def trainDTWithParams(input: RDD[LabeledPoint], maxDepth: Int, impurity: Impurity) = {
  DecisionTree.train(input, Algo.Classification, impurity, maxDepth)
}

Now, we're ready to compute our AUC metric for different settings of tree depth. We will simply use our original dataset in this example, since we do not need the data to be standardized.
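A minimal sketch of this evaluation might look like the following, assuming data holds our original RDD[LabeledPoint]; it uses MLlib's BinaryClassificationMetrics to compute the AUC for each candidate depth (the specific depth values and the choice of Entropy impurity here are illustrative):

import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

val dtResultsEntropy = Seq(1, 2, 3, 4, 5, 10, 20).map { param =>
  // Train a tree of the given depth using the helper defined above
  val model = trainDTWithParams(data, param, Entropy)
  // Score each point and pair the thresholded prediction with the true label
  val scoreAndLabels = data.map { point =>
    val score = model.predict(point.features)
    (if (score > 0.5) 1.0 else 0.0, point.label)
  }
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  (s"$param tree depth", metrics.areaUnderROC)
}
dtResultsEntropy.foreach { case (param, auc) =>
  println(f"$param, AUC = ${auc * 100}%2.2f%%")
}

Swapping Entropy for Gini in the call to trainDTWithParams lets us compare the two impurity measures in exactly the same way.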

Note that decision tree ...
