Mastering Machine Learning with Spark 2.x by Michal Malohlava, Max Pumperla, Alex Tellez

Base model

At this point, we have prepared the target prediction column and cleaned up the input data, so we can now build a base model. The base model gives us basic intuition about the data. For this purpose, we will use all columns except those detected as useless. We will also skip the handling of missing values, since we will use H2O and the RandomForest algorithm, which handles missing values natively. However, the first step is to prepare the dataset with the help of the Spark transformations defined earlier:

import com.packtpub.mmlwspark.chapter8.Chapter8Library._
val loanDataDf = h2oContext.asDataFrame(loanDataHf)(sqlContext)
val loanStatusBaseModelDf = basicDataCleanup(
  loanDataDf
    .where("loan_status is not null")
    .withColumn("loan_status", toBinaryLoanStatusUdf( ...
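Once the DataFrame is prepared, the base model run could look roughly like the following sketch. It assumes the cleaned DataFrame loanStatusBaseModelDf from the step above; the frame name, number of trees, and seed are illustrative values rather than the book's exact configuration. The sketch only outlines the idea stated above: publish the DataFrame to H2O, mark the target column as categorical, and train a distributed RandomForest (DRF), which tolerates missing values without any imputation:

import _root_.hex.tree.drf.DRF
import _root_.hex.tree.drf.DRFModel.DRFParameters

// Publish the cleaned Spark DataFrame as an H2O frame
val loanStatusBaseModelHf =
  h2oContext.asH2OFrame(loanStatusBaseModelDf, "loanStatusBaseModelHf")

// Treat the target column as categorical so DRF builds a classification model
loanStatusBaseModelHf.replace(
  loanStatusBaseModelHf.find("loan_status"),
  loanStatusBaseModelHf.vec("loan_status").toCategoricalVec).remove()
loanStatusBaseModelHf.update()

// Configure and launch the distributed random forest; no imputation step is
// needed, since the algorithm handles missing values directly
val drfParams = new DRFParameters()
drfParams._train = loanStatusBaseModelHf._key
drfParams._response_column = "loan_status"
drfParams._ntrees = 100   // illustrative value
drfParams._seed = 42      // illustrative value, for reproducibility

val baseModel = new DRF(drfParams).trainModel().get()
println(baseModel._output._training_metrics)

The training metrics printed at the end give the "basic intuition about the data" mentioned above, before any feature engineering or tuning is attempted.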
