At this point, we have prepared the target prediction column and cleaned up the input data, so we can build a base model. The base model gives us basic intuition about the data. For this purpose, we will use all columns except those detected as useless. We will also skip the handling of missing values, since we will use H2O and the RandomForest algorithm, which handles missing values natively. The first step, however, is to prepare a dataset with the help of the defined Spark transformations:
import com.packtpub.mmlwspark.chapter8.Chapter8Library._

val loanDataDf = h2oContext.asDataFrame(loanDataHf)(sqlContext)
val loanStatusBaseModelDf = basicDataCleanup(
  loanDataDf
    .where("loan_status is not null")
    .withColumn("loan_status", toBinaryLoanStatusUdf( ...
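With the cleaned DataFrame in hand, it can be published back into H2O and a first RandomForest trained on it. The following is a minimal sketch, not the book's exact training code: the frame name loanStatusBaseModelHf and the tree count are illustrative assumptions, while the DRF calls themselves come from H2O's standard hex.tree.drf API:

import _root_.hex.tree.drf.DRF
import _root_.hex.tree.drf.DRFModel.DRFParameters

// Publish the cleaned Spark DataFrame as an H2O frame
// (the frame name is an illustrative choice)
val loanStatusBaseModelHf =
  h2oContext.asH2OFrame(loanStatusBaseModelDf, "loanStatusBaseModelHf")

// Configure a baseline RandomForest; ntrees = 100 is an assumed value
val drfParams = new DRFParameters()
drfParams._train = loanStatusBaseModelHf._key
drfParams._response_column = "loan_status"
drfParams._ntrees = 100

// Launch the training job and block until the model is built;
// DRF handles missing values internally, so no imputation is needed here
val drfModel = new DRF(drfParams).trainModel().get()

Because the algorithm treats missing values natively, no imputation step has to run before trainModel is called, which is exactly why we skipped that cleanup stage above.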