The input stream exposes an interface similar to a Spark Dataset; thus, it can be transformed via the regular SQL interface or by machine learning transformers. In our case, we will reuse all the trained models and transformations that were saved in the previous sections.
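To illustrate the point, here is a minimal sketch assuming a Structured Streaming socket source with a single text column (both hypothetical; the application's actual input source is defined elsewhere in the chapter). The streaming Dataset accepts the same SQL-style operators as a batch one:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lower

val spark = SparkSession.builder().appName("LoanStream").getOrCreate()
import spark.implicits._

// Hypothetical streaming source; the real application reads loan applications.
val inputStream = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// A regular SQL-style transformation applied to the stream,
// exactly as it would be on a batch Dataset.
val normalized = inputStream.select(lower($"value").alias("emp_title"))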
First, we will load empTitleTransformer. It is a regular Spark pipeline transformer that can be loaded with the help of the Spark PipelineModel class:
import org.apache.spark.ml.PipelineModel

// Load the pipeline transformer saved in the previous section.
val empTitleTransformer = PipelineModel.load(s"${modelDir}/empTitleTransformer")
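Once loaded, the transformer can be applied to a streaming DataFrame just as it would be to a batch one. For example, continuing the hypothetical sketch above, where normalized carries the emp_title column the pipeline was fitted on:

// Appends the pipeline's output columns to the streaming DataFrame.
val withEmpTitle = empTitleTransformer.transform(normalized)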
The loanStatus and intRate models were saved in the H2O MOJO format. To load them, we use the H2O MojoModel class:
import java.io.File
import hex.genmodel.MojoModel

// Load the two H2O MOJO models saved in the previous sections.
val loanStatusModel = MojoModel.load(new File(s"${modelDir}/loanStatusModel.mojo").getAbsolutePath)
val intRateModel = MojoModel.load(new File(s"${modelDir}/intRateModel.mojo").getAbsolutePath)
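A loaded MOJO is not a Spark transformer; it is scored through H2O's generic prediction API. The following is a minimal sketch of scoring a single row with loanStatusModel, assuming it is a binomial classifier and that emp_title is one of its input columns (both are illustrative assumptions):

import hex.genmodel.easy.{EasyPredictModelWrapper, RowData}

// Wrap the raw MOJO in H2O's convenience scoring API.
val loanStatusWrapper = new EasyPredictModelWrapper(loanStatusModel)

// Feature names must match the model's training columns;
// the column and value below are illustrative placeholders.
val row = new RowData()
row.put("emp_title", "software engineer")

// For a binomial model, predictBinomial returns the predicted
// label together with the class probabilities.
val prediction = loanStatusWrapper.predictBinomial(row)
println(s"label = ${prediction.label}, probabilities = ${prediction.classProbabilities.mkString(", ")}")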