In the previous section, we saw how to use H2O for ethnicity prediction. However, we could not achieve satisfactory prediction accuracy. Moreover, H2O is not yet mature enough to compute all the necessary performance metrics.
So why not try Spark-based tree ensemble techniques such as Random Forest (RF) or gradient-boosted trees (GBTs)? We have seen that, in most cases, RF shows better predictive accuracy, so let us start with that one.
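As a preview, Spark ML provides a ready-made Random Forest implementation. The following is a minimal sketch, assuming the label and feature columns are named "label" and "features" (those names, and the hyperparameter values, are illustrative assumptions, not tuned choices from this chapter):

```scala
import org.apache.spark.ml.classification.RandomForestClassifier

// Instantiate a Random Forest classifier from Spark ML.
// Column names and numTrees are assumptions for illustration.
val rf = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setNumTrees(100)
```

Calling `rf.fit(trainingDF)` on a DataFrame with those two columns would then produce a trained `RandomForestClassificationModel`.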
In the K-means section, we already prepared the Spark DataFrame named schemaDF, so we can simply transform the variables into the feature vectors described earlier. For this, however, we need to exclude the label column, which we can do using the drop() method as follows: ...
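The step above can be sketched as follows. This assumes the label column in schemaDF is named "label" (substitute the actual column name from your schema):

```scala
import org.apache.spark.ml.feature.VectorAssembler

// Exclude the label column so that only predictor columns remain.
// The column name "label" is an assumption for this sketch.
val featureDF = schemaDF.drop("label")

// Assemble the remaining columns into a single feature vector column,
// the input format expected by Spark ML tree ensembles.
val assembler = new VectorAssembler()
  .setInputCols(featureDF.columns)
  .setOutputCol("features")

val assembledDF = assembler.transform(featureDF)
```

Note that `VectorAssembler` requires all input columns to be numeric; any categorical columns would first need to be indexed or encoded.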