We could proceed with manual optimization, searching for the right model by exhaustively trying many different configurations. Doing so would waste an immense amount of time (and produce little reusable code) and would overfit the test dataset. Cross-validation is instead the correct way to run hyperparameter optimization. Let's now see how Spark performs this crucial task.

First of all, since the training dataset will be used many times during the optimization, we can cache it. Therefore, let's cache it after applying all the transformations:

In: pipeline_to_clf = Pipeline(
            stages=preproc_stages + [assembler]).fit(sampled_train_df)
    train = pipeline_to_clf.transform(sampled_train_df).cache()
    test = pipeline_to_clf.transform(test_df)

The useful classes for ...
