Cross-validation

We could proceed with manual optimization, exhaustively trying many configurations until we find the right model. Doing so would waste an immense amount of time (and produce code that is hard to reuse), and it would overfit the test dataset. Cross-validation is instead the correct way to run hyperparameter optimization. Let's now see how Spark performs this crucial task.
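Before looking at Spark's implementation, here is a minimal, framework-free sketch of the idea: each hyperparameter candidate is scored by averaging its validation performance over k train/validation splits, and the candidate with the best average wins. The dataset, "model" (a shrunken mean predictor), and scoring below are toy placeholders, not the book's actual pipeline:

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    fold_size, folds = n // k, []
    for i in range(k):
        start = i * fold_size
        end = n if i == k - 1 else start + fold_size
        folds.append(list(range(start, end)))
    return folds

def cross_val_score(data, param, k=3):
    """Average validation score of `param` over k train/validation splits."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i in range(k):
        valid = [data[j] for j in folds[i]]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        # Toy "model": predict the training mean, shrunk by the
        # regularization strength `param`; score is the negative
        # mean squared error on the held-out fold.
        mean = sum(train) / len(train) * (1 - param)
        scores.append(-sum((v - mean) ** 2 for v in valid) / len(valid))
    return sum(scores) / k

data = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
grid = [0.0, 0.1, 0.5]
best = max(grid, key=lambda p: cross_val_score(data, p))
```

Spark's own cross-validation tooling follows the same pattern, but distributes the fold-by-fold fitting across the cluster and plugs directly into the pipelines we have already built.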

First of all, since the training dataset will be used many times, we can cache it. Therefore, let's cache it after all the transformations:

In: pipeline_to_clf = Pipeline(
            stages=preproc_stages + [assembler]).fit(sampled_train_df)
    train = pipeline_to_clf.transform(sampled_train_df).cache()
    test = pipeline_to_clf.transform(test_df)

The useful classes for ...
