Splitting and caching the data

Here, we split the data into training and testing sets, as follows:

>>> df_train, df_test = df.randomSplit([0.7, 0.3], seed=42)

Here, roughly 70% of the samples are used for training and the remaining 30% for testing, with a random seed specified, as always, for reproducibility. Note that randomSplit samples each row independently, so the resulting proportions are approximate rather than exact.
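As a quick sanity check (a sketch assuming df_train and df_test from the split above), you can count the rows in each part to confirm the rough 70/30 proportions:

>>> df_train.count()
>>> df_test.count()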

Before we perform any heavy lifting (such as model learning) on the training set, df_train, it is good practice to cache the object. In Spark, caching and persistence are optimization techniques that reduce computation overhead. They save the intermediate results of RDD or DataFrame operations in memory and/or on disk. Without caching or persistence, whenever an intermediate DataFrame is needed, it is recomputed from scratch according to the lineage of operations that originally created it.
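As a minimal sketch of this step (assuming the df_train and df_test DataFrames from the split above), caching the training set and forcing it to materialize might look like this:

>>> from pyspark import StorageLevel
>>> df_train.cache()      # mark df_train for caching; nothing happens until an action runs
>>> df_train.count()      # an action (here, count) materializes the cache
>>> # df_test.persist(StorageLevel.MEMORY_AND_DISK)  # persist() lets you choose a storage level explicitly
>>> # df_train.unpersist()                           # release the cache once it is no longer needed

Note that cache() is lazy: the DataFrame is only stored once an action such as count() forces evaluation, after which subsequent operations on df_train read from the cached copy instead of recomputing its lineage.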
