Here, we split the data into a training and testing set, as follows:
>>> df_train, df_test = df.randomSplit([0.7, 0.3], 42)
Here, 70% of the samples are used for training and the remaining 30% for testing, with a random seed specified, as always, for reproducibility.
Before we perform any heavy lifting (such as model learning) on the training set, df_train, it is good practice to cache the object. In Spark, caching and persistence are optimization techniques that reduce computation overhead. They save the intermediate results of RDD or DataFrame operations in memory and/or on disk. Without caching or persistence, whenever an intermediate DataFrame is needed, it will be recomputed from scratch according to how it was originally created ...