Handling persistence in Spark

In this section, we will discuss how persistence and caching are handled in Spark. We will cover the various persistence and caching mechanisms Spark provides, along with their significance.

Persistence/caching is one of the important features of Spark. Earlier, we discussed that transformations in Spark are lazy: the actual computation does not take place until an action is invoked on the RDD. Although this is the default behavior and provides fault tolerance, it can also hurt the overall performance of a job, especially when a common dataset is reused across several computations, because that dataset's entire lineage is recomputed for every action.
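To make that cost concrete, here is a minimal sketch (assuming a local Spark deployment and a hypothetical log file path) in which the same filtered RDD is used by two actions. Without caching, Spark re-reads and re-filters the input file for each action:

    import org.apache.spark.{SparkConf, SparkContext}

    object LazyRecompute {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("lazy-recompute").setMaster("local[*]")
        val sc   = new SparkContext(conf)

        // Transformations are lazy: nothing is executed here yet.
        val logs   = sc.textFile("hdfs:///data/app.log")   // hypothetical input path
        val errors = logs.filter(_.contains("ERROR"))

        // Each action triggers a full recomputation of the lineage,
        // so the file is re-read and re-filtered for BOTH calls below.
        val errorCount  = errors.count()
        val firstErrors = errors.take(10)

        println(s"$errorCount errors; first 10: ${firstErrors.mkString(", ")}")
        sc.stop()
      }
    }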

Persistence/caching helps us solve this problem by exposing APIs through which we can instruct Spark to persist or cache a dataset, so that it is materialized once and then reused across subsequent computations instead of being recomputed from its lineage.
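As an illustration, the sketch below reworks the previous example by marking the filtered RDD with cache(), which is equivalent to persist(StorageLevel.MEMORY_ONLY); the input path and application name remain hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object CachedReuse {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("cached-reuse").setMaster("local[*]")
        val sc   = new SparkContext(conf)

        val logs = sc.textFile("hdfs:///data/app.log")     // hypothetical input path

        // cache() is shorthand for persist(StorageLevel.MEMORY_ONLY): the filtered
        // RDD is materialized by the first action and reused by later ones.
        val errors = logs.filter(_.contains("ERROR")).cache()

        errors.count()    // computes the lineage once and populates the cache
        errors.take(10)   // served from the cached partitions, no recomputation

        // For datasets that may not fit in memory, a storage level that spills
        // to disk can be chosen instead:
        //   logs.filter(_.contains("ERROR")).persist(StorageLevel.MEMORY_AND_DISK)

        errors.unpersist()  // release the cached blocks once they are no longer needed
        sc.stop()
      }
    }

Note that MEMORY_ONLY silently drops partitions that do not fit in memory (they are recomputed from the lineage on demand), whereas MEMORY_AND_DISK spills those partitions to disk instead.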
