Handling persistence in Spark
In this section, we will discuss how persistence and caching are handled in Spark. We will walk through the various persistence and caching mechanisms Spark provides, along with their significance.
Persistence/caching is one of the important features of Spark. Earlier, we discussed that transformations in Spark are lazy: no actual computation takes place until an action is invoked on the RDD. While this default behavior enables fault tolerance, it can also hurt the overall performance of a job, especially when a common dataset is reused across multiple computations, because each action re-evaluates the full lineage of the RDD from scratch.
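As a minimal sketch of this recomputation problem (the file name input.txt and the word-count logic are hypothetical, chosen only for illustration), consider an RDD that is used by two different actions:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LazyRecomputation {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("lazy-demo").setMaster("local[*]"))

    // A chain of transformations: nothing is computed yet (lazy evaluation).
    val words  = sc.textFile("input.txt").flatMap(_.split("\\s+"))
    val counts = words.map((_, 1)).reduceByKey(_ + _)

    // Each action re-evaluates the full lineage, so without caching
    // the file is read and the words are counted twice.
    println(counts.count())
    counts.saveAsTextFile("output")

    sc.stop()
  }
}
```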
Persistence/caching helps us solve this problem by exposing APIs on the RDD that let us store the computed results in memory (or on disk) after the first action, so that subsequent actions reuse the materialized data instead of recomputing the entire lineage.
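Continuing the hypothetical sketch above, the standard RDD methods cache(), persist(), and unpersist() can be applied to the counts RDD so that only the first action triggers the computation:

```scala
import org.apache.spark.storage.StorageLevel

// Mark the RDD for caching; it is materialized on the first action
// and reused by every subsequent action instead of being recomputed.
counts.cache()                                    // shorthand for MEMORY_ONLY
// counts.persist(StorageLevel.MEMORY_AND_DISK)  // or pick a storage level

println(counts.count())         // first action: computes and caches the RDD
counts.saveAsTextFile("output") // second action: reuses the cached partitions

counts.unpersist()              // release the cached data when no longer needed
```

The choice of storage level is a trade-off: MEMORY_ONLY is fastest but drops partitions that do not fit, while MEMORY_AND_DISK spills them to disk so they never need to be recomputed.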