The previous part of the book covered Spark’s Structured APIs. You should heavily favor these APIs in almost all scenarios. That being said, there are times when higher-level manipulation will not meet the business or engineering problem you are trying to solve. For those cases, you might need to use Spark’s lower-level APIs, specifically the Resilient Distributed Dataset (RDD), the SparkContext, and distributed shared variables like accumulators and broadcast variables. The chapters that follow in this part cover these APIs and how to use them.
If you are brand new to Spark, this is not the place to start. Start with the Structured APIs; you'll be more productive more quickly!
There are two sets of low-level APIs: one for manipulating distributed data (RDDs), and another for distributing and manipulating distributed shared variables (broadcast variables and accumulators).
You should generally use the lower-level APIs in three situations:
You need some functionality that you cannot find in the higher-level APIs; for example, if you need very tight control over physical data placement across the cluster.
You need to maintain some legacy codebase written using RDDs.
You need to do some custom shared variable manipulation. We will discuss shared variables more in Chapter 14.
Those are the reasons why you should use these lower-level tools, but ...