Chapter 3. Apache Spark’s Structured APIs
In this chapter, we will explore the principal motivations behind adding structure to Apache Spark, how those motivations led to the creation of high-level APIs (DataFrames and Datasets), and their unification in Spark 2.x across its components. We’ll also look at the Spark SQL engine that underpins these structured high-level APIs.
When Spark SQL was first introduced in the early Spark 1.x releases, followed in Spark 1.3 by DataFrames as the successor to SchemaRDDs, we got our first glimpse of structure in Spark. Spark SQL introduced high-level, expressive operations with SQL-like syntax, while DataFrames laid the foundation for more structure in subsequent releases and paved the path to performant operations in Spark’s computational queries.
But before we talk about the newer Structured APIs, let’s get a brief glimpse of what it’s like to not have structure in Spark by taking a peek at the simple RDD programming API model.
Spark: What’s Underneath an RDD?
The RDD is the most basic abstraction in Spark. There are three vital characteristics associated with an RDD:
- Dependencies
- Partitions (with some locality information)
- Compute function: Partition => Iterator[T]
All three are integral to the simple RDD programming API model upon which all higher-level functionality is constructed. First, an RDD needs a list of dependencies that instructs Spark how it is constructed from its inputs. When necessary to reproduce ...
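To make this concrete, here is a minimal sketch of the low-level RDD style in Scala. It assumes a SparkContext named sc is already available (as in spark-shell), and the (name, age) data values are purely illustrative. The key point is that each transformation takes an arbitrary lambda, so Spark sees only an opaque compute function rather than an expression it can inspect or optimize:

// Assumes a SparkContext `sc` (e.g., in spark-shell); data is illustrative
val dataRDD = sc.parallelize(
  Seq(("Brooke", 20), ("Denny", 31), ("Jules", 30), ("Brooke", 25)))

// Average age per name using low-level RDD transformations.
// Each lambda is opaque to Spark: at execution time the engine only
// applies a compute function (Partition => Iterator[T]) per partition,
// with no insight into the intent of the computation.
val agesRDD = dataRDD
  .map { case (name, age) => (name, (age, 1)) }
  .reduceByKey { case ((sum1, cnt1), (sum2, cnt2)) => (sum1 + sum2, cnt1 + cnt2) }
  .map { case (name, (ageSum, count)) => (name, ageSum / count.toDouble) }

agesRDD.collect().foreach(println)
// e.g. (Brooke,22.5), (Denny,31.0), (Jules,30.0)

Because the logic lives inside opaque lambdas and the values are just generic objects, Spark cannot tell that this is an aggregation over a key; it can only execute the functions as given. The structured APIs discussed in this chapter exist precisely to give Spark that visibility.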