Spark SQL and its
Datasets interface are the future of Spark performance, offering more efficient storage options, an advanced optimizer, and direct operations on serialized data.
These components are very important for getting the best performance out of Spark (see Figure 3-1).
These are relatively new components;
Datasets were introduced in Spark 1.6,
DataFrames in Spark 1.3, and the SQL engine in Spark 1.0.
This chapter focuses on how to make the best use of Spark SQL's tools and how to intermix Spark SQL with traditional Spark operations.
Spark SQL's DataFrames have very different functionality from traditional
DataFrames such as pandas' and R's. While all of these deal with structured data, it is important not to depend on your existing intuition surrounding them.
Datasets represent distributed collections, with additional schema information not found in RDDs.
This additional schema information is used to provide a more efficient storage layer (Tungsten) and is used by the optimizer (Catalyst) to perform additional optimizations.
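As a minimal sketch of the schema information Datasets carry, the following assumes a running SparkSession named `spark`; the `Panda` case class and its fields are hypothetical, chosen only for illustration:

```scala
// Hypothetical case class standing in for real structured data.
case class Panda(name: String, zip: String, happy: Boolean)

// Assumes an existing SparkSession bound to `spark`.
import spark.implicits._

val pandas = Seq(Panda("bao", "94110", true), Panda("mei", "94110", false))
val ds = spark.createDataset(pandas)

// Unlike an RDD of Panda objects, the Dataset exposes its schema,
// which Tungsten uses for compact storage and Catalyst for optimization.
ds.printSchema()

// explain() shows the optimized physical plan Catalyst produces.
ds.filter(_.happy).explain()
```

An equivalent RDD of `Panda` objects would expose no such schema: its contents are opaque JVM objects, so Spark could neither store them in Tungsten's compact format nor optimize the filter.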
Beyond schema information, the operations performed on
DataFrames are such that the optimizer can inspect the ...