Chapter 2. Introduction to Spark and PySpark

The aim of this chapter is to bring you up to speed on PySpark and Spark, giving you enough information so you’re comfortable with the tutorials in the rest of the book. Let’s start at the beginning. What exactly is Spark? Originally developed at UC Berkeley in 2009, Apache Spark is an open source analytics engine for big data and machine learning. It gained rapid adoption by enterprises across many industries soon after its release and is deployed at massive scale by powerhouses like Netflix, Yahoo, and eBay to process exabytes of data on clusters of many thousands of nodes. The Spark community has grown rapidly too, encompassing over 1,000 contributors from 250+ organizations.

Note

For a deep dive into Spark itself, grab a copy of Spark: The Definitive Guide, by Bill Chambers and Matei Zaharia (O’Reilly).

To set you up for the remainder of this book, this chapter will cover the following areas:

  • Apache Spark’s distributed architecture

  • Apache Spark basics (software architecture and data structures)

  • DataFrame immutability

  • PySpark’s functional paradigm

  • How pandas DataFrames differ from Spark DataFrames

  • Scikit-learn versus PySpark for machine learning

Apache Spark Architecture

The Spark architecture consists of the following main components:

Driver program
The driver program (aka Spark driver) is a dedicated process that runs on the driver machine. It is responsible for executing and holding the SparkSession, which encapsulates ...
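
To make the driver's role concrete, here is a minimal PySpark sketch of starting a SparkSession on the driver. The application name and the local[*] master URL are illustrative assumptions for running on a single machine, not values prescribed by the book:

    from pyspark.sql import SparkSession

    # Build (or retrieve) the SparkSession -- the driver-side entry point
    # that coordinates work across the cluster's executors.
    spark = (
        SparkSession.builder
        .appName("intro-to-pyspark")  # hypothetical application name
        .master("local[*]")           # local mode for illustration; a real
                                      # deployment would target a cluster manager
        .getOrCreate()
    )

    # A tiny DataFrame to confirm the session is live; the data is made up.
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
    df.show()

    spark.stop()

Note that getOrCreate() returns an existing session if one is already active in the driver process, so calling it repeatedly yields the same SparkSession object.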
