Chapter 11. Join Design Patterns

In this chapter we will examine practical design patterns for joining datasets. As in the previous chapters, I will focus on patterns that are useful in real-world environments. PySpark supports a basic join operation for RDDs (pyspark.RDD.join()) and DataFrames (pyspark.sql.DataFrame.join()) that will be sufficient for most use cases. However, there are circumstances where this join can be costly, so I’ll also show you some special join algorithms that may prove useful.
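As a quick reminder of what the basic operation looks like, here is a minimal DataFrame join sketch; the column names (name, dept_id, dept_name) and the sample rows are invented purely for illustration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-join-demo").getOrCreate()

# Two small DataFrames that share a "dept_id" column (sample data).
emps = spark.createDataFrame(
    [("alice", 10), ("bob", 20), ("carol", 10)],
    ["name", "dept_id"])
depts = spark.createDataFrame(
    [(10, "Sales"), (20, "Engineering"), (30, "HR")],
    ["dept_id", "dept_name"])

# Inner join on the common key; the how parameter selects the join type
# ("inner", "left", "right", "full", and so on).
emps.join(depts, on="dept_id", how="inner").show()

A plain join like this is fine for most workloads; the map-side and Bloom filter variants covered later in the chapter become attractive when such a join shuffles too much data.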

This chapter introduces the basic concept of joining two datasets, and provides examples of some useful and practical join design patterns. I’ll show you how the join operation is implemented in the MapReduce paradigm and how to use Spark’s transformations to perform a join. You’ll see how to perform map-side joins with RDDs and DataFrames, and how to perform an efficient join using a Bloom filter.

Introduction to the Join Operation

In the relational database world, joining two tables (aka “relations”) on a common key is a frequent operation. A key is an attribute, or set of attributes, in one or more columns that allows the unique identification of each record (tuple or row) in the table.

Consider the following two tables, T1 and T2:

T1 = {(k1, v1)}
T2 = {(k2, v2)}

where:

  • k1 is the key for T1 and v1 are the associated attributes.

  • k2 is the key for T2 and v2 are the associated attributes. ...
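With T1 and T2 expressed as pair RDDs of (key, value) tuples, pyspark.RDD.join() returns a record (k, (v1, v2)) for every key k that appears in both tables. The following minimal sketch uses made-up keys and values:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-join-demo").getOrCreate()
sc = spark.sparkContext

# T1 = {(k1, v1)} and T2 = {(k2, v2)} as pair RDDs (sample data).
T1 = sc.parallelize([(100, "a1"), (200, "a2"), (300, "a3")])
T2 = sc.parallelize([(100, "b1"), (200, "b2"), (400, "b4")])

# Inner join on matching keys; each result is (k, (v1, v2)).
# Expected output (order may vary): [(100, ('a1', 'b1')), (200, ('a2', 'b2'))]
print(T1.join(T2).collect())

Only the keys present in both tables survive; this is the standard inner-join semantics.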
