Chapter 7. Optimizing and Tuning Spark Applications
In the previous chapter, we elaborated on how to work with Datasets in Java and Scala. We explored how Spark manages memory to accommodate Dataset constructs as part of its unified and high-level API, and we considered the costs associated with using Datasets and how to mitigate those costs.
Besides mitigating costs, we also want to consider how to optimize and tune Spark. In this chapter, we will discuss a set of Spark configurations that enable optimizations, look at Spark’s family of join strategies, and inspect the Spark UI, looking for clues to bad behavior.
Optimizing and Tuning Spark for Efficiency
While Spark has many configurations for tuning, this book will only cover a handful of the most important and commonly tuned configurations. For a comprehensive list grouped by functional themes, you can peruse the documentation.
Viewing and Setting Apache Spark Configurations
There are three ways you can get and set Spark properties. The first is through a set of configuration files. In your deployment's $SPARK_HOME directory (where you installed Spark), there are a number of config files: conf/spark-defaults.conf.template, conf/log4j.properties.template, and conf/spark-env.sh.template. Changing the default values in these files and saving them without the .template suffix instructs Spark to use these new values.
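For example, a customized conf/spark-defaults.conf might contain entries like the following. This is a minimal sketch; the property names are standard Spark configurations, but the values shown are illustrative assumptions, not recommendations for your workload:

    # Sample entries in conf/spark-defaults.conf (illustrative values)
    spark.driver.memory           4g
    spark.executor.memory         4g
    spark.sql.shuffle.partitions  64
    spark.serializer              org.apache.spark.serializer.KryoSerializer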
Note
Configuration changes in the conf/spark-defaults.conf file apply to the Spark cluster and all Spark applications submitted to the cluster.
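You can also view and set properties programmatically on a running session. The sketch below uses the SparkSession configuration API (spark.conf.get and spark.conf.set); the property it touches, spark.sql.shuffle.partitions, is a standard Spark SQL config, and any value set this way overrides what spark-defaults.conf specified for that session:

    // Scala: inspect and modify Spark configs at runtime
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder
      .appName("ConfigDemo")
      .getOrCreate()

    // Read a config value currently in effect for this session
    println(spark.conf.get("spark.sql.shuffle.partitions"))

    // Modifiable Spark SQL configs can be changed on a live session;
    // this takes precedence over spark-defaults.conf
    spark.conf.set("spark.sql.shuffle.partitions", "64")

    // List all properties explicitly set on the underlying SparkConf
    spark.sparkContext.getConf.getAll.foreach(println)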