Optimizing memory

Spark is a complex distributed computing framework with many moving parts. Any cluster resource, such as memory, CPU, or network bandwidth, can become a bottleneck at various points. Because Spark is an in-memory compute framework, memory has the biggest impact on performance.

Another issue is that Spark applications commonly use a huge amount of memory, sometimes more than 100 GB, a scale rarely seen in traditional Java applications.

In Spark, memory needs to be optimized at two levels: at the driver and at the executor.

You can use the following commands to set the driver memory:

  • Spark shell:
$ spark-shell --driver-memory 4g
    
  • Spark submit:
$ spark-submit --driver-memory ...
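
Executor memory can be set analogously with the --executor-memory flag. If you prefer to configure memory programmatically, here is a minimal Scala sketch; the app name and the 4g/2g values are illustrative. Note that spark.driver.memory set from inside an already-running driver has no effect, so for the driver this property is best set via the flags above or in conf/spark-defaults.conf:

    import org.apache.spark.{SparkConf, SparkContext}

    object MemoryConfigSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("memory-config-sketch")    // hypothetical app name
          .setMaster("local[*]")                 // assumption: local mode for this sketch
          .set("spark.executor.memory", "2g")    // per-executor heap; illustrative value
          .set("spark.driver.memory", "4g")      // only honored if set before the driver JVM starts

        val sc = new SparkContext(conf)
        // ... application logic ...
        sc.stop()
      }
    }

Equivalently, these properties can be placed in conf/spark-defaults.conf so they apply to every application submitted from that installation.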
