Chapter 7. Going Beyond Scala
Working in Spark doesn’t mean limiting yourself to Scala, or even to the JVM or the languages that Spark explicitly supports.
Spark has first-party APIs for writing driver programs and worker code in R, Python, Scala, and Java, with third-party bindings for additional languages including JavaScript, Julia, C#, and F#.
Spark’s language interoperability can be thought of in two tiers: one is the worker code inside of your transformations (e.g., the lambdas inside of your maps), and the second is being able to specify the transformations on RDDs/Datasets (e.g., the driver program).
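To make the two tiers concrete, here is a minimal PySpark sketch (the app name and values are illustrative, assuming a locally available SparkSession): the driver program builds up the transformations, while the lambda passed to map is worker code that Spark ships to Python worker processes on the executors.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("two-tiers").getOrCreate()
sc = spark.sparkContext

# Driver tier: specify the transformations, building the RDD lineage.
rdd = sc.parallelize(range(10))

# Worker tier: this lambda is serialized and evaluated in Python
# worker processes running alongside each JVM executor.
squared = rdd.map(lambda x: x * x)

print(squared.collect())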
This chapter will discuss the performance considerations of using other languages in Spark, and how to effectively work with existing libraries.
Often the language you choose to specify the code inside of your transformations will be the same as the language for writing the driver program, but when working with specialized libraries or tools (such as CUDA), specifying your entire program in one language would be a hassle, even if it were possible.

Spark supports a range of languages for use on the driver, and an even wider range of languages can be used inside of your transformations on the workers. While the APIs are similar between the languages, the performance characteristics of the different languages are quite different once they need to execute outside of the JVM. We will discuss the design behind language support and how the performance difference ...
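One concrete pattern for reaching tools Spark has no bindings for is the pipe() transformation on RDDs, which streams each partition’s records through an external process over stdin and stdout. A hedged sketch follows; the ./gpu_kernel executable is a hypothetical placeholder for any program (for example, a compiled CUDA binary) that must already be installed on every worker.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipe-example").getOrCreate()
sc = spark.sparkContext

# Each record is written as one line to the external process's stdin;
# each line the process prints to stdout becomes an output element.
data = sc.parallelize(["1", "2", "3"])

# "./gpu_kernel" is a placeholder path, not a real program.
result = data.pipe("./gpu_kernel")
print(result.collect())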