Chapter 17. Deploying Spark
This chapter explores the infrastructure you need in place for you and your team to be able to run Spark Applications:
- Cluster deployment choices
- Spark’s different cluster managers
- Deployment considerations and configuring deployments
For the most part, Spark should work similarly with all of the supported cluster managers; however, customizing the setup means understanding the intricacies of each cluster management system. The hard part is deciding on a cluster manager (or choosing a managed service). Although we would be happy to include all the minute details about how you can configure different clusters with different cluster managers, it’s simply impossible for this book to provide hyper-specific details for every situation in every single environment. The goal of this chapter, therefore, is not to discuss each of the cluster managers in full detail, but rather to look at their fundamental differences and to provide a reference for the material already available on the Spark website. Unfortunately, there is no easy answer to “which is the easiest cluster manager to run” because it varies so much by use case, experience, and resources. The Spark documentation site offers a lot of detail about deploying Spark, with actionable examples. We do our best to discuss the most relevant points.
As of this writing, Spark has three officially supported cluster managers:
- Standalone mode
- Hadoop YARN
- Apache Mesos
These cluster managers ...
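The choice of cluster manager surfaces in the `--master` argument to `spark-submit`. As a rough sketch, the same application can target each of the three managers (the hostnames and ports below are placeholders; the standalone master conventionally listens on port 7077 and a Mesos master on 5050, but your deployment may differ):

```shell
# Standalone mode: point at the Spark standalone master.
spark-submit \
  --master spark://master-host:7077 \
  --class com.example.MyApp \
  my-app.jar

# Hadoop YARN: no host URL needed; Spark reads the cluster location
# from the Hadoop configuration, so HADOOP_CONF_DIR must be set.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar

# Apache Mesos: point at the Mesos master.
spark-submit \
  --master mesos://mesos-host:5050 \
  --class com.example.MyApp \
  my-app.jar
```

Note that `com.example.MyApp` and `my-app.jar` stand in for your own application; everything else about the application code stays the same across managers, which is why the decision largely comes down to operational concerns rather than API differences.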