Spark standalone

Spark standalone uses a built-in scheduler and does not depend on any external scheduler such as YARN or Mesos. To install Spark in standalone mode, you copy the Spark binary distribution onto every machine in the cluster.
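Once the binaries are in place, the cluster is brought up with the launch scripts shipped in Spark's sbin directory. The sketch below assumes SPARK_HOME points at the install directory and uses a hypothetical host name master-host; adjust both for your cluster.

```shell
# On the machine chosen as the Master (hypothetical host: master-host):
$SPARK_HOME/sbin/start-master.sh
# The Master logs its URL, typically spark://master-host:7077.

# On each Worker machine, register with the Master:
$SPARK_HOME/sbin/start-worker.sh spark://master-host:7077

# Alternatively, list the Worker host names in $SPARK_HOME/conf/workers
# and launch the whole cluster from the Master in one step:
$SPARK_HOME/sbin/start-all.sh
```

The Master also serves a web UI (port 8080 by default) where you can confirm that each Worker has registered.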

In standalone mode, the client interacts with the cluster through either spark-submit or the Spark shell. In both cases, the Driver communicates with the Spark Master node to obtain Worker nodes on which executors can be started for the application.
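Both entry points take the Master's URL via the --master option. The following sketch uses a hypothetical application jar, main class, and host name; the resource flags shown are assumptions you would tune for your workload.

```shell
# Submit a packaged application to the standalone Master
# (com.example.MyApp and my-app.jar are hypothetical):
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  --executor-memory 2G \
  --total-executor-cores 8 \
  /path/to/my-app.jar

# Or start an interactive shell against the same cluster;
# the shell's Driver requests executors from the Master just as spark-submit does:
spark-shell --master spark://master-host:7077
```

With --deploy-mode client (the default), the Driver runs in the client process itself; with cluster mode it is launched on one of the Worker nodes instead.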

Multiple clients can interact with the cluster at the same time; each one gets its own executors on the Worker nodes, and each client runs its own Driver component.

The following figure shows the standalone deployment of Spark using a Master node and Worker nodes:

Let's now ...
