If you have a set of machines without any existing cluster management software, you can deploy Spark over SSH with some handy scripts. This method is known as "standalone mode" in the Spark documentation. An individual master and worker can be started by
./run spark.deploy.master.Master and
./run spark.deploy.worker.Worker spark://MASTERIP:PORT respectively. The master listens on port 7077 by default; its web UI runs on port 8080. You probably don't want to go to each of your machines and run these commands by hand, so there are a number of helper scripts in
bin/ to start and stop your servers for you.
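A minimal sketch of driving those helper scripts, assuming the script names bin/start-master.sh, bin/start-slaves.sh, bin/start-all.sh, and bin/stop-all.sh and a conf/slaves file as in the standalone-mode docs (the worker hostnames below are placeholders):

```shell
# conf/slaves lists one worker hostname per line; the helper
# scripts read this file and SSH into each machine listed.
mkdir -p conf
cat > conf/slaves <<'EOF'
worker1.example.com
worker2.example.com
EOF

# bin/start-master.sh  starts a master on this machine
# bin/start-slaves.sh  starts a worker on every host in conf/slaves
# bin/start-all.sh     does both; bin/stop-all.sh tears everything down
```

The scripts keep the daemons in the background and write their logs on each machine, so a single invocation from the master is enough to bring the whole cluster up or down.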
A prerequisite for using any of the scripts is password-less SSH access from the master to all of the worker machines. You probably want ...
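Setting up password-less SSH is a one-time step. A sketch of the usual key-based setup (the key path, user name, and worker hostname are placeholders, not anything mandated by Spark):

```shell
# Generate a key pair with an empty passphrase
# (skip this if you already have a key you want to use).
ssh-keygen -t rsa -N "" -f ./spark_deploy_key -q

# Install the public key on each worker (placeholder hostname):
#   ssh-copy-id -i ./spark_deploy_key.pub user@worker1.example.com
#
# Verify that login no longer prompts for a password:
#   ssh -i ./spark_deploy_key user@worker1.example.com hostname
```

Once every worker accepts the master's key, the launch scripts can fan out over SSH without any interactive prompts.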