We're now ready to launch a small Spark cluster by changing into the ec2 directory and then running the cluster launch command:
$ cd ec2
$ ./spark-ec2 --key-pair=rd_spark-user1 --identity-file=spark.pem --region=us-east-1 --zone=us-east-1a launch my-spark-cluster
This will launch a new Spark cluster called my-spark-cluster with one master and one slave node of instance type m3.medium. The cluster will be launched with a Spark version built for Hadoop 2. The key pair name we used is rd_spark-user1, and the key pair file is spark.pem (if you gave the files different names or have an existing AWS key pair, use those names instead).
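If you need a different cluster size or instance type, the spark-ec2 script accepts extra options at launch time. The sketch below assumes the option names from the spark-ec2 script bundled with Spark 1.x (`--slaves`, `--instance-type`); run `./spark-ec2 --help` to confirm what your version supports:

```shell
# Hypothetical variant of the launch command above: two slave nodes of a
# larger instance type. All other flags are unchanged from the original
# command; the --slaves and --instance-type flags are from the Spark 1.x
# spark-ec2 script, so verify them with ./spark-ec2 --help first.
$ ./spark-ec2 --key-pair=rd_spark-user1 --identity-file=spark.pem \
    --region=us-east-1 --zone=us-east-1a \
    --slaves=2 --instance-type=m3.large \
    launch my-spark-cluster
```

Note that larger instance types and more slaves will increase your AWS costs, so destroy the cluster when you are done experimenting.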
It might take quite a while for the cluster to fully launch and initialize. You should see ...