For learning purposes, let's now install Spark on a local computer (even though it is more frequently run on a cluster of servers). Full instructions can be found at https://spark.apache.org/downloads.html.
Several stable versions are available; we take version 2.3.2 (released Sep 24, 2018) as an example. As illustrated in the following screenshot, after selecting 2.3.2 in step 1, we choose Pre-built for Apache Hadoop 2.7 and later in step 2. Then, we click the link in step 3 to download the spark-2.3.2-bin-hadoop2.7.tgz file. Extracting the archive produces a folder containing a complete Spark package:
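The download-and-extract steps above can also be performed from the command line. The following is a minimal sketch; it assumes the release is mirrored on the Apache archive server and composes the download URL from the version numbers (the actual download commands are shown as comments, since they require network access):

```shell
# Spark and Hadoop versions chosen in steps 1 and 2 of the download page
SPARK_VERSION=2.3.2
HADOOP_VERSION=2.7

# Compose the archive name and its URL on the Apache archive mirror
SPARK_TGZ="spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz"
SPARK_URL="https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${SPARK_TGZ}"
echo "$SPARK_URL"

# To download and unpack (network access required):
#   curl -LO "$SPARK_URL"
#   tar -xzf "$SPARK_TGZ"
# This creates the spark-2.3.2-bin-hadoop2.7/ folder with the complete Spark package.
```

Changing the two version variables adapts the same commands to any other release listed on the download page.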
Before running any Spark program, we need to make sure the following dependencies are installed: ...