Apache Spark is a distributed, parallel processing system that also provides in-memory computing capabilities. A computing paradigm of this type needs an associated storage system so that you can deploy your applications on top of a big data cluster. For storage, you can use distributed systems such as HDFS, S3, HBase, and Hive. For moving data into the cluster, you will need other technologies such as Sqoop, Kinesis, Flume, and Kafka, or streaming sources such as Twitter.
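As a rough illustration of how Spark sits on top of storage and ingestion layers, the following minimal Scala sketch reads a batch dataset from HDFS and consumes a stream delivered by Kafka. The paths, broker address, and topic name are placeholders, and the Kafka source assumes the spark-sql-kafka connector package is available on the classpath when the job is submitted.

import org.apache.spark.sql.SparkSession

object IngestionSketch {
  def main(args: Array[String]): Unit = {
    // Build a SparkSession; the master URL is normally supplied by spark-submit.
    val spark = SparkSession.builder()
      .appName("storage-and-ingestion-sketch")
      .getOrCreate()

    // Read a batch dataset from distributed storage (HDFS here; an s3a:// path
    // works the same way once the S3 connector and credentials are configured).
    // The path /data/events.parquet is a placeholder.
    val events = spark.read.parquet("hdfs:///data/events.parquet")
    println(s"Batch rows read from HDFS: ${events.count()}")

    // Consume a stream of records moved into the cluster by Kafka.
    // The broker address and topic name below are placeholders.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "clickstream")
      .load()
      .selectExpr("CAST(value AS STRING) AS value")

    // Write the stream back to distributed storage as it arrives.
    val query = stream.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/clickstream")
      .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
      .start()

    query.awaitTermination()
    spark.stop()
  }
}

In a real deployment you would package this with sbt and launch it with spark-submit against the cluster manager of your choice; the point here is simply that the distributed storage (HDFS/S3) and the data-movement layer (Kafka) are separate systems that Spark plugs into.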
In practice, you can configure a small Hadoop cluster very easily: you only need a single master and multiple worker nodes. In such a cluster, the master node typically runs a NameNode, DataNode, JobTracker, and TaskTracker. A worker node, on the other ...