July 2017
Beginner to intermediate
715 pages
17h 3m
English
Apache Spark is a framework for scalable data processing. It was designed to improve on Hadoop MapReduce: where possible, it processes data in memory instead of saving intermediate results to disk. It also offers a richer API, with many more operations than just map and reduce.
The main unit of abstraction in Apache Spark is the Resilient Distributed Dataset (RDD), a distributed collection of elements. The key difference from ordinary collections or streams is that an RDD can be processed in parallel across multiple machines, much like a Hadoop job.
There are two types of operations we can apply to RDDs: transformations, which lazily describe a new RDD, and actions, which trigger the actual computation and return a result.
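As a minimal sketch of the difference (assuming a spark-shell session, where the SparkContext is already available as `sc`):

```scala
// Distribute a local collection as an RDD.
val numbers = sc.parallelize(1 to 100)

// Transformations: lazy, each returns a new RDD without computing anything yet.
val evens = numbers.filter(_ % 2 == 0)
val doubled = evens.map(_ * 2)

// Action: triggers the distributed computation and returns a value (50 here).
println(doubled.count())
```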