Chapter 10. Distributed Batch Processing with Spark
In Chapter 4, Parallel Collections and Futures, we discovered how to use parallel collections for embarrassingly parallel problems: problems that can be broken down into a set of independent tasks that require no (or very little) communication between them.
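As a quick reminder of that style of computation, here is a hypothetical word count over an in-memory collection, parallelized across the CPUs of a single machine with .par (the collection contents are illustrative):

```scala
// Split each line into words, group identical words, and count them.
// The only change from the sequential version is the call to .par,
// which distributes the work across the local CPU cores.
val lines = List("spark makes clusters easy", "scala makes spark easy")
val wordCounts = lines.par
  .flatMap(_.split(" "))
  .groupBy(identity)
  .mapValues(_.size)
println(wordCounts("easy")) // 2
```

This works well as long as the data fits on one machine; the limitation Spark addresses is what happens when it does not.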
Apache Spark provides a programming model similar to that of Scala's parallel collections (and much more), but, instead of distributing tasks across different CPUs on the same computer, it distributes them across a cluster of computers. This gives horizontal scalability: to process more data, we can simply add more computers to the cluster.
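To make the parallel-collections analogy concrete, the same word count can be written against a Spark RDD. This is a minimal sketch, assuming the spark-core dependency is on the classpath; "local[*]" runs Spark on the local machine using all available cores, and pointing the master at a cluster would distribute the same code unchanged:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount extends App {
  // Configure a Spark context running in local mode.
  val conf = new SparkConf().setAppName("wordCount").setMaster("local[*]")
  val sc = new SparkContext(conf)

  // Distribute an in-memory collection as an RDD.
  val lines = sc.parallelize(List(
    "spark makes clusters easy", "scala makes spark easy"))

  // The familiar collection-style operators, now running on the cluster.
  val wordCounts = lines
    .flatMap(_.split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)

  wordCounts.collect.foreach(println)
  sc.stop()
}
```

Note how close the transformation pipeline is to the parallel-collection version; the difference is that each operator now describes work to be scheduled across executors rather than threads.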
In this chapter, we will learn the basics of Apache Spark and use it to explore a set of emails, extracting features ...