Shown in the next image is a simplified view of the Spark Streaming process. Spark was originally designed for faster processing of batches of data from Hadoop and was later adapted for near-real-time use cases as Spark Streaming, retaining many of the fundamental building blocks and patterns of the batch engine. The primary building blocks of Spark Streaming are DStreams, receivers, and Resilient Distributed Datasets (RDDs). Because Spark began as a batch-processing engine, its near-real-time mode keeps a similar execution model: even for near-real-time use cases, Spark Streaming processes incoming data as micro-batches, grouped by a configurable batch interval. This batch interval also introduces some latency into the end-to-end processing.
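To make the micro-batch model concrete, here is a minimal sketch using the classic DStream API in Scala. A StreamingContext is created with a batch interval, and a receiver-backed DStream reads lines from a TCP socket; the host, port, and the 5-second interval are illustrative choices, not values from the text.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BatchIntervalSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")   // at least 2 threads: one for the receiver, one for processing
      .setAppName("BatchIntervalSketch")

    // The batch interval: incoming records are grouped into micro-batches
    // of this duration, and each micro-batch is processed as an RDD.
    val ssc = new StreamingContext(conf, Seconds(5))

    // A DStream backed by a receiver listening on a TCP socket
    // (hypothetical host/port for illustration).
    val lines = ssc.socketTextStream("localhost", 9999)

    // Each transformation below is applied to the RDD of every micro-batch.
    val counts = lines
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.print()

    ssc.start()             // start receiving and processing micro-batches
    ssc.awaitTermination()  // block until the streaming job is stopped
  }
}
```

In this model, a record that arrives just after a batch boundary waits up to one full interval before processing begins, which is the latency cost the batch interval introduces.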