Chapter 4. Common Hadoop Processing Patterns
With an understanding of how to access and process data on Hadoop, we’d like to move on to discuss how to solve some fairly common problems in Hadoop using the tools we discussed in Chapter 3. We’ll cover the following data processing tasks, which, in addition to being common patterns in processing data on Hadoop, also have a fairly high degree of implementation complexity:
- Removing duplicate records by primary key (compaction)
- Using windowing analytics
- Updating time series data
We’ll go into more detail on these patterns next, and take a deep dive into how they’re implemented. We’ll present implementations of these patterns in both Spark and SQL (for Impala and Hive). You’ll note that we’re not including implementations in MapReduce; this is because of the size and complexity of the code in MapReduce, as well as the move toward newer processing frameworks such as Spark and abstractions such as SQL.
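Before diving into the details, here is a minimal sketch of what the first of these patterns can look like in Spark. This is an illustrative example rather than the implementation we walk through later: the Record case class, the comma-delimited id,timestamp,value layout, and the command-line input/output paths are all assumptions made for the sake of a self-contained example.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative record layout (an assumption for this sketch):
// a primary key, a timestamp, and a payload value.
case class Record(id: String, ts: Long, value: String)

object DedupByPrimaryKey {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("dedup-by-primary-key"))

    // Parse a hypothetical comma-delimited input of id,timestamp,value.
    val records = sc.textFile(args(0)).map { line =>
      val fields = line.split(",")
      Record(fields(0), fields(1).toLong, fields(2))
    }

    // Key each record by its primary key and keep only the newest version,
    // collapsing resends and deltas down to one record per key.
    val deduped = records
      .keyBy(_.id)
      .reduceByKey((a, b) => if (a.ts >= b.ts) a else b)
      .values

    deduped
      .map(r => s"${r.id},${r.ts},${r.value}")
      .saveAsTextFile(args(1))

    sc.stop()
  }
}
```

The key idea is that reduceByKey collapses all versions of a record sharing a primary key down to one, keeping whichever version is considered authoritative (here, the one with the latest timestamp). The same logic can be expressed in SQL for Hive or Impala, for example by grouping on the primary key or ranking versions within each key with an analytic function.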
Pattern: Removing Duplicate Records by Primary Key
Duplicate records are a common occurrence when you are working with data in Hadoop for two primary reasons:
- Resends during data ingest
  As we’ve discussed elsewhere in the book, it’s difficult to ensure that records are sent exactly once, and it’s not uncommon to have to deal with duplicate records during ingest processing.
- Deltas (updated records)
  HDFS is a “write once and read many” filesystem. Making modifications at a record level is not a simple thing to do. In ...