Chapter 5. Serialization and Hadoop I/O

Hadoop is about big data, and wherever data is handled, I/O inevitably becomes an integral part of the setup. Data must be ingested over the network or loaded from external persistent media. The ingested data must be staged during the extraction and transformation steps. Finally, the results must be stored for consumption by downstream processes that analyze, serve, report on, and visualize the data. Each of these stages involves understanding the underlying data storage structures, data formats, and data models; this understanding helps in tuning the entire data-handling pipeline for storage efficiency and speed.

In this chapter, we will look into the IO features and capabilities ...
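As a preview of the serialization theme, Hadoop defines its own compact binary serialization contract through the Writable interface, whose two methods, write(DataOutput) and readFields(DataInput), write and read a record's fields in a fixed order. The sketch below illustrates that pattern using only the JDK, with no Hadoop dependency; the class name and fields are hypothetical examples, not taken from the chapter.

```java
import java.io.*;

// A minimal sketch of record serialization in the style of Hadoop's
// Writable contract: write(DataOutput) / readFields(DataInput).
// The record type and its fields are illustrative assumptions.
public class TemperatureRecord {
    private long timestamp;
    private int temperature;

    public TemperatureRecord() {}

    public TemperatureRecord(long timestamp, int temperature) {
        this.timestamp = timestamp;
        this.temperature = temperature;
    }

    // Serialize the fields, in a fixed order, to a binary stream.
    public void write(DataOutput out) throws IOException {
        out.writeLong(timestamp);
        out.writeInt(temperature);
    }

    // Deserialize the fields in exactly the order they were written.
    public void readFields(DataInput in) throws IOException {
        timestamp = in.readLong();
        temperature = in.readInt();
    }

    public static void main(String[] args) throws IOException {
        TemperatureRecord original = new TemperatureRecord(1700000000L, 21);

        // Round-trip the record through an in-memory byte buffer:
        // 8 bytes for the long plus 4 bytes for the int.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buf));

        TemperatureRecord copy = new TemperatureRecord();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));

        System.out.println(buf.size() + " bytes: "
                + copy.timestamp + " " + copy.temperature);
    }
}
```

Because the format carries no field names or type tags, it is far more compact than Java's built-in object serialization, which is one reason Hadoop uses this style for keys and values that cross the network during a job.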
