Tiering data flows

In Chapter 1, Overview and Architecture, we talked about tiering your data flows. There are several reasons for wanting to do this. You may want to cap the number of Flume agents that connect directly to your Hadoop cluster, thereby limiting the number of parallel requests it must handle. You may also lack sufficient disk space on your application servers to buffer a significant amount of data while you perform maintenance on your Hadoop cluster. Whatever your reason or use case, the most common mechanism for chaining Flume agents is the Avro Source/Sink pair.
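
As a rough sketch of this pattern, the configuration below wires two agents into a tier: a "client" agent forwards events over an Avro sink to a "collector" agent's Avro source. The agent names, hostname, port, and log path are illustrative assumptions, not values from this book.

```properties
# Tier 1 agent ("client"): tails a local log and forwards events
# over Avro RPC. Hostname and port here are placeholders.
client.sources = s1
client.channels = c1
client.sinks = k1

client.sources.s1.type = exec
client.sources.s1.command = tail -F /var/log/app.log
client.sources.s1.channels = c1

client.channels.c1.type = memory

client.sinks.k1.type = avro
client.sinks.k1.hostname = collector1.example.com
client.sinks.k1.port = 4141
client.sinks.k1.channel = c1

# Tier 2 agent ("collector"): receives Avro events from any number
# of client agents and writes them to HDFS.
collector.sources = s1
collector.channels = c1
collector.sinks = k1

collector.sources.s1.type = avro
collector.sources.s1.bind = 0.0.0.0
collector.sources.s1.port = 4141
collector.sources.s1.channels = c1

collector.channels.c1.type = memory

collector.sinks.k1.type = hdfs
collector.sinks.k1.hdfs.path = /flume/events
collector.sinks.k1.channel = c1
```

Note that many client agents can point their Avro sinks at the same collector's source/port, which is what keeps the number of direct HDFS connections small.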

Avro Source/Sink

We covered Avro a bit in Chapter 4, Sink and Sink Processors, when we discussed using it as an on-disk serialization format for files stored in HDFS. Here we'll put ...
