In this webcast, Hari Shreedharan, the author of Using Flume, will discuss how to use Flume to write data to HDFS, HBase, and Spark. Hari will discuss strategies for partitioning and serializing the data in formats friendly to other systems. Flume can also be used to feed Spark Streaming to process data in real time, which will be shown in a demo at the end of the webcast.
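As a taste of the partitioning strategies the webcast covers, here is a minimal sketch of a Flume agent configuration that tails a source and writes time-partitioned files to HDFS. The agent, component names, host, and path are illustrative assumptions, not taken from the webcast; the property keys themselves (`hdfs.path` escape sequences such as `%Y/%m/%d`, `hdfs.useLocalTimeStamp`) are standard Flume HDFS sink settings.

```
# Illustrative Flume agent config (names and paths are examples only)
agent.sources  = src1
agent.channels = ch1
agent.sinks    = hdfsSink

# Netcat source for demo purposes; production setups often use exec/avro sources
agent.sources.src1.type     = netcat
agent.sources.src1.bind     = localhost
agent.sources.src1.port     = 44444
agent.sources.src1.channels = ch1

# In-memory channel buffers events between source and sink
agent.channels.ch1.type     = memory
agent.channels.ch1.capacity = 10000

# HDFS sink: the %Y/%m/%d escapes in hdfs.path partition output by date
agent.sinks.hdfsSink.type                  = hdfs
agent.sinks.hdfsSink.channel               = ch1
agent.sinks.hdfsSink.hdfs.path             = hdfs://namenode:8020/flume/events/%Y/%m/%d
agent.sinks.hdfsSink.hdfs.fileType         = DataStream
# Use the agent's clock for the time escapes when events lack a timestamp header
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
```

With this layout, downstream systems such as Hive or Spark can prune input by date directory rather than scanning all events.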
Table of contents
- Title: Using Flume: Integrating Flume with Hadoop, HBase and Spark
- Release date: April 2015
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781491934609