In this webcast, Hari Shreedharan, the author of Using Flume, discusses how to use Flume to write data to HDFS, HBase, and Spark. Hari covers strategies for partitioning the data and serializing it in formats that other systems can consume. Flume can also feed Spark Streaming to process data in real time, which will be shown in a demo at the end of the webcast.
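As background for the HDFS-writing and partitioning topics mentioned above, here is a minimal sketch of a Flume agent configuration with an HDFS sink. The agent, source, and channel names (`agent`, `src1`, `ch1`, `hdfsSink`) and the netcat source are illustrative assumptions, not from the webcast; the `%Y/%m/%d` escape sequences in `hdfs.path` show one common way to partition data by date.

```properties
# Name the components of this (hypothetical) agent
agent.sources = src1
agent.channels = ch1
agent.sinks = hdfsSink

# A simple netcat source for demonstration purposes
agent.sources.src1.type = netcat
agent.sources.src1.bind = localhost
agent.sources.src1.port = 44444
agent.sources.src1.channels = ch1

# An in-memory channel buffering events between source and sink
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 10000

# HDFS sink: the path escapes partition output by event timestamp
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = ch1
agent.sinks.hdfsSink.hdfs.path = /flume/events/%Y/%m/%d
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
```

Such a config would be started with `flume-ng agent --name agent --conf-file <file>`; the actual partitioning and serialization strategies are the subject of the webcast itself.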
Product details
- Title: Using Flume: Integrating Flume with Hadoop, HBase and Spark
- Release date: April 2015
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781491934609