Video description
In this webcast, Hari Shreedharan, the author of Using Flume, will discuss how to use Flume to write data to HDFS, HBase, and Spark. Hari will discuss strategies for partitioning and serializing the data in formats compatible with other systems. Flume can also be used to feed Spark Streaming to process data in real time, which will be shown in a demo at the end of the webcast.
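As a minimal sketch of the kind of setup the webcast covers, the fragment below configures a hypothetical Flume agent (`a1`) that reads events from a netcat source and writes them to HDFS through a memory channel; the agent name, host, port, and HDFS path are illustrative assumptions, not taken from the webcast itself:

```properties
# Hypothetical Flume agent "a1": netcat source -> memory channel -> HDFS sink.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: listen for newline-delimited events on a local TCP port.
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Channel: in-memory buffer between source and sink.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Sink: write events to HDFS; escape sequences like %Y/%m/%d
# partition the output directories by event timestamp.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y/%m/%d
a1.sinks.k1.hdfs.fileType = DataStream

# Wire the components together.
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

An agent like this would be started with Flume's standard `flume-ng agent` command, naming the agent and pointing at this properties file.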
Product information
- Title: Using Flume: Integrating Flume with Hadoop, HBase and Spark
- Author(s): Hari Shreedharan
- Release date: April 2015
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781491934609