Using Flume: Integrating Flume with Hadoop, HBase and Spark
Date: This event took place live on April 22 2015
Presented by: Hari Shreedharan
Duration: Approximately 60 minutes.
Watch the webcast recording
In this webcast, Hari Shreedharan, the author of Using Flume, will discuss how to use Flume to write data to HDFS, HBase, and Spark. Hari will discuss strategies for partitioning and serializing the data in formats friendly to other systems. Flume can also be used to feed Spark Streaming to process data in real time, which will be shown in a demo at the end of the webcast.
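As background for the HDFS-writing and partitioning topics mentioned above, here is a minimal sketch of a Flume agent configuration that tails a netcat source into a time-partitioned HDFS path. The agent name `a1`, host, port, and path are assumptions for illustration, not details from the webcast; the property names themselves come from the standard Flume configuration format.

```properties
# Name the components of this (hypothetical) agent "a1"
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: listen for newline-delimited events on a local port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# Sink: write to HDFS, partitioned by date via escape sequences in the path
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y/%m/%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 300
# Use the agent's local time for %Y/%m/%d, since netcat events
# carry no timestamp header of their own
a1.sinks.k1.hdfs.useLocalTimeStamp = true

# Wire source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

The `%Y/%m/%d` escape sequences in the sink path are one of the partitioning strategies the webcast covers: each event lands in a directory keyed by its date, which keeps downstream Hive or Spark jobs from scanning the whole dataset.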
About Hari Shreedharan
Hari Shreedharan is a PMC Member and Committer on the Apache Flume project. As a PMC member, he is involved in making decisions on the direction of the project. Hari is also a Software Engineer at Cloudera, where he works on Apache Flume, Apache Spark, and Apache Sqoop. He also helps customers successfully deploy and manage Flume, Spark, and Sqoop on their clusters by resolving any issues they face. Hari earned his bachelor's degree from Malaviya National Institute of Technology, Jaipur, India, and his master's in Computer Science from Cornell University in 2010.