Write Storm topology to persist data into HDFS

In this section, we cover how to write an HDFS bolt that persists data into HDFS, focusing on the following points:

  • Consuming data from Kafka
  • The logic to store data in HDFS
  • Rotating files in HDFS after a predefined time or size
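The last point, file rotation, is the decision of when the bolt should close the file it is currently writing and open a fresh one, either because the file has grown past a size limit or because it has been open longer than a time limit. The small class below sketches that size-or-time decision using only the standard library; the class and method names (`RotationDecider`, `recordWrite`) are our own illustration, not part of the Storm API, which provides its own rotation policies.

```java
// Minimal sketch of the size-or-time rotation decision an HDFS-writing
// bolt makes. Illustrative only: names are not from the Storm API.
public class RotationDecider {
    private final long maxBytes;      // rotate once this many bytes are written...
    private final long maxAgeMillis;  // ...or once the file has been open this long
    private long bytesWritten = 0;
    private long openedAtMillis;

    public RotationDecider(long maxBytes, long maxAgeMillis, long nowMillis) {
        this.maxBytes = maxBytes;
        this.maxAgeMillis = maxAgeMillis;
        this.openedAtMillis = nowMillis;
    }

    /** Record a write and report whether the current file should be rotated. */
    public boolean recordWrite(long tupleBytes, long nowMillis) {
        bytesWritten += tupleBytes;
        return bytesWritten >= maxBytes
                || (nowMillis - openedAtMillis) >= maxAgeMillis;
    }

    /** Reset the counters after a fresh file has been opened. */
    public void rotated(long nowMillis) {
        bytesWritten = 0;
        openedAtMillis = nowMillis;
    }

    public static void main(String[] args) {
        // Rotate after 1 KB or 60 seconds, whichever comes first.
        RotationDecider decider = new RotationDecider(1024, 60_000, 0);
        System.out.println(decider.recordWrite(512, 1_000));  // false: under both limits
        System.out.println(decider.recordWrite(600, 2_000));  // true: 1112 bytes >= 1 KB
        decider.rotated(2_000);
        System.out.println(decider.recordWrite(10, 70_000));  // true: open for 68 s >= 60 s
    }
}
```

In the real topology the same either/or choice is expressed declaratively, by handing the bolt a rotation policy object instead of writing this logic by hand.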

Perform the following steps to create the topology that stores the data in HDFS:

  1. Create a new Maven project with groupId com.stormadvance and artifactId storm-hadoop.
  2. Add the following dependencies to the pom.xml file. We add the Kafka Maven dependency to pom.xml to support the Kafka consumer. Please refer to the previous chapter for producing data into Kafka, as here we are going to consume data from Kafka and store it in HDFS: ...
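The dependency list itself is elided above. As a rough sketch, a project of this kind typically pulls in Storm core, the Kafka spout integration, and the storm-hdfs module; the artifact IDs and versions below are illustrative assumptions, so align them with the Storm and Kafka versions used in the previous chapter and on your cluster:

```xml
<!-- Illustrative sketch only: match versions to your cluster. -->
<dependencies>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>1.0.2</version>
    <!-- provided: the Storm cluster supplies this jar at runtime -->
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-hdfs</artifactId>
    <version>1.0.2</version>
  </dependency>
</dependencies>
```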
