In this section, we are going to cover how to write an HDFS bolt to persist data into HDFS. We will focus on the following points:
- Consuming data from Kafka
- The logic to store the data in HDFS
- Rotating files in HDFS after a predefined time or size (a minimal configuration sketch follows this list)
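To make the rotation point concrete, the following is a minimal sketch of how an HDFS bolt from the storm-hdfs module can be configured with a size-based rotation policy. It is not the full topology built in this section; the namenode URL, output path, and the 5 MB / 1000-tuple thresholds are illustrative assumptions rather than values from this book:

```java
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;

public class HdfsBoltBuilder {

    public static HdfsBolt build() {
        // Write each tuple as one line, with "|" as the field delimiter
        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");

        // Flush/sync the HDFS file after every 1000 tuples (placeholder value)
        SyncPolicy syncPolicy = new CountSyncPolicy(1000);

        // Rotate the output file once it reaches 5 MB; a TimedRotationPolicy
        // could be used instead to rotate after a fixed time interval
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);

        // Destination directory on HDFS (placeholder path)
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm-data/");

        return new HdfsBolt()
                .withFsUrl("hdfs://namenode-host:8020") // placeholder namenode URL
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(format)
                .withRotationPolicy(rotationPolicy)
                .withSyncPolicy(syncPolicy);
    }
}
```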
Perform the following steps to create the topology that stores the data in HDFS:
- Create a new Maven project with the groupId com.stormadvance and the artifactId storm-hadoop.
- Add the following dependencies to the pom.xml file. We are adding the Kafka Maven dependency to pom.xml to support the Kafka consumer. Please refer to the previous chapter for producing data into Kafka, as here we are going to consume data from Kafka and store it in HDFS: ...
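With the dependencies in place, the Kafka-consumption side of the topology can be wired to the HDFS bolt. The following is a minimal sketch assuming the Storm 1.x storm-kafka spout; the ZooKeeper address, topic name, ZooKeeper root, consumer id, and the HdfsBoltBuilder helper from the earlier sketch are illustrative placeholders, not this book's actual code:

```java
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaToHdfsTopologyBuilder {

    public static TopologyBuilder build() {
        // ZooKeeper ensemble used by the Kafka cluster (placeholder address)
        ZkHosts zkHosts = new ZkHosts("localhost:2181");

        // Topic name, ZooKeeper root, and consumer id are placeholder values
        SpoutConfig spoutConfig =
                new SpoutConfig(zkHosts, "dataTopic", "/kafka-hdfs", "kafka-hdfs-consumer");
        // Emit each Kafka message as a plain string tuple
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("KafkaSpout", new KafkaSpout(spoutConfig), 1);
        // The HDFS bolt sketched earlier consumes the Kafka spout's stream
        builder.setBolt("HdfsBolt", HdfsBoltBuilder.build(), 1)
                .shuffleGrouping("KafkaSpout");
        return builder;
    }
}
```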