Since we are working with two types of data sources with different data structures, we have streamed them into two separate Kafka topics. This requires us to build two Flink execution pipelines that run in parallel.
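As a rough sketch of this layout, the fragment below wires two Kafka sources into a single job, each with the kind of custom deserialization schema described next and its own HDFS sink. The topic names, broker address, HDFS paths, the Customer/Contact POJO types, and the FlinkKafkaConsumer09/BucketingSink connector classes are illustrative assumptions matching a pre-1.12 Flink release, not the project's actual values.

import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;

public class ParallelPipelinesSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "kafka-to-hdfs");           // assumed consumer group

        // Pipeline 1: first topic, deserialized with its own schema, written to HDFS.
        DataStream<Customer> customers = env.addSource(
                new FlinkKafkaConsumer09<>("customer-topic", new CustomerDeserializationSchema(), props));
        customers.addSink(new BucketingSink<Customer>("hdfs:///data/customer"));

        // Pipeline 2: second topic, identical shape but a different schema and sink path.
        DataStream<Contact> contacts = env.addSource(
                new FlinkKafkaConsumer09<>("contact-topic", new ContactDeserializationSchema(), props));
        contacts.addSink(new BucketingSink<Contact>("hdfs:///data/contact"));

        // Both source-to-sink chains run independently, and in parallel, inside one job;
        // they could equally well be submitted as two separate jobs.
        env.execute("Kafka to HDFS pipelines");
    }
}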
The fundamental code remains the same as in chapter08/flume-example1 in both cases, since the messages originate from a Kafka topic and are then stored in HDFS. However, the two message types have very different structures, so each pipeline needs its own deserialization schema implementation, as shown below. The full source for this class can be found in the chapter08/flink-customer-db project and is well commented for easy understanding.
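The listing below is a minimal sketch of such a schema: it assumes the messages are UTF-8 encoded JSON and that Jackson is on the classpath to map them onto a POJO. The class and POJO names (CustomerDeserializationSchema, Customer) are illustrative stand-ins for the actual types in the project.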
import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

import com.fasterxml.jackson.databind.ObjectMapper;

public class CustomerDeserializationSchema implements DeserializationSchema<Customer> {

    private static final long serialVersionUID = 1L;

    // Jackson's ObjectMapper is not serializable, so it is created lazily on the
    // task manager rather than being shipped along with the serialized job graph.
    private transient ObjectMapper mapper;

    @Override
    public Customer deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        // Each Kafka message is expected to be a UTF-8 encoded JSON document
        // whose fields mirror the Customer POJO.
        return mapper.readValue(message, Customer.class);
    }

    @Override
    public boolean isEndOfStream(Customer nextElement) {
        // The Kafka topic is unbounded; the stream never ends.
        return false;
    }

    @Override
    public TypeInformation<Customer> getProducedType() {
        // Tells Flink which type this schema produces.
        return TypeInformation.of(Customer.class);
    }
}
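The deserialization schema for the second message type follows the same pattern; only the target POJO and its field mapping change, which is what allows the two pipelines to share the rest of their code.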