Now that we have seen an example of Flink-based processing, it is time to apply this to the single customer view use case and continue building the Data Lake landscape. For this purpose, let us consider integrating sources such as customer information stored in relational format and user location logs. The user logs will be generated as spool files, which Flume can then consume using a Kafka channel. The messages in this Kafka channel will eventually be consumed and processed by a Flink pipeline and written to an HDFS sink. In this use case, Flume acts as the acquisition layer, acquiring data from these sources and storing it as messages in Kafka topics. We will define two Flink processing pipelines, that would ...
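To make the processing half of this flow concrete, the following is a minimal sketch of such a Flink pipeline: it consumes the user location messages from the Kafka topic fed by the Flume agent (through its Kafka channel) and writes them to an HDFS sink. The topic name user-location-logs, the broker address, the consumer group, and the HDFS path are illustrative assumptions, not values prescribed by the use case; the connector classes shown (FlinkKafkaConsumer010, BucketingSink) are the Kafka and filesystem connectors available in Flink releases of this period.

// A minimal sketch, assuming a Kafka topic "user-location-logs" populated by
// the Flume agent, a local Kafka broker, and an HDFS path for the raw zone.
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class LocationLogPipeline {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka consumer properties; adjust the broker list and group id
        // to match your environment.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "location-log-consumer");

        // Source: the Kafka topic that the Flume Kafka channel writes into.
        DataStream<String> locationLogs = env.addSource(
                new FlinkKafkaConsumer010<>("user-location-logs",
                        new SimpleStringSchema(), props));

        // Sink: roll the records into files under an HDFS base path.
        locationLogs.addSink(
                new BucketingSink<String>("hdfs://namenode:8020/datalake/raw/location"));

        env.execute("User location log pipeline");
    }
}

In the full use case, transformation steps would sit between the Kafka source and the HDFS sink, and a second, similar pipeline would handle the other source feeding the single customer view.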