As the events are queued into their respective Kafka topics, the Flink processing pipeline is triggered and starts consuming events from these topics.
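To make this concrete, the consumption side might look roughly like the following sketch for the address pipeline. This is not the book's exact code: the topic name, consumer group, broker address, and class name are illustrative assumptions, and it uses the FlinkKafkaConsumer010 connector that was current for Flink releases of that period.

import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class AddressStreamJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "flink-address-consumer");  // assumed consumer group

        // Subscribe to the address topic; the contacts pipeline is wired up the same way
        // against its own topic.
        DataStream<String> addressEvents = env.addSource(
                new FlinkKafkaConsumer010<>("address", new SimpleStringSchema(), props));

        addressEvents.print(); // placeholder; the HDFS and Elasticsearch sinks come next

        env.execute("Address ingestion pipeline");
    }
}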
Taking as a reference the Flink example covered in an earlier chapter, we build two pipelines here: one for addresses and the other for contacts. Each pipeline streams its events into two sinks, HDFS and Elasticsearch, so that both of these ingestions are part of the same transaction.
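The two-sink fan-out for a single pipeline could be sketched as below. It assumes the flink-connector-filesystem (BucketingSink) and flink-connector-elasticsearch5 dependencies are on the classpath; the HDFS path, Elasticsearch host, cluster name, index, and type are illustrative assumptions rather than the values used in the book.

import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.elasticsearch.client.Requests;

public class AddressSinks {

    public static void attachSinks(DataStream<String> addressEvents) {
        // Sink 1: land the raw events in HDFS, bucketed by time.
        addressEvents.addSink(
                new BucketingSink<String>("hdfs://namenode:8020/data/address")); // assumed path

        // Sink 2: index the same events into Elasticsearch.
        Map<String, String> config = new HashMap<>();
        config.put("cluster.name", "elasticsearch");    // assumed cluster name
        config.put("bulk.flush.max.actions", "1");      // flush every record (demo setting)

        List<InetSocketAddress> transports =
                Collections.singletonList(new InetSocketAddress("localhost", 9300)); // assumed ES host

        addressEvents.addSink(new ElasticsearchSink<String>(config, transports,
                new ElasticsearchSinkFunction<String>() {
                    @Override
                    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                        indexer.add(Requests.indexRequest()
                                .index("address")   // assumed index name
                                .type("record")     // assumed type name
                                .source(Collections.singletonMap("raw", element)));
                    }
                }));
    }
}

The contacts pipeline attaches its own pair of sinks in the same way, pointing at its own HDFS directory and Elasticsearch index.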
Building on the earlier Flink example, which we ran from the IDE, we will now package it in such a way that we can also deploy the code in the Flink container. This aspect of Flink deployment is new and was not covered in the earlier ...
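As a rough illustration of the deployment step, once the job is packaged as a fat JAR (for example with the Maven Shade plugin) it can be submitted with the Flink CLI from inside the container; the entry class and JAR path shown here are hypothetical placeholders.

# Submit the packaged job to the Flink cluster running in the container
./bin/flink run -c com.example.lake.AddressStreamJob /opt/jobs/flink-pipelines-1.0.jar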