Book description
Process large volumes of data in real time while building a high-performance, robust data stream processing pipeline using the latest Apache Kafka 2.0
Key Features
- Solve practical large-scale data processing challenges with Kafka
- Tackle data processing challenges like late events, windowing, and watermarking
- Understand real-time stream processing applications using the Schema Registry, Kafka Connect, Kafka Streams, and KSQL
Book Description
Apache Kafka is a great open source platform for handling your real-time data pipeline, ensuring high-speed filtering and pattern matching on the fly. In this book, you will learn how to use Apache Kafka for the efficient processing of distributed applications and become familiar with solving everyday problems in fast data and processing pipelines.
This book focuses on programming rather than the configuration management of Kafka clusters or DevOps. It starts with installing and setting up the development environment, before quickly moving on to fundamental messaging operations such as validation and enrichment.
Here you will learn about message composition with the pure Kafka API and Kafka Streams. You will look into the transformation of messages in different formats, such as text, binary, XML, JSON, and Avro. Next, you will learn how to expose the schemas contained in Kafka with the Schema Registry. You will then learn how to work with all relevant connectors with Kafka Connect. While working with Kafka Streams, you will perform various interesting operations on streams, such as windowing, joins, and aggregations. Finally, through KSQL, you will learn how to retrieve, insert, modify, and delete data streams, and how to manipulate watermarks and windows.
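The transformations mentioned above come down to turning messages into bytes on the producer side and back into objects on the consumer side. The book's examples are in Java; as a language-agnostic sketch of the custom serializer/deserializer idea, here is a JSON round trip in Python (the HealthCheck-style field names are illustrative, not the book's exact schema):

```python
import json

def serialize(message: dict) -> bytes:
    # Encode the message as UTF-8 JSON bytes, as a custom Kafka serializer would.
    return json.dumps(message).encode("utf-8")

def deserialize(payload: bytes) -> dict:
    # Reverse the transformation on the consumer side.
    return json.loads(payload.decode("utf-8"))

# Hypothetical HealthCheck-style record (field names are illustrative).
event = {"serialNumber": "EW05-HV36", "status": "RUNNING"}
payload = serialize(event)
assert deserialize(payload) == event
```

The same round-trip shape applies to the binary, XML, and Avro formats covered in the book; only the encoding step changes.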
What you will learn
- Validate data with Kafka
- Add information to existing data flows
- Generate new information through message composition
- Perform data validation and versioning with the Schema Registry
- Perform message serialization and deserialization
- Process data streams with Kafka Streams
- Understand the duality between tables and streams with KSQL
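The last bullet refers to the stream-table duality: a table is the latest value per key in a changelog stream, and a stream is the sequence of changes to a table. A tiny Python sketch of that idea (keys and values are illustrative):

```python
# A changelog stream: an ordered sequence of (key, value) updates.
changelog = [
    ("machine1", "RUNNING"),
    ("machine2", "STOPPED"),
    ("machine1", "SHUTTING_DOWN"),
]

def to_table(stream):
    # A table is the latest value observed for each key in the stream.
    table = {}
    for key, value in stream:
        table[key] = value
    return table

table = to_table(changelog)
assert table == {"machine1": "SHUTTING_DOWN", "machine2": "STOPPED"}
```

KSQL exposes this duality directly: a KSQL TABLE materializes the latest state per key, while a STREAM exposes every update.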
Who this book is for
This book is for developers who want to quickly master the practical concepts behind Apache Kafka. The audience need not have prior experience with Apache Kafka; however, familiarity with Java or any JVM language will be helpful in understanding the code in this book.
Table of contents
- Title page
- Copyright and Credits
- Dedication
- About Packt
- Contributors
- Preface
- Configuring Kafka
- Message Validation
- Message Enrichment
- Serialization
- Kioto, a Kafka IoT company
- Project setup
- The constants
- HealthCheck message
- Java PlainProducer
- Running the PlainProducer
- Java plain consumer
- Java PlainProcessor
- Running the PlainProcessor
- Custom serializer
- Java CustomProducer
- Running the CustomProducer
- Custom deserializer
- Java custom consumer
- Java custom processor
- Running the custom processor
- Summary
- Schema Registry
- Avro in a nutshell
- Defining the schema
- Starting the Schema Registry
- Using the Schema Registry
- Registering a new version of a schema under a -value subject
- Registering a new version of a schema under a -key subject
- Registering an existing schema into a new subject
- Listing all subjects
- Fetching a schema by its global unique ID
- Listing all schema versions registered under the healthchecks-value subject
- Fetching version 1 of the schema registered under the healthchecks-value subject
- Deleting version 1 of the schema registered under the healthchecks-value subject
- Deleting the most recently registered schema under the healthchecks-value subject
- Deleting all the schema versions registered under the healthchecks-value subject
- Checking whether a schema is already registered under the healthchecks-key subject
- Testing schema compatibility against the latest schema under the healthchecks-value subject
- Getting the top-level compatibility configuration
- Globally updating the compatibility requirements
- Updating the compatibility requirements under the healthchecks-value subject
- Java AvroProducer
- Running the AvroProducer
- Java AvroConsumer
- Java AvroProcessor
- Running the AvroProcessor
- Summary
- Kafka Streams
- Kafka Streams in a nutshell
- Project setup
- Java PlainStreamsProcessor
- Running the PlainStreamsProcessor
- Scaling out with Kafka Streams
- Java CustomStreamsProcessor
- Running the CustomStreamsProcessor
- Java AvroStreamsProcessor
- Running the AvroStreamsProcessor
- Late event processing
- Basic scenario
- Late event generation
- Running the EventProducer
- Kafka Streams processor
- Running the Streams processor
- Stream processor analysis
- Summary
- KSQL
- Kafka Connect
- Other Books You May Enjoy
Product information
- Title: Apache Kafka Quick Start Guide
- Author(s):
- Release date: December 2018
- Publisher(s): Packt Publishing
- ISBN: 9781788997829