Learning Apache Apex

Book Description

Designing and writing a real-time streaming application with Apache Apex

About This Book
  • Get a clear, practical approach to real-time data processing
  • Program Apache Apex streaming applications
  • Integrate Apex with the open source Big Data ecosystem
Who This Book Is For

This book assumes knowledge of application development with Java and familiarity with distributed systems. Familiarity with other real-time streaming frameworks is not required, but some practical experience with other big data processing frameworks might be helpful.

What You Will Learn
  • Put together a functioning Apex application from scratch
  • Scale an Apex application and configure it for optimal performance
  • Understand how to deal with failures via the fault tolerance features of the platform
  • Use Apex via other frameworks such as Beam
  • Understand the DevOps implications of deploying Apex
In Detail

Apache Apex is a next-generation stream processing framework designed to operate on data at large scale, with minimum latency, maximum reliability, and strict correctness guarantees.

Half of the book consists of Apex applications, showing you key aspects of data processing pipelines such as connectors for sources and sinks, and common data transformations. The other half of the book is evenly split into explaining the Apex framework, and tuning, testing, and scaling Apex applications.

Much of our economic world depends on growing streams of data, such as social media feeds, financial records, data from mobile devices, sensors and machines (the Internet of Things - IoT). The projects in the book show how to process such streams to gain valuable, timely, and actionable insights. Traditional use cases, such as ETL, that currently consume a significant chunk of data engineering resources are also covered.

The final chapter shows you the possibilities emerging in the streaming space and how Apache Apex can contribute to them.

Style and approach

This book is divided into two major parts: the first explains what Apex is, describes its core components, and shows how to write well-built Apex applications. The second part is entirely application-driven, walking you through Apex applications of increasing complexity.

Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files sent to you.

Table of Contents

  1. Preface
    1. What this book covers
    2. What you need for this book
    3. Who this book is for
    4. Conventions
    5. Reader feedback
    6. Customer support
      1. Downloading the example code
      2. Downloading the color images of this book
      3. Errata
      4. Piracy
      5. Questions
  2. Introduction to Apex
    1. Unbounded data and continuous processing
      1. Stream processing
      2. Stream processing systems
      3. What is Apex and why is it important?
    2. Use cases and case studies
      1. Real-time insights for Advertising Tech (PubMatic)
      2. Industrial IoT applications (GE)
      3. Real-time threat detection (Capital One)
      4. Silver Spring Networks (SSN)
    3. Application Model and API
      1. Directed Acyclic Graph (DAG)
      2. Apex DAG Java API
        1. High-level Stream Java API
      3. SQL
      4. JSON
      5. Windowing and time
    4. Value proposition of Apex
      1. Low latency and stateful processing
      2. Native streaming versus micro-batch
      3. Performance
      4. Where Apex excels
      5. Where Apex is not suitable
    5. Summary
  3. Getting Started with Application Development
    1. Development process and methodology
    2. Setting up the development environment
    3. Creating a new Maven project
    4. Application specifications
    5. Custom operator development
      1. The Apex operator model
      2. CheckpointListener/CheckpointNotificationListener
      3. ActivationListener
      4. IdleTimeHandler
    6. Application configuration
    7. Testing in the IDE
      1. Writing the integration test
    8. Running the application on YARN
      1. Execution layer components
      2. Installing Apex Docker sandbox
      3. Running the application
    9. Working on the cluster
      1. YARN web UI
      2. Apex CLI
      3. Logging
      4. Dynamically adjusting logging levels
    10. Summary
  4. The Apex Library
    1. An overview of the library
    2. Integrations
      1. Apache Kafka
      2. Kafka input
      3. Kafka output
      4. Other streaming integrations
        1. JMS (ActiveMQ, SQS, and so on)
        2. Kinesis streams
      5. Files
        1. File input
        2. File splitter and block reader
        3. File writer
      6. Databases
        1. JDBC input
        2. JDBC output
        3. Other databases
    3. Transformations
      1. Parser
      2. Filter
      3. Enrichment
      4. Map transform
      5. Custom functions
      6. Windowed transformations
        1. Windowing
          1. Global Window
          2. Time Windows
          3. Sliding Time Windows
          4. Session Windows
          5. Window propagation
        2. State
          1. Accumulation
          2. Accumulation Mode
          3. State storage
        3. Watermarks
          1. Allowed lateness
        4. Triggering
        5. Merging of streams
        6. The windowing example
      7. Dedup
      8. Join
      9. State Management
    4. Summary
  5. Scalability, Low Latency, and Performance
    1. Partitioning and how it works
    2. Elasticity
    3. Partitioning toolkit
      1. Configuring and triggering partitioning
      2. StreamCodec
      3. Unifier
    4. Custom dynamic partitioning
    5. Performance optimizations
      1. Affinity and anti-affinity
    6. Low-latency versus throughput
    7. Sample application for dynamic partitioning
    8. Performance – other aspects for custom operators
    9. Summary
  6. Fault Tolerance and Reliability
    1. Distributed systems need to be resilient
    2. Fault-tolerance components and mechanism in Apex
    3. Checkpointing
      1. When to checkpoint
      2. How to checkpoint
      3. What to checkpoint
      4. Incremental state saving
      5. Incremental recovery
    4. Processing guarantees
      1. Example – exactly-once counting
      2. The exactly-once output to JDBC
    5. Summary
  7. Example Project – Real-Time Aggregation and Visualization
    1. Streaming ETL and beyond
    2. The application pattern in a real-world use case
    3. Analyzing Twitter feed
      1. Top Hashtags
      2. TweetStats
    4. Running the application
      1. Configuring Twitter API access
      2. Enabling WebSocket output
    5. The Pub/Sub server
    6. Grafana visualization
      1. Installing Grafana
      2. Installing Grafana Simple JSON Datasource
      3. The Grafana Pub/Sub adapter server
      4. Setting up the dashboard
    7. Summary
  8. Example Project – Real-Time Ride Service Data Processing
    1. The goal
    2. Datasource
    3. The pipeline
    4. Simulation of a real-time feed using historical data
    5. Parsing the data
    6. Looking up of the zip code and preparing for the windowing operation
    7. Windowed operator configuration
    8. Serving the data with WebSocket
    9. Running the application
    10. Running the application on GCP Dataproc
    11. Summary
  9. Example Project – ETL Using SQL
    1. The application pipeline
    2. Building and running the application
    3. Application configuration
    4. The application code
    5. Partitioning
    6. Application testing
    7. Understanding application logs
    8. Calcite integration
    9. Summary
  10. Introduction to Apache Beam
    1. Introduction to Apache Beam
    2. Beam concepts
      1. Pipelines, PTransforms, and PCollections
        1. ParDo – elementwise computation
        2. GroupByKey/CombinePerKey – aggregation across elements
      2. Windowing, watermarks, and triggering in Beam
        1. Windowing in Beam
        2. Watermarks in Beam
        3. Triggering in Beam
      3. Advanced topic – stateful ParDo
    3. WordCount in Apache Beam
      1. Setting up your pipeline
        1. Reading the works of Shakespeare in parallel
        2. Splitting each line on spaces
        3. Eliminating empty strings
        4. Counting the occurrences of each word
        5. Format your results
        6. Writing to a sharded text file in parallel
      2. Testing the pipeline at small scale with DirectRunner
    4. Running Apache Beam WordCount on Apache Apex
    5. Summary
  11. The Future of Stream Processing
    1. Lower barrier for building streaming pipelines
      1. Visual development tools
      2. Streaming SQL
      3. Better programming API
      4. Bridging the gap between data science and engineering
      5. Machine learning integration
      6. State management
      7. State query and data consistency
      8. Containerized infrastructure
      9. Management tools
    2. Summary