
Learning Apache Flink

Book Description

Discover the definitive guide to crafting lightning-fast data processing for distributed systems with Apache Flink

About This Book

  • Build your expertise in processing real-time data with Apache Flink and its ecosystem
  • Gain insights into the workings of Apache Flink's components, such as FlinkML, Gelly, and the Table API, illustrated with real-world use cases
  • Exploit Apache Flink's capabilities, such as distributed data streaming, in-memory processing, and pipelined and iterative operators, to improve performance
  • Solve real-world big data problems using Apache Flink's real-time, in-memory, and disk-based processing capabilities

Who This Book Is For

This book is for big data developers who want to process batch and real-time data on distributed systems. Basic knowledge of Hadoop and big data is assumed, along with reasonable proficiency in Java or Scala.

What You Will Learn

  • Learn how to build end-to-end real-time analytics projects
  • Integrate Flink with your existing big data stack and utilize existing infrastructure
  • Build predictive analytics applications using FlinkML
  • Use the Gelly graph library to perform graph querying and search
  • Understand Flink's "streaming first" architecture and implement real streaming applications
  • Learn Flink logging and monitoring best practices to design efficient data pipelines
  • Explore the detailed process of deploying a Flink cluster on Amazon Web Services (AWS) and Google Cloud Platform (GCP)

In Detail

With the advent of massive computer systems, organizations in different domains generate large amounts of data in real time. The latest entrant to big data processing, Apache Flink, is designed to process continuous streams of data at a lightning-fast pace.

This book will be your definitive guide to batch and stream data processing with Apache Flink. It begins by introducing the Apache Flink ecosystem, setting it up, and using the DataSet and DataStream APIs to process batch and streaming datasets. Bringing the power of SQL to Flink, the book then explores the Table API for querying and manipulating data. In the latter half of the book, readers will learn the rest of the Apache Flink ecosystem and how to achieve complex tasks such as event processing, machine learning, and graph processing. The final part of the book covers topics such as scaling Flink solutions, performance optimization, and integrating Flink with other tools, such as ElasticSearch.

Whether you want to dive deeper into Apache Flink, or want to investigate how to get more out of this powerful technology, you'll find everything you need inside.

Style and approach

This book is a comprehensive guide that covers the advanced features of Apache Flink and conveys a practical understanding of the underlying concepts: how, when, and why to use them.

Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files sent to you.

Table of Contents

  1. Learning Apache Flink
    1. Learning Apache Flink
    2. Credits
    3. About the Author
    4. About the Reviewers
    5. www.PacktPub.com
      1. Why subscribe?
    6. Customer Feedback
    7. Preface
      1. What this book covers
      2. What you need for this book
      3. Who this book is for
      4. Conventions
      5. Reader feedback
      6. Customer support
        1. Downloading the example code
        2. Downloading the color images of this book
        3. Errata
        4. Piracy
        5. Questions
    8. 1. Introduction to Apache Flink
      1. History
      2. Architecture
      3. Distributed execution
        1. Job Manager
          1. Actor system
          2. Scheduler
          3. Checkpointing
        2. Task manager
        3. Job client
      4. Features
        1. High performance
        2. Exactly-once stateful computation
        3. Flexible streaming windows
        4. Fault tolerance
        5. Memory management
        6. Optimizer
        7. Stream and batch in one platform
        8. Libraries
        9. Event time semantics
      5. Quick start setup
        1. Pre-requisite
        2. Installing on Windows
        3. Installing on Linux
      6. Cluster setup
        1. SSH configurations
        2. Java installation
        3. Flink installation
        4. Configurations
        5. Starting daemons
        6. Adding additional Job/Task Managers
        7. Stopping daemons and cluster
      7. Running sample application
      8. Summary
    9. 2. Data Processing Using the DataStream API
      1. Execution environment
      2. Data sources
        1. Socket-based
        2. File-based
      3. Transformations
        1. Map
        2. FlatMap
        3. Filter
        4. KeyBy
        5. Reduce
        6. Fold
        7. Aggregations
        8. Window
          1. Global windows
          2. Tumbling windows
          3. Sliding windows
          4. Session windows
        9. WindowAll
        10. Union
        11. Window join
        12. Split
        13. Select
        14. Project
      4. Physical partitioning
        1. Custom partitioning
        2. Random partitioning
        3. Rebalancing partitioning
        4. Rescaling
        5. Broadcasting
      5. Data sinks
      6. Event time and watermarks
        1. Event time
        2. Processing time
        3. Ingestion time
      7. Connectors
        1. Kafka connector
        2. Twitter connector
        3. RabbitMQ connector
        4. ElasticSearch connector
          1. Embedded node mode
          2. Transport client mode
        5. Cassandra connector
      8. Use case - sensor data analytics
      9. Summary
    10. 3. Data Processing Using the Batch Processing API
      1. Data sources
        1. File-based
        2. Collection-based
        3. Generic sources
        4. Compressed files
      2. Transformations
        1. Map
        2. Flat map
        3. Filter
        4. Project
        5. Reduce on grouped datasets
        6. Reduce on grouped datasets by field position key
        7. Group combine
        8. Aggregate on a grouped tuple dataset
        9. MinBy on a grouped tuple dataset
        10. MaxBy on a grouped tuple dataset
        11. Reduce on full dataset
        12. Group reduce on a full dataset
        13. Aggregate on a full tuple dataset
        14. MinBy on a full tuple dataset
        15. MaxBy on a full tuple dataset
        16. Distinct
        17. Join
        18. Cross
        19. Union
        20. Rebalance
        21. Hash partition
        22. Range partition
        23. Sort partition
        24. First-n
      3. Broadcast variables
      4. Data sinks
      5. Connectors
        1. Filesystems
          1. HDFS
          2. Amazon S3
          3. Alluxio
          4. Avro
          5. Microsoft Azure storage
        2. MongoDB
      6. Iterations
        1. Iterator operator
        2. Delta iterator
      7. Use case - Athletes data insights using Flink batch API
      8. Summary
    11. 4. Data Processing Using the Table API
      1. Registering tables
        1. Registering a dataset
        2. Registering a datastream
        3. Registering a table
        4. Registering external table sources
          1. CSV table source
          2. Kafka JSON table source
      2. Accessing the registered table
      3. Operators
        1. The select operator
        2. The where operator
        3. The filter operator
        4. The as operator
        5. The groupBy operator
        6. The join operator
        7. The leftOuterJoin operator
        8. The rightOuterJoin operator
        9. The fullOuterJoin operator
        10. The union operator
        11. The unionAll operator
        12. The intersect operator
        13. The intersectAll operator
        14. The minus operator
        15. The minusAll operator
        16. The distinct operator
        17. The orderBy operator
        18. The limit operator
        19. Data types
      4. SQL
        1. SQL on datastream
        2. Supported SQL syntax
        3. Scalar functions
          1. Scalar functions in the table API
        4. Scalar functions in SQL
      5. Use case - Athletes data insights using Flink Table API
      6. Summary
    12. 5. Complex Event Processing
      1. What is complex event processing?
      2. Flink CEP
        1. Event streams
      3. Pattern API
        1. Begin
        2. Filter
        3. Subtype
        4. OR
        5. Continuity
          1. Strict continuity
          2. Non-strict continuity
        6. Within
        7. Detecting patterns
        8. Selecting from patterns
          1. Select
          2. flatSelect
        9. Handling timed-out partial patterns
      4. Use case - complex event processing on a temperature sensor
      5. Summary
    13. 6. Machine Learning Using FlinkML
      1. What is machine learning?
        1. Supervised learning
          1. Regression
          2. Classification
        2. Unsupervised learning
          1. Clustering
          2. Association
        3. Semi-supervised learning
      2. FlinkML
      3. Supported algorithms
        1. Supervised learning
          1. Support Vector Machine
          2. Multiple Linear Regression
          3. Optimization framework
        2. Recommendations
          1. Alternating Least Squares
        3. Unsupervised learning
          1. k Nearest Neighbour join
        4. Utilities
        5. Data preprocessing and pipelines
          1. Polynomial features
          2. Standard scaler
          3. MinMax scaler
      4. Summary
    14. 7. Flink Graph API - Gelly
      1. What is a graph?
      2. Flink graph API - Gelly
        1. Graph representation
          1. Graph nodes
          2. Graph edges
        2. Graph creation
          1. From dataset of edges and vertices
          2. From dataset of tuples representing edges
          3. From CSV files
          4. From collection lists
        3. Graph properties
        4. Graph transformations
          1. Map
          2. Translate
          3. Filter
          4. Join
          5. Reverse
          6. Undirected
          7. Union
          8. Intersect
        5. Graph mutations
        6. Neighborhood methods
        7. Graph validation
      3. Iterative graph processing
        1. Vertex-Centric iterations
        2. Scatter-Gather iterations
        3. Gather-Sum-Apply iterations
      4. Use case - Airport Travel Optimization
      5. Summary
    15. 8. Distributed Data Processing with Flink and Hadoop
      1. Quick overview of Hadoop
        1. HDFS
        2. YARN
      2. Flink on YARN
        1. Configurations
        2. Starting a Flink YARN session
        3. Submitting a job to Flink
        4. Stopping Flink YARN session
        5. Running a single Flink job on YARN
        6. Recovery behavior for Flink on YARN
        7. Working details
      3. Summary
    16. 9. Deploying Flink on Cloud
      1. Flink on Google Cloud
        1. Installing Google Cloud SDK
        2. Installing BDUtil
        3. Launching a Flink cluster
        4. Executing a sample job
        5. Shutting down the cluster
      2. Flink on AWS
        1. Launching an EMR cluster
        2. Installing Flink on EMR
        3. Executing Flink on EMR-YARN
        4. Starting a Flink YARN session
        5. Executing Flink job on YARN session
        6. Shutting down the cluster
        7. Flink on EMR 5.3+
        8. Using S3 in Flink applications
      3. Summary
    17. 10. Best Practices
      1. Logging best practices
        1. Configuring Log4j
        2. Configuring Logback
        3. Logging in applications
      2. Using ParameterTool
        1. From system properties
        2. From command line arguments
        3. From .properties file
      3. Naming large TupleX types
      4. Registering a custom serializer
      5. Metrics
        1. Registering metrics
          1. Counters
          2. Gauges
          3. Histograms
          4. Meters
        2. Reporters
      6. Monitoring REST API
        1. Config API
        2. Overview API
        3. Overview of the jobs
        4. Details of a specific job
        5. User defined job configuration
      7. Back pressure monitoring
      8. Summary