
Apache Spark 2.x for Java Developers

Book Description

Unleash the data processing and analytics capability of Apache Spark with the language of choice: Java

About This Book

  • Perform big data processing with Spark—without having to learn Scala!
  • Use the Spark Java API to implement efficient enterprise-grade applications for data processing and analytics
  • Go beyond mainstream data processing by adding querying capability, Machine Learning, and graph processing using Spark

Who This Book Is For

If you are a Java developer interested in learning to use the popular Apache Spark framework, this book is the resource you need to get started. Apache Spark developers who are looking to build enterprise-grade applications in Java will also find this book very useful.

What You Will Learn

  • Process data in different file formats such as XML, JSON, CSV, and plain and delimited text, using the Spark core library
  • Perform analytics on data from various sources, such as Kafka and Flume, using the Spark Streaming library
  • Learn SQL schema creation and the analysis of structured data using various SQL functions, including windowing functions, in the Spark SQL library
  • Explore the Spark MLlib APIs while implementing machine learning techniques to solve real-world problems
  • Get to know Spark GraphX so that you understand the various graph-based analytics that can be performed with Spark

In Detail

Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is written in Scala, its Java API exposes all of the features available in the Scala version to Java developers. This book will show you how to implement the various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone.

The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by explaining how to install and configure Spark, and refreshes the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore RDD and its associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark streaming, Machine Learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages.
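The Java concepts the book refreshes (lambdas and the Stream API) map closely onto Spark's transformation/action model. As a flavor of that correspondence, here is a minimal, plain-JDK word count sketch with no Spark dependency (the class and method names are illustrative, not from the book): `flatMap` plays the role of an RDD `flatMap` transformation, and `groupingBy`/`counting` plays the role of `reduceByKey`.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Count word frequencies with the Stream API -- the same
    // map/flatMap/reduce shape that Spark's JavaRDD API follows.
    static Map<String, Long> countWords(List<String> lines) {
        return lines.stream()
                // split each line into words, like flatMap on an RDD
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                // group identical words and count them, like reduceByKey
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(countWords(List.of("spark java spark")));
    }
}
```

In Spark, the same pipeline would run distributed across a cluster, with the lambdas shipped to executors; the book builds up to that from exactly these JDK building blocks.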

By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications.

Style and approach

This practical guide teaches readers the fundamentals of the Apache Spark framework and how to implement components using the Java language. It is a unique blend of theory and practical examples, and is written in a way that will gradually build your knowledge of Apache Spark.

Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files sent to you directly.

Table of Contents

  1. Preface
    1. What this book covers
    2. What you need for this book
    3. Who this book is for
    4. Conventions
    5. Reader feedback
    6. Customer support
      1. Downloading the example code
      2. Errata
      3. Piracy
      4. Questions
  2. Introduction to Spark
    1. Dimensions of big data
    2. What makes Hadoop so revolutionary?
      1. Defining HDFS
        1. NameNode
        2. HDFS I/O
      2. YARN
        1. Processing the flow of application submission in YARN
      3. Overview of MapReduce
    3. Why Apache Spark?
    4. RDD - the first citizen of Spark
      1. Operations on RDD
      2. Lazy evaluation
      3. Benefits of RDD
    5. Exploring the Spark ecosystem
    6. What's new in Spark 2.X?
    7. References
    8. Summary
  3. Revisiting Java
    1. Why use Java for Spark?
    2. Generics
      1. Creating your own generic type
    3. Interfaces
      1. Static method in an interface
      2. Default method in interface
        1. What if a class implements two interfaces which have default methods with same name and signature?
      3. Anonymous inner classes
    4. Lambda expressions
      1. Functional interface
      2. Syntax of Lambda expressions
    5. Lexical scoping
      1. Method reference
      2. Understanding closures
    6. Streams
      1. Generating streams
    7. Intermediate operations
      1. Working with intermediate operations
    8. Terminal operations
      1. Working with terminal operations
        1. String collectors
        2. Collection collectors
        3. Map collectors
        4. Groupings
        5. Partitioning
        6. Matching
        7. Finding elements
    9. Summary
  4. Let Us Spark
    1. Getting started with Spark
    2. Spark REPL also known as CLI
    3. Some basic exercises using Spark shell
      1. Checking Spark version
      2. Creating and filtering RDD
        1. Word count on RDD
      3. Finding the sum of all even numbers in an RDD of integers
        1. Counting the number of words in a file
    4. Spark components
    5. Spark Driver Web UI
      1. Jobs
      2. Stages
      3. Storage
      4. Environment
      5. Executors
      6. SQL
      7. Streaming
    6. Spark job configuration and submission
    7. Spark REST APIs
    8. Summary
  5. Understanding the Spark Programming Model
    1. Hello Spark
      1. Prerequisites
    2. Common RDD transformations
      1. Map
      2. Filter
      3. flatMap
      4. mapToPair
      5. flatMapToPair
      6. union
      7. Intersection
      8. Distinct
      9. Cartesian
      10. groupByKey
      11. reduceByKey
      12. sortByKey
      13. Join
      14. CoGroup
    3. Common RDD actions
      1. isEmpty
      2. collect
      3. collectAsMap
      4. count
      5. countByKey
      6. countByValue
      7. Max
      8. Min
      9. First
      10. Take
      11. takeOrdered
      12. takeSample
      13. top
      14. reduce
      15. Fold
      16. aggregate
      17. forEach
      18. saveAsTextFile
      19. saveAsObjectFile
    4. RDD persistence and cache
    5. Summary
  6. Working with Data and Storage
    1. Interaction with external storage systems
      1. Interaction with local filesystem
      2. Interaction with Amazon S3
      3. Interaction with HDFS
      4. Interaction with Cassandra
    2. Working with different data formats
      1. Plain and specially formatted text
      2. Working with CSV data
      3. Working with JSON data
      4. Working with XML Data
    3. References
    4. Summary
  7. Spark on Cluster
    1. Spark application in distributed-mode
      1. Driver program
      2. Executor program
    2. Cluster managers
      1. Spark standalone
        1. Installation of Spark standalone cluster
          1. Start master
          2. Start slave
          3. Stop master and slaves
        2. Deploying applications on Spark standalone cluster
          1. Client mode
          2. Cluster mode
      2. Useful job configurations
      3. Useful cluster level configurations (Spark standalone)
    3. Yet Another Resource Negotiator (YARN)
      1. YARN client
      2. YARN cluster
      3. Useful job configuration
    4. Summary
  8. Spark Programming Model - Advanced
    1. RDD partitioning
      1. Repartitioning
      2. How Spark calculates the partition count for transformations with shuffling (wide transformations)
      3. Partitioner
        1. Hash Partitioner
        2. Range Partitioner
        3. Custom Partitioner
    2. Advanced transformations
      1. mapPartitions
      2. mapPartitionsWithIndex
      3. mapPartitionsToPair
      4. mapValues
      5. flatMapValues
      6. repartitionAndSortWithinPartitions
      7. coalesce
      8. foldByKey
      9. aggregateByKey
      10. combineByKey
    3. Advanced actions
      1. Approximate actions
      2. Asynchronous actions
      3. Miscellaneous actions
    4. Shared variable
    5. Broadcast variable
      1. Properties of the broadcast variable
      2. Lifecycle of a broadcast variable
      3. Map-side join using broadcast variable
        1. Accumulators
        2. Driver program
    6. Summary
  9. Working with Spark SQL
    1. SQLContext and HiveContext
      1. Initializing SparkSession
      2. Reading CSV using SparkSession
    2. Dataframe and dataset
      1. SchemaRDD
      2. Dataframe
      3. Dataset
        1. Creating a dataset using encoders
        2. Creating a dataset using StructType
      4. Unified dataframe and dataset API
      5. Data persistence
    3. Spark SQL operations
      1. Untyped dataset operation
      2. Temporary view
      3. Global temporary view
      4. Spark UDF
      5. Spark UDAF
        1. Untyped UDAF
        2. Type-safe UDAF
    4. Hive integration
      1. Table Persistence
    5. Summary
  10. Near Real-Time Processing with Spark Streaming
    1. Introducing Spark Streaming
    2. Understanding micro batching
      1. Getting started with Spark Streaming jobs
    3. Streaming sources
      1. fileStream
    4. Kafka
    5. Streaming transformations
      1. Stateless transformation
      2. Stateful transformation
        1. Checkpointing
        2. Windowing
      3. Transform operation
    6. Fault tolerance and reliability
      1. Data receiver stage
      2. File streams
      3. Advanced streaming sources
      4. Transformation stage
      5. Output stage
    7. Structured Streaming
      1. Recap of the use case
      2. Structured streaming - programming model
      3. Built-in input sources and sinks
        1. Input sources
        2. Built-in Sinks
    8. Summary
  11. Machine Learning Analytics with Spark MLlib
    1. Introduction to machine learning
    2. Concepts of machine learning
      1. Datatypes
    3. Machine learning work flow
      1. Pipelines
    4. Operations on feature vectors
      1. Feature extractors
      2. Feature transformers
      3. Feature selectors
    5. Summary
  12. Learning Spark GraphX
    1. Introduction to GraphX
    2. Introduction to Property Graph
    3. Getting started with the GraphX API
      1. Using vertex and edge RDDs
      2. From edges
      3. EdgeTriplet
    4. Graph operations
      1. mapVertices
      2. mapEdges
      3. mapTriplets
      4. reverse
      5. subgraph
      6. aggregateMessages
      7. outerJoinVertices
    5. Graph algorithms
      1. PageRank
        1. Static PageRank
        2. Dynamic PageRank
      2. Triangle counting
      3. Connected components
    6. Summary