Unleash the data processing and analytics capability of Apache Spark with your language of choice: Java
About This Book
- Perform big data processing with Spark—without having to learn Scala!
- Use the Spark Java API to implement efficient enterprise-grade applications for data processing and analytics
- Go beyond mainstream data processing by adding querying capability, Machine Learning, and graph processing using Spark
Who This Book Is For
If you are a Java developer interested in learning to use the popular Apache Spark framework, this book is the resource you need to get started. Apache Spark developers who are looking to build enterprise-grade applications in Java will also find this book very useful.
What You Will Learn
- Process data using different file formats such as XML, JSON, CSV, and plain and delimited text, using the Spark Core library
- Perform analytics on data from various sources, such as Kafka and Flume, using the Spark Streaming library
- Learn SQL schema creation and the analysis of structured data using various SQL functions, including windowing functions, in the Spark SQL library
- Explore the Spark MLlib APIs while implementing Machine Learning techniques to solve real-world problems
- Get to know Spark GraphX so you understand the various graph-based analytics that can be performed with Spark
Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark itself is written in Scala, the Spark Java API exposes all of the Spark features available in the Scala version to Java developers. This book will show you how you can implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone.
The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by instructions on how to install and configure Spark, and a refresher on the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore RDDs and their associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, Machine Learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages.
By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications.
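As a taste of the Java-centric approach described above, the book's opening chapter covers lambda expressions and the distinction between lazy intermediate operations and terminal operations, which mirrors Spark's own model of lazy RDD transformations triggered by actions. The sketch below illustrates that idea using only the standard `java.util.stream` API (no Spark dependency); the class and method names are illustrative, not from the book.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaWarmup {
    // Split lines into words and keep only those longer than four characters.
    static List<String> longWords(List<String> lines) {
        return lines.stream()                                    // stream source
                .flatMap(line -> Arrays.stream(line.split(" "))) // intermediate op: lazy
                .filter(w -> w.length() > 4)                     // intermediate op: lazy
                .collect(Collectors.toList());                   // terminal op: triggers evaluation
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("spark makes big data simple",
                                           "java developers can use spark too");
        // Nothing runs until collect() is invoked inside longWords -- the same
        // lazy-evaluation pattern Spark applies to RDD transformations and actions.
        System.out.println(longWords(lines));
    }
}
```

Conceptually, `flatMap` and `filter` here play the role of Spark transformations, while `collect` plays the role of an action such as `collect()` on a `JavaRDD`.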
Style and approach
This practical guide teaches readers the fundamentals of the Apache Spark framework and how to implement components using the Java language. It is a unique blend of theory and practical examples, and is written in a way that will gradually build your knowledge of Apache Spark.
Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files e-mailed directly to you.
Table of contents
Introduction to Spark
- Dimensions of big data
- What makes Hadoop so revolutionary?
- Why Apache Spark?
- RDD - the first citizen of Spark
- Exploring the Spark ecosystem
- What's new in Spark 2.x?
- Why use Java for Spark?
- Lambda expressions
- Lexical scoping
- Intermediate operations
- Terminal operations
Let Us Spark
- Getting started with Spark
- Spark REPL also known as CLI
- Some basic exercises using Spark shell
- Spark components
- Spark Driver Web UI
- Spark job configuration and submission
- Spark REST APIs
Understanding the Spark Programming Model
- Hello Spark
- Common RDD transformations
- Common RDD actions
- RDD persistence and cache
Working with Data and Storage
- Interaction with external storage systems
- Working with different data formats
Spark on Cluster
- Spark application in distributed mode
- Cluster managers
- Yet Another Resource Negotiator (YARN)
Spark Programming Model - Advanced
- RDD partitioning
- Advanced transformations
- Advanced actions
- Shared variable
- Broadcast variable
Working with Spark SQL
- SQLContext and HiveContext
- Dataframe and dataset
- Spark SQL operations
- Hive integration
Near Real-Time Processing with Spark Streaming
- Introducing Spark Streaming
- Understanding micro batching
- Streaming sources
- Streaming transformations
- Fault tolerance and reliability
- Structured Streaming
Machine Learning Analytics with Spark MLlib
Learning Spark GraphX
- Introduction to GraphX
- Introduction to Property Graph
- Getting started with the GraphX API
- Graph operations
- Graph algorithms
- Title: Apache Spark 2.x for Java Developers
- Release date: July 2017
- Publisher(s): Packt Publishing
- ISBN: 9781787126497