Advanced analytics on your Big Data with the latest Apache Spark 2.x
About This Book
- An advanced guide combining instructions with practical examples to explore the most up-to-date Spark functionality.
- Extend your data processing capabilities to process huge chunks of data in minimal time using advanced concepts in Spark.
- Master the art of real-time processing with the help of Apache Spark 2.x.
Who This Book Is For
If you are a developer with some experience of Spark and want to deepen your knowledge of how to get around in the Spark world, then this book is ideal for you. Basic knowledge of Linux, Hadoop, and Spark is assumed; reasonable knowledge of Scala is expected.
What You Will Learn
- Examine advanced machine learning and deep learning with MLlib, SparkML, SystemML, H2O, and Deeplearning4j
- Study highly optimised unified batch and real-time data processing using SparkSQL and Structured Streaming
- Evaluate large-scale Graph Processing and Analysis using GraphX and GraphFrames
- Apply Apache Spark in Elastic deployments using Jupyter and Zeppelin Notebooks, Docker, Kubernetes and the IBM Cloud
- Understand internal details of the cost-based optimizers used in Catalyst, SystemML, and GraphFrames
- Learn how specific parameter settings affect overall performance of an Apache Spark cluster
- Leverage Scala, R, and Python for your data science projects
Apache Spark is an in-memory cluster-based parallel processing system that provides a wide range of functionalities such as graph processing, machine learning, stream processing, and SQL. This book aims to take your knowledge of Spark to the next level by teaching you how to expand Spark’s functionality and implement your data flows and machine/deep learning programs on top of the platform.
The book commences with an overview of the Spark ecosystem. It will introduce you to Project Tungsten and Catalyst, two of the major advancements of Apache Spark 2.x.
You will understand how memory management and binary processing, cache-aware computation, and code generation are used to speed things up dramatically. The book extends to show how to incorporate H2O, SystemML, and Deeplearning4j for machine learning, and Jupyter Notebooks and Kubernetes/Docker for cloud-based Spark. During the course of the book, you will learn about the latest enhancements to Apache Spark 2.x, such as interactive querying of live data and unifying DataFrames and Datasets.
You will also learn about the updates on the APIs and how DataFrames and Datasets affect SQL, machine learning, graph processing, and streaming. You will learn to use Spark as a big data operating system, understand how to implement advanced analytics on the new APIs, and explore how easy it is to use Spark in day-to-day tasks.
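The unified DataFrame and Dataset APIs mentioned above can be illustrated with a minimal Scala sketch. It assumes a local Spark 2.x installation on the classpath; the class names and sample data are illustrative, not taken from the book:

```scala
import org.apache.spark.sql.SparkSession

// A case class gives the Dataset a typed schema
case class Person(name: String, age: Int)

object QuickTaste {
  def main(args: Array[String]): Unit = {
    // Since Spark 2.x, SparkSession is the single entry point
    val spark = SparkSession.builder()
      .appName("QuickTaste")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // In Spark 2.x, a DataFrame is simply a Dataset[Row]
    val people = Seq(Person("Ann", 34), Person("Bob", 29)).toDS()

    // The same data can be queried with typed operators...
    people.filter(_.age > 30).show()

    // ...or with SQL over a temporary view; both paths go
    // through the Catalyst optimizer
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    spark.stop()
  }
}
```

Because the typed `filter` and the SQL query compile to the same optimized plan, you can mix both styles freely in day-to-day work.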
Style and approach
This book is an extensive guide to Apache Spark modules and tools and shows how Spark's functionality can be extended for real-time processing and storage with worked examples.
Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.
Table of contents
A First Taste and What’s New in Apache Spark V2
- Spark machine learning
- Spark Streaming
- Spark SQL
- Spark graph processing
- Extended ecosystem
- What's new in Apache Spark V2?
- Cluster design
- Cluster management
- Cloud-based deployments
Apache Spark SQL
- The SparkSession--your gateway to structured data processing
- Importing and saving data
- Understanding the DataSource API
- Using SQL
- Using Datasets
- User-defined functions
- RDDs versus DataFrames versus Datasets
The Catalyst Optimizer
- Understanding the workings of the Catalyst Optimizer
- Managing temporary views with the catalog API
- The SQL abstract syntax tree
- How to go from Unresolved Logical Execution Plan to Resolved Logical Execution Plan
- Code generation
Project Tungsten
- Memory management beyond the Java Virtual Machine Garbage Collector
- Cache-friendly layout of data in memory
- Code generation
Apache Spark Streaming
- The concept of continuous applications
- Increased performance with good old friends
- How transparent fault tolerance and exactly-once delivery guarantee is achieved
- Example - connection to an MQTT message broker
Apache Spark MLlib
- What does the new API look like?
- The concept of pipelines
- Model evaluation
- CrossValidation and hyperparameter tuning
- Winning a Kaggle competition with Apache SparkML
Apache SystemML
- Why do we need just another library?
- A cost-based optimizer for machine learning algorithms
- Performance measurements
- Apache SystemML in action
Deep Learning on Apache Spark with DeepLearning4j and H2O
- ND4J - high performance linear algebra for the JVM
- Example: an IoT real-time anomaly detector
- Deploying the test data generator
- Install the Deeplearning4j example within Eclipse
- Running the examples in Eclipse
- Run the examples in Apache Spark
Apache Spark GraphX
Apache Spark GraphFrames
Apache Spark with Jupyter Notebooks on IBM Data Science Experience
Apache Spark on Kubernetes
- Bare metal, virtual machines, and containers
- Understanding the core concepts of Docker
- Understanding Kubernetes
- Using Kubernetes for provisioning containerized Spark applications
- Example--Apache Spark on Kubernetes
- Title: Mastering Apache Spark 2.x - Second Edition
- Release date: July 2017
- Publisher(s): Packt Publishing
- ISBN: 9781786462749