Book description
Harness the power of Scala to program Spark and analyze tonnes of data in the blink of an eye!
About This Book
- Learn Scala’s sophisticated type system, which combines functional programming and object-oriented concepts
- Work on a wide array of applications, from simple batch jobs to stream processing and machine learning
- Explore the most common as well as some complex use-cases to perform large-scale data analysis with Spark
Who This Book Is For
Anyone who wishes to learn how to perform data analysis by harnessing the power of Spark will find this book extremely useful. No knowledge of Spark or Scala is assumed, although prior programming experience (especially with other JVM languages) will be useful to pick up concepts quicker.
What You Will Learn
- Understand the object-oriented and functional programming concepts of Scala
- Gain an in-depth understanding of the Scala collection APIs
- Work with RDDs and DataFrames to learn Spark’s core abstractions (see the sketch after this list)
- Analyze structured and unstructured data using SparkSQL and GraphX
- Develop scalable and fault-tolerant streaming applications using Spark structured streaming
- Learn machine learning best practices for classification, regression, dimensionality reduction, and recommendation systems to build predictive models with widely used algorithms in Spark MLlib and ML
- Build clustering models to cluster vast amounts of data
- Understand tuning, debugging, and monitoring of Spark applications
- Deploy Spark applications on real clusters in Standalone, Mesos, and YARN modes
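As a taste of the two core abstractions mentioned above, here is a minimal, illustrative sketch (not code from the book) that builds the same small dataset as an RDD and as a DataFrame; the Person case class and the local master are assumptions made for the example.

```scala
// Minimal sketch of Spark's core abstractions: an RDD and a DataFrame
// over the same data. The Person case class and local[*] master are
// illustrative assumptions, not code from the book.
import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Int)

object CoreAbstractions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CoreAbstractions")
      .master("local[*]")              // local mode, handy for experimenting
      .getOrCreate()
    import spark.implicits._

    // RDD: the low-level, resilient distributed collection
    val rdd = spark.sparkContext.parallelize(
      Seq(Person("Alice", 29), Person("Bob", 17), Person("Carol", 35)))
    val adultCount = rdd.filter(_.age >= 18).count()
    println(s"Adults (RDD): $adultCount")

    // DataFrame: the structured, optimizer-friendly view of the same data
    val df = rdd.toDF()
    df.filter($"age" >= 18).show()

    spark.stop()
  }
}
```

The RDD API gives you fine-grained, functional control over distributed collections, while the DataFrame API lets Spark’s Catalyst optimizer plan the work for you.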
In Detail
Scala has seen wide adoption over the past few years, especially in the field of data science and analytics. Spark, which is built on Scala, has gained broad recognition and is widely used in production. So, if you want to leverage the power of Scala and Spark to make sense of big data, this book is for you.
The first part introduces you to Scala, helping you understand the object-oriented and functional programming concepts needed for Spark application development. It then moves on to Spark and covers its core abstractions, RDDs and DataFrames. Building on these, it shows you how to analyze structured and unstructured data with SparkSQL and GraphX, and how to develop scalable and fault-tolerant streaming applications with Spark structured streaming. Finally, the book moves on to advanced topics such as monitoring, configuration, debugging, testing, and deployment.
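To give a flavour of the streaming material, here is a minimal, illustrative Structured Streaming sketch (not code from the book); the socket source on localhost:9999 is an assumption chosen for simplicity, and any supported source such as Kafka or files would work the same way.

```scala
// Minimal Structured Streaming sketch: a running word count over lines
// arriving on a TCP socket. The socket source and port 9999 are
// illustrative assumptions, not code from the book.
import org.apache.spark.sql.SparkSession

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("StreamingWordCount")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Read an unbounded stream of lines from a socket
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Split lines into words and keep a running count per word
    val wordCounts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    // Continuously print the updated counts to the console
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```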
You will also learn how to develop Spark applications using the SparkR and PySpark APIs, perform interactive data analytics using Zeppelin, and process data in memory with Alluxio.
By the end of this book, you will have a thorough understanding of Spark and will be able to perform full-stack data analytics with the confidence that no amount of data is too big.
Style and approach
Filled with practical examples and use cases, this book will not only help you get up and running with Spark, but will also take you further down the road to becoming a data scientist.
Table of contents
- Preface
- Introduction to Scala
- Object-Oriented Scala
- Functional Programming Concepts
- Collection APIs
- Tackle Big Data – Spark Comes to the Party
- Start Working with Spark – REPL and RDDs
- Special RDD Operations
- Introduce a Little Structure - Spark SQL
- Stream Me Up, Scotty - Spark Streaming
- Everything is Connected - GraphX
- Learning Machine Learning - Spark MLlib and Spark ML
- Introduction to machine learning
- Spark machine learning APIs
- Feature extraction and transformation
- Creating a simple pipeline
- Unsupervised machine learning
- Binary and multiclass classification
- Summary
- My Name is Bayes, Naive Bayes
- Time to Put Some Order - Cluster Your Data with Spark MLlib
- Text Analytics Using Spark ML
- Spark Tuning
- Time to Go to ClusterLand - Deploying Spark on a Cluster
- Spark architecture in a cluster
- Deploying the Spark application on a cluster
- Submitting Spark jobs
- Hadoop YARN
- Configuring a single-node YARN cluster
- Step 1: Downloading Apache Hadoop
- Step 2: Setting the JAVA_HOME
- Step 3: Creating users and groups
- Step 4: Creating data and log directories
- Step 5: Configuring core-site.xml
- Step 6: Configuring hdfs-site.xml
- Step 7: Configuring mapred-site.xml
- Step 8: Configuring yarn-site.xml
- Step 9: Setting Java heap space
- Step 10: Formatting HDFS
- Step 11: Starting the HDFS
- Step 12: Starting YARN
- Step 13: Verifying on the web UI
- Submitting Spark jobs on YARN cluster
- Advanced job submissions in a YARN cluster
- Apache Mesos
- Deploying on AWS
- Summary
- Testing and Debugging Spark
- PySpark and SparkR
Product information
- Title: Scala and Spark for Big Data Analytics
- Author(s):
- Release date: July 2017
- Publisher(s): Packt Publishing
- ISBN: 9781785280849