Building Spark Applications

Video description

13+ Hours of Video Instruction


Building Spark Applications LiveLessons provides data scientists and developers with a practical introduction to the Apache Spark framework using Python, R, and SQL.  Additionally, it covers best practices for developing scalable Spark applications for predictive analytics in the context of a data scientist's standard workflow.


In this video training, Jonathan starts off with a brief history of Spark itself and shows you how to get started programming in a Spark environment on a laptop.  Taking an application- and code-first approach, he then covers the various APIs in Python, R, and SQL to show how Spark makes large-scale data analysis much more accessible through languages familiar to data scientists and analysts alike.  With the basics covered, the videos move into a real-world case study showing you how to explore data, process text, and build models with Spark.  Throughout the process, Jonathan exposes the internals of the Spark framework itself to show you how to write better application code, optimize performance, and set up a cluster to fully leverage the distributed nature of Spark.  After watching these videos, data scientists and developers will feel confident building an end-to-end application with Spark to perform machine learning and analyze data at scale!

About the Instructor

Jonathan Dinu is the founder of Zipfian Academy, an advanced immersive training program for data scientists and data engineers in San Francisco, and served as its CAO/CTO before it was acquired by Galvanize, where he is now VP of Academic Excellence.  He first discovered his love of all things data while studying Computer Science and Physics at UC Berkeley, and in a former life he worked for Alpine Data Labs developing distributed machine learning algorithms for predictive analytics on Hadoop.

Jonathan is a dedicated educator, author, and speaker with a passion for sharing the things he has learned in the most creative ways he can.  He has run data science workshops at Strata and PyData (among others), built a Data Visualization course with Udacity, and served on the UC Berkeley Extension Data Science Advisory Board.  Currently he is writing a book on practical Data Science applications using Python.  When he is not working with students you can find him blogging about data, visualization, and education at

Skill Level

  • Beginning/Intermediate

What You Will Learn

  • How to install and set up a Spark environment locally and on a cluster
  • The differences between and the strengths of the Python, R, and SQL programming interfaces
  • How to build a machine learning model for text
  • Common data science use cases that Spark is especially well-suited to solve
  • How to tune a Spark application for performance
  • The internals of the Spark framework and its execution model
  • How to use Spark in a data science application workflow
  • The basics of the larger Spark ecosystem

Who Should Take This Course

  • Practicing data scientists who already use Python or R and want to learn how to scale up their analyses with Spark
  • Data engineers who already use Java or Scala for Spark but want to learn about the Python, R, and SQL APIs and understand how Spark can be used to solve data science problems

Course Requirements

  • Basic understanding of programming
  • Familiarity with the data science process and machine learning is a plus

Lesson Descriptions

Lesson 1: Introduction to the Spark Environment

Lesson 1, “Introduction to the Spark Environment,” introduces Spark and provides context for the history and motivation for the framework.  This lesson covers how to install and set up Spark locally, work with the Spark REPL and Jupyter notebook, and the basics of programming with Spark.
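The key-value MapReduce pattern this lesson builds toward can be sketched without a Spark installation. The helpers below are hypothetical plain-Python stand-ins for the RDD methods `flatMap` and `reduceByKey`, applied to the classic word-count job; the sample lines are illustrative, not from the course.

```python
from collections import defaultdict
from functools import reduce

def flat_map(func, records):
    """Stand-in for RDD.flatMap: apply func to each record and flatten."""
    return [item for record in records for item in func(record)]

def reduce_by_key(func, pairs):
    """Stand-in for RDD.reduceByKey: combine all values sharing a key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: reduce(func, values) for key, values in grouped.items()}

lines = ["spark makes big data simple", "big data big insights"]
words = flat_map(lambda line: line.split(), lines)   # transformation
pairs = [(word, 1) for word in words]                # map to key-value pairs
counts = reduce_by_key(lambda a, b: a + b, pairs)    # aggregate per key
print(counts["big"])  # 3
```

In PySpark the same job is a chain of `flatMap`, `map`, and `reduceByKey` calls on an RDD, with the work distributed across the cluster instead of run in a local loop.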

Lesson 2: Spark Programming APIs

Lesson 2, “Spark Programming APIs,” covers each of the various Spark programming interfaces. This lesson highlights the differences between and the tradeoffs of the Python (PySpark), R (SparkR), and SQL (Spark SQL and DataFrames) APIs as well as typical workflows for which each is best suited.
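The tradeoff between the imperative and the declarative interfaces can be illustrated with a toy flight-delay aggregation, the kind of analysis this lesson works through. The records and values below are made up, and `sqlite3` stands in for Spark SQL here purely so the two query styles can be compared side by side without a Spark cluster.

```python
import sqlite3

# Toy flight-delay records: (carrier, delay in minutes); illustrative only.
flights = [("AA", 12), ("AA", 30), ("DL", 5), ("DL", 50), ("UA", 0)]

# Imperative style (the shape of a PySpark RDD workflow): spell out the steps.
delays = {}
for carrier, delay in flights:
    delays.setdefault(carrier, []).append(delay)
avg_by_carrier = {c: sum(d) / len(d) for c, d in delays.items()}

# Declarative style (the shape of a Spark SQL workflow): state the result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (carrier TEXT, delay REAL)")
conn.executemany("INSERT INTO flights VALUES (?, ?)", flights)
sql_avg = dict(conn.execute(
    "SELECT carrier, AVG(delay) FROM flights GROUP BY carrier"
).fetchall())

print(avg_by_carrier["AA"], sql_avg["AA"])  # 21.0 21.0
```

Both produce the same averages; the lesson's point is choosing the interface that best fits the task and the team.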

Lesson 3: Your First Spark Application

Lesson 3, “Your First Spark Application,” walks you through a case study with data showing how Spark fits into the typical data science workflow.  This lesson covers how to perform exploratory data analysis at scale, apply natural language processing techniques, and write an implementation of the k-means algorithm for unsupervised learning on text data.
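The k-means implementation the lesson builds can be previewed in miniature. This is a plain-Python sketch of Lloyd's algorithm on 2-D points, with made-up data; the course implements the same assign-then-update loop using Spark transformations so it scales to large datasets.

```python
import random

def kmeans(points, k, iterations=10, seed=0):
    """Lloyd's k-means on 2-D points, pure Python, fixed iteration count."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                              + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids

# Two obvious blobs; k-means should recover one center per blob.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = sorted(kmeans(points, k=2))
print(centers)
```

On text data the points are tf-idf vectors rather than 2-D pairs, but the algorithm is unchanged.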

Lesson 4: Spark Internals

Lesson 4, “Spark Internals,” peels back the layers of the framework and walks you through how Spark executes code in a distributed fashion.  This lesson starts with a primer on distributed systems theory before diving into the Spark execution context, the details of RDDs, and how to run Spark in cluster mode on Amazon EC2.  The lesson finishes with best practices for monitoring and tuning the performance of a Spark application.
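Two of the internals this lesson covers, lazy lineage and caching, can be made concrete with a toy class. `LazySeq` below is an illustrative stand-in for an RDD, not Spark's API: each transformation records how to rebuild its data, nothing runs until a `collect()` action, and `cache()` keeps a computed result so later actions reuse it instead of re-running the lineage.

```python
class LazySeq:
    """Toy RDD stand-in: records lineage, computes only on collect()."""
    def __init__(self, compute):
        self._compute = compute      # closure that rebuilds data from lineage
        self._cached = None

    @classmethod
    def parallelize(cls, data):
        return cls(lambda: list(data))

    def map(self, func):             # lazy transformation
        return LazySeq(lambda: [func(x) for x in self._compute()])

    def filter(self, pred):          # lazy transformation
        return LazySeq(lambda: [x for x in self._compute() if pred(x)])

    def cache(self):                 # memoize this node of the lineage
        original = self._compute
        def cached_compute():
            if self._cached is None:
                self._cached = original()
            return self._cached
        self._compute = cached_compute
        return self

    def collect(self):               # the action that triggers work
        return self._compute()

calls = []
doubled = LazySeq.parallelize(range(5)) \
                 .map(lambda x: (calls.append(x), x * 2)[1]) \
                 .cache()
evens = doubled.filter(lambda x: x % 4 == 0)
odds = doubled.filter(lambda x: x % 4 == 2)
print(evens.collect(), odds.collect())
print(len(calls))  # map ran once per element (5), not once per action
```

In real Spark the same choice matters at scale: caching an RDD that feeds several actions avoids recomputing its whole lineage each time.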

Lesson 5: Advanced Applications

Lesson 5, “Advanced Applications,” takes you through a KDD Cup competition, showing you how to leverage Spark’s higher-level machine learning libraries (MLlib and spark.ml).  The lesson covers the basics of machine learning theory, shows you how to evaluate the performance of models through cross validation, and demonstrates how to build a machine learning pipeline with Spark.  The lesson finishes by showing you how to serialize and deploy models for use in a production setting.
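The mechanics of the cross validation used in this lesson are easy to sketch. The hypothetical helper below yields k disjoint (train, validation) index splits in plain Python; in the course the splitting, fitting, and scoring are handled by Spark's machine learning libraries rather than by hand.

```python
def k_fold_indices(n, k):
    """Yield (train, validation) index lists for k-fold cross validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        validation = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, validation
        start += size

folds = list(k_fold_indices(n=10, k=5))
for train, validation in folds:
    # Fit a model on `train`, score it on `validation`, then average the
    # k scores to estimate generalization error.
    assert len(train) == 8 and len(validation) == 2
print(len(folds))  # 5
```

Grid search repeats this procedure for each candidate hyperparameter setting and keeps the setting with the best averaged score.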

About LiveLessons Video Training

The LiveLessons Video Training series publishes hundreds of hands-on, expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. This professional and personal technology video series features world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, IBM Press, Pearson IT Certification, Prentice Hall, Sams, and Que. Topics include: IT Certification, Programming, Web Development, Mobile Development, Home and Office Technologies, Business and Management, and more.  View all LiveLessons on InformIT at:

Table of contents

  1. Introduction
    1. Building Spark Applications LiveLessons: Introduction 00:05:03
  2. Lesson 1: Introduction to the Spark Environment
    1. Topics 00:00:49
    2. 1.1 Getting the Materials 00:02:40
    3. 1.2 A Brief Historical Diversion 00:07:17
    4. 1.3 Origins of the Framework 00:07:23
    5. 1.4 Why Spark? 00:19:12
    6. 1.5 Getting Set Up: Spark and Java 00:09:48
    7. 1.6 Getting Set Up: Scientific Python 00:05:08
    8. 1.7 Getting Set Up: R Kernel for Jupyter 00:09:11
    9. 1.8 Your First PySpark Job 00:18:04
    10. 1.9 Introduction to RDDs: Functions, Transformations, and Actions 00:23:06
    11. 1.10 MapReduce with Spark: Programming with Key-Value Pairs 00:17:16
  3. Lesson 2: Spark Programming APIs
    1. Topics 00:01:02
    2. 2.1 Introduction to the Spark Programming APIs 00:10:53
    3. 2.2 PySpark: Loading and Importing Data 00:19:31
    4. 2.3 PySpark: Parsing and Transforming Data 00:09:41
    5. 2.4 PySpark: Analyzing Flight Delays 00:20:52
    6. 2.5 SparkR: Introduction to DataFrames 00:20:33
    7. 2.6 SparkR: Aggregations and Analysis 00:08:33
    8. 2.7 SparkR: Visualizing Data with ggplot2 00:09:41
    9. 2.8 Why (Spark) SQL? 00:03:42
    10. 2.9 Spark SQL: Adding Structure to Your Data 00:31:47
    11. 2.10 Spark SQL: Integration into Existing Workflows 00:04:42
  4. Lesson 3: Your First Spark Application
    1. Topics 00:01:10
    2. 3.1 How Spark Fits into the Data Science Process 00:14:29
    3. 3.2 Introduction to Exploratory Data Analysis 00:10:09
    4. 3.3 Case Study: 00:17:40
    5. 3.4 Data Quality Checks with Accumulators 00:18:49
    6. 3.5 Making Sense of Data: Summary Statistics and Distributions 00:14:51
    7. 3.6 Working with Text: Introduction to NLP 00:07:43
    8. 3.7 Tokenization and Vectorization with Spark 00:17:53
    9. 3.8 Summarization with tf-idf 00:20:17
    10. 3.9 Introduction to Machine Learning 00:20:47
    11. 3.10 Unsupervised Learning with Spark: Implementing k-means 00:24:04
    12. 3.11 Testing k-means with Essays 00:09:15
    13. 3.12 Challenges of k-means: Latent Features, Interpretation, and Validation 00:21:38
  5. Lesson 4: Spark Internals
    1. Topics 00:00:55
    2. 4.1 Introduction to Distributed Systems 00:15:56
    3. 4.2 Building Systems That Scale 00:11:37
    4. 4.3 The Spark Execution Context 00:10:08
    5. 4.4 RDD Deep Dive: Dependencies and Lineage 00:11:49
    6. 4.5 A Day in the Life of a Spark Application 00:14:01
    7. 4.6 How Code Runs: Stages, Tasks, and the Shuffle 00:13:21
    8. 4.7 Spark Deployment: Local and Cluster Modes 00:20:50
    9. 4.8 Setting Up Your Own Cluster 00:22:36
    10. 4.9 Spark Performance: Monitoring and Optimization 00:09:25
    11. 4.10 Tuning Your Spark Application 00:20:08
    12. 4.11 Making Spark Fly: Parallelism 00:07:34
    13. 4.12 Making Spark Fly: Caching 00:13:05
  6. Lesson 5: Advanced Applications
    1. Topics 00:00:53
    2. 5.1 Machine Learning on Spark: MLlib and spark.ml 00:13:40
    3. 5.2 The KDD Cup Competition: Preparing Data and Imputing Values 00:22:43
    4. 5.3 Introduction to Supervised Learning: Logistic Regression 00:17:36
    5. 5.4 Building a Model with MLlib 00:13:09
    6. 5.5 Model Evaluation and Metrics 00:14:41
    7. 5.6 Leveraging scikit-learn to Evaluate MLlib Models 00:21:37
    8. 5.7 Training Models with spark.ml 00:16:07
    9. 5.8 Machine Learning Pipelines with spark.ml 00:11:03
    10. 5.9 Tuning Models: Features, Cross Validation, and Grid Search 00:13:43
    11. 5.10 Serializing and Deploying Models 00:08:22
  7. Summary
    1. Building Spark Applications LiveLessons: Summary 00:08:06

Product information

  • Title: Building Spark Applications
  • Author(s): Jonathan Dinu
  • Release date: November 2015
  • Publisher(s): Addison-Wesley Professional
  • ISBN: 013439349X