Apache Spark with Scala - Learn Spark from a Big Data Guru

Video description

This course covers all the fundamentals of Apache Spark with Scala and teaches you everything you need to know about developing Spark applications. By the end of this course, you will have gained in-depth knowledge of Apache Spark and general big data analysis and manipulation skills that will help your company adopt Apache Spark for building a big data processing pipeline and data analytics applications. The course works through more than ten hands-on big data examples and teaches you how to frame data analysis problems as Spark problems. Together we will aggregate NASA Apache web logs from different sources; explore the price trend in California real estate data; write Spark applications to find the median salary of developers in different countries from the Stack Overflow survey data; develop a system to analyze how maker spaces are distributed across different regions in the United Kingdom; and much, much more. This course is taught in Scala, a functional programming language that is growing in popularity and is one of the most widely used languages in the industry for writing Spark programs. Let's learn how to write Spark programs with Scala to model big data problems today!

What You Will Learn

  • Gain an overview of the architecture of Apache Spark.
  • Work with Apache Spark's primary abstraction, resilient distributed datasets (RDDs), to process and analyze large data sets.
  • Develop Apache Spark 2.0 applications using RDD transformations and actions and Spark SQL.
  • Scale up Spark applications on a Hadoop YARN cluster through Amazon's Elastic MapReduce service.
  • Analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL.
  • Share information across the nodes of an Apache Spark cluster using broadcast variables and accumulators.
  • Apply advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
  • Follow best practices for working with Apache Spark in the field.
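As a taste of the RDD transformations and actions listed above, here is a minimal word-count-style sketch in Scala. It uses the standard Spark Core API (`SparkConf`, `SparkContext`); the input file path is hypothetical, and in the course this kind of job is later scaled up to a YARN cluster on Amazon EMR.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Run locally using all available cores.
    val conf = new SparkConf().setAppName("wordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Transformations are lazy; nothing executes until an action is called.
    val lines  = sc.textFile("in/word_count.text")     // hypothetical input path
    val words  = lines.flatMap(_.split(" "))           // flatMap transformation
    val counts = words.map((_, 1)).reduceByKey(_ + _)  // pair RDD aggregation

    // collect() is an action: it triggers the job and returns results to the driver.
    counts.collect().foreach { case (word, n) => println(s"$word : $n") }

    sc.stop()
  }
}
```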


Audience

  • Anyone who wants to fully understand how Apache Spark works and how it is used in the field.
  • Software engineers who want to develop Apache Spark 2.0 applications using Spark Core and Spark SQL.
  • Data scientists or data engineers who want to advance their careers by improving their big data processing skills.

About The Author

James Lee: James Lee is a passionate software wizard working at one of the top Silicon Valley-based start-ups specializing in big data analysis. He has also worked at Google and Amazon. In his day job, he works with big data technologies, including Cassandra and Elasticsearch, and is an absolute Docker geek and IntelliJ IDEA lover. Apart from his career as a software engineer, he is keen on sharing his knowledge with others and guiding them, especially in relation to start-ups and programming. He has been teaching courses and conducting workshops on Java programming / IntelliJ IDEA since he was 21. James also enjoys skiing and swimming, and is a passionate traveler.

Publisher resources

Download Example Code

Table of contents

  1. Chapter 1 : Get Started with Apache Spark
    1. Course Overview
    2. Introduction to Spark
    3. Install Java and Git
    4. Set up Spark project with IntelliJ IDEA
    5. Run our first Apache Spark job
    6. Troubleshooting: Run our first Apache Spark job
  2. Chapter 2 : RDD
    1. RDD Basics in Apache Spark
    2. Create RDDs
    3. Map and Filter Transformation in Apache Spark
    4. Solution to Airports by Latitude Problem
    5. FlatMap Transformation in Apache Spark
    6. Set Operation in Apache Spark
    7. Solution for the Same Hosts Problem
    8. Actions in Apache Spark
    9. Solution to Sum of Numbers Problem
    10. Important Aspects about RDD
    11. Summary of RDD Operations in Apache Spark
    12. Caching and Persistence in Apache Spark
  3. Chapter 3 : Spark Architecture and Components
    1. Spark Architecture
    2. Spark Components
  4. Chapter 4 : Pair RDD in Apache Spark
    1. Introduction to Pair RDD in Spark
    2. Create Pair RDDs in Spark
    3. Filter and MapValues Transformations on Pair RDD
    4. Reduce By Key Aggregation in Apache Spark
    5. Sample solution for the Average House problem
    6. GroupBy Key Transformation in Spark
    7. SortBy Key Transformation in Spark
    8. Sample Solution for the Sorted Word Count Problem
    9. Data Partitioning in Apache Spark
    10. Join Operations in Spark
  5. Chapter 5 : Advanced Spark Topic
    1. Accumulators
    2. Solution to StackOverflow Survey Follow-up Problem
    3. Broadcast Variables
  6. Chapter 6 : Apache Spark SQL
    1. Introduction to Apache Spark SQL
    2. Spark SQL in Action
    3. Spark SQL practice: House Price Problem
    4. Spark SQL Joins
    5. Strongly Typed Dataset
    6. Use Dataset or RDD
    7. Dataset and RDD Conversion
    8. Performance Tuning of Spark SQL
  7. Chapter 7 : Running Spark in a Cluster
    1. Introduction to Running Spark in a Cluster
    2. Package Spark Application and Use spark-submit
    3. Run Spark Application on Amazon EMR (Elastic MapReduce) cluster
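Chapter 5 above covers sharing information across a cluster with accumulators and broadcast variables. The sketch below illustrates both using the standard Spark 2.0 Core API (`sc.broadcast`, `sc.longAccumulator`); the lookup map and sample codes are invented for illustration.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SharedVariables {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("sharedVariables").setMaster("local[*]"))

    // Broadcast variable: ship a read-only lookup table to every executor once.
    val countryByCode =
      sc.broadcast(Map("US" -> "United States", "UK" -> "United Kingdom"))

    // Accumulator: a counter executors write to and the driver reads after an action.
    val missing = sc.longAccumulator("missing codes")

    val codes = sc.parallelize(Seq("US", "UK", "??"))
    val names = codes.map { code =>
      countryByCode.value.getOrElse(code, { missing.add(1); "unknown" })
    }

    names.collect().foreach(println)
    println(s"codes without a match: ${missing.value}")
    sc.stop()
  }
}
```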

Product information

  • Title: Apache Spark with Scala - Learn Spark from a Big Data Guru
  • Author(s): James Lee, Tao W
  • Release date: April 2018
  • Publisher(s): Packt Publishing
  • ISBN: 9781789134537