This course covers the fundamentals of Apache Spark with Scala and teaches you everything you need to know about developing Spark applications in Scala. By the end of this course, you will gain in-depth knowledge of Apache Spark and general big data analysis and manipulation skills that will help your company adopt Apache Spark for building big data processing pipelines and data analytics applications. This course includes 10+ hands-on big data examples and teaches you how to frame data analysis problems as Spark problems. Together we will work through examples such as aggregating NASA Apache web logs from different sources; exploring price trends in California real estate data; writing Spark applications to find the median salary of developers in different countries using Stack Overflow survey data; and building a system to analyze how maker spaces are distributed across regions of the United Kingdom, and much more. This course is taught in Scala. Scala is a next-generation language for functional programming that is growing in popularity, and it is one of the most widely used languages in industry for writing Spark programs. Let's learn how to write Spark programs with Scala to model big data problems today!
What You Will Learn
- An overview of the architecture of Apache Spark.
- Work with Apache Spark's primary abstraction, Resilient Distributed Datasets (RDDs), to process and analyze large data sets.
- Develop Apache Spark 2.0 applications using RDD transformations and actions and Spark SQL.
- Scale up Spark applications on a Hadoop YARN cluster through Amazon's Elastic MapReduce (EMR) service.
- Analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL.
- Share information across different nodes of an Apache Spark cluster using broadcast variables and accumulators.
- Apply advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
- Best practices of working with Apache Spark in the field.
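To give a flavor of the RDD transformations and actions listed above, here is a minimal sketch in Spark Scala. The `SparkSession` API is from Spark itself, but the object name, data, and local master setting are purely illustrative, not taken from the course:

```scala
import org.apache.spark.sql.SparkSession

object RddSketch {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for illustration only; a production job would
    // typically be submitted to a YARN or EMR cluster instead of local[*].
    val spark = SparkSession.builder()
      .appName("rdd-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Transformations (filter, map) are lazy; nothing executes yet.
    val numbers = sc.parallelize(1 to 10)
    val evenSquares = numbers.filter(_ % 2 == 0).map(n => n * n)

    // An action (reduce) triggers the actual distributed computation.
    val sum = evenSquares.reduce(_ + _)
    println(s"Sum of even squares: $sum") // 4 + 16 + 36 + 64 + 100 = 220

    spark.stop()
  }
}
```

The lazy-transformation / eager-action split is the core execution model the course's RDD chapters build on.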
Who This Course Is For
Anyone who wants to fully understand how Apache Spark works and how it is used in the field. Software engineers who want to develop Apache Spark 2.0 applications using Spark Core and Spark SQL. Data scientists or data engineers who want to advance their careers by improving their big data processing skills.
About The Author
James Lee: James Lee is a passionate software wizard working at one of the top Silicon Valley-based start-ups specializing in big data analysis. He has also worked at Google and Amazon. In his day job, he works with big data technologies, including Cassandra and Elasticsearch, and is an absolute Docker geek and IntelliJ IDEA lover. Apart from his career as a software engineer, he is keen on sharing his knowledge with others and guiding them, especially in relation to start-ups and programming. He has been teaching courses and conducting workshops on Java programming / IntelliJ IDEA since he was 21. James also enjoys skiing and swimming, and is a passionate traveler.
Table of contents
- Chapter 1 : Get Started with Apache Spark
- Chapter 2 : RDD
- RDD Basics in Apache Spark
- Create RDDs
- Map and Filter Transformation in Apache Spark
- Solution to Airports by Latitude Problem
- FlatMap Transformation in Apache Spark
- Set Operation in Apache Spark
- Solution for the Same Hosts Problem
- Actions in Apache Spark
- Solution to Sum of Numbers Problem
- Important Aspects about RDD
- Summary of RDD Operations in Apache Spark
- Caching and Persistence in Apache Spark
- Chapter 3 : Spark Architecture and Components
- Chapter 4 : Pair RDD in Apache Spark
- Introduction to Pair RDD in Spark
- Create Pair RDDs in Spark
- Filter and MapValue Transformations on Pair RDD
- Reduce By Key Aggregation in Apache Spark
- Sample solution for the Average House problem
- GroupBy Key Transformation in Spark
- SortBy Key Transformation in Spark
- Sample Solution for the Sorted Word Count Problem
- Data Partitioning in Apache Spark
- Join Operations in Spark
- Chapter 5 : Advanced Spark Topic
- Chapter 6 : Apache Spark SQL
- Chapter 7 : Running Spark in a Cluster
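As a taste of the pair-RDD material covered in Chapter 4, the classic average-by-key pattern combines `mapValues` and `reduceByKey`. This is a hedged sketch, not the course's own solution; the sample data and object name are invented:

```scala
import org.apache.spark.sql.SparkSession

object AvgByKeySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("avg-by-key")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // (country, salary) pairs — invented sample data.
    val salaries = sc.parallelize(Seq(
      ("UK", 50000.0), ("UK", 70000.0),
      ("US", 90000.0), ("US", 110000.0)
    ))

    // Carry (sum, count) through reduceByKey, then divide in mapValues.
    // reduceByKey combines values per key locally before shuffling,
    // which is why it scales better than groupByKey for aggregations.
    val averages = salaries
      .mapValues(s => (s, 1))
      .reduceByKey { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }
      .mapValues { case (sum, count) => sum / count }

    // Yields UK -> 60000.0 and US -> 100000.0 (collect order not guaranteed).
    averages.collect().foreach { case (country, avg) =>
      println(s"$country: $avg")
    }

    spark.stop()
  }
}
```

The same shape underlies the course's Average House Price and median-salary exercises: reduce a per-key aggregate, then post-process the values.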
- Title: Apache Spark with Scala - Learn Spark from a Big Data Guru
- Release date: April 2018
- Publisher(s): Packt Publishing
- ISBN: 9781789134537