Video description
If you are looking to expand your knowledge of data engineering, or want to level up your portfolio by adding Spark programming to your skillset, then you are in the right place. This course will help you understand Spark programming and apply that knowledge to build data engineering solutions. It is example-driven and follows a working-session-like approach: we take a live-coding approach and explain all the concepts needed along the way.
In this course, we will start with a quick introduction to Apache Spark and then set up our environment by installing and using Apache Spark. Next, we will learn about the Spark execution model and architecture, and the Spark programming model and developer experience. We will then cover the Spark structured API foundation before moving on to Spark data sources and sinks.
Then we will cover Spark Dataframe and Dataset transformations, followed by aggregations in Apache Spark, and finally, Spark Dataframe joins.
By the end of this course, you will be able to build data engineering solutions using Spark structured API in Python.
What You Will Learn
- Learn Apache Spark Foundation and Spark architecture
- Learn data engineering and data processing in Spark
- Work with data sources and sinks
- Use PyCharm IDE for Spark development and debugging
- Learn unit testing, managing application logs, and cluster deployment
Audience
This course is designed for software engineers who want to develop data engineering pipelines and applications using Apache Spark; for data architects and data engineers responsible for designing and building the organization's data-centric infrastructure; and for managers and architects who do not work directly with Spark implementation but work with the people who implement Apache Spark at the ground level.
This course does not require any prior knowledge of Apache Spark or Hadoop; only programming knowledge of Python is required.
About The Author
ScholarNest is a small team of people passionate about helping others learn and grow in their careers by bridging the gap between their existing and required skills.
Together, they have more than 40 years of experience in IT as developers, architects, consultants, trainers, and mentors. They have worked with international software services organizations on various data-centric and Big Data projects.
They are firm believers in lifelong, continuous learning and skill development. To popularize the importance of continuous learning, they started publishing free training videos on their YouTube channel, creating a journal of their learning under the Learning Journal banner.
Table of contents
- Chapter 1 : Apache Spark Introduction
- Chapter 2 : Installing and Using Apache Spark
- Spark Development Environments
- Mac Users - Apache Spark in Local Mode Command Line REPL
- Windows Users - Apache Spark in Local Mode Command Line REPL
- Mac Users - Apache Spark in the IDE - PyCharm
- Windows Users - Apache Spark in the IDE - PyCharm
- Apache Spark in Cloud - Databricks Community and Notebooks
- Apache Spark in Anaconda - Jupyter Notebook
- Chapter 3 : Spark Execution Model and Architecture
- Execution Methods - How to Run Spark Programs?
- Spark Distributed Processing Model - How Your Program Runs?
- Spark Execution Modes and Cluster Managers
- Summarizing Spark Execution Models - When to Use What?
- Working with PySpark Shell - Demo
- Installing Multi-Node Spark Cluster - Demo
- Working with Notebooks in Cluster - Demo
- Working with Spark Submit - Demo
- Section Summary
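The "Working with Spark Submit - Demo" lecture centers on the `spark-submit` tool used to launch applications on a cluster. A typical invocation looks roughly like the following CLI fragment; the master, resource settings, and file name are illustrative assumptions, not the course's exact commands:

```shell
# Submit a PySpark application to a YARN cluster (all values are illustrative)
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 2 \
  --executor-memory 2G \
  hello_spark.py
```

The `--master` and `--deploy-mode` options map directly onto the execution modes and cluster managers discussed earlier in this chapter.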
- Chapter 4 : Spark Programming Model and Developer Experience
- Creating Spark Project Build Configuration
- Configuring Spark Project Application Logs
- Creating Spark Session
- Configuring Spark Session
- Data Frame Introduction
- Data Frame Partitions and Executors
- Spark Transformations and Actions
- Spark Jobs, Stages, and Tasks
- Understanding your Execution Plan
- Unit Testing Spark Application
- Rounding off Summary
- Chapter 5 : Spark Structured API Foundation
- Chapter 6 : Spark Data Sources and Sinks
- Chapter 7 : Spark Dataframe and Dataset Transformations
- Chapter 8 : Aggregations in Apache Spark
- Chapter 9 : Spark Dataframe Joins
- Chapter 10 : Keep Learning
Product information
- Title: Spark Programming in Python for Beginners with Apache Spark 3
- Author(s): ScholarNest
- Release date: February 2022
- Publisher(s): Packt Publishing
- ISBN: 9781803246161