Book description
Get started with distributed computing using PySpark, a single, unified framework for end-to-end data analytics at scale
Key Features
- Discover how to convert huge amounts of raw data into meaningful and actionable insights
- Use Spark's unified analytics engine for end-to-end analytics, from data preparation to predictive analytics
- Perform data ingestion, cleansing, and integration for ML, data analytics, and data visualization
Book Description
Apache Spark is a unified data analytics engine designed to process huge volumes of data quickly and efficiently. PySpark is Apache Spark's Python language API, which offers Python developers an easy-to-use scalable data analytics framework.
Essential PySpark for Scalable Data Analytics starts by exploring the distributed computing paradigm and provides a high-level overview of Apache Spark. You'll begin your analytics journey with the data engineering process, learning how to perform data ingestion, cleansing, and integration at scale. You'll build real-time analytics pipelines that deliver insights faster, then discover methods for building cloud-based data lakes and explore Delta Lake, which brings reliability to data lakes. The book also covers the data lakehouse, an emerging paradigm that combines the structure and performance of a data warehouse with the scalability of cloud-based data lakes. Later, you'll perform scalable data science and machine learning tasks using PySpark, such as data preparation, feature engineering, and model training and productionization. Finally, you'll learn ways to scale out standard Python ML libraries, along with a new pandas API on top of PySpark called Koalas.
By the end of this book, you'll be able to harness the power of PySpark to solve business problems.
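To give a flavor of the API the book is built around, here is a minimal, illustrative PySpark sketch (not taken from the book): it starts a local SparkSession, ingests a CSV file, and performs a simple cleanse-and-aggregate step. The file path and column names are hypothetical, chosen only for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session -- the entry point for all PySpark APIs.
spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

# Ingest a CSV file into a DataFrame (path and schema are hypothetical).
sales = spark.read.csv("/data/sales.csv", header=True, inferSchema=True)

# A simple cleansing-and-aggregation step of the kind the book scales out.
summary = (
    sales
    .dropna(subset=["amount"])           # basic cleansing: drop null amounts
    .groupBy("region")                   # aggregate by a grouping column
    .agg(F.sum("amount").alias("total_sales"))
)

summary.show()
spark.stop()
```

The same DataFrame code runs unchanged on a laptop or on a multi-node cluster, which is the scalability story the book develops.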
What you will learn
- Understand the role of distributed computing in the world of big data
- Gain an appreciation for Apache Spark as the de facto go-to framework for big data processing
- Scale out your data analytics process using Apache Spark
- Build data pipelines using data lakes, and perform data visualization with PySpark and Spark SQL
- Leverage the cloud to build truly scalable and real-time data analytics applications
- Explore the applications of data science and scalable machine learning with PySpark
- Integrate your clean and curated data with BI and SQL analysis tools (a brief Spark SQL sketch follows this list)
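As a taste of the Spark SQL and BI integration mentioned above, the following hypothetical sketch registers a small DataFrame as a temporary view and queries it with plain SQL; the view name, data, and column names are made up for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# A tiny in-memory DataFrame standing in for curated data in a data lake.
curated = spark.createDataFrame(
    [("north", 120.0), ("south", 75.5), ("north", 33.2)],
    ["region", "amount"],
)

# Expose the DataFrame as a temporary view, then query it with plain SQL --
# the same SQL interface that external BI tools reach via JDBC/ODBC.
curated.createOrReplaceTempView("sales")
spark.sql("""
    SELECT region, ROUND(SUM(amount), 2) AS total_sales
    FROM sales
    GROUP BY region
    ORDER BY total_sales DESC
""").show()

spark.stop()
```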
Who this book is for
This book is for practicing data engineers, data scientists, data analysts, and data enthusiasts who already use data analytics and want to explore distributed, scalable approaches to it. Basic to intermediate knowledge of data engineering, data science, and SQL analytics is expected. General proficiency in any programming language, especially Python, and working knowledge of performing data analytics with tools such as pandas and SQL will help you get the most out of this book.
Table of contents
- Essential PySpark for Scalable Data Analytics
- Contributors
- About the author
- About the reviewers
- Preface
- Section 1: Data Engineering
- Chapter 1: Distributed Computing Primer
- Chapter 2: Data Ingestion
- Technical requirements
- Introduction to Enterprise Decision Support Systems
- Ingesting data from data sources
- Ingesting data into data sinks
- Using file formats for data storage in data lakes
- Building data ingestion pipelines in batch and real time
- Unifying batch and real time using Lambda Architecture
- Summary
- Chapter 3: Data Cleansing and Integration
- Chapter 4: Real-Time Data Analytics
- Section 2: Data Science
- Chapter 5: Scalable Machine Learning with PySpark
- Chapter 6: Feature Engineering – Extraction, Transformation, and Selection
- Chapter 7: Supervised Machine Learning
- Chapter 8: Unsupervised Machine Learning
- Chapter 9: Machine Learning Life Cycle Management
- Chapter 10: Scaling Out Single-Node Machine Learning Using PySpark
- Section 3: Data Analysis
- Chapter 11: Data Visualization with PySpark
- Chapter 12: Spark SQL Primer
- Chapter 13: Integrating External Tools with Spark SQL
- Chapter 14: The Data Lakehouse
- Other Books You May Enjoy
Product information
- Title: Essential PySpark for Scalable Data Analytics
- Author(s):
- Release date: October 2021
- Publisher(s): Packt Publishing
- ISBN: 9781800568877
You might also like
book
Hands-On Big Data Analytics with PySpark
Use PySpark to easily crush messy data at scale and discover proven techniques to create testable, immutable, …
video
Mastering Big Data Analytics with PySpark
PySpark helps you perform data analysis at scale; it enables you to build more scalable analyses and …
book
Simplify Big Data Analytics with Amazon EMR
Design scalable big data solutions using Hadoop, Spark, and AWS cloud native services Key Features Build …
book
Advanced Analytics with PySpark
The amount of data being generated today is staggering and growing. Apache Spark has emerged as …