This course covers the fundamentals of Apache Spark with Python and teaches you everything you need to know about developing Spark applications using PySpark, the Python API for Spark. By the end of this course, you will have gained in-depth knowledge of Apache Spark along with the general big data analysis and manipulation skills needed to help your company adopt Apache Spark for building big data processing pipelines and data analytics applications. This course covers more than ten hands-on big data examples and teaches you how to frame data analysis problems as Spark problems. Together we will work through examples such as aggregating NASA Apache web logs from different sources; exploring price trends in California real estate data; writing Spark applications to find the median salary of developers in different countries from Stack Overflow survey data; and building a system to analyze how makerspaces are distributed across different regions of the United Kingdom. And much, much more.
What You Will Learn
- Get an overview of the architecture of Apache Spark.
- Develop Apache Spark 2.0 applications using RDD transformations and actions and Spark SQL.
- Work with Apache Spark's primary abstraction, resilient distributed datasets (RDDs), to process and analyze large datasets.
- Analyze structured and semi-structured data using DataFrames, and develop a thorough understanding of Spark SQL.
- Apply advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
- Scale up Spark applications on a Hadoop YARN cluster through Amazon's Elastic MapReduce service.
- Share information across different nodes on an Apache Spark cluster using broadcast variables and accumulators.
- Write Spark applications using the Python API, PySpark.
Who This Course Is For
This course is for anyone who wants to fully understand how Apache Spark works and how it is being used in the field; software engineers who want to develop Apache Spark 2.0 applications using Spark Core and Spark SQL; and data scientists or data engineers who want to advance their careers by improving their big data processing skills.
About The Author
James Lee: James Lee is a passionate software wizard working at one of the top Silicon Valley-based start-ups specializing in big data analysis. He has also worked at Google and Amazon. In his day job, he works with big data technologies, including Cassandra and Elasticsearch, and is an absolute Docker geek and IntelliJ IDEA lover. Beyond his career as a software engineer, he is keen on sharing his knowledge with others and mentoring them, especially around start-ups and programming. He has been teaching courses and conducting workshops on Java programming and IntelliJ IDEA since he was 21. James also enjoys skiing and swimming, and is a passionate traveler.
Table of contents
- Chapter 1 : Get Started with Apache Spark
- Chapter 2 : RDD
- Chapter 3 : Spark Architecture and Components
- Chapter 4 : Pair RDD
- Chapter 5 : Advanced Spark Topics
- Chapter 6 : Spark SQL
- Chapter 7 : Running Spark in a Cluster
- Title: Apache Spark with Python - Big Data with PySpark and Spark
- Release date: April 2018
- Publisher(s): Packt Publishing
- ISBN: 9781789133394