Big Data Processing with Apache Spark

Video description

Efficiently tackle large data sets and big data analysis challenges using Spark and Python

About This Video

This course will allow the learner to:

  • Get up and running with Apache Spark and Python
  • Integrate Spark with AWS for real-time analytics
  • Apply processed data streams to machine learning APIs of Apache Spark

In Detail

Processing big data in real time is challenging because of scalability, information-consistency, and fault-tolerance requirements. Big Data Processing with Apache Spark teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore the core concepts and tools of the Spark ecosystem, such as Spark Streaming and its API, the machine learning extension, and Structured Streaming.

You'll begin by learning data processing fundamentals using the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you'll move on to using the Spark Streaming APIs to consume data in real time from TCP sockets, and integrate Amazon Web Services (AWS) for stream consumption.

By the end of this course, you'll not only understand how to use Spark's machine learning extensions and Structured Streaming, but also be able to apply Spark in your own upcoming big data projects.


This course is for you if you are a software engineer, architect, or IT professional who wants to explore distributed systems and big data analytics. Although you don't need any knowledge of Spark, prior experience with Python is recommended.

Product information

  • Title: Big Data Processing with Apache Spark
  • Author(s): Manuel Ignacio Franco Galeano, Nimish Narang
  • Release date: January 2019
  • Publisher(s): Packt Publishing
  • ISBN: 9781789953688