Apache Spark Streaming with Python and PySpark

Video description

Spark Streaming is becoming incredibly popular, and with good reason. According to IBM, 90% of the data in the world today was created in the last two years alone, and our current output is roughly 2.5 quintillion bytes per day. The world is being immersed in data, more so every day. As a result, analyzing static DataFrames of non-dynamic data is a practical approach to fewer and fewer problems. This is where data streaming comes in: the ability to process data almost as soon as it is produced, recognizing the time-dependency of the data.

Apache Spark Streaming gives us an unlimited ability to build cutting-edge applications. It is also one of the most compelling technologies of the last decade in terms of its disruption of the big data world. Spark provides in-memory cluster computing, which greatly boosts the speed of iterative algorithms and interactive data mining tasks, and it is also a powerful engine for both ingesting and processing streaming data. The synergy between these capabilities makes Spark an ideal tool for processing gargantuan data fire hoses. Tons of companies, including Fortune 500 companies, are adopting Apache Spark Streaming to extract meaning from massive data streams; today, you have access to that same big data technology right on your desktop.

This Apache Spark Streaming course is taught in Python, currently one of the most popular programming languages in the world. Its rich data community, offering vast amounts of toolkits and features, makes it a powerful tool for data processing. Using PySpark (the Python API for Spark), you will be able to interact with Apache Spark Streaming's main abstraction, RDDs, as well as other Spark components, such as Spark SQL, and much more. Let's learn how to write Apache Spark Streaming programs with PySpark Streaming to process big data sources today!
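To give a flavor of what such a program looks like, here is a minimal PySpark Streaming sketch (not from the course itself) that counts words arriving on a local socket. The host and port are placeholder assumptions; you can feed the stream with a tool such as "nc -lk 9999".

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    # Assumption: a text server is listening on localhost:9999.
    sc = SparkContext("local[2]", "NetworkWordCount")
    ssc = StreamingContext(sc, 1)  # 1-second micro-batches

    lines = ssc.socketTextStream("localhost", 9999)
    counts = (lines.flatMap(lambda line: line.split(" "))
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()  # print each batch's word counts

    ssc.start()
    ssc.awaitTermination()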

What You Will Learn

  • An overview of the Apache Spark architecture.
  • How to develop Apache Spark Streaming applications with PySpark, using RDD transformations and actions and Spark SQL to work with Spark's primary abstraction, Resilient Distributed Datasets (RDDs), and process and analyze large data sets (see the first sketch after this list).
  • Advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs (see the second sketch after this list).
  • How to analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL (see the third sketch after this list).
  • How to scale up Spark Streaming applications for both bandwidth and processing speed, and how to integrate Spark Streaming with messaging systems such as Apache Kafka and data sources such as Amazon Web Services (AWS) Kinesis (see the fourth sketch after this list).
  • Best practices for working with Apache Spark Streaming in the field, plus an overview of the big data ecosystem.
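As a taste of the RDD API covered above, here is a minimal sketch (the in-line data and app name are arbitrary assumptions) showing the split between lazy transformations and eager actions:

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "RDDBasics")

    # Hypothetical in-line data; a text file or stream works the same way.
    nums = sc.parallelize([1, 2, 3, 4, 5])

    # Transformations are lazy: they only build a lineage, nothing runs yet.
    squares = nums.map(lambda x: x * x)
    evens = squares.filter(lambda x: x % 2 == 0)

    # Actions trigger execution and bring results back to the driver.
    print(evens.collect())                     # [4, 16]
    print(squares.reduce(lambda a, b: a + b))  # 55

    sc.stop()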
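For partitioning and caching, a minimal sketch, assuming a local master and arbitrarily chosen data and partition counts:

    from pyspark import SparkContext, StorageLevel

    sc = SparkContext("local[*]", "CachingDemo")

    # repartition() controls parallelism; 4 partitions is an arbitrary choice.
    squares = sc.parallelize(range(1000000)).repartition(4) \
                .map(lambda x: x * x)

    # cache() keeps the RDD in memory after it is first computed; persist()
    # accepts an explicit storage level instead, for example:
    #   squares.persist(StorageLevel.MEMORY_AND_DISK)
    squares.cache()

    print(squares.count())  # first action: computes and caches the RDD
    print(squares.sum())    # second action: reuses the cached partitions

    sc.stop()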
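For DataFrames and Spark SQL, a minimal sketch in which the rows and column names are made up for illustration; the DataFrame API and SQL queries are interchangeable views of the same engine:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("SparkSQLDemo").getOrCreate()

    # Hypothetical in-line data; in practice you might read JSON or Parquet.
    df = spark.createDataFrame(
        [("alice", 34), ("bob", 29), ("carol", 41)],
        ["name", "age"],
    )

    # DataFrame API:
    df.filter(df.age > 30).select("name").show()

    # Equivalent SQL over a temporary view:
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age > 30").show()

    spark.stop()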
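And for the Kafka integration, a sketch against the Spark 2.x DStream API that matches this course's era; note that the pyspark.streaming.kafka module was removed in Spark 3.0, the broker address and topic name are placeholder assumptions, and the job needs the spark-streaming-kafka-0-8 package on its classpath:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    sc = SparkContext("local[2]", "KafkaWordCount")
    ssc = StreamingContext(sc, 5)  # 5-second micro-batches

    # Assumption: a Kafka broker at localhost:9092 with a topic "events".
    stream = KafkaUtils.createDirectStream(
        ssc, ["events"], {"metadata.broker.list": "localhost:9092"})

    # Each record is a (key, value) pair; count words in the values.
    counts = (stream.map(lambda kv: kv[1])
                    .flatMap(lambda line: line.split(" "))
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()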

Audience

This course is for Python developers looking to get better at data streaming, managers or senior engineers in data engineering teams, and Spark developers eager to expand their skills.

About The Author

James Lee: James Lee is a passionate software wizard working at one of the top Silicon Valley-based start-ups specializing in big data analysis. He has also worked at Google and Amazon. In his day job, he works with big data technologies, including Cassandra and Elasticsearch, and is an absolute Docker geek and IntelliJ IDEA lover. Apart from his career as a software engineer, he is keen on sharing his knowledge with others and guiding them, especially in relation to start-ups and programming. He has been teaching courses and conducting workshops on Java programming / IntelliJ IDEA since he was 21. James also enjoys skiing and swimming, and is a passionate traveler.

Product information

  • Title: Apache Spark Streaming with Python and PySpark
  • Author(s): James Lee, Matthew P. McAteer, Tao W
  • Release date: September 2018
  • Publisher(s): Packt Publishing
  • ISBN: 9781789808223