40 Algorithms Every Programmer Should Know

by Imran Ahmad

Implementing data processing in Apache Spark

Let's see how we can create an RDD in Apache Spark and run distributed processing on it across the cluster:

  1. First, we need to create a new Spark session, as follows:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('cloudanum').getOrCreate()
  2. Once we have created a Spark session, we use a CSV file as the source of the RDD. Then, we run the following code, which creates an RDD abstracted as a DataFrame called df. The ability to abstract an RDD as a DataFrame was added in Spark 2.0, and it makes the data easier to process:

df = spark.read.csv('taxi2.csv', inferSchema=True, header=True)
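
The DataFrame is still backed by an RDD under the hood. As a minimal sketch of what this abstraction means (assuming the df created above), the underlying RDD stays accessible through the DataFrame's rdd attribute:

# A minimal sketch, assuming the DataFrame df created in the previous step.
# The DataFrame is a higher-level view over an RDD of Row objects, which
# remains available for RDD-style operations if needed.
rdd = df.rdd
print(rdd.take(5))  # inspect the first five rows as Row objects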

Let's look into the columns of the DataFrame:
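
A minimal sketch of doing so, assuming the df loaded from taxi2.csv above:

print(df.columns)   # column names picked up from the CSV header
df.printSchema()    # each column together with its inferred data type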

  3. Next, ...
