Writing UDF on PySpark

As in Scala and Java, you can work with User Defined Functions (UDFs) in PySpark. Let's walk through an example. Suppose we want to see the grade distribution, based on score, for some students who have taken courses at a university.

We can store the student names and course names in two separate lists as follows:

# Let's generate some random lists
students = ['Jason', 'John', 'George', 'David']
courses = ['Math', 'Science', 'Geography', 'History', 'IT', 'Statistics']

Now let's declare an empty list for storing the data about courses and students, so that each (student, course) pair can be appended to it as follows:

import itertools
import random

rawData = []
for (student, course) in itertools.product(students, courses):
    rawData.append((student, course, random.randint(0, ...
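The snippet above is truncated, but the idea can be sketched end to end in plain Python (no Spark session needed). This is a minimal sketch under two assumptions not stated in the excerpt: scores are integers in the range 0–100, and the score-to-grade mapping uses a hypothetical `grade_of` helper. In PySpark, a plain function like `grade_of` is exactly what you would later wrap with `pyspark.sql.functions.udf` to apply it to a DataFrame column.

```python
import itertools
import random

students = ['Jason', 'John', 'George', 'David']
courses = ['Math', 'Science', 'Geography', 'History', 'IT', 'Statistics']

# One record per (student, course) pair.
# Assumption: scores are integers in 0..100.
rawData = [(student, course, random.randint(0, 100))
           for (student, course) in itertools.product(students, courses)]

# Hypothetical score-to-grade mapping. In PySpark you would wrap this
# plain function as a UDF, e.g. udf(grade_of, StringType()), and apply
# it to the score column to get the grade distribution.
def grade_of(score):
    if score >= 90:
        return 'A'
    elif score >= 80:
        return 'B'
    elif score >= 70:
        return 'C'
    else:
        return 'F'
```

With 4 students and 6 courses, `rawData` holds 24 tuples of the form `(student, course, score)`.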
