Using Avro with Spark

So far, we have looked at text-based files: plain text, JSON, and CSV. JSON and CSV are a step up from plain text because they carry some schema information — field names in JSON and column headers in CSV.

In this section, we'll be looking at an advanced, schema-based format known as Avro. The following topics will be covered:

  • Saving data in Avro format
  • Loading Avro data
  • Testing

Avro embeds both the schema and the data within the same file. It is a binary format and is not human-readable. We will learn how to save data in Avro format, load it, and then test it.
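As a minimal sketch of this save-and-load round trip (assuming Spark 2.4+ with the built-in spark-avro module on the classpath, and a hypothetical output path — the column names and file name below are illustrative, not from the original):

  import org.apache.spark.sql.SparkSession

  object AvroExample {
    def main(args: Array[String]): Unit = {
      // Assumes the spark-avro module is available, e.g. started with
      // --packages org.apache.spark:spark-avro_2.12:<spark-version>
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("avro-example")
        .getOrCreate()
      import spark.implicits._

      val df = Seq(("a", 100), ("b", 200)).toDF("userId", "amount")

      // Save in Avro format; the schema is embedded in the binary file
      df.write.format("avro").save("transactions.avro") // hypothetical path

      // Load it back; the embedded schema is picked up automatically
      val loaded = spark.read.format("avro").load("transactions.avro")
      loaded.show()

      spark.stop()
    }
  }

Because the schema travels with the data, no schema needs to be supplied when reading the file back.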

First, we will create our user transaction:

  test("should save and load avro") {
    //given
    import spark.sqlContext.implicits._
    val rdd = spark.sparkContext
      .makeRDD(List(UserTransaction("a", 100), UserTransaction("b", 200)))
    ...
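The snippet above is truncated. A complete version of the test might look like the following sketch (assuming a ScalaTest suite with Matchers, a UserTransaction(userId: String, amount: Int) case class, and a hypothetical output path — the save/load calls and path are assumptions, not from the original):

  case class UserTransaction(userId: String, amount: Int)

  test("should save and load avro") {
    //given
    import spark.sqlContext.implicits._
    val rdd = spark.sparkContext
      .makeRDD(List(UserTransaction("a", 100), UserTransaction("b", 200)))

    //when: save as Avro (path is a hypothetical choice)
    val path = "transactions.avro"
    rdd.toDF().write.format("avro").mode("overwrite").save(path)

    //then: load it back and check that the records round-trip
    val loaded = spark.read.format("avro").load(path)
      .as[UserTransaction].collect().toList
    loaded should contain theSameElementsAs
      List(UserTransaction("a", 100), UserTransaction("b", 200))
  }

Note that the assertion uses theSameElementsAs rather than equality, because Spark gives no ordering guarantee on the rows read back from the file.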
