Drawing our first chart

With these Parquet files ready, create a new notebook in Zeppelin and name it Batch analytics. Then, in the first cell, type the following:

val transactions = spark.read.parquet("<rootProjectPath>/Scala-Programming-Projects/bitcoin-analyser/data/transactions")
z.show(transactions.sort($"timestamp"))

The first line creates a DataFrame from the transactions files. You need to replace the path passed to the parquet function with the absolute path to your own Parquet files. The second line uses the special z variable to show the content of the DataFrame in a table. This z variable is automatically provided in all notebooks. Its type is ZeppelinContext, and it allows you to interact with the Zeppelin renderer and interpreter.
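
Before charting, it can help to confirm that the data loaded as expected. The following lines are a minimal sketch, assuming the transactions DataFrame created above; the view name "transactions" is only an illustrative choice:

// Print the columns and types Spark inferred from the Parquet files
transactions.printSchema()

// Register a temporary view (hypothetical name) so later Zeppelin cells
// can query the same data with SQL paragraphs
transactions.createOrReplaceTempView("transactions")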

Execute the ...
