One of the challenges of working with Spark is finding analytic solutions for very large datasets. As a preparation step, in this chapter we will build our own large Spark dataframe.
Also note that the concept of a large dataframe is relative. Since the free tier of Databricks limits the size of any dataframe we create, we will build a 1-million-row dataframe with about 11 variables.
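As a rough sketch of how a dataframe of that shape might be generated directly inside Spark (assuming a sparklyr connection; the column names, the use of sdf_len, and the random-number columns are illustrative choices rather than the book's actual recipe):

library(sparklyr)
library(dplyr)

# Assumes a local Spark installation; on Databricks a connection is already provided.
sc <- spark_connect(master = "local")

# Start from a one-column Spark dataframe of ids 1..1,000,000 and add ten
# random columns, giving roughly 11 variables in total. The rand()/randn()
# calls are passed through to Spark SQL and evaluated by Spark, not in R.
big_sdf <- sdf_len(sc, 1e6) %>%
  mutate(
    x1 = rand(),  x2 = rand(),  x3 = rand(),  x4 = rand(),  x5 = rand(),
    x6 = randn(), x7 = randn(), x8 = randn(), x9 = randn(), x10 = randn()
  )

sdf_nrow(big_sdf)   # 1,000,000 rows
sdf_ncol(big_sdf)   # 11 columns (id plus x1..x10)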
I will also show you how to build a similar dataset in base R, so that you can run your own tests and judge the performance benefits of carrying out the analytics in Spark.
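For comparison, a minimal base R sketch along the same lines (again with purely illustrative column names and distributions) could look like this:

set.seed(123)                 # make the local dataset reproducible
n <- 1e6                      # one million rows, matching the Spark dataframe

# An id column, one categorical column, and nine numeric columns: about 11 variables.
r_df <- data.frame(
  id       = seq_len(n),
  category = sample(letters[1:5], n, replace = TRUE),
  matrix(rnorm(n * 9), nrow = n, dimnames = list(NULL, paste0("x", 1:9)))
)

str(r_df)                     # 1,000,000 obs. of 11 variables

Having the same data generator in plain R makes it straightforward to time an operation locally and then repeat it on the Spark dataframe to see where the distributed engine starts to pay off.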