Chapter 6. Bayes Classifier on Cloud Dataproc
Having become accustomed to running queries in BigQuery, where there were no clusters to manage, I’m dreading going back to configuring and managing Hadoop clusters. But I did promise you a tour of data science on the cloud, and in many companies, Hadoop plays an important role in that. Fortunately, Google Cloud Dataproc makes it convenient to spin up a Hadoop cluster that is capable of running MapReduce, Pig, Hive, and Spark. Although there is no getting away from cluster management and diminished resources, I can at least avoid the programming drudgery of writing low-level MapReduce jobs by using Apache Spark and Apache Pig.
In this chapter, we tackle the next stage of our data science problem by creating a Bayesian model to predict the likely arrival delay of a flight. We will do this through an integrated workflow that involves BigQuery, Spark SQL, and Apache Pig. Along the way, we will also learn how to create, resize, and delete job-specific Hadoop clusters using Cloud Dataproc.
MapReduce and the Hadoop Ecosystem
MapReduce was described in a paper by Jeff Dean and Sanjay Ghemawat as a way to process large datasets on a cluster of machines. They showed that many real-world tasks can be decomposed into a sequence of two types of functions: map functions that process key-value pairs to generate intermediate key-value pairs, and reduce functions that merge all the intermediate values associated with the same key. A flexible and ...
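To make the decomposition concrete, here is a minimal sketch of the MapReduce idea in plain Python, using the classic word-count task. The function names (`map_fn`, `reduce_fn`, `mapreduce`) are illustrative, not part of any Hadoop API; a real cluster would run the map, shuffle, and reduce phases in parallel across many machines, but the data flow is the same.

```python
from collections import defaultdict

def map_fn(_, line):
    """Map: emit an intermediate (word, 1) pair for each word in a line."""
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce: merge all intermediate values that share the same key."""
    yield word, sum(counts)

def mapreduce(records, map_fn, reduce_fn):
    # Shuffle phase: group every intermediate value by its key.
    groups = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):
            groups[k].append(v)
    # Reduce phase: merge each key's values into a final result.
    result = {}
    for k, values in groups.items():
        for out_key, out_value in reduce_fn(k, values):
            result[out_key] = out_value
    return result

# Input records are (key, value) pairs; here, (line number, line text).
lines = enumerate(["the quick brown fox", "the lazy dog"])
print(mapreduce(lines, map_fn, reduce_fn))
# → {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

The point of the pattern is that `map_fn` and `reduce_fn` are pure functions of their inputs, so the framework is free to distribute them across a cluster and handle the grouping (the "shuffle") itself.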