Chapter 3. MapReduce
In the traditional relational database world, all processing happens after the information has been loaded into the store, using a specialized query language on highly structured and optimized data structures. The approach pioneered by Google, and adopted by many other web companies, is instead to create a pipeline that reads and writes arbitrary file formats, passing intermediate results between stages as files, with the computation spread across many machines. Typically based around the MapReduce approach to distributing work, this style of processing requires a whole new set of tools, which I’ll describe below.
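To make that division into stages concrete, here is a minimal in-memory sketch of the map → shuffle/sort → reduce pipeline in plain Java. The class and method names are my own, and the single-process "shuffle" only stands in for what a real framework would do with files spread across many machines.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: a word count expressed as a map stage, a shuffle/sort, and a reduce stage.
public class MapReduceSketch {

  // Map stage: turn each input line into intermediate (word, 1) pairs.
  static List<Map.Entry<String, Integer>> map(String line) {
    List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
    for (String word : line.split("\\s+")) {
      if (!word.isEmpty()) {
        pairs.add(Map.entry(word, 1));
      }
    }
    return pairs;
  }

  // Reduce stage: combine all the counts that share the same key.
  static int reduce(String key, List<Integer> counts) {
    return counts.stream().mapToInt(Integer::intValue).sum();
  }

  public static void main(String[] args) {
    List<String> input = List.of("the quick brown fox", "the lazy dog");

    // Shuffle/sort: group the intermediate pairs by key, in sorted order,
    // as the framework would do between the map and reduce stages.
    Map<String, List<Integer>> grouped = new TreeMap<>();
    for (String line : input) {
      for (Map.Entry<String, Integer> pair : map(line)) {
        grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
               .add(pair.getValue());
      }
    }

    grouped.forEach((word, counts) ->
        System.out.println(word + "\t" + reduce(word, counts)));
  }
}

In a real deployment, the interesting part is that each stage can run in parallel on different machines, with only files flowing between them.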
Originally developed by Yahoo! as a clone of Google’s MapReduce infrastructure, but subsequently open sourced, Hadoop takes care of running your code across a cluster of machines. Its responsibilities include chunking up the input data and sending it to each machine; running your code on each chunk and checking that it ran; passing any results either on to further processing stages or to the final output location; performing the sort that occurs between the map and reduce stages and sending each chunk of that sorted data to the right machine; and writing debugging information on each job’s progress, among other things.
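As a rough illustration of how little of that work ends up in your own code, here is a sketch of the classic word-count job written against Hadoop’s Java MapReduce API, closely following the example in Hadoop’s own tutorial. You supply only the mapper, the reducer, and the job setup; the chunking, scheduling, sorting, and shuffling described above are handled by the framework. Input and output paths are passed on the command line.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit a (word, 1) pair for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts for each word after Hadoop has sorted and
  // grouped the intermediate pairs by key.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Job setup: everything else, from splitting the input to writing the
  // output, is the framework's responsibility.
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

You would typically package this as a JAR and submit it to the cluster with the hadoop jar command, pointing it at an input directory and a not-yet-existing output directory.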
As you might guess from that list of requirements, it’s quite a complex system, but thankfully it has been battle-tested by a lot of users. There’s a lot going on under the hood, but most of the time, as a developer, you only have ...