Chapter 1. Introduction
Back in 2003, Google published a paper describing a scale-out architecture for storing massive amounts of data across clusters of servers, which it called the Google File System (GFS). A year later, Google published another paper describing a programming model called MapReduce, which took advantage of GFS to process data in a parallel fashion, bringing the program to where the data resides. Around the same time, Doug Cutting and others were building an open source web crawler now called Apache Nutch. The Nutch developers realized that the MapReduce programming model and GFS were the perfect building blocks for a distributed web crawler, and they began implementing their own versions of both systems. These components would later split from Nutch and form the Apache Hadoop project. The ecosystem of projects built around Hadoop's scale-out architecture brought about a different way of approaching problems by allowing the storage and processing of all data important to a business.
While these new and exciting ways to process and store data in the Hadoop ecosystem have driven adoption across many different verticals, it has become apparent that managing petabytes of data in a single centralized cluster can be dangerous. Hundreds, if not thousands, of servers linked together in a common application stack raise many questions about how to protect such a valuable asset. While other books focus on such things as writing MapReduce code, ...