Foreword
Many of the ideas that underpin the Apache Hadoop project are decades old. Academia and industry have been exploring distributed storage and computation since the 1960s. The entire tech industry grew out of government and business demand for data processing, and at every step along that path, the data seemed big to the people of the time. Even some of the most advanced and interesting applications go way back: machine learning, a capability that’s new to many enterprises, traces its origins to academic research in the 1950s and to practical systems work in the 1960s and 1970s.
But real, practical, useful, massively scalable, and reliable systems simply could not be found, at least not cheaply, until Google confronted the problem of the internet in the late 1990s and early 2000s. Collecting, indexing, and analyzing the entire web was impossible with the commercially available technology of the time.
Google dusted off decades of research in large-scale systems. Its architects realized that, for the first time, the computers and networking they required could be had at reasonable cost.
Its work—on the Google File System (GFS) for storage and on the MapReduce framework for computation—created the big data industry.
This work inspired Mike Cafarella and Doug Cutting to create the open source Hadoop project in 2005. The fact that the software was easy to obtain, and could be improved and extended by a global developer community, made it attractive to a wide audience. ...