INTRODUCTION
IN THIS FAST-PACED WORLD of ever-changing technology, we are drowning in information. We generate and store massive quantities of data, and with the proliferation of devices on our networks, we have seen explosive growth in both the volume and the diversity of data formats: Big Data.
But let's face it: most of our organizations have not been able to manage these massive quantities of data proactively or effectively, and we have not been able to use this information to make better decisions and do business smarter. We have been overwhelmed with vast amounts of data while, at the same time, starved for knowledge. The result for companies is lost productivity, lost opportunities, and lost revenue.
Over the past decade, many technologies have promised to help process and analyze the vast amounts of information we have, and most of them have come up short. We know this because, as programmers focused on data, we have tried them all. Many approaches have been proprietary, resulting in vendor lock-in. Some were promising but could not scale to large data sets; others were hyped so heavily that they could not meet expectations, or they simply were not ready for prime time.
When Apache Hadoop entered the scene, however, everything was different. Certainly there was hype, but this was an open source project that had already found incredible success ...