The Hadoop Performance Myth

Why Best Practices Lead to Underutilized Clusters, and Which New Tools Can Help

The wish lists of many data-driven organizations seem reasonable enough. They’d like to capitalize on real-time data analysis, move beyond batch processing for time-critical insights, allow multiple users to share cluster resources, and provide predictable service levels. However, fundamental performance limitations of complex distributed systems such as Hadoop prevent much of this from happening.

In this report, Courtney Webster examines the root causes of these performance problems and explains why best practices for mitigating them—cluster tuning, provisioning, and even cluster isolation for mission-critical jobs—don’t provide viable, scalable, or long-term solutions.

Organizations have been pushing Hadoop and other distributed systems to their performance breaking points as they seek to use clusters as shared resources across multiple business units and individual users. Once they hit this performance wall, companies find it difficult to deliver on the promise of big data at scale.

Read this report to find out what the implications are for your organization.

Courtney Webster

Courtney Webster is a reformed chemist in the Washington, D.C. metro area. She spent a few years after grad school programming robots to do chemistry and now manages web and mobile applications for clinical research trials.