Chapter 27. Diagnosing Your Observability Investment
If you’ve felt growing pressure to “cut observability costs,” you’re not alone. Observability spend has climbed steadily for years. According to Gartner,1 the cost of observability has risen at least 40% per year for 15 consecutive years. Observability is now the second-largest software line item in R&D organizations—second only to cloud—and most organizations spend somewhere between 10% and 25% of their infrastructure bill on observability tools.
In the last chapter, we talked about how to make the business case for observability. It’s rarely pleasant when leadership or finance starts asking for justifications. But cost pressure can be useful when it forces us to ask hard questions, like: Is our investment working?
That question often gets misread as “Can we justify this line item?” It shouldn’t be. The real question is, “Did we buy the right capabilities, and are we getting the right returns?” Many organizations are living a disorienting mismatch—for example, paying for observability as if it were a strategic capability while operating it like a cost center and using it like monitoring. If you aren’t getting the returns you feel you should, given what you’re paying, you might be right.
In this chapter, we’ll show you how to diagnose your observability expenditures. If you decide to invest in better observability, it may cost you. Or it may not: outrageously high bills are the most common consequence of poor tool fit, ...