The purpose of caching is twofold: first, to speed up access to a specific resource that, for whatever reason, might be slower than we desire; and second, to tread lightly on that resource. These goals go hand in hand. It is a sort of vicious cycle: often it is the very act of saturating a resource with requests that makes it slow for future requests.
In Chapter 12, we saw an example of caching at the database layer, in our implementation of a materialized view. There we saw dramatic performance improvements, on the order of 100 to 1,000 times, even with a very small dataset and an unloaded system. The 99% or 99.9% of the time once spent processing those requests became idle time, freed up for other requests and making them faster as well.
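As a refresher, the idea can be sketched in SQL. This is an illustrative sketch only: the table and column names are hypothetical, it uses PostgreSQL's native materialized-view syntax, and Chapter 12's actual implementation may differ in its details.

```sql
-- Precompute an expensive aggregate once, rather than on every request.
-- (Hypothetical schema: an "orders" table with customer_id and total.)
CREATE MATERIALIZED VIEW order_totals AS
  SELECT customer_id, SUM(total) AS lifetime_total
  FROM orders
  GROUP BY customer_id;

-- Reads now hit the small precomputed result instead of scanning orders:
SELECT lifetime_total FROM order_totals WHERE customer_id = 42;

-- The cached rows are a snapshot; they must be refreshed as they go stale:
REFRESH MATERIALIZED VIEW order_totals;
```

The trade-off previewed here is the one this chapter returns to: reads become dramatically cheaper, but the cached copy must be kept in sync with the underlying data.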
Caching seems like a marvelous tool that should be used anywhere and everywhere. And it should. Caching can be the difference between having a problematic bottleneck on your hands and having a site or service that hums along without a hitch.
So why do so many caution against what they describe as “premature optimization” when it comes to caching? If you take Twitter as a cautionary tale, you will agree that by the time it’s obvious that you need to optimize, it’s already too late. The technorati will not wait for your upgrades and performance enhancements before they declare you dead, or worse, irrelevant.
What the worrywarts are really afraid of is that caching is hard. It’s also error-prone. As we saw in Chapter 12, maintaining ...