Shortly after I joined Oracle Corporation in 1989, several of my technical mentors taught me that just about the only thing you can tell from looking at a database’s buffer cache hit ratio is that when it’s really high, it’s usually a sign of trouble [Millsap (2001b)]. In the several years that have passed since my first exposure to that lesson, the battle has raged between advocates of using the buffer cache hit ratio as a primary indicator of performance quality and those who believe that the hit ratio metric is too unreliable for such use. It’s not been much of a battle, actually. The evidence that hit ratios are unreliable is overwhelming, and similar ratio fallacies occurring in other industries are well documented (see, for example, [Jain (1991)] and [Goldratt (1992)]).
One of the most compelling (and funniest) proofs that hit ratios are unreliable is a PL/SQL procedure called choose_a_hit_ratio, written by Connor McDonald. Connor’s procedure lets you increase your database buffer cache hit ratio to any value you like between its current value and 99.9999999%. How does it work? By adding wasteful workload to your system. That’s right. You specify what you want your database buffer cache hit ratio to be, and choose_a_hit_ratio adds just enough wasteful workload to raise your hit ratio to that value. What you get in return is proof positive that having a high database buffer cache hit ratio is no indication that ...
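To make the mechanism concrete, here is a minimal sketch of the idea. It is not Connor McDonald’s actual implementation; the procedure name fake_hit_ratio, the scratch table waste_t, and the direct SELECT grant on v$sysstat are all illustrative assumptions. The sketch uses the conventional hit ratio formula, 1 - (physical reads / (db block gets + consistent gets)): because the ratio rises whenever you add logical reads that are satisfied from the cache, all the procedure has to do is compute how many extra cache hits it needs and then burn them with pointless queries.

-- A rough sketch, NOT Connor McDonald's choose_a_hit_ratio itself.
-- Assumptions: a direct SELECT grant on v$sysstat, and a small,
-- fully cached scratch table named waste_t (both illustrative).
-- p_target is the desired ratio expressed as a fraction, e.g. 0.95.
CREATE OR REPLACE PROCEDURE fake_hit_ratio (p_target IN NUMBER) AS
  l_phys    NUMBER;   -- physical reads so far
  l_logical NUMBER;   -- db block gets + consistent gets so far
  l_extra   NUMBER;   -- additional logical reads we must generate
  l_dummy   NUMBER;
BEGIN
  SELECT SUM(DECODE(name, 'physical reads', value, 0)),
         SUM(DECODE(name, 'physical reads', 0, value))
    INTO l_phys, l_logical
    FROM v$sysstat
   WHERE name IN ('physical reads', 'db block gets', 'consistent gets');

  -- ratio = 1 - phys/logical, so reaching p_target requires the
  -- logical read count to grow to roughly phys / (1 - p_target).
  l_extra := CEIL(l_phys / (1 - p_target)) - l_logical;

  -- Each scan of the cached scratch table adds logical reads (cache
  -- hits) and, after the first pass, essentially no physical reads.
  -- The loop may overshoot p_target a little, but the ratio can only
  -- go up: this is pure wasted work.
  FOR i IN 1 .. GREATEST(l_extra, 0) LOOP
    SELECT COUNT(*) INTO l_dummy FROM waste_t;
  END LOOP;
END fake_hit_ratio;
/

A call such as fake_hit_ratio(0.95) would then drive the system-wide ratio toward 95% while accomplishing nothing useful whatsoever, which is exactly the point Connor’s procedure makes.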