Human brains are built and trained for threat avoidance. We might be terrible at weighing relative risk,1 but we’re excellent at picking out the one thing in a pile of other things that looks like a failure mode that we’ve seen before.2
But “antipatterns” are not your average “this one time, at Foo Camp” Tale of Fail. They’re the things we’ve seen go horribly wrong not once, not twice, but over and over again. Antipatterns are attractive fallacies: strategies that succeed for a little less time than you will find you needed them to. Common sense that turns out to be more common than sensible.
Throughout the rest of this book, you’ll find examples of things that you should do. That’s not what I’m about here in this chapter. Think of this section as your “Defense Against the ‘D’oh!’ Arts” glossary. Or just sit back and enjoy imagining all the stuff that I and a host of colleagues, past and present, had to screw up in order to get to the point where I could share this short list with you. SREs are not perfect. Some of these mistakes I’ve made more than once myself. That’s why they’re antipatterns.
A new mission cannot always be achieved with old tools and methods.
Site Reliability Operations: ...