Alerting is one of the most crucial parts of monitoring that you will want to get right. For whatever reason, infrastructure likes to go sideways in the middle of the night. Why is it always 3 a.m.? Can’t I have an outage at 2 p.m. on a Tuesday? Without alerts, we’d all have to be staring at graphs all day long, every day. With the multitude of things that could possibly go wrong, and the ever-increasing complexity of our systems, this simply isn’t tenable.
So, alerts. We can all agree that alerting is an important function of a monitoring system. However, sometimes we forget that the purpose of monitoring isn’t solely to send us alerts. Remember our definition:
Monitoring is the action of observing and checking the behavior and outputs of a system and its components over time.
Alerts are just one way we accomplish this goal.
Great alerting is harder than it seems. System metrics tend to be spiky, so alerting on raw datapoints tends to produce lots of false alarms. To get around that problem, a rolling average is often applied to the data to smooth it out (for example, five minutes' worth of datapoints averaged into one datapoint), which unfortunately causes us to lose granularity, resulting in occasionally missing important events. There's just no winning, is there?
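To make the trade-off concrete, here is a minimal sketch of a trailing rolling average, using made-up CPU numbers and a hypothetical alert threshold of 80. Notice how the brief spike to 95, which a raw-datapoint alert would fire on, never crosses the threshold once smoothed, which is exactly the granularity we give up:

```python
from collections import deque

def rolling_average(datapoints, window=5):
    """Smooth a series by averaging each value with up to
    window - 1 preceding values (a simple trailing moving average)."""
    buf = deque(maxlen=window)
    smoothed = []
    for value in datapoints:
        buf.append(value)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# Hypothetical CPU readings with a brief spike.
# A raw-datapoint alert at > 80 would fire on the 95;
# the smoothed series never gets anywhere near 80.
raw = [20, 22, 95, 21, 23, 20]
print(rolling_average(raw, window=5))
```

The same idea underlies the averaging functions most monitoring tools offer (e.g., Prometheus's `avg_over_time`); the window size is the knob that trades false alarms against missed events.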
One of the other reasons alerting is so difficult to do well is that alerts often go to a human, and we humans have limited attention. You'd ...