Chapter 1. Introduction

Make no mistake—the coming N weeks are going to be personally and professionally stressful, and at times we will race to keep ahead of events as they unfold. But we have been preparing for crises for over a decade, and we’re ready. At a time when people around the world need information, communication, and computation more than ever, we will ensure that Google is there to help them.

Benjamin Treynor Sloss, Vice President, Engineering, Google’s Site Reliability Engineering Team, March 3, 2020

Failure is an inevitability (kind of depressing, we know). As scientists and engineers, you look at problems on the long scale and design systems to be optimally sustainable, scalable, reliable, and secure. But you’re designing systems with only the knowledge you currently have. And when implementing solutions, you do so without having complete knowledge of the future. You can’t always anticipate the next zero-day event, viral media trend, weather disaster, config management error, or shift in technology. Therefore, you need to be prepared to respond when these things happen and affect your systems.

One of Google’s biggest technical challenges of the decade was brought on by the COVID-19 pandemic. The pandemic created a series of rapidly emerging incidents that we needed to mitigate in order to continue serving our users. We had to aggressively boost service capacity, pivot our workforce to be productive at home, and build new ways to efficiently repair servers despite supply chain constraints. As the quotation from Ben Treynor Sloss details, Google was able to continue bringing services to the world during this paradigm-shifting sequence of incidents because we had prepared for it. For more than a decade, Google has proactively invested in incident management. This preparation is the most important thing an organization can do to improve its incident response capability. Preparation builds resilience. And resilience and the ability to handle failure become a key discipline for measuring technological success on the long scale (think decades). Beyond doing the best engineering that you can, you also need to be continually prepared to handle failure when it happens.

Resiliency is one of the critical pillars in company operations. In that regard, incident management is a mandatory company process. Incidents are expensive, not only in their impact on customers but also in the burden they place on human operators. Incidents are stressful, and they usually demand human intervention. Effective incident management, therefore, prioritizes preventive and proactive work over reactive work.

We know that managing incidents comes with a lot of stress, and finding and training responders is hard; we also know that some accidents are unavoidable and failures happen. Instead of asking “What do you do if an incident happens?” we want to address the question “What do you do when an incident happens?” Reducing the ambiguity in this way not only reduces human toil and responders’ stress, it also improves resolution time and reduces the impact on your users.

We wrote this report to be a guide on the practice of technical incident response. We start by building some common language to discuss incidents, and then get into how you encourage engineers, engineering leaders, and executives to think about incident management within the organization. We aim to cover everything from preparing for incidents, responding to incidents, and recovering from incidents to some of that secret glue that maintains a healthy organization which can scalably fight fires. Let’s get started.

What Is an Incident?

Incident is a loaded term. Its meaning can differ depending on the group using it. In ITIL, for example, an incident is any unplanned interruption, which might surface as a ticket, a bug, or an alert. No matter how the word is used, it’s important that you align on a specific definition to reduce silos and ensure that everyone is speaking the same language.1

At Google, incidents are issues that:

  • Are escalated (because they’re too big to handle alone)

  • Require an immediate response

  • Require an organized response

Sometimes an incident can be caused by an outage, which is a period of service unavailability. Outages can be planned; for example, during a service maintenance window in which your system is intentionally unavailable in order to implement updates. If an outage is planned and communicated to users, it’s not an incident—nothing is going on that requires an immediate, organized response. But usually, we’ll be referring to unexpected outages caused by unanticipated failures. Most unexpected outages are incidents, or become incidents.

Incidents can confuse customers. They can also cause lost revenue, damaged data, security breaches, and more, all of which ultimately affect your customers as well. When customers feel the impact of an incident, it can chip away at their trust in you as a provider. Therefore, to keep your customers happy, you want to avoid having “too many” incidents or incidents that are “too severe”; otherwise, they will leave.

Having many incidents can also impact your incident responders, since handling incidents can be stressful. It can be challenging and expensive to find site reliability engineers (SREs) with the right mix of skills to respond to incidents, so you don’t want to burn them out by designating them solely to incident response. Instead, you want to provide them with opportunities to grow their skills through proactive incident mitigation as well. We discuss this further later in this report, along with ways to reduce stress and improve the health of your on-call shifts.

Not Everything Is an Incident

Differentiating between incidents and outages is important. It’s also important to differentiate between metrics, alerts, and incidents. How do you categorize metrics versus alerts, and alerts versus incidents? Not every metric becomes an alert, and not every alert is an incident. To help you understand the meaning of these terms, we’ll start by discussing the role of monitoring and alerts to help maintain system health.

Monitoring

The most common way you keep watch over the health of your system is through monitoring. Monitoring,2 as defined by the Google SRE Book, means collecting, processing, aggregating, and displaying real-time quantitative data about a system, such as query counts and types, error counts and types, processing times, and server lifetimes. Monitoring is a type of measurement.

When it comes to measurement, we suggest taking a customer-centric approach to crafting both service-level objectives (SLOs; discussed in more detail in “Reducing the Impact of Incidents”) and your view of the customer experience. This means collecting metrics that are good indicators of the customer experience, and collecting a variety of measures, such as black-box, infrastructure, client, and application metrics, wherever possible. Measuring the same values using different methods provides redundancy and accuracy, since different measurement methods have different advantages. Customer-centric dashboards also serve as good indicators of the customer experience and are vital for troubleshooting and debugging incidents.
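To make the idea of measuring the same value from different vantage points concrete, here is a minimal sketch (in Python, with hypothetical numbers and a hypothetical threshold) that cross-checks a success-rate SLI reported by an external black-box prober against the one reported by the application itself; a large divergence suggests a broken measurement path as much as a broken service:

    # Minimal sketch, not production code: cross-check one SLI measured two ways.
    # The values and the threshold are hypothetical.
    blackbox_success_ratio = 0.9992      # as seen by an external prober
    application_success_ratio = 0.9931   # as reported by the servers themselves

    DIVERGENCE_THRESHOLD = 0.002

    if abs(blackbox_success_ratio - application_success_ratio) > DIVERGENCE_THRESHOLD:
        print("SLI measurements disagree; check the measurement pipelines "
              "as well as the service itself.")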

It’s also important that your focus is on measuring reliability and the impact on your users, instead of on measuring the number of incidents that have been declared. If you focus on the latter, people will hesitate to declare an incident for fear of being penalized. This can lead to late incident declarations, which are problematic not only in terms of loss of time and loss of captured data, but also because an incident management protocol does not work well retroactively. Therefore, it’s better to declare an incident and close it afterward than to open an incident retroactively.

In that regard, people sometimes use the terms reliability and availability interchangeably, but reliability is more than just “service availability,” especially in complex distributed systems. Reliability is the ability to provide a consistent level of service at scale. It includes different aspects, such as availability, latency, and accuracy. This can (and should) translate differently in different services. For example, does reliability mean the same for YouTube and Google Search? Depending on your service, your users’ expectations will be different, and reliability can mean different things.

As a rule of thumb, a system is more reliable if it has fewer, shorter, and smaller outages. Therefore, what it all comes down to is the amount of downtime your users are willing to tolerate. As you take a customer-centric approach, the user defines your reliability. Consequently, you need to measure the user experience as closely as possible. (We discuss this in more detail in “Reducing the Impact of Incidents”.)
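As a back-of-the-envelope illustration (the availability targets below are hypothetical, not recommendations), an availability target translates directly into the downtime budget your users are implicitly tolerating:

    # Sketch: translate an availability target into allowed downtime per 30-day month.
    MINUTES_PER_30_DAY_MONTH = 30 * 24 * 60  # 43,200 minutes

    for target in (0.99, 0.999, 0.9999):
        allowed_minutes = MINUTES_PER_30_DAY_MONTH * (1 - target)
        print(f"{target:.2%} availability -> about {allowed_minutes:.1f} "
              f"minutes of downtime per month")

A 99.9% target, for example, leaves roughly 43 minutes of downtime per 30-day month; whether that is acceptable depends entirely on what your users expect from the service.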

Alerting

We’ve discussed monitoring for system health. Now let’s talk about the key component of monitoring: alerting. When monitoring identifies something that is not behaving as expected, it sends a signal that something is not right. That signal is an alert. An alert can mean one of two things: something is broken and somebody needs to fix it; or something might break soon, so somebody should take a look. The sense of urgency—that is, when an action needs to be taken—should direct you to choose how to respond. If an immediate (human) action is necessary, you should send a page. If a human action is required in the next several hours, you should send an alert. If no action is needed—that is, the information is needed in pull mode, such as for analysis or troubleshooting—the information remains in the form of metrics or logs.

Note that the alerting method can be different depending on the organization’s preferences—for example, it can be visible in a dashboard or presented in the form of a ticket. At Google, it’s usually the latter; a “bug” with different priorities is opened in the Google Issue Tracker by the monitoring system, which is our form of ticketing.
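The routing decision described here can be summarized in a short sketch (the names and destinations are hypothetical and do not represent Google’s actual tooling):

    # Hypothetical sketch of urgency-based routing for monitoring signals.
    import enum

    class Urgency(enum.Enum):
        IMMEDIATE = "immediate"   # a human must act right now
        SOON = "soon"             # a human should act within hours
        NONE = "none"             # informational; consumed in pull mode

    def route_signal(description: str, urgency: Urgency) -> str:
        if urgency is Urgency.IMMEDIATE:
            return f"PAGE the on-caller: {description}"
        if urgency is Urgency.SOON:
            return f"FILE a ticket/bug: {description}"
        return f"KEEP as metrics/logs: {description}"

    print(route_signal("error rate is burning through the SLO", Urgency.IMMEDIATE))
    print(route_signal("disk is forecast to fill within days", Urgency.SOON))
    print(route_signal("per-request latency histogram", Urgency.NONE))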

Now that you know the basics, let’s take a deeper dive into alerting by discussing actionable alerts.

The Importance of Actionable Alerts

As we noted, an alert can trigger when a particular condition is met. You must be careful, however, to only alert on things that you care about and that are actionable. Consider the following scenario: as the active on-caller, you are paged at 2 a.m. because QPS (queries per second) has increased by 300% in the past 5 minutes. Perhaps this is a bursty service—there are periods of steady traffic, but then a large client comes and issues thousands of queries for an extended period of time.

What was the value in getting you out of bed in the middle of the night for this? There was no value, really. This alert was not actionable. As long as the service was not at risk of falling over, there was no reason to get anybody out of bed. Looking at historical data for your service would show that the service needs to be able to handle such traffic spikes, but the spikes themselves are not problematic and should not have generated any alerts.
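One way to encode that lesson (a sketch with made-up thresholds and field names, not a prescription) is to page on user-visible symptoms such as error rate or saturation rather than on the traffic spike itself:

    # Illustrative sketch only: page on symptoms, not on a raw traffic spike.
    # Thresholds and field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ServiceSnapshot:
        qps: float                # current queries per second
        baseline_qps: float       # typical traffic for this time of day
        error_ratio: float        # fraction of requests failing
        cpu_utilization: float    # 0.0-1.0 across the serving fleet

    def should_page(s: ServiceSnapshot) -> bool:
        # A traffic spike alone (say, qps > 3 * baseline_qps) is not actionable.
        # Page only on user-visible errors or imminent saturation.
        return s.error_ratio > 0.01 or s.cpu_utilization > 0.90

    bursty_but_healthy = ServiceSnapshot(qps=4000, baseline_qps=1000,
                                         error_ratio=0.001, cpu_utilization=0.55)
    print(should_page(bursty_but_healthy))  # False: let the on-caller sleep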

Now let’s consider a more nuanced (yet much more common) version of the actionable alerting problem. Your company requires regular backups of your production database, so you set up a cronjob that runs every four hours to make those backups. One of those runs failed because of a transient error—the replica serving the backup had a hardware failure, and was automatically taken out of serving mode by the load balancer—and subsequent runs of the backup completed successfully. A ticket is nonetheless created as a result of the failed run.

Creating a ticket because of one failed backup run is unnecessary. This would only result in noise, since the system recovered itself without human interaction.
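A common way to avoid this noise (sketched below with hypothetical names and thresholds) is to escalate only when failures persist, for example after several consecutive failed runs or when the newest good backup is older than a freshness threshold:

    # Sketch: only escalate when backup failures persist, not on a single
    # transient failure. Names and thresholds are hypothetical.
    from datetime import datetime, timedelta

    MAX_BACKUP_AGE = timedelta(hours=12)   # backups run every four hours
    MAX_CONSECUTIVE_FAILURES = 3

    def backup_needs_attention(last_success: datetime,
                               consecutive_failures: int,
                               now: datetime) -> bool:
        too_stale = now - last_success > MAX_BACKUP_AGE
        too_many_failures = consecutive_failures >= MAX_CONSECUTIVE_FAILURES
        return too_stale or too_many_failures

    now = datetime(2024, 1, 1, 12, 0)
    # One transient failure, last good backup four hours ago: no ticket needed.
    print(backup_needs_attention(now - timedelta(hours=4), 1, now))   # False
    # Three failures in a row, newest good backup 13 hours ago: escalate.
    print(backup_needs_attention(now - timedelta(hours=13), 3, now))  # True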

These scenarios happen often. And although they end by simply closing the ticket with a “This was fine by the time I got to it” message, this behavior is problematic, for a few reasons:

Toil

Someone had to spend time looking at the ticket, looking at graphs/reports, and deciding that they didn’t need to do anything.

Alert fatigue

If 95% of the “Database backups failed” alerts are simply closed, there’s a much higher risk that an actual problem will go unnoticed.

As discussed earlier, an incident is an issue with particular characteristics. An alert is merely an indicator that can be used to signal that an incident is underway. You can have many alerts with no incidents. While this is an unfortunate situation, it doesn’t mean you need to invoke formal incident management techniques; perhaps this is a planned maintenance event and you were expecting to receive these alerts as part of the maintenance process.

You can also have an incident without any alerts—maybe you were briefed by the security team that they suspect there’s been a breach of your production systems; your team didn’t have any alerts of their own that triggered for this particular condition.

More practically speaking, there are differences in how humans perceive alerts versus incidents:

  • It’s much more stressful to do formal incident management as opposed to simply fixing an alert.

  • Less experienced responders are less likely to invoke an incident than more experienced responders.

  • Incidents are much more likely to require additional team resources, so declaring one helps nonresponders gauge whether they need to start looking at the active issue sooner rather than later.

This applies not just within your team. In fact, it applies across the entire organization.

You typically have many more alerts than incidents. It’s useful to get basic metrics around alerts (e.g., how many alerts there are per quarter), but incidents deserve taking a closer look (e.g., you had five major incidents last quarter, and they were all because of a new feature being rolled out which didn’t have enough testing in pre-prod). You don’t want to pollute these reports with all the alerts that you received. Consider the audience—alert metrics are primarily useful to the team, but incident reports will probably be read by higher-ups and should be scoped accordingly.

Hopefully, this clarifies when you can more confidently say, “This is not an incident.” However, this statement creates a dichotomy: if some things aren’t incidents, that means some things are incidents. How do you handle those? We’ll look at that in the next section.

The Incident Management Lifecycle

Optimal incident management doesn’t simply mean incidents are managed as quickly as possible. Good incident management means paying attention to the whole lifecycle of an incident. In this section, we discuss a programmatic approach to incident management. Think about incidents as a continuous risk existing in your system. The process of dealing with such risks is called the incident management lifecycle. The incident management lifecycle encompasses all of the necessary activities to prepare for, respond to, recover from, and mitigate incidents. This is an ongoing cost of an operational service.

By lifecycle, we mean every stage of an incident’s existence. These stages are shown in Figure 1-1 and described as follows:

Preparedness
This encompasses all the actions a company or team takes to prepare for the occurrence of an incident. This can include engineering safety measures (such as code reviews or rollout processes), incident management training, and experiments or testing exercises that are conducted to identify errors. This also includes setting up any monitoring or alerting.
Response
This is what happens when a trigger turns a latent hazard into an active issue. It involves responding to an alert, deciding whether the issue is an incident, and communicating about the incident to impacted individuals.
Mitigation and Recovery
This is the set of actions that allow a system to restore itself to a functional state. These include the urgent mitigations needed in order to avoid impact or prevent growth in impact severity. Recovery includes the systems analysis and reflection involved in conducting a postmortem. A postmortem is a written record of an incident, and it includes the actions taken, impact, root causes, and follow-up actions needed to prevent recurrence and/or reduce future impact.
Figure 1-1. The incident management lifecycle

Once the recovery phase closes, you’re thrust back into the preparedness phase. Depending on the size of your stack, it’s possible that all of these phases occur simultaneously—but you can expect at least one phase to be in progress at any given time.

1 An incident is defined as an unplanned interruption or reduction in quality of an IT service (a service interruption). ITIL_Glossary: Incident.

2 Rob Ewaschuk, “Monitoring Distributed Systems”, in Site Reliability Engineering, ed. Betsy Beyer, Chris Jones, Niall Richard Murphy, and Jennifer Petoff (Sebastopol, CA: O’Reilly Media, 2016).
