Chapter 4. Democratizing Security

Imagine if every software engineer in your organization were a former attacker. They could look at their team’s feature or product and quickly brainstorm how they could benefit from compromising it and what steps they would take to most easily do so. After daydreaming this attacker fantasy for a bit, they could snap back to reality and propose design improvements that would make it harder for attackers to take the steps they imagined. While having this kind of feedback loop on each one of your engineering teams may seem like its own fantasy, it’s more easily realized than you might imagine.

A distributed, democratized security program can accomplish these goals. What do we mean by making defense “democratized”? It represents a security program supported by broad, voluntary participation with benefits accessible to everyone. It means that security efforts are explicitly neither isolated nor exclusive. Like a democracy, it must serve all stakeholders and involve participation by those stakeholders. Specifically, an organization’s team of defenders can’t just consist of security people—it must also include members of product and engineering teams who are building the systems whose security the defenders must challenge.

In this chapter, we’ll explore what the critical function of defenders—alternative analysis—entails and how a democratized security program, such as a Security Champions program, fits into SCE.

What Is Alternative Analysis?

Before we dive into how democratized security is enabled by SCE (and vice versa!), we should probably answer the question “What is alternative analysis?” In modern information security, the function of alternative analysis is typically performed by the red team, for which we can borrow a definition from the military domain: “Red teaming is a function that provides commanders an independent capability to fully explore alternatives in plans, operations, concepts, organizations, and capabilities in the context of the operational environment (OE) and from the perspective of partners, adversaries, and others.”1

Of course, there aren’t “commanders” in the enterprise (as much as some CISOs, CIOs, and CTOs perhaps wish they were!), but swap that term out for “team leaders” and the “operational environment” for your “organizational environment.” The end goals—which will underpin the goals of a democratized security program—include improving your organization’s decision-making by broadening the understanding of its environment; challenging assumptions during planning; offering alternative perspectives; identifying potential weaknesses in architecture, design, tooling, processes, and policies; and anticipating the implications—especially around how your adversaries will respond—of any potential next steps.2

Fundamentally, red teams should be used to challenge the assumptions held by organizational defenders (often called the “blue team”)—a practice known as alternative analysis. This idea is centuries old3 and essentially revolves around anticipating what could go wrong, analyzing situations from alternative perspectives to support decision-making, and psychologically hardening yourself to things going wrong.

Applied to our digital world today, alternative analysis means thinking through how things can be broken so you can build them more securely. Challenging the assumptions underpinning your defensive strategy uncovers flaws and weaknesses that could otherwise lead to defensive failure. And, as in the ancient Stoic tradition, ruminating on how failure can manifest from your current system state makes you more psychologically resilient to failure, too.

Critically, alternative analysis is not about defenders employing method acting to fully embody the role of a real attacker. It’s not helpful for a red team to sit separately from everyone else and focus exclusively on pwning systems, taking delight in uncovering failures in the enterprise’s security program. This may be a ton of fun for them! But such a narrow focus creates a shallow feedback loop. If red teams aren’t helping to level up shared knowledge, they aren’t performing true alternative analysis and are, in essence, just hunting for sport.

Alternative analysis is a powerful tool in the SCE arsenal, but how should it be implemented? A democratized security program (like the one detailed in “The Security Champions Program”) offers a pragmatic, collaborative solution toward attaining the benefits of improved organizational decision-making.

Distributed Alternative Analysis

As you might imagine, red teams are expensive. The types of professionals who possess the necessary attack expertise combined with critical thinking skills are in hot demand. Luckily, SCE can provide a form of automated red teaming, as it continually challenges the assumptions you hold about your systems. It helps you view your systems from an alternative perspective, by subjecting those systems to failure.

However, building a distributed team dedicated to viewing problems from an adversary’s perspective (or, more generally, from alternative perspectives) is complementary to an SCE approach, and can further enhance your organization’s defensive decision-making. The devil’s advocacy that such a team provides can help you refine your SCE experiments and determine what automated testing is appropriate. But how can you accomplish this on a tight budget?
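To make this concrete, an SCE experiment that challenges a defensive assumption can be sketched in a few lines. The example below is a hypothetical, self-contained toy (the `Bucket` and `Monitor` classes and the misconfiguration scenario are invented for illustration, not drawn from any real tooling): it states a steady-state hypothesis, injects a failure an attacker or a simple mistake could cause, and then checks whether detection actually happened.

```python
# Toy SCE experiment challenging the assumption:
# "our monitor flags any world-readable storage bucket."
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class Bucket:
    name: str
    public: bool = False  # simulated access setting


@dataclass
class Monitor:
    findings: list = field(default_factory=list)

    def scan(self, buckets):
        # The detection logic whose behavior the experiment verifies.
        for b in buckets:
            if b.public:
                self.findings.append(f"ALERT: {b.name} is public")


def run_experiment():
    buckets = [Bucket("billing-data")]
    monitor = Monitor()

    # Steady-state hypothesis: no findings before injection.
    monitor.scan(buckets)
    assert monitor.findings == []

    # Inject the failure condition an attacker (or a mistake) could cause.
    buckets[0].public = True
    monitor.scan(buckets)

    # The assumption holds only if the monitor actually detected it.
    return monitor.findings


print(run_experiment())
```

The value of an experiment like this is that it tests the defensive assumption continuously and automatically, rather than leaving it to a one-off red team engagement.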

A rotational red team program can teach engineers how attackers think through the (usually) fun practice of compromising real systems, toward the end goal of distributing alternative analysis capabilities across your product teams. Think of this like spreading the seeds of “attacker math” knowledge across your engineering organization. You can populate your engineering teams with fresh security insight that can improve decision-making. And your security teams, by virtue of interacting with these engineers, will be exposed to new perspectives that can improve how the security program serves the organization.

How is the rotational red team composed? The red team should be anchored by leaders with attack experience (ideally with a genuine appreciation for organizational and development priorities, too). Crucially, they should be aligned with the foremost goal of contributing alternative analysis to improve organizational decision-making. Each leader should be capable of training the team members about alternative analysis and attack methodology. The team itself should consist primarily of engineers from each product team who can learn—and practice—how attackers think.

If your organization has a lot of different products or teams, we recommend starting with engineers from a subset of teams that exhibit the most interest in participating in the program. As Dr. Forsgren’s State of DevOps research shows,4 the two approaches that work best for fostering change are communities of practice and grassroots. A communities-of-practice approach involves groups that share common interests and foster shared knowledge with each other and across the organization—very much in line with the overall rotational red team program’s raison d'être. A grassroots approach involves small teams working closely together to pool resources and share their success across the organization to gain buy-in—which, in the rotational red team case, could start with the security team and one or two product teams eager to build in security by design.

The goal of the training is for engineers to immediately provide value vis-à-vis alternative analysis once they rejoin their product team full time. Of course, you don’t want your program to slow down development long term, so you should carefully consider what constitutes a sustainable portion of the engineer’s time that is dedicated to learning attacker math. Optimize for what will help level up the engineer’s ability to perform alternative analysis, while ensuring that they are not out of the loop on their product so long that they must relearn the system context.

As foreshadowed in Chapter 2, decision trees are invaluable tools to support democratized alternative analysis. Engineers participating in the rotational program will be leveling up their attacker math skills by having to apply that math to the real-world systems they’re usually building or operating. The training, in essence, involves engineers creating their own branches in the tree as they conduct their attack to achieve a particular goal. Ideally, they will document the actions they took in a tree format to share with their product team postrotation. As a result, participants can go back to their product teams equipped with newfound expertise to inform what attack branches should be included on the trees. For existing decision trees, participants can poke holes in existing assumptions and propose amendments to the branches. With this experience, they can leverage the attacker mindset to consider how an attacker would respond to a proposed security control or mitigation—helping fuel second-order (and beyond!) thinking to improve the team’s decision-making.
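As a sketch of what that documentation might look like, here is a hypothetical attack decision tree in code (the `Branch` structure, the attacker actions, and the mitigations are all invented for illustration): each node records an action an attacker could take, an optional mitigation the team proposes, and the child actions that could follow it.

```python
# Hypothetical sketch of an attack decision tree a rotation participant
# might document for their product team. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Branch:
    action: str                  # step an attacker could take
    mitigation: str = ""         # control the team proposes against it
    children: List["Branch"] = field(default_factory=list)

    def add(self, child: "Branch") -> "Branch":
        self.children.append(child)
        return child


def render(node: Branch, depth: int = 0) -> List[str]:
    # Render the tree as indented lines for sharing post-rotation.
    note = f"  [mitigate: {node.mitigation}]" if node.mitigation else ""
    lines = ["  " * depth + node.action + note]
    for child in node.children:
        lines.extend(render(child, depth + 1))
    return lines


root = Branch("Goal: read customer API keys")
phish = root.add(Branch("Phish an engineer", "phishing-resistant MFA"))
phish.add(Branch("Pivot to CI secrets", "scope CI tokens per pipeline"))
print("\n".join(render(root)))
```

A participant returning from rotation could amend a tree like this as assumptions are challenged—adding new branches for attack steps they learned, or attaching mitigations the team agrees to implement.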

The most promising implementation of the democratized security model is via a Security Champions program. We’ll now turn to a high-level overview of how this program looks in practice and what benefits it brings to organizations.

The Security Champions Program

A Security Champions program is a set of processes involving one or more existing members of an engineering team taking on additional responsibility in the domain of security. The goal of a security champion is to create a more accurate and consistent communication pipeline between their engineering team and the security teams. As a result, organizations gain a significantly improved understanding of the risk they are taking on through new product development and feature design.

If executed correctly, many tasks of security engineers can be handed off and handled within the product teams themselves, and the security engineering team simply guides and leads the champions through complex decision-making. The Security Champions program promotes the continuous exchange of information about each product, direction, and systems interconnectivity. It also spreads knowledge around security practices to product teams, which the security champion ensures are understood and followed.

At Twilio, the Security Champions program allows the integration of security features into products at a much faster cadence. Security champions are now the point of contact when it comes to reporting and resolving bugs and other security issues. Mitigation efficacy is improved, too, along with the speed at which mitigations are introduced. Fewer bugs reach SLA limits and instead get resolved within defined time frames, allowing for risk to be better controlled.

Now that the program has run for a few years at Twilio, engineers on different product teams actively volunteer for and ask to engage in the Security Champions program. They now see security challenges in the same way as load-balancing challenges or programmatic bugs: as part of their everyday responsibilities. Being a member of this program and continuously meeting with a security partner allows them to solve these challenges quickly and efficiently.

1 University of Foreign Military and Cultural Studies, The Applied Critical Thinking Handbook (formerly the Red Team Handbook), vol. 5 (April 15, 2011), p. 1.

2 Toby Kohlenberg, “Red Teaming Probably Isn’t for You,” SlideShare, October 27, 2017.

3 “The Roots of Red Teaming (Praemeditatio Malorum),” The Red Team Journal, April 6, 2016; Valeri R. Helterbran, Exploring Idioms: A Critical-Thinking Resource for Grades 4–8 (Gainesville, FL: Maupin House Publishing, 2008).

4 Forsgren, Smith, Humble, and Frazelle, 2019 Accelerate State of DevOps Report.
