How continuous delivery helps security keep up with change

Navigating the accelerating velocity of change in DevOps.

By Jim Bird
August 16, 2016
Bike courier at night (source: Skitterphoto via Pixabay)

The velocity of change in IT continues to increase. It was already a serious challenge for security and compliance when Agile development teams began delivering working software in one- or two-week sprints. But the speed at which some DevOps shops initiate and deliver changes boggles the mind. Organizations like Etsy push changes to production 50 or more times each day. Amazon has thousands of small (“two pizza”) engineering teams working independently and continuously deploying changes across its infrastructure. In 2014, Amazon deployed 50 million changes: that’s more than one change deployed every second of every day.[1]

So much change so fast…


How can security possibly keep up with this rate of change? How can infosec understand the risks, and how can they manage those risks, when there is no time for pen testing or audits, no place to put in control gates, and no way to squeeze in a security sprint or hardening sprint before the system is released to production?

Use the speed of continuous delivery to your advantage

The speed at which DevOps moves can seem scary to infosec analysts and auditors. But security can take advantage of the speed of delivery to respond quickly to security threats and deal with vulnerabilities.

A major problem that almost all organizations face is that even when they know that they have a serious security vulnerability in a system, they can’t get the fix out fast enough to stop attackers from exploiting the vulnerability.

The longer vulnerabilities are exposed, the more likely it is that the system will be, or already has been, attacked. WhiteHat Security, which provides a service for scanning websites for security vulnerabilities, regularly analyzes and reports on vulnerability data that it collects. Using data from 2013 and 2014, WhiteHat found that 35 percent of finance and insurance websites were “always vulnerable,” meaning that these sites had at least one serious vulnerability exposed every single day of the year. The stats for other industries and government organizations were even worse. Only 25 percent of finance and insurance sites were vulnerable for less than 30 days of the year. On average, serious vulnerabilities stayed open for 739 days, and only 27 percent of serious vulnerabilities were fixed at all, because of the costs, risks, and overhead involved in getting patches out.

Continuous Delivery, with developers, operations, and infosec working closely together, can close these vulnerability windows. Most security patches are small and don’t take long to code. A repeatable, automated Continuous Delivery pipeline means that you can write a fix (or download a patch from a vendor), test that it doesn’t introduce a regression, and get it out quickly, with minimal cost and risk. This is in direct contrast to the “hot fixes” done under pressure that have led to failures in the past.
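The patch path described above can be sketched as a sequence of automated, gated stages. This is a minimal illustration only: the stage names, ordering, and placeholder commands below are assumptions, not a prescription for any particular CI/CD tool, and a real pipeline would invoke actual build, test, and deploy commands where the placeholders sit.

```shell
#!/bin/sh
# Sketch of an automated delivery pipeline for a security patch.
# All stage names and commands are hypothetical placeholders ("true"
# stands in for a real command); a real pipeline would run these
# stages in a CI/CD tool with the same fail-fast gating.
set -e  # abort the pipeline as soon as any stage fails

run_stage() {
    name="$1"; shift
    echo "stage: $name"
    "$@" || { echo "stage failed: $name" >&2; exit 1; }
}

run_stage "build"          true   # compile the patched code
run_stage "unit-tests"     true   # catch regressions introduced by the fix
run_stage "security-scan"  true   # confirm the vulnerability is closed
run_stage "deploy-staging" true   # exercise the patch in a production-like env
run_stage "smoke-tests"    true   # quick end-to-end sanity check
run_stage "deploy-prod"    true   # same automated path as any other change

echo "patch released"
```

The point of the sketch is that a security fix rides the same repeatable, fail-fast path as every other change, so shipping it carries no special ceremony or extra risk.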

Speed also lets you make meaningful risk and cost trade-off decisions. Recognizing that a vulnerability might be difficult to exploit, you can decide to accept the risk temporarily, knowing that you don’t need to wait for several weeks or months until the next release, and that the team can respond quickly with a fix if it needs to.

Speed of delivery now becomes a security advantage instead of a source of risk.

The honeymoon effect

There appears to be another security advantage to moving fast in DevOps. Recent research shows that smaller, more frequent changes can make systems safer from attackers by means of the “Honeymoon Effect”: software that has recently been changed enjoys a grace period in which it is attacked less often, while older, unchanged software is easier to attack.

Attacks take time. It takes time to identify vulnerabilities, time to understand them, and time to craft and execute an exploit. This is why many attacks are made against legacy code with known vulnerabilities. In an environment where code and configuration changes are rolled out quickly and changed often, it is more difficult for attackers to follow what is going on, to identify a weakness, and to understand how to exploit it. The system becomes a moving target. By the time attackers are ready to make their move, the code or configuration might have already been changed and the vulnerability might have been moved or closed.

To some extent relying on change to confuse attackers is “security through obscurity,” which is generally a weak defensive position. But constant change should offer an edge to fast-moving organizations, and a chance to hide defensive actions from attackers who have gained a foothold in your system, as Sam Guckenheimer at Microsoft explains:

“If you’re one of the bad guys, what do you want? You want a static network with lots of snowflakes and lots of places to hide that aren’t touched. And if someone detects you, you want to be able to spot the defensive action so that you can take countermeasures…. With DevOps, you have a very fast, automated release pipeline, you’re constantly redeploying. If you are deploying somewhere on your net, it doesn’t look like a special action taken against the attackers.”
