Chapter 1. The Software Industrial Revolution Has Arrived

We asked developers from a variety of organizations and industries a simple question: how long will your application continue to work if you are not allowed to touch its code? What we found was that the more modern the application, the more quickly it would be in jeopardy.

The reason? Software used to be built. Today, it is increasingly assembled from off-the-shelf components in the form of open source software (OSS) and third-party APIs. Unlike similar components we might use to build a physical product, software components evolve independently at their own rate. Enterprise organizations don’t control that evolution. And what happens when you fail to keep up? The software becomes less secure, gets harder to maintain, and eventually stops working.

As an industry, we’ve come to treat keeping assembled software components up to date as technical debt, a term originally used to describe self-imposed coding issues that become debt to the organization, compounding daily. One survey suggests that 30% of daily engineering time is spent on technical debt, and that share is increasing year over year as codebases continue to grow. Yet technical debt amassed from third-party code is beyond a development team’s control, and teams are often blamed for accruing this debt and failing to keep up with evolving third-party software. This can be hugely demoralizing and demotivating. We should not have to accept this kind of debt!

Code remediation is a standard practice development teams use to whittle down technical debt, whether that means addressing security vulnerabilities, migrating frameworks, updating dependencies, or fixing code quality issues. All of these tasks involve manual line-by-line, repository-by-repository code changes that are tedious and error prone. Because they consume a significant portion of a developer’s day-to-day work, they take time away from work that delivers business value.

Now, let’s imagine a world where code remediation is automated and the value this could bring to your organization. By automating the tedious, time-consuming work of eliminating technical debt, your teams can address vulnerabilities more quickly and holistically, and developers can be more productive on the work that matters to the business. Let’s find out how you get there from here.

This chapter examines the growing complexity of software today, trends in tackling the resulting debt, and why manual practices are no longer sufficient.

The Assembled Software Supply Chain

We are in the midst of the software industrial revolution, with more and more software rapidly assembled from third-party components. Custom software is integrated with components provided by vendors, cloud providers, and open source software, with as much as 90% of code coming from such dependencies. This is now commonly known as the software supply chain.

The software supply chain includes everything that goes into or touches software as it is being produced: developer tools (IDEs, CI/CD), third-party software (dependencies), language runtimes and frameworks, and monitoring and testing frameworks. It allows us to build applications much more quickly, but it also makes maintenance much harder, because so much of the code is no longer under our control.

These composed applications have a life of their own. Third-party dependencies change and evolve at their own pace. Software vulnerabilities can be unintentionally introduced by anyone at any time. APIs are added, deprecated, and deleted. For example, a third-party vendor or OSS maintainer can make a change to their API and now your organization is on the hook to update your applications before they break.

Additionally, these assembled applications create a larger attack surface, with many vulnerabilities lying dormant until they are exploited. The number of identified common vulnerabilities and exposures (CVEs) has increased dramatically over the past few years, surpassing 25,000 in 2022. And a recent audit of 1,250 commercial codebases found that 75% contained open source components with known security vulnerabilities.

The complexity of the software supply chain requires development teams to be ever vigilant and constantly update their code to keep it all secure and working. However, it’s an impossible task. There are too many dependencies, too many repositories, and too many vulnerabilities. Manual developer labor simply cannot keep up.

How Much Code Are We Really Talking About?

The growth in code is explosive as companies face pressure to deliver value to their customers at an increasingly rapid pace. It’s not uncommon for organizations to have hundreds of millions of lines of proprietary source code. Even small organizations can have 20 million lines of their own code. It really doesn’t take very long to accrue. A company we work with has gone from 40 to 1,200 source code repositories in the last eight years.

One financial organization in particular has a comparatively small codebase, totaling 500 million lines of proprietary source code. This is still a mind-boggling number. To put this in perspective, consider an O’Reilly book, with roughly 60 lines to a page. If we were to simply print the 500 million lines of source code in paperbacks and stack them end to end, they would stretch for 75 miles. The amazing thing is that the lion’s share of this code—the third-party assembled code—is not even being included in this count. Including third-party software would extend this line of books from Miami to Montréal.

Many organizations are building these massive codebases with little to no increase in engineering head count and instead are aided by the sophistication of frameworks, third-party libraries, IDEs, methodologies, and techniques. It’s no wonder that the thought of maintaining this much code can feel so intractable!

Scanning and Search: Visibility Without Action

Scanning is not new technology; it originated before the massive code explosion to help developers identify problems in their own code. Fast-forward to today, and scanning still helps developers identify problems in their own code, now with a focus on understanding dependencies and finding vulnerabilities. Organizations employ multiple scanning solutions, such as static application security testing (SAST) and software composition analysis (SCA) tools, with one study showing that 90% of applications are scanned more than once a week.

Then there are code search tools that use declarative search syntax (e.g., Sourcegraph, Comby, Semgrep), enabling developers to find specific terms. These systems index code for efficient retrieval against predefined terms, which can limit developers’ view into the codebase: any enrichment to the search syntax affects the index, potentially requiring reindexing of the whole codebase before a new search term can be used. The work becomes analogous to managing a database rather than a flexible, scalable way to find issues across a codebase.
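To make the declarative style concrete, here is a minimal rule in Semgrep’s YAML format. The rule id, message, and flagged API are illustrative choices for this sketch, not anything prescribed by a particular organization’s policy:

```yaml
rules:
  - id: avoid-md5                  # hypothetical rule name
    pattern: hashlib.md5(...)      # match any call to hashlib.md5
    message: MD5 is not collision resistant; prefer hashlib.sha256
    languages: [python]
    severity: WARNING
```

A rule like this surfaces every matching call site, but note what it does not do: it reports the problem and leaves the fix to a developer.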

Scanning and search tools can integrate with source code management (SCM) and continuous integration (CI) systems to report to developers in real time and to block code from shipping to production when vulnerabilities and compliance issues are identified. Unfortunately, these tools can also generate noise that goes unactioned, because developers must simultaneously drive business priorities. This puts tremendous pressure on development teams to fix the issues.

Consequently, it’s no surprise that existing SAST, SCA, and search solutions may claim to be auto-fixing code, but let’s be real about what that means. Updating dependency version numbers and issuing batch pull requests (PRs) is not actually fixing the source code. It’s hard to add automated refactoring capabilities to technology built for searching and scanning. We think of these solutions as assistance to manual remediation: helpful, but at the end of the day, not enough.

The biggest concern remains: developers must still dedicate time and energy to review issues and fix the code, and there is no guarantee that they will have the accountability or time required to do so.

DevSecOps Shifts Security Burden onto Every Developer

Security has been shifting left to the earliest stages of the software development lifecycle as part of DevSecOps practices, increasing the responsibility of developers to address vulnerabilities. Because developer time is precious and business priorities take precedence, developers primarily focus on preventing vulnerabilities and bugs from getting introduced to new or modified code. Existing, untouched code often gets a pass until a vulnerability is discovered. Then it’s typical for only the most critical defects in the most critical applications to be remediated.

Unfortunately, recent developer data indicates that almost 50% of developers knowingly ship vulnerabilities in their code, and this is something known by the business (after all, it’s not any worse than the code that is already in production). This becomes technical debt that can be exploited at any time. That’s why even applications not actively worked on by developers still need to be remediated.

We often rely on software bills of materials (SBOMs) to understand which software in production contains known vulnerabilities. While an SBOM provides an inventory of the components and dependencies involved in the development and delivery of an application, it can be an insufficient record. Consider SolarWinds, where an attack on the CI system inserted code into an application as it was being built. That code was not revealed in the list of libraries nor found in the source code (thus, it was not part of the SBOM), but it still required remediation. When a new vulnerability is discovered in a widely used framework, it’s usually specific to a particular use case, such as an API call. How would you know whether your application uses that call and is therefore affected? You’d need a detailed understanding of the code and all its dependencies to make informed decisions.
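As a sketch of what “knowing whether you call the vulnerable API” involves, even the crudest answer requires a codebase-wide search. The function name `parse_xml` below is purely hypothetical, standing in for an API call flagged by a CVE:

```python
import tempfile
from pathlib import Path

# hypothetical API name standing in for a call flagged by a CVE
VULNERABLE_CALL = "parse_xml("

def affected_files(root: Path) -> list[str]:
    """Naively flag Python files that reference the vulnerable call."""
    return sorted(p.name for p in root.rglob("*.py")
                  if VULNERABLE_CALL in p.read_text(errors="ignore"))

# demo against a throwaway source tree
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "uses.py").write_text("data = parse_xml(payload)\n")
    (root / "clean.py").write_text("print('hello')\n")
    hits = affected_files(root)

print(hits)  # only uses.py references the flagged call
```

A real tool must go much further than this string match, resolving imports, aliases, wrappers, and transitive dependencies, which is exactly why the detailed code understanding described above is so hard to assemble by hand.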

New US government policy aims to change the dynamic of end users suffering through identity theft because organizations ship vulnerabilities to production. According to the latest National Cybersecurity Strategy issued by the White House: “This strategy recognizes that robust collaboration, particularly between the public and private sectors, is essential to securing cyberspace. It also takes on the systemic challenge that too much of the responsibility for cybersecurity has fallen on individual users and small organizations.” We all must work to defend the critical infrastructure of our society.

Migration Engineering Is Impossibly Manual

Code migration work is labor intensive, chaotic, and clerical. While some vulnerabilities can be closed by upgrading dependency versions when patches are available, others require changes to the application source code, and dependency upgrades themselves can often break code. Some fixes are straightforward, like changing call sites to match a new API signature. Others are more complex, involving multiple major lifts and requiring the expertise of migration engineers. And here’s one truism: the longer you put off migrations, the more complex they become.
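A hedged sketch of what a “straightforward” signature fix looks like when automated: the snippet below uses Python’s ast module to rename a call site mechanically rather than by hand. The names get_user and fetch_user are invented for illustration:

```python
import ast

class RenameCall(ast.NodeTransformer):
    """Rewrite calls to get_user(...) as fetch_user(...)."""
    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "get_user":
            node.func.id = "fetch_user"
        return node

source = "u = get_user(42)\nprint(u.name)\n"
tree = RenameCall().visit(ast.parse(source))
migrated = ast.unparse(ast.fix_missing_locations(tree))
print(migrated)  # the call site now uses the new API name
```

Note that ast.unparse discards comments and formatting; production-grade refactoring tools work on lossless syntax trees precisely so that automated changes are indistinguishable from careful manual edits.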

Note

Migration engineering is a relatively new designation applied to the complex and expert work of coordinating and migrating software from older to newer versions to take advantage of the latest capabilities, improve overall security posture, and avoid software obsolescence.

Major migrations are always risky, as a lot of change needs to happen across multiple repositories, often in a coordinated way. Massive coordination efforts can lead to production outages because they are manual and error prone. This is why businesses try to avoid them at all costs. But then a vulnerability is identified, and there is no choice. Teams scramble to fix the code, worrying that an exploit is coming, and it becomes a race between your developers fixing it and the bad actors attacking it.

Because we (mistakenly) classify such activities as technical debt, business owners in the Agile environment typically do not let developers proactively migrate and update their dependencies. Developers are told to take care of technical debt while simultaneously producing business value. This means that developers never catch up with the debt because every day there is a new third-party software version or vulnerability emerging to add to the queue.

As we continue to produce new software at an accelerated rate, migration engineering can no longer be manual. We should begin thinking of migration engineering as a discipline that requires systematic automation, not endless clerical work, and definitely not shaming other teams into doing that work.

Migration engineering could easily fall within platform teams, developer productivity teams, engineering tools teams, and so on. And this tells you something about the role of the migration engineer—ready to serve developers without judgment. After all, the current state has built up over decades in many cases. Current developers on a project often weren’t even on the project when much of the code was written. That doesn’t make them any less responsible for its quality, but it suggests that they are best approached with heightened empathy for their predicament.

This is a new reality of the industrialized software development world that we live in. For example, we have worked with a financial services company that has over 20,000 applications built on a variation of Spring Boot. Every time a breaking change is introduced in the underlying Spring Framework, that’s 20,000 repositories to potentially change. Ouch.

A View into Third-Party Software Components

The majority of OSS authors are dedicated to serving their community. Whenever the inevitable vulnerability is discovered, they issue a patch release of their library, fixing the vulnerability without making any other library changes. The consumer of the library then needs to upgrade to the latest patch version. Many vulnerabilities in third-party dependencies are fixed by this patch upgrade. However, given the amount of code we accumulate, this remains an insurmountable problem: teams simply cannot keep up.
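One common way consumers stay on the patch train, sketched here using pip’s requirements syntax (the library name is hypothetical): the compatible-release operator pins the minor version while still picking up patch releases.

```text
# requirements.txt
somelib~=1.4.2   # equivalent to >=1.4.2, <1.5.0: new patch fixes flow in,
                 # but minor/major upgrades still require a deliberate migration
```

Constraints like this automate the easy case. They do nothing for the harder case discussed next, where the fix only exists in a version beyond the pinned range.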

The OSS library maintainers cannot support infinite back versions. They are overworked as it is, so they realistically can only support a few back versions of their library with fixes. When a vulnerability is discovered in a version no longer supported with patch releases, it is not the maintainer’s problem anymore. The organization using the open source component will need to go through a major migration effort to bring their software up to date with a supported version.

However, there is a solution that can help. It’s possible, with automated remediation, for OSS maintainers and other third-party software vendors to more easily support their users migrating to new versions on a continuous basis. We’ll explore this in greater detail in the next chapter.
