Software that doesn’t evolve stops being useful. Prof. Manny Lehman first introduced this idea to the public consciousness in his 1978 lecture, “Programs, Cities, Students – Limits to Growth?”
In other words, if you don’t continue to update and modify an existing software system or component, it’ll eventually stop working. Sometimes a major iOS or Windows upgrade renders your favorite application unusable. But it’s not always that obvious—a company may be keeping a Windows NT computer running in the back room just to run an outmoded scheduling system. Or, you know, a nuclear weapons array.
So, even with all the new software systems popping up all the time, it’s still a priority to keep changing and updating the existing systems to keep them relevant, usable, and profitable.
Software is very expensive to maintain
Let’s take a look at some specific numbers.
For the last 15 years, my colleagues and I have been looking at software systems and development teams, helping them improve their efficiency and results. Along the way, we’ve gathered some numbers. It turns out that the average software developer churns out 10,000 lines of code per year. It also turns out that, for a code base of a given size, an average of 15% of the source code gets changed every year. We’ve looked at literally billions of lines of code, and the 15% is a remarkably stable number.
So, we have 12 million software developers, who create or modify 120 billion lines of code each and every year. And the next year, 15% of those lines, 18 billion of them, need to be changed. All in all, it takes 1.8 million people working full time to make that happen, which means the collective educational institutions of the world need to churn out 1.8 million new software developers per year just to keep up with maintenance. Clearly, they don’t. In the US, 88,000 computer science degrees are earned every year, against 144,500 annual job openings. In other countries, these numbers aren’t much better.
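The arithmetic above is easy to check with a back-of-the-envelope script. This is a sketch in Python; the constants are the article’s figures, and the variable names are mine:

```python
# Figures from the article: 12 million developers, 10,000 lines of code
# per developer per year, and a 15% annual change rate.
DEVELOPERS = 12_000_000
LOC_PER_DEV_PER_YEAR = 10_000
CHANGE_RATE = 0.15

# Total new or modified code produced worldwide in a year.
lines_written = DEVELOPERS * LOC_PER_DEV_PER_YEAR          # 120 billion

# The share of those lines that needs changing the following year.
lines_to_change = lines_written * CHANGE_RATE              # 18 billion

# Full-time developers required just to make those changes.
maintainers_needed = lines_to_change / LOC_PER_DEV_PER_YEAR

print(f"{maintainers_needed:,.0f} full-time maintainers")  # 1,800,000
```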
We have nowhere near enough developers, so how do we cope?
Clearly, there aren’t enough new software developers being trained to maintain the ever-increasing global code volume. So instead, three other things happen.
- There is an enormous demand for software developers, with a very strong bias toward coders. There is so much demand that the traditional educational system cannot keep up, and people are now paying $12,000 for a 12-week course that teaches amateur programmers the basics of professional software engineering.
- Developers end up performing only breakdown maintenance. Once something breaks, they go for the quickest, dirtiest fix, as is evidenced by the seemingly endless stream of issues that plague systems of a certain age.
- Companies stop innovating. Some organizations report spending up to 90% of their IT budget on “keeping the lights on”: many millions a year spent just to keep things working as they are.
Systems break, become insecure, or are just not available
Bear in mind, a lot of the effort involved in systems maintenance goes into the large, fairly invisible enterprise software systems that you and I rely on every day without even knowing it. And because these systems have become too large to properly maintain (i.e. 15% of code volume needs to be changed per year), they start to break and misbehave in unpredictable ways. For example:
- A system update at the Royal Bank of Scotland in June 2012 caused a series of malfunctions that took a month to sort out and caused serious problems for hundreds of thousands of people.
- Two hackers demonstrated they could take full control of a journalist’s Jeep Cherokee, because it had become impossible to add proper security measures to the car’s system.
- In the well-publicized case of some Toyota vehicles accelerating while the driver was apparently not touching the pedal, it took independent experts 20 months of studying the source code to determine whether the software connected the accelerator pedal to the engine in the correct way.
So the problem of the IT industry generating much more software maintenance work than it is capable of handling is more than just an economic issue. The issue also affects continuity, reliability, and safety.
Science needs to work on a long-term solution
A long-term structural solution would be to create technology that needs less maintenance. This topic should be in the realm of academic research, but, as far as I am aware, it’s not currently being taken up. There is a lot of research (both academic and commercial) into more productive technology, the kind that lets you create a particular software system faster. Although this is nice, in practice it is hardly relevant: if you’re going to spend $1.6 million maintaining a $1 million system over seven years, it doesn’t matter a whole lot whether you can get it operational a month sooner.
We learned from Lehman that software is going to need maintenance in order to stay valuable. In his excellent book “The Laws of Software Process,” Philip Armour explains that software is executable knowledge about a process. As organizations evolve and learn more about the (business) process they’re executing, they need to adapt their software. What we need is a technology that allows for these modifications to be executed as cheaply as possible. As mentioned before, with current technology the rate of change is 15% and that is too high to be sustainable.
Designing and coding for maintainability will stop exponential growth
While the scientists are hopefully working on that one, there is also something we can do today.
Even though the 15% change rate seems to be constant, that doesn’t necessarily mean that every system gets 15% bigger every year. The systems that grow the most are those that are complex, that are hard to test, and that, generally speaking, nobody dares to touch anymore for fear of breakage. Such systems grow in one of two ways. Either the system is so unwieldy that any change to its functionality requires a new system to be created to take the output from the unchangeable system and modify it to get the right results; the new system effectively functions as an extension of the old one, which therefore grows fast. Or whole batches of source code are copied inside the system and then modified, as opposed to being modified in place. The reasoning is that if you leave the original code untouched and add a modified copy, then at least the original code will not break anything. Imagine renovating your house and, instead of replacing the kitchen, adding a new kitchen to the back of the house. And a new bathroom, and a new living room. And then, after a couple of years, maybe another kitchen…
On the other hand, if you design and code for maintainability (which costs no more time or money than not doing so), you can at least slow down the aging process. In the best systems we see, 15% of the code still changes every year, but there is no growth: those lines are actually changed in place, not added.
This way, you will still spend 15% of your initial development effort every year on maintenance, but at least the bill stays flat: two years in, you’ll have spent 30% of the initial effort, not the 32.5% that a growing code base would demand, and the gap widens every year after that.
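The compounding behind both numbers, the 32.5% here and the $1.6 million over seven years mentioned earlier, can be sketched as follows. This assumes a simple model in which the changed 15% is added to the code base each year instead of modified in place, so the base, and with it the annual bill, grows 15% a year; the function name is mine:

```python
CHANGE_RATE = 0.15  # fraction of the code base changed per year (article's figure)

def cumulative_maintenance(initial_effort: float, years: int) -> float:
    """Total maintenance spend over `years` when changed code is added
    rather than modified in place, so the code base grows 15% a year."""
    size = 1.0    # code base size, relative to the initial delivery
    total = 0.0
    for _ in range(years):
        total += CHANGE_RATE * size  # this year's maintenance bill
        size *= 1 + CHANGE_RATE      # changes are additions: the base grows
    return initial_effort * total

two_years = cumulative_maintenance(1.0, 2)          # ~0.3225: the "32.5%" figure
seven_years = cumulative_maintenance(1_000_000, 7)  # ~1,660,000: the "$1.6M" figure
```

With in-place changes the base stays at 1.0 and two years cost exactly 30%; the extra 2.5 points is the compounding, and over seven years it turns a $1 million system into roughly $1.66 million of maintenance.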
Managing the total volume of code for survival
If you combine this strategy with solid management and monitoring of your organization’s total code volume, you can at least exert some control over the situation. With every new software initiative, ask yourself how many lines of code will be eliminated from your software portfolio—and make sure they’re actually removed. Because every 66,000 lines of code ties up a full-time, expensive, hard-to-find software engineer just to maintain it.
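The 66,000-lines figure follows directly from the numbers earlier in the piece: at a 15% annual change rate, a 66,000-line portfolio generates about 9,900 changed lines a year, roughly one developer’s annual output of 10,000 lines. A quick sketch (the function name is mine):

```python
CHANGE_RATE = 0.15             # fraction of code changed per year
LOC_PER_DEV_PER_YEAR = 10_000  # one developer's annual output

def engineers_needed(portfolio_loc: int) -> float:
    """Full-time engineers needed to maintain a portfolio of this size."""
    return portfolio_loc * CHANGE_RATE / LOC_PER_DEV_PER_YEAR

print(round(engineers_needed(66_000), 2))  # 0.99: about one engineer
print(round(engineers_needed(1_000_000))) # a million-line portfolio: ~15 engineers
```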
This post is a collaboration between O'Reilly and Software Improvement Group (SIG). See our statement of editorial independence.