Chapter 1. Vicious Circles: The Well-Worn Path
The vicious circle trap is not new; it dates back to the foundations of software development itself. As technology advances, giving us greater power and more tools, it becomes more challenging to build good applications. This is called the software crisis:
The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
Edsger Dijkstra1
The term software crisis was coined at the first NATO Software Engineering Conference, held in Garmisch, Germany, in 1968, when the attendees gathered to discuss the new concept of “software engineering” and were surprised to learn that similar issues were plaguing them all. They adopted “software crisis” to describe this shared problem.
The causes of the software crisis were the rapidly increasing power and complexity of hardware and the difficulty of adapting software development practices to keep pace.
So, what were the problems they were having?
- Projects running over time and budget.
- Software was becoming more inefficient and error-prone.
- Software was frequently failing to meet the requirements for the project.
- Software was increasingly difficult to extend and maintain.
- Some projects failed before they could even deliver workable code at all.
Does this sound familiar? These are the same problems that face many organizations today. The solution to the software crisis was discovered not long after the problems were named, but it is often misunderstood or misapplied. This is actually good news: while computing power has continued to grow, the solution has not changed, which shows that the crisis was never really about the technology.
So, what is this radical solution? It is component-based design and separation of concerns—the foundational principles of software engineering as we know it. Creating the virtuous cycle and avoiding the software crisis is not a function of time or technology. The basic principles that worked 60 years ago will work today…and 60 years from now.
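To make these principles concrete, here is a minimal sketch of separation of concerns in TypeScript. Every name in it (Order, OrderRepository, renderTotal, and so on) is hypothetical, invented purely for illustration; the point is that business rules, storage, and presentation each sit behind their own boundary and can change independently.

```typescript
// Concern 1: the domain model and business rule -- no storage, no display.
interface Order {
  id: string;
  items: { price: number; quantity: number }[];
}

function orderTotal(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

// Concern 2: data access -- knows how to fetch an Order, nothing else.
interface OrderRepository {
  findById(id: string): Promise<Order>;
}

// Concern 3: presentation -- knows formatting, not rules or storage.
function renderTotal(total: number): string {
  return `Total: $${total.toFixed(2)}`;
}

// The concerns meet only at the composition point.
async function showOrder(repo: OrderRepository, id: string): Promise<string> {
  const order = await repo.findById(id);
  return renderTotal(orderTotal(order));
}

// Example wiring with an in-memory repository:
const repo: OrderRepository = {
  findById: async (id) => ({ id, items: [{ price: 4.5, quantity: 2 }] }),
};
showOrder(repo, "order-1").then(console.log); // "Total: $9.00"
```

Because the concerns only meet at the composition point, you could swap the repository for a real database, or the renderer for a different channel, without touching the business rule. That independence is the heart of component-based design.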
To overcome the software crisis, we need to understand the ways that poor choices in design and architecture lead to the vicious circle. Software engineers today have more tools and power than ever, and that often creates even more confusion. We can reduce confusion by learning to avoid the worst mistakes and common pitfalls.
Poor Choices
At its core, the vicious circle is quite simple. It’s all about poor choices. Poor choices lead to ever-increasing technical debt, more fragility in your applications, and slower development. In other words, poor choices cause pain.
It can be a challenge to identify the difference between a poor choice and a good one, especially if an architect or engineer doesn’t have enough experience. Given a long enough timeline and enough projects, developers will naturally learn most of these principles the hard way. Often this is exactly what we see.
Most organizations are faced with a pretty difficult choice. On the one hand, they can spend their valuable resources trying to identify and hire very experienced architects and developers. This is already difficult in the best of times and becomes even more challenging in a competitive market where everyone is looking for the same people. Experienced developers are worth a premium because the investment will save time and money in the long run. They can also accelerate the pace of innovation and unlock new opportunities and advantages.
On the other hand, organizations can try to avoid taking risks and stick to very limited approaches that are more tolerant of less experienced developers. This is often the choice of organizations that rely on a single, broadly adopted technology stack. While this may be a safer course in the short term, ultimately it will lead to stagnation, and the organization will fall further and further behind. It is difficult for an organization wedded to an industry standard to adopt new approaches and avoid disruption. Becoming a fossil is easier than one may think, and the pace of technological innovation today only makes it easier.
Another option is for organizations to hire architects and developers of varying skill levels. Then they attempt to build applications with varying degrees of ambitious plans, and often with less-than-ideal budgets and timelines. This approach can work if an organization has the ability to experiment and rapidly decommission failed projects. This is a function of organizational tolerance to risk and possible “waste.”
Most of the time, however, organizations are unable or unwilling to abandon existing investments. And so, they continue to invest in less-than-ideal projects that accrue technical debt and get more difficult to manage over time.
This process is compounded by turnover in the development team. Employee churn is a fact of life, but as projects become more painful and difficult, they tend to increase the burnout rate of developers. This continues to accelerate the overall degradation of the application as new employees are hired and expected to contribute to a complex and failing project with layers of poor choices embedded by previous developers. Those original developers have long since left, and there is no way to understand why those choices were made. Eventually, the organization is forced to start over and invest in creating new projects, without accounting for those problems or capturing the value of those difficult lessons. The result is that the organization is unable to “learn” from its own mistakes, and so it is often doomed to repeat them.
It is common to approach the new project in a way that reinvents many of the same patterns of the previous application. The business users will often define requirements based on their experience in the previous system. The architects may not have the institutional experience to know the difference between what the users say and what they need.
Repeatedly making poor choices is a major trap and one of the biggest sources of difficulty because it can be tough to break the cycle. Over and over, the organization makes similar decisions, even as they change technologies. The results may be a bit better for a while, but the vicious circle will rise again.
Shiny Object Syndrome
It is tempting and common for architects to look outside for a new tool or approach that will solve their problem. The impulse is to do away with the old and use something new that has advantages that were not available before. This time things will be different, they hope. However, unless they have a good understanding of the principles of good development, it is just another trap.
Shiny object syndrome, aka silver bullet-itis, is when IT treats the latest and greatest tools and frameworks as the ultimate solution to their problem. It means focusing on new frameworks and tools for their own sake and assuming your problem will be solved simply by choosing a new tool. This is a particularly easy trap for developers to fall into. It’s a natural temptation to want to use new tools and learn new techniques.
Indeed, a good developer will often be on the lookout for new “toys” to experiment with. This impulse often hides the core truth we are discussing here—real solutions are not about the tools we use but the mindset we hold when applying those tools.
This can be tricky because it is easy to look at successful applications and outcomes and mistakenly attribute the success to the framework or tool. This can be exacerbated by sales organizations and evangelists who are tasked with promoting these tools. Unless they have the training to take a “consultative approach,” they will only see the opportunity to sell their tool or ideology. In fact, well-meaning tech enthusiasts are often the guiltiest here. They are so focused on evangelizing their preferred tool that they don’t think about what is best for the project itself.
This trap is greatly compounded when the developers making the decisions don’t have experience using the new technology, which leads to a higher risk of poor choices and flawed implementation patterns. It is tempting to read about the powerful features provided by new tools and get swept up in the excitement of what they could mean. This doesn’t mean that those tools don’t have advantages. It means that the root problem was never the tool in the first place.
If you can’t solve your problem reliably with 10-year-old technologies, then different tools are not likely to help. When properly applied, they may make things faster, easier, or more efficient, but in and of themselves, these new shiny toys cannot make up for bad design patterns.
By way of analogy, if you are a poor tennis player, buying a newer racket is not going to help your game. The only thing that will help you is to focus on training the basics—working on the fundamental aspects of the game, working on your technique, and understanding the strategies and approaches necessary to win. You need to have an open and honest assessment of your own skills and abilities as well as your weaknesses and how you plan to overcome them.
To be clear, better tools might help some things. However, the tool cannot make up for weak skills, inexperience, or a lack of good engineering.
Architecture-First Design
Another related challenge is architecture-first design. We see this often with trends in the industry. Buzzwords like microservices, MACH, headless, and decoupled are examples in the web application field.
These architectural approaches can have a ton of value, and often can be implemented in a progressive way. However, without a nuanced understanding of the pros and cons of the architecture, it is difficult to avoid a poor decision. For example, headless systems often provide more flexibility for delivery across multiple channels; however, they also require more development time and more complex deployment setups.
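As a rough illustration of that trade-off, consider this minimal headless sketch in TypeScript. The endpoint URL and the Article shape are assumptions made up for the example, not the API of any particular CMS; what matters is that the backend serves plain data and each channel supplies its own rendering “head.”

```typescript
// The backend is "headless": it exposes content as plain data over an API.
// The URL and Article shape here are hypothetical, for illustration only.
interface Article {
  title: string;
  body: string;
}

async function fetchArticle(slug: string): Promise<Article> {
  const res = await fetch(`https://cms.example.com/api/articles/${slug}`);
  if (!res.ok) throw new Error(`Failed to fetch article: ${res.status}`);
  return (await res.json()) as Article;
}

// Each channel brings its own "head" -- the same content, rendered differently.
function renderForWeb(article: Article): string {
  return `<article><h1>${article.title}</h1><p>${article.body}</p></article>`;
}

function renderForPlainText(article: Article): string {
  return `${article.title}\n\n${article.body}`; // e.g., email or an in-app feed
}

// Usage: the same article, two channels.
// const article = await fetchArticle("hello-world");
// console.log(renderForWeb(article));
// console.log(renderForPlainText(article));
```

Each additional head is another codebase to build, deploy, and keep in sync with the API, which is exactly where the extra development time and deployment complexity come from.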
All too often, organizations will choose an architecture based on other successful projects and decide that this is going to solve their current problem. While the success of another application is a useful data point, we must always remember that every project is unique in some way.
This is like using someone else’s blueprints to build your house without actually looking at what you need. All architectures are potentially valid, but the appropriate architecture is a result of understanding the needs, constraints, and long-term goals of the project and organization.
You need to ensure that you have the right tool for the job and that you’re making good use of the resources you currently have. This may be existing technologies and tools, developer resources, experience, organizational maturity, budget, and other factors. You wouldn’t want to use a hammer to put a screw into a piece of wood, no matter how great that hammer might be.
Just like with shiny object syndrome, it can be challenging to understand how to apply new architectures without actually using them. And it is tempting to look at other examples in successful applications and assume that those same patterns will work for you.
A great example of this is Netflix and microservices architecture. Between 2009 and 2012, Netflix refactored its monolithic architecture into a loosely connected series of microservices. This allowed them to continue driving innovation, expanding to over one thousand microservices today. They have also needed to create new tools and services to manage this network, including Conductor (a microservices orchestrator), Simone (a distributed simulation service), and Mantis (a streaming data analysis tool). They have even built their own CDN (content delivery network) and shipped Netflix-specific red servers to internet providers all over the world!
Netflix’s success with microservices architecture is one of the most impressive you’re likely to see. They have done things and changed things in certain ways that have given them a distinct advantage. It is worth studying their success in order to understand the value of such an architecture. However, it is a mistake to assume that it is also going to be successful for you. Most development teams simply don’t have the size or the scale to apply a similar solution. Trying to replicate what Netflix has achieved with their microservices approach on a smaller scale can often be worse than using a “less interesting” approach.
The trade-off with microservices is greater complexity. For Netflix, this complexity is offset by the advantages they get in terms of redundancy, uptime, and self-healing, which enable them to deliver their service with the quality their customers expect.
It is vitally important to remember that your organization is unique. Your team and your resources are also unique. And every project you have, no matter how similar, should be approached with a fresh mind.
Building on the past can be a good thing, and disrupting yourself can also be a good thing. But choosing an architecture because it sounds interesting or has been successful for someone else is a short-sighted approach, one that will create more problems than you realize until it is too late.
Monolithic Myths
Perhaps no word in software is as widely despised and cursed as monolith has come to be. Beyond being a powerful term for what we picture when we imagine the vicious circle, it is now used largely as a curse or an insult. Calling an application “monolithic” is a quick way to make enemies and lose friends.
The monolithic approach is the easiest and most intuitive way to build programs and applications, especially for small teams or single developers. Having all the resources in a single place in a single stack can reduce complexity and speed up delivery.
Here we find the first “monolithic myth”: that any degree of unification makes an application a monolith. Whether it is the unification of the backend and frontend or the unification of multiple applications in a single repo, many people mistakenly call this a monolith. It is not the unified layer that makes a monolith; it is the lack of composability. The components that make up a true monolith are not independent: they have so many cross connections that they are impossible to separate.
Monolithic applications are so common because they are much easier to build. A developer starts on something small and simple, and then slowly adds bits and pieces here and there. For a small project, a single developer (or small team) can hold the entire application in their head, so this can be an acceptable trade-off. For a POC (proof of concept), sample, experiment, or other low priority project, this is not a problem. For production code, however, this is a recipe for disaster. That is why we see such instinctive pushback against monolithic applications. The pattern does not scale.
One problem is that many people equate monolithic with a unified architecture. Having all of the code in one place or all of the services on one server is not necessarily monolithic. In a true monolith, it is difficult to update or change one piece without disrupting the whole. People may describe it as “spaghetti code” because, much like a messy bowl of spaghetti, it is difficult to know where one component ends and another begins, making it challenging to unravel and fix problems. When everything is tied to everything else, it becomes far easier to ignore problems and leave things as they are. Over time, technical debt accrues, and the complexity of the application grows at an exponential rate.
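To see those cross connections in miniature, here is a deliberately tangled sketch in TypeScript (all names hypothetical). Every function reaches directly into shared mutable state, so none of them can be changed, tested, or removed in isolation:

```typescript
// Shared mutable state that every function reaches into directly.
const appState = {
  cart: [] as number[],
  taxRate: 0.2,
  log: [] as string[],
};

function addItem(price: number): void {
  appState.cart.push(price);
  appState.log.push(`added item at ${price}`); // logging tangled into cart logic
}

function checkout(): number {
  const subtotal = appState.cart.reduce((sum, p) => sum + p, 0);
  appState.cart = [];                         // silently resets the cart...
  appState.log.push("checked out");           // ...writes to the log...
  return subtotal * (1 + appState.taxRate);   // ...and reads global tax config
}
```

Nothing here is unified in the architectural sense; it is simply coupled. That coupling, not the fact that the code lives in one place, is what makes it monolithic.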
To avoid monoliths, you must have a mindset that allows you to build composable solutions. Unified architectures and traditional systems are not necessarily monolithic. By the same token, decoupled, headless, or other “modern” applications are not necessarily composable. In fact, it is incredibly common to see monolithic decoupled applications.
Above all else, we should focus on composability. True composability leads to stable systems that are easier to maintain and upgrade over time. Composability is what makes it possible to limit technical debt and take advantage of new technologies as they come along.
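By contrast, here is the behavior from the earlier sketch restructured composably, again with hypothetical names. Each piece depends only on what it is handed, so any one of them can be tested, replaced, or upgraded without disturbing the rest:

```typescript
type Logger = (message: string) => void;

// The cart owns its state and announces events through a logger it is given.
function makeCart(log: Logger) {
  const items: number[] = [];
  return {
    add(price: number): void {
      items.push(price);
      log(`added item at ${price}`);
    },
    subtotal(): number {
      return items.reduce((sum, p) => sum + p, 0);
    },
  };
}

// Tax is a pure function: no hidden state, trivial to test or replace.
function withTax(subtotal: number, taxRate: number): number {
  return subtotal * (1 + taxRate);
}

// All the wiring lives in one place.
const cart = makeCart((message) => console.log(message));
cart.add(9.99);
const total = withTax(cart.subtotal(), 0.2);
console.log(`Total: ${total.toFixed(2)}`);
```

Swapping the logger or the tax rule is now a one-line change at the wiring point. That is composability at a small scale, and the same property is what keeps large systems maintainable.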
1 Edsger Dijkstra, “The Humble Programmer” (ACM Turing Award lecture), Communications of the ACM 15, no. 10 (October 1972): 859–66.