Chapter 1. Origins of Software Architecture

We are most of us governed by epistemologies that we know to be wrong.

Gregory Bateson

The purpose of this book is to help you design systems well and to help you realize your designs in practice. This book is quite practical and intended to help you do your work better. But we must begin theoretically and historically. This chapter is meant to introduce you to a new way of thinking about your role as a software architect, one that will inform both the rest of this text and the way in which you approach your projects moving forward.

Software’s Conceptual Origins

We shape our buildings, and thereafter they shape us.

Winston Churchill

FADE IN:

INT. A CONFERENCE HALL IN GARMISCH, GERMANY, OCTOBER 1968 — DAY

The scene: The NATO Software Engineering Conference.

Fifty international computer professors and craftspeople assembled to determine the state of the industry in software. The use of the phrase software engineering in the conference name was deliberately chosen to be “provocative” because at the time the makers of software were considered so far from performing a scientific effort that calling themselves “engineers” would be bound to upset the established apple cart.

MCILROY

We undoubtedly get the short end of the stick in confrontations with hardware people because they are the industrialists and we are the crofters.
(pause)
The creation of software is backwards as an industry.

KOLENCE

Agreed. Programming management will continue to deserve its current poor reputation for cost and schedule effectiveness until such time as a more complete understanding of the program design process is achieved.

Though these words were spoken, and recorded in the conference minutes in 1968, they would scarce be thought out of place if stated today.

At this conference, the idea took hold that we must make software through an industrial process.

That seemed natural enough, because one of their chief concerns was that software was having trouble defining itself as a field as it pulled away from hardware. At the time, the most incendiary, most scary topic at the conference was “the highly controversial question of whether software should be priced separately from hardware.” This topic comprised a full day of the four-day conference.

This is a way of saying that software didn’t even know it existed as its own field, separate from hardware, a mere 50 years ago. Very smart, accomplished professionals in the field were not sure whether software was even a “thing,” something that had any independent value. Let that sink in for a moment.

Software was born from the mother of hardware. For decades, the two were (literally) fused together and could hardly be conceived of as separate matters. One reason is that software at the time was “treated as though it were of no financial value” because it was merely a necessity for the hardware, the true object of desire.

Yet today you can buy a desktop computer for $100 that’s more powerful than any computer in the world was in 1968. (At the time of the NATO Conference, a 16-bit computer—that’s two bytes—would cost you around $60,000 in today’s dollars.)

And hardware is produced on a factory line, in a clear, repeatable process, determined to make dozens, thousands, millions of the same physical object.

Hardware is a commodity.

A commodity is something that is interchangeable with something of the same type. You can type a business email or make a word-processing document just as well on a laptop from any of 50 manufacturers.

And the business people want to form everything around the efficiencies of a commodity except one thing: their “secret sauce.” Coca-Cola has nearly 1,000 plants around the world performing repeated manufacturing, putting Coke into bottles and cans and bags to be loaded and shipped, thousands of times each day, every day, in the same way. It’s a heavily scrutinized, sharply measured business: an internal commodity. Coke is bottled in factories, in identical bottles, in identical ways, millions of times every day. Yet only a handful of people know the secret formula for making the drink itself. Making the recipe a commodity would put Coke out of business.

In our infancy, we in software have failed to recognize the distinction between the commodities representing repeated, manufacturing-style processes, and the more mysterious, innovative, one-time work of making the recipe.

Coke is the recipe. Its production line is the factory. Software is the recipe. Its production line happens at runtime in browsers, not in the cubicles of your programmers.

Our conceptual origins are in hardware and factory lines, and borrowed from building architecture. These conceptual origins have confused us and dominated and circumscribed our thinking in ways that are not optimal, and not necessary. And this is a chief contributor to why our project track record is so dismal.

The term “architect” as used in software was not popularized until the early 1990s. Perhaps the first suggestion that there would be anything for software practitioners to learn from architects came in that NATO Software Engineering conference in Germany in 1968, from Peter Naur:

Software designers are in a similar position to architects and civil engineers, particularly those concerned with the design of large heterogeneous constructions, such as towns and industrial plants. It therefore seems natural that we should turn to these subjects for ideas about how to attack the design problem. As one single example of such a source of ideas, I would like to mention: Christopher Alexander: Notes on the Synthesis of Form (Harvard Univ. Press, 1964) (emphasis mine).

This, and other statements from the elder statesmen of our field at this conference in 1968, are the progenitors of how we thought we should think about software design. The problem with Naur’s statement is obvious: it’s simply false. It’s also unsupported. To state that we’re in a “similar position to architects” has no more bearing, logically or truthfully, than stating that we’re in a similar position to, say, philosophy professors, or writers, or aviators, or bureaucrats, or rugby players, or bunnies, or ponies. An argument by analogy is always false. Here, no argument is even given. Yet here this idea took hold, the participants returning to their native lands around the world, writing and teaching and mentoring for decades, shaping our entire field. This now haunts and silently shapes—perhaps even circumscribes and mentally constrains, however artificially—how we conduct our work, how we think about it, what we “know” we do.

Origins

To be clear, the participants at the NATO conference in 1968 were very smart, accomplished people, searching for a way to talk about a field that barely yet existed and was in the process of forming and announcing itself. This is a monumental task. I hold them in the highest esteem. They created programming languages such as ALGOL 60, won Turing Awards, and created notations. They made our future possible, and for this I am grateful, and in awe. The work here is only to understand our origins, in hopes of improving our future. We are all standing on the shoulders of giants.

Some years later, in 1994, the Gang of Four created their Design Patterns book. They explicitly cite as inspiration the work of Christopher Alexander, a professor of architecture at the University of California, Berkeley, and author of A Pattern Language, which is concerned with proven aspects of architecting towns, public spaces, buildings, and homes. The Design Patterns book was a pivotal work, one that advanced the area of software design and bolstered support for the nascent idea that software designers are architects, or are “like” them, and that we should draw our own concerns and methods and ideas from that prior field.

This same NATO conference was attended by the now-famous Dutch systems scientist Edsger Dijkstra, one of the foremost thinkers in modern computing technology. Dijkstra participated in these conversations, and then some years later, during his chairmanship at the Department of Computer Science at the University of Texas, Austin, he voiced his vehement opposition to the mechanization of software, rejecting the use of the term “software engineering” and likening the term “computer science” to calling surgery “knife science.” He concluded, rather, that “the core challenge for computing science is hence a conceptual one; namely, what (abstract) mechanisms we can conceive without getting lost in the complexities of our own making” (emphasis mine).

This same conference saw the first suggestion that software needed a “computer engineer,” though this was an embarrassing notion to many involved, given that engineers did “real” work, had a discipline and known function, and software practitioners were by comparison ragtag. “Software belongs to the world of ideas, like music and mathematics, and should be treated accordingly.” Interesting. Let’s hang on to that for a moment.

* * *

Cut to:

INT. PRESIDENT’S OFFICE, WARSAW, POLAND — DAY

The scene: The president of the Republic of Poland updates the tax laws.

In Poland, software developers are classified as creative artists, and as such receive a government tax break of up to 50% of their expenses (see Deloitte report). These are the professions categorized as creative artists in Poland:

  • Architectural design of buildings

  • Interior and landscape

  • Urban planning

  • Computer software

  • Fiction and poetry

  • Painting and sculpture

  • Music, conducting, singing, playing musical instruments, and choreography

  • Violin making

  • Folk art and journalism

  • Acting, directing, costume design, stage design

  • Dancing and circus acrobatics

Each of these professions is explicitly listed in the written law. In the eyes of the Polish government, software development is in the same professional category as poetry, conducting, choreography, and folk art.

And Poland is one of the leading producers of software in the world.

Cut to: HERE—PRESENT DAY.

Perhaps something has occurred in the history of the concept of structure that could be called an event, a rupture that precipitates ruptures.

This rupture would not have been represented in a single explosive moment, a comfortingly locatable and suitably dramatic moment. It would have emerged among the ocean tides of thought and expression, across universes, ebbing and flowing, with fury and with lazy ease, over time, until the slow trickling of traces and cross-pollination reveal, only later, something had transformed. Eventually, these traces harden into trenches, fixing thought, and thereby fixing expression and realization.

What this categorization illuminates is the tide of language, the patois of a practice that shapes our ideas, conversation, understanding, methods, means, ethics, patterns, and designs. We name things, and thereafter, they shape us. They circumscribe our thought patterns, and that shapes our work.

The concept of structure within a field, such as we might call “architecture” within the field of technology, is thereby first an object of language.

Our language is constituted by an interplay of signs and metaphors. A metaphor is a poetic device whereby we call one thing by the name of another in order to reveal a deeper or hidden truth about it, by underscoring, highlighting, or offsetting certain of its attributes. “All the world’s a stage, and all the men and women merely players” is a well-known line from Shakespeare’s As You Like It.

We use metaphors so freely and frequently that sometimes we even forget they are metaphors. When that happens, the metaphor “dies” (a metaphor itself!) and becomes the name itself, drained of its original juxtaposition that gave the phrase depth of meaning. We call these “dead metaphors.” Common dead metaphors include the “leg” of a chair, or when we “fall” in love, or when we say time is “running out,” as would sand from an hourglass. When we say these things in daily conversation, we do not fancy ourselves poets making metaphors. We don’t see the metaphor, or intend one. It’s now just The Thing.

In technology, “architecture” is a nonnecessary metaphor. That word, and all it’s encumbered by, directs our attention to certain facets of our work.

Architecture is a dead metaphor: we mistake the metaphor for The Case, the fact.

There has been considerable heated debate, for decades, over the use of the term architect as applied to the field of technology. There are hardware architectures, application architectures, information architectures, and so forth. So can we claim that architecture is a dead metaphor if we don’t quite understand what it is we’re even referring to? We use the term without quite understanding what we mean by it, what the architect’s process is, and what documents they produce toward what value. “Architect” means, from its trace in the Greek language, “master builder.”

What difference does it make?

Copies and Creativity

No person who is not a great sculptor or painter can be an architect. If he is not a sculptor or painter, he can only be a builder.

John Ruskin, “True and Beautiful”

Dividing roles into distinct responsibilities within a process is one useful and very popular way to approach production in business. Such division makes the value of each moment in the process, each contribution to the whole, more direct and clear. This fashioning of the work, the “division of labor,” has the additional value of making each step observable and measurable.

This, in turn, affords us opportunities to state these steps in terms of SMART goals, and thereby reward and punish and promote and fire those who cannot meet the objective measurements. Credit here goes at least in some part to Henry Ford, who designed his car manufacturing facilities more than 100 years ago. His specific aim was to make his production of cars cheap enough that he could sell them to his own poorly compensated workers who made them, ensuring that what he could not keep in pure profit after the consumption of raw materials—his paid labor force—would return to him in the form of revenue.

This way of approaching production, however, is most (or only) useful when what is being produced is well defined and you will make many (dozens, thousands, or millions) of copies of identical items.

In Lean Six Sigma, processes are refined until the nearest specification limit lies six standard deviations from the process mean, such that your production process allows only 3.4 defects per million opportunities. We seek to define our field, to find the proper names, in order to codify and make repeatable processes, improve our happiness as workers (the coveted “role clarity”), and improve the quality of our products.
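
If you wonder where that oddly precise 3.4 comes from, here is a minimal sketch of the arithmetic, assuming the conventional Six Sigma allowance of a 1.5-standard-deviation “long-term shift” in the process mean; the function name and defaults below are mine, purely for illustration:

    from math import erfc, sqrt

    def defects_per_million(sigma_level: float, long_term_shift: float = 1.5) -> float:
        """One-sided tail probability of a standard normal, scaled to defects per million opportunities."""
        z = sigma_level - long_term_shift
        tail = 0.5 * erfc(z / sqrt(2))  # P(Z > z) for a standard normal variable
        return tail * 1_000_000

    print(round(defects_per_million(6.0), 1))       # ~3.4: the familiar Six Sigma figure
    print(round(defects_per_million(6.0, 0.0), 4))  # ~0.001: a literal six-sigma tail, with no shift

The point is not the statistics but the mindset: precision of this kind makes sense only when you are producing millions of near-identical copies, which, as we are about to see, is not what software development does.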

But one must ask, how are our names serving us?

Processes exist to create copies. Do we ever create copies of the software itself? Of course, we create copies of software for distribution purposes: we used to burn copies of web browsers onto compact discs and send them in the mail, and today we distribute copies of software over the internet. That is a process facilitating distribution, however, and has little relation to the act of creating that single software application in the first place. In the act of creation itself, we never make copies.

Processes exist, too, in order to repeat the act of doing the same kind of thing, if not making the same exact thing. A software development methodology catalogs the work to be done, and software development departments have divisions and (typically vague) notions of the processes we undergo in the act of creating any software product or system. So, to produce software of some kind, we define roles that participate in some aspect of the process, which might or might not be formally represented, communicated, and executed accordingly.

This problem of determining our proper process, our best approach to our work, within the context of large organizations that expect measurable results according to a quarterly schedule, is exacerbated because competition and innovation are foregrounded in our field of technology. We must innovate, make something new and compelling, in order to compete and win in the market. As such, we squarely and specifically aim not to produce something again that has already been produced before. Yet our embedded language urges us toward processes and attendant roles that might not be optimally serving us.

Such inventing suggests considerable uncertainty, which is at odds with the Fordian love of repeatable and measurable process. And the creation of software itself is something the planet has done for only a few decades. So, to improve our chances of success, we look at how things are done in other, well-established fields. We have embraced terms like “engineer” and “architect,” borrowed from the world of construction, which lends a decidedly more specification-oriented view of our own process. We created jobs to encapsulate those borrowed responsibilities, seen through a software lens, and in the past few decades hired legions of people so titled, with great hopes.

More recently, we in technology turned our sights on an even more venerable mode of inquiry, revered for its precision and repeatability: science itself. We now have data “scientists.” Although the term “computer scientist” has been around perhaps the longest, no one has a job called “computer scientist” except research professors, whose domain all too often remains squarely in the theoretical sphere.

The design of software is no science.

Our processes should not pretend to be a factory model that we do not have and do not desire.

Such category mistakes silently cripple our work.

Why Software Projects Fail

As I mentioned earlier in this chapter, software projects fail at an astonishing rate:

  • In 2008, IBM reported that 60% of IT projects fail. In 2009, ZDNet reported that 68% of software projects fail.

  • By 2018, Information Age reported that number had worsened to 71% of software projects being considered failures.

  • Deloitte characterized our failure rate as “appalling.” It warns that 75% of Enterprise Resource Planning projects fail, and Smart Insights reveals that 84% of digital transformation projects fail. In 2017, TechRepublic reported that big data projects fail 85% of the time.

  • According to McKinsey, 17% of the time, IT projects go so badly that they threaten the company’s very existence.

Numbers like these put our success rate somewhere below that of meteorologists and fortune tellers.

Our projects across the board do not do well. Considering how much of the world is run on software, this is an alarming state for our customers, us as practitioners, and those who depend on us.

Over the past 20 years, that situation has grown worse, not better.

A McKinsey study (conducted with the University of Oxford) of 5,400 large IT projects found the following:

On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted. Software projects run the highest risk of cost and schedule overruns.

Of course, some projects come in on time and on budget. But these are likely the “IT” projects cited in the McKinsey study, which include things like datacenter moves, lift-and-shift projects, disaster-recovery site deployments, and so forth. These are complex projects (sort of: mostly they’re just big). I have certainly been involved in several such projects. They’re different from trying to conceive of a new software system that would be the object of design.

The obvious difference is that the “IT” projects are about working with actual physical materials and things: servers to move and cables to plug in. It’s decidable and clear when you’re done and what precise percentage of cables you have left to plug in. That is, many CIO IT projects of the back-office variety are not much more creative than moving your house or loading and unloading packages at a warehouse: just make the right lists and tick through them at the right time.

It’s the CTO’s software projects that run the greatest risk and that fail most spectacularly. That’s because they require the most creative, conceptual work. They demand making a representation of the world. When you do this, you become involved in signs, in language, in the meaning of things, and how things relate. You’re stating a philosophical point of view based in your epistemology.

You’re inventing the context wherein you can posit signs that make sense together and form a representation of the real world.

That’s so much harder.

It’s made harder still when we don’t even recognize that that’s what we are doing. We don’t recognize what kind of projects our software projects are. We are in the semantic space, not the space of physical buildings.

The McKinsey study demarcates IT projects as if they are all the same because they are about “computer stuff” (I speculate). The results would look very different if McKinsey had looked more carefully and seen that these IT projects should be lumped in with facilities management. The creation of a software product is an entirely different matter, not part of “IT” any more than your design meetings in a conference room are part of the facilities company you lease your office space from.

But creative work need not always fail. Plenty of movies, shows, theatrical productions, music performances, and records all get produced on time and on budget. The difference is that we recognize that those are creative endeavors and manage them as such. We think we’re doing “knife science” or “computer science” or “architecture”: we’re not. We’re doing semantics: creating a complex conceptual structure of signs, whose meaning, value, and very existence are purely logical and linguistic.

This assumes that everyone from the executive sponsors to the project team had a fair and reasonable understanding of what was wanted, and time to offer their input on the scope, the budget, and the deadline. We all well know that they do not. Even if they did, they’re still guessing at best, because what they are doing by definition has never been done before. And it’s potentially endless, because the world changes, and the world is an infinite conjunct of propositions. Where do you want to draw the line? Where, really, is the “failure” here?

Because software is by its nature semantic, it’s as if people who aren’t software developers don’t quite believe it exists. These are hedge fund managers, executives, MBA-types who are used to moving things on a spreadsheet and occasionally giving motivating speeches. They don’t make anything for a living.

Software projects often fail because of a lack of good management.

The team often knows from the beginning that the project cannot possibly be delivered on time. But they want to please people, and they worry that management will simply find someone else to lie to them and say the project can be delivered on the date that was handed down. So they keep quiet and accept the deadline anyway.

As a technology leader in your organization, it’s part of your job to help stop this way of thinking and have the healthy, hard conversations with management to set expectations up front. They can have some software in six months. It’s not clear what exactly that will be. Software projects succeed when smart, strategic, supportive executives understand that this is the deal and take that leap of faith with you to advance the business. When greedy, ignorant executives who worry about losing a deal or getting fired themselves dictate an impossible deadline and tremendous scope, you must refuse it. This is in part how the failed software of the Boeing 737 Max was created.

The McKinsey study goes on to state the reasons it found for these problems:

  • Unclear objectives

  • Lack of business focus

  • Shifting requirements

  • Technical complexity

  • Unaligned team

  • Lack of skills

  • Unrealistic schedules

  • Reactive planning

These are the reasons that software projects fail.

If we could address even half of these, we could dramatically improve our rate of success. Indeed, when we focus on the semantic relations, on the concept of what we are designing, and shift our focus to set-theorizing the idea of the world that our software represents, our systems do better.

Of these reasons, the first five could be addressed by focusing on the concept: the idea of the software, what it’s for, and the clear and true representation of the world of which the software is an image. The remaining three are just good old-fashioned bad management.

The Impact of Failures

So perhaps now we can say there is a rupture between our stated aims, the situation in which we find ourselves as technologists, and how we conceptualize and approach our work. We are misaligned. The rupture is not singular. It shows itself in tiny cracks emerging along the surface of the porcelain.

But what does it mean for a software project to fail? Although metrics vary, in general they refer to excessive overruns of the budget and the proposed timeline, and to whether the resulting software works as intended. Of course, there are not purely “failed” and purely “successful” projects, but not meeting these three criteria means that expectations and commitments were not met.

And even when the project is done (whether considered a failure or not), if some software has shipped because of it, the resulting software doesn’t always hit the mark. TechRepublic cites a study showing that in 2017 alone, software failures “affected 3.6 billion people, and caused $1.7 trillion in financial losses and a cumulative total of 268 years of downtime.”

Worse, some of these have more dire consequences. A Gallup study highlights the FBI’s Virtual Case File software application, which “cost U.S. taxpayers $100 million and left the FBI with an antiquated system that jeopardizes its counterterrorism efforts.” A 2011 Harvard Business Review article states that the failures in our IT projects cost the US economy alone as much as $150 billion annually.

The same HBR article recounts the story of an IT project at Levi Strauss in 2008. The plan was to use SAP (a well-established vendor and leader in its technology) and Deloitte (a well-known, highly regarded leader in its field) to run the implementation. This typical project, with good names attached, to do nothing innovative whatsoever, was estimated at $5 million. It quickly turned into a colossal $200 million nightmare, resulting in the company having to take a $193 million loss against earnings and the forced resignation of the CIO.

Of course, that’s small stakes compared with what President Obama called the “unmitigated disaster” of the HealthCare.gov project in 2013, in which the original cost was budgeted at $93 million, soon exploding to a cost 18 times that, of $1.7 billion, for a website that was so poorly designed it was able to handle a load of only 1,100 concurrent users, not the 250,000 concurrent users it was receiving.

Any root cause we discover in all this history will be overdetermined: there are failures of leadership, management, process design, project management, change management, requirements gathering, requirements expression, specification, understanding, estimating, design, testing, listening, courage, and raw coding chops.

Where are the heroes of architecture and Agile across all this worsening failure?

Our industry’s collective work on methods, tooling, and practices has not improved our situation: in fact, the situation is becoming markedly worse. We have largely made mere exchanges, instead of improvements.

It’s also worth noting that we in software love to tout the importance of failure. Failure itself, of course, is horrible. It is not something to be desired.

What people mean, or at least should mean, when they say this is that what matters is to learn, and to do something new to address the aspects that led to the failure, and that sometimes (often) failure accompanies doing something truly new. It’s easy to repeat a known formula, but we must be supported in attempts to try something different, and to take a long view.

The importance of failure, in this context, is not to celebrate it. It is to underscore that we are not doing good enough work. We can do better. There is no easy fix. As Fred Brooks stated in his follow-up essay to 1975’s excellent book, The Mythical Man-Month: Essays on Software Engineering, there is no silver bullet.

But there is a way.

It starts with a question. What would our work look like if, instead of borrowing broken metaphors and language that cripple it, we stripped away these traces and rethought the essence of our work?

What we would be left with are concepts, which are at the center of a semantic approach to software design. The next chapter unpacks the idea of concepts as they apply to our proposed approach to your role in designing effective software.
