Chapter 1. Why Platform Engineering Is Becoming Essential
She swallowed the cat to catch the bird, she swallowed the bird to catch the spider, she swallowed the spider to catch the fly; I don’t know why she swallowed a fly—Perhaps she’ll die!
Nursery rhyme
Over the past 25 years, software organizations have experienced a problem: what to do with all of the code, tools, and infrastructure that is shared among multiple teams? In reaching for a solution, most have tried creating central teams to take responsibility for these shared demands. Unfortunately, in most cases this has not worked particularly well. Common criticisms have been that central teams provide offerings that are hard to use, they ignore customer needs in favor of their own priorities, their systems aren’t stable enough, and sometimes all of the above.
Instead of fixing these central teams, some have tried getting rid of them entirely, giving each application team access to the cloud and their choice of open source software (OSS). However, this exposes those application teams to the operational and maintenance complexity of their choices, so instead of creating efficiencies and economies of scale, even small teams end up needing site reliability engineering and DevOps specialists. And even with these dedicated specialists, the cost of managing the complexity continues to threaten the productivity of the application teams.
Others, while embracing the best of the cloud and OSS, have not given up on central teams; they’ve stuck with the model, certain that the benefits outweigh the downsides. The best have succeeded by building platforms: developing shared offerings that other engineers can comfortably build on top of. They have become experts at managing the complexity of the cloud and OSS while presenting stability to their users, and they are willing to listen to and partner with the application teams to continually evolve and meet the company’s needs. Whether or not they’ve called their efforts platform engineering, they embody the mindset, skills, and approach necessary for solving the problem of ever-growing complexity (the fly) without swallowing ever-larger animals in the process.
To set the stage, in this chapter we’ll cover:
- What we mean by platforms, and a few other important terms we’ll use throughout the book
- How system complexity has gotten worse in the era of cloud computing and OSS, leaving us in an “over-general swamp” of exposed complexity
- How platform engineering manages this complexity and so frees us from the swamp
This chapter has a slight emphasis on infrastructure and developer tooling, but don’t worry, this book isn’t just for people working on infrastructure or developer platforms! We’ll use systems common to all developers to provide a tangible illustration of the current state of affairs, but the underlying challenge of managing complexity is common to all kinds of internal platform development.
Defining “Platform” and Other Important Terms
Before we get started, let’s define several important terms we’ll be using throughout this book, so we all have the same frame of reference:
- Platform: We use Evan Bottcher’s definition from 2018, with a couple of terms updated. A platform is a foundation of self-service APIs, tools, services, knowledge, and support that are arranged as a compelling internal product. Autonomous application teams1 can make use of the platform to deliver product features at a higher pace, with reduced coordination.

  A corollary here is to ask: what, then, isn’t a platform? Well, for the purposes of this book, a platform requires you to be doing platform engineering. So, a wiki page isn’t a platform, because there’s no engineering to be done. “The cloud” also is not a platform by itself; you can bring cloud products together to create an internal platform, but on its own the cloud is an overwhelming array of offerings that is too large to be seen as a coherent platform.
- Platform engineering: The discipline of developing and operating platforms. The goal of this discipline is to manage overall system complexity in order to deliver leverage to the business. It does this by taking a curated product approach to developing platforms as software-based abstractions that serve a broad base of application developers, operating them as foundations of the business. We will elaborate on this in Chapter 2.
- Leverage: Core to the value of platform engineering is the concept of leverage, meaning that the work of a few engineers on a platform team reduces the work of the greater organization. Platforms achieve leverage in two ways: making application engineers more productive as they go about their jobs creating business value, and making the engineering organization more efficient by eliminating duplicate work across application engineering teams. (See the worked example after this list.)
- Product: We believe that it is essential to view a platform as a product. Developing platforms as compelling products means taking a customer-centric approach when deciding on a platform’s features. This implies a core focus on users, but it requires more than performatively hiring product managers and calling it a day. With the word “product” we strive to achieve for platforms what Steve Jobs achieved with Apple products: an offering that is deliberately and tastefully curated against a broad range of feature demands, both through what it does and, more importantly, through what it leaves out.
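To make leverage concrete, consider a hypothetical bit of arithmetic (the numbers are invented purely for illustration): a five-person platform team represents roughly 200 engineer-hours per week. If its platform saves each of 200 application engineers just two hours of infrastructure toil per week, the organization recovers 400 engineer-hours per week, double the platform team’s cost, and the return grows with every additional team that adopts the platform.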
The Over-General Swamp
There are many types of internal platforms, and the advice in this book is relevant to all of them. However, we see the most acute pain today in the infrastructure and developer tooling (DevTools) spaces, and we see this driving the most demand for platform engineering. That is because these systems are the ones most closely integrated with the public cloud and OSS. These two trends have driven a lot of industry change over the last 25 years, but rather than making things uniformly better, they are increasing the ownership costs of systems over time. They make applications easier to build but harder to maintain, and the more your system grows, the slower you get—like you’re walking through a swamp.
This comes back to the economic realities of writing and maintaining software. You might believe that the major cost of software is associated with the act of writing it. In fact, most of the cost is related to its upkeep, support, and maintenance.2 Estimates suggest that at least 60–75% of the lifetime cost of software accrues after initial development, with about a quarter of that dedicated purely to migrations and other “adaptive” maintenance.3 Between required upgrades for security patches, retesting of the software, migrations to new versions of underlying dependencies, and everything else, software costs a lot of engineering time in maintenance overhead.
Rather than reducing maintenance overhead, the cloud and OSS have amplified this problem, because they provide an ever-growing layer of primitives: general-purpose building blocks that provide broad capabilities but are not integrated with one another.4 To function, they need “glue”—our term for the integration code, one-off automation, configuration, and management tools. While this glue holds everything together, it also creates stickiness, making future changes much harder.
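To make “glue” concrete, here is a minimal hypothetical sketch in Terraform-style configuration. The resource types are standard AWS provider examples, but the names and the scenario are invented for illustration: two general-purpose primitives, plus the integration code that binds one application to them.

```
# Two general-purpose primitives an application team has chosen.
resource "aws_sqs_queue" "orders" {
  name = "orders-queue"
}

resource "aws_s3_bucket" "order_archive" {
  bucket = "example-order-archive"
}

# The "glue": one-off integration configuration binding the application
# to both primitives. Every team writes its own variant of this, and every
# variant must be revisited when the primitives or the application change.
resource "aws_iam_policy" "orders_app_access" {
  name = "orders-app-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"]
        Resource = aws_sqs_queue.orders.arn
      },
      {
        Effect   = "Allow"
        Action   = ["s3:PutObject", "s3:GetObject"]
        Resource = "${aws_s3_bucket.order_archive.arn}/*"
      }
    ]
  })
}
```

Multiply this by every primitive and every application team, and the stickiness follows: none of this code delivers features, yet all of it must be revisited whenever the application or the primitives change.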
The over-general swamp forms as the glue spreads. Each application team makes independent choices across the array of primitives, selecting those that allow them to quickly build their own applications with the desired cutting-edge capabilities. In their rush to deliver, they create whatever custom glue is needed to hold everything together, and they’re rewarded with praise for shipping fast. As this repeats over time, the company ends up with the type of architecture seen in Figure 1-1.
The problem with the swamp isn’t just the messy architecture diagram; it’s how difficult it is to change that sticky mess over time. That’s important because applications are constantly evolving, due to new features or operational requirements. Every OSS and cloud primitive also undergoes regular changes, and all of this requires updating the glue that binds them. With the glue smeared everywhere, seemingly trivial updates to primitives (say, a security patch) require extensive organization-wide engineering time for integration and then testing, creating a massive tax on organizational productivity.
The key to avoiding this situation is to constrain how much glue there is, which aligns with the old architectural principle of “more boxes, fewer lines.” Platforms allow us to do this, and thus to extract ourselves from the swamp. By abstracting over a limited set of OSS and vendor choices in an opinionated manner, specific to your organizational needs, they enable separation of concerns. You end up with an architecture more like Figure 1-2.
In sum, platforms constrain the amount of glue needed by implementing the concepts of abstraction and encapsulation and creating interfaces that protect users from underlying complexity (including the complexity of an implementation that needs to change). These concepts are about as old as computer science itself—but if they’re so well known, why does the industry need platform engineering? To answer that question, we’ll start with a look at how enterprise software engineering has changed over the last quarter century.
How We Got Stuck in the Over-General Swamp
The software industry has changed immensely over the past 25 years, kicking off with the widespread use of the internet. For those of you who have been in the industry for a while, we don’t need to tell you how much this affected every aspect of software development, but for the relative newcomers, it’s no exaggeration to say that the over-general swamp largely exists due to the internet itself and the pressure to ship more, faster, without failure. Let’s look at the key changes that led to us getting stuck here, and the implications of that result.
Change #1: Explosion of Choice
The internet generated incredible demand for new software, and software has to run on hardware, no matter what the name “serverless” might imply. The initial wave was realized by provisioning a lot more hardware in data centers, and this led to the growth of infrastructure engineering. Every company was buying a lot more servers and network gear, negotiating with their data center providers, installing hardware in ever greater quantities all across the world—big I infrastructure doing big E engineering powering the big I internet.
We don’t want to minimize the challenges that were overcome in this relatively short period of time. However, application developers interacting with infrastructure teams were constantly frustrated by the extent of hardware issues they had to deal with. They suffered from a limited but constantly changing menu of server choices, frequent data center capacity issues, and weird hardware-related operational problems that no one would help debug—the common response was “nothing in the system logs, must be your software.”
It’s no surprise that when the public cloud came along, frustrated application developers were eager to jump over to a world where they could call an API and seemingly control their own destiny. Despite reasonable concerns about the architectural complexity, security risks, reliability, and cost, even large, conservative companies were driven to some level of cloud adoption.
Unfortunately, those reasonable concerns have proven not just valid, but worse than feared. While the cloud promised platforms (PaaS) that would make applications independent of infrastructure, what has seen wide adoption is IaaS, which in many cases has tied applications to infrastructure even more than before. Reminding you of the difference:
- With infrastructure as a service (IaaS), the vendor’s APIs are used to provision a virtualized computing environment with various other infrastructure primitives, which run an application more or less like it would be run on physical hosts.
- With platform as a service (PaaS), the vendor takes full ownership of operating the application’s infrastructure, which means rather than offering primitives, they offer higher-level abstractions so that the application runs in a scalable sandbox.
Figure 1-3 shows a high-level comparison of the two approaches.
Initially, it was hoped that application teams would embrace fully supported PaaS offerings—solutions as user-friendly as Heroku but capable of handling greater complexity.5 Unfortunately, these platforms have struggled to support a wide range of applications and to integrate with existing applications and infrastructure. As a result, almost all companies doing in-house software development at scale embrace IaaS to run that software, preferring to accept the added complexity of provisioning and operating their infrastructure in order to get the flexibility of choice.
The rise of the orchestration system Kubernetes is in many ways an admission that both PaaS and IaaS have failed to meet enterprise needs. It is an attempt to simplify the IaaS ecosystem by forcing applications to be “cloud native” and thus need less infrastructure-specific glue. However, for as much as it standardizes, Kubernetes has not been a complexity win. As an intermediary layer trying to support as many different types of compute configurations as possible, it is a classic “leaky” abstraction, requiring far too much detailed configuration to support each application correctly. Yes, applications have more YAML glue and less Terraform glue,6 but as we’ve discussed, a goal of platform engineering is to reduce the total amount of glue.
Kubernetes is also an example of the second source of complexity we mentioned. Matching the rise of the cloud has been the rise of OSS ecosystems for all types of software. Where once you paid a vendor for your development tools and middleware, now there are thriving and evolving ecosystems for a wide array of development tools, libraries, and even full independent systems like Kubernetes. The problem with OSS is the proliferation of choice. Application teams with specific needs can usually find an OSS solution that is optimal for them but not necessarily for anyone else at the company. The bespoke choice that lets them quickly ship their initial launch eventually turns into a burden, as they must independently manage the maintenance costs that came with their “free, like a puppy”7 OSS choice.
Change #2: Higher Operational Needs
In parallel with this explosion of infrastructure primitives and applications using them came the question of who was going to operate them, and how. If you went back to the 1990s, before the internet took off, and looked at how companies developed and operated their in-house software applications, you would typically find two roles, which in most cases were staffed in entirely separate teams:
- Software developer: Responsible for architecture, coding, testing, etc., leading to software applications being delivered as monolithic distributions, handed off to someone else to operate.
- Systems administrator: Responsible for all aspects of the production operation of software (in-house applications as well as vendor software and OSS) on the company’s computers.
As the internet took off and in-house software became more important to companies’ success, these roles started to mutate. The importance of 24/7 operational support for an increasing number of applications initially led to the growth of operations engineering teams, which tended to be filled with a lot of early-career systems administrators—this was the proving ground they had to face before graduating into a less operational role.
You still see pockets of operations engineering in companies today, but the role is declining. As the 2000s progressed, software developers adopted the “Agile” model of regular releases of incremental functionality, as a better way to get feedback and so ship a better product. Agile brought a challenge to the operations engineering model: with one team taking on all the responsibility for making code changes and pushing for fast release cycles and the other team taking on all the frontline responsibilities when the code had problems, there was some tension. As anyone who lived through it knows, “some tension” is putting it mildly; particularly after an outage caused by something that had been “thrown over the fence,” there was usually a large amount of finger-pointing about which side was to blame. The problem was that there was generally no clear answer, because Agile had blurred the lines of responsibility.
This led to the creation and broad adoption of what the industry now calls DevOps. DevOps was framed as a model to integrate application development and operations activities, and it became associated as much with a culture change as with a set of specific technologies or roles to adopt. That being said, the operational work didn’t go away, and on the ground, teams implemented DevOps in two different ways:
- Split: Keep the separation between operations and development teams, but have the operations team do some amount of development, particularly around creating glue for pushing code to production and integrating it with infrastructure. Thus, the old operations team with operations engineers was now the DevOps team with DevOps engineers.
- Merged: Merge the operations and development teams into one. With this approach, described as “you build it, you run it,” everyone who works on a system is on the same team, with all of them sharing in the operational work (the most salient aspect being part of the on-call rotation). While many teams succeeded with 100% software developers, others were more cross-functional, with specialists to own the glue that pushed code to production and integrated with infrastructure. At some companies, these engineers were also called DevOps engineers.8
In an act of parallel evolution, in about 2004 Google moved away from operations engineering toward something they called site reliability engineering (SRE). In 2015, during the upswing of DevOps popularity, Google published a book on its practices, Site Reliability Engineering: How Google Runs Production Systems (O’Reilly). This caused a lot of excitement, because while many companies had been adopting DevOps, plenty were struggling with the practical complexities of making it work. With its heavy emphasis on reliability-oriented processes and organizational responsibilities, some thought SRE was the silver bullet the industry needed to finally balance operational and development needs, enabling the creation of much more reliable systems.
We would argue that SRE, as it was originally sold, has not been a widespread success outside of Google. The processes were too heavyweight; their success relied too much on the specific cultural capital and organizational focus that came from Google being the world’s biggest search company. This was well summarized by former director of SRE at Google, Dave O’Connor, who after a couple of stints outside Google wrote a post in 2023 titled “6 Reasons You Don’t Need an SRE Team” that concludes, “The next stage in removing our production training wheels as an industry is to tear down the fence between SRE and Product Engineering, and make rational investments in reliability as a mindset, based on specific needs.”
There is no getting away from the needs of operating software. Every company that offers online software systems must have operational support for this software during applicable usage times (which may be working hours, 24/7, or somewhere in between). But how do you manage this in the most cost-effective yet sustainable way possible? You want to limit the places where you must have dedicated operations teams (or, using the terminology introduced earlier, “split” DevOps/SRE teams) and make it as easy as possible for the developers of the software to deploy and operate it themselves, achieving the initial vision of DevOps.
Result: Drowning in the Swamp
So you’ve got more application teams, making more choices, over a more complex set of underlying OSS and cloud primitives. Application teams get into this situation because they want to deliver quickly, and using the best systems of the day that fit the problem (or the systems they know best) will help them do that. Plus, if they’ve gotta own all the operational responsibility for this system themselves, they might as well pick their own poison!
Add to this that application engineers with new features are not the only ones wanting to ship as quickly as possible. The increasing surface of internet-accessible systems has led to an escalation of cyberattacks and vulnerability discoveries, which in turn means that infrastructure and OSS are changing faster to address these risks. We’ve seen upgrade cycles for systems and components move from years to months, and these changes mean work for application teams who must update their glue and retest or even migrate their software in response.
The pressure for change has created a swampy mess of glue mixed with the long-term consequences of individual team decisions. Every new greenfield project adds more choices and glue to this bog of complexity, and over time your developers get stuck in the mire. It’s hard to navigate, slow to move through, and full of hungry operational alligators (or worse, crocs!). How do you extract yourself from this morass? It’s no surprise that we think the answer is platform engineering, and next we will cover the ways in which it helps you do just that.
How Platform Engineering Clears the Swamp
If you’ve been stuck in the over-general swamp, you can appreciate the intellectual appeal of platform engineering. You’re hiring more people in roles like infrastructure, DevTools, DevOps, and SRE engineer, but you never seem able to keep up with the new complexity arising from OSS and cloud systems. Your applications grow more complex, your application developers become less productive, and you need a way out. Building platforms to manage this complexity sounds great.
But building platforms takes significant investment. This includes the costs to build and support them, as well as the overhead associated with limiting application teams’ choices of OSS and cloud primitives. Additionally, establishing a platform engineering team can incur organizational costs through reorganizations, role changes, and the overhead of rolling out a new focus area for the company. In this section, we explain how platforms and platform engineering will justify these investments and deliver long-term value.
Limiting Primitives While Minimizing Overhead
The explosion of choice wasn’t all bad: greenfield applications can ship much faster now than in the past, and application developers feel more autonomy and ownership when they have systems they enjoy using. These benefits often get forgotten when companies start to focus on reducing the support burden and long-term costs that arise from the diversity of choices. In this situation, the first instinct of leadership is to prescribe a set of standards using appeals to authority. “Because I am the expert in databases,” they say, “I will choose which databases you, the application teams, can use.” Or, “I am the architect, so I decide on all of the software tools and packages.” Or, “I am the CTO, so I decide everything.” Inevitably, these experts will struggle to understand the business needs well enough to make optimal choices, and application teams will suffer. Standardization via authority isn’t enough.
Platform engineering recognizes that modern engineering teams should have systems that they enjoy using, provided by teams that are responsive to them as customers and not just focused on cost reduction or their own support burden. Instead of prescribing a set of standards based on appeals to authority, platform engineering takes a customer-focused product approach that curates a small set of primitives able to meet a broad range of requirements. This requires compromises in light of business realities, incremental delivery of good platform architecture, and a willingness to partner directly with application teams and listen to what they need. When done well, you can point to the demonstrated leverage of partnering to use the platform-provided offerings instead of appealing to the authority of the architect, database administrator, CTO, or platform VP. In this way, you can reduce the number of OSS and cloud primitives used, without the worst consequences of top-down mandates.
Reducing Per-Application Glue
On top of reducing the number of primitives in use, platform engineering aims to go one step further and reduce the glue that couples applications to the primitives that remain. It does this by abstracting the primitives into systemic platform capabilities that can meet broader needs, removing most of the application-level glue. To illustrate this, we’ll dive into the common challenge of managing Terraform.
OSS and cloud offerings are complex in a lot of ways, with one of the most costly ways being their configuration—the endless lists of parameters that, if not specified correctly, will eventually lead to issues in production. Nowhere is this more of a problem than in cloud configuration, for which the 2024 state-of-the-art tool is an OSS infrastructure as code (IaC) system called Terraform that provides a perfect illustration of how platform engineering addresses the downsides of glue.
When application engineering teams all started pushing hard for the smorgasbord of the IaaS cloud, most companies decided that the path of least friction was to give each team the power and responsibility to provision their own individual cloud infrastructure with their own configuration. In practice, that meant they became part-time cloud engineering teams, versed in configuration management and infrastructure provisioning. If you want infrastructure that is repeatable, rebuildable, and can be secured and validated, you need a configuration management and provisioning tool like Terraform. So, the common approach was to have application development teams learn Terraform. In our experience, this led to the following progression:
- Most engineers don’t want to learn a whole new toolset for infrequent tasks. Infrastructure setup and provisioning are not an everyday core focus—not even for teams doing mature resiliency testing and regularly rebuilding the system from scratch. So, over time the work would get shunted either to unsuspecting new hires, or to the rare engineers who were interested in DevOps. In the best case this would lead to one or two people evolving into infrastructure provisioning experts who could write Terraform and own all of this for the team. However, most of the time these engineers didn’t stick around on application teams for long, which pushed the work back onto new hires, who usually made a mess of it.
- This shortage of expertise, combined with people cobbling together their own Terraform all over the company, often led leadership to centralize the work across multiple teams (or even the whole company). But rather than centralizing with the goal of building a platform, all the Terraform engineers were just pulled into a team that provided Terraform-writing services.
- These centralized Terraform-writing teams became trapped in a feature shop mindset, taking in work requests and pumping them out. This meant no strong developers (the type who can change the structure of the Terraform to provide better abstractions) wanted to be part of it. Over time, the codebase devolved into a spaghetti mess, which slowed down application teams who wanted something slightly out of the norm and eventually created a security nightmare.
A better path is to realize that you need to do something more coherent than offer centralized Terraform-writing support, and think about how to evolve this group of experts from a “glue” maintenance center into an engineering center that builds things—namely, a platform. This will require you to go one level deeper in understanding your customers’ needs, to develop opinions about which solutions to offer rather than just trying to make it easier for people to get access to whatever they want, and to think about what you can build that takes you beyond just the provisioning step.
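As a hypothetical sketch of that evolution (the module name, source URL, and inputs are all invented for illustration), the platform team might publish an opinionated Terraform module, so that an application team’s entire infrastructure footprint becomes a few lines of declared intent while the resources, security settings, and upgrade path live behind the module boundary:

```
# An application team consumes the platform's curated offering. It never
# touches the raw cloud primitives, so the platform team can evolve the
# implementation without every team rewriting glue.
module "orders_service" {
  source = "git::https://git.example.com/platform/terraform-app-service.git"

  service_name  = "orders"
  owning_team   = "payments"
  instance_size = "standard" # a curated menu, not the cloud's full catalog
}
```

The narrow interface is the point: it limits how much glue can exist per application, and it gives the platform team room to change the implementation without asking every team to follow along.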
As you move into new models for providing underlying infrastructure, it is important to centralize expertise and create efficiencies. Instead of each engineering team hiring their own DevOps and SRE engineers to support the infrastructure, a platform team can pool these experts and expand their remit to identifying broader solutions for the company. This not only supports the one-off changes but permits their expertise to be leveraged to create platforms that abstract the underlying complexity. This is where the magic starts to happen.
Centralizing the Cost of Migrations
We will mention migrations often in this book, as we believe managing migrations is an important part of a platform’s value. Applications and primitives have long but independent lifetimes, during which they each undergo many changes. The combination of these changes creates high maintenance costs. Platform engineering reduces these costs by:
- Reducing the diversity of OSS and cloud systems in use: The fewer primitives you have, the less likely it is that you’ll need to do a migration because of one.
- Encapsulating OSS and vendor systems with APIs: While platform APIs are often imperfect at encapsulating all aspects of the OSS and vendor systems they leverage, even “good enough” APIs that abstract a lot of their implementation will allow the platform to protect its applications from needing to change when the underlying systems change. (See the sketch after this list.)
- Creating observability of platform usage: Platforms can provide various mechanisms to standardize collection of metadata around both their own use and that of underlying OSS and vendor systems. This visibility into the dependency state of the applications using your platform should allow you to ease the burden of upgrades when those dependencies need to change.
- Giving ownership of OSS and cloud systems to teams with software developers: When APIs are later shown to be imperfect, platform teams, unlike traditional infrastructure organizations, have software developers who can write the nontrivial migration tooling that makes the migration transparent to most application teams.
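To illustrate the encapsulation point, here is a minimal hypothetical sketch of a platform module’s internals, assuming (purely for illustration) a queue offering backed by AWS SQS. Applications consume only the module’s inputs and outputs, never the resource itself:

```
variable "queue_name" {
  type = string
}

# Today's implementation detail, owned by the platform team. If a migration
# replaces this resource with a different queueing system, only this module
# changes.
resource "aws_sqs_queue" "this" {
  name = var.queue_name
}

# The stable interface applications actually consume. As long as this
# output keeps its meaning, the applications behind it do not have to move.
output "queue_url" {
  value = aws_sqs_queue.this.url
}
```

When the day comes to migrate off the underlying system, the platform team swaps what sits between the variable and the output, and most application teams never notice.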
Allowing Application Developers to Operate What They Develop
The goal of mature DevOps was to simplify accountability through a “you build it, you own it” approach. Despite this having been a popular idea for over a decade, many companies have not managed to execute on this model. We believe that, for those that have succeeded, a major contributor to this success is the leverage that their platforms provide through abstracting the operational complexity of underlying dependencies.
No one loves being on call. But when teams are only on call for issues caused by their own applications, we have found that a surprising number are willing to take on operational responsibility. After all, why wouldn’t they stand behind the business-critical systems they spend their days creating? For too many companies, however, the operational problems caused by the infrastructure, OSS, and its glue completely dominate the problems in the application code itself.
An example of this arises when applications seeking higher resiliency are deployed across multiple availability zones, cloud regions, or data centers. This leaves application teams exposed to intermittent cloud provider issues such as networking problems, and to the 2 a.m. alerts that inevitably follow. Platform engineering addresses this by building resilient abstractions that handle application failover on behalf of the application teams, reducing the number of late-night wakeup calls they receive.
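One hypothetical ingredient of such an abstraction, assuming for illustration that the platform uses AWS Route 53 DNS failover (all hostnames and identifiers here are invented), is health-checked routing that the platform provisions on the application’s behalf, so traffic shifts away from a failed deployment automatically:

```
variable "zone_id" {
  type = string
}

# Health check the platform attaches to the primary deployment.
resource "aws_route53_health_check" "primary" {
  fqdn              = "orders.primary.example.com"
  type              = "HTTPS"
  port              = 443
  resource_path     = "/healthz"
  failure_threshold = 3
  request_interval  = 30
}

# Primary record: serves traffic while the health check passes.
resource "aws_route53_record" "orders_primary" {
  zone_id         = var.zone_id
  name            = "orders.example.com"
  type            = "CNAME"
  ttl             = 60
  set_identifier  = "primary"
  records         = ["orders.primary.example.com"]
  health_check_id = aws_route53_health_check.primary.id

  failover_routing_policy {
    type = "PRIMARY"
  }
}

# Secondary record: Route 53 fails over to it automatically, so the
# application team is not woken up to flip traffic by hand.
resource "aws_route53_record" "orders_secondary" {
  zone_id        = var.zone_id
  name           = "orders.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "secondary"
  records        = ["orders.secondary.example.com"]

  failover_routing_policy {
    type = "SECONDARY"
  }
}
```

In a real platform this would sit behind the module interface shown earlier, not in application repositories.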
When most of the underlying systems’ operational complexity is hidden behind platform abstractions, this complexity can be owned and operated by your platform team. This requires you to limit the options that you support, so that you can push the abstraction boundary upward into a core set of offerings, each handling a broad set of application use cases. It also requires that you have high operational standards within your platform team, so that application teams are comfortable relying on them.
Yes, building and operating platforms that handle these issues is hard, especially when it comes to getting application teams to accept limitations on their choices. But the only alternatives are either directly exposing your entire organization to these issues or perpetuating your use of operations teams (by any name), and so in turn perpetuating the accountability problems, negative impact on agile development, and finger-pointing.
Empowering Teams to Focus on Building Platforms
If you want to leverage OSS and vendor primitives but reduce the complexity that slows progress later, you need teams that can build platforms to manage those primitives and their complexity. There are four platform-adjacent approaches that are popular today, all of which bring valuable skills to the organization, but none of which are set up to have the combination of focus and skills needed for building platforms. Table 1-1 summarizes these approaches and why they are not suited to the task.
Approach | Focus | Why they struggle to build platforms
--- | --- | ---
Infrastructure | Robust operation of underlying infrastructure | Little focus on abstracting infrastructure to simplify applications, particularly across multiple infrastructure components
DevTools | Developer productivity up to production delivery | Little focus on solving developer productivity challenges related to systems in production running on complex infrastructure
DevOps | Application delivery to production | Little focus on ensuring their automation/tools help the widest possible audience
SRE | System reliability | Little focus on systemic issues other than reliability, often delivering impact through organizational practices instead of developing better systems
Individuals from each of these backgrounds might assert that they personally want to build more platforms rather than glue, but their organization won’t let them. We empathize; we are not describing individuals, but rather how these approaches have evolved within organizations and how organizations typically define the respective teams’ missions. However, the problem remains—individuals’ roles are limited by the mission of their team, and changing a team’s mission is not easy when the greater organization expects it to just do what it always has done.
Platform engineering asks each of these groups of engineers to come out of their silos and work in teams with a broader mission to create platforms that provide balance. This involves:
- For infrastructure teams, balancing infrastructure capabilities with developer-centered simplicity
- For DevTools teams, balancing development experience with production support experience
- For DevOps teams, balancing optimal per-application glue with more general software to support a lot more applications
- For SRE teams, balancing reliability with other system attributes like feature agility, cost efficiency, security, and performance
As a deliberate reset of organizational expectations, platform engineering gives you the ability to create teams that focus on building the technologies to finally clear the swamp.
Wrapping Up
We’re on a complexity collision course, and many of us are already hitting the wall. Whether it’s with the challenge of making DevOps effective, dealing with a million snowflake decisions, managing the increasing complexity of infrastructure as code, or simply dealing with the required upgrades and migrations that come with all software products, we need help. This is the reason that we believe platform engineering is becoming more and more important for the industry. By combining a product mindset with software and systems engineering expertise, you can build platforms that give you the leverage to manage this complexity for your company.
1 We’ll sometimes call these teams your “users” or “customers,” if it makes more sense in the context.
2 For a good diagram of the software lifecycle, see https://oreil.ly/iDM5u.
3 See Jussi Koskinen’s paper on software maintenance costs at https://oreil.ly/EFNZ6.
4 This is literally what they were called in the 2003 AWS vision document (see https://oreil.ly/n4ie_).
5 Other full-service PaaSes that failed to see broad success include Force.com, AWS Elastic Beanstalk, and Google App Engine. As a result, vendors often use the term PaaS for more flexible offerings, which must be combined with other IaaS products and so bring similar complexity problems.
6 We will discuss what this looks like in Chapter 2.
7 As per former Sun Microsystems CEO Scott McNealy, alluding to the long-term cost of adopting either OSS or puppies.
8 In other companies, they were called systems engineers or systems development engineers.
9 This is the platform equivalent of “shadow IT”—systems deployed by departments other than the central department, to fill gaps or bypass limitations and restrictions that have been imposed by central systems.