Web Performance Warrior by Andy Still

Chapter 1. Phase 1: Acceptance
“Performance Doesn’t Come For Free”

The journey of a thousand miles starts with a single step. For a performance warrior, that first step is the realization that good performance won’t just happen: it will require time, effort, and expertise.

Often this realization is reached in the heat of battle, as your systems suffer under the weight of performance problems. Users are complaining, the business is losing money, servers are falling over, and a lot of angry people are demanding that something be done. Panicked actions follow: emergency changes, late nights, scattergun fixes, new kit. Eventually a resolution is found, and things settle down again.

When things calm down, most people will lose interest and go back to their day jobs. Those that retain interest are performance warriors.

In an ideal world, you could start your journey to being a performance warrior before this stage by eliminating performance problems before they start to impact the business.

Convincing Others

The next step after realizing that performance won’t come for free is convincing the rest of your business.

Perhaps you are lucky and have an understanding company that will listen to your concerns and allocate time, money, and resources to you to resolve these issues and a development team that is on board with the process and wants to work with you to make it happen. In this case, skip ahead to Chapter 2.

Still reading? Then you are working in a typical organization that has only a limited interest in the performance of its web systems. It becomes the job of the performance warrior to convince colleagues that it is something they need to be concerned about.

For many people across the company (both technical and non-technical, senior and junior) in all types of business (old and new, traditional and techy), this will be a difficult step to take. It involves an acceptance that performance won’t just come along with good development but needs to be planned, tested, and budgeted for. This means that appropriate time, money, and effort will have to be provided to ensure that systems are performant.

You must be prepared to meet this resistance and understand why people feel this way.

Developer Objections

It may sound obvious that performance will not just happen on its own, but many developers need to be educated to understand this.

A lot of teams have never considered performance because they have never found it to be an issue. Anything written by a team of reasonably competent developers can probably be assumed to be reasonably performant. By this I mean that for a single user, on a test platform with a test-sized data set, it will perform to a reasonable level. We can hope that developers should have enough pride in what they are producing to ensure that the minimum standard has been met. (OK, I accept that this is not always the case.)

For many systems, the rigors of production are not massively greater than the test environment, so performance doesn’t become a consideration. Or if it turns out to be a problem, it is addressed on the basis of specific issues that are treated as functional bugs.

Performance can sneak up on teams that have not had to deal with it before.

Developers often feel sensitive to the implications of putting more of a performance focus into the development process. It is important to appreciate why this may be the case:

Professional pride
It is an implied criticism of the quality of work they are producing. While we mentioned the naiveté of business users in expecting performance to just come from nowhere, there is often a sense among developers that good work will automatically perform well, and they regard lapses in performance as a failure on their part.
Fear of change
There is a natural resistance to change. The additional work that may be needed to bring the performance of systems to the next level may well take developers out of their comfort zone. This will then lead to a natural fear that they will not be able to manage the new technologies, working practices, etc.
Fear for their jobs
The understandable fear among many developers, when admitting that the work they have done so far is not performant, is that the business will see it as an admission that they are not up to the job and should therefore be replaced. Developers are afraid, in other words, that the problem will be seen not as a sign that more time, skills, and money need to go into performance, but simply as a sign of having the wrong people.

Business Objections

Objections you face from within the business are usually due to the increased budget or timescales that will be required to ensure better performance.

Arguments will usually revolve around the following core themes:

How hard can it be?

There is no frame of reference for the business to be able to understand the unique challenges of performance in complex systems. It may be easy for a nontechnical person to understand the complexities of the system’s functional requirements, but the complexities caused by doing these same activities at scale are not as apparent.

Beyond that, business leaders often share the belief that if developers have done their job well, then the system will be performant.

There needs to be an acceptance that this is not the case and that this is not the fault of the developer. Getting a truly performant system requires dedicated time, effort, and money.

It worked before. Why doesn’t it work now?

This question is regularly seen in evolving systems. As levels of usage and data quantities grow, usually combined with additional functionality, performance will start to suffer. 

Performance challenges will become exponentially more complex as the footprint of a system grows (levels of usage, data quantities, additional functionality, interactions between systems, etc.). This is especially true of a system that is carrying technical debt (i.e., most systems).

Often this can be illustrated to the business by producing visual representations of the growth of the system. However, it will then often lead to the next argument.

Why didn’t you build it properly in the first place?

Performance problems are an understandable consequence of system growth, yet the fault is often placed at the door of developers for not building a system that can scale.  

There are several counterarguments to that:

  • The success criteria for the system and levels of usage, data, and scaling that would eventually be required were not defined or known at the start, so the developers couldn’t have known what they were working toward.
  • Time or money wasn’t available to invest in building the system that would have been required to scale.
  • The current complexity of the system was not anticipated when the system was first designed.
  • It would actually have been irresponsible to build the system for this level of usage at the start of the process, when the evolution of the system and its usage were unknown. Attempts to create a scalable system may actually have resulted in more technical debt. Millions of hours of developer time are wasted every year in supporting systems that were over-engineered because of overly ambitious usage expectations set at the start of a project.

Although all these arguments may be valid, often the explanation for what happened is much simpler. Developers are only human, and high-volume systems create complex challenges. Despite their best efforts, developers make decisions that in hindsight turn out to be wrong or that fail to anticipate how components will interact.

Action Plan

Separate Performance Validation, Improvement, and Optimization from Standard Development

A simple step: if no one realizes that performance requires work, start pointing it out. When estimating or doing sprint planning, create distinct tasks for performance optimization and validation. Highlight the importance so that, if performance is not explicitly put into the development plan by the organization, it has to make a conscious choice not to do so.

Complete a Performance Maturity Assessment

This is an exercise in assessing how mature your performance process is. Evaluate your company's processes, and determine how well suited they are to ensuring that the application being built is suitably performant. Also evaluate them against industry best practice (or the best practices that you feel should be introduced; remember to be realistic).

Produce this as a document with a score to indicate the current state of performance within the company.
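To make the scoring concrete, here is a minimal sketch of how such an assessment could be reduced to a single number. The practice areas and the 0–4 rating scale are my own illustrative assumptions, not a standard; substitute whatever areas matter to your organization.

```python
# Illustrative performance maturity score (assumed areas and scale).
# Each area is rated 0 (not practiced at all) to 4 (fully embedded).
RATINGS = {
    "performance requirements defined": 1,
    "performance tasks in sprint planning": 0,
    "load testing before release": 2,
    "production performance monitoring": 3,
    "capacity planning / scalability review": 1,
}

MAX_RATING = 4  # top of the assumed 0-4 scale

def maturity_score(ratings):
    """Return overall maturity as a percentage of the maximum possible."""
    total = sum(ratings.values())
    possible = MAX_RATING * len(ratings)
    return round(100 * total / possible)

if __name__ == "__main__":
    print(f"Performance maturity score: {maturity_score(RATINGS)}%")
```

A single percentage like this is easy to put in front of the business and to revisit after each improvement, even though the real value lies in the per-area ratings behind it.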

Define a Strategy and Roadmap to Good Performance

Create an explicit plan for how to get from where you are to where you need to be. This should be in achievable, incremental steps and include some idea of the time, effort, and costs that will be involved. It is important that developers, testers, managers, and others have input into this process so that they buy into the resulting plan.

Once the roadmap is created, regularly update and track progress against it. Every step along the roadmap should increase your performance maturity score.
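One lightweight way to track progress is to keep the roadmap as structured data and report against it. The steps and target scores below are invented examples for illustration only:

```python
# Illustrative roadmap tracker: each step records the maturity score it
# is expected to reach once complete. Steps and numbers are invented.
ROADMAP = [
    {"step": "Add performance tasks to sprint planning", "target_score": 40, "done": True},
    {"step": "Introduce load testing before releases",   "target_score": 55, "done": True},
    {"step": "Set up production performance monitoring", "target_score": 70, "done": False},
    {"step": "Establish regular capacity planning",      "target_score": 85, "done": False},
]

def progress(roadmap):
    """Return (number of steps completed, maturity score reached so far)."""
    completed = [s for s in roadmap if s["done"]]
    current = max((s["target_score"] for s in completed), default=0)
    return len(completed), current

if __name__ == "__main__":
    done, score = progress(ROADMAP)
    print(f"{done}/{len(ROADMAP)} steps complete, maturity score {score}")
```

Reviewing a simple report like this at regular intervals keeps the roadmap visible and makes it obvious when progress has stalled.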

Performance won’t come for free. This is your chance to illustrate to your business what is needed.
