The journey of a thousand miles starts with a single step. For a performance warrior, that first step is the realization that good performance won’t just happen: it will require time, effort, and expertise.
Often this realization is reached in the heat of battle, as your systems are suffering under the weight of performance problems. Users are complaining, the business is losing money, servers are falling over, and there are a lot of angry people demanding that something be done about it. Panicked actions follow: emergency changes, late nights, scattergun fixes, new kit. Eventually a resolution will be found, and things will settle down again.
When things calm down, most people will lose interest and go back to their day jobs. Those that retain interest are performance warriors.
In an ideal world, you could start your journey to being a performance warrior before this stage by eliminating performance problems before they start to impact the business.
The next step after realizing that performance won’t come for free is convincing the rest of your business.
Perhaps you are lucky: you have an understanding company that will listen to your concerns and allocate time, money, and resources to resolving these issues, and a development team that is on board with the process and wants to work with you to make it happen. In this case, skip ahead to Chapter 2.
Still reading? Then you are working in a typical organization that has only a limited interest in the performance of its web systems. It becomes the job of the performance warrior to convince colleagues that this is something they need to be concerned about.
For many people across the company (both technical and non-technical, senior and junior) in all types of business (old and new, traditional and techy), this will be a difficult step to take. It involves an acceptance that performance won’t just come along with good development but needs to be planned, tested, and budgeted for. This means that appropriate time, money, and effort will have to be provided to ensure that systems are performant.
You must be prepared to meet this resistance and understand why people feel this way.
It may sound obvious that performance will not just happen on its own, but many developers need to be educated to understand this.
A lot of teams have never considered performance because they have never found it to be an issue. Anything written by a team of reasonably competent developers can probably be assumed to be reasonably performant. By this I mean that for a single user, on a test platform with a test-sized data set, it will perform to a reasonable level. We can hope that developers have enough pride in what they are producing to ensure that this minimum standard has been met. (OK, I accept that this is not always the case.)
For many systems, the rigors of production are not massively greater than the test environment, so performance doesn’t become a consideration. Or if it turns out to be a problem, it is addressed on the basis of specific issues that are treated as functional bugs.
Performance can sneak up on teams that have not had to deal with it before.
Developers often feel sensitive about the implications of bringing a stronger performance focus into the development process, and it is important to appreciate why this may be the case.
Objections you face from within the business are usually due to the increased budget or timescales that will be required to ensure better performance.
Arguments will usually revolve around the following core themes:
There is no frame of reference for the business to be able to understand the unique challenges of performance in complex systems. It may be easy for a nontechnical person to understand the complexities of the system’s functional requirements, but the complexities caused by doing these same activities at scale are not as apparent.
Beyond that, business leaders often share the belief that if developers have done their jobs well, then the system will be performant.
There needs to be an acceptance that this is not the case and that this is not the fault of the developer. Getting a truly performant system requires dedicated time, effort, and money.
This problem is regularly seen in evolving systems. As levels of usage and data quantities grow, usually combined with additional functionality, performance will start to suffer.
Performance challenges will become exponentially more complex as the footprint of a system grows (levels of usage, data quantities, additional functionality, interactions between systems, etc.). This is especially true of a system that is carrying technical debt (i.e., most systems).
Often this can be illustrated to the business by producing visual representations of the growth of the system. However, it will then often lead to the next argument.
Performance problems are an understandable consequence of system growth, yet the fault is often placed at the door of developers for not building a system that can scale.
There are several counterarguments to this. Although they may all be valid, the real explanation is often much simpler: developers are only human, and high-volume systems create complex challenges. Despite their best efforts, developers make decisions that in hindsight turn out to be wrong, or that fail to anticipate how components will interact at scale.
A simple step: if no one realizes that performance requires work, start pointing it out. When estimating or doing sprint planning, create distinct tasks for performance optimization and validation. Highlight their importance so that if the organization leaves performance out of the development plan, it does so as a conscious choice.
This is an exercise in assessing how mature your performance process is. Evaluate your company's processes, and determine how well suited they are to ensuring that the application being built is suitably performant. Also evaluate them against industry best practice (or the best practices that you feel should be introduced; remember to be realistic).
Produce this as a document with a score to indicate the current state of performance within the company.
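To make the idea of a scored assessment concrete, here is a minimal sketch of how such a scorecard might be tallied. The practice areas and the 0-to-5 scale are illustrative assumptions, not a standard; choose categories that match your own organization's processes.

```python
# Hypothetical performance maturity scorecard.
# Each practice area is scored 0 (absent) to 5 (fully embedded);
# the overall maturity is reported as a percentage of the maximum.
AREAS = {
    "Performance requirements defined": 2,
    "Performance tasks in sprint planning": 1,
    "Load testing before release": 0,
    "Production monitoring and alerting": 3,
    "Capacity planning": 1,
}

def maturity_score(areas, max_per_area=5):
    """Return the overall maturity as a whole-number percentage."""
    total = sum(areas.values())
    return round(100 * total / (max_per_area * len(areas)))

if __name__ == "__main__":
    print(f"Performance maturity: {maturity_score(AREAS)}%")
    for area, score in AREAS.items():
        print(f"  {area}: {score}/5")
```

A single number like this is crude, but it gives the business something trackable: rerun the assessment at intervals and the score should rise as roadmap steps are completed.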
Create an explicit plan for how to get from where you are to where you need to be. This should consist of achievable, incremental steps, with some indication of the time, effort, and costs involved. It is important that developers, testers, managers, and others have input into this process so that everyone buys into it.
Once the roadmap is created, regularly update and track progress against it. Every step along the roadmap should increase your performance maturity score.
Performance won’t come for free. This is your chance to illustrate to your business what is needed.