4 performance testing pitfalls

Learn how to avoid these common application performance testing mistakes.

By Ian Molyneaux
March 7, 2017

Promoting effective benchmarking of software application performance and scalability has always been a passion of mine. I have always tried to dispel the misconception that performance testing is a nice-to-have, often last-minute requirement. Like any testing discipline, it has many misconceptions and pitfalls that can trap the unwary performance engineer. Here are four of my favorite fails, and what you can do to dodge them.

1. Assuming performance testing is only useful for end-to-end testing

This most certainly is not the case. You can usefully performance test pretty much as soon as you have something that can be executed. Part of the DevOps message involves making functional testing part of continuous integration (CI), so why not performance testing? A recent consulting engagement of mine involved implementing performance testing as part of the client's agile development model: in other words, benchmarking and trending code releases at a very early stage, across builds within a sprint, so that any performance or scalability regressions could be quickly identified and corrected. This resulted in the twin win of markedly improving the quality of delivery and mitigating the accidental promotion of performance defects into higher levels of testing or production.
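
To make this concrete, here is a minimal sketch (not taken from the original engagement) of a CI gate that compares a build's benchmark timings against the previous accepted build and fails the pipeline on a regression. The file names, use-case keys, and 10% threshold are hypothetical and would need to match your own benchmark output.

```python
# Hypothetical CI gate: fail the build if this build's median response time
# regresses more than 10% against the stored baseline. File names, keys,
# and the threshold are illustrative only.
import json
import statistics
import sys

BASELINE_FILE = "baseline_timings.json"   # medians from the previous accepted build
CURRENT_FILE = "current_timings.json"     # raw samples from this build's benchmark run
MAX_REGRESSION = 0.10                     # 10% tolerance

def load(path):
    with open(path) as f:
        return json.load(f)

def main():
    baseline = load(BASELINE_FILE)          # e.g. {"login": 0.42, "search": 0.91}
    current_samples = load(CURRENT_FILE)    # e.g. {"login": [0.40, 0.44, ...], ...}

    failures = []
    for use_case, samples in current_samples.items():
        median = statistics.median(samples)
        old = baseline.get(use_case)
        if old is not None and median > old * (1 + MAX_REGRESSION):
            failures.append(f"{use_case}: {median:.3f}s vs baseline {old:.3f}s")

    if failures:
        print("Performance regression detected:")
        print("\n".join(failures))
        sys.exit(1)          # non-zero exit fails the CI stage
    print("No performance regressions against baseline.")

if __name__ == "__main__":
    main()
```

Trending these per-build results over a sprint gives exactly the early-warning signal described above, without waiting for an end-to-end test cycle.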

2. Performance test environment (mis)management

The performance testing environment must be fit for purpose; otherwise, you risk misleading results. You need an accurate understanding of how the environment differs from production in terms of configuration, horizontal and vertical scale, and database content. There must be a reliable process to restore the environment to a known state and, perhaps most important of all, test environments should be centrally owned, managed, and provisioned. Finally, unless you really have no choice, reserve performance testing environments for performance testing, as shared usage greatly increases the risk of environment inconsistency.
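
One lightweight way to keep those differences from production explicit is a parity check run before each test cycle. The sketch below is purely illustrative: the snapshot files and keys (host counts, data volumes, key config values) are hypothetical stand-ins for whatever your own environment inventory records.

```python
# Minimal sketch of an environment-parity check. The snapshot files and keys
# are hypothetical; the point is simply to surface differences from production
# before a performance test run begins.
import json

def load_snapshot(path):
    with open(path) as f:
        return json.load(f)   # e.g. {"web_nodes": 4, "db_rows_orders": 12000000, ...}

def diff(prod, test):
    keys = sorted(set(prod) | set(test))
    return [(k, prod.get(k), test.get(k)) for k in keys if prod.get(k) != test.get(k)]

if __name__ == "__main__":
    differences = diff(load_snapshot("prod_snapshot.json"),
                       load_snapshot("perf_env_snapshot.json"))
    for key, prod_val, test_val in differences:
        print(f"{key}: production={prod_val} perf-env={test_val}")
    if not differences:
        print("Performance environment matches the recorded production snapshot.")
```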

3. Not creating an accurate workload model

The workload model (WLM) is the foundation of any performance testing requirement, so it must be an accurate reflection of anticipated application usage. Taking shortcuts with the design risks invalidating your testing. You should consider the WLM as the blueprint for your performance testing requirement, describing all the assets that need to be created to performance test effectively. It should always be based on empirical data collected by the business and validated by solution architects. The WLM must describe load distribution across use cases for volume, stress, and soak test scenarios, including realistic pacing and an appropriate distribution and provision of load injection. Where possible, it should also describe the performance metrics to be collected and trended, particularly those specific to the application tech stack.
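
One practical way to keep a WLM testable rather than buried in a document is to capture it as data. The sketch below is illustrative only: the use cases, load shares, pacing values, and scenario sizes are invented, not a recommendation.

```python
# Illustrative sketch of a workload model captured as data: use cases, their
# share of the load, pacing, and the scenarios and metrics to run. All figures
# and names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    share_of_load: float      # fraction of total virtual users; shares should sum to 1.0
    pacing_seconds: float     # pacing/think time between iterations

@dataclass
class WorkloadModel:
    use_cases: list
    scenarios: dict = field(default_factory=dict)   # scenario name -> peak virtual users
    metrics: list = field(default_factory=list)     # infrastructure/app KPIs to capture

wlm = WorkloadModel(
    use_cases=[
        UseCase("browse_catalogue", 0.55, pacing_seconds=30),
        UseCase("search", 0.30, pacing_seconds=20),
        UseCase("checkout", 0.15, pacing_seconds=60),
    ],
    scenarios={"volume": 500, "stress": 1500, "soak": 400},
    metrics=["cpu_percent", "available_memory", "db_connection_pool_usage"],
)

# Sanity check: the load distribution must account for all traffic.
assert abs(sum(u.share_of_load for u in wlm.use_cases) - 1.0) < 1e-9
```

Because the model is data, the same definition can drive test-tool configuration and be validated against the empirical usage figures supplied by the business.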

4. Lack of environment metrics

It is vital that you monitor the environment when performance testing. The current crop of performance test toolsets provides plenty of useful analysis from the perspective of the application client; however, to effectively triage problems or establish a meaningful performance benchmark, you must understand how the load you generate impacts the hosting infrastructure. At an absolute minimum, monitor CPU, available memory, disk and network I/O, and any additional metrics identified as part of the WLM. As a final note, make sure the monitoring interval you select will provide enough data for meaningful statistical analysis. I recommend sampling at least once every 30 seconds.
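
If you have no dedicated monitoring tooling on the test hosts, even a simple sampler is better than nothing. The sketch below assumes the third-party psutil package is available (pip install psutil); the output file name and 30-second interval are illustrative.

```python
# Minimal host-metrics sampler, assuming the psutil package is installed.
# Captures CPU, available memory, and cumulative disk/network I/O counters
# every 30 seconds to a CSV for later trending. Runs until interrupted (Ctrl+C).
import csv
import time
import psutil

INTERVAL_SECONDS = 30
OUTPUT_FILE = "host_metrics.csv"

with open(OUTPUT_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "available_mem_bytes",
                     "disk_read_bytes", "disk_write_bytes",
                     "net_bytes_sent", "net_bytes_recv"])
    while True:
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        writer.writerow([time.time(),
                         psutil.cpu_percent(interval=None),
                         psutil.virtual_memory().available,
                         disk.read_bytes, disk.write_bytes,
                         net.bytes_sent, net.bytes_recv])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```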

Conclusion

Effective performance testing relies on an accurate workload model, a dedicated test environment that is truly fit for purpose, and confidence that you are monitoring and capturing all the essential KPI metrics that relate to your application. If you choose to shortcut any of these key requirements, you risk a misleading view of your application's true performance and scalability, which could result in a very black Friday.


This post is a collaboration between HPE and O’Reilly. See our statement of editorial independence.
