Preface

When I published the original edition of this book in January 2009, I had no idea how popular it would prove to be within the performance testing community. I received many emails from far and wide thanking me for putting pen to paper, which was an unexpected and very pleasant surprise. This book was and still is primarily written for the benefit of those who would like to become performance testing specialists. It is also relevant for IT professionals who are already involved in performance testing, perhaps as part of a web-first company (large or small), especially if they are directly responsible for the performance of the systems they work with.

While I believe that the fundamentals discussed in the original edition still hold true, a lot has happened in the IT landscape since 2009 that has impacted the way software applications are deployed and tested. Take cloud computing, for example: very much a novelty in 2009, with very few established cloud vendors. In 2014 the cloud is pretty much the norm for web deployment, with on-the-fly environment spin-up and spin-down for dev, test, and production requirements. I have added cloud considerations to existing chapter content where appropriate.

Then consider the meteoric rise of the mobile device, which, as of 2014, is expected to be the largest source of consumer traffic on the Internet. I made passing mention of it in the original edition, which I have now expanded into a whole new chapter devoted to performance testing the mobile device. End-user monitoring, or EUM, has also come of age in the past five years. It has a clear overlap with performance testing, so I have added two new chapters discussing how EUM data is an important component in understanding the real-world performance of your software application.

Pretty much all of the original chapters and appendixes have been revised and expanded with new material that I am confident will be of benefit to those involved with performance testing, be they novices or seasoned professionals. Businesses in today’s world continue to live and die by the performance of mission-critical software applications. Sadly, applications are often still deployed without adequate testing for scalability and performance. To reiterate, effective performance testing identifies performance bottlenecks quickly and early so they can be rapidly triaged, allowing you to deploy with confidence. The Art of Application Performance Testing, Second Edition, addresses a continuing need in the marketplace for reference material on this subject. However, this is still not a book on how to tune technology X or optimize technology Y. I’ve intentionally stayed well away from specific tech stacks except where they have a significant impact on how you go about performance testing. My intention remains to provide a commonsense guide that focuses on planning, execution, and interpretation of results and is based on over 15 years of experience managing performance testing projects.

In the same vein, I won’t touch on any particular industry performance testing methodology because—truth be told—they (still) don’t exist. Application performance testing is a unique discipline and is (still) crying out for its own set of industry standards. I’m hopeful that the second edition of this book will continue to carry the flag for the appearance of formal process.

My career has moved on since 2009, and although I continue to work for a company that’s passionate about performance, this book remains tool- and vendor-neutral. The processes and strategies described here can be used with any professional automated testing solution.

Hope you like the revised and updated edition!

—Ian Molyneaux, 2014

Audience

This book is intended as a primer for anyone interested in learning or updating their knowledge about application performance testing, be they seasoned software testers or complete novices.

I would argue that performance testing is very much an art, in line with other software disciplines, and should not be undertaken without a consistent methodology and appropriate automation tooling. To become a seasoned performance tester takes many years of experience; however, the basic skills can be learned in a comparatively short time with appropriate instruction and guidance.

The book assumes that readers have some familiarity with software testing techniques, though not necessarily performance-related ones. As a further prerequisite, effective performance testing is really possible only with the use of automation. Therefore, to get the most from the book, you should have some experience or at least awareness of automated performance testing tools.

Some additional background reading that you may find useful includes the following:

  • Web Load Testing for Dummies by Scott Barber with Colin Mason (Wiley)

  • .NET Performance Testing and Optimization by Paul Glavich and Chris Farrell (Red Gate Books)

  • Web Performance Tuning by Patrick Killelea (O’Reilly)

  • Web Performance Warrior by Andy Still (O’Reilly)

About This Book

Based on a number of my jottings (that never made it to the white paper stage) and more than a decade of hard experience, this book is designed to explain why it is so important to performance test any application before deploying it. The book leads you through the steps required to implement an effective application performance testing strategy.

Here are brief summaries of the book’s chapters and appendixes:

Chapter 1, Why Performance Test?, discusses the rationale behind application performance testing and looks at performance testing in the IT industry from a historical perspective.

Chapter 2, Choosing an Appropriate Performance Testing Tool, discusses the importance of automation and of selecting the right performance testing tool.

Chapter 3, The Fundamentals of Effective Application Performance Testing, introduces the building blocks of effective performance testing and explains their importance.

Chapter 4, The Process of Performance Testing, suggests a best-practice approach. It builds on Chapter 3, applying its requirements to a model for application performance testing. The chapter also includes a number of case studies to help illustrate best-practice approaches.

Chapter 5, Interpreting Results: Effective Root-Cause Analysis, teaches effective root-cause analysis. It discusses the typical output of a performance test and how to interpret results.

Chapter 6, Performance Testing and the Mobile Client, discusses performance and the mobile device and the unique challenges of performance testing mobile clients.

Chapter 7, End-User Experience Monitoring and Performance, describes the complementary relationship between end-user experience monitoring and performance testing.

Chapter 8, Integrating External Monitoring and Performance Testing, explains how to integrate end-user experience monitoring and performance testing.

Chapter 9, Application Technology and Its Impact on Performance Testing, discusses the impact of particular software tech stacks on performance testing. Although the approach outlined in this book is generic, certain tech stacks have specific requirements in the way you go about performance testing.

Chapter 10, Conclusion, is something I omitted from the original edition. I thought it would be good to end with a look at future trends for performance testing and understanding the end-user experience.

Appendix A, Use-Case Definition Example, shows how to prepare use cases for inclusion in a performance test.

Appendix B, Proof of Concept and Performance Test Quick Reference, reiterates the practical steps presented in the book.

Appendix C, Performance and Testing Tool Vendors, lists sources for the automation technologies required by performance testing and performance analysis. Although I have attempted to include the significant tool choices available at the time of writing, this list is not intended to be definitive or an endorsement of any particular vendor.

Appendix D, Sample Monitoring Templates: Infrastructure Key Performance Indicator Metrics, provides some examples of the sort of key performance indicators you would use to monitor server and network performance as part of a typical performance test configuration.

Appendix E, Sample Project Plan, provides an example of a typical performance test plan based on Microsoft Project.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic

Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width

Used for program listings and also within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width bold

Shows commands or other text that should be typed literally by the user.

Constant width italic

Shows text that should be replaced with user-supplied values or by values determined by context.

Tip

Signifies a tip or suggestion.

Note

Indicates a general note.

Glossary

The following terms are used in this book:

APM

Application performance monitoring. Tooling that provides deep-dive analysis of application performance.

APMaaS

APM as a Service (in the cloud).

Application landscape

A generic term describing the server and network infrastructure required to deploy a software application.

AWS

Amazon Web Services.

CDN

Content delivery network. Typically a service that provides remote hosting of static (and, increasingly, nonstatic) website content, improving the end-user experience by serving that content from locations close to the user’s geographic location.

CI

Continuous integration. The practice, in software engineering, of merging all developer working copies with a shared mainline several times a day. It was first named and proposed as part of extreme programming (XP). Its main aim is to prevent integration problems, referred to as integration hell in early descriptions of XP. (Definition courtesy of Wikipedia.)

DevOps

A software development method that stresses communication, collaboration, and integration between software developers and information technology professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services. (Definition courtesy of Wikipedia.)

EUM

End-user monitoring. A generic term for discrete monitoring of end-user response time and behavior.

IaaS

Infrastructure as a Service (in the cloud).

ICA

Independent Computing Architecture. A Citrix proprietary protocol.

ITIL

Information Technology Infrastructure Library.

ITPM

Information technology portfolio management.

ITSM

Information technology service management.

JMS

Java Message Service (formerly Java Message Queue).

Load injector

A PC or server used as part of an automated performance testing solution to simulate real end-user activity.

IBM/WebSphere MQ

IBM’s message-oriented middleware.

Pacing

A delay added to control the execution rate, and by extension the throughput, of scripted use-case iterations within a performance test.

PaaS

Platform as a Service (in the cloud).

POC

Proof of concept. Describes a pilot project often included as part of the sales cycle. It enables customers to compare the proposed software solution to their current application and thereby employ a familiar frame of reference. Often used interchangeably with proof of value.

RUM

Real-user monitoring. The passive form of end-user experience monitoring.

SaaS

Software as a Service (in the cloud).

SOA

Service-oriented architecture.

SUT

System under test. The configured performance test environment.

Think time

Similar to pacing, think time refers to pauses within scripted use cases representing human interaction with a software application. Used more to provide a realistic queueing model for requests than for throughput control. (Both think time and pacing appear in the sketch at the end of this glossary.)

Timing

A component of a transaction. Typically a discrete user action you are interested in timing, such as log in or add to bag.

Transaction

A typical piece of application functionality that has clearly defined start and end points, for example, the action of logging into an application or carrying out a search. Often used interchangeably with the term use case.

UEM

User experience monitoring. A generic term for monitoring and trending end-user experience, usually of live application deployments.

Use case, user journey

A set of end-user transactions that represent typical application activity. A typical use case might be to log in, navigate to a search dialog, enter a search string, click the search button, and log out. Use cases form the basis of automated performance testing.

WOSI

Windows Operating System Instance. Basically the (hopefully licensed) copy of Windows running on your workstation or server.
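
To make some of these terms concrete, here is a minimal sketch, in Python, of how a single virtual user might execute a scripted use case. It is illustrative only: the URLs, credentials, timing helper, and pacing value are invented, and any professional performance testing tool provides equivalents of all of this in its own scripting layer.

    import random
    import time
    from contextlib import contextmanager

    import requests  # any HTTP client would do; requests is assumed to be installed

    BASE_URL = "https://example.com"   # hypothetical system under test
    PACING_SECONDS = 60                # target interval between use-case iterations

    @contextmanager
    def timing(name):
        """Record the response time of one discrete user action (a 'timing')."""
        start = time.monotonic()
        try:
            yield
        finally:
            print(f"{name}: {time.monotonic() - start:.3f}s")

    def browse_and_search(session):
        """One scripted use case built from individually timed transactions."""
        with timing("log_in"):
            session.post(f"{BASE_URL}/login", data={"user": "test01", "password": "secret"})

        time.sleep(random.uniform(3, 8))      # think time: simulated human pause

        with timing("search"):
            session.get(f"{BASE_URL}/search", params={"q": "widgets"})

        time.sleep(random.uniform(2, 5))      # think time before logging out

        with timing("log_out"):
            session.get(f"{BASE_URL}/logout")

    def run_virtual_user(iterations=10):
        session = requests.Session()
        for _ in range(iterations):
            started = time.monotonic()
            browse_and_search(session)
            # Pacing: wait out the remainder of the target interval so each
            # iteration starts at a controlled rate, which in turn controls the
            # request throughput this virtual user generates.
            elapsed = time.monotonic() - started
            time.sleep(max(0.0, PACING_SECONDS - elapsed))

The essential distinction is that think time inserts human-scale pauses between the actions within a use case, while pacing governs how often a virtual user starts a new iteration, and therefore the overall throughput the test generates.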

Using Code Examples

This book is here to help you get your job done. In general, you may use the code in this book in your programs and documentation. You do not need to contact us for permission unless you’re reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing a CD-ROM of examples from O’Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product’s documentation does require permission.

We appreciate, but do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: The Art of Application Performance Testing, Second Edition, by Ian Molyneaux. Copyright 2015 Ian Molyneaux, 978-1-491-90054-3.

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at .

Safari® Books Online

Safari Books Online is an on-demand digital library that delivers expert content in both book and video form from the world’s leading authors in technology and business.

Technology professionals, software developers, web designers, and business and creative professionals use Safari Books Online as their primary resource for research, problem solving, learning, and certification training.

Safari Books Online offers a range of plans and pricing for enterprise, government, education, and individuals.

Members have access to thousands of books, training videos, and prepublication manuscripts in one fully searchable database from publishers like O’Reilly Media, Prentice Hall Professional, Addison-Wesley Professional, Microsoft Press, Sams, Que, Peachpit Press, Focal Press, Cisco Press, John Wiley & Sons, Syngress, Morgan Kaufmann, IBM Redbooks, Packt, Adobe Press, FT Press, Apress, Manning, New Riders, McGraw-Hill, Jones & Bartlett, Course Technology, and hundreds more. For more information about Safari Books Online, please visit us online.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

  • O’Reilly Media, Inc.
  • 1005 Gravenstein Highway North
  • Sebastopol, CA 95472
  • 800-998-9938 (in the United States or Canada)
  • 707-829-0515 (international or local)
  • 707-829-0104 (fax)

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at http://bit.ly/art-app-perf-testing.

To comment or ask technical questions about this book, send email to .

For more information about our books, courses, conferences, and news, see our website at http://www.oreilly.com.

Find us on Facebook: http://facebook.com/oreilly

Follow us on Twitter: http://twitter.com/oreillymedia

Watch us on YouTube: http://www.youtube.com/oreillymedia

Acknowledgments

Many thanks to everyone at O’Reilly who helped to make this book possible and put up with the fumbling efforts of a novice author. These include my editor, Andy Oram; assistant editor, Isabel Kunkle; managing editor, Marlowe Shaeffer; Robert Romano for the figures and artwork; Jacquelynn McIlvaine and Karen Tripp for setting up my blog and providing me with the materials to start writing; and Karen Tripp and Keith Fahlgren for setting up the DocBook repository and answering all my questions.

For the updated edition, I would also like to thank my assistant editor, Allyson MacDonald, and development editor, Brian Anderson, for their invaluable feedback and guidance.

In addition, I would like to thank my former employer and now partner, Compuware Corporation, for their kind permission to use screenshots from a number of their performance solutions to help illustrate points in this book. I would also like to thank the following specialists for their comments and assistance on a previous draft: Peter Cole, formerly president and CTO of Greenhat, for his help with understanding and expanding on the SOA performance testing model; Adam Brown of Quotium; Scott Barber, principal and founder of the Association for Software Testing; David Collier-Brown, formerly of Sun Microsystems; Matt St. Onge; Paul Gerrard, principal of Gerrard Consulting; Francois MacDonald, formerly of Compuware’s Professional Services division; and Alexandre Mechain, formerly of Compuware France and now with AppDynamics.

I would also like to thank my esteemed colleague Larry Haig for his invaluable insight and assistance with the new chapters on end-user monitoring and its alignment with performance testing.

Finally, I would like to thank the many software testers and consultants whom I have worked with over the last decade and a half. Without your help and feedback, this book would not have been written!
