It Can’t Be the Network

As it turned out, the eVersity project was canceled before the application made it into production (and by canceled, I mean that the client just called one day to tell us that their entire division had been eliminated), so we never got a chance to see how accurate our performance testing had been. On the bright side, it meant that the team was available for the client-server to Web call-center conversion project that showed up a couple of weeks later.

The first several months of the project were uneventful from a performance testing perspective. Sam and the rest of the developers kept me in the loop from the beginning. Jim, the client VP who commissioned the project, used to be a mainframe developer who specialized in performance, so we didn’t have any trouble with the contract or deliverables definitions related to performance, and the historical system usage was already documented for us. Sure, we had the typical environment, test data, and scripting challenges, but we all worked through those together as they came up.

Then I ran across the strangest performance issue I’ve seen to this day. On the web pages that were requesting information from the database, I was seeing a response time pattern that I referred to as “random 4s.” It took some work and some help from the developers, but we figured out that half of the time these pages were requested, they returned in about 0.25 seconds. Half of the rest of the time, they’d return in about 4.25 seconds. Half of the rest of the time, in 8.25 seconds. And so on.
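
As a rough illustration (this sketch is mine, not something from the project’s actual test harness), that “half of the rest of the time” description implies a simple geometric split across response-time buckets of roughly 0.25 + 4k seconds; the few lines of Python below just print those expected shares:

    # Illustrative sketch only: expected response-time buckets implied by the
    # "half of the rest of the time" pattern. Bucket k sits at roughly
    # 0.25 + 4k seconds and should hold about 1 / 2**(k + 1) of all requests.

    def expected_share(k: int) -> float:
        """Expected fraction of requests landing in bucket k."""
        return 0.5 ** (k + 1)

    for k in range(5):
        print(f"{0.25 + 4 * k:6.2f} s  ~{expected_share(k):.1%} of requests")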

Working together, we systematically figured out all the things that weren’t causing the random 4s. In fact, we systematically eliminated every part of the system we had access to, which accounted for everything except the network infrastructure. Feeling good about how well things were going, I thought it was a joke when I was told that I was not allowed to talk to anyone in the IT department, but it wasn’t. It seems that some previous development teams had blamed everything on the IT department and wasted a ton of their time, so they’d created a policy to ensure that didn’t happen again.

The only way to interact with the IT department was to send a memorandum, signed by Jim and including detailed instructions for our request, to the VP of the IT department through interdepartmental mail. I drafted a memo. Jim signed it and sent it. Two days later, Jim got it back with the word “No” written on it. Jim suggested that we send another memo describing the testing we’d done that was pointing us in the direction of the network. That memo came back with a note that said, “Checked. It’s not us.”

This went on for over a month. The more testing we did, the more convinced we were that this was the result of something outside of our control, and the only part of this application that was outside our control was the network. Eventually, Jim managed to arrange for a one-hour working conference call with the IT department, ostensibly to “get us off their back.” We set everything up so that all we had to do was literally click a button when the IT folks on the call were ready. Our entire team was dialed in on the call, just to make sure we could answer any questions they might have.

The IT folks dialed in precisely at the top of the hour and asked for the identification numbers of the machines generating the load and of the servers related to our application, as printed on the stickers their department had put on the computers when they were installed. A few minutes later they told us to go ahead. We clicked the button. About five minutes of silence went by before we heard muffled speaking on the line. One of the IT staff asked us to halt the test. He said they were going to mute the line, but asked us to leave the line open. Another 20 minutes or so went by before they came back and asked us to restart the test and let them know if the problem was gone.

It took less than 10 minutes to confirm the problem was gone. During those 10 minutes, someone (I don’t remember who) asked the IT staff, who had never so much as told us their names, what they had found. All they would say is that it looked like a router had been physically damaged during a recent rack installation and that they had swapped out the router.

As far as we knew, this interaction didn’t make it any easier for the next team to work with this particular IT staff. I just kept thinking how lucky I was to be working on a team that gave me its full help and support. During the six weeks between the time I detected this problem and the time the IT department replaced the damaged router, the developers wrote some utilities, stubbed out sections of the system, stayed late to monitor after-hours tests in real time, and spent a lot of time helping me document the testing we’d done to justify our request for the IT department’s time. That interaction is what convinced me that performance testing could be beautiful.

It’s Too Slow; We Hate It

With the random 4s issue resolved, it was time for the real testing to begin: user acceptance testing (UAT). On some projects, UAT is little more than a formality, but on this project (and on every call-center support software project I’ve worked on since), UAT was central to go-live decisions. In fact, Susan, a call-center shift manager and the UAT lead for this project, had veto authority over any decision about what was released into production and when.

The feature aspects of UAT went as expected. There were some minor revisions to be made, but nothing unreasonable or overly difficult to implement. The feedback that had us all confused and concerned was that every single user acceptance tester mentioned—with greater or lesser vehemence—something about the application being “slow” or “taking too long.” Obviously we were concerned, because there is nothing that makes a call-center representative’s day worse than having to listen to frustrated customers’ colorful epithets after telling them, “Thank you for your patience, our system is a little slow today.” We were confused because the website was fast, especially over the corporate network, and each UAT team was made up of 5 representatives, each taking 10 simulated calls, or about 100 calls per hour. Testing indicated that the application could handle nearly 1,000 calls per hour before slowing down noticeably.

We decided to strip all graphics and extras from the application to make it as fast as possible, and then have either me or one of the developers observe UAT so we could see for ourselves what was slow. It confused us even more that the application was even faster afterward, that not one of the people observing UAT ever noticed a user waiting on the application, and that the feedback was still that the application was slow. Predictably, we were also getting feedback that the application was ugly.

Finally, I realized that all of our feedback was coming either verbally from Susan or from Susan’s summary reports, and I asked if I could see the actual feedback forms. While the protocol was that only the UAT lead got to see the actual forms, I was permitted to review them jointly with Susan. We were at the third or fourth form when I got some insight. The comment on that form was “It takes too long to process calls this way.” I asked if I could talk to the user who had made that comment, and Susan set up a time for us to meet.

The next afternoon, I met Juanita in the UAT lab. I asked her to do one of the simulations for me. I timed her as I watched. The simulation took her approximately 2.5 minutes, but it was immediately clear to me that she was uncomfortable with both the flow of the user interface and using the mouse. I asked her if she could perform the same simulation for me on the current system and she said she could. It took about 5 minutes for the current system to load and be ready to use after she logged in. Once it was ready, she turned to me and simply said, “Ready?”

Juanita typed furiously for a while, then turned and looked at me. After a few seconds, I said, “You’re done?” She smirked and nodded, and I checked the time: 47 seconds. I thanked her and told her that I had what I needed.

I called back to the office and asked folks to meet me in the conference room in 30 minutes. Everyone was assembled when I arrived. It took me less than 10 minutes to explain that when the user acceptance testers said “slow,” they didn’t mean response time; they meant that the design of the application was slowing down their ability to do their jobs.

My time on the project was pretty much done by then, so I don’t know what the redeveloped UI eventually looked like, but Sam told me that they had a lot more interaction with Susan and her user acceptance testers thereafter, and that they were thrilled with the application when it went live.

For a performance tester, there are few things as beautiful as call-center representatives who are happy with an application you have tested.
