As mentioned at the outset, there is a very important correlation between computer performance and productivity, and this provides a strong economic underpinning to the practice of performance management. Today, it is well established that good performance is an important element of human-computer interface design. (See Ben Shneiderman’s Designing the User Interface: Strategies for Effective Human-Computer Interaction.) We have to explore a little further what it means to have “good” performance, but systems that provide fast, consistent response time are generally more acceptable to the people who use them. Systems with severe performance problems are often rejected outright by their users and fail, which leads to costly delays, expensive rewrites, and loss of productivity.
As computer performance analysts, we are interested in finding out what it takes to turn “bad” performance into “good” performance. Generally, the analysis focuses on two flavors of computer measurements. The first type of measurement data describes what is going on with the hardware, the operating system software, and the application software that is running. These measurements reflect both activity rates and the utilization of key hardware components: how busy the processor, the disks, and the network are. Windows 2000 provides a substantial amount of performance data in this area: quantitative information on how busy different hardware components are, what processes are running, how much memory they are using, etc.
The second type of measurement data measures the capacity of the computer to do productive work. As discussed earlier in this chapter, the most common measures of productivity are throughput (usually measured in transactions per second) and response time. A measure of throughput describes the quantity of work performed, while response time measures how long it takes to complete the task. When users of your network complain about bad performance, you need a way to quantify what “bad” is. You need application response time measurements from the standpoint of the end user. Unfortunately, Windows 2000 is not very strong in this key area of measurement.
To measure the capacity of the computer to do productive work, we need to:
Characterize end user interactions in identifiable units of work
Measure and report how long it takes for the computer to process these units of work
Then, computer performance analysts try to determine why these processes are running slowly, and figure out how to speed things up. Sounds pretty simple, doesn’t it?
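To make these two steps concrete, here is a minimal sketch of the sort of instrumentation an application can supply for itself, since Windows 2000 will not supply it. It brackets one unit of work with the Win32 high-resolution counter; the process_reservation() routine is a hypothetical stand-in for whatever your application treats as a transaction.

#include <windows.h>
#include <stdio.h>

/* Hypothetical unit of work; stands in for whatever the application
   treats as a single transaction. */
static void process_reservation(void)
{
    Sleep(250);    /* simulate 250 ms of processing */
}

int main(void)
{
    LARGE_INTEGER freq, start, stop;

    QueryPerformanceFrequency(&freq);      /* counter ticks per second */
    QueryPerformanceCounter(&start);       /* transaction begins */

    process_reservation();                 /* the identifiable unit of work */

    QueryPerformanceCounter(&stop);        /* transaction ends */

    printf("Response time: %.3f seconds\n",
           (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart);
    return 0;
}

The same pattern applies whether the unit of work is a database query, a screen repaint, or a spreadsheet recalculation; the hard part is deciding where the transaction begins and ends.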
When we talk about the performance and tuning of computer processing workloads in this fashion, there is an implicit assumption that the processing can be broken into identifiable units for analysis. Conceptually, we are using what is known as the transaction model. Transactions are atomic units of processing—you cannot break a transaction down into anything smaller without losing whatever it is that makes it a transaction. A transaction might involve executing SQL statements to query a database, applying a special effect in an image-processing program, scrolling a word-processing document from its beginning to the point where you left off editing, or recalculating a column of figures in a spreadsheet. These are all recognizable units of processing with a definite beginning and end. A transaction corresponds to some specific action that a user takes to initiate work. This correspondence is the critical link between transaction response time and user productivity.
Let’s look at some of the ways in which the transaction model is relevant to computer performance analysis. The performance of different configurations can be calibrated accurately by running any of the recognized industry benchmark streams that measure computer and network throughput, usually in transactions per second (TPS), a measure of throughput accepted across the industry. It is also possible to relate computer throughput to end user productivity, namely, how long it takes an individual to accomplish some computer-assisted task. Once we know how long it takes to process transactions, it may then be possible to determine which components are taking the most time (decomposition), why processing takes so long (bottleneck analysis), and what can be done to improve it (prescriptive tuning). These three analysis techniques are of fundamental importance, and we will revisit them many times in this book.
The transaction model also lends itself to the mathematical techniques associated with queuing theory. The two main elements of a queuing system are:
A set of transaction-oriented workloads
Networks of servers and queues
Many types of queuing systems can be analyzed mathematically if the arrival rate of transactions and their service demands are known. The theoretical model referred to back in Figure 1-3, for example, is based on very simple assumptions about the average rate at which new transactions are generated and the processing demands of those transactions.
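The details of that model are not repeated here, but the arithmetic behind the simplest open model, a single server with Poisson arrivals and exponential service times (the classic M/M/1 queue), gives the flavor. The figures below are hypothetical; the point is that once the average arrival rate and service demand are known, utilization and average response time follow directly.

#include <stdio.h>

/* A sketch of the simplest open queuing model (M/M/1): one server,
   Poisson arrivals, exponential service times. The numbers are hypothetical. */
int main(void)
{
    double arrival_rate = 8.0;     /* transactions per second (lambda) */
    double service_time = 0.10;    /* seconds of service demand per transaction (S) */

    double utilization   = arrival_rate * service_time;        /* U = lambda * S  -> 0.80 */
    double response_time = service_time / (1.0 - utilization); /* R = S / (1 - U) -> 0.50 s */

    printf("Utilization:   %.0f%%\n", utilization * 100.0);
    printf("Response time: %.2f seconds (queue time %.2f + service %.2f)\n",
           response_time, response_time - service_time, service_time);
    return 0;
}

Notice how queue time, not service time, accounts for most of the response time once the server is 80% busy; this is the behavior the theoretical model predicts.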
Unfortunately, it is not always easy to group processing activities into distinct transactions in Windows 2000. An application running in the foreground (with the active title bar) processes mouse moves and keyboard events on a more or less continuous basis. It is hard to tell where a particular transaction begins or ends under these circumstances. Where work is not easily grouped into transactions, Windows 2000 does not measure and report transaction response time. In other cases, defining the boundaries of transactions is less problematic. In a client/server model, the point where the client application accesses a database on the server could easily mark the beginning of a transaction. When the server application finishes all its processing on behalf of the client request and returns control to the client application, the transaction is considered complete. An update transaction will issue a Commit command when it is finished processing, an unambiguous marker of the end of the transaction. If you are running client/server applications based on MS Exchange, MS SQL Server, or the Microsoft Transaction Server, it is possible to organize workloads into transaction units. It is disappointing that while these subsystems keep some transaction-oriented performance statistics, they generally do not report response time systematically.
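Because these subsystems do not report response time systematically, one workaround is for the client application to bracket each request itself and accumulate its own summary statistics. In the sketch below, run_transaction() is a hypothetical stand-in for the call that sends the request to the server and returns when the Commit completes; the program times a batch of such calls and reports the average response time.

#include <windows.h>
#include <stdio.h>

/* Hypothetical stand-in for issuing the client request and waiting for
   the server to Commit and return control to the client. */
static void run_transaction(void)
{
    Sleep(100);
}

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    double total_seconds = 0.0;
    int    count;

    QueryPerformanceFrequency(&freq);

    for (count = 0; count < 20; count++) {        /* 20 sample transactions */
        QueryPerformanceCounter(&start);          /* request leaves the client */
        run_transaction();                        /* server processes and Commits */
        QueryPerformanceCounter(&stop);           /* control returns to the client */
        total_seconds +=
            (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    }

    printf("Transactions: %d  Average response time: %.3f seconds\n",
           count, total_seconds / count);
    return 0;
}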
Experienced performance analysts sorely miss having adequate transaction response time data to work with in Windows 2000. Without it, it is not possible to relate all the measurements about what is going on inside the computer to the work that people are doing with the computer. This complicates performance analysis, and it means that some of the best, proven methods cannot easily be applied to Windows 2000.
From the standpoint of an application developer, it is important to understand how to deliver software that performs well. A common fallacy is to measure system response time and equate it to productivity. While the two are related, they are far from the same thing. Consider a computer reservation system used by a trained operator. One implementation might break the task of making a reservation down into several subtasks, each of which is implemented to execute quickly. However, because the interaction with the computer requires a series of transactions, the actual process of making a reservation is long and complex. A second implementation might collapse all the user interactions into a single, more complicated transaction. It might take longer to process the bulkier transaction than several smaller transactions combined. According to objective measures of transaction response time, the first system is the better one. It is also the system people are apt to like better because it provides fast, consistent response. It just feels better.
Productivity is another matter. In this instance, productivity is measured as the number of reservations per day a skilled operator can perform. Under the second system, users may actually be able to perform more work, as measured objectively in reservations made per day per operator. The second system may also be cheaper to operate than the first because it does not process quite so many transactions, and it may even be able to support a heavier reservation load. Because workers are more productive using the second system, the company does not need quite so many operators and can be more profitable. User satisfaction with the second system, however, is probably somewhat lower than with the first alternative, presuming that it is possible to reduce these subjective feelings to an objective, quantitative scale.
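A small, entirely hypothetical calculation shows how this can happen. Five one-second transactions look better on a response time report than one eight-second transaction, but once the operator's keying and think time between steps is counted, the single bulky transaction completes the reservation sooner and yields more reservations per operator per day.

#include <stdio.h>

/* Hypothetical figures illustrating the reservation example: response time
   per transaction versus reservations completed per operator per day. */
int main(void)
{
    double shift_seconds = 6.0 * 3600.0;   /* six productive hours per operator */

    /* System 1: five quick transactions, but operator keying/think time
       between each step dominates the task. */
    double task1 = 5 * (1.0 /* response */ + 15.0 /* operator time per step */);

    /* System 2: one bulkier transaction with a longer response time,
       but only one round of operator interaction. */
    double task2 = 1 * (8.0 /* response */ + 25.0 /* operator time */);

    printf("System 1: %.0f s per reservation, %.0f reservations/day\n",
           task1, shift_seconds / task1);
    printf("System 2: %.0f s per reservation, %.0f reservations/day\n",
           task2, shift_seconds / task2);
    return 0;
}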
Which system do you think your company will want to use? Which system do you think you would like to use? It is important to understand that any large-scale application system that you design and deliver is subject to very restrictive cost and performance constraints. All parties to the application development process, including the end user, must understand what these limitations are and accept them as reasonable.
If satisfaction is in the eye of the beholder, as a software developer you should look for ways to improve the performance of the second system to make it more acceptable. Perhaps procuring more expensive hardware can be justified. Alternatively, there may be things that can be done to the design of the second system to make it more user-friendly without actually speeding it up. The psychological aspects of human-computer interaction are outside the scope of this book, but this is an area that experienced software designers should know something about. There are many simple design techniques and tricks that can make systems more palatable even when they are not lightning-fast.