
[Figure: a birth-death chain with states 0, 1, 2, …, K, arrival rate λ in each state, and state-dependent completion rates X(1), X(2), X(3), …, X(K), used to approximate the system.]

Figure 6.5: An open system can be approximated with a closed network as a load-dependent queue.

[Figure: a closed system of N jobs with average think time Z, approximated by an open system with arrival rate λ = N/Z.]

Figure 6.6: A closed interactive system can be approximated as an open system if N is large.

6.4 DISCUSSION OF PAPERS

We now discuss the bottlenecks and flow equivalence in the previous papers, as well as three others: OpenClosed [35], DatabaseSerializability [3], and Roofline [49].

InternetServices [48]

Since the paper uses a closed interactive model, the response time is R = T − Z, where T

should have the shape in Fig. 6.2. The response time curve for the baseline model in Fig. 4(a) looks different, but a closer look shows a small nonlinear curve near the origin. In other words, Fig. 4(a) mostly shows the linear increase imposed by the bottleneck in

the basic model. (See also the response and residence times in subsequent plots.) If one is

interested in this bottleneck constraint, then there is no need for the elaborate model in

Fig. 3.
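The bottleneck constraint behind that linear increase can be made concrete with the standard asymptotic bound R(N) ≥ max(D_total, N·D_max − Z). The sketch below uses made-up service demands and think time, not values from the paper:

```python
# Illustrative asymptotic lower bound on response time in a closed
# interactive system. Demands and think time are hypothetical.
D = [0.05, 0.20, 0.10]   # per-job service demands (seconds); 0.20 is the bottleneck
Z = 2.0                  # average think time (seconds)

D_total = sum(D)
D_max = max(D)           # bottleneck demand

def response_time_lower_bound(N):
    """R(N) >= max(D_total, N*D_max - Z): flat at low N, linear at high N."""
    return max(D_total, N * D_max - Z)

for N in (1, 5, 20, 50):
    print(N, response_time_lower_bound(N))
```

For small N the bound is flat at D_total; once N·D_max exceeds D_total + Z, the bottleneck forces the linear growth seen in Fig. 4(a).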

Similarly, Fig. 4(b) shows good agreement between the enhanced model and the measured

response time in the high load region where the concurrency limit is in effect. Here, the model is helped by the assumption that connections are turned away when the concurrency limit is reached; if these rejected requests were buffered and served later, modeling the response time accurately would be harder.
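The simplification from turning connections away can be seen in a minimal loss-queue sketch. This is not the paper's model; it is an M/M/1/K queue with hypothetical rates, where arrivals that find the system full are dropped and the mean response time of admitted requests follows from Little's law:

```python
# Sketch: M/M/1/K queue where arrivals finding K jobs are rejected,
# not buffered. Rates are hypothetical, not taken from the paper.
def mm1k_response_time(lam, mu, K):
    """Mean response time of *admitted* requests in an M/M/1/K queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        probs = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    mean_jobs = sum(n * p for n, p in enumerate(probs))
    admitted_rate = lam * (1 - probs[K])   # arrivals seeing a full system are lost
    return mean_jobs / admitted_rate       # Little's law on admitted traffic

print(mm1k_response_time(0.9, 1.0, 10))
```

With buffering, rejected requests would return later and correlate with the arrival process, so no such closed-form state distribution is available.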


StorageAvailability [13]

Bottlenecks in a system may be caused not by hardware but by software. Sec. 7 gives an

example of such a bottleneck: a lock held by the HE causes writes to be serialized.

NetworkProcessor [27]

Although the overall model is open (with packet arrival rate as an input parameter), this is

later solved through flow equivalence with a closed queueing subnetwork. Except for the variable J, Eq. (3-10) to Eq. (3-15) are the equations obtained by solving the Markov chain

for Fig. 6.5.
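The birth-death chain of Fig. 6.5 can be solved in a few lines from its balance equations. The rates below are hypothetical, chosen only to illustrate a subnetwork whose completion rate saturates:

```python
# Steady-state solution of the birth-death chain in Fig. 6.5:
# constant arrival rate lam, state-dependent completion rates X(1..K).
def birth_death_steady_state(lam, X):
    """X[k-1] is the completion rate with k jobs present; states are 0..K."""
    weights = [1.0]
    for xk in X:                      # balance: p_k * X(k) = p_{k-1} * lam
        weights.append(weights[-1] * lam / xk)
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical rates for a subnetwork that saturates at 2.0 jobs/sec.
rates = [1.0, 1.8, 2.0, 2.0]
p = birth_death_steady_state(lam=1.5, X=rates)
throughput = sum(pk * xk for pk, xk in zip(p[1:], rates))
print(p, throughput)
```

As a consistency check, the throughput computed from the state probabilities equals the admitted arrival rate λ(1 − p_K).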

The network can be solved directly (without using flow equivalence) by using Jackson's theorem for open networks. This works with multi-server queues, without the need for the queue

decomposition in Fig. 3; however, it is more compute-intensive, since the solution works

out the entire joint distribution of queue sizes, instead of just the average queue sizes.
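A minimal sketch of that product-form solution, for a hypothetical two-queue open network (truncating the infinite sums), shows the contrast: by Jackson's theorem the joint distribution is a product of per-queue M/M/c marginals, so evaluating it over the whole joint state space is much heavier than computing a few averages.

```python
# Jackson's theorem sketch: in an open network, the joint queue-length
# distribution is a product of independent M/M/c marginals.
# Parameters are hypothetical; infinite sums truncated at n_max.
from math import factorial

def mmc_marginal(lam, mu, c, n_max=200):
    """Stationary queue-length distribution of an M/M/c queue (truncated)."""
    a = lam / mu                                   # offered load
    w = [a ** n / factorial(n) if n <= c
         else a ** c / factorial(c) * (a / c) ** (n - c)
         for n in range(n_max + 1)]
    total = sum(w)
    return [x / total for x in w]

p1 = mmc_marginal(lam=1.8, mu=1.0, c=2)            # a 2-server queue
p2 = mmc_marginal(lam=1.8, mu=2.5, c=1)            # a 1-server queue
joint_0_0 = p1[0] * p2[0]                          # P(n1 = 0, n2 = 0)
print(joint_0_0)
```

The joint state space grows as the product of the per-queue ranges, which is the extra computational cost the text refers to.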

The paper also uses bottleneck analysis (Eq. (3-4)) to make the MVA algorithm converge. This possibly refers to setting a threshold for stopping the MVA iteration (when the calculated throughput is sufficiently close to the throughput bound set by the bottleneck).
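Since the paper does not spell the stopping rule out, the following is only one plausible reading: Schweitzer's approximate MVA, with hypothetical demands, iterated until the throughput estimate stabilizes or comes within a threshold of the bottleneck bound 1/D_max.

```python
# One plausible reading of "bottleneck analysis to make MVA converge":
# Schweitzer's approximate MVA, stopped when the throughput estimate
# stabilizes or approaches the bottleneck bound 1/D_max.
def approx_mva(D, N, Z=0.0, eps=1e-6, max_iter=10000):
    K = len(D)
    Q = [N / K] * K                   # initial guess for queue lengths
    X_bound = 1.0 / max(D)            # throughput bound set by the bottleneck
    X, X_prev = 0.0, float("inf")
    for _ in range(max_iter):
        # Schweitzer approximation: an arriving job sees (N-1)/N of each queue
        R = [D[k] * (1.0 + (N - 1) / N * Q[k]) for k in range(K)]
        X = N / (Z + sum(R))
        Q = [X * R[k] for k in range(K)]
        if abs(X - X_prev) < eps or 0.0 <= X_bound - X < eps:
            break
        X_prev = X
    return X, X_bound

X, X_bound = approx_mva(D=[0.05, 0.20, 0.10], N=30, Z=2.0)
print(X, X_bound)
```

The demands and population here are illustrative only; the point is that the bound 1/D_max gives a natural convergence target for the iteration.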

RouterBuﬀer [1]

This paper gives an example of the idea in Fig. 6.5: the router analysis is in terms of n, the number of flows. In effect, a closed model is used to analyze an open system.

DependabilitySecurity [46]

In flow equivalence (Sec. 6.2), we extract a subnetwork of queues, solve it, plug it back as one load-dependent queue, then solve the bigger network with a birth-death process with rates λ_A(n) and X_D(n). This idea of hierarchical decomposition extends beyond queueing networks.

In the virtualization reliability model (Fig. 5) and router availability model (Fig. 9), each

block may represent, say, a separate Markov chain. Moreover, in the case of Fig. 9, the

metric is not performance, but dependability.

Sometimes, a submodel may have a parameter whose value is known only after the bigger

model is solved. For example, the Markov chain in the submodel has an arrival rate that

is determined by the throughput of the bigger model. One can deal with such circularity

with an iterative solution: initialize the throughput somehow; the submodel uses the corresponding arrival rate for its solution; this submodel solution is used to solve the bigger model and improve the throughput estimate; the solutions then iterate till convergence. This is an example of the fixed point iterative model (see Table 1).
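The fixed point iteration can be sketched generically. The two solver functions below are toy stand-ins for "solve the submodel" and "solve the bigger model", not any particular system:

```python
# Sketch of the fixed point iteration: the submodel needs an arrival rate
# that only the bigger model can supply, so the two are solved in turn
# until the throughput estimate stops changing.
def solve_submodel(arrival_rate):
    # Stand-in for solving, say, a Markov chain parameterized by the
    # arrival rate; returns a toy load-dependent service rate.
    return 2.0 / (1.0 + 0.3 * arrival_rate)

def solve_bigger_model(service_rate):
    # Stand-in for plugging the submodel back in as a flow-equivalent
    # queue and recomputing system throughput.
    return 0.8 * service_rate

X = 1.0                                # initial throughput guess
for _ in range(100):
    mu = solve_submodel(X)             # submodel solved at current throughput
    X_new = solve_bigger_model(mu)     # bigger model updates the throughput
    if abs(X_new - X) < 1e-9:          # fixed point reached
        break
    X = X_new
print(X)
```

Convergence is not guaranteed in general; it depends on the two models' sensitivity to each other's outputs, which is why such schemes are usually checked empirically.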

PipelineParallelism [31]

Fig. 2 and Fig. 3 illustrate how the bottleneck in a system can shift — in this case, from an

intermediate stage to an IO stage as the number of threads increases. Note the bottleneck

analysis in Eq. (7), (9) and (10).
