
It turns out that using the tools of probability and stochastic processes, queueing theory in particular, gives us a very powerful framework for predicting network behavior and thus leads to good rules for designing networks. One of the key assumptions will be that of stationarity (in a statistical sense). Although stationarity is a very special property, it turns out that one can indeed identify long periods when network performance is well predicted by stationary models. Thus, this will be an implicit assumption in the sequel unless speciﬁcally mentioned to the contrary.

Network performance is usually deﬁned in terms of statistical quantities called the QoS

parameters:

• Call blocking probabilities in the context of ﬂows and circuit-switched architectures, often called the Grade of Service (GoS).

• Packet or bit loss probabilities.

• Moments and distributions of packet delay, denoted D, such as the mean delay E[D], the variance var(D) (which is related to the jitter), the tail distribution P(D ≥ t), etc.

• The mean or average throughput (average number of packets or bits transmitted per second), usually in kbits/sec, Mbits/sec, etc.
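The QoS parameters listed above can be estimated empirically from measured delay samples. The sketch below is illustrative only: the delay values and the observation window are made up, not taken from any real trace.

```python
import statistics

# Hypothetical packet delay samples in milliseconds (illustrative data only).
delays_ms = [12.1, 8.4, 15.0, 9.7, 22.3, 11.8, 30.5, 10.2, 13.9, 9.1]

mean_delay = statistics.mean(delays_ms)      # E[D]
jitter = statistics.variance(delays_ms)      # var(D), related to the jitter

# Tail probability P(D >= t), estimated as an empirical fraction.
t = 20.0
tail_prob = sum(d >= t for d in delays_ms) / len(delays_ms)

# Mean throughput: packets observed over an assumed measurement window.
window_sec = 0.5
throughput_pps = len(delays_ms) / window_sec  # packets per second

print(mean_delay, jitter, tail_prob, throughput_pps)
```

Each statistic here is a sample estimate; with stationarity, long observation windows make such estimates good predictors of the true QoS parameters.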

The sources of randomness in the Internet arise from the following:

• Call, session, and packet arrivals are unpredictable (random).

• Holding times, durations of calls, ﬁle sizes, etc., are random.

• Transmission facilities, switches, etc., are shared.

• Numbers of users are usually not known a priori, and they arrive and depart randomly.

• Statistical variation, noise, and interference on wireless channels.

Queueing arises because instantaneous arrival rates momentarily exceed server or link capacities, since the bit ﬂows or packet ﬂows arriving at a link depend on the devices or routers feeding it. Queueing leads to delays, and these delays depend not only on trafﬁc characteristics but also on the way packets are processed.

The goal of performance analysis is to estimate the statistical effects of these variations. In the following, we begin by presenting several useful models of trafﬁc, in particular ways of describing packet or bit arrivals.

1.2 TRAFFIC ARRIVAL MODELS

It must be kept in mind that the primary purpose of modeling is to obtain insights and qualitative

information. There are some situations when the models match empirical measurements. In that

case, the analytical results also provide quantitative estimates of performance. To paraphrase the


words of the famous statistician Box, models are always wrong but the insights they provide can be

useful. It is with this caveat that one must approach the issue of trafﬁc engineering.

In studying the performance of networks, the key is to understand how trafﬁc arrives and how packets or bits are processed. Models of these processes are the basic building blocks of queueing theory. However, even here there is no single way to model them; the choice depends on the effect we are studying and the time scale of relevance.

Let us look at this issue a bit further. Packet arrivals can be viewed as discrete events when packets arrive at a link or router buffer; alternatively, when a source transmits at very high speed and is viewed at the time-scale of bits, one can think of the arrivals as a ﬂuid, with periods of activity punctuated by periods of inactivity. The former viewpoint leads to so-called point process models of arrivals, while the latter leads to ﬂuid input models. In essence, we are just viewing arrivals at different time-scales. Both models are relevant in modeling real systems, but we will restrict ourselves to point process models for the most part. The two arrival patterns are illustrated in Figure 1.1.

[Figure 1.1 shows two panels over a time axis starting at 0: a point process model, with arrival epochs T1, T2, …, T7 marked as points on the axis, and a ﬂuid arrival model, shown as a rate function with alternating active and idle periods.]

Figure 1.1: Discrete and ﬂuid trafﬁc arrival models.
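As a rough illustration of the two viewpoints, the same trafﬁc can be described either by a list of packet arrival epochs (the point process view) or by on-off activity periods carrying a constant ﬂuid rate. The epochs, periods, and rate below are hypothetical values chosen for illustration.

```python
# Point process view: packet arrival epochs (seconds), a hypothetical trace.
point_arrivals = [0.10, 0.12, 0.15, 0.90, 0.93, 0.95, 0.97]

# Fluid view of the same traffic: (start, end, rate_bits_per_sec) activity
# periods, with silence in between.
fluid_periods = [(0.10, 0.15, 8e6), (0.90, 0.97, 8e6)]

def total_bits(periods):
    """Volume carried by the fluid description over all activity periods."""
    return sum((end - start) * rate for start, end, rate in periods)

print(len(point_arrivals))        # discrete-event view: number of packets
print(total_bits(fluid_periods))  # fluid view: total bits in busy periods
```

The ﬂuid description discards individual packet boundaries and keeps only the activity periods and rates, which is exactly the coarser time-scale view described above.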

Deﬁnition 1.1
A stochastic process is called a simple point process if it is characterized as follows:
Let {T_n}_{n=−∞}^{∞} be a sequence of real-valued random variables (or points) in R with ··· < T_{−1} < T_0 ≤ 0 < T_1 < T_2 < ··· and lim_{n→∞} T_n = ∞ a.s. Deﬁne N_t as:

N_t = Σ_n 1I(0 < T_n ≤ t)    (1.1)

Then N_t is the counting process of {T_n} and is commonly called a point process.
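Equation (1.1) can be evaluated directly for a finite set of points. The sketch below uses hypothetical arrival epochs and counts the points T_n with 0 < T_n ≤ t; on a sorted list, the indicator sum reduces to a binary search.

```python
import bisect

def count_N(arrival_times, t):
    """Counting process N_t = sum_n 1I(0 < T_n <= t) for sorted epochs T_n > 0."""
    # bisect_right returns the number of epochs with T_n <= t.
    return bisect.bisect_right(arrival_times, t)

# Hypothetical arrival epochs T_1 < T_2 < ... on (0, infinity).
T = [0.4, 1.1, 1.9, 2.5, 3.8, 4.2]

print(count_N(T, 2.0))  # N_2 = 3: the points 0.4, 1.1, 1.9 lie in (0, 2]
print(count_N(T, 0.0))  # N_0 = 0: no points in the empty interval (0, 0]

# N(a, b] = N_b - N_a counts the points in (a, b].
print(count_N(T, 3.8) - count_N(T, 1.1))  # points 1.9, 2.5, 3.8 -> 3
```

Note that N_t is right-continuous and jumps by one at each epoch T_n, which is exactly what the indicator sum in (1.1) expresses.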

Remark 1.2

1. N_t counts the number of points T_n that occur in the interval (0, t]. The notation N(a, b] is also used to indicate the number of points in (a, b]. Thus, N_t = N(0, t].

2. lim_{t→∞} N_t = ∞ a.s. by deﬁnition.

3. The random variables S_n = T_n − T_{n−1} are called the inter-arrival times.
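The inter-arrival times determine the points: given the ﬁrst epoch, one recovers the remaining T_n by cumulative summation of the S_n. A minimal sketch with hypothetical epochs:

```python
import itertools

T = [0.4, 1.1, 1.9, 2.5, 3.8]          # hypothetical arrival epochs
S = [b - a for a, b in zip(T, T[1:])]  # S_n = T_n - T_{n-1}

# Cumulative sums of the inter-arrival times recover the epochs.
reconstructed = list(itertools.accumulate([T[0]] + S))

print(S)              # [0.7, 0.8, 0.6, 1.3] up to float rounding
print(reconstructed)  # equals T up to float rounding
```

This equivalence is why arrival models are often speciﬁed through the distribution of the S_n (e.g., i.i.d. exponential inter-arrival times) rather than through the epochs T_n directly.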
