CHAPTER 4
Statistical Multiplexing
INTRODUCTION
Statistical multiplexing refers to the phenomenon whereby sources with statistically varying rates
are mixed together into a common server or buffer. Typical sources related to real-time phenomena
such as speech or video are bursty — there are periods when they generate bits or packets at a high
rate (the ON state), while there are other periods when they generate few or no packets (the OFF state).
Typical source behavior is depicted in Figure 4.1.
Figure 4.1: Statistical multiplexing of bursty sources.
Because of the statistical independence between different users or sources, it is very unlikely
that all sources will be simultaneously in the ON state (especially when there are many),
and thus designing a server to serve at a rate corresponding to the maximum sum rate of all the
sources would be very wasteful. If we allow a small fraction of the offered traffic to be lost, then we
will see that a link of given capacity can carry much more traffic than would be
the case if we assumed that the traffic was synchronized.
A caricature of a typical scenario whereby one could exploit the statistical independence is
shown in Figure 4.2.
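To get a quantitative feel for this gain, consider the following small numerical sketch in Python. The numbers used here (100 independent ON/OFF sources, each ON with probability 0.1 and emitting one bandwidth unit while ON, and a link of 20 bandwidth units) are purely illustrative and are not taken from the text; the overload probability is simply the binomial tail probability that more than C sources are ON at once.

    from math import comb

    # Illustrative (assumed) parameters: N independent ON/OFF sources,
    # each ON with probability p, emitting 1 bandwidth unit while ON.
    N, p = 100, 0.1
    C = 20  # provisioned link capacity in bandwidth units (assumed)

    # Probability of temporary overload: more than C sources ON at the
    # same time (binomial tail), which is when cells would be lost as in
    # Figure 4.2.
    overflow = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                   for k in range(C + 1, N + 1))

    print("peak-rate allocation would need:", N, "bandwidth units")
    print("capacity actually provisioned  :", C, "bandwidth units")
    print("P(overload) = %.2e" % overflow)

Peak-rate allocation would require 100 bandwidth units, whereas provisioning about 20 units already makes simultaneous overload a rare event; this is the multiplexing gain that the notion of effective bandwidths will quantify more carefully.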
In this chapter, we will study the issue of statistical multiplexing with an aim to quantify the
gains. Moreover, we will see that there is a very important concept that emerges, namely the notion
of effective bandwidths. This has been one of the major conceptual advances that emerged in the
1990s when the issue of providing Quality of Service (QoS) became important in ATM networks.
The attractiveness of this idea is that it allows us to map a packet-level phenomenon that can include
queueing to a flow- or connection-level phenomenon, i.e., it allows us to set aside the queueing model
and work instead with a loss model. This idea will be made precise in the sequel.
Figure 4.2: How statistical multiplexing works. (Sources 1, 2, and 3, with peak rates of A1, A2, and A3 bandwidth units, are multiplexed onto a link of C bandwidth units; overflow occurs when the aggregate rate exceeds C.)
Quality of Service (QoS) is a buzzword that became popular in the late 1980s when the idea
of Asynchronous Transfer Mode (ATM) networks emerged. The idea here was that the transport
of packets or bits within a network was to be assured by an architecture that resembled a
circuit-multiplexed scenario (fixed paths), except that, rather than reserving resources along a path
at a fixed level, the traffic streams were to be split into packets of fixed size called cells, and the
statistical variation in cell generation, as in Figure 4.1, was to be exploited to "pack in" many more
connections by making use of the variability in rates. However, since one could not predict the
OFF moments, there would be periods of temporary overload of the capacity during which cells
would be lost, as in Figure 4.2. So, if one could tolerate the loss of some bits (not critical in many
voice and video applications), then one could pack in many more sources; otherwise, in the situation
shown in Figure 4.2, source 3 would have to be blocked. In the ATM context, the probability of packet (cell)
loss was set to be of the order of 10^{-6}. This is indeed a very small probability, and thus the question
arises: how does one determine that such a criterion is indeed being met? Simulations are one
way, but that would be extremely cumbersome given the very small probabilities that need to
be estimated. Thus there arose a need to develop methods for estimating quantities of very small
magnitude, such as cell or packet loss probabilities. We will discuss these issues later in the chapter. We first begin by defining
the various QoS metrics of interest.
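As an aside on why naive simulation is so cumbersome at these magnitudes, here is a rough back-of-the-envelope sketch (the 10% target accuracy is an assumption chosen purely for illustration): the crude Monte Carlo estimator of a loss probability p, namely the observed fraction of lost cells out of n simulated cells, has relative standard error roughly 1/sqrt(n*p) for small p, so resolving p of the order of 10^{-6} needs on the order of 10^8 simulated cells.

    # Back-of-the-envelope sample size for naive Monte Carlo estimation of a
    # rare probability p: the relative standard error of the empirical
    # fraction is sqrt((1 - p) / (n * p)), so n ~ (1 - p) / (p * eps^2)
    # for a target relative error eps.
    p_target = 1e-6    # cell-loss probability of interest
    rel_error = 0.1    # desired 10% relative standard error (assumed target)

    n_needed = (1 - p_target) / (p_target * rel_error**2)
    print("simulated cells needed: %.1e" % n_needed)   # roughly 1e8

This is what motivates developing analytical methods to estimate such small probabilities, as discussed later in the chapter.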
