MDRR Implementation 119
Queue 1 is served next. Its deficit counter is initialized to 3000, which allows three packets
to be sent, leaving the deficit counter at 3000 – 1500 – 500 – 1500 = –500. Figure 5-14
shows the queues and the deﬁcit counters at this stage.
Figure 5-14 MDRR After Serving Queue 1, Its First Pass
Queue 0 is the next queue serviced and sends two packets, making the deficit counter
2000 – 1000 – 1500 = –500 (its 1500-byte quantum plus the 500 bytes remaining from its
first pass). Because the queue is now empty, the deficit counter is reset to 0. Figure 5-15
depicts the queues and counters at this stage.
On its next pass, queue 1 sends its remaining packet in the same fashion. Because the queue
becomes empty, its deficit counter is reset to 0.
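The deficit-counter mechanics of this walkthrough can be sketched in Python. This is an illustrative model only, not the line card's implementation; the queue contents and quanta below are assumptions chosen to mirror the example (queue 0's earlier history is not shown in this excerpt):

```python
from collections import deque

def mdrr(queues, quanta):
    """Modified deficit round-robin: on each visit, a queue's deficit
    counter is credited with its quantum; packets are sent while the
    counter is positive, so it may go negative on the last packet.
    A queue that empties has its counter reset to 0."""
    deficits = [0] * len(queues)
    sent = []  # (queue index, packet size) in transmission order
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                continue
            deficits[i] += quanta[i]
            while q and deficits[i] > 0:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
            if not q:
                deficits[i] = 0  # empty queue: counter reset to 0
    return sent

# Assumed packet sizes: queue 1 holds 1500-, 500-, 1500-, and 1500-byte
# packets and has a quantum of 3000, as in the walkthrough.
order = mdrr([deque([1000, 1500]), deque([1500, 500, 1500, 1500])],
             quanta=[1500, 3000])
```

After queue 1's first visit, its counter sits at 3000 – 1500 – 500 – 1500 = –500; on its next visit the counter is credited back up to 2500, the last packet is sent, and the counter is reset to 0.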
Cisco 12000 series routers support MDRR. MDRR can run on the output interface queues
(transmit [TX] side) or on the input interface queues (receive [RX] side) that feed the
switch fabric queues toward the output interfaces.
Cisco 12000 series line cards exist in different hardware revisions, termed Engine 0,
Engine 1, Engine 2, Engine 3, and so on. The nature of MDRR support on a line card
depends on its hardware revision: Engine 0 line cards support a software implementation
of MDRR, whereas Engine 2 and later revisions support a hardware implementation.
120 Chapter 5: Per-Hop Behavior: Resource Allocation II
Figure 5-15 MDRR After Serving Queue 0, Its Second Pass
MDRR on the RX
MDRR is implemented in either software or hardware on a line card. In a software
implementation, each line card can send traffic to 16 destination slots because the 12000
series routers use a 16 × 16 switching fabric. For each destination slot, the switching fabric
has eight CoS queues, making the total number of CoS queues 128 (16 × 8). You can
configure each CoS queue independently.
In the hardware implementation, each line card has eight CoS queues per destination
interface. With 16 destination slots and 16 interfaces per slot, the maximum number of CoS
queues is 16 × 16 × 8 = 2048. All the interfaces on a destination slot share the same CoS
queue configuration.
MDRR on the TX
Each interface has eight CoS queues, which you can conﬁgure independently in both
hardware- and software-based MDRR implementations.
The MDRR implementation offers flexible mapping between IP precedence values and the
eight possible queues. Because MDRR allows a maximum of eight queues, each IP
precedence value can be given its own queue. The mapping is flexible, however: the
number of queues needed and the precedence values mapped to those queues are user-
configurable, and you can map one or more precedence values into a single queue.
MDRR also offers a per-queue drop policy and bandwidth allocation. Each queue has its
own associated Random Early Detection (RED) parameters, which determine its drop
thresholds, and its own DRR quantum, which determines how much bandwidth the queue
receives. The quantum (in other words, the average number of bytes taken from the queue
on each service) is user-configurable.
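Because the quantum fixes the average number of bytes a queue sends per service round, each queue's long-run bandwidth share (with every queue continuously backlogged) is its quantum divided by the sum of all quanta. A quick sketch, using the hypothetical quanta from the earlier walkthrough:

```python
def bandwidth_shares(quanta):
    """Fraction of link bandwidth each queue receives when every
    queue stays backlogged: proportional to its DRR quantum."""
    total = sum(quanta)
    return [q / total for q in quanta]

# Quanta of 1500 and 3000 give queue 1 twice queue 0's share.
shares = bandwidth_shares([1500, 3000])
```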
Case Study 5-2: Bandwidth Allocation and Minimum Jitter
Conﬁguration for Voice Trafﬁc with Congestion Avoidance Policy
Traffic is classified into different classes so that a certain minimum bandwidth can be
allocated to each class, depending on the need and importance of the traffic. An ISP
implements five traffic classes: gold, silver, bronze, best-effort, and a voice class that
carries voice traffic and requires minimum jitter.
You need four queues, 0–3, to carry the four trafﬁc classes (best-effort, bronze, silver, gold),
and a ﬁfth low-latency queue to carry the voice trafﬁc.
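The low-latency queue's scheduling can be sketched as well. The model below is an illustration with made-up packet sizes, not Cisco's implementation; it assumes an alternate-priority style of service, in which the scheduler visits the voice queue before each regular-queue visit, bounding the voice queue's waiting time and therefore its jitter:

```python
from collections import deque

def mdrr_with_llq(llq, regular, quanta, llq_quantum):
    """Illustrative MDRR scheduler with a low-latency queue (LLQ):
    the LLQ is visited before every regular-queue visit, so voice
    packets never wait behind more than one regular queue's round."""
    deficits = [0] * len(regular)
    llq_deficit = 0
    sent = []  # (queue tag, packet size) in transmission order

    def serve(q, deficit, quantum, tag):
        # Credit the quantum, then send while the counter is positive;
        # MDRR lets the counter go negative on the last packet sent.
        deficit += quantum
        while q and deficit > 0:
            pkt = q.popleft()
            deficit -= pkt
            sent.append((tag, pkt))
        return 0 if not q else deficit  # empty queue resets to 0

    while llq or any(regular):
        for i, q in enumerate(regular):
            llq_deficit = serve(llq, llq_deficit, llq_quantum, "voice")
            deficits[i] = serve(q, deficits[i], quanta[i], i)
    return sent

# Hypothetical traffic: two 200-byte voice packets and one 1500-byte
# data packet in each of two regular queues.
order = mdrr_with_llq(deque([200, 200]),
                      [deque([1500]), deque([1500])],
                      quanta=[1500, 1500], llq_quantum=10000)
```

With this interleaving, both voice packets leave before any data packet, illustrating why the voice class sees minimal queuing delay.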
This example shows three OC3 Point-to-Point Protocol (PPP) over Synchronous Optical
Network (SONET) (PoS) interfaces, one each in slots 1–3. Listing 5-3 gives a sample
conﬁguration for this purpose.
Listing 5-3 Deﬁning Trafﬁc Classes and Allocating Them to Appropriate Queues with a Minimum Bandwidth
destination-slot 0 cos-a
destination-slot 1 cos-a
destination-slot 2 cos-a
rx-cos-slot 1 table-a
rx-cos-slot 2 table-a
rx-cos-slot 3 table-a
precedence all random-detect-label 0
precedence 0 queue 0
precedence 1 queue 1