
REM: active queue management

01 May 2001-IEEE Network (IEEE NETWORK)-Vol. 15, Iss: 3, pp 48-53
TL;DR: A new active queue management scheme, random exponential marking (REM), is described that aims to achieve both high utilization and negligible loss and delay in a simple and scalable manner and presents simulation results of its performance in wireline and wireless networks.
Abstract: We describe a new active queue management scheme, random exponential marking (REM), that aims to achieve both high utilization and negligible loss and delay in a simple and scalable manner. The key idea is to decouple the congestion measure from the performance measure such as loss, queue length, or delay. While the congestion measure indicates excess demand for bandwidth and must track the number of users, the performance measure should be stabilized around their targets independent of the number of users. We explain the design rationale behind REM and present simulation results of its performance in wireline and wireless networks.

Summary (3 min read)

Introduction

  • The second feature is essential in a network where users typically go through multiple congested links.
  • They contrast sharply with random early detection (RED) [1].
  • The authors explain how REM can help address this problem and present simulation results of its performance.
  • For the rest of this article, unless otherwise specified, by “marking” the authors mean either dropping a packet or setting its explicit congestion notification (ECN) bit [2] probabilistically.

RED

  • A main purpose of active queue management is to provide congestion information for sources to set their rates.
  • In fact, RED only decides the first two questions.
  • For RED, the congestion measure is queue length and it is automatically updated by the buffer process.

REM

  • REM differs from RED only in the first two design questions: it uses a different definition of congestion measure and a different marking probability function.
  • These differences lead to the two key features mentioned in the last section, as the authors now explain.
  • Detailed derivation and justification, a pseudocode implementation, and much more extensive simulations can be found in [4, 5].

Match Rate Clear Buffer

  • The first idea of REM is to stabilize both the input rate around link capacity and the queue around a small target, regardless of the number of users sharing the link.
  • When the number of users increases, the mismatches in rate and queue grow, pushing up price and hence marking probability.
  • Whereas the congestion measure (queue length) in RED is automatically updated by the buffer process according to Eq. 1, REM explicitly controls the update of its price to bring about its first property.
  • This can hold only if the input rate equals capacity (xl = cl) and the backlog equals its target (bl = b*l), leading to the first feature mentioned at the beginning of the article.
  • When the target queue length b* is nonzero, the authors can bypass the measurement of rate mismatch xl(t) – cl(t) in the price update, Eq. 2. Notice that xl(t) – cl(t) is the rate at which the queue length grows when the buffer is nonempty.

Sum Prices

  • The output queue marks each arrival packet not already marked at an upstream queue, with a probability that is exponentially increasing in the current price.
  • When, and only when, individual link marking probability is exponential in its link price, this end-to-end marking probability will be exponentially increasing in the sum of the link prices at all the congested links in its path.
  • Since it is embedded in the end-to-end marking probability, it can easily be estimated by sources from the fraction of their own packets that are marked, and used to design their rate adaptation.

Modularized Features

  • The price adjustment rule given by Eq. 2 or 3 leads to the feature that REM attempts to equalize user rates with network capacity while stabilizing queue length around a target value, possibly zero.
  • The exponential marking probability function given by Eq. 4 leads to the feature that the end-to-end marking probability conveys to a user the aggregate price, aggregated over all routers in its path.
  • These two features can be implemented independent of each other.
  • One may choose to use price to measure congestion but use a different marking probability function, say, one that is RED-like or some other increasing function of the price, to implement the first, but not the second, feature.
  • Alternatively, one may choose to measure congestion differently, perhaps by using loss, delay, or queue length (but see the next subsection for caution), but mark with an exponential marking probability function, in order to implement the second, but not the first, feature.

Congestion and Performance Measures

  • Reno without active queue management measures congestion with buffer overflow, Vegas measures it with queuing (not including propagation) delay [6], RED measures it with average queue length, and REM measures it with price.
  • It is thus inevitable that the average queue under RED grows with the number of users, gentle or not.
  • Hence, in times of congestion, RED can be tuned to achieve either high link utilization or low delay and loss, but not both.
  • In contrast, by decoupling congestion and performance measures, a queue can be stabilized around its target independent of traffic load, leading to high utilization and low delay and loss in equilibrium.
  • These are illustrated in the simulation results in the next section.

Stability and Utility Function

  • It has recently been shown that major TCP congestion control schemes — Reno/DropTail, Reno/RED, Reno/REM, Vegas/DropTail, Vegas/RED, and Vegas/REM — can all be interpreted as approximately carrying out a gradient algorithm to maximize aggregate source utility [3, 6]; see also [7, 8] for a related model.
  • The duality model thus provides a convenient way to study the stability, optimality, and fairness properties of these schemes, and, more important, to explore their interaction.
  • This confirms their extensive real-life and simulation experience with these TCP schemes when window sizes are relatively small.
  • First, even though users typically do not know what utility functions they should use, by designing their rate adaptation they have implicitly chosen a particular utility function.
  • Recently, a proportional-plus-integral (PI) controller was proposed in [11] as an alternative active queue management scheme to RED, and simulation results were presented to demonstrate its superior equilibrium and transient performance.

1 Unlike REM, however, the goodput under RED is higher with dropping than with marking (Fig. 2). This is intriguing and seems to happen with NewReno but not Reno.

  • The authors mark or drop packets according to the probability determined by the link algorithm.
  • As time increases on the x-axis, the number of sources increases from 20 to 160 and the average window size decreases from 32 packets to 4 packets.
  • The loss rate is about the same under REM with dropping as under DropTail, as predicted by the duality model of [3].
  • The goodput for DropTail upper bounds that of all variations of RED, because it keeps a substantially larger mean queue.

Wireless TCP

  • TCP (or more precisely, the AIMD algorithm) was originally designed for wireline networks where congestion is measured, and conveyed to users, by packet losses due to buffer overflows.
  • The third approach aims to eliminate packet loss due to buffer overflow, so the source only sees wireless losses.
  • This phenomenon also manifests itself in the cumulative packet losses due only to buffer overflow shown in Fig. 4: loss is heaviest with NewReno, negligible with RED(10:30) and REM, and moderate with RED(20:80).
  • A possible solution is for routers to somehow indicate their ECN capability, possibly making use of one of the two ECN bits proposed in [2].

Conclusion

  • The authors have proposed a new active queue management scheme, REM, that attempts to achieve both high utilization and negligible loss and delay.
  • As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 packets to 4 packets.
  • Simulation results suggest that this goal seems achievable without sacrificing the simplicity and scalability of the original RED.
  • The authors emphasize, however, that it is an equilibrium property, and REM’s transient behavior needs more careful study.



IEEE Network • May/June 2001
48
In this article we describe a new active queue management
scheme, Random Exponential Marking (REM), that has the
following key features:
Match rate clear buffer: It attempts to match user rates
to network capacity while clearing buffers (or stabilizing
queues around a small target), regardless of the number of
users.
Sum prices: The end-to-end marking (or dropping) probabil-
ity observed by a user depends in a simple and precise man-
ner on the sum of link prices (congestion measures),
summed over all the routers in the path of the user.
The first feature implies that, contrary to the conventional
wisdom, high utilization is not achieved by keeping large back-
logs in the network, but by feeding back the right information
for users to set their rates. We present simulation results
which demonstrate that REM can maintain high utilization
with negligible loss or queuing delay as the number of users
increases.
The second feature is essential in a network where users
typically go through multiple congested links. It clarifies the
meaning of the congestion information embedded in the end-
to-end marking (or dropping) probability observed by a user,
and thus can be used to design its rate adaptation.
In the following, we describe REM and explain how it
achieves these two features. They contrast sharply with ran-
dom early detection (RED) [1]. It will become clear that these
features are independent of each other, and one can be imple-
mented without the other. We then compare the performance
of DropTail, RED, and REM in wireline networks through
simulations. It is well known that TCP performs poorly over
wireless links because it cannot differentiate between losses
due to buffer overflow and those due to wireless effects such
as fading, interference, and handoffs. We explain how REM
can help address this problem and present simulation results
of its performance.
For the rest of this article, unless otherwise specified, by
“marking” we mean either dropping a packet or setting its
explicit congestion notification (ECN) bit [2] probabilistically.
If a packet is marked by setting its ECN bit, its mark is car-
ried to the destination and then conveyed back to the source
via acknowledgment.
We start by interpreting RED.
RED
A main purpose of active queue management is to provide
congestion information for sources to set their rates. The
design of active queue management algorithms must answer
three questions, assuming packets are probabilistically marked:
How is congestion measured?
How is the measure embedded in the probability function?
How is it fed back to users?
RED answers these questions as follows.
First, RED measures congestion by (exponentially weighted
average) queue length. Importantly, the choice of congestion
measure determines how it is updated to reflect congestion
(see below) [3]. Second, the probability function is a piecewise
linear and increasing function of the congestion measure, as
illustrated in Fig. 1. Finally, the congestion information is con-
veyed to the users by either dropping a packet or setting its
ECN bit probabilistically. In fact, RED only decides the first
two questions. The third question is largely independent.
RED interacts with TCP: as source rates increase, queue
length grows, more packets are marked, prompting the
sources to reduce their rates, and the cycle repeats. TCP
defines precisely how the source rates are adjusted while
active queue management defines how the congestion mea-
sure is updated. For RED, the congestion measure is queue
length and it is automatically updated by the buffer process.
The queue length in the next period equals the current queue
length plus aggregate input minus output:
b_l(t + 1) = [b_l(t) + x_l(t) − c_l(t)]^+   (1)

where [z]^+ = max{z, 0}. Here, b_l(t) is the aggregate queue
length at queue l in period t, x_l(t) is the aggregate input rate
to queue l in period t, and c_l(t) is the output rate in period t.
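In Python, the buffer process of Eq. 1 and RED's answers to the first two design questions can be sketched as follows (an illustrative sketch, not code from the article; the RED defaults shown are the RED(20:80) parameter set used in the simulations later in the article):

```python
def queue_update(b, x, c):
    # Eq. 1: next queue length = current queue plus aggregate input
    # minus output, floored at zero
    return max(b + x - c, 0.0)

def ewma(avg_q, b, q_weight=0.002):
    # RED measures congestion by the exponentially weighted average
    # queue length (q_weight value taken from the simulations below)
    return (1.0 - q_weight) * avg_q + q_weight * b

def red_marking_prob(avg_q, min_th=20.0, max_th=80.0, max_p=0.1):
    # piecewise-linear marking probability of original RED; gentle
    # RED's ramp from max_p to 1 between max_th and 2*max_th is omitted
    if avg_q < min_th:
        return 0.0
    if avg_q < max_th:
        return max_p * (avg_q - min_th) / (max_th - min_th)
    return 1.0
```

The third question, dropping versus ECN marking, is independent of both functions, as the text notes.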
REM
REM differs from RED only in the first two design questions:
it uses a different definition of congestion measure and a dif-
ferent marking probability function. These differences lead to
the two key features mentioned in the last section, as we now
explain. Detailed derivation and justification, a pseudocode
implementation, and much more extensive simulations can be
found in [4, 5].
0890-8044/01/$10.00 © 2001 IEEE
REM: Active Queue Management
Sanjeewa Athuraliya and Steven H. Low, California Institute of Technology
Victor H. Li and Qinghe Yin, CUBIN, University of Melbourne

Match Rate Clear Buffer
The first idea of REM is to stabilize both the input rate
around link capacity and the queue around a small target,
regardless of the number of users sharing the link.
Each output queue that implements REM maintains a
variable we call price as a congestion measure. This variable
is used to determine the marking probability, as explained
in the next subsection. Price is updated, periodically or
asynchronously, based on rate mismatch (i.e., difference
between input rate and link capacity) and queue mismatch
(i.e., difference between queue length and target). The price
is incremented if the weighted sum of these mismatches is
positive, and decremented otherwise. The weighted sum is
positive when either the input rate exceeds the link capacity
or there is excess backlog to be cleared, and negative other-
wise. When the number of users increases, the mismatches
in rate and queue grow, pushing up price and hence mark-
ing probability. This sends a stronger congestion signal to
the sources, which then reduce their rates. When the source
rates are too small, the mismatches will be negative, pushing
down price and marking probability and raising source
rates, until eventually the mismatches are driven to zero,
yielding high utilization and negligible loss and delay in
equilibrium. The buffer will be cleared in equilibrium if the
target queue is set to zero.
Whereas the congestion measure (queue length) in RED is
automatically updated by the buffer process according to Eq.
1, REM explicitly controls the update of its price to bring
about its first property. Precisely, for queue l, the price p_l(t)
in period t is updated according to

p_l(t + 1) = [p_l(t) + g(a_l(b_l(t) − b*_l) + x_l(t) − c_l(t))]^+,   (2)

where g > 0 and a_l > 0 are small constants and [z]^+ = max{z, 0}.
Here b_l(t) is the aggregate buffer occupancy at queue l in
period t and b*_l ≥ 0 is the target queue length, x_l(t) is the
aggregate input rate to queue l in period t, and c_l(t) is the
available bandwidth to queue l in period t. The difference
x_l(t) − c_l(t) measures rate mismatch and the difference
b_l(t) − b*_l measures queue mismatch. The constant a_l can be
set by each queue individually, and trades off utilization and
queuing delay during transients. The constant g controls the
responsiveness of REM to changes in network conditions. Hence,
from Eq. 2, the price is increased if the weighted sum of rate
and queue mismatches, weighted by a_l, is positive, and decreased
otherwise. In equilibrium the price stabilizes, and this weighted
sum must be zero (i.e., a_l(b_l − b*_l) + x_l − c_l = 0). This can
hold only if the input rate equals capacity (x_l = c_l) and the
backlog equals its target (b_l = b*_l), leading to the first
feature mentioned at the beginning of the article.
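In code, one step of Eq. 2 is a single clipped update (a sketch, not the pseudocode of [4, 5]; the function name is mine, and the defaults g and a are the simulation values quoted later in the article):

```python
def rem_price_update(p, b, b_target, x, c, g=0.001, a=0.1):
    # Eq. 2: the price rises when the weighted sum of the queue
    # mismatch a*(b - b_target) and the rate mismatch (x - c) is
    # positive, falls otherwise, and never goes below zero
    return max(p + g * (a * (b - b_target) + (x - c)), 0.0)

# excess demand (x > c) and excess backlog (b > b_target) push the price up
p_next = rem_price_update(p=5.0, b=40.0, b_target=20.0, x=70.0, c=64.0)
```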
We make two remarks on implementation. First, REM
uses only local and aggregate information — in particular,
no per-flow information is needed — and works with any
work-conserving service discipline. It updates its price inde-
pendent of other queues or routers. Hence, its complexity is
independent of the number of users or the size of the net-
work or its capacity.
Second, it is usually easier to sample queue length than
rate in practice. When the target queue length b* is nonzero,
we can bypass the measurement of rate mismatch x_l(t) − c_l(t)
in the price update, Eq. 2. Notice that x_l(t) − c_l(t) is the rate at
which the queue length grows when the buffer is nonempty.
Hence, we can approximate this term by the change in backlog,
b_l(t + 1) − b_l(t). Then the update rule of Eq. 2 becomes

p_l(t + 1) = [p_l(t) + g(b_l(t + 1) − (1 − a_l) b_l(t) − a_l b*)]^+,   (3)
that is, the price is updated based only on the current and pre-
vious queue lengths.
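The substitution can be checked directly (a sketch; function names are mine): while the buffer stays nonempty, b(t + 1) = b(t) + x(t) − c(t), and the two update rules give identical prices.

```python
def price_eq2(p, b, b_target, x, c, g=0.001, a=0.1):
    # Eq. 2: price step from queue and rate mismatches
    return max(p + g * (a * (b - b_target) + (x - c)), 0.0)

def price_eq3(p, b_next, b, b_target, g=0.001, a=0.1):
    # Eq. 3: the same step using only two successive queue lengths
    return max(p + g * (b_next - (1.0 - a) * b - a * b_target), 0.0)

# with a nonempty buffer, b_next = b + x - c, and the updates coincide
b, x, c = 30.0, 70.0, 64.0
b_next = b + x - c
```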
The update rule expressed in Eq. 2 or 3 contrasts sharply
with RED. As the number of users increases, the marking
probability should grow to increase the intensity of congestion
signal. Since RED uses queue length to determine the mark-
ing probability, this means that the mean queue length must
steadily increase as the number of users increases. In contrast,
the update rule of Eq. 3 uses queue length to update a price
which is then used to determine the marking probability.
Hence, under REM, the price steadily increases while the
mean queue length is stabilized around the target b*_l, as the
number of users increases. We will come back to this point in
a later section.
Sum Prices
The second idea of REM is to use the sum of the link prices
along a path as a measure of congestion in the path, and to
embed it in the end-to-end marking probability that can be
observed at the source.
The output queue marks each arrival packet not already
marked at an upstream queue, with a probability that is
exponentially increasing in the current price. This marking
probability is illustrated in Fig. 1. The exponential form of
the marking probability is critical in a large network where
the end-to-end marking probability for a packet that tra-
verses multiple congested links from source to destination
depends on the link marking probability at every link in the
path. When, and only when, individual link marking proba-
bility is exponential in its link price, this end-to-end mark-
ing probability will be exponentially increasing in the sum
of the link prices at all the congested links in its path. This
sum is a precise measure of congestion in the path. Since it
is embedded in the end-to-end marking probability, it can
easily be estimated by sources from the fraction of their
own packets that are marked, and used to design their rate
adaptation.
Precisely, suppose a packet traverses links l = 1, 2, …, L
that have prices p_l(t) in period t. Then the marking probability
m_l(t) at queue l in period t is

m_l(t) = 1 − f^(−p_l(t)),   (4)

where f > 1 is a constant. The end-to-end marking probability
for the packet is then

1 − ∏_{l=1…L} (1 − m_l(t)) = 1 − f^(−Σ_l p_l(t)),   (5)
[Figure 1. Marking probability of (gentle) RED and REM. Axes: congestion measure (x-axis) vs. marking probability from 0 to 1 (y-axis).]

that is, the end-to-end marking probability is high when the
congestion measure of its path, Σ_l p_l(t), is large.
When the link marking probabilities m_l(t) are small, and
hence the link prices p_l(t) are small, the end-to-end marking
probability given by Eq. 5 is approximately proportional to the
sum of the link prices in the path:

end-to-end marking probability ≈ (log_e f) Σ_l p_l(t).   (6)
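The multiplicative structure behind Eqs. 4-6 can be verified numerically (a sketch; the function names are mine, and f is set to the value used in the article's simulations):

```python
import math

PHI = 1.001  # the constant f > 1 of Eq. 4

def link_marking_prob(price, f=PHI):
    # Eq. 4: m_l(t) = 1 - f^(-p_l(t))
    return 1.0 - f ** (-price)

def end_to_end_marking_prob(prices, f=PHI):
    # a packet reaches the destination unmarked only if it escapes
    # marking at every link, so the end-to-end probability is
    # 1 - prod(1 - m_l) = 1 - f^(-sum of p_l), which is Eq. 5
    survive = 1.0
    for p in prices:
        survive *= 1.0 - link_marking_prob(p, f)
    return 1.0 - survive

# Eq. 6: for small prices, approximately (log_e f) * sum of prices
eq6_approx = math.log(PHI) * 0.75
```

Because the per-link exponentials multiply, the sum of link prices is exactly what a source observes in its end-to-end marking rate.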
Modularized Features
The price adjustment rule given by Eq. 2 or 3 leads to the fea-
ture that REM attempts to equalize user rates with network
capacity while stabilizing queue length around a target value,
possibly zero. The exponential marking probability function
given by Eq. 4 leads to the feature that the end-to-end mark-
ing probability conveys to a user the aggregate price, aggregat-
ed over all routers in its path. These two features can be
implemented independent of each other.
For example, one may choose to use price to measure con-
gestion but use a different marking probability function, say,
one that is RED-like or some other increasing function of the
price, to implement the first, but not the second, feature.
Alternatively, one may choose to measure congestion differ-
ently, perhaps by using loss, delay, or queue length (but see
the next subsection for caution), but mark with an exponential
marking probability function, in order to implement the sec-
ond, but not the first, feature.
Congestion and Performance Measures
Reno without active queue management measures congestion
with buffer overflow, Vegas measures it with queuing (not
including propagation) delay [6], RED measures it with aver-
age queue length, and REM measures it with price. A critical
difference among them is the coupling of congestion measure
with performance measure, such as loss, delay, or queue
length, in the first three schemes. This coupling implies that,
as the number of users increases, congestion grows and per-
formance deteriorates (i.e., “congestion” necessarily means
“bad performance,” e.g., large loss or delay). If they are
decoupled, as in REM, “congestion” (i.e., high link prices)
simply signals that “demand for exceeds supply of” network
resources. This curbs demand but maintains good perfor-
mance, such as low delay and loss.
By “decoupling,” we mean that the equilibrium value of the
congestion measure is independent of the equilibrium loss,
queue length, or delay. Notice that in Eq. 3, queue length
determines the update of the congestion measure in REM
during transience, but not its equilibrium value. As the num-
ber of users grows, prices in REM grow, but queues stabilize
around their targets. Indeed, the equilibrium value of conges-
tion measure, price in REM, and average queue length in
RED, is determined solely by the network topology and the
number of users [3], not by the way it is updated.
It is thus inevitable that the average queue under RED
grows with the number of users, gentle or not. With the
original RED, it can grow to the maximum queue threshold
max_th where all packets are marked. If max_th is set too
high, the queuing delay can be excessive; if it is set too low,
the link can be underutilized due to severe buffer oscillation.
Moreover, if the congestion signal is fed back through random
dropping rather than marking, packet losses can be very fre-
quent. Hence, in times of congestion, RED can be tuned to
achieve either high link utilization or low delay and loss, but
not both. In contrast, by decoupling congestion and perfor-
mance measures, a queue can be stabilized around its target
independent of traffic load, leading to high utilization and low
delay and loss in equilibrium. These are illustrated in the sim-
ulation results in the next section.
Performance
Stability and Utility Function
It has recently been shown that major TCP congestion control
schemes — Reno/DropTail, Reno/RED, Reno/REM,
Vegas/DropTail, Vegas/RED, and Vegas/REM — can all be
interpreted as approximately carrying out a gradient algorithm
to maximize aggregate source utility [3, 6]; see also [7, 8] for a
related model. Different TCP schemes, with or without mark-
ing, merely differ in their choice of user utility functions. The
duality model thus provides a convenient way to study the sta-
bility, optimality, and fairness properties of these schemes,
and, more important, to explore their interaction. In particu-
lar, the gradient algorithm has been proved mathematically to
be stable even in an asynchronous environment [9, 10]. This
confirms our extensive real-life and simulation experience
with these TCP schemes when window sizes are relatively
small. It also has two implications.
First, even though users typically do not know what utility
functions they should use, by designing their rate adaptation
they have implicitly chosen a particular utility function. By
making this apparent, the optimization models [3, 6–8] deep-
en our understanding of the current protocols and suggest a
way to design new protocols by tailoring utility functions to
applications.
Second, the utility function may be determined not only
by users’ rate adaptation, but also by the marking algo-
rithm. This is true for Reno; that is, Reno/DropTail,
Reno/RED, and Reno/REM have slightly different utility
functions. This is a consequence of our requirement that
the additive-increase-multiplicative-decrease (AIMD) algo-
rithm react to packet losses in the same way regardless of
whether they are due to buffer overflow, or RED or REM,
even though congestion is measured and embedded very
differently in these schemes.
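As an illustration of this duality interpretation, the following toy single-link loop (my construction, not the model of [3, 6]) pairs the REM price update with sources that maximize w log x − px, so each transmits at rate x = w/p. The price converges so that the aggregate rate matches capacity and the queue settles near its target, while the equilibrium price p ≈ nw/c grows with the number of users n:

```python
def source_rate(p, w=1.0, x_max=64.0):
    # a source maximizing U(x) - p*x with the toy utility U(x) = w*log(x)
    # sends at x = w/p (capped for numerical safety near p = 0)
    return min(w / max(p, 1e-9), x_max)

def simulate(n_users, capacity=64.0, b_target=20.0, steps=20000,
             g=0.001, a=0.1):
    p, b = 1.0, 0.0
    for _ in range(steps):
        x = n_users * source_rate(p)
        b = max(b + x - capacity, 0.0)                            # Eq. 1
        p = max(p + g * (a * (b - b_target) + (x - capacity)), 0.0)  # Eq. 2
    return p, b, n_users * source_rate(p)
```

Doubling the users roughly doubles the price while the queue stays near b_target, mirroring the decoupling discussed above.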
Recently, a proportional-plus-integral (PI) controller was
proposed in [11] as an alternative active queue management
scheme to RED, and simulation results were presented to
demonstrate its superior equilibrium and transient perfor-
mance. It turns out that this PI controller and REM as
expressed in Eq. 3 are equivalent.
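The equivalence is visible by rearranging Eq. 3 (a one-line algebraic step, not taken from [11]): the price change splits into a term proportional to the change in queue length and a term accumulating the queue error,

```latex
p_l(t+1) - p_l(t) = g\,\bigl(b_l(t+1) - b_l(t)\bigr) + g\,a_l\,\bigl(b_l(t) - b^{*}\bigr),
```

so the price, which drives the marking probability, carries both proportional and integral action on the queue.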
Utilization, Loss, and Delay
We have conducted extensive simulations to compare the
performance of REM and RED with both Reno and
NewReno, with a single link and multiple links, with various
numbers of sources, link capacities, and propagation delays
[4, 5]. The relative performance of REM and RED, as expect-
ed, is similar with both Reno and NewReno since the proper-
ties discussed earlier are properties of active queue
management, independent of the source algorithms.¹ In this
subsection we present some of these results, comparing the
performance of NewReno/DropTail, NewReno/REM, and
NewReno/RED.
¹ Unlike REM, however, the goodput under RED is higher with dropping
than with marking (Fig. 2). This is intriguing and seems to happen with
NewReno but not Reno.

The simulation is conducted in the ns-2.1b6 simulator for
a single link that has a bandwidth capacity of 64 Mb/s and a
buffer capacity of 120 packets. Packets are all 1 kbyte in size.
This link is shared by 160 NewReno users with the same
round-trip propagation delay of 80 ms. Twenty users are
initially active at time 0, and every 50 s thereafter 20 more users
activate, until all 160 users are active. Two sets of parameters
are used for RED. The first set, referred to as RED(20:80),
has a minimum queue threshold min_th = 20 packets, a max-
imum queue threshold max_th = 80 packets, and max_p =
0.1. The second set, referred to as RED(10:30), has a mini-
mum queue threshold min_th = 10 packets, a maximum
queue threshold max_th = 30 packets, and max_p = 0.1. For
both sets, q_weight = 0.002. The parameter values of REM
are f = 1.001, a = 0.1, g = 0.001, and b* = 20 packets. We
have conducted experiments with both marking and dropping
packets as ways of congestion feedback. We mark or drop
packets according to the probability determined by the link
algorithm.
The results are shown in Fig. 2. As time increases on the x-
axis, the number of sources increases from 20 to 160 and the
average window size decreases from 32 packets to 4 packets.
The y-axis illustrates the performance in each period (in
between the introduction of new sources). Goodput is the
ratio of the total number of nonduplicate packets received at
all destinations per unit time to link capacity. Loss rate is the
ratio of the total number of packets dropped to the total num-
ber of packets sent.
The left panel compares the performance of REM with
DropTail. In this set of experiments, REM achieves a slightly
higher goodput than DropTail at almost all window sizes with
either dropping or ECN marking. As the number of sources
grows, REM stabilizes the mean queue around the target b*=
20 packets, whereas the mean queue under DropTail steadily
increases. The loss rate is about the same under REM with
dropping as under DropTail, as predicted by the duality
model of [3]. The loss rate under REM with marking is nearly
zero regardless of the number of sources (not shown).
The right panel compares the performance of RED with
DropTail. The goodput for DropTail upper bounds that of all
variations of RED, because it keeps a substantially larger mean
queue. The mean queue under all these five schemes steadily
increases as the number of sources grows, as discussed earlier.
As expected, RED(20:80) has both a higher goodput and mean
queue than RED(10:30) at all window sizes.
Wireless TCP
TCP (or more precisely, the AIMD algorithm) was originally
designed for wireline networks where congestion is measured,
and conveyed to users, by packet losses due to buffer over-
flows. In wireless networks, however, packets are lost mainly
because of bit errors, due to fading and interference, and
[Figure 2. Performance of NewReno/DropTail, NewReno/RED, NewReno/REM. As time increases on the x-axis, the number of users increases from 20 to 160 and the average window size decreases from 32 packets to 4 packets. Left panels: goodput (%), mean queue length (packets), and loss rate vs. time (s) for DropTail, REM dropping, and REM marking. Right panels: the same quantities for DropTail, RED dropping (10:30), RED dropping (20:80), RED marking (10:30), and RED marking (20:80).]

because of intermittent connectivity, due to handoffs. The
coupling between packet loss and congestion measure and
feedback in TCP leads to poor performance over wireless
links. This is because a TCP source cannot differentiate
between losses due to buffer overflow and those due to wire-
less effects, and halves its window on each loss event.
Three approaches have been proposed to address this
problem [12]. The first approach hides packet losses on
wireless links, so the source only sees congestion-induced
losses. This involves various interference suppression tech-
niques, and error control and local retransmission algo-
rithms on the wireless links. The second approach informs
the source, using TCP options fields, which losses are due
to wireless effects, so that the source will not halve its rate
after retransmission.
The third approach aims to eliminate packet loss due to
buffer overflow, so the source only sees wireless losses. This
violates TCP’s assumption: losses no longer indicate buffer
overflow. Congestion must be measured and fed back using a
different mechanism. Exploiting the first feature of REM ("match rate, clear buffer"), we propose to use REM with ECN marking for this purpose. A TCP source then retransmits on detecting a loss, but halves its window only on seeing a mark.
Note that the first and third approaches are complementary
and can be combined.
We now present preliminary simulation results to illustrate
the promise of this approach. The simulation is conducted in
the ns-2 simulator for a single wireless link that has a band-
width capacity of 2 Mb/s and a buffer capacity of 100 packets.
It loses packets randomly according to a Bernoulli loss model
with a loss probability of 1 percent (see [5] for simulations with a bursty loss model). A small packet size of 382 bits is
chosen to mitigate the effect of random loss. This wireless link
is shared by 100 NewReno users with the same round-trip
propagation delay of 80 ms. Twenty users are initially active at
time 0, and every 50 s thereafter 20 more users activate until
all 100 users are active.
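As a rough sketch (not the ns-2 code used in the paper), the Bernoulli loss model and the staggered activation schedule described above can be expressed as follows; the function names are illustrative:

```python
import random

def wireless_link(packets, loss_prob=0.01, seed=0):
    """Bernoulli loss model: each packet is dropped independently
    with probability loss_prob (1 percent in the simulation)."""
    rng = random.Random(seed)
    return [p for p in packets if rng.random() >= loss_prob]

def active_users(t):
    """Twenty users are active at time 0, and 20 more activate
    every 50 s until all 100 are active."""
    return min(100, 20 * (1 + int(t // 50)))
```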
With active queue management, the ECN bit is set to 1 in
ns-2 so that packets are probabilistically marked according to
RED or REM. Packets are dropped only when they arrive at
a full buffer. We modify NewReno so that it halves its window
when it receives a mark or detects a loss through timeout, but
retransmits without halving its window when it detects a loss
through duplicate acknowledgments. We compare the perfor-
mance of NewReno/DropTail, (modified) NewReno/RED,
and (modified) NewReno/REM. The parameters of RED and
REM have the same values as in the previous section.
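The window reaction of the modified NewReno described above can be sketched as follows (a simplification that ignores slow start and fast recovery; the event names are illustrative, not ns-2 identifiers):

```python
def react(cwnd, event):
    """Modified NewReno reaction: halve the window on an ECN mark
    or on a timeout-detected loss, but retransmit without halving
    on a loss detected through duplicate acknowledgments.
    Returns (new_cwnd, retransmit)."""
    if event == "mark":          # ECN mark: congestion signal
        return max(cwnd / 2, 1), False
    if event == "timeout":       # timeout loss: treated as congestion
        return max(cwnd / 2, 1), True
    if event == "dupack_loss":   # assumed wireless loss: no rate cut
        return cwnd, True
    return cwnd, False           # e.g., an ordinary ACK
```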
Figure 3 shows the goodput within each period under the
four schemes. It shows that the introduction of ECN marking
is very effective in improving the goodput of NewReno, rais-
ing it from between 62 and 91 percent to between 82 and 96
percent, depending on the number of users. Comparing REM and RED leads to a similar conclusion as in wireline networks: REM and RED(20:80) maintain a higher goodput (90 to 96 percent) than RED(10:30) (82 to 95 percent). As the number of sources increases, the
mean queue stabilizes under REM, while it steadily increases
under DropTail and RED.
This phenomenon also manifests itself in the cumulative
packet losses due only to buffer overflow shown in Fig. 4: loss
is heaviest with NewReno, negligible with RED(10:30) and
REM, and moderate with RED(20:80). Under REM and RED(10:30), the buffer overflows only during the transient following the introduction of new sources; hence, their cumulative losses jump at the beginning of each period but stay constant between jumps. Under RED(20:80) and DropTail, on the other hand, the buffer also overflows in equilibrium, so their cumulative losses increase steadily between jumps.
A challenge with this approach is its application in a het-
erogeneous network where some, but not all, routers are
ECN-capable. Routers that are not ECN-capable continue to
rely on dropping to feed back congestion information. TCP
sources that adapt their rates only based on marks run the
risk of overloading these routers. A possible solution is for
routers to somehow indicate their ECN capability, possibly
making use of one of the two ECN bits proposed in [2]. This
may require that all routers are at least ECN-aware. A source
reacts to marks only if all routers in its path are ECN-capable,
but reacts to loss as well, like a conventional TCP source, if its
path contains a router that is not ECN-capable.
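The per-path decision described above can be sketched as a hypothetical helper, assuming the source has learned whether every router on its path is ECN-capable:

```python
def is_congestion_signal(path_all_ecn_capable, event):
    """A source always reacts to marks; it reacts to losses as a
    congestion signal only when some router on the path is not
    ECN-capable (conventional TCP behavior)."""
    if event == "mark":
        return True
    if event == "loss":
        return not path_all_ecn_capable
    return False
```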
Conclusion
We have proposed a new active queue management scheme,
REM, that attempts to achieve both high utilization and negli-
gible loss and delay. The key idea is to decouple the congestion measure (price) from the performance measure (loss and queue).
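This decoupling can be sketched with REM's price update and exponential marking probability (the parameter values here are illustrative, not the ones used in the simulations):

```python
def rem_step(price, backlog, rate_in, capacity,
             target=20.0, gamma=0.001, alpha=0.1, phi=1.001):
    """One REM iteration: the price moves with a weighted sum of
    rate mismatch (rate_in - capacity) and queue mismatch
    (backlog - target), and the marking probability is
    1 - phi**(-price), exponential in the price."""
    price = max(0.0, price + gamma * (alpha * (backlog - target)
                                      + rate_in - capacity))
    mark_prob = 1.0 - phi ** (-price)
    return price, mark_prob
```

At equilibrium (backlog at its target and input rate equal to capacity), the price, and hence the marking probability, stays constant regardless of the number of users.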
Figure 3. Wireless TCP: goodput (%) versus time (s) for NewReno, REM, RED(20:80), and RED(10:30). As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 to 4 packets.
Figure 4. Wireless TCP: cumulative loss due to buffer overflow (packets) versus time (s) for NewReno, REM, RED(20:80), and RED(10:30). As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 to 4 packets.

References

[1] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Trans. Networking, vol. 1, no. 4, Aug. 1993, pp. 397–413.

F. P. Kelly, A. K. Maulloo, and D. K. H. Tan, "Rate Control for Communication Networks: Shadow Prices, Proportional Fairness and Stability," J. Operational Research Society, vol. 49, no. 3, 1998, pp. 237–252.

F. P. Kelly, "Charging and Rate Control for Elastic Traffic," European Trans. Telecommunications, vol. 8, 1997, pp. 33–37.

S. H. Low and D. E. Lapsley, "Optimization Flow Control, I: Basic Algorithm and Convergence," IEEE/ACM Trans. Networking, vol. 7, no. 6, Dec. 1999, pp. 861–874.

[12] H. Balakrishnan et al., "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links," IEEE/ACM Trans. Networking, vol. 5, no. 6, Dec. 1997, pp. 756–769.