IEEE Network • May/June 2001
48
In this article we describe a new active queue management scheme, Random Exponential Marking (REM), that has the following key features:
• Match rate clear buffer: It attempts to match user rates to network capacity while clearing buffers (or stabilizing queues around a small target), regardless of the number of users.
• Sum prices: The end-to-end marking (or dropping) probability observed by a user depends in a simple and precise manner on the sum of link prices (congestion measures), summed over all the routers in the path of the user.
The first feature implies that, contrary to the conventional
wisdom, high utilization is not achieved by keeping large back-
logs in the network, but by feeding back the right information
for users to set their rates. We present simulation results
which demonstrate that REM can maintain high utilization
with negligible loss or queuing delay as the number of users
increases.
The second feature is essential in a network where users
typically go through multiple congested links. It clarifies the
meaning of the congestion information embedded in the end-
to-end marking (or dropping) probability observed by a user,
and thus can be used to design its rate adaptation.
In the following, we describe REM and explain how it
achieves these two features. They contrast sharply with ran-
dom early detection (RED) [1]. It will become clear that these
features are independent of each other, and one can be imple-
mented without the other. We then compare the performance
of DropTail, RED, and REM in wireline networks through
simulations. It is well known that TCP performs poorly over
wireless links because it cannot differentiate between losses
due to buffer overflow and those due to wireless effects such
as fading, interference, and handoffs. We explain how REM
can help address this problem and present simulation results
of its performance.
For the rest of this article, unless otherwise specified, by
“marking” we mean either dropping a packet or setting its
explicit congestion notification (ECN) bit [2] probabilistically.
If a packet is marked by setting its ECN bit, its mark is car-
ried to the destination and then conveyed back to the source
via acknowledgment.
We start by interpreting RED.
RED
A main purpose of active queue management is to provide
congestion information for sources to set their rates. The
design of active queue management algorithms must answer
three questions, assuming packets are probabilistically marked:
How is congestion measured?
How is the measure embedded in the probability function?
How is it fed back to users?
RED answers these questions as follows.
First, RED measures congestion by (exponentially weighted
average) queue length. Importantly, the choice of congestion
measure determines how it is updated to reflect congestion
(see below) [3]. Second, the probability function is a piecewise
linear and increasing function of the congestion measure, as
illustrated in Fig. 1. Finally, the congestion information is con-
veyed to the users by either dropping a packet or setting its
ECN bit probabilistically. In fact, RED only decides the first
two questions. The third question is largely independent.
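As a concrete illustration of the first two answers, the (gentle) RED marking profile of Fig. 1 can be sketched as follows; min_th, max_th, and max_p are the conventional RED knobs, and the default values here are illustrative, not prescribed by the article:

```python
def red_mark_prob(avg_q, min_th=20.0, max_th=80.0, max_p=0.1):
    """Piecewise-linear marking probability of (gentle) RED.

    avg_q is the exponentially weighted average queue length.
    Below min_th nothing is marked; between min_th and max_th the
    probability rises linearly to max_p; in the "gentle" region
    between max_th and 2*max_th it rises linearly from max_p to 1.
    """
    if avg_q < min_th:
        return 0.0
    if avg_q < max_th:
        return max_p * (avg_q - min_th) / (max_th - min_th)
    if avg_q < 2 * max_th:
        return max_p + (1.0 - max_p) * (avg_q - max_th) / max_th
    return 1.0
```

The important point for what follows is that the argument of this function is the queue length itself, so more congestion signal necessarily means a longer queue.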
RED interacts with TCP: as source rates increase, queue
length grows, more packets are marked, prompting the
sources to reduce their rates, and the cycle repeats. TCP
defines precisely how the source rates are adjusted while
active queue management defines how the congestion mea-
sure is updated. For RED, the congestion measure is queue
length and it is automatically updated by the buffer process.
The queue length in the next period equals the current queue
length plus aggregate input minus output:
b_l(t + 1) = [b_l(t) + x_l(t) – c_l(t)]^+   (1)

where [z]^+ = max{z, 0}. Here, b_l(t) is the aggregate queue length at queue l in period t, x_l(t) is the aggregate input rate to queue l in period t, and c_l(t) is the output rate in period t.
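In code, Eq. 1 is just a clipped accumulation of the rate mismatch (a minimal sketch; units are packets per period):

```python
def next_queue(b, x, c):
    """Eq. 1: b(t+1) = [b(t) + x(t) - c(t)]^+, with [z]^+ = max(z, 0)."""
    return max(b + x - c, 0.0)

# Under persistent overload (x > c) the backlog grows every period,
# which is why a congestion measure equal to the queue length must
# keep growing as demand (the number of users) grows.
b = 0.0
for _ in range(10):
    b = next_queue(b, x=12.0, c=10.0)  # 2 packets of excess demand per period
```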
REM
REM differs from RED only in the first two design questions:
it uses a different definition of congestion measure and a dif-
ferent marking probability function. These differences lead to
the two key features mentioned in the last section, as we now
explain. Detailed derivation and justification, a pseudocode
implementation, and much more extensive simulations can be
found in [4, 5].
0890-8044/01/$10.00 © 2001 IEEE
REM: Active Queue Management
Sanjeewa Athuraliya and Steven H. Low, California Institute of Technology
Victor H. Li and Qinghe Yin, CUBIN, University of Melbourne
Abstract
We describe a new active queue management scheme, Random Exponential Marking (REM), that aims to achieve both high utilization and negligible loss and delay in a simple and scalable manner. The key idea is to decouple the congestion measure from the performance measure, such as loss, queue length, or delay. While the congestion measure indicates excess demand for bandwidth and must track the number of users, the performance measures should be stabilized around their targets independent of the number of users. We explain the design rationale behind REM and present simulation results of its performance in wireline and wireless networks.
Match Rate Clear Buffer
The first idea of REM is to stabilize both the input rate
around link capacity and the queue around a small target,
regardless of the number of users sharing the link.
Each output queue that implements REM maintains a
variable we call price as a congestion measure. This variable
is used to determine the marking probability, as explained
in the next subsection. Price is updated, periodically or
asynchronously, based on rate mismatch (i.e., difference
between input rate and link capacity) and queue mismatch
(i.e., difference between queue length and target). The price
is incremented if the weighted sum of these mismatches is
positive, and decremented otherwise. The weighted sum is
positive when either the input rate exceeds the link capacity
or there is excess backlog to be cleared, and negative other-
wise. When the number of users increases, the mismatches
in rate and queue grow, pushing up price and hence mark-
ing probability. This sends a stronger congestion signal to
the sources, which then reduce their rates. When the source
rates are too small, the mismatches will be negative, pushing
down price and marking probability and raising source
rates, until eventually the mismatches are driven to zero,
yielding high utilization and negligible loss and delay in
equilibrium. The buffer will be cleared in equilibrium if the
target queue is set to zero.
Whereas the congestion measure (queue length) in RED is
automatically updated by the buffer process according to Eq.
1, REM explicitly controls the update of its price to bring
about its first property. Precisely, for queue l, the price p_l(t) in period t is updated according to

p_l(t + 1) = [p_l(t) + g(a_l(b_l(t) – b*_l) + x_l(t) – c_l(t))]^+,   (2)

where g > 0 and a_l > 0 are small constants and [z]^+ = max{z, 0}. Here b_l(t) is the aggregate buffer occupancy at queue l in period t, b*_l ≥ 0 is the target queue length, x_l(t) is the aggregate input rate to queue l in period t, and c_l(t) is the available bandwidth to queue l in period t. The difference x_l(t) – c_l(t) measures rate mismatch and the difference b_l(t) – b*_l measures queue mismatch. The constant a_l can be set by each queue individually, and trades off utilization and queuing delay during transience. The constant g controls the responsiveness of REM to changes in network conditions. Hence, from Eq. 2, the price is increased if the weighted sum of rate and queue mismatches, weighted by a_l, is positive, and decreased otherwise. In equilibrium the price stabilizes, so this weighted sum must be zero (i.e., a_l(b_l – b*_l) + x_l – c_l = 0). This can hold only if the input rate equals capacity (x_l = c_l) and the backlog equals its target (b_l = b*_l), leading to the first feature mentioned at the beginning of the article.
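A direct transcription of the price update in Eq. 2 follows (the defaults for g and a are the step sizes used in the article's simulations, but any small positive constants work; the sketch is ours):

```python
def rem_price(p, b, b_target, x, c, g=0.001, a=0.1):
    """Eq. 2: p(t+1) = [p(t) + g*(a*(b(t) - b*) + x(t) - c(t))]^+."""
    return max(p + g * (a * (b - b_target) + x - c), 0.0)
```

At equilibrium (input rate equal to capacity and queue at its target) the price is left unchanged; any excess demand or excess backlog pushes it up, and slack pushes it down, subject to the projection at zero.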
We make two remarks on implementation. First, REM
uses only local and aggregate information — in particular,
no per-flow information is needed — and works with any
work-conserving service discipline. It updates its price inde-
pendent of other queues or routers. Hence, its complexity is
independent of the number of users or the size of the net-
work or its capacity.
Second, it is usually easier to sample queue length than
rate in practice. When the target queue length b* is nonzero,
we can bypass the measurement of rate mismatch x_l(t) – c_l(t) in the price update, Eq. 2. Notice that x_l(t) – c_l(t) is the rate at which the queue length grows when the buffer is nonempty. Hence, we can approximate this term by the change in backlog, b_l(t + 1) – b_l(t). Then the update rule of Eq. 2 becomes

p_l(t + 1) = [p_l(t) + g(b_l(t + 1) – (1 – a_l)b_l(t) – a_l b*_l)]^+,   (3)

that is, the price is updated based only on the current and previous queue lengths.
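Eq. 3 needs only two consecutive queue samples. The identity b(t+1) – (1 – a)b(t) – a b* = (b(t+1) – b(t)) + a(b(t) – b*) shows it is exactly Eq. 2 with the rate mismatch replaced by the backlog change; the sketch and numerical check below are ours:

```python
def rem_price_from_queue(p, b_now, b_next, b_target, g=0.001, a=0.1):
    """Eq. 3: p(t+1) = [p(t) + g*(b(t+1) - (1-a)*b(t) - a*b*)]^+."""
    return max(p + g * (b_next - (1.0 - a) * b_now - a * b_target), 0.0)

def rem_price_eq2_form(p, b_now, b_next, b_target, g=0.001, a=0.1):
    """Eq. 2 with the rate mismatch x - c approximated by b(t+1) - b(t)."""
    return max(p + g * (a * (b_now - b_target) + (b_next - b_now)), 0.0)
```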
The update rule expressed in Eq. 2 or 3 contrasts sharply
with RED. As the number of users increases, the marking
probability should grow to increase the intensity of congestion
signal. Since RED uses queue length to determine the mark-
ing probability, this means that the mean queue length must
steadily increase as the number of users increases. In contrast,
the update rule of Eq. 3 uses queue length to update a price
which is then used to determine the marking probability.
Hence, under REM, the price steadily increases while the mean queue length is stabilized around the target b*_l as the number of users increases. We will come back to this point in a later section.
Sum Prices
The second idea of REM is to use the sum of the link prices
along a path as a measure of congestion in the path, and to
embed it in the end-to-end marking probability that can be
observed at the source.
The output queue marks each arriving packet that has not already been marked at an upstream queue, with a probability that is exponentially increasing in the current price. This marking probability is illustrated in Fig. 1. The exponential form of
the marking probability is critical in a large network where
the end-to-end marking probability for a packet that tra-
verses multiple congested links from source to destination
depends on the link marking probability at every link in the
path. When, and only when, individual link marking proba-
bility is exponential in its link price, this end-to-end mark-
ing probability will be exponentially increasing in the sum
of the link prices at all the congested links in its path. This
sum is a precise measure of congestion in the path. Since it
is embedded in the end-to-end marking probability, it can
easily be estimated by sources from the fraction of their
own packets that are marked, and used to design their rate
adaptation.
Precisely, suppose a packet traverses links l = 1, 2, …, L
that have prices p_l(t) in period t. Then the marking probability m_l(t) at queue l in period t is

m_l(t) = 1 – f^(–p_l(t)),   (4)

where f > 1 is a constant. The end-to-end marking probability for the packet is then

1 – ∏_l (1 – m_l(t)) = 1 – f^(–Σ_l p_l(t));   (5)

that is, the end-to-end marking probability is high when the congestion measure of its path, Σ_l p_l(t), is large.

Figure 1. Marking probability of (gentle) RED and REM.
When the link marking probabilities m_l(t) are small, and hence the link prices p_l(t) are small, the end-to-end marking probability given by Eq. 5 is approximately proportional to the sum of the link prices in the path:

end-to-end marking probability ≈ (log_e f) Σ_l p_l(t).   (6)
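The sum-prices property in Eqs. 4-6 is easy to check numerically. The sketch below marks independently at each link with probability 1 – f^(–p_l), verifies that the end-to-end probability depends only on the sum of the link prices, and inverts Eq. 5 to estimate the path price from an observed marking fraction, which is the estimator the text says sources can build (the function names are ours):

```python
import math

F = 1.001  # the constant f > 1 of Eq. 4

def link_mark_prob(p):
    """Eq. 4: m_l = 1 - f^(-p_l)."""
    return 1.0 - F ** (-p)

def end_to_end_prob(prices):
    """Eq. 5: 1 - prod_l(1 - m_l) = 1 - f^(-sum of prices)."""
    survive = 1.0
    for p in prices:
        survive *= 1.0 - link_mark_prob(p)  # probability of not being marked
    return 1.0 - survive

def path_price_estimate(marked_fraction):
    """Invert Eq. 5: sum_l p_l = -log_f(1 - observed marking fraction)."""
    return -math.log(1.0 - marked_fraction, F)
```

For example, end_to_end_prob([100, 250, 150]) equals 1 – f^(–500) up to rounding, and feeding that fraction to path_price_estimate recovers the path price 500; for small prices, 1 – f^(–s) ≈ (log_e f) s, which is Eq. 6.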
Modularized Features
The price adjustment rule given by Eq. 2 or 3 leads to the fea-
ture that REM attempts to equalize user rates with network
capacity while stabilizing queue length around a target value,
possibly zero. The exponential marking probability function
given by Eq. 4 leads to the feature that the end-to-end mark-
ing probability conveys to a user the aggregate price, aggregat-
ed over all routers in its path. These two features can be
implemented independent of each other.
For example, one may choose to use price to measure con-
gestion but use a different marking probability function, say,
one that is RED-like or some other increasing function of the
price, to implement the first, but not the second, feature.
Alternatively, one may choose to measure congestion differ-
ently, perhaps by using loss, delay, or queue length (but see
the next subsection for caution), but mark with an exponential
marking probability function, in order to implement the sec-
ond, but not the first, feature.
Congestion and Performance Measures
Reno without active queue management measures congestion
with buffer overflow, Vegas measures it with queuing (not
including propagation) delay [6], RED measures it with aver-
age queue length, and REM measures it with price. A critical
difference among them is the coupling of congestion measure
with performance measure, such as loss, delay, or queue
length, in the first three schemes. This coupling implies that,
as the number of users increases, congestion grows and per-
formance deteriorates (i.e., “congestion” necessarily means
“bad performance,” e.g., large loss or delay). If they are
decoupled, as in REM, “congestion” (i.e., high link prices)
simply signals that “demand for exceeds supply of” network
resources. This curbs demand but maintains good perfor-
mance, such as low delay and loss.
By “decoupling,” we mean that the equilibrium value of the
congestion measure is independent of the equilibrium loss,
queue length, or delay. Notice that in Eq. 3, queue length
determines the update of the congestion measure in REM
during transience, but not its equilibrium value. As the num-
ber of users grows, prices in REM grow, but queues stabilize
around their targets. Indeed, the equilibrium value of the congestion measure (price in REM, average queue length in RED) is determined solely by the network topology and the number of users [3], not by the way it is updated.
It is thus inevitable that the average queue under RED
grows with the number of users, gentle or not. With the
original RED, it can grow to the maximum queue threshold
max_th where all packets are marked. If max_th is set too
high, the queuing delay can be excessive; if it is set too low,
the link can be underutilized due to severe buffer oscillation.
Moreover, if the congestion signal is fed back through random
dropping rather than marking, packet losses can be very fre-
quent. Hence, in times of congestion, RED can be tuned to
achieve either high link utilization or low delay and loss, but
not both. In contrast, by decoupling congestion and perfor-
mance measures, a queue can be stabilized around its target
independent of traffic load, leading to high utilization and low
delay and loss in equilibrium. These are illustrated in the sim-
ulation results in the next section.
Performance
Stability and Utility Function
It has recently been shown that major TCP congestion control
schemes — Reno/DropTail, Reno/RED, Reno/REM,
Vegas/DropTail, Vegas/RED, and Vegas/REM — can all be
interpreted as approximately carrying out a gradient algorithm
to maximize aggregate source utility [3, 6]; see also [7, 8] for a
related model. Different TCP schemes, with or without mark-
ing, merely differ in their choice of user utility functions. The
duality model thus provides a convenient way to study the sta-
bility, optimality, and fairness properties of these schemes,
and, more important, to explore their interaction. In particu-
lar, the gradient algorithm has been proved mathematically to
be stable even in an asynchronous environment [9, 10]. This
confirms our extensive real-life and simulation experience
with these TCP schemes when window sizes are relatively
small. It also has two implications.
First, even though users typically do not know what utility
functions they should use, by designing their rate adaptation
they have implicitly chosen a particular utility function. By
making this apparent, the optimization models [3, 6–8] deep-
en our understanding of the current protocols and suggest a
way to design new protocols by tailoring utility functions to
applications.
Second, the utility function may be determined not only
by users’ rate adaptation, but also by the marking algo-
rithm. This is true for Reno; that is, Reno/DropTail,
Reno/RED, and Reno/REM have slightly different utility
functions. This is a consequence of our requirement that
the additive-increase-multiplicative-decrease (AIMD) algo-
rithm react to packet losses in the same way regardless of
whether they are due to buffer overflow, or RED or REM,
even though congestion is measured and embedded very
differently in these schemes.
Recently, a proportional-plus-integral (PI) controller was
proposed in [11] as an alternative active queue management
scheme to RED, and simulation results were presented to
demonstrate its superior equilibrium and transient perfor-
mance. It turns out that this PI controller and REM as
expressed in Eq. 3 are equivalent.
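To see the equivalence, ignore the projection [z]^+ and rewrite the increment in Eq. 3 using the article's own symbols (a two-step rearrangement, taking p_l(0) = 0):

```latex
p_l(t+1) - p_l(t)
  = g\,\bigl[(b_l(t+1) - b_l(t)) + a_l\,(b_l(t) - b_l^*)\bigr]
\quad\Longrightarrow\quad
p_l(t+1)
  = \underbrace{g\,\bigl(b_l(t+1) - b_l(0)\bigr)}_{\text{proportional in queue}}
  + \underbrace{g\,a_l \sum_{s=0}^{t} \bigl(b_l(s) - b_l^*\bigr)}_{\text{integral of queue error}}
```

That is, the price is a proportional term in the current queue plus an integral of the queue error, which is precisely the structure of a PI controller acting on the queue length.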
Utilization, Loss, and Delay
We have conducted extensive simulations to compare the
performance of REM and RED with both Reno and
NewReno, with a single link and multiple links, with various
numbers of sources, link capacities, and propagation delays
[4, 5]. The relative performance of REM and RED, as expect-
ed, is similar with both Reno and NewReno since the proper-
ties discussed earlier are properties of active queue
management, independent of the source algorithms. (Unlike REM, however, the goodput under RED is higher with dropping than with marking, Fig. 2; this is intriguing and seems to happen with NewReno but not Reno.) In this subsection we present some of these results, comparing the performance of NewReno/DropTail, NewReno/REM, and NewReno/RED.
The simulation is conducted in the ns-2.1b6 simulator for a single link that has a bandwidth capacity of 64 Mb/s and a buffer capacity of 120 packets. Packets are all 1 kbyte in size. This link is shared by 160 NewReno users with the same round-trip propagation delay of 80 ms. Twenty users are initially active at time 0, and every 50 s thereafter 20 more users
activate, until all 160 users are active. Two sets of parameters
are used for RED. The first set, referred to as RED(20:80),
has a minimum queue threshold min_th = 20 packets, a max-
imum queue threshold max_th = 80 packets, and max_p =
0.1. The second set, referred to as RED(10:30), has a mini-
mum queue threshold min_th = 10 packets, a maximum
queue threshold max_th = 30 packets, and max_p = 0.1. For
both sets, q_weight = 0.002. The parameter values of REM
are f = 1.001, a = 0.1, g = 0.001, and b* = 20 packets. We
have conducted experiments with both marking and dropping
packets as ways of congestion feedback. We mark or drop
packets according to the probability determined by the link
algorithm.
The results are shown in Fig. 2. As time increases on the x-
axis, the number of sources increases from 20 to 160 and the
average window size decreases from 32 packets to 4 packets.
The y-axis illustrates the performance in each period (in
between the introduction of new sources). Goodput is the
ratio of the total number of nonduplicate packets received at
all destinations per unit time to link capacity. Loss rate is the
ratio of the total number of packets dropped to the total num-
ber of packets sent.
The left panel compares the performance of REM with
DropTail. In this set of experiments, REM achieves a slightly
higher goodput than DropTail at almost all window sizes with
either dropping or ECN marking. As the number of sources
grows, REM stabilizes the mean queue around the target b*=
20 packets, whereas the mean queue under DropTail steadily
increases. The loss rate is about the same under REM with
dropping as under DropTail, as predicted by the duality
model of [3]. The loss rate under REM with marking is nearly
zero regardless of the number of sources (not shown).
The right panel compares the performance of RED with
DropTail. The goodput for DropTail upper bounds that of all
variations of RED, because it keeps a substantially larger mean
queue. The mean queue under all these five schemes steadily
increases as the number of sources grows, as discussed earlier.
As expected, RED(20:80) has both a higher goodput and mean
queue than RED(10:30) at all window sizes.
Figure 2. Performance of NewReno/DropTail, NewReno/RED, and NewReno/REM. As time increases on the x-axis, the number of users increases from 20 to 160 and the average window size decreases from 32 packets to 4 packets.

Wireless TCP
TCP (or, more precisely, the AIMD algorithm) was originally designed for wireline networks, where congestion is measured, and conveyed to users, by packet losses due to buffer overflows. In wireless networks, however, packets are lost mainly because of bit errors, due to fading and interference, and
because of intermittent connectivity, due to handoffs. The
coupling between packet loss and congestion measure and
feedback in TCP leads to poor performance over wireless
links. This is because a TCP source cannot differentiate
between losses due to buffer overflow and those due to wire-
less effects, and halves its window on each loss event.
Three approaches have been proposed to address this
problem [12]. The first approach hides packet losses on
wireless links, so the source only sees congestion-induced
losses. This involves various interference suppression tech-
niques, and error control and local retransmission algo-
rithms on the wireless links. The second approach informs
the source, using TCP options fields, which losses are due
to wireless effects, so that the source will not halve its rate
after retransmission.
The third approach aims to eliminate packet loss due to
buffer overflow, so the source only sees wireless losses. This
violates TCP’s assumption: losses no longer indicate buffer
overflow. Congestion must be measured and fed back using a
different mechanism. Exploiting the first feature of REM
(match rate clear buffer), we propose to use REM with ECN
marking for this purpose. Then a TCP source only retransmits
on detecting a loss and halves its window when seeing a mark.
Note that the first and third approaches are complementary
and can be combined.
We now present preliminary simulation results to illustrate
the promise of this approach. The simulation is conducted in
the ns-2 simulator for a single wireless link that has a band-
width capacity of 2 Mb/s and a buffer capacity of 100 packets.
It loses packets randomly according to a Bernoulli loss model
with a loss probability of 1 percent (see [5] for simulations
with a bursty loss model). A small packet size of 382 bits is
chosen to mitigate the effect of random loss. This wireless link
is shared by 100 NewReno users with the same round-trip
propagation delay of 80 ms. Twenty users are initially active at
time 0, and every 50 s thereafter 20 more users activate until
all 100 users are active.
With active queue management, the ECN bit is set to 1 in
ns-2 so that packets are probabilistically marked according to
RED or REM. Packets are dropped only when they arrive at
a full buffer. We modify NewReno so that it halves its window
when it receives a mark or detects a loss through timeout, but
retransmits without halving its window when it detects a loss
through duplicate acknowledgments. We compare the perfor-
mance of NewReno/DropTail, (modified) NewReno/RED,
and (modified) NewReno/REM. The parameters of RED and
REM have the same values as in the previous section.
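The modified window response can be sketched as a small decision rule (the event names and structure are ours, not ns-2 code; note that, following the article, the window is halved rather than reset on a timeout):

```python
def newreno_wireless_response(cwnd, event):
    """Window response of the modified NewReno in the wireless experiment.

    'mark'    : ECN mark received      -> halve the window (congestion)
    'timeout' : retransmission timeout -> halve the window
    'dupack'  : loss via duplicate
                acknowledgments        -> retransmit only; the loss is
                treated as a wireless loss, so the window is kept.
    """
    if event in ("mark", "timeout"):
        return max(cwnd / 2.0, 1.0)
    if event == "dupack":
        return cwnd  # retransmit without reducing the rate
    return cwnd
```

Because REM with marking keeps the buffer from overflowing, almost every remaining loss event really is a wireless loss, so ignoring duplicate-acknowledgment losses is safe under this scheme.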
Figure 3 shows the goodput within each period under the
four schemes. It shows that the introduction of ECN marking
is very effective in improving the goodput of NewReno, rais-
ing it from between 62 and 91 percent to between 82 and 96
percent, depending on the number of users. Comparison
between REM and RED has a similar conclusion as in wire-
line networks: REM and RED(20:80) maintain a higher good-
put (between 90 and 96 percent) than RED(10:30) (between
82 and 95 percent). As the number of sources increases, the
mean queue stabilizes under REM, while it steadily increases
under DropTail and RED.
This phenomenon also manifests itself in the cumulative
packet losses due only to buffer overflow shown in Fig. 4: loss
is heaviest with NewReno, negligible with RED(10:30) and
REM, and moderate with RED(20:80). Under REM and
RED(10:30) buffer overflows only during transient following
introduction of new sources, and hence their cumulative losses
jump up at the beginning of each period but stay constant
between jumps. Under RED(20:80) and DropTail, on the
other hand, buffer overflows also in equilibrium, and hence
their cumulative losses steadily increase between jumps.
A challenge with this approach is its application in a het-
erogeneous network where some, but not all, routers are
ECN-capable. Routers that are not ECN-capable continue to
rely on dropping to feed back congestion information. TCP
sources that adapt their rates only based on marks run the
risk of overloading these routers. A possible solution is for
routers to somehow indicate their ECN capability, possibly
making use of one of the two ECN bits proposed in [2]. This
may require that all routers are at least ECN-aware. A source
reacts to marks only if all routers in its path are ECN-capable,
but reacts to loss as well, like a conventional TCP source, if its
path contains a router that is not ECN-capable.
Conclusion
We have proposed a new active queue management scheme,
REM, that attempts to achieve both high utilization and negli-
gible loss and delay. The key idea is to decouple congestion
measure (price) from performance measure (loss and queue).
Figure 3. Wireless TCP: goodput (%). As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 to 4 packets.
Figure 4. Wireless TCP: cumulative loss due to buffer overflow (packets). As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 packets to 4 packets.

The provided paper does not mention anything about REM being measured in schizophrenia.