REM: active queue management
Summary
Introduction
- The second feature — that the end-to-end marking probability conveys the sum of link prices — is essential in a network where users typically traverse multiple congested links.
- They contrast sharply with random early detection (RED) [1].
- The authors explain how REM can help address this problem and present simulation results of its performance.
- For the rest of this article, unless otherwise specified, by “marking” the authors mean either dropping a packet or setting its explicit congestion notification (ECN) bit [2] probabilistically.
RED
- A main purpose of active queue management is to provide congestion information for sources to set their rates.
- In fact, RED only decides the first two design questions.
- For RED, the congestion measure is queue length and it is automatically updated by the buffer process.
REM
- REM differs from RED only in the first two design questions: it uses a different definition of congestion measure and a different marking probability function.
- These differences lead to the two key features mentioned in the last section, as the authors now explain.
- Detailed derivation and justification, a pseudocode implementation, and much more extensive simulations can be found in [4, 5].
Match Rate Clear Buffer
- The first idea of REM is to stabilize both the input rate around link capacity and the queue around a small target, regardless of the number of users sharing the link.
- When the number of users increases, the mismatches in rate and queue grow, pushing up price and hence marking probability.
- Whereas the congestion measure (queue length) in RED is automatically updated by the buffer process according to Eq. 1, REM explicitly controls the update of its price to bring about its first property.
- This can hold only if the input rate equals capacity (xl = cl) and the backlog equals its target (bl = b*l), leading to the first feature mentioned at the beginning of the article.
- When the target queue length b* is nonzero, the authors can bypass the measurement of rate mismatch xl(t) – cl(t) in the price update, Eq. 2. Notice that xl(t) – cl(t) is the rate at which the queue length grows when the buffer is nonempty.
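The price update described above can be sketched in a few lines. This is a sketch of Eq. 2 and its queue-only variant, not the authors' implementation; the step sizes `gamma` and `alpha` are illustrative constants, chosen here only so the code runs:

```python
def rem_price_update(price, backlog, target_backlog, input_rate, capacity,
                     gamma=0.001, alpha=0.1):
    """One step of REM's price update (a sketch of Eq. 2; gamma and alpha
    are illustrative constants, not values from the paper)."""
    # Price rises when the input rate exceeds capacity or the backlog
    # exceeds its target, falls otherwise, and is clamped at zero.
    mismatch = alpha * (backlog - target_backlog) + (input_rate - capacity)
    return max(0.0, price + gamma * mismatch)

def rem_price_update_from_queue(price, backlog, prev_backlog, target_backlog,
                                gamma=0.001, alpha=0.1):
    """Variant that bypasses measuring x - c directly: when the buffer is
    nonempty, backlog - prev_backlog equals the rate mismatch."""
    mismatch = alpha * (backlog - target_backlog) + (backlog - prev_backlog)
    return max(0.0, price + gamma * mismatch)
```

At equilibrium (input rate equal to capacity and backlog at its target) the mismatch is zero and the price stops moving, which is exactly the "match rate, clear buffer" property.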
Sum Prices
- The output queue marks each arrival packet not already marked at an upstream queue, with a probability that is exponentially increasing in the current price.
- When, and only when, individual link marking probability is exponential in its link price, this end-to-end marking probability will be exponentially increasing in the sum of the link prices at all the congested links in its path.
- Since it is embedded in the end-to-end marking probability, it can easily be estimated by sources from the fraction of their own packets that are marked, and used to design their rate adaptation.
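The sum-prices property follows from the exponential form of the marking probability and can be checked numerically. In this sketch the base `phi = 1.1` is an illustrative choice; the paper leaves it as a parameter:

```python
PHI = 1.1  # base phi > 1; an illustrative choice, left as a parameter in the paper

def mark_prob(price):
    """Exponential link marking probability: m = 1 - phi^(-price)."""
    return 1.0 - PHI ** (-price)

def end_to_end_mark_prob(prices):
    """A packet escapes unmarked only if every link in its path leaves it
    unmarked, so the escape probability is the product of the per-link
    escape probabilities phi^(-p) -- i.e. phi^(-sum of prices)."""
    escape = 1.0
    for p in prices:
        escape *= 1.0 - mark_prob(p)  # each factor equals PHI ** (-p)
    return 1.0 - escape               # = 1 - PHI ** (-sum(prices))
```

Because the per-link escape probabilities multiply, the end-to-end marking probability for a path with prices 2, 3, and 5 equals that of a single link with price 10 — the property that lets sources read the aggregate path price off their own marked-packet fraction.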
Modularized Features
- The price adjustment rule given by Eq. 2 or 3 leads to the feature that REM attempts to equalize user rates with network capacity while stabilizing queue length around a target value, possibly zero.
- The exponential marking probability function given by Eq. 4 leads to the feature that the end-to-end marking probability conveys to a user the aggregate price, aggregated over all routers in its path.
- These two features can be implemented independently of each other.
- One may choose to use price to measure congestion but use a different marking probability function, say, one that is RED-like or some other increasing function of the price, to implement the first, but not the second, feature.
- Alternatively, one may choose to measure congestion differently, perhaps by using loss, delay, or queue length (but see the next subsection for caution), but mark with an exponential marking probability function, in order to implement the second, but not the first, feature.
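As one illustration of this modularity, the price could be fed through a RED-style piecewise-linear marking function instead of the exponential one, giving the first feature (price as congestion measure) without the second (marks no longer sum across links). The thresholds below are invented for illustration only:

```python
def redlike_mark_prob(price, min_th=1.0, max_th=5.0, max_p=0.1):
    """A RED-style piecewise-linear marking function applied to REM's price
    rather than to average queue length (thresholds are hypothetical).
    Because it is not exponential in the price, end-to-end marks no longer
    encode the sum of link prices."""
    if price <= min_th:
        return 0.0
    if price >= max_th:
        return 1.0
    return max_p * (price - min_th) / (max_th - min_th)
```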
Congestion and Performance Measures
- Reno without active queue management measures congestion with buffer overflow, Vegas measures it with queuing (not including propagation) delay [6], RED measures it with average queue length, and REM measures it with price.
- It is thus inevitable that the average queue under RED grows with the number of users, gentle or not.
- Hence, in times of congestion, RED can be tuned to achieve either high link utilization or low delay and loss, but not both.
- In contrast, by decoupling congestion and performance measures, a queue can be stabilized around its target independent of traffic load, leading to high utilization and low delay and loss in equilibrium.
- These are illustrated in the simulation results in the next section.
Stability and Utility Function
- It has recently been shown that major TCP congestion control schemes — Reno/DropTail, Reno/RED, Reno/REM, Vegas/DropTail, Vegas/RED, and Vegas/REM — can all be interpreted as approximately carrying out a gradient algorithm to maximize aggregate source utility [3, 6]; see also [7, 8] for a related model.
- The duality model thus provides a convenient way to study the stability, optimality, and fairness properties of these schemes, and, more important, to explore their interaction.
- This confirms their extensive real-life and simulation experience with these TCP schemes when window sizes are relatively small.
- First, even though users typically do not know what utility functions they should use, by designing their rate adaptation they have implicitly chosen a particular utility function.
- Recently, a proportional-plus-integral (PI) controller was proposed in [11] as an alternative active queue management scheme to RED, and simulation results were presented to demonstrate its superior equilibrium and transient performance.
- Unlike REM, however, the goodput under RED is higher with dropping than with marking (Fig. 2). This is intriguing and seems to happen with…
- The authors mark or drop packets according to the probability determined by the link algorithm.
- As time increases on the x-axis, the number of sources increases from 20 to 160 and the average window size decreases from 32 packets to 4 packets.
- The loss rate is about the same under REM with dropping as under DropTail, as predicted by the duality model of [3].
- The goodput for DropTail upper bounds that of all variations of RED, because it keeps a substantially larger mean queue.
Wireless TCP
- TCP (or more precisely, the AIMD algorithm) was originally designed for wireline networks where congestion is measured, and conveyed to users, by packet losses due to buffer overflows.
- The third approach aims to eliminate packet loss due to buffer overflow, so the source only sees wireless losses.
- This phenomenon also manifests itself in the cumulative packet losses due only to buffer overflow shown in Fig. 4: loss is heaviest with NewReno, negligible with RED(10:30) and REM, and moderate with RED(20:80).
- A possible solution is for routers to somehow indicate their ECN capability, possibly making use of one of the two ECN bits proposed in [2].
Conclusion
- The authors have proposed a new active queue management scheme, REM, that attempts to achieve both high utilization and negligible loss and delay.
- As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 packets to 4 packets.
- Simulation results suggest that this goal seems achievable without sacrificing the simplicity and scalability of the original RED.
- The authors emphasize, however, that it is an equilibrium property, and REM’s transient behavior needs more careful study.
Frequently Asked Questions (17)
Q2. What is the effect of the ECN bit on the packets?
With active queue management, the ECN bit is set to 1 in ns-2 so that packets are probabilistically marked according to RED or REM.
Q3. How many packets are in the REM?
As time increases on the x-axis, the number of sources increases from 20 to 100 and the average window size decreases from 22 packets to 4 packets.
Q4. What is the significance of the exponential form of the marking probability?
The exponential form of the marking probability is critical in a large network where the end-to-end marking probability for a packet that traverses multiple congested links from source to destination depends on the link marking probability at every link in the path.
Q5. What is the price of a queue in RED?
Whereas the congestion measure (queue length) in RED is automatically updated by the buffer process according to Eq. 1, REM explicitly controls the update of its price to bring about its first property.
Q6. What is the end-to-end marking probability of REM?
When the link marking probabilities ml(t) are small, and hence the link prices pl(t) are small, the end-to-end marking probability given by Eq. 5 is approximately proportional to the sum of the link prices in the path (Eq. 6).
Q7. What is the simplest approach to preventing packet loss?
The simplest approach works at the link layer: it involves various interference suppression techniques, and error control and local retransmission algorithms on the wireless links.
Q8. What is the effect of a retransmitter when it detects a loss?
The authors modify NewReno so that it halves its window when it receives a mark or detects a loss through timeout, but retransmits without halving its window when it detects a loss through duplicate acknowledgments.
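A minimal sketch of that modified response, using hypothetical event names in place of the real detection logic (which would come from acks, ECN-marked acks, and retransmission timers):

```python
def react_to_event(cwnd, event):
    """Sketch of the modified NewReno reaction described above.
    Event names ('ecn_mark', 'timeout_loss', 'dupack_loss') are
    hypothetical labels, not fields of any real TCP implementation."""
    if event in ("ecn_mark", "timeout_loss"):
        # Congestion signal: halve the window (never below 1 segment).
        return max(1, cwnd // 2)
    if event == "dupack_loss":
        # Loss presumed wireless: retransmit but keep the window.
        return cwnd
    return cwnd
```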
Q9. What is the end-to-end marking probability of a link?
When, and only when, the individual link marking probability is exponential in its link price, the end-to-end marking probability will be exponentially increasing in the sum of the link prices at all the congested links in its path.
Q10. How many packets are in the ns-2.1b6 simulator?
The simulation is conducted in the ns-2.1b6 simulator for a single link that has a bandwidth capacity of 64 Mb/s and a buffer capacity of 120 packets.
Q11. What is the second approach to reducing packet loss?
The second approach informs the source, using TCP options fields, which losses are due to wireless effects, so that the source will not halve its rate after retransmission.
Q12. What does RED use to determine the marking probability?
Since RED uses queue length to determine the marking probability, this means that the mean queue length must steadily increase as the number of users increases.
Q13. What is the average buffer occupancy in a period t?
Here bl(t) is the aggregate buffer occupancy at queue l in period t and b*l ≥ 0 is target queue length, xl(t) is the aggregate input rate to queue l in period t, and cl(t) is the available bandwidth to queue l in period t.
Q14. What is the value of the queue length in the next period?
The queue length in the next period equals the current queue length plus aggregate input minus output: bl(t + 1) = [bl(t) + xl(t) – cl(t)]+ (Eq. 1), where [z]+ = max{z, 0}.
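This one-line sketch restates the buffer dynamics of Eq. 1, with [z]+ meaning max(z, 0):

```python
def queue_update(backlog, input_rate, capacity):
    """Buffer evolution per Eq. 1: b(t+1) = [b(t) + x(t) - c(t)]^+.
    The positive-part operator keeps the queue from going negative when
    the output drains more than is waiting plus arriving."""
    return max(0.0, backlog + input_rate - capacity)
```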
Q15. What is the marking probability of a packet?
one may choose to measure congestion differently, perhaps by using loss, delay, or queue length (but see the next subsection for caution), but mark with an exponential marking probability function, in order to implement the second, but not the first, feature.
Q16. What is the rate at which the queue length grows when the buffer is nonempty?
When the target queue length b* is nonzero, the authors can bypass the measurement of rate mismatch xl(t) – cl(t) in the price update, Eq. 2. Notice that xl(t) – cl(t) is the rate at which the queue length grows when the buffer is nonempty.
Q17. What is the weighted sum of the queue length in the next period?
The weighted sum is positive when either the input rate exceeds the link capacity or there is excess backlog to be cleared, and negative otherwise.