Journal ArticleDOI

Random early detection gateways for congestion avoidance

01 Aug 1993-IEEE/ACM Transactions on Networking (IEEE Press)-Vol. 1, Iss: 4, pp 397-413
TL;DR: RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP; they have no bias against bursty traffic and avoid the global synchronization of many connections decreasing their windows at the same time.
Abstract: The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
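
A compact sketch of the per-packet decision the abstract describes, with illustrative parameter names (min_th, max_th, max_p) rather than the paper's own code, and omitting the count-based spacing of drops used in the full algorithm:

```c
#include <stdlib.h>

/* Sketch of RED's per-packet decision: accept below min_th, drop/mark
 * with a probability that grows with the average queue size between
 * min_th and max_th, and always drop/mark above max_th.
 * (Illustrative only; the full algorithm also spaces drops out using
 * a count of packets since the last drop.) */
int red_should_drop(double avg, double min_th, double max_th, double max_p)
{
    if (avg < min_th)
        return 0;                               /* no congestion: accept */
    if (avg >= max_th)
        return 1;                               /* severe: always drop/mark */
    double p = max_p * (avg - min_th) / (max_th - min_th);
    return ((double)rand() / RAND_MAX) < p;     /* drop/mark with prob. p */
}
```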


Citations
01 Jan 2007
TL;DR: A survey of the recent efforts towards a systematic understanding of layering as optimization decomposition, where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems.
Abstract: Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition," where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how "Layering as Optimization Decomposition" provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures.
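
As a pointer to the survey's central object, the basic network utility maximization (NUM) problem being decomposed can be written, in the standard notation of this literature (source rates x_s with utilities U_s, routing matrix R, link capacities c):

$$\max_{x \ge 0} \;\sum_{s} U_s(x_s) \quad \text{subject to} \quad Rx \le c$$

Dual decomposition of the capacity constraint yields per-link congestion prices, which is how TCP/AQM pairs such as Reno/RED are reverse-engineered as distributed primal-dual solvers in this framework.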

1,229 citations


Cites background or methods from "Random early detection gateways for..."

  • ...Here, F_s model TCP algorithms (e.g., Reno or Vegas) and (G_l, H_l) model AQMs (e.g., RED, REM)....


  • ...On the current Internet, the source algorithm is carried out by Transmission Control Protocol (TCP), and the link algorithm is carried out by (active) queue management (AQM) schemes such as DropTail or RED [43]....


  • ...Then, (the "gentle" version of) RED marks a packet with a probability $m_l(t)$ that is a piecewise-linear, increasing function of $r_l(t)$ with constants $\rho_1$, $\rho_2$, $M_l$, $\underline{b}_l$, and $\bar{b}_l$:

    $$m_l(t) = \begin{cases} 0, & r_l(t) \le \underline{b}_l \\ \rho_1\,\big(r_l(t) - \underline{b}_l\big), & \underline{b}_l \le r_l(t) \le \bar{b}_l \\ \rho_2\,\big(r_l(t) - \bar{b}_l\big) + M_l, & \bar{b}_l \le r_l(t) \le 2\bar{b}_l \\ 1, & r_l(t) \ge 2\bar{b}_l \end{cases} \tag{12}$$

    Equations (10)–(12) define the model $(G, H)$ for RED....


  • ...TCP Reno/RED: The congestion control algorithm in the large majority of current TCP implementations is (an enhanced version of) TCP Reno, first proposed in [55]....


  • ...[2] F. Baccelli, D. R. McDonald, and J. Reynier, "A mean-field model for multiple TCP connections through a buffer implementing RED," INRIA, Tech....

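A small C sketch of the piecewise-linear "gentle" RED marking curve quoted above; parameter names (min_th, max_th, max_p) are illustrative stand-ins for the constants in eq. (12), not code from either paper:

```c
/* "Gentle" RED marking probability as a piecewise-linear, increasing
 * function of the average queue length r (cf. eq. (12) above).
 * The slopes correspond to rho1 = max_p / (max_th - min_th) and
 * rho2 = (1 - max_p) / max_th. Names are illustrative. */
double gentle_red_mark_prob(double r, double min_th, double max_th, double max_p)
{
    if (r <= min_th)
        return 0.0;
    if (r <= max_th)
        return max_p * (r - min_th) / (max_th - min_th);
    if (r <= 2.0 * max_th)
        return max_p + (1.0 - max_p) * (r - max_th) / max_th;
    return 1.0;                 /* beyond twice max_th: mark everything */
}
```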

Journal ArticleDOI
01 Jul 1996
TL;DR: The congestion control algorithms in the simulated implementation of SACK TCP are described, and it is shown that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance.
Abstract: This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained to either retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered.
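
A minimal sketch of why selective acknowledgments lift the one-retransmission-per-RTT constraint the paper identifies; the scoreboard structure and names below are hypothetical, not the paper's simulator code:

```c
#include <stdbool.h>
#include <stdio.h>

#define WINDOW 16

/* With cumulative ACKs only, a partial ACK reveals at most one hole, so a
 * New-Reno sender retransmits one segment per round-trip time. A SACK
 * scoreboard marks every received block, so all holes below the highest
 * SACKed sequence can be retransmitted within the same RTT.
 * (Hypothetical illustration.) */
int main(void)
{
    bool sacked[WINDOW];
    /* Suppose segments 3, 7 and 8 were lost; everything else was SACKed. */
    for (int i = 0; i < WINDOW; i++)
        sacked[i] = (i != 3 && i != 7 && i != 8);

    /* SACK sender: retransmit every known hole this RTT. */
    printf("SACK retransmits this RTT:");
    for (int i = 0; i < WINDOW; i++)
        if (!sacked[i])
            printf(" %d", i);
    printf("\n");

    /* Cumulative-ACK sender: only the first hole is known. */
    int first_hole = 0;
    while (first_hole < WINDOW && sacked[first_hole])
        first_hole++;
    printf("New-Reno retransmits this RTT: %d\n", first_hole);
    return 0;
}
```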

1,228 citations

Journal ArticleDOI
TL;DR: A simple analytic characterization of the steady-state send rate as a function of loss rate and round-trip time is developed for a bulk transfer TCP flow; it predicts TCP send rate more accurately than earlier models and is accurate over a wider range of loss rates.
Abstract: The steady-state performance of a bulk transfer TCP flow (i.e., a flow with a large amount of data to send, such as FTP transfers) may be characterized by the send rate, which is the amount of data sent by the sender in unit time. In this paper we develop a simple analytic characterization of the steady-state send rate as a function of loss rate and round trip time (RTT) for a bulk transfer TCP flow. Unlike the models of Lakshman and Madhow (see IEEE/ACM Trans. Networking, vol.5, p.336-50, 1997), Mahdavi and Floyd (1997), Mathis, Semke, Mahdavi and Ott (see Comput. Commun. Rev., vol.27, no.3, 1997) and of Ott et al., our model captures not only the behavior of the fast retransmit mechanism but also the effect of the time-out mechanism. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more time-out events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP send rate and is accurate over a wider range of loss rates. We also present a simple extension of our model to compute the throughput of a bulk transfer TCP flow, which is defined as the amount of data received by the receiver in unit time.
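
The model is usually quoted in the following closed form, where p is the loss rate, RTT the round-trip time, T_0 the retransmission timeout, and b the number of packets acknowledged per ACK (a pointer to the published model, not a quotation from this abstract):

$$B(p) \;\approx\; \frac{1}{RTT\,\sqrt{\dfrac{2bp}{3}} \;+\; T_0\,\min\!\left(1,\; 3\sqrt{\dfrac{3bp}{8}}\right) p\,\left(1 + 32p^2\right)}$$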

1,192 citations

Proceedings ArticleDOI
19 Aug 2002
TL;DR: XCP generalizes the Explicit Congestion Notification proposal (ECN) and decouples utilization control from fairness control, which allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation.
Abstract: Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links. To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation. Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.
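
For reference, the decoupling works as follows: each average RTT d, the efficiency controller computes an aggregate feedback φ from the spare bandwidth S and the persistent queue Q, and the fairness controller then apportions φ across flows AIMD-style. Quoting the gains from memory of the paper's stability analysis:

$$\phi = \alpha\, d\, S \;-\; \beta\, Q, \qquad \alpha = 0.4, \;\; \beta = 0.226$$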

1,191 citations

Journal ArticleDOI
TL;DR: It is argued that controlled link-sharing is an essential component that can provide gateways with the flexibility to accommodate emerging applications and network protocols.
Abstract: Discusses the use of link-sharing mechanisms in packet networks and presents algorithms for hierarchical link-sharing. Hierarchical link-sharing allows multiple agencies, protocol families, or traffic types to share the bandwidth on a link in a controlled fashion. Link-sharing and real-time services both require resource management mechanisms at the gateway. Rather than requiring a gateway to implement separate mechanisms for link-sharing and real-time services, the approach in the paper is to view link-sharing and real-time service requirements as simultaneous, and in some respect complementary, constraints at a gateway that can be implemented with a unified set of mechanisms. While it is not possible to completely predict the requirements that might evolve in the Internet over the next decade, the authors argue that controlled link-sharing is an essential component that can provide gateways with the flexibility to accommodate emerging applications and network protocols.
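
A toy sketch of the object hierarchical link-sharing manages: a tree of traffic classes, each promised a fraction of its parent's bandwidth. Types and field names are hypothetical illustrations, not the paper's class-based queueing implementation:

```c
#include <stddef.h>

/* Hypothetical link-sharing class tree: leaves carry one agency's or
 * traffic type's packets; interior nodes group them. Each class is
 * promised a fraction of its parent's bandwidth. */
struct ls_class {
    const char      *name;    /* e.g., "agency-A" or "realtime" */
    double           share;   /* fraction of parent's bandwidth (root: 1.0) */
    struct ls_class *parent;  /* NULL at the root (the physical link) */
};

/* Bandwidth promised to a class, walking shares up to the link rate. */
double promised_bw(const struct ls_class *c, double link_rate_bps)
{
    double bw = link_rate_bps;
    for (; c != NULL; c = c->parent)
        bw *= c->share;
    return bw;
}
```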

1,181 citations

References
Book ChapterDOI
TL;DR: Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt; the results are then extended to certain sums of dependent random variables such as U-statistics.
Abstract: Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr {S – ES ≥ nt} depend only on the endpoints of the ranges of the summands and the mean, or the mean and the variance of S. These results are then used to obtain analogous inequalities for certain sums of dependent random variables such as U statistics and the sum of a random sample without replacement from a finite population.
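
The best-known of these bounds, now called Hoeffding's inequality, reads as follows for S = X_1 + ... + X_n with independent X_i taking values in [a_i, b_i]:

$$\Pr\{S - ES \ge nt\} \;\le\; \exp\!\left(-\frac{2n^2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right)$$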

8,655 citations

Journal ArticleDOI
01 Aug 1988
TL;DR: The measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet; the paper also covers Karn's clamped retransmit backoff, an algorithm developed by Phil Karn of Bell Communications Research, and a fast retransmit algorithm described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation, (ii) exponential retransmit timer backoff, (iii) slow-start, (iv) more aggressive receiver ack policy, (v) dynamic window sizing on congestion, (vi) Karn's clamped retransmit backoff, and (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.

This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: The flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': A new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail: (1) the connection doesn't get to equilibrium, or (2) a sender injects a new packet before an old packet has exited, or (3) the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.
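
A minimal sketch of the window dynamics behind items (iii) and (v), slow-start plus additive-increase/multiplicative-decrease congestion avoidance; units and names are illustrative, not 4.3BSD source:

```c
/* Illustrative TCP sender window update (units: segments).
 * Slow-start grows cwnd by one per ACK (doubling per RTT) until it
 * reaches ssthresh; congestion avoidance then adds ~1 segment per RTT.
 * On loss, ssthresh is halved and (Tahoe-style) cwnd restarts at 1. */
struct tcp_cc { double cwnd, ssthresh; };

static void on_ack(struct tcp_cc *s)
{
    if (s->cwnd < s->ssthresh)
        s->cwnd += 1.0;                 /* slow-start */
    else
        s->cwnd += 1.0 / s->cwnd;       /* congestion avoidance */
}

static void on_loss(struct tcp_cc *s)
{
    s->ssthresh = s->cwnd / 2.0;        /* multiplicative decrease */
    if (s->ssthresh < 2.0)
        s->ssthresh = 2.0;
    s->cwnd = 1.0;                      /* Tahoe: back to slow-start */
}
```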

5,620 citations


"Random early detection gateways for..." refers background in this paper

  • ...In addition, the emphasis on avoiding the global synchronization that results from many connections reducing their windows at the same time is particularly relevant in a network with 4.3-Tahoe BSD TCP [14], where each connection goes through Slow-Start, reducing the window to one, in response to a dropped packet....


  • ...In addition to the design goals discussed in Section 3, several general goals have been outlined for congestion avoidance schemes [14, 16]....


  • ...Jacobson [14] proposed gateways to monitor the average queue size to detect incipient congestion, and to randomly drop...


  • ...As long as w_q is chosen as a (negative) power of two, this can be implemented with one shift and two additions (given scaled versions of the parameters) [14].... (See the fixed-point sketch after this list.)

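The trick quoted above concerns the EWMA update avg <- (1 - w_q)*avg + w_q*q. With w_q = 2^-n and the average kept scaled by 2^n, the update reduces to one shift and two add/subtract operations; a minimal sketch with illustrative names, not the paper's code:

```c
/* Fixed-point EWMA of the queue length for wq = 2^-WQ_SHIFT.
 * avg_scaled holds avg * 2^WQ_SHIFT, so
 *     avg_scaled += q - (avg_scaled >> WQ_SHIFT)
 * implements avg <- (1 - wq)*avg + wq*q with one shift and two
 * additions. Names are illustrative. */
#define WQ_SHIFT 9                    /* wq = 1/512 */

static long avg_scaled;               /* avg * 2^WQ_SHIFT */

void red_update_avg(long q)           /* q: current queue length */
{
    avg_scaled += q - (avg_scaled >> WQ_SHIFT);
}

long red_avg(void)                    /* truncated average queue length */
{
    return avg_scaled >> WQ_SHIFT;
}
```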

Book
30 Mar 1990
TL;DR: State space models and the Kalman filter are used to estimate, predict, and smooth univariate structural time series models.
Abstract: List of figures Acknowledgement Preface Notation and conventions List of abbreviations 1. Introduction 2. Univariate time series models 3. State space models and the Kalman filter 4. Estimation, prediction and smoothing for univariate structural time series models 5. Testing and model selection 6. Extensions of the univariate model 7. Explanatory variables 8. Multivariate models 9. Continuous time Appendices Selected answers to exercises References Author index Subject index.

5,071 citations

Posted Content
TL;DR: In this paper, the authors provide a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter for modeling economic and social time series, and address the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
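
For orientation, the linear Gaussian state space form and the Kalman recursions at the heart of the book can be summarized as follows (standard notation, stated from general knowledge rather than quoted from the text):

$$
\begin{aligned}
\alpha_t &= T\,\alpha_{t-1} + \eta_t, \quad \eta_t \sim N(0, Q) && \text{(state)}\\
y_t &= Z\,\alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H) && \text{(observation)}\\
a_{t|t-1} &= T\,a_{t-1}, \qquad P_{t|t-1} = T P_{t-1} T' + Q && \text{(predict)}\\
K_t &= P_{t|t-1} Z' \left(Z P_{t|t-1} Z' + H\right)^{-1} && \text{(gain)}\\
a_t &= a_{t|t-1} + K_t \left(y_t - Z\,a_{t|t-1}\right), \qquad P_t = (I - K_t Z)\,P_{t|t-1} && \text{(update)}
\end{aligned}
$$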

4,252 citations

01 Apr 1981

929 citations


"Random early detection gateways for..." refers background in this paper

  • ...Early descriptions of IP Source Quench messages suggest that gateways could send Source Quench messages to source hosts before the buffer space at the gateway reaches capacity [26], and before packets have to be dropped at the gateway....
