Topic: TCP tuning

About: TCP tuning is a research topic. Over its lifetime, 5,144 publications have been published within this topic, receiving 141,939 citations.


Papers
Journal Article (DOI)
TL;DR: RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP; they have no bias against bursty traffic and avoid the global synchronization of many connections decreasing their windows at the same time.
Abstract: The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
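
A minimal sketch of the RED drop decision the abstract describes, assuming the paper's linear drop-probability ramp between two thresholds on the average queue size; the parameter names and values (min_th, max_th, max_p, weight) are illustrative, not taken from the paper's code:

```python
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.02, weight=0.002):
        self.min_th, self.max_th = min_th, max_th   # thresholds on the *average* queue size
        self.max_p = max_p                          # drop probability as avg reaches max_th
        self.weight = weight                        # EWMA weight for the average
        self.avg = 0.0
        self.count = 0                              # packets enqueued since last drop

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped (or marked)."""
        # Low-pass filter: the average tracks persistent congestion, not transient bursts.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True
        # Drop probability ramps linearly between the two thresholds ...
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # ... and is scaled by the count since the last drop, so drops are
        # spread roughly evenly across connections in proportion to bandwidth share.
        p_a = p_b / max(1 - self.count * p_b, 1e-9)
        self.count += 1
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```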

6,198 citations

Journal Article (DOI)
01 Aug 1988
TL;DR: Measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet; among the new algorithms are Karn's clamped retransmit backoff, developed by Phil Karn of Bell Communications Research, and fast retransmit, described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes". Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation, (ii) exponential retransmit timer backoff, (iii) slow-start, (iv) more aggressive receiver ack policy, (v) dynamic window sizing on congestion, (vi) Karn's clamped retransmit backoff, and (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet. This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC. Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them. By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy? There are only three ways for packet conservation to fail: (1) the connection doesn't get to equilibrium, (2) a sender injects a new packet before an old packet has exited, or (3) the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.
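
A minimal sketch of algorithm (i), round-trip-time variance estimation, in the form Jacobson's appendix suggests and that was later standardized in RFC 6298; the gains of 1/8 and 1/4 are the usual choices, and the class shape and names are illustrative:

```python
class RttEstimator:
    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # smoothed mean deviation (a cheap stand-in for variance)

    def update(self, sample):
        """Feed one RTT measurement (seconds); return the new retransmit timeout."""
        if self.srtt is None:
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            err = sample - self.srtt
            self.srtt += err / 8                         # gain 1/8 on the mean
            self.rttvar += (abs(err) - self.rttvar) / 4  # gain 1/4 on the deviation
        # Timeout = mean plus four mean deviations, so the timer rarely fires
        # for a packet that is merely delayed rather than lost.
        return self.srtt + 4 * self.rttvar
```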

5,620 citations

01 Apr 1999
TL;DR: This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery, as well as discussing various acknowledgment generation methods.
Abstract: This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods.
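
A minimal sketch of the window updates the four algorithms prescribe, assuming arithmetic in whole segments rather than bytes for readability; the function and variable names are illustrative, not from the RFC:

```python
def on_ack(cwnd, ssthresh):
    """Grow cwnd on each new ACK: exponentially below ssthresh, linearly above."""
    if cwnd < ssthresh:
        return cwnd + 1        # slow start: +1 segment per ACK (doubles per RTT)
    return cwnd + 1.0 / cwnd   # congestion avoidance: roughly +1 segment per RTT

def on_triple_dupack(cwnd):
    """Fast retransmit / fast recovery entry: halve the window, inflate by the 3 dup ACKs."""
    ssthresh = max(cwnd / 2, 2)
    return ssthresh + 3, ssthresh

def on_timeout(cwnd):
    """Retransmission timeout: collapse to one segment and re-enter slow start."""
    ssthresh = max(cwnd / 2, 2)
    return 1, ssthresh
```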

2,237 citations

Proceedings Article (DOI)
01 Oct 1998
TL;DR: In this article, the authors developed a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer TCP flow, i.e., a flow with an unlimited amount of data to send.
Abstract: In this paper we develop a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer TCP flow, i.e., a flow with an unlimited amount of data to send. Unlike the models in [6, 7, 10], our model captures not only the behavior of TCP's fast retransmit mechanism (which is also considered in [6, 7, 10]) but also the effect of TCP's timeout mechanism on throughput. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more time-out events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP throughput and is accurate over a wider range of loss rates.
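
The paper's steady-state approximation is usually quoted as B(p) ≈ min(Wmax/RTT, 1/(RTT·sqrt(2bp/3) + T0·min(1, 3·sqrt(3bp/8))·p·(1+32p²))), where b is the number of packets acknowledged per ACK and T0 the base timeout. A sketch of it as a Python function, with parameter names assumed for illustration:

```python
import math

def tcp_throughput(p, rtt, t0, b=2, w_max=None):
    """Approximate bulk-transfer TCP throughput (packets/second) at loss rate p."""
    if p <= 0:
        raise ValueError("model is defined for a positive loss rate")
    # Fast-retransmit term plus the timeout term the paper adds to earlier models.
    denom = rtt * math.sqrt(2 * b * p / 3) \
        + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    rate = 1.0 / denom
    if w_max is not None:
        rate = min(w_max / rtt, rate)   # cap at the maximum-window rate
    return rate

# e.g. 1% loss, 100 ms RTT, 1 s base timeout:
# print(tcp_throughput(0.01, 0.1, 1.0))  # ~70 packets/s
```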

2,145 citations

Journal Article (DOI)
TL;DR: It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network.
Abstract: Congestion avoidance mechanisms allow a network to operate in the optimal region of low delay and high throughput, thereby preventing the network from becoming congested. This is different from the traditional congestion control mechanisms that allow the network to recover from the congested state of high delay and low throughput. Both congestion avoidance and congestion control mechanisms are basically resource management problems. They can be formulated as system control problems in which the system senses its state and feeds this back to its users, who adjust their controls. The key component of any congestion avoidance scheme is the algorithm (or control function) used by the users to increase or decrease their load (window or rate). We abstractly characterize a wide class of such increase/decrease algorithms and compare them using several different performance metrics. The key metrics are efficiency, fairness, convergence time, and size of oscillations. It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network. This is the algorithm finally chosen for implementation in the congestion avoidance scheme recommended for Digital Networking Architecture and OSI Transport Class 4 Networks.
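
A minimal sketch of the additive-increase/multiplicative-decrease rule the paper converges on, with illustrative alpha and beta values and a toy two-user feedback loop (the capacity threshold is a hypothetical stand-in for the binary congestion signal):

```python
def aimd_step(load, congested, alpha=1.0, beta=0.5):
    """One control step: add alpha when uncongested, multiply by beta when congested."""
    if congested:
        return load * beta     # multiplicative decrease restores fairness
    return load + alpha        # additive increase probes for spare capacity

# Two users starting from unequal loads converge toward an equal share:
x, y = 10.0, 2.0
for _ in range(20):
    congested = (x + y) > 12   # hypothetical capacity feedback
    x, y = aimd_step(x, congested), aimd_step(y, congested)
print(x, y)
```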

1,847 citations


Network Information
Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations (94% related)
- Wireless ad hoc network: 49K papers, 1.1M citations (92% related)
- Wireless network: 122.5K papers, 2.1M citations (92% related)
- Wireless sensor network: 142K papers, 2.4M citations (90% related)
- Server: 79.5K papers, 1.4M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  17
2022  50
2020  4
2019  1
2018  10
2017  90