Simulation-based comparisons of Tahoe, Reno and SACK TCP

Kevin Fall and Sally Floyd
Vol. 26, Iss. 3, pp. 5-21
Simulation-based Comparisons of Tahoe, Reno, and SACK TCP
Kevin Fall and Sally Floyd
Lawrence Berkeley National Laboratory
One Cyclotron Road, Berkeley, CA 94720
kfall@ee.lbl.gov, floyd@ee.lbl.gov
Abstract
This paper uses simulations to explore the benefits of
adding selective acknowledgments (SACK) and selec-
tive repeat to TCP. We compare Tahoe and Reno TCP,
the two most common reference implementations for
TCP, with two modified versions of Reno TCP. The first
version is New-Reno TCP, a modified version of TCP
without SACK that avoids some of Reno TCP's per-
formance problems when multiple packets are dropped
from a window of data. The second version is SACK
TCP, a conservative extension of Reno TCP modified to
use the SACK option being proposed in the Internet En-
gineering Task Force (IETF). We describe the conges-
tion control algorithms in our simulated implementation
of SACK TCP and show that while selective acknowl-
edgments are not required to solve Reno TCP's per-
formance problems when multiple packets are dropped,
the absence of selective acknowledgments does impose
limits to TCP's ultimate performance. In particular,
we show that without selective acknowledgments, TCP
implementations are constrained to either retransmit at
most one dropped packet per round-trip time, or to re-
transmit packets that might have already been success-
fully delivered.
1 Introduction
In this paper we illustrate some of the benefits of adding
selective acknowledgment (SACK) to TCP. Current im-
plementations of TCP use an acknowledgment number
field that contains a cumulative acknowledgment, indi-
cating the TCP receiver has received all of the data up to
the indicated byte. A selective acknowledgment option
allows receivers to additionally report non-sequential
data they have received. When coupled with a selective retransmission policy implemented in TCP senders, considerable savings can be achieved. (This work was supported by the Director, Office of Energy Research, Scientific Computing Staff, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.)
Several transport protocols have provided for se-
lective acknowledgment (SACK) of received data.
These include NETBLT [CLZ87], XTP [SDW92],
RDP [HSV84] and VMTP [Che88]. The first pro-
posals for adding SACK to TCP [BJ88, BJZ90] were
later removed from the TCP RFCs (Request For Com-
ments) [BBJ92] pending further research. The cur-
rent proposal for adding SACK to TCP is given
in [MMFR96]. We use simulations to show how the
SACK option defined in [MMFR96] can be of substan-
tial benefit relative to TCP without SACK.
The simulations are designed to highlight perfor-
mance differences between TCP with and without
SACK. In this paper, Tahoe TCP refers to TCP with the
Slow-Start, Congestion Avoidance, and Fast Retransmit
algorithms first implemented in 4.3 BSD Tahoe TCP in
1988. Reno TCP refers to TCP with the earlier algo-
rithms plus Fast Recovery, first implemented in 4.3 BSD
Reno TCP in 1990.
Without SACK, Reno TCP has performance prob-
lems when multiple packets are dropped from one win-
dow of data. These problems result from the need
to await a retransmission timer expiration before re-
initiating data flow. Situations in which this problem
occurs are illustrated later in this paper (for example,
see Section 6.4).
Not all of Reno's performance problems are a nec-
essary consequence of the absence of SACK. To show
why, we implemented a variant of the Reno algorithms
in our simulator, called New-Reno. Using a sugges-
tion from Janey Hoe [Hoe95, Hoe96], New-Reno avoids
many of the retransmit timeouts of Reno without requir-
ing SACK. Nevertheless, New-Reno does not perform
as well as TCP with SACK when a large number of
packets are dropped from a window of data. The pur-
pose of our discussion of New-Reno is to clarify the
fundamental limitations of the absence of SACK.
In the absence of SACK, both Reno and New-Reno
senders can retransmit at most one dropped packet per
round-trip time, even if senders recover from multiple

drops in a window of data without waiting for a retrans-
mit timeout. This characteristic is not shared by Tahoe
TCP, which is not limited to retransmitting at most one
dropped packet per round-trip time. However, it is a fun-
damental consequence of the absence of SACK that the
sender has to choose between the following strategies to
recover from lost data:
1. retransmitting at most one dropped packet per
round-trip time, or
2. retransmitting packets that might have already been
successfully delivered.
Reno and New-Reno use the first strategy, and Tahoe
uses the second.
To illustrate the advantages of TCP with SACK, we
show simulations with SACK TCP, using the SACK im-
plementation in our simulator. SACK TCP is based on
a conservative extension of the Reno congestion con-
trol algorithms with the addition of selective acknowl-
edgments and selective retransmission. With SACK, a
sender has a better idea of exactly which packets have
been successfully delivered as compared with compa-
rable protocols lacking SACK. Given such information,
a sender can avoid unnecessary delays and retransmis-
sions, resulting in improved throughput. We believe the
addition of SACK to TCP is one of the most important
changes that should be made to TCP at this time to im-
prove its performance.
In Sections 2 through 5 we describe the congestion
control and packet retransmission algorithms in Tahoe,
Reno, New-Reno, and SACK TCP. Section 6 shows sim-
ulations with Tahoe, Reno, New-Reno, and SACK TCP
in scenarios ranging from one to four packets dropped
from a window of data. Section 7 shows a trace of Reno
TCP taken from actual Internet traffic, showing that the
performance problems of Reno without SACK are of
more than theoretical interest. Finally, Section 8 dis-
cusses possible future directions for TCP with selective
acknowledgments, and Section 9 gives conclusions.
2 Tahoe TCP
Modern TCP implementations contain a number of al-
gorithms aimed at controlling network congestion while
maintaining good user throughput. Early TCP implementations followed a go-back-n model using cumulative positive acknowledgment and requiring a retransmit timer expiration to re-send data lost during transport.
These TCPs did little to minimize network congestion.
The Tahoe TCP implementation added a number of
new algorithms and refinements to earlier implementa-
tions. The new algorithms include Slow-Start, Conges-
tion Avoidance, and Fast Retransmit [Jac88]. The re-
finements include a modification to the round-trip time
estimator used to set retransmission timeout values. All
modifications have been described elsewhere [Jac88,
Ste94].
The Fast Retransmit algorithm is of special interest in
this paper because it is modified in subsequent versions
of TCP. With Fast Retransmit, after receiving a small
number of duplicate acknowledgments for the same
TCP segment (dup ACKs), the data sender infers that a
packet has been lost and retransmits the packet without
waiting for a retransmission timer to expire, leading to
higher channel utilization and connection throughput.
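To make the Fast Retransmit trigger concrete, here is a minimal sketch in packet units. The class and names (FastRetransmitSender, on_ack, DUP_ACK_THRESHOLD) are ours for illustration, not taken from any production TCP stack; the sketch assumes each ACK carries a cumulative acknowledgment number.

```python
DUP_ACK_THRESHOLD = 3  # tcprexmtthresh in BSD-derived implementations

class FastRetransmitSender:
    """Illustrative sketch of the Fast Retransmit trigger, in packet units."""
    def __init__(self):
        self.last_ack = -1       # highest cumulative ACK seen so far
        self.dup_acks = 0        # consecutive duplicate ACKs for last_ack
        self.retransmitted = []  # packets re-sent without a timer expiration

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_acks += 1
            if self.dup_acks == DUP_ACK_THRESHOLD:
                # Infer that the packet just past the cumulative ACK was
                # lost, and retransmit it without waiting for the timer.
                self.retransmitted.append(ack_no + 1)
        else:
            self.last_ack = ack_no
            self.dup_acks = 0

sender = FastRetransmitSender()
for ack in [1, 2, 2, 2, 2]:  # three dup ACKs for segment 2
    sender.on_ack(ack)
print(sender.retransmitted)  # [3]
```

The point of the sketch is only the inference step: a small number of dup ACKs is read as evidence of loss, avoiding the idle wait for the retransmission timer.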
3 Reno TCP
The Reno TCP implementation retained the enhance-
ments incorporated into Tahoe, but modified the Fast
Retransmit operation to include Fast Recovery [Jac90].
The new algorithm prevents the communication path
(“pipe”) from going empty after Fast Retransmit,
thereby avoiding the need to Slow-Start to re-fill it after
a single packet loss. Fast Recovery operates by assum-
ing each dup ACK received represents a single packet
having left the pipe. Thus, during Fast Recovery the
TCP sender is able to make intelligent estimates of the
amount of outstanding data.
Fast Recovery is entered by a TCP sender after re-
ceiving an initial threshold of dup ACKs. This thresh-
old, usually known as tcprexmtthresh, is generally set to
three. Once the threshold of dup ACKs is received, the
sender retransmits one packet and reduces its congestion
window by one half. Instead of slow-starting, as is per-
formed by a Tahoe TCP sender, the Reno sender uses
additional incoming dup ACKs to clock subsequent out-
going packets.
In Reno, the sender's usable window becomes min(awnd, cwnd + ndup), where awnd is the receiver's
advertised window, cwnd is the sender's congestion
window, and ndup is maintained at 0 until the number of
dup ACKs reaches tcprexmtthresh, and thereafter tracks
the number of duplicate ACKs. Thus, during Fast Re-
covery the sender “inflates” its window by the number
of dup ACKs it has received, according to the observa-
tion that each dup ACK indicates some packet has been
removed from the network and is now cached at the re-
ceiver. After entering Fast Recovery and retransmitting
a single packet, the sender effectively waits until half
a window of dup ACKs have been received, and then
sends a new packet for each additional dup ACK that is
received. Upon receipt of an ACK for new data (called
a “recovery ACK”), the sender exits Fast Recovery by
setting ndup to 0. Fast Recovery is illustrated in more
detail in the simulations in Section 6.

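The usable-window rule above can be written as a small illustrative function. Here we write the receiver's advertised window as awnd, the sender's congestion window as cwnd, and the dup-ACK count as ndup; the function is a sketch of the rule as described in the text, not simulator code.

```python
def usable_window(awnd, cwnd, ndup, tcprexmtthresh=3):
    """Reno's usable window during Fast Recovery: min(awnd, cwnd + ndup).

    ndup is held at 0 until the dup-ACK count reaches tcprexmtthresh,
    and thereafter tracks the number of duplicate ACKs received.
    """
    effective_ndup = ndup if ndup >= tcprexmtthresh else 0
    return min(awnd, cwnd + effective_ndup)

# The window "inflates" with each dup ACK past the threshold:
print(usable_window(awnd=32, cwnd=8, ndup=2))   # 8  (below threshold)
print(usable_window(awnd=32, cwnd=8, ndup=3))   # 11
print(usable_window(awnd=32, cwnd=8, ndup=10))  # 18
```

Note how the advertised window caps the inflation: with awnd=10 in the last call, the sender could not use the full inflated window.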
Reno's Fast Recovery algorithm is optimized for the
case when a single packet is dropped from a window of
data. The Reno sender retransmits at most one dropped
packet per round-trip time. Reno significantly improves
upon the behavior of Tahoe TCP when a single packet is
dropped from a window of data, but can suffer from per-
formance problems when multiple packets are dropped
from a window of data. This is illustrated in the simu-
lations in Section 6 with three or more dropped packets.
The problem is easily constructed in our simulator when
a Reno TCP connection with a large congestion window
suffers a burst of packet losses after slow-starting in a
network with drop-tail gateways (or other gateways that
fail to monitor the average queue size).
4 New-Reno TCP
We include New-Reno TCP in this paper to show how a
simple change to TCP makes it possible to avoid some
of the performance problems of Reno TCP without the
addition of SACK. At the same time, we use New-Reno
TCP to explore the fundamental limitations of TCP per-
formance in the absence of SACK.
The New-Reno TCP in this paper includes a small
change to the Reno algorithm at the sender that elimi-
nates Reno's wait for a retransmit timer when multiple
packets are lost from a window [Hoe95, CH95]. The
change concerns the sender's behavior during Fast Re-
covery when a partial ACK is received that acknowl-
edges some but not all of the packets that were out-
standing at the start of that Fast Recovery period. In
Reno, partial ACKs take TCP out of Fast Recovery by
“deflating” the usable window back to the size of the
congestion window. In New-Reno, partial ACKs do not
take TCP out of Fast Recovery. Instead, partial ACKs
received during Fast Recovery are treated as an indica-
tion that the packet immediately following the acknowl-
edged packet in the sequence space has been lost, and
should be retransmitted. Thus, when multiple pack-
ets are lost from a single window of data, New-Reno
can recover without a retransmission timeout, retrans-
mitting one lost packet per round-trip time until all of
the lost packets from that window have been retransmit-
ted. New-Reno remains in Fast Recovery until all of the
data outstanding when Fast Recovery was initiated has
been acknowledged.
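The partial-ACK rule can be sketched as follows, working in packet units. The function and its names are ours for illustration; `recover` stands for the highest sequence number outstanding when Fast Recovery was entered.

```python
def newreno_on_ack(ack_no, recover, in_fast_recovery):
    """Illustrative New-Reno ACK handling during Fast Recovery.

    Returns (packet_to_retransmit, still_in_fast_recovery). An ACK for
    ack_no acknowledges all packets up to and including ack_no.
    """
    if not in_fast_recovery:
        return None, False
    if ack_no >= recover:
        # Full ACK: everything outstanding at the start of recovery is
        # acknowledged, so Fast Recovery ends.
        return None, False
    # Partial ACK: infer that the packet immediately following the
    # acknowledged data was lost, and retransmit it; stay in recovery.
    return ack_no + 1, True

# Two packets (5 and 9) lost from a window ending at packet 12:
print(newreno_on_ack(4, recover=12, in_fast_recovery=True))   # (5, True)
print(newreno_on_ack(8, recover=12, in_fast_recovery=True))   # (9, True)
print(newreno_on_ack(12, recover=12, in_fast_recovery=True))  # (None, False)
```

Each partial ACK yields one retransmission, which is why New-Reno recovers from multiple losses at the rate of one packet per round-trip time.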
The implementations of New-Reno and SACK TCP
in our simulator also use a “maxburst” parameter. In
our SACK TCP implementation, the “maxburst” param-
eter limits to four the number of packets that can be
sent in response to a single incoming ACK, even if the
sender's congestion window would allow more pack-
ets to be sent. In New-Reno, the “maxburst” parame-
ter is set to four packets outside of Fast Recovery, and
to two packets during Fast Recovery, to more closely
reproduce the behavior of Reno TCP during Fast Re-
covery. The “maxburst” parameter is really only needed
for the first window of packets that are sent after leav-
ing Fast Recovery. If the sender had been prevented by
the receiver's advertised window from sending packets
during Fast Recovery, then, without “maxburst”, it is
possible for the sender to send a large burst of packets
upon exiting Fast Recovery. This applies to Reno and
New-Reno TCP, and to a lesser extent, to SACK TCP.
In Tahoe TCP the Slow-Start algorithm prevents bursts
after recovering from a packet loss. The bursts of pack-
ets upon exiting Fast Recovery with New-Reno TCP are
illustrated in Section 6 in the simulations with three and
four packet drops. Bursts of packets upon exiting Fast
Recovery with Reno TCP are illustrated in [Flo95].
[Hoe95] recommends an additional change to TCP's
Fast Recovery algorithms. She suggests the data sender
send a new packet for every two dup ACKs received dur-
ing Fast Recovery, to keep the “flywheel” of ACK and
data packets going. This is not implemented in “New-
Reno” because we wanted to consider the minimal set of
changes to Reno needed to avoid unnecessary retransmit
timeouts.
5 SACK TCP
The SACK TCP implementation in this paper, called
“Sack1” in our simulator, is also discussed in [Flo96b,
Flo96a].
The SACK option follows the format
in [MMFR96]. From [MMFR96], the SACK option
field contains a number of SACK blocks, where each
SACK block reports a non-contiguous set of data that
has been received and queued. The first block in a
SACK option is required to report the data receiver's
most recently received segment, and the additional
SACK blocks repeat the most recently reported SACK
blocks [MMFR96]. In these simulations each SACK op-
tion is assumed to have room for three SACK blocks.
When the SACK option is used with the Timestamp
option specified for TCP Extensions for High Perfor-
mance [BBJ92], then the SACK option has room for
only three SACK blocks [MMFR96]. If the SACK op-
tion were to be used with both the Timestamp option and
with T/TCP (TCP Extensions for Transactions) [Bra94],
the TCP option space would have room for only two
SACK blocks.
(The 1990 “Sack” TCP implementation on our previous simulator is from Steven McCanne and Sally Floyd, and does not conform to the formats in [MMFR96]. The new “Sack1” implementation contains major contributions from Kevin Fall, Jamshid Mahdavi, and Matt Mathis.)

The congestion control algorithms implemented in
our SACK TCP are a conservative extension of Reno's
congestion control, in that they use the same algorithms
for increasing and decreasing the congestion window,
and make minimal changes to the other congestion con-
trol algorithms. Adding SACK to TCP does not change
the basic underlying congestion control algorithms. The
SACK TCP implementation preserves the properties of
Tahoe and Reno TCP of being robust in the presence
of out-of-order packets, and uses retransmit timeouts as
the recovery method of last resort. The main difference
between the SACK TCP implementation and the Reno
TCP implementation is in the behavior when multiple
packets are dropped from one window of data.
As in Reno, the SACK TCP implementation enters
Fast Recovery when the data sender receives tcprexmt-
thresh duplicate acknowledgments. The sender re-
transmits a packet and cuts the congestion window in
half. During Fast Recovery, SACK maintains a vari-
able called pipe that represents the estimated number
of packets outstanding in the path. (This differs from the
mechanisms in the Reno implementation.) The sender
only sends new or retransmitted data when the estimated
number of packets in the path is less than the conges-
tion window. The variable pipe is incremented by one
when the sender either sends a new packet or retransmits
an old packet. It is decremented by one when the sender
receives a dup ACK packet with a SACK option report-
ing that new data has been received at the receiver.
Use of the pipe variable decouples the decision of
when to send a packet from the decision of which packet
to send. The sender maintains a data structure, the
scoreboard (contributed by Jamshid Mahdavi and Matt
Mathis), that remembers acknowledgments from previ-
ous SACK options. When the sender is allowed to send
a packet, it retransmits the next packet from the list of
packets inferred to be missing at the receiver. If there are
no such packets and the receiver's advertised window is
sufficiently large, the sender sends a new packet.
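The decoupling of when to send from which packet to send might be sketched as a single send step. This is an illustration under our own names (sack_send_step, missing, next_new), not the simulator's code; `missing` stands in for the scoreboard's list of packets inferred lost.

```python
def sack_send_step(pipe, cwnd, missing, next_new):
    """One send decision in SACK's pipe-based Fast Recovery (sketch).

    pipe:     estimated number of packets outstanding in the path
    cwnd:     congestion window, in packets
    missing:  scoreboard's list of packets inferred lost (oldest first)
    next_new: next never-sent packet number
    Returns (packet_sent_or_None, updated_pipe).
    """
    if pipe >= cwnd:
        return None, pipe       # estimated in-flight data fills the window
    if missing:
        pkt = missing.pop(0)    # "which": prefer retransmitting a hole
    else:
        pkt = next_new          # otherwise send new data
    return pkt, pipe + 1        # pipe += 1 for every (re)transmission

pkt, pipe = sack_send_step(pipe=3, cwnd=5, missing=[7, 9], next_new=15)
print(pkt, pipe)  # 7 4
pkt, pipe = sack_send_step(pipe=5, cwnd=5, missing=[9], next_new=15)
print(pkt, pipe)  # None 5
```

The "when" test (pipe < cwnd) never inspects sequence numbers, and the "which" choice never inspects pipe; that separation is the point of the pipe variable.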
When a retransmitted packet is itself dropped, the
SACK implementation detects the drop with a retrans-
mit timeout, retransmitting the dropped packet and then
slow-starting.
The sender exits Fast Recovery when a recovery ac-
knowledgment is received acknowledging all data that
was outstanding when Fast Recovery was entered.
The SACK sender has special handling for partial
ACKs (ACKs received during Fast Recovery that ad-
vance the Acknowledgment Number field of the TCP
header, but do not take the sender out of Fast Recov-
ery). (Our simulator simply works in units of packets, not in units of
bytes or segments, and all data packets for a particular TCP connection
are constrained to be the same size. Also note that a more aggressive
implementation might decrement the variable pipe by more than one
packet when an ACK packet with a SACK option is received reporting
that the receiver has received more than one new out-of-order packet.)
For partial ACKs, the sender decrements pipe by
two packets rather than one, as follows. When Fast Retransmit is initiated, pipe is effectively decremented
by one for the packet that was assumed to have been
dropped, and then incremented by one for the packet
that was retransmitted. Thus, decrementing the pipe
by two packets when the first partial ACK is received
is in some sense “cheating”, as that partial ACK only
represents one packet having left the pipe. However, for
any succeeding partial ACKs, pipe was incremented
when the retransmitted packet entered the pipe, but was
never decremented for the packet assumed to have been
dropped. Thus, when the succeeding partial ACK ar-
rives, it does in fact represent two packets that have
left the pipe: the original packet (assumed to have been
dropped), and the retransmitted packet. Because the
sender decrements pipe by two packets rather than one
for partial ACKs, the SACK sender never recovers more
slowly than a Slow-Start.
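The pipe accounting just described can be traced with a toy example. The event names ("send", "dup_ack", "partial_ack") are ours; the increments and decrements follow the rules in the text.

```python
def apply_event(pipe, event):
    """Illustrative pipe bookkeeping during SACK Fast Recovery."""
    return {
        "send": pipe + 1,         # new packet or retransmission enters path
        "dup_ack": pipe - 1,      # SACK reports one new packet cached
        "partial_ack": pipe - 2,  # original + retransmitted packet both left
    }[event]

pipe = 4
for ev in ["send", "dup_ack", "partial_ack"]:
    pipe = apply_event(pipe, ev)
print(pipe)  # 4 + 1 - 1 - 2 = 2
```

Because each partial ACK frees two slots in pipe, the sender can transmit two packets per partial ACK, which is why SACK recovery is never slower than a Slow-Start.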
The maxburst parameter, which limits the number
of packets that can be sent in response to a single incom-
ing ACK packet, is experimental, and is not necessarily
recommended for SACK implementations.
There are a number of other proposals for TCP con-
gestion control algorithms using selective acknowledg-
ments [Kes94, MM96]. The SACK implementation in
our simulator is designed to be the most conservative
extension of the Reno congestion control algorithms, in
that it makes the minimum changes to Reno's existing
congestion control algorithms.
6 Simulations
This section describes simulations from four scenarios,
with from one to four packets dropped from a window of
data. Each set of scenarios is run for Tahoe, Reno, New-
Reno, and SACK TCP. Following this section, Section
7 shows a trace of Reno TCP traffic taken from Internet
traffic measurements, illustrating the performance prob-
lems of Reno TCP without SACK, and Section 8 dis-
cusses future directions of TCP with SACK.
For all of the TCP implementations in all of the sce-
narios, the first dropped packet is detected by the Fast
Retransmit procedure, after the source receives three
dup ACKs.
The results of the Tahoe simulations are similar in
all four scenarios. The Tahoe sender recovers with a
Fast Retransmit followed by Slow-Start regardless of
the number of packets dropped from the window of
data. For connections with a larger congestion window,
Tahoe's delay in slow-starting back up to half the previ-
ous congestion window can have a significant impact on
overall performance.
(For those reading the SACK code in the simulator, the boolean
overhead parameter significantly complicates the code, but is only
of concern in the simulator. The overhead parameter indicates
whether some randomization should be added to the timing of the TCP
connection. For all of the simulations in this paper, the overhead
parameter is set to zero, implying no randomization is added.)
The Reno implementation without SACK gives opti-
mal performance when a single packet is dropped from
a window of data. For the scenario in Figure 3 with two
dropped packets, the sender goes through Fast Retrans-
mit and Fast Recovery twice in succession, unnecessar-
ily reducing the congestion window twice. For the sce-
narios with three or four packet drops, the Reno sender
has to wait for a retransmit timer to recover.
As expected, the New-Reno and SACK TCPs each re-
cover from all four scenarios without having to wait for
a retransmit timeout. The New-Reno and SACK TCP
simulations look quite similar. However, the New-Reno
sender is able to retransmit at most one dropped packet
each round-trip time. The limitations of New-Reno, rel-
ative to SACK TCP, are more pronounced in scenarios
with larger congestion windows and a larger number of
dropped packets from a window of data. In this case the
constraint of retransmitting at most one dropped packet
each round-trip time results in substantial delay in re-
transmitting the later dropped packets in the window. In
addition, if the sender is limited by the receiver's ad-
vertised window during this recovery period, then the
sender can be unable to effectively use the available
bandwidth.
For each of the four scenarios, the SACK sender re-
covers with good performance in both per-packet end-
to-end delay and overall throughput.
6.1 The simulation scenario
The rest of this section consists of a detailed descrip-
tion of the simulations in Figures 2 through 5. All of
these simulations can be run on our simulator ns with
the command test-sack. For those readers who are
interested, the text gives a packet-by-packet description
of the behavior of TCP in each simulation.
[Figure 1: Simulation Topology. Sender S1 connects to gateway R1 over an 8 Mbps, 0.1 ms link; R1 connects to receiver K1 over a 0.8 Mbps, 100 ms link.]
Figure 1 shows the network used for the simulations
in this paper. The circle indicates a finite-buffer drop-
tail gateway, and the squares indicate sending or receiving hosts. (This is shown in the LBNL simulator ns in the test many-drops, run with the command test-sack.) The links are labeled with their bandwidth
capacity and delay. Each simulation has three TCP con-
nections from S1 to K1. Only the first connection is
shown in the figures. The second and third connections
have limited data to send, and are included to achieve
the desired pattern of packet drops for the first con-
nection. The pattern of packet drops is changed sim-
ply by changing the number of packets sent by the sec-
ond and third connections. Readers interested in the
exact details of the simulation set-up are referred to
the files test-sack and sack.tcl in our simula-
tor ns [MF95]. The granularity of the TCP clock is set
to 100 msec, giving round-trip time measurements ac-
curate to only the nearest 100 msec.
These simulations use drop-tail gateways with small
buffers. These are not intended to be realistic sce-
narios, or realistic values for the buffer size. They
are intended as a simple scenario for illustrating TCP's
congestion control algorithms. Simulations with RED
(Random Early Detection) gateways [FJ93] would in
general avoid the bursts of packet drops characteristic
of drop-tail gateways.
Ns [MF95] is based on LBNL's previous simulator
tcpsim, which was in turn based on the REAL sim-
ulator [Kes88]. The simulator does not use production
TCP code, and does not pretend to reproduce the exact
behavior of specific implementations of TCP [Flo95].
Instead, the simulator is intended to support exploration
of underlying TCP congestion and error control algo-
rithms, including Slow-Start, Congestion Avoidance,
Fast Retransmit, and Fast Recovery. The simulation re-
sults contained in this report can be recreated with the
test-sack script supplied with ns.
For simplicity, most of the simulations shown in this
paper use a data receiver that sends an ACK for ev-
ery data packet received. The simulations in this paper
also consist of one-way traffic. As a result, ACKs are
never “compressed” or discarded on the path from the
receiver back to the sender. The simulation set run by
the test-sack script includes simulations with multi-
ple connections, two-way traffic, and data receivers that
send an ACK for every two data packets received.
The graphs from the simulations were generated by
tracing packets entering and departing from R1. For
each graph, the x-axis shows the packet arrival or de-
parture time in seconds. The y-axis shows the packet
number mod 90. Packets are numbered starting with
packet 0. Each packet arrival and departure is marked
by a square on the graph. For example, a single packet
passing through R1 experiencing no appreciable queue-
ing delay would generate two marks so close together on
the graph as to appear as a single mark. Packets delayed
at R1 but not dropped will generate two colinear marks
for a constant packet number, spaced by the queueing
delay.