Journal ArticleDOI

Packet reordering is not pathological network behavior

TLDR
It is found that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected, and that large-scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
Abstract
It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large-scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
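Why reordering interacts so badly with TCP follows from the fast-retransmit heuristic: out-of-order arrivals generate duplicate ACKs, and after three duplicates a standard sender retransmits and shrinks its congestion window as if a loss had occurred. The sketch below is a hypothetical illustration of that feedback, not code from the paper; names and numbers are illustrative.

```python
# Hypothetical sketch: segments arriving out of order produce duplicate ACKs,
# and once DUP_ACK_THRESHOLD duplicates are seen the sender halves its
# congestion window even though nothing was actually lost.

DUP_ACK_THRESHOLD = 3  # classic fast-retransmit trigger (RFC 5681)

def simulate_sender(ack_stream, cwnd=10.0):
    """ack_stream: cumulative ACK numbers as seen by the sender."""
    last_ack, dup_count, spurious_halvings = None, 0, 0
    for ack in ack_stream:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                cwnd = max(cwnd / 2.0, 1.0)   # needless "loss" response
                spurious_halvings += 1
        else:
            last_ack, dup_count = ack, 0
    return cwnd, spurious_halvings

# Segment 3 is delayed behind segments 4-6: the receiver repeats ACK 3,
# the sender falsely infers loss and halves cwnd.
print(simulate_sender([1, 2, 3, 3, 3, 3, 7]))   # (5.0, 1)
```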


Citations

A Classification of

TL;DR: This work provides a classification of the set of multicast protocols using the user requirements, and illustrates it with several example protocols chosen to cover the range of features described.
Journal ArticleDOI

Concurrent multipath transfer using SCTP multihoming over independent end-to-end paths

TL;DR: This foundation work identifies three negative side-effects of reordering introduced by CMT that must be managed before efficient parallel transfer can be achieved and proposes three algorithms which augment and/or modify current SCTP to counter these side-effects.
Journal ArticleDOI

The Eifel algorithm: making TCP robust against spurious retransmissions

TL;DR: The Eifel algorithm finally makes TCP truly wireless-capable without the need for proxies between the end points and reduces the penalty of a spurious timeout to a single (in the common case) spurious retransmission.
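The detection step behind Eifel can be summarized in a few lines. The sketch below is a hedged illustration of the core test (in the spirit of RFC 3522, not the authors' code): the sender remembers the timestamp it put on its first retransmission, and if the next ACK echoes an older timestamp, that ACK must have been triggered by the original transmission, so the retransmission was spurious and the congestion-state change can be undone.

```python
# Hedged sketch of the Eifel detection test; function and variable names are
# illustrative. Assumes TCP timestamps are in use and do not wrap during one
# recovery episode.

def retransmission_was_spurious(ts_of_first_retransmit, ts_echoed_by_ack):
    # An echoed timestamp older than the retransmission's timestamp means the
    # original segment, not the retransmission, was acknowledged.
    return ts_echoed_by_ack < ts_of_first_retransmit

# Original segment sent with TSval=100, retransmitted with TSval=160.
# The ACK echoes 100 -> the original arrived; undo the cwnd reduction.
print(retransmission_was_spurious(160, 100))   # True -> spurious
```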

An Extension to the Selective Acknowledgement (SACK) Option for TCP

TL;DR: This note suggests that when duplicate packets are received, the first block of the SACK option field can be used to report the sequence numbers of the packet that triggered the acknowledgement, allowing the TCP sender to infer the order of packets received at the receiver.
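A sender-side use of this extension (D-SACK, RFC 2883) can be sketched as follows; this is an assumption-laden illustration rather than the specification. If the first SACK block lies at or below the cumulative ACK point, the receiver is reporting a duplicate arrival, which lets the sender conclude that an earlier retransmission or loss-recovery action was probably unnecessary, often because the network reordered rather than dropped packets.

```python
# Hedged sketch: detect the "duplicate report" case of the SACK extension.
# Only the below-cumulative-ACK case is shown; block layout is simplified.

def first_block_reports_duplicate(cum_ack, sack_blocks):
    """sack_blocks: list of (left_edge, right_edge) from the SACK option."""
    if not sack_blocks:
        return False
    left, right = sack_blocks[0]
    return right <= cum_ack          # block lies below the cumulative ACK

# Cumulative ACK = 5000, first block reports bytes 4000-4500 again:
# that segment arrived twice, so the earlier retransmit was spurious.
print(first_block_reports_duplicate(5000, [(4000, 4500)]))   # True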
Journal ArticleDOI

Dynamic load balancing without packet reordering

TL;DR: Contrary to popular belief, it is shown that one can systematically split a single flow across multiple paths without causing packet reordering, and proposes FLARE, a new traffic splitting algorithm that operates on bursts of packets, carefully chosen to avoid reordering.
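The burst-based idea can be illustrated with a small sketch. This is a hypothetical rendering of the flowlet principle, not FLARE's actual code: a flow may be moved to a different path only when the gap since its previous packet exceeds the maximum delay difference between candidate paths, so a later packet can never overtake an earlier one.

```python
# Hedged sketch of flowlet-based splitting; class and parameter names are
# illustrative assumptions, and the path-selection rule is a placeholder.

import time

class FlowletSplitter:
    def __init__(self, paths, max_delay_delta):
        self.paths = paths                      # e.g. ["path_a", "path_b"]
        self.max_delay_delta = max_delay_delta  # seconds
        self.state = {}                         # flow_id -> (last_seen, path)

    def choose_path(self, flow_id, now=None):
        now = time.monotonic() if now is None else now
        last_seen, path = self.state.get(flow_id, (None, None))
        if last_seen is None or now - last_seen > self.max_delay_delta:
            # Gap is long enough: safe to (re)assign the flow to a new path.
            path = min(self.paths, key=hash)    # placeholder balancing choice
        self.state[flow_id] = (now, path)
        return path

s = FlowletSplitter(["path_a", "path_b"], max_delay_delta=0.05)
# A 10 ms gap is shorter than the 50 ms delay delta, so the path is kept.
print(s.choose_path("flow-1", now=0.0), s.choose_path("flow-1", now=0.01))
```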
References
Journal ArticleDOI

Congestion avoidance and control

TL;DR: The measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet, and an algorithm recently developed by Phil Karn of Bell Communications Research is described in a soon-to-be-published RFC.
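The timer machinery this paper popularized, together with Karn's rule mentioned above, can be sketched briefly. The constants below are the commonly cited gains (1/8 and 1/4) and the RTO = SRTT + 4*RTTVAR rule; this is an illustration of the technique, not the paper's code.

```python
# Hedged sketch of smoothed-RTT/variance retransmit-timer estimation plus
# Karn's rule (ignore RTT samples from retransmitted segments).

class RtoEstimator:
    def __init__(self):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # round-trip time variance estimate

    def on_rtt_sample(self, r, was_retransmitted=False):
        if was_retransmitted:
            return self.rto()            # Karn's rule: sample is ambiguous
        if self.srtt is None:
            self.srtt, self.rttvar = r, r / 2.0
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - r)
            self.srtt = 0.875 * self.srtt + 0.125 * r
        return self.rto()

    def rto(self):
        # Conventional 1-second floor; 3 seconds before any sample exists.
        return max(1.0, self.srtt + 4.0 * self.rttvar) if self.srtt else 3.0

est = RtoEstimator()
for sample in (0.100, 0.120, 0.090):
    print(round(est.on_rtt_sample(sample), 3))
```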
Book

Stochastic Modeling and the Theory of Queues

TL;DR: An integrated treatment of applied stochastic processes and queueing theory, with an emphasis on time-averages and long-run behavior.

TCP Selective Acknowledgement Options

TL;DR: TCP may experience poor performance when multiple packets are lost from one window of data because of the limited information available from cumulative acknowledgments.
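The limitation described above is easy to picture: a cumulative ACK only tells the sender the highest in-order byte, whereas SACK blocks additionally name the out-of-order ranges that did arrive, so several losses in one window can be repaired without waiting for timeouts. The sketch below is a simplified illustration under assumed byte-range inputs, not the RFC's algorithm.

```python
# Hedged sketch: what a SACK-aware sender can infer that a cumulative-ACK-only
# sender cannot. Ranges are (start, end) byte offsets, end exclusive.

def missing_ranges(cum_ack, sack_blocks, highest_sent):
    """Return the byte ranges the sender can now infer are missing."""
    gaps, edge = [], cum_ack
    for left, right in sorted(sack_blocks):
        if left > edge:
            gaps.append((edge, left))
        edge = max(edge, right)
    if edge < highest_sent:
        gaps.append((edge, highest_sent))
    return gaps

# Cumulative ACK reaches only 1000, but SACK says 2000-3000 and 4000-5000
# arrived: holes 1000-2000 and 3000-4000 can be retransmitted right away.
print(missing_ranges(1000, [(2000, 3000), (4000, 5000)], 5000))
```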
Journal ArticleDOI

The macroscopic behavior of the TCP congestion avoidance algorithm

TL;DR: A performance model for the TCP congestion avoidance algorithm is analyzed; it predicts the bandwidth of a sustained TCP connection subjected to light to moderate packet losses, such as losses caused by network congestion.
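The model summarized above is commonly quoted as the "square-root" throughput formula; the form below is the widely cited version, reproduced here as a reminder rather than taken from this page, with C a constant near sqrt(3/2) under periodic-loss assumptions.

```latex
% Widely cited form of the macroscopic TCP throughput model: bandwidth of a
% sustained connection in terms of segment size MSS, round-trip time RTT,
% and loss probability p.
BW \approx \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}
```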

TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms

W. R. Stevens
TL;DR: This document specifies four intertwined TCP algorithms that have never been fully documented as Internet standards: slow start, congestion avoidance, fast retransmit, and fast recovery.
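How the four algorithms fit together can be sketched as per-ACK pseudocode. The sketch below is in the spirit of the document, not a verbatim specification; cwnd and ssthresh are kept in segments for simplicity.

```python
# Hedged sketch of slow start, congestion avoidance, fast retransmit, and
# fast recovery as sender-side state transitions.

class TcpCongestionControl:
    def __init__(self):
        self.cwnd, self.ssthresh, self.dup_acks = 1.0, 64.0, 0

    def on_new_ack(self):
        self.dup_acks = 0
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                 # slow start: exponential growth
        else:
            self.cwnd += 1.0 / self.cwnd     # congestion avoidance: ~1 seg/RTT

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:               # fast retransmit threshold
            self.ssthresh = max(self.cwnd / 2.0, 2.0)
            self.cwnd = self.ssthresh + 3.0  # fast recovery window inflation

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd, self.dup_acks = 1.0, 0    # fall back to slow start

cc = TcpCongestionControl()
for _ in range(6):
    cc.on_new_ack()
print(cc.cwnd)   # 7.0: still in slow start after six new ACKs
```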