
Improving Restart of Idle TCP Connections

TLDR
This paper proposes a third alternative: pacing some packets at a given rate until the ACK clock can be restarted. It describes the motivation and implementation of this approach and presents simulation results showing that it achieves elapsed-time performance comparable to NSSR with the loss behavior of SSR.
Abstract
TCP congestion avoidance mechanisms are based on adjustments to the congestion-window size, triggered by the ACK clock. These mechanisms are not well matched to large but intermittent bursts of traffic, such as responses from an HTTP/1.1-based web server. Idle periods between bursts (web page replies) stop the ACK clock and hence disrupt the even flow of data. When restarting data flow after an idle period, current implementations either enforce slow start (SSR) or use the prior congestion window (NSSR). The former approach, while conservative, leads to low effective throughput in cases like P-HTTP. The latter optimistically sends a large burst of back-to-back packets, risking router buffer overflow and subsequent packet loss. This paper proposes a third alternative: pacing some packets at a given rate until the ACK clock can be restarted. We describe the motivation and implementation of this third alternative and present simulation results showing that it achieves elapsed-time performance comparable to NSSR with the loss behavior of SSR.
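To make the pacing idea concrete, the following is a minimal, hypothetical sketch of a sender that, on restart after an idle period, spreads the prior congestion window over roughly one RTT instead of either bursting it (NSSR) or resetting it (SSR). It is not the paper's implementation; the class and method names (PacedRestartSender, restart_after_idle, send_segment) and the choice of srtt/cwnd as the pacing interval are illustrative assumptions.

import time

class PacedRestartSender:
    """Sketch of rate-based pacing after an idle period (assumed interface)."""

    def __init__(self, cwnd, srtt, send_segment):
        self.cwnd = cwnd                  # congestion window, in segments
        self.srtt = srtt                  # smoothed round-trip time estimate, seconds
        self.send_segment = send_segment  # callback that transmits one segment
        self.ack_clock_running = False

    def on_ack(self):
        # Once ACKs arrive again, normal ACK-clocked transmission resumes.
        self.ack_clock_running = True

    def restart_after_idle(self, queued_segments):
        # Instead of resetting cwnd (SSR) or bursting a full window (NSSR),
        # spread up to cwnd segments over one RTT: interval = srtt / cwnd.
        interval = self.srtt / max(self.cwnd, 1)
        for seg in queued_segments[: self.cwnd]:
            if self.ack_clock_running:
                break                     # hand remaining segments back to the ACK clock
            self.send_segment(seg)
            time.sleep(interval)          # pace sends rather than emitting them back-to-back

The key design point is that pacing only bridges the gap until the first ACKs return; after that, the ordinary ACK clock governs transmission as before.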



Citations
Proceedings Article

Less is more: trading a little bandwidth for ultra-low latency in the data center

TL;DR: The HULL (High-bandwidth Ultra-Low Latency) architecture is presented to balance two seemingly contradictory goals: near-baseline fabric latency and high bandwidth utilization. Results show that by sacrificing a small amount of bandwidth, HULL can dramatically reduce average and tail latencies in the data center.
Proceedings Article

Understanding the performance of TCP pacing

TL;DR: It is shown that contrary to intuition, pacing often has significantly worse throughput than regular TCP because it is susceptible to synchronized losses and it delays congestion signals.
Journal Article

Stability of end-to-end algorithms for joint routing and rate control

TL;DR: Stable, scalable load-sharing across paths, based on end-to-end measurements, can be achieved on the same rapid time-scale as rate control, namely the time-scale of round-trip times.

Ongoing TCP Research Related to Satellites

TL;DR: This document outlines possible TCP enhancements that may allow TCP to better utilize the available bandwidth provided by networks containing satellite links.

Improving Simulation for Network Research

TL;DR: ns is a multi-protocol network simulator designed to address the needs of networking researchers; it provides multiple levels of abstraction to permit simulations to span a wide range of scales, as well as emulation, in which real-world packets can enter the simulator.
References
Journal Article

Congestion avoidance and control

TL;DR: The measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet, and an algorithm recently developed by Phil Karn of Bell Communications Research is described in a soon-to-be-published RFC.
Proceedings Article

Hypertext Transfer Protocol -- HTTP/1.1

TL;DR: The Hypertext Transfer Protocol is an application-level protocol for distributed, collaborative, hypermedia information systems, which can be used for many tasks beyond its use for hypertext through extension of its request methods, error codes and headers.

Requirements for Internet Hosts - Communication Layers

Robert Braden
TL;DR: This RFC is an official specification for the Internet community that incorporates by reference, amends, corrects, and supplements the primary protocol standards documents relating to hosts.

TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms

W. R. Stevens
TL;DR: The purpose of this document is to document four intertwined algorithms that have never been fully documented as Internet standards: slow start, congestion avoidance, fast retransmit, and fast recovery.
Proceedings Article

TCP Vegas: new techniques for congestion detection and avoidance

TL;DR: This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP.