scispace - formally typeset
Topic

Packet loss

About: Packet loss is a research topic. Over the lifetime, 21,235 publications have been published within this topic, receiving 302,453 citations.


Papers
Journal ArticleDOI
TL;DR: This research focuses on delay-based congestion avoidance algorithms (DCA), like TCP/Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples, and shows evidence suggesting that a single deployment of DCA is not a viable enhancement to TCP over high-speed paths.
Abstract: The set of TCP congestion control algorithms associated with TCP/Reno (e.g., slow-start and congestion avoidance) have been crucial to ensuring the stability of the Internet. Algorithms such as TCP/NewReno (which has been deployed) and TCP/Vegas (which has not been deployed) represent incrementally deployable enhancements to TCP, as they have been shown to improve a TCP connection's throughput without degrading the performance of competing flows. Our research focuses on delay-based congestion avoidance algorithms (DCA), like TCP/Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples. Through measurement and simulation, we show evidence suggesting that a single deployment of DCA (i.e., a TCP connection enhanced with a DCA algorithm) is not a viable enhancement to TCP over high-speed paths. We define several performance metrics that quantify the level of correlation between packet loss and RTT. Based on our measurement analysis, we find that although there is useful congestion information contained within RTT samples, the level of correlation between an increase in RTT and packet loss is not strong enough to allow a TCP sender to reliably improve throughput. While DCA is able to reduce the packet loss rate experienced by a connection, in its attempts to avoid packet loss the algorithm will react unnecessarily to RTT variation that is not associated with packet loss. The result is degraded throughput as compared to a similar flow that does not support DCA.
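The delay-based rule the abstract describes can be illustrated with a minimal sketch in the spirit of TCP/Vegas: estimate the number of packets queued in the network from the gap between expected and actual throughput, and adjust the congestion window accordingly. The thresholds and function names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a delay-based congestion avoidance (DCA) rule,
# Vegas-style. All constants and names are illustrative assumptions.

def dca_adjust(cwnd, rtt_sample, base_rtt, alpha=1.0, beta=3.0):
    """Adjust the congestion window from one RTT sample.

    expected = cwnd / base_rtt    (throughput with no queuing delay)
    actual   = cwnd / rtt_sample  (observed throughput)
    The gap between them estimates packets queued in the network.
    """
    expected = cwnd / base_rtt
    actual = cwnd / rtt_sample
    diff = (expected - actual) * base_rtt  # estimated queued packets
    if diff < alpha:        # little queuing: probe for more bandwidth
        return cwnd + 1
    elif diff > beta:       # rising RTT suggests queue buildup: back off
        return cwnd - 1
    return cwnd             # within the target band: hold steady
```

The paper's central finding is precisely that the `rtt_sample` signal this rule reacts to is only weakly correlated with impending packet loss on high-speed paths, so the back-off branch often fires when no loss would have occurred.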

144 citations

Proceedings ArticleDOI
22 Aug 2005
TL;DR: A simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), is designed and implemented that leverages only the existing two ECN bits for network congestion feedback, and yet achieves performance comparable to XCP, i.e., high utilization, low persistent queue length, negligible packet loss rate, and reasonable fairness.
Abstract: Achieving efficient and fair bandwidth allocation while minimizing packet loss in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the recently proposed XCP protocol addresses this challenge, XCP requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves performance comparable to XCP, i.e., high utilization, low persistent queue length, negligible packet loss rate, and reasonable fairness. On the downside, VCP converges significantly more slowly to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns2 simulations over a wide range of network scenarios. To gain insight into the behavior of VCP, we analyze a simple fluid model, and prove a global stability result for the case of a single bottleneck link shared by flows with identical round-trip times.
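The two-bit feedback loop at the heart of VCP can be sketched as follows: the router classifies its measured load into a small number of regions and encodes the region in the two ECN bits, and end-hosts respond with multiplicative increase, additive increase, or multiplicative decrease. The thresholds and response constants below are illustrative assumptions, not the protocol's exact values.

```python
# Illustrative sketch of VCP-style two-bit load feedback. Threshold
# and response constants are assumptions, not the paper's parameters.

LOW, HIGH, OVERLOAD = 0b01, 0b10, 0b11  # codes carried in two ECN bits

def encode_load(load_factor):
    """Router side: map the measured link load factor to a 2-bit code."""
    if load_factor < 0.8:
        return LOW
    elif load_factor < 1.0:
        return HIGH
    return OVERLOAD

def end_host_response(cwnd, code):
    """End-host side: MI when load is low, AI when high, MD on overload."""
    if code == LOW:
        return cwnd * 1.0625   # multiplicative increase
    elif code == HIGH:
        return cwnd + 1.0      # additive increase
    return cwnd * 0.875        # multiplicative decrease
```

The coarseness of this two-bit signal, compared with XCP's explicit multi-bit rate feedback, is what makes VCP deployable in today's IP header, and also why it converges more slowly to fairness.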

143 citations

Proceedings ArticleDOI
11 Apr 2016
TL;DR: Although DDS results in higher bandwidth usage than MQTT, its superior performance with regard to data latency and reliability makes it an attractive choice for medical IoT applications and beyond.
Abstract: One of the challenges faced by today's Internet of Things (IoT) is to efficiently support machine-to-machine communication, given that the remote sensors and the gateway devices are connected through low bandwidth, unreliable, or intermittent wireless communication links. In this paper, we quantitatively compare the performance of IoT protocols, namely MQTT (Message Queuing Telemetry Transport), CoAP (Constrained Application Protocol), DDS (Data Distribution Service) and a custom UDP-based protocol in a medical setting. The performance of the protocols was evaluated using a network emulator, allowing us to emulate a low bandwidth, high system latency, and high packet loss wireless access network. This paper reports the observed performance of the protocols and arrives at the conclusion that although DDS results in higher bandwidth usage than MQTT, its superior performance with regard to data latency and reliability makes it an attractive choice for medical IoT applications and beyond.
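The impaired access network used in the evaluation above can be imitated with a toy lossy-link model: each packet is independently dropped with a fixed probability. This is only a sketch of the emulation idea; the paper used a dedicated network emulator, and the function name and parameters here are illustrative assumptions.

```python
# Toy model of a lossy wireless link, illustrating the kind of channel
# the IoT protocols above were evaluated over. Purely illustrative.
import random

def send_over_lossy_link(packets, loss_rate=0.2, seed=42):
    """Return the subset of packets that survive a link dropping each
    packet independently with probability `loss_rate`."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    return [p for p in packets if rng.random() >= loss_rate]
```

Under such a channel, the observed differences between the protocols come from how each one recovers: DDS and MQTT retransmit at the middleware or TCP layer, while a bare UDP scheme simply loses the data.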

143 citations

Patent
27 Jun 1994
TL;DR: In this article, a packet switching system (100) having a packet switch (140) employs an acknowledgment scheme in order to assure the delivery of all fragments (310) comprising a fragmented data packet (300) to improve overall system throughput during the handling of packets that require reassembly.
Abstract: A packet switching system (100) having a packet switch (140) employs an acknowledgment scheme in order to assure the delivery of all fragments (310) comprising a fragmented data packet (300), to improve overall system throughput during the handling of packets (310) that require reassembly. When packet fragments (310) are lost, corrupted or otherwise unintelligible to a receiving device (92, 94), the acknowledgment scheme permits retransmission of the missing data. In addition, a second acknowledgment signal is scheduled by system processing resources (110) in order to verify the successful delivery of all retransmitted data.
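The selective-retransmission idea in the patent can be sketched simply: the receiver acknowledges the fragment IDs it has, and the sender retransmits only the missing ones rather than the whole fragmented packet. The function names below are illustrative, not the patent's terminology.

```python
# Sketch of fragment-level acknowledgment and selective retransmission,
# in the spirit of the scheme described above. Names are illustrative.

def missing_fragments(total, acked):
    """Fragment IDs (0..total-1) the receiver has not acknowledged."""
    return sorted(set(range(total)) - set(acked))

def retransmit(fragments, acked):
    """Select only the unacknowledged fragments for retransmission."""
    need = set(missing_fragments(len(fragments), acked))
    return [frag for i, frag in enumerate(fragments) if i in need]
```

The second, scheduled acknowledgment the abstract mentions would then confirm that this retransmitted subset also arrived, closing the loop without resending fragments that were already delivered.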

142 citations

01 Nov 1989
TL;DR: In this article, a gateway congestion control policy, called Random Drop, is proposed to relieve resource congestion upon buffer overflow by choosing a random packet from the service queue to be dropped.
Abstract: Lately, the growing demand on the Internet has prompted the need for more effective congestion control policies. Currently, no gateway policy is used to relieve and signal congestion, which leads to unfair service to individual users and a degradation of overall network performance. Network simulation was used to illustrate the character of Internet congestion and its causes. A newly proposed gateway congestion control policy, called Random Drop, was considered as a promising solution to this pressing problem. Random Drop relieves resource congestion upon buffer overflow by choosing a random packet from the service queue to be dropped. The random choice should result in a drop distribution proportional to the bandwidth distribution among all contending TCP connections, thus providing the necessary fairness. Nonetheless, the simulation experiments demonstrate several shortcomings of this policy. Because Random Drop is a congestion control policy that is not applied until congestion has already occurred, it usually results in a high drop rate that hurts too many connections, including well-behaved ones. Even though the number of packets dropped differs from one connection to another depending on buffer utilization upon overflow, the TCP recovery overhead is high enough to neutralize these differences, causing unfair congestion penalties. Moreover, the drop distribution itself is an inaccurate representation of the average bandwidth distribution, missing much important information about bandwidth utilization between buffer overflow events. A modification of Random Drop to perform congestion avoidance by applying the policy early was also proposed. Early Random Drop has the advantage of avoiding the high drop rate of buffer overflow. The early application of the policy removes the pressure of congestion relief and allows more accurate signaling of congestion. To be used effectively, algorithms for the dynamic adjustment of Early Random Drop's parameters to suit the current network load must still be developed.
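The contrast between the two policies can be sketched as queue-admission logic: Random Drop acts only when the buffer overflows, while Early Random Drop drops arriving packets probabilistically once occupancy crosses a threshold, before overflow occurs. The threshold and drop probability below are illustrative assumptions; as the abstract notes, tuning them to the current load remained an open problem.

```python
# Sketch contrasting Random Drop (act only on overflow) with Early
# Random Drop (probabilistic drop before the buffer fills). The
# threshold and drop probability are illustrative assumptions.
import random

def enqueue_random_drop(queue, pkt, capacity):
    """Random Drop: on overflow, evict a randomly chosen queued packet."""
    if len(queue) >= capacity:
        queue.pop(random.randrange(len(queue)))
    queue.append(pkt)

def enqueue_early_random_drop(queue, pkt, capacity,
                              threshold=0.75, p_drop=0.02,
                              rng=random.random):
    """Early Random Drop: above `threshold` occupancy, drop the
    arriving packet with a small fixed probability `p_drop`."""
    if len(queue) >= capacity:
        return False                     # forced drop on overflow
    if len(queue) >= threshold * capacity and rng() < p_drop:
        return False                     # early, probabilistic drop
    queue.append(pkt)
    return True
```

Making `threshold` and `p_drop` adapt to the measured load is exactly the dynamic-adjustment problem the abstract leaves open; fixed values either drop too aggressively at light load or too late under heavy load.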

142 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations (96% related)
Wireless ad hoc network: 49K papers, 1.1M citations (96% related)
Wireless network: 122.5K papers, 2.1M citations (95% related)
Wireless sensor network: 142K papers, 2.4M citations (94% related)
Key distribution in wireless sensor networks: 59.2K papers, 1.2M citations (93% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    133
2022    325
2021    694
2020    846
2019    1,033
2018    993