About: Packet loss is a research topic. Over its lifetime, 21,235 publications have been published within this topic, receiving 302,453 citations.
Papers published on a yearly basis
01 Aug 1988
TL;DR: Measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet; the paper also covers Karn's clamped retransmit backoff, an algorithm developed by Phil Karn of Bell Communications Research, and fast retransmit, described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP:
(i) round-trip-time variance estimation
(ii) exponential retransmit timer backoff
(iii) slow-start
(iv) more aggressive receiver ack policy
(v) dynamic window sizing on congestion
(vi) Karn's clamped retransmit backoff
(vii) fast retransmit

Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet. This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them. By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion.
Observation of the Internet suggests that it was not particularly robust. Why the discrepancy? There are only three ways for packet conservation to fail:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.
In the following sections, we treat each of these in turn.
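The window dynamics described above (slow-start plus dynamic window sizing on congestion) can be sketched as a toy simulation. This is not Jacobson and Karels's 4.3BSD code; the function name, the loss schedule, and the initial `ssthresh` of 16 segments are illustrative assumptions, kept only to show the exponential-then-linear growth and the halving reaction to a drop.

```python
# Toy sketch of slow-start / congestion-avoidance window growth.
# `cwnd` and `ssthresh` follow TCP naming convention; loss events
# are supplied by the caller rather than observed from a network.

def evolve_cwnd(rtts, loss_at=(), ssthresh=16.0):
    """Return the congestion window (in segments) after each RTT."""
    cwnd, history = 1.0, []
    for rtt in range(rtts):
        if rtt in loss_at:                 # a drop signals congestion
            ssthresh = max(cwnd / 2, 2.0)  # halve the threshold
            cwnd = 1.0                     # restart from slow-start
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow-start: double per RTT
        else:
            cwnd += 1                      # congestion avoidance: +1 per RTT
        history.append(cwnd)
    return history

print(evolve_cwnd(8))            # exponential up to ssthresh, then linear
print(evolve_cwnd(3, loss_at={1}))  # window collapses at the loss event
```

The "conservation of packets" principle is what makes this safe: with a full window in flight, each ack for a departed packet releases exactly one new packet, so the window size, not the send loop, governs the load offered to the network.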
28 Aug 2000
TL;DR: A mechanism for equation-based congestion control for unicast traffic that refrains from halving the sending rate in response to a single packet drop; both simulations and experiments over the Internet are used to explore its performance.
Abstract: This paper proposes a mechanism for equation-based congestion control for unicast traffic. Most best-effort traffic in the current Internet is well-served by the dominant transport protocol, TCP. However, traffic such as best-effort unicast streaming multimedia could find use for a TCP-friendly congestion control mechanism that refrains from reducing the sending rate in half in response to a single packet drop. With our mechanism, the sender explicitly adjusts its sending rate as a function of the measured rate of loss events, where a loss event consists of one or more packets dropped within a single round-trip time. We use both simulations and experiments over the Internet to explore performance.
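The rate adjustment described above, where the sender sets its rate as a function of the measured loss event rate, is commonly illustrated with the TCP response function used in equation-based (TCP-friendly) rate control. The sketch below is not the paper's exact implementation: the function name is invented, and the retransmit timeout is approximated as t_RTO = 4R, a common simplification.

```python
from math import sqrt

def tcp_friendly_rate(s, R, p, t_RTO=None):
    """Upper bound on the send rate (bytes/sec) for packet size s (bytes),
    round-trip time R (seconds), and loss event rate p (0 < p <= 1).
    A loss event is one or more drops within a single round-trip time."""
    if t_RTO is None:
        t_RTO = 4 * R  # common simplification for the retransmit timeout
    denom = (R * sqrt(2 * p / 3)
             + t_RTO * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p))
    return s / denom

# Higher loss event rate -> lower permitted sending rate.
low_loss  = tcp_friendly_rate(1460, 0.1, 0.01)
high_loss = tcp_friendly_rate(1460, 0.1, 0.04)
print(low_loss > high_loss)
```

Because the rate varies smoothly with the measured loss event rate p, a single drop nudges the rate down rather than halving it, which is the property the paper argues streaming media needs.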
TL;DR: Zhao and Govindan present a systematic medium-scale measurement of packet delivery in dense wireless sensor networks; packet delivery performance is especially important for energy-constrained networks, since it translates directly to network lifetime.
Abstract: Understanding Packet Delivery Performance In Dense Wireless Sensor Networks. Jerry Zhao and Ramesh Govindan, Computer Science Department, University of Southern California, Los Angeles, CA 90089-0781. SenSys'03, November 5-7, 2003, Los Angeles, California, USA. This work is supported in part by NSF grant CCR-0121778 for the Center for Embedded Systems.

ABSTRACT: Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks. In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.

Categories and Subject Descriptors: C.2.1 [Network Architecture and Design]: Wireless communication; C.4 [Performance of Systems]: Performance attributes, Measurement techniques. General Terms: Measurement, Experimentation. Keywords: Low power radio, Packet loss, Performance measurement.

1. INTRODUCTION: Wireless communication has the reputation of being notoriously unpredictable. The quality of wireless communication depends on the environment, the part of the frequency spectrum under use, the particular modulation schemes under use, and possibly on the communicating devices themselves. Communication quality can vary dramatically over time, and has been reputed to change with slight spatial displacements. All of these are true to a greater degree for ad-hoc (or infrastructure-less) communication than for wireless communication to a base station. Given this, and the paucity of large-scale deployments, it is perhaps not surprising that there have been no medium to large-scale measurements of ad-hoc wireless systems; one expects measurement studies to reveal high variability in performance, and one suspects that such studies will be non-representative.

Wireless sensor networks [5, 7] are predicated on ad-hoc wireless communications. Perhaps more than other ad-hoc wireless systems, these networks can expect highly variable wireless communication. They will be deployed in harsh, inaccessible environments which, almost by definition, will exhibit significant multi-path communication. Many of the current sensor platforms use low-power radios which do not have enough frequency diversity to reject multi-path propagation. Finally, these networks will be fairly densely deployed (on the order of tens of nodes within communication range). Given the potential impact of these networks, and despite the anecdotal evidence of variability in wireless communication, we argue that it is imperative that we get a quantitative understanding of wireless communication in sensor networks, however imperfect. Our paper is a first attempt at this.

Using up to 60 Mica motes, we systematically evaluate the most basic aspect of wireless communication in a sensor network: packet delivery. Particularly for energy-constrained networks, packet delivery performance is important, since that translates to network lifetime. Sensor networks are predicated on using low-power RF transceivers in a multi-hop fashion. Multiple short hops can be more energy-efficient than one single hop over a long-range link. Poor cumulative packet delivery performance across multiple hops may degrade performance of data transport and expend significant energy. Depending on the kind of application, it might significantly undermine application-level performance. Finally, understanding the dynamic range of packet delivery performance (and the extent, and time-varying nature of this performance) is important for evaluating almost all sensor network communication protocols.

We study packet delivery performance at two layers of the communication stack (Section 3). At the physical layer and in the absence of interfering transmissions, packet de-
05 Nov 2003
TL;DR: This paper reports on a systematic medium-scale measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot; the findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.
Abstract: Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks. In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.
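A minimal sketch of how the packet delivery performance measured above can be computed from send and receive logs. The function names and the window size are illustrative assumptions, not the authors' methodology; the windowed variant is included because the paper stresses that delivery varies over time, so a single average hides the spatio-temporal structure.

```python
def reception_ratio(sent_seqs, received_seqs):
    """Fraction of transmitted sequence numbers that were received."""
    sent = set(sent_seqs)
    return len(sent & set(received_seqs)) / len(sent)

def windowed_ratio(sent_seqs, received_seqs, window=50):
    """Reception ratio per window of `window` consecutive transmissions,
    exposing how delivery varies over time rather than just on average."""
    sent = list(sent_seqs)
    return [reception_ratio(sent[i:i + window], received_seqs)
            for i in range(0, len(sent), window)]

# Hypothetical trace: 100 packets sent, sequence numbers 3 and 7 lost.
received = set(range(100)) - {3, 7}
print(reception_ratio(range(100), received))
print(windowed_ratio(range(100), received, window=50))
```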
01 Oct 1997
TL;DR: The prevalence of unusual network events such as out-of-order delivery and packet corruption is characterized, and a robust receiver-based algorithm for estimating "bottleneck bandwidth" is discussed that addresses deficiencies discovered in techniques based on "packet pair".
Abstract: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.
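The basic "packet pair" idea that the paper's receiver-based algorithm improves upon can be sketched as follows: each pair of back-to-back packets arriving at the receiver yields a bandwidth estimate (packet size over inter-arrival gap), and a robust statistic such as the median damps the queueing noise that defeats the naive version. This is a simplified illustration with invented names and data, not Paxson's actual estimator.

```python
from statistics import median

def packet_pair_estimates(arrivals, sizes):
    """Per-pair bandwidth estimates (bytes/sec) from receiver-side
    timestamps: the second packet's size over the inter-arrival gap."""
    ests = []
    for (t0, t1), size in zip(zip(arrivals, arrivals[1:]), sizes[1:]):
        gap = t1 - t0
        if gap > 0:                 # skip coincident timestamps
            ests.append(size / gap)
    return ests

def bottleneck_bandwidth(arrivals, sizes):
    """Robust summary: the median damps pairs stretched or compressed
    by queueing, which bias a single-pair estimate."""
    return median(packet_pair_estimates(arrivals, sizes))

# Hypothetical trace: 1000-byte packets; one gap stretched by queueing.
arrivals = [0.0, 0.001, 0.002, 0.0035]
sizes = [1000, 1000, 1000, 1000]
print(bottleneck_bandwidth(arrivals, sizes))  # close to 1e6 bytes/sec
```

Measuring at the receiver, as the paper advocates, avoids the distortion that the return path's own queueing adds to sender-side ack timing.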
Related Topics (5)
- 159.7K papers, 2.2M citations
- Wireless ad hoc network: 49K papers, 1.1M citations
- 122.5K papers, 2.1M citations
- Wireless sensor network: 142K papers, 2.4M citations
- Key distribution in wireless sensor networks: 59.2K papers, 1.2M citations