
Showing papers on "Packet loss published in 2003"


Journal Article
TL;DR: Zhao and Govindan, as mentioned in this paper, performed a medium-scale measurement (up to sixty nodes) of packet delivery in dense wireless sensor networks and found that packet delivery performance is especially important for energy-constrained networks, since it translates to network lifetime.
Abstract: Understanding Packet Delivery Performance In Dense Wireless Sensor Networks. Jerry Zhao and Ramesh Govindan, Computer Science Department, University of Southern California, Los Angeles, CA 90089-0781 (zhaoy@usc.edu, ramesh@usc.edu). Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks. In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.

Categories and Subject Descriptors: C.2.1 [Network Architecture and Design]: Wireless communication; C.4 [Performance of Systems]: Performance attributes, Measurement techniques. General Terms: Measurement, Experimentation. Keywords: Low power radio, Packet loss, Performance measurement.

1. INTRODUCTION. Wireless communication has the reputation of being notoriously unpredictable. The quality of wireless communication depends on the environment, the part of the frequency spectrum under use, the particular modulation schemes under use, and possibly on the communicating devices themselves. Communication quality can vary dramatically over time, and has been reputed to change with slight spatial displacements. All of these are true to a greater degree for ad-hoc (or infrastructure-less) communication than for wireless communication to a base station. Given this, and the paucity of large-scale deployments, it is perhaps not surprising that there have been no medium to large-scale measurements of ad-hoc wireless systems; one expects measurement studies to reveal high variability in performance, and one suspects that such studies will be non-representative. Wireless sensor networks [5, 7] are predicated on ad-hoc wireless communications. Perhaps more than other ad-hoc wireless systems, these networks can expect highly variable wireless communication. They will be deployed in harsh, inaccessible environments which, almost by definition, will exhibit significant multi-path communication. Many of the current sensor platforms use low-power radios which do not have enough frequency diversity to reject multi-path propagation. Finally, these networks will be fairly densely deployed (on the order of tens of nodes within communication range). Given the potential impact of these networks, and despite the anecdotal evidence of variability in wireless communication, we argue that it is imperative that we get a quantitative understanding of wireless communication in sensor networks, however imperfect. Our paper is a first attempt at this. Using up to 60 Mica motes, we systematically evaluate the most basic aspect of wireless communication in a sensor network: packet delivery. Particularly for energy-constrained networks, packet delivery performance is important, since it translates to network lifetime. Sensor networks are predicated on using low-power RF transceivers in a multi-hop fashion. Multiple short hops can be more energy-efficient than one single hop over a long-range link. Poor cumulative packet delivery performance across multiple hops may degrade performance of data transport and expend significant energy. Depending on the kind of application, it might significantly undermine application-level performance. Finally, understanding the dynamic range of packet delivery performance (and the extent, and time-varying nature, of this performance) is important for evaluating almost all sensor network communication protocols. We study packet delivery performance at two layers of the communication stack (Section 3). At the physical layer, and in the absence of interfering transmissions, packet delivery …

SenSys'03, November 5-7, 2003, Los Angeles, California, USA. This work is supported in part by NSF grant CCR-0121778 for the Center for Embedded Systems.
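The core metric throughout such measurement studies is the per-link packet reception ratio over a window of transmissions. A minimal sketch of that accounting (hypothetical sequence-number bookkeeping, not the authors' instrumentation):

```python
def reception_ratio(received_seqs, first_seq, last_seq):
    """Fraction of packets in [first_seq, last_seq] a receiver actually
    heard, judged from logged sequence numbers (set() drops duplicates)."""
    expected = last_seq - first_seq + 1
    heard = len({s for s in received_seqs if first_seq <= s <= last_seq})
    return heard / expected
```

Computed per sender-receiver pair and per time window, this single number underlies the spatio-temporal loss characterizations the paper reports.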

1,330 citations


Proceedings ArticleDOI
05 Nov 2003
TL;DR: This paper reports on a systematic medium-scale measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot, which has interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.
Abstract: Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks. In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.

1,326 citations


Journal Article
TL;DR: In this article, Stann et al. present RMST (Reliable Multi-Segment Transport), a new transport layer for Directed Diffusion, which provides guaranteed delivery and fragmentation/reassembly for applications that require them.
Abstract: Appearing in 1st IEEE International Workshop on Sensor Net Protocols and Applications (SNPA), Anchorage, Alaska, USA, May 11, 2003. RMST: Reliable Data Transport in Sensor Networks. Fred Stann and John Heidemann, USC/Information Sciences Institute, 4676 Admiralty Way, Marina Del Rey, CA, USA (fstann@usc.edu, johnh@isi.edu). Reliable data transport in wireless sensor networks is a multifaceted problem influenced by the physical, MAC, network, and transport layers. Because sensor networks are subject to strict resource constraints and are deployed by single organizations, they encourage revisiting traditional layering and are less bound by standardized placement of services such as reliability. This paper presents analysis and experiments resulting in specific recommendations for implementing reliable data transport in sensor nets. To explore reliability at the transport layer, we present RMST (Reliable Multi-Segment Transport), a new transport layer for Directed Diffusion. RMST provides guaranteed delivery and fragmentation/reassembly for applications that require them. RMST is a selective NACK-based protocol that can be configured for in-network caching and repair.

1 Introduction. Wireless sensor networks provide an economical, fully distributed sensing and computing solution for environments where conventional networks are impractical. This paper explores the design decisions related to providing reliable data transport in sensor nets. The reliable data transport problem in sensor nets is multi-faceted. The emphasis on energy conservation in sensor nets implies that poor paths should not be artificially bolstered via mechanisms such as MAC layer ARQ during route discovery and path selection [1]. Path maintenance, on the other hand, benefits from well-engineered recovery either at the MAC layer or the transport layer, or both. Recovery should not be costly, however, since many applications in sensor nets are impervious to occasional packet loss, relying on the regular delivery of coarse-grained event descriptions. Other applications require loss detection and repair. These aspects of reliable data transport include the provision of guaranteed delivery and fragmentation/reassembly of data entities larger than the network MTU. Sensor networks have different constraints than traditional wired nets. First, energy constraints are paramount in sensor networks since nodes can often not be recharged, so any wasted energy shortens their useful lifetime [2]. Second, these energy constraints, plus relatively low wireless bandwidths, make in-network processing both feasible and desirable [3]. Third, because nodes in sensor networks are usually collaborating towards a common task, rather than representing independent users, optimization of the shared network focuses on throughput rather than fairness. Finally, because sensor networks are often deployed by a single organization with inexpensive hardware, there is less need for interoperability with existing standards. For all of these reasons, sensor networks provide an environment that encourages rethinking the structure of traditional communications protocols. The main contribution is an evaluation of the placement of reliability for data transport at different levels of the protocol stack. We consider implementing reliability in the MAC, transport layer, application, and combinations of these. We conclude that reliability is important at the MAC layer and the transport layer. MAC-level reliability is important not just to provide hop-by-hop error recovery for the transport layer, but also because it is needed for route discovery and maintenance. (This conclusion differs from previous studies in reliability for sensor nets that did not simulate routing [4].) Second, we have developed RMST (Reliable Multi-Segment Transport), a new transport layer, in order to understand the role of in-network processing for reliable data transfer. RMST benefits from diffusion routing, adding minimal additional control traffic. RMST guarantees delivery, even when multiple hops exhibit very high error rates. This work was supported by DARPA under grant DABT63-99-1-0011 as part of the SCAADS project, and was also made possible in part due to support from Intel Corporation and Xerox Corporation.

2 Architectural Choices. There are a number of key areas to consider when engineering reliability for sensor nets. Many current sensor networks exhibit high loss rates compared to wired networks (2% to 30% to immediate neighbors) [1,5,6]. While error detection and correction at the physical layer are important, approaches at the MAC layer and higher adapt well to the very wide range of loss rates seen in sensor networks and are the focus of this paper. MAC layer protocols can ameliorate PHY-layer unreliability, and transport layers can guarantee delivery. An important question for this paper is the trade-off between implementation of reliability at the MAC layer (i.e., hop-to-hop) vs. the transport layer, which has traditionally been concerned with end-to-end reliability. Because sensor net applications are distributed, we also considered implementing reliability at the application layer. Our goal is to minimize the cost of repair in terms of transmission.
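RMST's selective-NACK repair with in-network caching can be pictured with two toy helpers (names and data structures are illustrative, not RMST's actual messages): a receiving node names the holes in a fragmented data entity, and a repair request is served by the nearest upstream cache holding the missing fragment.

```python
def missing_fragments(received, total):
    """Selective NACK: list the fragment indices of a data entity
    (fragments 0..total-1) that have not yet arrived."""
    return [i for i in range(total) if i not in received]

def nearest_repairer(caches, fragment):
    """In-network repair: pick the closest caching node that holds the
    fragment. caches maps node -> (hop_distance, set_of_fragments);
    returns None when only the source can repair."""
    holders = [(hops, node) for node, (hops, frags) in caches.items()
               if fragment in frags]
    return min(holders)[1] if holders else None
```

The design point this illustrates is that repair traffic travels only as far as the nearest cache, rather than all the way back to the source.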

650 citations


Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper studies TCP performance over multihop wireless networks that use the IEEE 802.11 protocol as the access method and proposes two techniques, link RED and adaptive pacing, through which it is able to improve TCP throughput by 5% to 30% in various simulated topologies.
Abstract: This paper studies TCP performance over multihop wireless networks that use the IEEE 802.11 protocol as the access method. Our analysis and simulations show that, given a specific network topology and flow patterns, there exists a TCP window size W*, at which TCP achieves best throughput via improved spatial channel reuse. However, TCP does not operate around W*, and typically grows its average window size much larger; this leads to decreased throughput and increased packet loss. The TCP throughput reduction can be explained by its loss behavior. Our results show that network overload is mainly signified by wireless link contention in multihop wireless networks. As long as the buffer size at each node is reasonably large (say, larger than 10 packets), buffer overflow-induced packet loss is rare and packet drops due to link-layer contention dominate. Link-layer drops offer the first sign for network overload. We further show that multihop wireless links collectively exhibit graceful drop behavior: as the offered load increases, the link contention drop probability also increases, but saturates eventually. In general, the link drop probability is insufficient to stabilize the average TCP window size around W*. Consequently, TCP suffers from reduced throughput due to reduced spatial reuse. We further propose two techniques, link RED and adaptive pacing, through which we are able to improve TCP throughput by 5% to 30% in various simulated topologies. Some simulation results are also validated by real hardware experiments.
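Link RED, one of the paper's two techniques, applies RED's idea to link-layer contention: the drop/mark probability grows with a moving average of MAC retransmission counts. A hedged sketch of that mapping (threshold and probability values are illustrative, not the paper's tuning):

```python
def lred_drop_prob(avg_retries, min_th=1.0, max_th=3.0, max_p=0.2):
    """Link RED: map the moving average of per-packet MAC
    retransmission counts to a drop/mark probability, RED-style:
    zero below min_th, linear ramp up to max_p at max_th."""
    if avg_retries < min_th:
        return 0.0
    if avg_retries >= max_th:
        return max_p
    return max_p * (avg_retries - min_th) / (max_th - min_th)
```

Because link-layer contention drops are the first sign of overload, signaling TCP early through this probability keeps the window nearer the optimal W*.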

570 citations


Journal ArticleDOI
01 Jul 2003
TL;DR: The emulation capabilities of NIST Net are described; the architecture of the tool is examined; and some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead are discussed.
Abstract: Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
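The impairments NIST Net imposes (loss, delay, jitter) can be mimicked in a few lines of user-space simulation; this is only a toy model of the semantics, since NIST Net itself operates on live IP packets inside a Linux kernel router:

```python
import random

def emulate(packets, loss_rate=0.05, base_delay_ms=50.0,
            jitter_ms=10.0, seed=7):
    """Apply NIST Net-style impairments to a packet sequence:
    drop each packet with probability loss_rate, otherwise assign a
    delay drawn uniformly from base_delay_ms +/- jitter_ms.
    Returns a list of (packet, delay_ms) for the survivors."""
    rng = random.Random(seed)
    out = []
    for pkt in packets:
        if rng.random() < loss_rate:
            continue  # packet lost
        delay = base_delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        out.append((pkt, delay))
    return out
```

Real emulators also model duplication, reordering, and bandwidth limits, and must do this per-packet at line rate with minimal overhead, which is the implementation challenge the paper discusses.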

543 citations


Journal ArticleDOI
TL;DR: This work proposes and study a novel end-to-end congestion control mechanism called Veno that is simple and effective for dealing with random packet loss in wireless access networks and can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections.
Abstract: Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or random bit errors. Specifically: (1) it refines the multiplicative decrease algorithm of TCP Reno-the most widely deployed TCP version in practice-by adjusting the slow-start threshold according to the perceived network congestion level rather than a fixed drop factor and (2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. Based on extensive network testbed experiments and live Internet measurements, we show that Veno can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections. In typical wireless access networks with 1% random packet loss rate, throughput improvement of up to 80% can be demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno without changing the receiver-side protocol stack.
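Veno's loss discrimination reduces to a Vegas-style backlog estimate N = cwnd x (RTT - BaseRTT) / RTT: when a loss occurs while N is small, the network is not congested, so the loss was probably a random bit error. A sketch using the constants reported for Veno (beta = 3 packets, a 1/5 reduction for random loss):

```python
def veno_ssthresh(cwnd, rtt, base_rtt, beta=3):
    """TCP Veno's refined multiplicative decrease: estimate the router
    backlog N; a small backlog at loss time suggests random corruption,
    so back off less aggressively than Reno's halving."""
    backlog = cwnd * (rtt - base_rtt) / rtt
    if backlog < beta:           # likely random (non-congestive) loss
        return int(cwnd * 4 / 5)
    return cwnd // 2             # likely congestive loss: Reno behavior
```

Because the reduction stays at Reno's halving whenever congestion is indicated, Veno remains fair to concurrent Reno flows, which is the property the testbed experiments verify.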

530 citations


Proceedings ArticleDOI
03 Dec 2003
TL;DR: A mapping protocol for nodes that surround a jammer which allows network applications to reason about the region as an entity, rather than as a collection of broken links and congested nodes is described.
Abstract: Preventing denial-of-service attacks in wireless sensor networks is difficult primarily because of the limited resources available to network nodes and the ease with which attacks are perpetrated. Rather than jeopardize design requirements, which call for simple, inexpensive, mass-producible devices, we propose a coping strategy that detects and maps jammed regions. We describe a mapping protocol for nodes that surround a jammer which allows network applications to reason about the region as an entity, rather than as a collection of broken links and congested nodes. This solution is enabled by a set of design principles: loose group semantics, eager eavesdropping, supremacy of local information, robustness to packet loss and failure, and early use of results. Performance results show that regions can be mapped in 1-5 seconds, fast enough for real-time response. With a moderately connected network, the protocol is robust to failure rates as high as 25 percent.

400 citations


Proceedings ArticleDOI
11 May 2003
TL;DR: A novel tree construction algorithm is proposed that enables energy-efficient computation of some classes of aggregates of network properties, and it is shown that wireless communication artifacts in even relatively benign environments can significantly impact the computation of these aggregate properties.
Abstract: Wireless sensor networks involve very large numbers of small, low-power, wireless devices. Given their unattended nature, and their potential applications in harsh environments, we need a monitoring infrastructure that indicates system failures and resource depletion. We describe an architecture for sensor network monitoring, then focus on one aspect of this architecture: continuously computing aggregates (sum, average, count) of network properties (loss rates, energy levels, packet counts, etc.). Our contributions are two-fold. First, we propose a novel tree construction algorithm that enables energy-efficient computation of some classes of aggregates. Second, we show through actual implementation and experiments that wireless communication artifacts in even relatively benign environments can significantly impact the computation of these aggregate properties. In some cases, without careful attention to detail, the relative error in the computed aggregates can be as much as 50%. However, by carefully discarding links with heavy packet loss and asymmetry, we can improve accuracy by an order of magnitude.
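The accuracy fix the authors describe, discarding links with heavy packet loss and asymmetry before building the aggregation tree, amounts to a simple filter over measured per-direction loss rates. A sketch with illustrative thresholds (not the paper's values):

```python
def usable_links(stats, max_loss=0.3, max_asymmetry=0.25):
    """Keep only links fit for the aggregation tree. stats maps a link
    (a, b) to its measured loss rates (loss_ab, loss_ba); links that
    are too lossy in either direction, or too asymmetric between
    directions, are discarded."""
    good = []
    for (a, b), (loss_ab, loss_ba) in stats.items():
        if max(loss_ab, loss_ba) > max_loss:
            continue  # heavy loss in at least one direction
        if abs(loss_ab - loss_ba) > max_asymmetry:
            continue  # asymmetric link: acks/aggregates unreliable
        good.append((a, b))
    return good
```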

355 citations


Journal Article
TL;DR: The segmentation scheme is investigated by simulation in conjunction with a deflection scheme, and it is shown that segmentation with deflection can achieve a significantly reduced packet loss rate.

307 citations


Journal ArticleDOI
TL;DR: A node-scheduling scheme that reduces overall system energy consumption, and therefore increases system lifetime, by identifying nodes that are redundant with respect to sensing coverage and assigning them an off-duty operation mode with lower energy consumption than the normal on-duty one.
Abstract: In wireless sensor networks that consist of a large number of low-power, short-lived, unreliable sensors, one of the main design challenges is to obtain long system lifetime without sacrificing the system's original performance (sensing coverage and sensing reliability). In this paper, we propose a node-scheduling scheme, which can reduce system overall energy consumption, therefore increasing system lifetime, by identifying redundant nodes with respect to sensing coverage and then assigning them an off-duty operation mode that has lower energy consumption than the normal on-duty one. Our scheme aims to completely preserve original sensing coverage theoretically. Practically, sensing coverage degradation caused by location error, packet loss and node failure is very limited, not more than 1% as shown by our experimental results. In addition, the experimental results illustrate that certain redundancy is still guaranteed after node-scheduling, which we believe can provide enough sensing reliability in many applications. We implement the proposed scheme in NS-2 as an extension of the LEACH protocol and compare its energy consumption with the original LEACH. Simulation results exhibit noticeably longer system lifetime after introducing our scheme than before. Copyright © 2003 John Wiley & Sons, Ltd.
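The redundancy test at the heart of such node scheduling asks whether a node's sensing disk is already covered by its neighbors. A sampling approximation of that geometric check (the paper's actual eligibility rule is analytic and more precise; this sketch trades precision for brevity):

```python
def is_redundant(node, neighbors, r, grid=20):
    """Off-duty eligibility check by sampling: grid-sample points in
    the node's sensing disk of radius r and test whether every sample
    also lies within some neighbor's sensing disk (same radius)."""
    nx, ny = node
    step = 2 * r / grid
    for i in range(grid + 1):
        for j in range(grid + 1):
            px, py = nx - r + i * step, ny - r + j * step
            if (px - nx) ** 2 + (py - ny) ** 2 > r * r:
                continue  # sample falls outside this node's disk
            if not any((px - qx) ** 2 + (py - qy) ** 2 <= r * r
                       for qx, qy in neighbors):
                return False  # an uncovered spot: node must stay on
    return True
```

Note the failure modes the abstract quantifies: location error and packet loss perturb exactly the neighbor positions this check relies on, which is why measured coverage degradation (under 1%) matters.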

291 citations


Journal ArticleDOI
TL;DR: It is demonstrated that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes, and an algorithm is presented that extracts stationary components from a collected trace in order to provide analytical channel models that more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes.
Abstract: Techniques for modeling and simulating channel conditions play an essential role in understanding network protocol and application behavior. In [11], we demonstrated that inaccurate modeling using a traditional analytical model yielded suboptimal error control protocol parameters choices. In this paper, we demonstrate that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes. We then present an algorithm that extracts stationary components from a collected trace in order to provide analytical channel models that, relative to traditional approaches, more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes. Our algorithm also generates artificial traces with the same statistical characteristics as actual collected network traces. For validation, we develop a channel model for the circuit-switched data service in GSM and show that it: (1) more closely approximates GSM channel characteristics than traditional Markov models and (2) generates artificial traces that closely match collected traces' statistics. Using these traces in a simulator environment enables future protocol and application testing under different controlled and repeatable conditions.
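The traditional Markov models the authors compare against are typically two-state Gilbert channels; generating an artificial loss trace from one takes only a few lines (parameter values are illustrative):

```python
import random

def gilbert_trace(n, p_gb=0.05, p_bg=0.5, loss_in_bad=1.0, seed=1):
    """Artificial loss trace from a two-state Gilbert channel model:
    the chain moves Good -> Bad with prob. p_gb and Bad -> Good with
    prob. p_bg; packets are lost (with prob. loss_in_bad) only in the
    Bad state. Returns a 0/1 trace (1 = lost)."""
    rng = random.Random(seed)
    state, trace = "G", []
    for _ in range(n):
        if state == "G":
            if rng.random() < p_gb:
                state = "B"
        elif rng.random() < p_bg:
            state = "G"
        lost = state == "B" and rng.random() < loss_in_bad
        trace.append(1 if lost else 0)
    return trace
```

The paper's point is that a single such stationary model fits real traces poorly; its algorithm instead segments a collected trace into stationary pieces and fits a model per piece, better capturing burstiness.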

Proceedings ArticleDOI
06 Apr 2003
TL;DR: A model is proposed that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames; the accuracy of the proposed model is validated with JVT/H.26L coded video.
Abstract: Video communication is often afflicted by various forms of losses, such as packet loss over the Internet. The paper examines the question of whether the packet loss pattern, and in particular the burst length, is important for accurately estimating the expected mean-squared error distortion. Specifically, we (1) verify that the loss pattern does have a significant effect on the resulting distortion, (2) explain why a loss pattern, for example a burst loss, generally produces a larger distortion than an equal number of isolated losses, and (3) propose a model that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames. The accuracy of the proposed model is validated with JVT/H.26L coded video and previous frame concealment, where for most sequences the total distortion is predicted to within /spl plusmn/0.3 dB for burst loss of length two packets, as compared to prior models which underestimate the distortion by about 1.5 dB. Furthermore, as the burst length increases, our prediction is within /spl plusmn/0.7 dB, while prior models degrade and underestimate the distortion by over 3 dB.

Journal ArticleDOI
TL;DR: This work addresses the issue of providing quality-of-service (QoS) in an optical burst-switched network by introducing prioritized contention resolution policies in the network core and a composite burst-assembly technique at the network edge.
Abstract: We address the issue of providing quality-of-service (QoS) in an optical burst-switched network. QoS is provided by introducing prioritized contention resolution policies in the network core and a composite burst-assembly technique at the network edge. In the core, contention is resolved through prioritized burst segmentation and prioritized deflection. The burst segmentation scheme allows high-priority bursts to preempt low-priority bursts and enables full class isolation between bursts of different priorities. At the edge of the network, a composite burst-assembly technique combines packets of different classes into the same burst, placing lower class packets toward the tail of the burst. By implementing burst segmentation in the core, packets that are placed at the tail of the burst are more likely to be dropped than packets that are placed at the head of the burst. The proposed schemes are evaluated through analysis and simulation, and it is shown that significant differentiation with regard to packet loss and delay can be achieved.
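The interplay of composite burst assembly and tail segmentation can be sketched directly: lower-priority packets are placed toward the tail of the burst, and contention resolution sheds the tail first. Toy helpers (not the paper's scheduler):

```python
def assemble_burst(packets_by_class):
    """Composite burst assembly: concatenate packets class by class,
    highest priority (class index 0) at the head, lower classes toward
    the tail. packets_by_class maps class index -> list of packets."""
    burst = []
    for cls in sorted(packets_by_class):
        burst.extend(packets_by_class[cls])
    return burst

def segment_on_contention(burst, overlap_len):
    """Burst segmentation: on contention, drop only the overlapping
    tail segment and keep transmitting the head."""
    return burst[:len(burst) - overlap_len]
```

Combining the two, any segmentation loss falls on whatever sits at the tail, which by construction is the lowest class in the burst; that is the mechanism behind the class isolation reported in the abstract.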

Journal ArticleDOI
TL;DR: Different techniques that attempt to avoid time and frequency collisions of WLAN and Bluetooth transmissions are considered, and their performance is measured in terms of packet loss, TCP goodput, delay, and delay jitter.
Abstract: In this article we discuss solutions to the interference problem caused by the proximity and simultaneous operation of Bluetooth and WLAN networks. We consider different techniques that attempt to avoid time and frequency collisions of WLAN and Bluetooth transmissions. We conduct a comparative analysis of their performance, and discuss the trends and trade-offs they bring for different applications and interference levels. Performance is measured in terms of packet loss, TCP goodput, delay, and delay jitter.

Proceedings ArticleDOI
04 Nov 2003
TL;DR: In this article, the authors extend the sender to detect and recover from false fast retransmits using DSACK information, and to avoid false fast retransmits proactively, by adaptively varying dupthresh.
Abstract: TCP performs poorly on paths that reorder packets significantly, where it misinterprets out-of-order delivery as packet loss. The sender responds with a fast retransmit though no actual loss has occurred. These repeated false fast retransmits keep the sender's window small, and severely degrade the throughput it attains. Requiring nearly in-order delivery needlessly restricts and complicates Internet routing systems and routers. Such beneficial systems as multi-path routing and parallel packet switches are difficult to deploy in a way that preserves ordering. Toward a more reordering-tolerant Internet architecture, we present enhancements to TCP that improve the protocol's robustness to reordered and delayed packets. We extend the sender to detect and recover from false fast retransmits using DSACK information, and to avoid false fast retransmits proactively, by adaptively varying dupthresh. Our algorithm is the first that adaptively balances increasing dupthresh, to avoid false fast retransmits, and limiting the growth of dupthresh, to avoid unnecessary timeouts. Finally, we demonstrate that TCP's RTO estimator tolerates delayed packets poorly, and present enhancements to it that ensure it is sufficiently conservative, without using timestamps or additional TCP header bits. Our simulations show that these enhancements significantly improve TCP's performance over paths that reorder or delay packets.
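The adaptive-dupthresh balance the abstract describes, raising the duplicate-ACK threshold when DSACK reveals a spurious fast retransmit and backing it off after a costly timeout, can be sketched as a small state update (a simplification of the idea, not the paper's exact algorithm):

```python
def update_dupthresh(dupthresh, event, incr=1, upper=None):
    """Adapt TCP's duplicate-ACK threshold. A DSACK-detected false
    fast retransmit means dupthresh was too low, so raise it (capped
    at an optional upper bound); a retransmission timeout means it may
    have grown too high, so pull it back toward the default of 3."""
    if event == "false_fast_retransmit":
        dupthresh += incr
        if upper is not None:
            dupthresh = min(dupthresh, upper)
    elif event == "timeout":
        dupthresh = max(3, dupthresh // 2)
    return dupthresh
```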

Journal ArticleDOI
TL;DR: A new receiver-based playout scheduling scheme is proposed to improve the tradeoff between buffering delay and late loss for real-time voice communication over IP networks and the overall audio quality is investigated based on subjective listening tests.
Abstract: The quality of service limitation of today's Internet is a major challenge for real-time voice communications. Excessive delay, packet loss, and high delay jitter all impair the communication quality. A new receiver-based playout scheduling scheme is proposed to improve the tradeoff between buffering delay and late loss for real-time voice communication over IP networks. In this scheme the network delay is estimated from past statistics and the playout time of the voice packets is adaptively adjusted. In contrast to previous work, the adjustment is not only performed between talkspurts, but also within talkspurts in a highly dynamic way. Proper reconstruction of continuous playout speech is achieved by scaling individual voice packets using a time-scale modification technique based on the Waveform Similarity Overlap-Add (WSOLA) algorithm. Results of subjective listening tests show that this operation does not impair audio quality, since the adaptation process requires infrequent scaling of the voice packets and low playout jitter is perceptually tolerable. The same time-scale modification technique is also used to conceal packet loss at very low delay, i.e., one packet time. Simulation results based on Internet measurements show that the tradeoff between buffering delay and late loss can be improved significantly. The overall audio quality is investigated based on subjective listening tests, showing typical gains of 1 on a 5-point scale of the Mean Opinion Score.
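Receiver-based playout schedulers conventionally track an exponentially weighted mean and variation of network delay and place the playout deadline a few deviations beyond the mean; the paper's contribution is applying such adaptation within talkspurts via WSOLA time-scaling of individual packets. The conventional per-packet update, as a sketch (the weight alpha and safety factor k are illustrative):

```python
def update_playout(d, v, delay, alpha=0.998, k=4.0):
    """Update the delay estimate on each arriving packet: d is the
    smoothed network delay, v its smoothed absolute deviation, delay
    the measured delay of this packet. The playout deadline d + k*v
    trades buffering delay against late loss."""
    d = alpha * d + (1 - alpha) * delay
    v = alpha * v + (1 - alpha) * abs(delay - d)
    return d, v, d + k * v
```

Stretching or compressing voice packets with WSOLA lets the receiver chase this deadline continuously instead of waiting for silence periods between talkspurts.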

Proceedings ArticleDOI
09 Jul 2003
TL;DR: Simulation results reveal that by using a five-packet data cache, CHAMP exhibits excellent improvement in packet delivery, outperforming AODV and DSR by at most 30% in stressful scenarios and end-to-end delay is significantly reduced while routing overhead is lower at high mobility rates.
Abstract: A mobile ad hoc network is an autonomous system of infrastructureless, multihop wireless mobile nodes. Reactive routing protocols perform well in such an environment due to their ability to cope quickly against topological changes. In this paper, we propose a new routing protocol called Caching and Multipath (CHAMP) Routing Protocol. CHAMP uses cooperative packet caching and shortest multipath routing to reduce packet loss due to frequent route breakdowns. Simulation results reveal that by using a five-packet data cache, CHAMP exhibits excellent improvement in packet delivery, outperforming AODV and DSR by at most 30% in stressful scenarios. Furthermore, end-to-end delay is significantly reduced while routing overhead is lower at high mobility rates.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed bit-plane-wise unequal error protection algorithm is simple, fast and robust in hostile network conditions and, therefore, can provide reasonable picture quality for video applications under varying network conditions.
Abstract: This paper presents a new bit-plane-wise unequal error protection algorithm for progressive bitstreams transmitted over lossy networks. The proposed algorithm protects a compressed embedded bitstream generated by a 3-D SPIHT algorithm by assigning an unequal amount of forward error correction (FEC) to each bit-plane. The proposed algorithm reduces the amount of side information needed to send the size of each code to the decoder by limiting the number of quality levels to the number of bit-planes to be sent while providing a graceful degradation of picture quality as packet losses increase. We also apply our proposed algorithm to transmission of JPEG 2000 coded images over the Internet. To get additional error-resilience at high packet loss rates, we extend our algorithm to multiple-substream unequal error protection. Simulation results show that the proposed algorithm is simple, fast and robust in hostile network conditions and, therefore, can provide reasonable picture quality for video applications under varying network conditions.
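The per-bit-plane protection idea can be illustrated under an i.i.d. loss model: compute the probability that an (n, k) erasure code recovers a bit-plane, then give more significant planes stricter recovery targets and hence more parity. This is a hedged sketch of the general unequal-protection idea, not the paper's rate-allocation procedure:

```python
from math import comb

def decode_prob(n, k, p):
    """Probability that a bit-plane protected by an (n, k) erasure code is
    recoverable: at most n - k of its n packets may be lost. Assumes i.i.d.
    loss at rate p (a simplification; the paper also targets bursty loss)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1))

def allocate_fec(n, loss_rate, targets):
    """Unequal protection sketch: each bit-plane gets the smallest parity
    amount meeting its recovery target, with more significant planes given
    stricter targets. Returns parity packets per plane."""
    alloc = []
    for target in targets:
        k = n
        while k > 1 and decode_prob(n, k, loss_rate) < target:
            k -= 1
        alloc.append(n - k)
    return alloc
```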

Proceedings ArticleDOI
09 Jul 2003
TL;DR: This work develops three techniques for inferring packet loss characteristics of Internet links using server-based measurements based on random sampling, linear optimization, and Bayesian inference using Gibbs sampling, respectively and finds that these techniques can identify most of the lossy links in the network with a manageable false positive rate.
Abstract: The problem of inferring the packet loss characteristics of Internet links using server-based measurements is investigated. Unlike much of existing work on network tomography that is based on active probing, we make inferences based on passive observation of end-to-end client-server traffic. Our work on passive network tomography focuses on identifying lossy links (i.e., the trouble spots in the network). We have developed three techniques for this purpose based on random sampling, linear optimization, and Bayesian inference using Gibbs sampling, respectively. We evaluate the accuracy of these techniques using both simulations and Internet packet traces. We find that these techniques can identify most of the lossy links in the network with a manageable false positive rate. For instance, simulation results indicate that the Gibbs sampling technique has over 80% coverage with a false positive rate under 5%. Furthermore, this technique provides a confidence indicator on its inference. We also perform inference based on Internet traces gathered at the busy microsoft.com Web site. However, validating these inferences is a challenging problem. We present a method for indirect validation that suggests that the false positive rate is manageable.
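As a toy illustration of passive lossy-link inference (far simpler than the paper's random-sampling, linear-optimization, and Gibbs-sampling techniques): links lying on any well-performing path are exonerated, and a greedy cover of the remaining links then explains the lossy paths.

```python
def infer_lossy_links(paths):
    """Toy passive-tomography heuristic, not the paper's algorithms.
    `paths` is a list of (links, lossy) pairs observed from the server to
    clients. Links on any non-lossy path are exonerated; a greedy set
    cover of the rest explains the lossy paths."""
    good = set()
    for links, lossy in paths:
        if not lossy:
            good.update(links)
    # Per lossy path, the still-suspect links that could explain it.
    unexplained = [s for s in (set(links) - good
                               for links, lossy in paths if lossy) if s]
    suspects = set()
    while unexplained:
        # Pick the suspect link that explains the most lossy paths.
        counts = {}
        for s in unexplained:
            for link in s:
                counts[link] = counts.get(link, 0) + 1
        best = max(counts, key=counts.get)
        suspects.add(best)
        unexplained = [s for s in unexplained if best not in s]
    return suspects
```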

Proceedings ArticleDOI
10 Jun 2003
TL;DR: A scalable model of a network of Active Queue Management (AQM) routers serving a large population of TCP flows is presented, showing the models to be quite accurate while at the same time requiring substantially less time to solve, especially when workloads and bandwidths are high.
Abstract: In this paper we present a scalable model of a network of Active Queue Management (AQM) routers serving a large population of TCP flows. We present efficient solution techniques that allow one to obtain the transient behavior of the average queue lengths, packet loss probabilities, and average end-to-end latencies. We model different versions of TCP as well as different versions of RED, the most popular AQM scheme currently in use. Comparisons between our models and ns simulation show our models to be quite accurate while at the same time requiring substantially less time to solve, especially when workloads and bandwidths are high.
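The differential-equation approach can be sketched by Euler-integrating a simplified TCP/RED fluid model in the spirit of the paper; the delayed arguments in the loss term are dropped here for brevity, and all constants are illustrative:

```python
def simulate(N=60, C=1500.0, Tp=0.05, t_end=20.0, dt=0.001,
             q_min=50.0, q_max=300.0, p_max=0.1):
    """Euler integration of a simplified TCP/RED fluid model (window W in
    packets, queue q in packets, capacity C in packets/s, propagation
    delay Tp in s). A sketch only: the delay terms in the loss argument
    and RED's averaged queue are omitted."""
    W, q = 1.0, 0.0
    t = 0.0
    while t < t_end:
        R = q / C + Tp                           # round-trip time
        # RED-style drop probability from the instantaneous queue.
        if q <= q_min:
            p = 0.0
        elif q >= q_max:
            p = p_max
        else:
            p = p_max * (q - q_min) / (q_max - q_min)
        dW = 1.0 / R - p * W * W / (2.0 * R)     # AIMD in fluid form
        dq = N * W / R - C                       # aggregate arrivals minus service
        W = max(W + dW * dt, 1.0)
        q = min(max(q + dq * dt, 0.0), 2 * q_max)
        t += dt
    return W, q
```

Solving such equations replaces per-packet simulation of thousands of flows with a handful of state variables, which is where the scalability claim comes from.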

Journal ArticleDOI
TL;DR: This paper describes a simple, lossless method of preventing deadlocks and livelocks in backpressured packet networks that represents a new networking paradigm in which internal network losses are avoided (thereby simplifying the design of other network protocols) and internal network delays are bounded.
Abstract: No packets will be dropped inside a packet network, even when congestion builds up, if congested nodes send backpressure feedback to neighboring nodes, informing them of the unavailability of buffering capacity and stopping them from forwarding more packets until enough buffer space becomes available. While there are potential advantages in backpressured networks that do not allow packet dropping, such networks are susceptible to a condition known as deadlock, in which throughput of the network or part of the network goes to zero (i.e., no packets are transmitted). In this paper, we describe a simple, lossless method of preventing deadlocks and livelocks in backpressured packet networks. In contrast with prior approaches, our proposed technique does not introduce any packet losses, does not corrupt packet sequence, and does not require any changes to packet headers. It represents a new networking paradigm in which internal network losses are avoided (thereby simplifying the design of other network protocols) and internal network delays are bounded.

01 Jan 2003
TL;DR: A novel rate allocation scheme to be used with Forward Error Correction (FEC) in order to minimize the probability of packet loss in bursty loss environments such as those caused by network congestion is proposed.
Abstract: With the explosive growth of video applications over the Internet, many approaches have been proposed to stream video effectively over packet switched, best-effort networks. Many use techniques from source and channel coding, or implement transport protocols, or modify system architectures in order to deal with the delay, loss, and time-varying nature of the Internet. In our previous work, we proposed a framework with a receiver-driven protocol to coordinate simultaneous video streaming from multiple senders to a single receiver in order to achieve higher throughput, and to increase tolerance to packet loss and delay due to network congestion. The receiver-driven protocol employs two algorithms: rate allocation and packet partition. The rate allocation algorithm determines the sending rate for each sender; the packet partition algorithm ensures no senders send the same packets, and at the same time, minimizes the probability of late packets. In this paper, we propose a novel rate allocation scheme to be used with Forward Error Correction (FEC) in order to minimize the probability of packet loss in bursty loss environments such as those caused by network congestion. Using both simulations and actual Internet experiments, we demonstrate the effectiveness of our rate allocation scheme in reducing packet loss, and hence, achieving higher visual quality for the streamed video.

Journal ArticleDOI
TL;DR: An analytical model for integrated real-time and non-real-time services in a wireless mobile network with priority reservation and preemptive priority handoff schemes and it is observed that the simulation results closely match the analytical model.
Abstract: We propose an analytical model for integrated real-time and non-real-time services in a wireless mobile network with priority reservation and preemptive priority handoff schemes. We categorize the service calls into four different types, namely, real-time and non-real-time service originating calls, and real-time and non-real-time handoff service request calls. Accordingly, the channels in each cell are divided into three parts: one is for real-time service calls only, the second is for non-real-time service calls only, and the last one is for overflow of handoff requests that cannot be served in the first two parts. In the third group, several channels are reserved exclusively for real-time service handoffs so that higher priority can be given to them. In addition, a real-time service handoff request has the right to preempt non-real-time service in the preemptive priority handoff scheme if no free channels are available, while the interrupted non-real-time service call returns to its handoff request queue. The system is modeled using a multidimensional Markov chain and a numerical analysis is presented to estimate blocking probabilities of originating calls, forced termination probability, and average transmission delay. This scheme is also simulated under different call holding time and cell dwell time distributions. It is observed that the simulation results closely match the analytical model. Our scheme significantly reduces the forced termination probability of real-time service calls. The probability of packet loss of non-real-time transmission is shown to be negligibly small, as a non-real-time service handoff request in waiting can be transferred from the queue of the current base station to another one.
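A much-simplified special case of such priority-reservation models is the classic one-dimensional guard-channel birth-death chain, which already shows how reserving channels trades new-call blocking for lower handoff blocking. A sketch with illustrative parameters (the paper's multidimensional chain, preemption, and queueing are omitted):

```python
def guard_channel_blocking(C, g, lam_new, lam_ho, mu):
    """One-dimensional guard-channel model: C channels, the last g
    reserved for handoffs; Poisson arrivals (lam_new new calls, lam_ho
    handoffs), exponential holding times with rate mu per call. Returns
    (new-call blocking prob., handoff blocking prob.) from the stationary
    distribution of the birth-death chain on 0..C busy channels."""
    rates = []
    for n in range(C):
        arr = lam_ho if n >= C - g else lam_new + lam_ho
        rates.append(arr / ((n + 1) * mu))
    pi = [1.0]
    for r in rates:
        pi.append(pi[-1] * r)
    Z = sum(pi)
    pi = [x / Z for x in pi]
    p_new = sum(pi[C - g:])   # new calls blocked once the guard region is reached
    p_ho = pi[C]              # handoffs blocked only when all channels are busy
    return p_new, p_ho
```

With g = 0 and no handoff traffic this reduces to the Erlang-B formula, which is a handy sanity check.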

Proceedings ArticleDOI
01 Dec 2003
TL;DR: A new formula is proposed to quantify the effects of packet loss and delay jitter on speech quality in voice over Internet protocol (VoIP) scenarios and incorporated into ITU-T G.107, the E-model, which is very useful in MOS prediction as well as network planning.
Abstract: The paper investigates the effects of packet loss and delay jitter on speech quality in voice over Internet protocol (VoIP) scenarios. A new formula is proposed to quantify these effects and incorporated into ITU-T G.107, the E-model. In the simulation, codecs ITU-T G.723.1 and G.729 are used; random packet loss and Pareto distributed network delay are introduced. The prediction errors range between -0.20 and +0.12 MOS (mean opinion score). The formula extends the coverage of the current E-model, and is very useful in MOS prediction as well as network planning.
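The G.107 machinery such a formula plugs into can be sketched directly: the rating factor R is reduced by an effective equipment impairment that grows with packet loss, then mapped to MOS. The R-to-MOS mapping and the Ie_eff form below follow ITU-T G.107/G.113; bpl = 4.3 is the tabulated robustness value for G.711 under random loss, and the paper's delay-jitter extension is not reproduced here:

```python
def r_to_mos(R):
    """ITU-T G.107 mapping from the rating factor R to estimated MOS."""
    if R < 0:
        return 1.0
    if R > 100:
        return 4.5
    return 1.0 + 0.035 * R + R * (R - 60) * (100 - R) * 7e-6

def mos_under_loss(ppl, ie=0.0, bpl=4.3, r0=93.2):
    """Effective equipment impairment under random packet loss (ppl in
    percent), per the G.107/G.113 formulation. ie/bpl defaults are
    illustrative (G.711 with random loss); delay and jitter impairments,
    which the paper's new formula addresses, are omitted."""
    ie_eff = ie + (95.0 - ie) * ppl / (ppl + bpl)
    return r_to_mos(r0 - ie_eff)
```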

Journal ArticleDOI
TL;DR: This work describes a novel method for authenticating multicast packets that is robust against packet loss, and derives the authentication probability of the scheme using two different bursty loss models.
Abstract: We describe a novel method for authenticating multicast packets that is robust against packet loss. Our focus is to minimize the size of the communication overhead required to authenticate the packets. Our approach is to encode the hash values and the signatures with Rabin's Information Dispersal Algorithm (IDA) to construct an authentication scheme that amortizes a single signature operation over multiple packets. This strategy is especially efficient in terms of space overhead, because just the essential elements needed for authentication (i.e., one hash per packet and one signature per group of packets) are used in conjunction with an erasure code that is space optimal. Using asymptotic techniques, we derive the authentication probability of our scheme using two different bursty loss models. A lower bound of the authentication probability is also derived for one of the loss models. To evaluate the performance of our scheme, we compare our technique with four other previously proposed schemes using empirical results.
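The amortization idea is to hash every packet, sign a single digest over all the hashes, and attach the resulting authentication information to the packets. In the paper that information is dispersed with Rabin's IDA so that any sufficiently large subset of received packets recovers it; the sketch below simply replicates it on every packet (the space-inefficient degenerate case), with `sign`/`verify` as placeholder signature primitives:

```python
import hashlib

def make_authenticated_group(payloads, sign):
    """One signature amortized over a group of packets. The paper
    disperses the authentication info F with Rabin's IDA; here F is
    replicated onto every packet for brevity. `sign` is a placeholder
    for the sender's signing primitive."""
    hashes = [hashlib.sha256(p).digest() for p in payloads]
    group_digest = hashlib.sha256(b"".join(hashes)).digest()
    f = b"".join(hashes) + sign(group_digest)   # authentication info F
    return [(p, f) for p in payloads]

def verify_packet(payload, f, n, verify):
    """Check one received packet against the piggybacked F for a group
    of n packets (each SHA-256 hash is 32 bytes)."""
    hashes = [f[i * 32:(i + 1) * 32] for i in range(n)]
    sig = f[n * 32:]
    group_digest = hashlib.sha256(b"".join(hashes)).digest()
    return verify(group_digest, sig) and hashlib.sha256(payload).digest() in hashes
```

Replacing the replication with an erasure-coded dispersal is what makes the scheme both loss-tolerant and space-efficient.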

Patent
12 May 2003
TL;DR: In this article, an infrastructure mode packet transmitting method for an egress point, an enhanced beacon packet or a negative acknowledgement packet is created and transmitted, while in an ad-hoc mode packet receiving method for a participating vehicle, beacon service table packets, vehicle service table packets, packet bursts, or positive acknowledgement packets are received.
Abstract: Participating vehicles and egress points communicate with each other according to an infrastructure mode. Participating vehicles communicate with other participating vehicles according to an ad-hoc mode. In an infrastructure mode packet transmitting method for a participating vehicle, beacon service table packets, vehicle service table packets, or packet bursts are created and transmitted. In an infrastructure mode packet receiving method for a participating vehicle, beacon service table packets, vehicle service table packets, packet bursts, or negative acknowledgement packets are received. In an infrastructure mode packet transmitting method for an egress point, an enhanced beacon packet or a negative acknowledgement packet is created and transmitted. In an infrastructure mode packet receiving method for an egress point, beacon service table packets, vehicle service table packets, or packet bursts are received. In an ad-hoc mode packet transmitting method for a participating vehicle, beacon service table packets, vehicle service table packets, packet bursts, or positive acknowledgement packets are created and transmitted. In an ad-hoc mode packet receiving method for a participating vehicle, beacon service table packets, vehicle service table packets, packet bursts, or positive acknowledgement packets are received.

Journal ArticleDOI
TL;DR: This research focuses on delay-based congestion avoidance algorithms (DCA), like TCP/Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples, and shows evidence suggesting that a single deployment of DCA is not a viable enhancement to TCP over high-speed paths.
Abstract: The set of TCP congestion control algorithms associated with TCP/Reno (e.g., slow-start and congestion avoidance) have been crucial to ensuring the stability of the Internet. Algorithms such as TCP/NewReno (which has been deployed) and TCP/Vegas (which has not been deployed) represent incrementally deployable enhancements to TCP as they have been shown to improve a TCP connection's throughput without degrading performance to competing flows. Our research focuses on delay-based congestion avoidance algorithms (DCA), like TCP/Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples. Through measurement and simulation, we show evidence suggesting that a single deployment of DCA (i.e., a TCP connection enhanced with a DCA algorithm) is not a viable enhancement to TCP over high-speed paths. We define several performance metrics that quantify the level of correlation between packet loss and RTT. Based on our measurement analysis we find that although there is useful congestion information contained within RTT samples, the level of correlation between an increase in RTT and packet loss is not strong enough to allow a TCP sender to reliably improve throughput. While DCA is able to reduce the packet loss rate experienced by a connection, in its attempts to avoid packet loss, the algorithm will react unnecessarily to RTT variation that is not associated with packet loss. The result is degraded throughput as compared to a similar flow that does not support DCA.
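A DCA adjustment of the TCP/Vegas flavor can be sketched as follows: estimate the queueing backlog from RTT inflation and grow or shrink the window against two thresholds (the conventional alpha/beta). The paper's finding is precisely that the RTT signal driving this rule correlates only weakly with loss on high-speed paths:

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    """TCP/Vegas-style delay-based window adjustment (a sketch; alpha and
    beta are the usual packet-backlog thresholds, and cwnd is in packets)."""
    expected = cwnd / base_rtt              # throughput with no queueing
    actual = cwnd / rtt                     # measured throughput
    diff = (expected - actual) * base_rtt   # estimated packets queued in the path
    if diff < alpha:
        return cwnd + 1                     # spare capacity: grow
    if diff > beta:
        return cwnd - 1                     # queue building: back off
    return cwnd
```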

Proceedings ArticleDOI
09 Jul 2003
TL;DR: A scalable, heuristic scheme for selecting a redundant path between a sender and a receiver is proposed, and it is shown that substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path.
Abstract: Packet loss and end-to-end delay limit delay sensitive applications over the best effort packet switched networks such as the Internet. In our previous work, we have shown that substantial reduction in packet loss can be achieved by sending packets at appropriate sending rates to a receiver from multiple senders, using disjoint paths, and by protecting packets with forward error correction. In this paper, we propose a path diversity with forward error correction (PDF) system for delay sensitive applications over the Internet, in which disjoint paths from a sender to a receiver are created using a collection of relay nodes. We propose a scalable, heuristic scheme for selecting a redundant path between a sender and a receiver, and show that substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path. NS simulations are used to verify the effectiveness of the PDF system.
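Why dividing an FEC block across disjoint paths helps can be seen with a crude outage model in which a path occasionally drops all of its packets, capturing the burstiness that motivates path diversity. The model and numbers are illustrative, not the paper's:

```python
def fail_prob(n, k, split, q1, q2):
    """Probability that an (n, k) FEC block (recoverable if at most n - k
    packets are lost) cannot be decoded, under a crude outage model: each
    path independently drops all of its packets with the given probability.
    `split` packets travel the default path, n - split the redundant one."""
    n1, n2 = split, n - split
    fail = 0.0
    for d1 in (0, 1):            # default path up/down
        for d2 in (0, 1):        # redundant path up/down
            lost = d1 * n1 + d2 * n2
            if lost > n - k:
                fail += (q1 if d1 else 1 - q1) * (q2 if d2 else 1 - q2)
    return fail
```

With n = 10, k = 5 and 5% outage probability per path, sending everything on one path fails with probability 0.05, while a 5/5 split survives any single outage and fails only when both paths are down (0.0025).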

Proceedings ArticleDOI
01 Dec 2003
TL;DR: This paper proposes two dynamic congestion-based load-balanced routing techniques to avoid congestion and shows that the proposed contention avoidance techniques improve the network utilization and reduce the packet loss probability.
Abstract: In optical burst-switched networks, data loss may occur when bursts contend for network resources. There have been several proposed solutions to resolve contentions in order to minimize loss. These localized contention resolution techniques react to contention, but do not address the more fundamental problem of congestion. Hence, there is a need for network level contention avoidance using load-balanced routing techniques in order to minimize the loss. In this paper, we propose two dynamic congestion-based load-balanced routing techniques to avoid congestion. Our simulation results show that the proposed contention avoidance techniques improve the network utilization and reduce the packet loss probability.
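The congestion-based selection such techniques share can be sketched as choosing, among precomputed candidate paths, the one whose most-loaded link is lightest; how the load information is collected and fed back is where dynamic schemes differ. Names here are illustrative:

```python
def pick_path(paths, link_load):
    """Congestion-based path selection sketch: among candidate paths
    (each a list of link ids), choose the one minimizing the load of its
    most-loaded link. `link_load` maps link id -> load in [0, 1]."""
    return min(paths, key=lambda p: max(link_load[link] for link in p))
```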

Patent
16 Apr 2003
TL;DR: In this paper, a protocol for a universal transfer mode (UTM) of transferring data packets at a regulated bit rate is proposed. The protocol supports a plurality of data formats, such as PCM voice data, IP packets, ATM cells, frame relay, and the like.
Abstract: A method and a network for a universal transfer mode (UTM) of transferring data packets at a regulated bit rate are disclosed. The method defines a protocol that uses an adaptive packet header to simplify packet routing and increase transfer speed. The protocol supports a plurality of data formats, such as PCM voice data, IP packets, ATM cells, frame relay and the like. The network preferably includes a plurality of modules that provide interfaces to various data sources. The modules are interconnected by an optic core with adequate inter-module links with preferably no more than two hops being required between any origination/destination pair of modules. The adaptive packet header is used for both signaling and payload transfer. The header is parsed using an algorithm to determine its function. Rate regulation is accomplished using each module control element and egress port controllers to regulate packet transfer. The protocol enables the modules to behave as a single distributed switch capable of multi-terabit transfer rates. The advantage is a high speed distributed switch capable of serving as a transfer backbone for substantially any telecommunications service.