
Showing papers on "Packet loss published in 2006"


Journal ArticleDOI
TL;DR: AOMDV as discussed by the authors is an on-demand, multipath distance vector routing protocol for mobile ad hoc networks, which guarantees loop freedom and disjointness of alternate paths.
Abstract: We develop an on-demand, multipath distance vector routing protocol for mobile ad hoc networks. Specifically, we propose multipath extensions to a well-studied single path routing protocol known as ad hoc on-demand distance vector (AODV). The resulting protocol is referred to as ad hoc on-demand multipath distance vector (AOMDV). The protocol guarantees loop freedom and disjointness of alternate paths. Performance comparison of AOMDV with AODV using ns-2 simulations shows that AOMDV is able to effectively cope with mobility-induced route failures. In particular, it reduces the packet loss by up to 40% and achieves a remarkable improvement in the end-to-end delay (often more than a factor of two). AOMDV also reduces routing overhead by about 30% by reducing the frequency of route discovery operations. Copyright © 2006 John Wiley & Sons, Ltd.
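
A minimal sketch of the loop-freedom invariant behind this kind of multipath extension (class and field names are my own; the paper defines the exact rules): a node never accepts an alternate path unless it is as fresh as, and strictly shorter than, what the node has already advertised for that destination.

```python
# Illustrative sketch, not AOMDV's actual implementation. Loop freedom
# comes from never accepting an alternate path whose hop count is not
# strictly smaller than the hop count this node has already advertised
# for the destination at the current sequence number.

class RouteEntry:
    def __init__(self):
        self.seq_num = -1                    # freshest sequence number seen
        self.advertised_hops = float("inf")  # hop count advertised to neighbors
        self.paths = []                      # (next_hop, last_hop, hop_count)

    def update(self, seq_num, hop_count, next_hop, last_hop):
        if seq_num > self.seq_num:
            # fresher route: drop stale alternates, restart the path set
            self.seq_num = seq_num
            self.paths = [(next_hop, last_hop, hop_count)]
            self.advertised_hops = hop_count
            return True
        if seq_num == self.seq_num and hop_count < self.advertised_hops:
            # equally fresh and strictly shorter: safe to add if disjoint
            # (here: distinct next hop and last hop from every stored path)
            if all(nh != next_hop and lh != last_hop
                   for nh, lh, _ in self.paths):
                self.paths.append((next_hop, last_hop, hop_count))
                return True
        return False
```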

625 citations


Proceedings ArticleDOI
31 Oct 2006
TL;DR: The funneling-MAC mitigates the funneling effect; improves throughput, loss, and energy efficiency; and significantly outperforms representative protocols such as B-MAC as well as more recent hybrid TDMA/CSMA MAC protocols such as Z-MAC.
Abstract: Sensor networks exhibit a unique funneling effect, a product of the distinctive many-to-one, hop-by-hop traffic pattern found in sensor networks, which results in a significant increase in transit traffic intensity, collision, congestion, packet loss, and energy drain as events move closer toward the sink. While network techniques (e.g., congestion control) and application techniques (e.g., aggregation) can help counter this problem, they cannot fully alleviate it. We take a different but complementary approach from those found in the literature and present the design, implementation, and evaluation of a localized, sink-oriented funneling-MAC capable of mitigating the funneling effect and boosting application fidelity in sensor networks. The funneling-MAC is based on CSMA/CA implemented network-wide, with a localized TDMA algorithm overlaid in the funneling region (i.e., within a small number of hops from the sink). In this sense, the funneling-MAC represents a hybrid MAC approach, but one without the scalability problems associated with network-wide deployment of TDMA. The funneling-MAC is 'sink-oriented' because the burden of managing the TDMA scheduling of sensor events in the funneling region falls on the sink node, not on resource-limited sensor nodes; it is 'localized' because TDMA operates only in the funneling region close to the sink, not across the complete sensor field. We show through experimental results from a 45-node mica-2 testbed that the funneling-MAC mitigates the funneling effect; improves throughput, loss, and energy efficiency; and, importantly, significantly outperforms representative protocols such as B-MAC as well as more recent hybrid TDMA/CSMA MAC protocols such as Z-MAC.
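
A toy rendering of the hybrid idea (the depth constant and function names are assumptions): every node defaults to CSMA/CA, and only nodes inside the sink-managed funneling region follow a TDMA slot.

```python
# Toy sketch of funneling-MAC's mode split; the real protocol's region
# sizing and schedule management (done by the sink) are in the paper.

FUNNELING_DEPTH_HOPS = 2   # assumed size of the funneling region

def mac_mode(hops_to_sink, tdma_slot):
    """Pick the access method for a node's next transmission."""
    if hops_to_sink <= FUNNELING_DEPTH_HOPS and tdma_slot is not None:
        return "TDMA"      # sink-assigned slot inside the funneling region
    return "CSMA/CA"       # contention-based access everywhere else

print(mac_mode(1, tdma_slot=3))     # -> TDMA
print(mac_mode(5, tdma_slot=None))  # -> CSMA/CA
```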

317 citations


Journal ArticleDOI
TL;DR: A robust cross-layer architecture that leverages the inherent H.264 error resilience tools and the existing QoS-based IEEE 802.11e MAC protocol possibilities is proposed, which allows graceful video degradation while minimizing the mean packet loss and end-to-end delays.
Abstract: The recently developed H.264 video standard achieves efficient encoding over a bandwidth ranging from a few kilobits per second to several megabits per second. Hence, transporting H.264 video is expected to be an important component of many wireless multimedia services, such as video conferencing, real-time network gaming, and TV broadcasting. However, due to wireless channel characteristics and the lack of QoS support, the basic 802.11 channel access procedure is sufficient only for non-real-time traffic. The delivery should be augmented by mechanisms that take different QoS requirements into account and ultimately adjust the medium access parameters to the characteristics of the video data. In this article we address H.264 wireless video transmission over IEEE 802.11 WLANs by proposing a robust cross-layer architecture that leverages the inherent H.264 error resilience tools (i.e., data partitioning) and the existing QoS mechanisms of the IEEE 802.11e MAC protocol. The performance of the proposed architecture is extensively investigated by simulation. The results indicate that, compared to 802.11 and 802.11e, our cross-layer architecture allows graceful video degradation while minimizing the mean packet loss and end-to-end delay.
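
To make the cross-layer mapping concrete, here is one plausible assignment of H.264 data partitions to 802.11e EDCA access categories (the specific mapping is an assumption for illustration, not necessarily the paper's):

```python
# Hypothetical partition-to-access-category map: the most critical
# partition rides the highest-priority queue, so losses under load
# concentrate in the least important data and degradation is graceful.

EDCA_MAP = {
    "A": "AC_VO",  # partition A: headers and motion vectors (most critical)
    "B": "AC_VI",  # partition B: intra-coded residuals
    "C": "AC_BE",  # partition C: inter-coded residuals (least critical)
}

def access_category(partition: str) -> str:
    return EDCA_MAP.get(partition, "AC_BK")  # anything else -> background

print(access_category("A"))  # -> AC_VO
```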

271 citations


Journal ArticleDOI
11 Aug 2006
TL;DR: The Datagram Congestion Control Protocol or DCCP adds to a UDP-like foundation the minimum mechanisms necessary to support congestion control, shedding light on how congestion control interacts with unreliable transport, how modern network constraints impact protocol design, and how TCP's reliable bytestream semantics intertwine with its other mechanisms, including congestion control.
Abstract: Fast-growing Internet applications like streaming media and telephony prefer timeliness to reliability, making TCP a poor fit. Unfortunately, UDP, the natural alternative, lacks congestion control. High-bandwidth UDP applications must implement congestion control themselves (a difficult task) or risk rendering congested networks unusable. We set out to ease the safe deployment of these applications by designing a congestion-controlled unreliable transport protocol. The outcome, the Datagram Congestion Control Protocol or DCCP, adds to a UDP-like foundation the minimum mechanisms necessary to support congestion control. We thought those mechanisms would resemble TCP's, but without reliability and, especially, cumulative acknowledgements, we had to reconsider almost every aspect of TCP's design. The resulting protocol sheds light on how congestion control interacts with unreliable transport, how modern network constraints impact protocol design, and how TCP's reliable bytestream semantics intertwine with its other mechanisms, including congestion control.

250 citations


Proceedings ArticleDOI
01 Nov 2006
TL;DR: An analytical model is proposed to study the performance improvement of the MAC protocol from the two frame aggregation techniques, A-MPDU (MAC Protocol Data Unit Aggregation) and A-MSDU (MAC Service Data Unit Aggregation); results show that network throughput is significantly improved compared with both randomized and fixed frame aggregation algorithms.
Abstract: The IEEE 802.11a/b/g standards have been widely accepted as the de facto standards for wireless local area networks (WLANs). The recent IEEE 802.11n proposals aim at providing a physical layer transmission rate of up to 600 Mbps. However, to fully utilize this high data rate, the current IEEE 802.11 medium access control (MAC) needs to be enhanced. In this paper, we investigate the performance improvement of the MAC protocol from two frame aggregation techniques, namely A-MPDU (MAC Protocol Data Unit Aggregation) and A-MSDU (MAC Service Data Unit Aggregation). We first propose an analytical model to study the performance under uni-directional and bi-directional data transfer. Our model incorporates packet loss from either collisions or channel errors. Comparison with simulation results shows that the model is accurate in predicting network throughput. We also propose an optimal frame size adaptation algorithm for A-MSDU under error-prone channels. Simulation results show that network throughput is significantly improved when compared with both randomized and fixed frame aggregation algorithms.
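
The intuition for why the two aggregation schemes behave differently under channel errors fits in a two-line calculation (independent per-subframe errors assumed): an A-MSDU shares one frame check sequence, so any error loses the whole aggregate, while A-MPDU subframes carry their own FCS and fail independently.

```python
# Simplified success fractions for N aggregated subframes with
# per-subframe error probability p (independence assumed).

def amsdu_success(p: float, n: int) -> float:
    return (1 - p) ** n      # single FCS: all subframes must survive

def ampdu_success(p: float) -> float:
    return 1 - p             # per-subframe FCS: each survives on its own

for p in (0.01, 0.1):
    print(p, round(amsdu_success(p, 10), 3), round(ampdu_success(p), 3))
# At p = 0.1, a 10-subframe A-MSDU succeeds only ~35% of the time,
# which is why adapting the A-MSDU size to channel error rate pays off.
```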

239 citations


Journal ArticleDOI
TL;DR: A unified geometric interpretation for wireless quality-aware routing metrics is provided and empirical observations of a real-world wireless mesh network suggest that mETX and ENT could achieve a 50% reduction in the average packet loss rate compared with ETX.
Abstract: This paper considers the problem of selecting good paths in a wireless mesh network. It is well known that picking the path with the smallest number of hops between two nodes often leads to poor performance, because such paths tend to use links of marginal quality. As a result, quality-aware routing metrics are desired for networks built solely from wireless radios. Previous work has developed metrics (such as ETX) that work well when wireless channel conditions are relatively static (De Couto et al., 2003), but typical wireless channels experience variations at many time scales. For example, channels may have low average packet loss ratios but high variability, implying that metrics using the mean loss ratio will perform poorly. In this paper, we describe two new metrics, the modified expected number of transmissions (mETX) and the effective number of transmissions (ENT), that work well under a wide variety of channel conditions. In addition to analyzing and evaluating the performance of these metrics, we provide a unified geometric interpretation for wireless quality-aware routing metrics. Empirical observations of a real-world wireless mesh network suggest that mETX and ENT could achieve a 50% reduction in the average packet loss rate compared with ETX.
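
A small simulation makes the paper's motivating point concrete: two links with the same average loss ratio can need the same mean number of transmissions yet behave very differently over time. The burst model and constants below are assumptions; mETX's and ENT's exact definitions are in the paper.

```python
# Same 20% mean loss, different time structure: a mean-only metric like
# ETX rates the links identically, but the bursty link produces long
# retransmission runs, the variability that mETX/ENT account for.
import random, statistics

def tx_counts(loss_fn, packets=10000):
    """Transmissions per delivered packet, retransmitting until success."""
    counts = []
    for _ in range(packets):
        tx = 1
        while loss_fn():
            tx += 1
        counts.append(tx)
    return counts

steady = lambda: random.random() < 0.2     # independent 20% loss

state = {"bad": False}
def bursty():                              # sticky two-state loss, ~20% mean
    p_next_bad = 0.9 if state["bad"] else 0.025
    state["bad"] = random.random() < p_next_bad
    return state["bad"]

for name, fn in (("steady", steady), ("bursty", bursty)):
    c = tx_counts(fn)
    print(name, round(statistics.mean(c), 2), round(statistics.stdev(c), 2))
# Both means come out near 1.25 transmissions, but the bursty link's
# standard deviation is roughly 4x higher: occasional 10+ transmission
# runs that stall higher layers even though the average loss matches.
```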

236 citations


Proceedings ArticleDOI
31 Oct 2006
TL;DR: Empirical measurements of the packet delivery performance of the Telos and MicaZ sensor platforms show that current practices could easily be changed in ways that would greatly improve efficiency and performance.
Abstract: We present empirical measurements of the packet delivery performance of the Telos and MicaZ sensor platforms. At a high level, their behavior is similar to that of earlier platforms: they exhibit a reception "grey region" and temporal variations in packet loss. Looking more deeply, however, there are subtle differences, and looking deeper still, the patterns behind these complexities become clear. Environmental noise (802.11b) has high spatial correlation. Packet loss occurs when a receiver operating near its noise floor experiences a small decrease in received signal strength, rather than an increase in environmental noise; these variations cause the reception "grey region." Packet losses are highly correlated over short time periods, but are independent over longer periods. Based on these findings, current practices could easily be changed in ways that would greatly improve efficiency and performance.

233 citations


Proceedings ArticleDOI
25 Apr 2006
TL;DR: This is the first paper that presents a detailed scheme for detecting selective forwarding attacks in the environment of sensor networks and the simulation results show that even when the channel error rate is 15%, simulating very harsh radio conditions, the detection accuracy of the proposed scheme is over 95%.
Abstract: Selective forwarding attacks may corrupt some mission-critical applications such as military surveillance and forest fire monitoring. In these attacks, malicious nodes behave like normal nodes most of the time but selectively drop sensitive packets, such as a packet reporting the movement of opposing forces. Such selective dropping is hard to detect. In this paper, we propose a lightweight security scheme for detecting selective forwarding attacks. The detection scheme uses a multi-hop acknowledgement technique to raise alarms by obtaining responses from intermediate nodes. The scheme is efficient and reliable in the sense that an intermediate node reports any abnormal packet loss and suspect nodes to both the base station and the source node. To the best of our knowledge, this is the first paper to present a detailed scheme for detecting selective forwarding attacks in sensor networks. Simulation results show that even when the channel error rate is 15%, simulating very harsh radio conditions, the detection accuracy of the proposed scheme is over 95%.
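
A stripped-down sketch of the detection logic (node names, the threshold rule, and the factor of two are assumptions; the paper's scheme is more involved and reports to both the base station and the source):

```python
# Checkpoints along the path return multi-hop ACKs; a node whose
# downstream ACK losses far exceed what channel error can explain
# is flagged as a suspected selective forwarder.

CHANNEL_LOSS = 0.15     # assumed worst-case radio loss rate
SUSPICION_FACTOR = 2.0  # assumed margin before raising an alarm

def suspects(ack_counts, sent):
    """ack_counts: {checkpoint_id: ACKs received}; sent: packets sent."""
    flagged = []
    for node, acks in ack_counts.items():
        miss_rate = 1 - acks / sent
        if miss_rate > SUSPICION_FACTOR * CHANNEL_LOSS:
            flagged.append((node, round(miss_rate, 2)))
    return flagged

print(suspects({"n1": 95, "n2": 90, "n3": 40}, sent=100))
# -> [('n3', 0.6)]: n3's losses cannot be explained by the channel alone
```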

211 citations


Proceedings ArticleDOI
05 Jun 2006
TL;DR: A node priority-based congestion control protocol (PCCP) is introduced to reflect the importance of each node and imposes hop-by-hop control based on the measured congestion degree as well as the node priority index for wireless sensor networks.
Abstract: In wireless sensor networks (WSNs), congestion occurs, for example, when nodes are densely distributed and/or the application produces a high flow rate near the sink due to the convergent nature of upstream traffic. Congestion may cause packet loss, which in turn lowers throughput and wastes energy. Congestion in WSNs therefore needs to be controlled to achieve high energy efficiency, prolong system lifetime, improve fairness, and improve quality of service (QoS) in terms of throughput (or link utilization), packet loss ratio, and packet delay. This paper proposes a node-priority-based congestion control protocol (PCCP) for wireless sensor networks. In PCCP, a node priority index is introduced to reflect the importance of each node. PCCP uses packet inter-arrival time along with packet service time to measure a parameter defined as congestion degree, and imposes hop-by-hop control based on the measured congestion degree as well as the node priority index. PCCP controls congestion faster and more energy-efficiently than other known techniques.
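
The congestion measure described here is simple enough to state in code: congestion degree is the ratio of average packet service time to average packet inter-arrival time, with values above 1 meaning arrivals outpace service. The EWMA smoothing constant below is an assumption for illustration.

```python
# Sketch of PCCP's congestion degree at one node; the hop-by-hop rate
# adjustment weighted by node priority is described in the paper.

ALPHA = 0.1  # EWMA weight (assumed)

class CongestionMonitor:
    def __init__(self):
        self.avg_service = None
        self.avg_interarrival = None

    def on_packet(self, service_time, interarrival_time):
        def ewma(old, new):
            return new if old is None else (1 - ALPHA) * old + ALPHA * new
        self.avg_service = ewma(self.avg_service, service_time)
        self.avg_interarrival = ewma(self.avg_interarrival, interarrival_time)

    def congestion_degree(self):
        # > 1: packets arrive faster than they can be serviced (congested)
        return self.avg_service / self.avg_interarrival
```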

205 citations


01 Mar 2006
TL;DR: CCID 2 should be used by senders who would like to take advantage of the available bandwidth in an environment with rapidly changing conditions, and who are able to adapt to the abrupt changes in the congestion window typical of TCP's Additive Increase Multiplicative Decrease (AIMD) congestion control.
Abstract: This document contains the profile for Congestion Control Identifier 2 (CCID 2), TCP-like Congestion Control, in the Datagram Congestion Control Protocol (DCCP). CCID 2 should be used by senders who would like to take advantage of the available bandwidth in an environment with rapidly changing conditions, and who are able to adapt to the abrupt changes in the congestion window typical of TCP's Additive Increase Multiplicative Decrease (AIMD) congestion control. [STANDARDS-TRACK]
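
The AIMD behavior the profile refers to fits in a few lines (this is only the classic window rule, not the RFC's full algorithm, which also specifies slow start, the ACK ratio, and more):

```python
# Toy additive-increase/multiplicative-decrease window, the dynamic
# CCID 2 senders must tolerate: smooth growth, abrupt halving on loss.

def aimd_on_ack(cwnd: float) -> float:
    return cwnd + 1.0 / cwnd        # ~ +1 packet per round-trip time

def aimd_on_loss(cwnd: float) -> float:
    return max(1.0, cwnd / 2.0)     # halve the window, keep at least 1

cwnd = 10.0
for lost in [False, False, True, False]:
    cwnd = aimd_on_loss(cwnd) if lost else aimd_on_ack(cwnd)
    print(round(cwnd, 2))            # 10.1, 10.2, 5.1, 5.3
```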

188 citations


Journal ArticleDOI
TL;DR: It is argued that routing should not only be aware of, but also be adaptive to, network congestion, and a routing protocol (CRP) with these properties is proposed.
Abstract: Mobility, channel error, and congestion are the main causes of packet loss in mobile ad hoc networks. Reducing packet loss typically involves congestion control operating on top of a mobility- and failure-adaptive routing protocol at the network layer. In current designs, routing is not congestion-adaptive. Routing may let congestion happen, to be detected later by congestion control, but dealing with congestion in this reactive manner results in longer delay and unnecessary packet loss, and requires significant overhead if a new route is needed. The problem becomes more visible in large-scale transmission of heavy traffic such as multimedia data, where congestion is more probable and the negative impact of packet loss on service quality is more significant. We argue that routing should not only be aware of, but also be adaptive to, network congestion. Hence, we propose a routing protocol (CRP) with such properties. Our ns-2 simulation results confirm that CRP improves the packet loss rate and end-to-end delay while incurring significantly smaller protocol overhead and higher energy efficiency compared to AODV and DSR.

Journal ArticleDOI
TL;DR: This work designs models using the results of a subjective test based on 1080 packet losses in 72 minutes of video, and develops three methods, which differ in the amount of information available to them.
Abstract: We consider the problem of predicting packet loss visibility in MPEG-2 video. We use two modeling approaches: CART and GLM. The former classifies each packet loss as visible or not; the latter predicts the probability that a packet loss is visible. For each modeling approach, we develop three methods, which differ in the amount of information available to them. A reduced reference method has access to limited information based on the video at the encoder's side and has access to the video at the decoder's side. A no-reference pixel-based method has access to the video at the decoder's side but lacks access to information at the encoder's side. A no-reference bitstream-based method does not have access to the decoded video either; it has access only to the compressed video bitstream, potentially affected by packet losses. We design our models using the results of a subjective test based on 1080 packet losses in 72 minutes of video.
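
For the GLM branch, the flavor is that of a logistic regression over per-loss features; the feature names below are invented placeholders, not the paper's actual reduced-reference or no-reference feature sets.

```python
# Hedged sketch: predict P(loss is visible) from per-loss features with
# a logistic model (a GLM with logit link), in the spirit of the paper's
# second modeling approach. Features here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy rows: [motion_energy, frames_until_next_I_frame, spatial_extent]
X = np.array([[0.1, 2, 0.2], [0.8, 14, 0.9], [0.5, 7, 0.4], [0.9, 12, 1.0]])
y = np.array([0, 1, 0, 1])           # subjective label: was the loss visible?

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.7, 10, 0.8]])[0, 1])   # estimated P(visible)
```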

Journal ArticleDOI
11 Aug 2006
TL;DR: This work conducts extensive measurements involving both controlled routing updates through two tier-1 ISPs and active probes of a diverse set of end-to-end paths on the Internet, and finds that routing changes contribute significantly to end-to-end packet loss.
Abstract: Extensive measurement studies have shown that end-to-end Internet path performance degradation is correlated with routing dynamics. However, the root cause of the correlation between routing dynamics and such performance degradation is poorly understood. In particular, how do routing changes result in degraded end-to-end path performance in the first place? How do factors such as topological properties, routing policies, and iBGP configurations affect the extent to which such routing events can cause performance degradation? Answers to these questions are critical for improving network performance. In this paper, we conduct extensive measurements that involve both controlled routing updates through two tier-1 ISPs and active probes of a diverse set of end-to-end paths on the Internet. We find that routing changes contribute significantly to end-to-end packet loss. Specifically, we study failover events, in which a link failure leads to a routing change, and recovery events, in which a link repair causes a routing change. In both cases, it is possible to experience data plane performance degradation in terms of increased long loss bursts as well as forwarding loops. Furthermore, we find that common routing policies and iBGP configurations of ISPs can directly affect end-to-end path performance during routing changes. Our work provides new insights into potential measures that network operators can undertake to enhance network performance.

Patent
12 Jul 2006
TL;DR: In this article, a method, device, system, and computer program for providing a transport distribution scheme for a security protocol are disclosed, and at least one security parameter is negotiated with the remote node for transmitting packets through the first packet data connection.
Abstract: A method, device, system and computer program for providing a transport distribution scheme for a security protocol are disclosed. A first packet data connection is established to a remote node for transmitting packet data over a network with a security protocol. An authentication procedure is performed with the remote node via the first packet data connection for establishing a security protocol session with the remote node. At least one security parameter is negotiated with the remote node for transmitting packets through the first packet data connection. A second packet data connection is established to the remote node, and at least one security parameter is negotiated with the remote node for use with the second packet data connection. The first and second packet data connections are handled as packet data subconnections associated with the security protocol session.

Journal ArticleDOI
TL;DR: This paper proposes an energy aware dual-path routing scheme for real-time traffic, which balances node energy utilization to increase the network lifetime, takes network congestion into account to reduce the routing delay across the network and increases the reliability of the packets reaching the destination by introducing minimal data redundancy.

Proceedings ArticleDOI
05 Dec 2006
TL;DR: This paper presents FireFly, a time-synchronized sensor network platform for real-time data streaming across multiple hops, and implements RT-Link, a TDMA-based link layer protocol for message exchange on well-defined time slots and pipelining along multiple hops.
Abstract: Wireless sensor networks have traditionally focused on low duty-cycle applications where sensor data are reported periodically, on the order of seconds or even longer. This is due to typically slow changes in physical variables, the need to keep node costs low, and the goal of extending battery lifetime. However, there is a growing need to support real-time streaming of audio and/or low-rate video even in wireless sensor networks, for use in emergency situations and short-term intruder detection. In this paper, we present FireFly, a time-synchronized sensor network platform for real-time data streaming across multiple hops. FireFly is composed of several integrated layers, including specialized low-cost hardware, a sensor network operating system, a real-time link layer, and network scheduling, which together provide efficient support for applications with timing constraints. In order to achieve high end-to-end throughput, bounded latency, and predictable lifetime, we employ hardware-based time synchronization. Multiple tasks, including audio sampling, networking, and sensor reading, are scheduled using the Nano-RK RTOS. We have implemented RT-Link, a TDMA-based link layer protocol for message exchange in well-defined time slots with pipelining along multiple hops. We use this platform to support two-way audio streaming concurrently with sensing tasks. For interactive voice, we investigate TDMA-based slot scheduling with balanced bi-directional latency while meeting audio timeliness requirements. Finally, we describe our experimental deployment of 42 nodes in a coal mine and present measurements of end-to-end throughput, jitter, packet loss, and voice quality.

Journal ArticleDOI
TL;DR: This work evaluates in-network aggregation algorithms, studies their cost-efficiency using simple mathematical models, and shows through simulations that the proposed techniques for handling packet loss can effectively mitigate the effects of random transmission losses in a power-efficient way.
Abstract: This paper explores in-network aggregation as a power-efficient mechanism for collecting data in wireless sensor networks. In particular, we focus on sensor network scenarios where a large number of nodes produce data periodically. Such a communication model is typical of monitoring applications, an important application domain that sensor networks target. The main idea behind in-network aggregation is that, rather than sending individual data items from sensors to sinks, multiple data items are aggregated as they are forwarded by the sensor network. Through simulations, we evaluate the performance of different in-network aggregation algorithms, including our own cascading timers, in terms of the trade-offs between energy efficiency, data accuracy, and freshness. Our results show that timing, that is, how long a node waits to receive data from its children (downstream nodes with respect to the information sink) before forwarding data on to the next hop (toward the sink), plays a crucial role in the performance of aggregation algorithms for applications that generate data periodically. By carefully selecting when to aggregate and forward data, cascading timers achieves considerable energy savings while maintaining data freshness and accuracy. We also study in-network aggregation's cost-efficiency using simple mathematical models. Since wireless sensor networks are prone to transmission errors and losses can have considerable impact when data aggregation is used, we also propose and evaluate a number of techniques for handling packet loss. Simulations show that, when used in conjunction with aggregation protocols, the proposed techniques can effectively mitigate the effects of random transmission losses in a power-efficient way.
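
The cascading-timers idea can be shown with a toy schedule (the per-hop offset constant is an assumption): nodes farther from the sink fire earlier in each reporting epoch, so every parent has its children's data in hand before forwarding.

```python
# Sketch of hop-staggered aggregation timers; the paper evaluates the
# real algorithm against other in-network aggregation schemes.

HOP_OFFSET_S = 0.5  # per-hop stagger within an epoch (assumed)

def fire_time(hops_to_sink, max_hops, epoch_start):
    """Deeper nodes fire earlier, cascading data toward the sink."""
    return epoch_start + (max_hops - hops_to_sink) * HOP_OFFSET_S

for hops in (3, 2, 1):
    print(hops, fire_time(hops, max_hops=3, epoch_start=0.0))
# hop 3 fires at 0.0 s, hop 2 at 0.5 s, hop 1 at 1.0 s: each node can
# aggregate its children's reports before its own timer expires.
```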

Journal ArticleDOI
TL;DR: An optimization framework is proposed that enables multiple senders to coordinate their packet transmission schedules such that the average quality over all video clients is maximized, and is shown to be very efficient in terms of video quality.
Abstract: We consider the problem of distributed packet selection and scheduling for multiple video streams sharing a communication channel. An optimization framework is proposed, which enables the multiple senders to coordinate their packet transmission schedules, such that the average quality over all video clients is maximized. The framework relies on rate-distortion information that is used to characterize a video packet. This information consists of two quantities: the size of the packet in bits, and its importance for the reconstruction quality of the corresponding stream. A distributed streaming strategy then allows for trading off rate and distortion, not only within a single video stream, but also across different streams. Each of the senders allocates to its own video packets a share of the available bandwidth on the channel in proportion to their importance. We evaluate the performance of the distributed packet scheduling algorithm for two canonical problems in streaming media, namely adaptation to available bandwidth and adaptation to packet loss through prioritized packet retransmissions. Simulation results demonstrate that, for the difficult case of scheduling nonscalably encoded video streams, our framework is very efficient in terms of video quality, both over all streams jointly and also over the individual videos. Compared to a conventional streaming system that does not consider the relative importance of the video packets, the gains in performance range up to 6 dB for the scenario of bandwidth adaptation, and even up to 10 dB for the scenario of random packet loss adaptation.

Journal ArticleDOI
TL;DR: A recursion model is derived that relates the average channel-induced distortion in successive P-frames of decoded video under random packet losses, and is further adapted to characterize the channel distortion in received frames following a single lost frame.
Abstract: This paper analyzes the distortion in decoded video caused by random packet losses in the underlying transmission network. A recursion model is derived that relates the average channel-induced distortion in successive P-frames. The model is applicable to all video encoders using the block-based motion-compensated prediction framework (including the H.261/263/264 and MPEG-1/2/4 video coding standards) and allows for any motion-compensated temporal concealment method at the decoder. The model explicitly considers the interpolation operation invoked for motion-compensated temporal prediction and concealment with sub-pel motion vectors. The model also takes into account two new features of the H.264/AVC standard, namely intra prediction and in-loop deblocking filtering. A comparison with simulation data shows that the model is very accurate over a large range of packet loss rates and encoder intra-block rates. The model is further adapted to characterize the channel distortion in subsequent received frames after a single lost frame. This allows one to easily evaluate the impact of a single frame loss.
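
A heavily simplified member of this recursion family, for intuition only (the coefficients are placeholders; the paper's model additionally captures sub-pel interpolation, intra prediction, and deblocking):

```latex
% D_c(n):   average channel-induced distortion in P-frame n
% p:        packet loss rate
% D_ec(n):  distortion added when frame n's loss is concealed
% \alpha:   per-frame attenuation of propagated error (intra refresh,
%           loop filtering), with 0 < \alpha < 1
\[
  D_c(n) \;=\; p\,D_{ec}(n) \;+\; (1-p)\,\alpha\,D_c(n-1)
\]
```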

Journal ArticleDOI
TL;DR: The question is answered in the affirmative by showing that the optimal throughput-delay tradeoff is still D(n) = Theta(nT(n)), where now D(n) is the average delay per bit.
Abstract: In Part I of this paper, the optimal throughput-delay tradeoff for static wireless networks was shown to be D(n) = Theta(nT(n)), where D(n) and T(n) are the average packet delay and throughput in a network of n nodes, respectively. While this tradeoff captures the essential network dynamics, packets need to scale down with the network size. In this "fluid model," no buffers are required. Due to this packet scaling, D(n) does not correspond to the average delay per bit. This leads to the question whether the tradeoff remains the same when the packet size is kept constant, which necessitates packet scheduling in the network. In this correspondence, this question is answered in the affirmative by showing that the optimal throughput-delay tradeoff is still D(n) = Theta(nT(n)), where now D(n) is the average delay per bit. Packets of constant size necessitate the use of buffers in the network, which in turn requires scheduling packet transmissions in a discrete-time queuing network and analyzing the corresponding delay. Our method consists of deriving packet schedules in the discrete-time network by devising a corresponding continuous-time network and then analyzing the delay induced in the actual discrete network using results from queuing theory for continuous-time networks.

Journal ArticleDOI
TL;DR: This article presents a SIP-based architecture that supports soft handoff for IP-centric wireless networks and ensures that there is no packet loss and that the end-to-end delay jitter is kept under control.
Abstract: Application-level protocol abstraction is required to support seamless mobility in next-generation heterogeneous wireless networks. Session initiation protocol (SIP) provides the required abstraction for mobility support for multimedia applications in such networks. However, the handoff procedure with SIP suffers from undesirable delay and hence packet loss in some cases, which is detrimental to applications like voice over IP (VoIP) or streaming video that demand stringent quality of service (QoS) requirements. In this article we present a SIP-based architecture that supports soft handoff for IP-centric wireless networks. Soft handoff ensures that there is no packet loss and that the end-to-end delay jitter is kept under control.

01 Jul 2006
TL;DR: This document describes an RTP payload format for performing retransmissions and assumes that RTCP feedback, as defined in the extended RTP profile for RTCP-based feedback (denoted RTP/AVPF), is available.
Abstract: RTP retransmission is an effective packet loss recovery technique for real-time applications with relaxed delay bounds. This document describes an RTP payload format for performing retransmissions. Retransmitted RTP packets are sent in a separate stream from the original RTP stream. It is assumed that feedback from receivers to senders is available. In particular, it is assumed that Real-time Transport Control Protocol (RTCP) feedback as defined in the extended RTP profile for RTCP-based feedback (denoted RTP/AVPF) is available in this memo. [STANDARDS-TRACK]

01 Jan 2006
TL;DR: The paper proposes the use of synthetic speech coding algorithms (vocoders) to provide redundancy, since the algorithms produce a very low bit-rate stream, which only adds a small overhead to a packet.
Abstract: This paper describes current problems found with audio applications over the MBONE (Multicast Backbone), and investigates possible solutions to the most common one: packet loss. The principles of packet speech systems are discussed, and how their structure allows the use of redundancy to design viable solutions to the problem. The paper proposes the use of synthetic speech coding algorithms (vocoders) to provide redundancy, since these algorithms produce a very low bit-rate stream that adds only a small overhead to a packet. Preliminary experiments show that normal speech repaired with synthetic-quality speech is intelligible, even at very high loss rates.
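
A sketch of the packetization this implies (codec calls are stubbed as parameters; the exact framing is an assumption): each packet carries the current frame under the primary codec plus a vocoder copy of the previous frame, so one lost packet is repairable from its successor at synthetic quality.

```python
# Redundant-audio sketch: primary payload for frame n, plus a very
# low bit-rate vocoder copy of frame n-1 riding in the same packet.

def make_packet(seq, frame, prev_frame, primary_encode, vocoder_encode):
    return {
        "seq": seq,
        "primary": primary_encode(frame),        # normal-quality payload
        "redundant": vocoder_encode(prev_frame), # tiny copy of frame n-1
    }

def play_frame(received, seq):
    """Prefer the primary copy; fall back to the next packet's redundancy."""
    if seq in received:
        return ("primary", received[seq]["primary"])
    nxt = received.get(seq + 1)
    if nxt is not None:
        return ("synthetic", nxt["redundant"])   # repaired at vocoder quality
    return ("lost", None)
```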

Journal ArticleDOI
TL;DR: This paper proposes a new version of TCP that maintains high throughput when reordering occurs and yet, when packet reordering does not occur, is friendly to other versions of TCP.
Abstract: Most standard implementations of TCP perform poorly when packets are reordered. In this paper, we propose a new version of TCP that maintains high throughput when reordering occurs and yet, when packet reordering does not occur, is friendly to other versions of TCP. The proposed TCP variant, TCP-PR, does not rely on duplicate acknowledgments to detect a packet loss. Instead, timers are maintained to keep track of how long ago a packet was transmitted. If the corresponding acknowledgment has not yet arrived and the elapsed time since the packet was sent is larger than a given threshold, the packet is assumed lost. Because TCP-PR does not rely on duplicate acknowledgments, packet reordering (including out-of-order acknowledgments) has no effect on TCP-PR's performance. Through extensive simulations, we show that TCP-PR performs consistently better than existing mechanisms that try to make TCP more robust to packet reordering. In the case that packets are not reordered, we verify that TCP-PR maintains the same throughput as typical implementations of TCP (specifically, TCP-SACK) and shares network resources fairly. Furthermore, TCP-PR requires changes only to the TCP sender side, making it easier to deploy.
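
The core mechanism reduces to a few lines (the threshold policy is simplified here; the paper derives it from smoothed RTT estimates): losses are declared by timer, never by duplicate ACKs, so reordering cannot trigger spurious retransmissions.

```python
# Timer-based loss detection in the spirit of TCP-PR: a packet is
# presumed lost only when its ACK is overdue by a margin of the RTT.
import time

BETA = 2.0  # assumed multiplier on the estimated round-trip time

def detect_losses(unacked, srtt, now=None):
    """unacked: {seq: send_timestamp}; returns seqs presumed lost."""
    now = time.monotonic() if now is None else now
    return [seq for seq, sent_at in unacked.items()
            if now - sent_at > BETA * srtt]

# A merely reordered ACK arrives before the timer fires, so only the
# genuinely overdue packet is retransmitted.
print(detect_losses({1: 0.0, 2: 0.9}, srtt=0.4, now=1.0))  # -> [1]
```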

Journal ArticleDOI
TL;DR: This work develops a network mobility testbed, implements the network mobility (NEMO) basic support protocol, identifies problems in the architecture that affect handoff and routing performance, and extends a previously proposed route optimization (RO) scheme, OptiNets.
Abstract: Measuring the performance of an implementation of a set of protocols and analyzing the results is crucial to understanding the performance and limitations of the protocols in a real network environment. Based on this information, the protocols and their interactions can be improved to enhance the performance of the whole system. To this end, we have developed a network mobility testbed, implemented the network mobility (NEMO) basic support protocol, and identified problems in the architecture which affect handoff and routing performance. To address the identified handoff performance issues, we have proposed the use of make-before-break handoffs with two network interfaces for NEMO. We have carried out a comparison study of handoffs with NEMO and have shown that the proposed scheme provides near-optimal performance. Further, we have extended a previously proposed route optimization (RO) scheme, OptiNets. We have compared the routing and header overheads using experiments and analysis and shown that the extended OptiNets scheme reduces the overheads of NEMO to a level comparable with Mobile IPv6 RO. Finally, this paper shows that the proposed handoff and RO schemes enable the NEMO protocol to be used in applications sensitive to delay and packet loss.

Journal ArticleDOI
TL;DR: Multiple description (MD) codes, a type of network source code, are used to compensate for the effect of packet dropping on Kalman filtering, and are shown to greatly improve the statistical stability and performance of the Kalman filter over a large set of packet loss scenarios.

01 Apr 2006
TL;DR: This memo defines a Media Delivery Index (MDI) measurement that provides an indication of traffic jitter, a measure of deviation from nominal flow rates, and an at-a-glance data loss measure for a particular flow.
Abstract: This memo defines a Media Delivery Index (MDI) measurement that can be used as a diagnostic tool or a quality indicator for monitoring a network intended to deliver applications such as streaming media, MPEG video, Voice over IP, or other information sensitive to arrival time and packet loss. It provides an indication of traffic jitter, a measure of deviation from nominal flow rates, and a data loss at-a-glance measure for a particular flow. For instance, the MDI may be used as a reference in characterizing and comparing networks carrying UDP streaming media. This memo provides information for the Internet community.
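
The two MDI components are straightforward to compute; the bookkeeping below is a simplified sketch (evenly spaced samples assumed), not the memo's normative definition.

```python
# Delay Factor (DF): how many seconds of buffering the flow's jitter
# demands, from the swing of a virtual buffer drained at the nominal
# rate. Media Loss Rate (MLR): lost or out-of-order media packets per
# second. The MDI is reported as the pair DF:MLR.

def delay_factor(cum_bytes, rate_bps, interval_s):
    """cum_bytes: cumulative bytes received, sampled evenly over interval_s."""
    n = len(cum_bytes)
    drained = [rate_bps / 8 * (i * interval_s / (n - 1)) for i in range(n)]
    vb = [rx - d for rx, d in zip(cum_bytes, drained)]  # virtual buffer, bytes
    return (max(vb) - min(vb)) * 8 / rate_bps           # seconds

def media_loss_rate(lost, out_of_order, interval_s):
    return (lost + out_of_order) / interval_s           # packets per second

print(round(delay_factor([0, 160_000, 300_000], 1_200_000, 2.0), 3))  # ~0.067 s
```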

Journal ArticleDOI
TL;DR: An analytical model is proposed to evaluate the packet loss probability and the average delay for shared buffers at a single switch and it is observed that the shared buffering scheme can significantly reduce packet loss with much smaller switch sizes and fewer FDLs than the output buffering architecture.
Abstract: Packet contention is a major issue in asynchronous optical packet switching networks. Optical buffering, implemented by fiber delay lines (FDLs), is fundamental to many optical switch implementations for resolving contention. Most existing optical buffering implementations are output-based and require a huge number of FDLs as well as larger switch sizes, which impose extra cost on the overall system. In this paper, we consider a shared optical buffering architecture which can reduce the buffer size at a switch. We propose an analytical model to evaluate the packet loss probability and the average delay for shared buffers at a single switch. We then compare the performance of output buffers to shared buffers under different granularities of FDLs. We observe that, by choosing an appropriate granularity, the shared buffering scheme can significantly reduce packet loss with much smaller switch sizes and fewer FDLs than the output buffering architecture. The accuracy of the analytical model is also confirmed by extensive simulation.

Patent
15 Sep 2006
TL;DR: A decoder-ready packet is extracted from data packets encapsulating at least part of a media stream, and delivery performance information for the stream is generated substantially synchronously with the processing of that packet.
Abstract: A method including receiving data packets encapsulating at least part of a media stream, extracting a decoder-ready packet from the data packets, processing the decoder-ready packet; and substantially synchronously with the processing of the decoder-ready packet, generating delivery performance information for the at least part of the media stream, data from which is included in the decoder-ready packet.

Journal ArticleDOI
TL;DR: Experimental results indicate that the architecture and protocols can be combined to yield voice quality on par with the public switched telephone network.
Abstract: The cost savings and novel features associated with voice over IP (VoIP) are driving its adoption by service providers. Unfortunately, the Internet's best-effort service model provides no quality of service guarantees. Because low latency and jitter are the key requirements for supporting high-quality interactive conversations, VoIP applications use UDP to transfer data, thereby subjecting themselves to quality degradations caused by packet loss and network failures. In this paper, we describe an architecture to improve the performance of such VoIP applications. Two protocols are used for localized packet loss recovery and rapid rerouting in the event of network failures. The protocols are deployed on the nodes of an application-level overlay network and require no changes to the underlying infrastructure. Experimental results indicate that the architecture and protocols can be combined to yield voice quality on par with the public switched telephone network.