
Showing papers on "Packet loss published in 2008"


Journal ArticleDOI
TL;DR: It is shown that the minimum error covariance estimator is time-varying and stochastic and does not converge to a steady state; its architecture is independent of the communication protocol and can be implemented using a finite memory buffer if the delivered packets have a finite maximum delay.
Abstract: In this note, we study optimal estimation design for sampled linear systems where the sensor measurements are transmitted to the estimator site via a generic digital communication network. Sensor measurements are subject to random delay or may even be completely lost. We show that the minimum error covariance estimator is time-varying and stochastic and does not converge to a steady state. Moreover, the architecture of this estimator is independent of the communication protocol and can be implemented using a finite memory buffer if the delivered packets have a finite maximum delay. We also present two alternative estimator architectures that are more computationally efficient and provide upper and lower bounds on the performance of the time-varying estimator. The stability of these estimators does not depend on packet delay but only on the overall packet loss probability. Finally, algorithms to compute the critical packet loss probability and estimator performance in terms of error covariance are given and applied to some numerical examples.
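The estimator's stochastic character can be made concrete with a toy sketch (our own illustration, not the note's exact design): a scalar Kalman filter that applies the measurement update only when the packet arrives, so the error covariance itself becomes a random process driven by the arrival sequence.

```python
# Illustrative sketch: scalar Kalman filter with Bernoulli packet arrivals.
# gamma = 1 if the measurement packet was delivered this step, else 0.
# All parameter values (a, c, q, r) are arbitrary assumptions for the demo.
def kalman_step(x, P, y, gamma, a=1.2, c=1.0, q=1.0, r=1.0):
    # Time update (prediction) always runs.
    x_pred = a * x
    P_pred = a * P * a + q
    if gamma:  # measurement arrived: standard Kalman correction
        K = P_pred * c / (c * P_pred * c + r)
        x_new = x_pred + K * (y - c * x_pred)
        P_new = (1 - K * c) * P_pred
    else:      # packet lost: no correction, uncertainty grows
        x_new, P_new = x_pred, P_pred
    return x_new, P_new
```

Because `P_new` depends on the random arrival bit, the covariance sequence never settles to a deterministic steady state, matching the abstract's claim.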

478 citations


Proceedings ArticleDOI
14 Sep 2008
TL;DR: This paper introduces two new components for improving open WiFi data delivery to moving vehicles; the first, QuickWiFi, is a streamlined client-side process to establish end-to-end connectivity, reducing mean connection time to less than 400 ms, from over 10 seconds when using standard wireless networking software.
Abstract: Cabernet is a system for delivering data to and from moving vehicles using open 802.11 (WiFi) access points encountered opportunistically during travel. Using open WiFi access from the road can be challenging. Network connectivity in Cabernet is both fleeting (access points are typically within range for a few seconds) and intermittent (because the access points do not provide continuous coverage), and suffers from high packet loss rates over the wireless channel. On the positive side, WiFi data transfers, when available, can occur at broadband speeds. In this paper, we introduce two new components for improving open WiFi data delivery to moving vehicles: The first, QuickWiFi, is a streamlined client-side process to establish end-to-end connectivity, reducing mean connection time to less than 400 ms, from over 10 seconds when using standard wireless networking software. The second part, CTP, is a transport protocol that distinguishes congestion on the wired portion of the path from losses over the wireless link, resulting in a 2x throughput improvement over TCP. To characterize the amount of open WiFi capacity available to vehicular users, we deployed Cabernet on a fleet of 10 taxis in the Boston area. The long-term average transfer rate achieved was approximately 38 Mbytes/hour per car (86 kbit/s), making Cabernet a viable system for a number of non-interactive applications.

467 citations


01 Sep 2008
TL;DR: This document describes the use of loop-free alternates to provide local protection for unicast traffic in pure IP and MPLS/LDP networks in the event of a single failure, whether link, node or shared risk link group (SRLG).
Abstract: This document describes the use of loop-free alternates to provide local protection for unicast traffic in pure IP and MPLS/LDP networks in the event of a single failure, whether link, node or shared risk link group (SRLG). The goal of this technology is to reduce the packet loss that happens while routers converge after a topology change due to a failure. Rapid failure repair is achieved through use of precalculated backup next-hops that are loop-free and safe to use until the distributed network convergence process completes. This simple approach does not require any support from other routers. The extent to which this goal can be met by this specification is dependent on the topology of the network.
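The core of the mechanism is a simple inequality on shortest-path distances; the sketch below is our generic illustration of that loop-free condition (function and variable names are ours): a neighbor N of router S is a loop-free alternate toward destination D if N's own shortest path to D does not pass back through S.

```python
# Loop-free condition: neighbor n is safe as a backup next-hop for
# destination d (from source s) if  dist(n, d) < dist(n, s) + dist(s, d).
# dist is a nested mapping of precomputed shortest-path distances.
def is_loop_free_alternate(dist, n, s, d):
    return dist[n][d] < dist[n][s] + dist[s][d]

def lfa_candidates(dist, s, d, neighbors, primary):
    # All neighbors other than the primary next-hop that satisfy the
    # loop-free condition; these are the precalculated backup next-hops.
    return [n for n in neighbors if n != primary
            and is_loop_free_alternate(dist, n, s, d)]
```

If a neighbor fails the inequality, traffic forwarded to it could loop back through S while the network is still converging, so it cannot be used as a backup.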

377 citations


Proceedings ArticleDOI
14 Sep 2008
TL;DR: In this setting, the notion of interference cancellation for unmanaged networks - the ability for a single receiver to disambiguate and successfully receive simultaneous overlapping transmissions from multiple unsynchronized sources - is explored, and it is found that techniques can reduce packet loss rate and substantially increase spatial reuse.
Abstract: A fundamental problem with unmanaged wireless networks is high packet loss rates and poor spatial reuse, especially with bursty traffic typical of normal use. To address these limitations, we explore the notion of interference cancellation for unmanaged networks - the ability for a single receiver to disambiguate and successfully receive simultaneous overlapping transmissions from multiple unsynchronized sources. We describe a practical algorithm for interference cancellation, and implement it for ZigBee using software radios. In this setting, we find that our techniques can reduce packet loss rate and substantially increase spatial reuse. With carrier sense set to prevent concurrent sends, our approach reduces the packet loss rate during collisions from 14% to 8% due to improved handling of hidden terminals. Conversely, disabling carrier sense reduces performance for only 7% of all pairs of links and increases the delivery rate for the median pair of links in our testbed by a factor of 1.8 due to improved spatial reuse.

329 citations


Journal ArticleDOI
TL;DR: An analytical framework to evaluate the performance of IPv6-based mobility management protocols and the effect of system parameters, such as subnet residence time, packet arrival rate and wireless link delay, is investigated and shows that there is a trade-off between performance metrics and network parameters.
Abstract: Mobility management with provision of seamless handover is crucial for an efficient support of global roaming of mobile nodes (MNs) in next-generation wireless networks (NGWN). Mobile IPv6 (MIPv6) and its extensions were proposed by IETF for IP layer mobility management. However, performance of IPv6-based mobility management schemes is highly dependent on traffic characteristics and user mobility models. Consequently, it is important to assess this performance in-depth through those two factors. The performance of IPv6-based mobility management schemes is usually evaluated through simulations. This paper proposes an analytical framework to evaluate the performance of IPv6-based mobility management protocols. This proposal does not aim to advocate which is better but rather to study the effects of various network parameters on the performance of these protocols to enlighten decision-making. The effect of system parameters, such as subnet residence time, packet arrival rate and wireless link delay, is investigated for performance evaluation with respect to various metrics like signaling overhead cost, handoff latency and packet loss. Numerical results show that there is a trade-off between performance metrics and network parameters.

313 citations


Proceedings ArticleDOI
13 Apr 2008
TL;DR: This paper proposes a promising technique called COLLIE that performs loss diagnosis for 802.11-based communication by using newly designed metrics that examine error patterns within a physical-layer symbol in order to expose statistical differences between collision and weak signal based losses.
Abstract: It is well known that a packet loss in 802.11 can happen either due to collision or an insufficiently strong signal. However, discerning the exact cause of a packet loss, once it occurs, is known to be quite difficult. In this paper we take a fresh look at this problem of wireless packet loss diagnosis for 802.11-based communication and propose a promising technique called COLLIE. COLLIE performs loss diagnosis by using newly designed metrics that examine error patterns within a physical-layer symbol in order to expose statistical differences between collision and weak signal based losses. We implement COLLIE through custom driver-level modifications in Linux and evaluate its performance experimentally. Our results demonstrate that it has an accuracy between 60% and 95% while allowing a false positive rate of up to 2%. We also demonstrate the use of COLLIE in subsequent link adaptations in both static and mobile wireless usage scenarios through measurements on regular laptops and the Netgear SPH101 Voice-over-WiFi phone. In these experiments, COLLIE led to throughput improvements of 20-60% and reduced retransmission-related costs by 40% depending upon the channel conditions.

226 citations


Journal ArticleDOI
TL;DR: A simple, low-complexity protocol, called variable-structure congestion control protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness.
Abstract: Achieving efficient and fair bandwidth allocation while minimizing packet loss and bottleneck queue in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the XCP protocol addresses this challenge, it requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called variable-structure congestion control protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness. On the downside, VCP converges significantly slower to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns2 simulations over a wide range of network scenarios and find that it significantly outperforms many recently-proposed TCP variants, such as HSTCP, FAST, CUBIC, etc. To gain insight into the behavior of VCP, we analyze a simplified fluid model and prove its global stability for the case of a single bottleneck shared by synchronous flows with identical round-trip times.
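VCP's key trick is to compress the router's congestion state into the two existing ECN bits. A minimal sketch of that idea follows (the thresholds and gain constants here are illustrative assumptions, not necessarily the paper's tuned values): the router quantizes its load factor into one of three codes, and the sender reacts with multiplicative increase, additive increase, or multiplicative decrease.

```python
# Two-bit congestion codes carried in the ECN field (00 is left for
# not-ECN-capable traffic).
LOW, HIGH, OVERLOAD = 0b01, 0b10, 0b11

def encode_load_factor(rho):
    # Router side: quantize the measured load factor rho into a region.
    if rho < 0.8:
        return LOW        # low load
    elif rho < 1.0:
        return HIGH       # high load
    return OVERLOAD       # overload

def update_cwnd(cwnd, code, xi=0.0625, alpha=1.0, beta=0.875):
    # End-host side: MI / AI / MD depending on the echoed code.
    if code == LOW:
        return cwnd * (1 + xi)   # multiplicative increase
    if code == HIGH:
        return cwnd + alpha      # additive increase
    return cwnd * beta           # multiplicative decrease
```

The coarse three-level feedback is what lets VCP approach XCP's behavior without new header fields, at the cost of the slower convergence to fairness noted in the abstract.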

204 citations


Journal ArticleDOI
TL;DR: The major issues that arise when designing a reliable media streaming system for wireless networks are reviewed, including accuracy of characterizing channel fluctuations and effectiveness of application-level adaptation.
Abstract: The success of next-generation mobile communication systems depends on the ability of service providers to engineer new added-value multimedia-rich services, which impose stringent constraints on the underlying delivery/transport architecture. The reliability of real-time services is essential for the viability of any such service offering. The sporadic packet loss typical of wireless channels can be addressed using appropriate techniques such as the widely used packet-level forward error correction. In designing channel-aware media streaming applications, two interrelated and challenging issues should be tackled: accuracy of characterizing channel fluctuations and effectiveness of application-level adaptation. The first challenge requires thorough insight into channel fluctuations and their manifestations at the application level, while the second concerns the way those fluctuations are interpreted and dealt with by adaptive mechanisms such as FEC. In this article we review the major issues that arise when designing a reliable media streaming system for wireless networks.

201 citations


Journal ArticleDOI
TL;DR: This paper proposes a model that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames, and works well for video-telephony-type sequences with low to medium motion.
Abstract: Video communication is often afflicted by various forms of losses, such as packet loss over the Internet. This paper examines the question of whether the packet loss pattern, and in particular, the burst length, is important for accurately estimating the expected mean-squared error distortion resulting from packet loss of compressed video. We focus on the challenging case of low-bit-rate video where each P-frame typically fits within a single packet. Specifically, we: 1) verify that the loss pattern does have a significant effect on the resulting distortion; 2) explain why a loss pattern, for example a burst loss, generally produces a larger distortion than an equal number of isolated losses; and 3) propose a model that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames. The accuracy of the proposed model is validated with H.264/AVC coded video and previous frame concealment, where for most sequences the total distortion is predicted to within ±0.3 dB for burst loss of length two packets, as compared to prior models which underestimate the distortion by about 1.5 dB. Furthermore, as the burst length increases, our prediction is within ±0.7 dB, while prior models degrade and underestimate the distortion by over 3 dB. The proposed model works well for video-telephony-type sequences with low to medium motion. We also present a simple illustrative example of how knowledge of the effect of burst loss can be used to adapt the schedule of video streaming to provide improved performance for a burst loss channel, without requiring an increase in bit rate.
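The qualitative claim — that a burst costs more than the same number of isolated losses — can be illustrated with a toy calculation. The geometric error decay (`r`), per-loss initial distortion (`d0`), and fixed correlation coefficient (`rho`) below are our assumptions for illustration, not the paper's fitted model:

```python
def total_distortion(loss_frames, n_frames=30, d0=1.0, r=0.85, rho=0.6):
    # Sum the per-frame MSE over the sequence. Each lost frame injects
    # distortion d0 that propagates into later frames with decay r;
    # overlapping error frames are positively correlated, adding a
    # cross term 2*rho*sqrt(e_i * e_j) on top of the individual terms.
    total = 0.0
    for t in range(n_frames):
        comps = [d0 * r ** (t - lf) for lf in loss_frames if t >= lf]
        frame = sum(comps)
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                frame += 2 * rho * (comps[i] * comps[j]) ** 0.5
        total += frame
    return total
```

For a burst (adjacent losses) both errors are large at the same time, so the cross terms are large; for isolated losses the first error has mostly decayed before the second occurs, which is why equal loss counts yield unequal distortion.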

184 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to gain helpful information and hints to efficiently face coexistence problems between such networks and optimize their setup in some real-life conditions.
Abstract: Coexistence issues between IEEE 802.11b wireless communication networks and IEEE 802.15.4 wireless sensor networks, operating over the 2.4-GHz industrial, scientific, and medical band, are assessed. In particular, meaningful experiments that are performed through a suitable testbed are presented. Such experiments involve both the physical layer, through measurements of channel power and the SIR, and the network/transport layer, by means of packet loss ratio estimations. Different configurations of the testbed are considered; major characteristics, such as the packet rate, the packet size, the SIR, and the network topology, are varied. The purpose of this paper is to gain helpful information and hints to efficiently face coexistence problems between such networks and optimize their setup in some real-life conditions. Details concerning the testbed, the measurement procedure, and the performed experiments are provided.

183 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that TSVC maintains acceptable packet latency with much less packet overhead, while significantly reducing the packet loss ratio compared with that of the existing public key infrastructure (PKI) based schemes, especially when the road traffic is heavy.
Abstract: In this paper, we propose a timed efficient and secure vehicular communication (TSVC) scheme with privacy preservation, which aims at minimizing the packet overhead in terms of signature overhead and signature verification latency without compromising the security and privacy requirements. Compared with currently existing public key based packet authentication schemes for security and privacy, the communication and computation overhead of TSVC can be significantly reduced due to the short message authentication code (MAC) tag attached in each packet for the packet authentication, by which only a fast hash operation is required to verify each packet. Simulation results demonstrate that TSVC maintains acceptable packet latency with much less packet overhead, while significantly reducing the packet loss ratio compared with that of the existing public key infrastructure (PKI) based schemes, especially when the road traffic is heavy.
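The cost argument can be illustrated in a few lines (this is a generic MAC-tag sketch, not TSVC's actual key management or tag format): verifying a short per-packet MAC tag requires only a fast hash operation, versus a full public-key signature verification in PKI-based schemes.

```python
import hashlib
import hmac

# Toy per-packet authentication with a short MAC tag. Key distribution
# (the hard part TSVC actually addresses) is out of scope here.
def make_tag(key, payload):
    # 8-byte truncated HMAC tag attached to each packet.
    return hmac.new(key, payload, hashlib.sha256).digest()[:8]

def verify(key, payload, tag):
    # One hash computation per packet; constant-time comparison.
    return hmac.compare_digest(make_tag(key, payload), tag)
```

Because the per-packet overhead is a few bytes and one hash, queues drain faster under heavy road traffic, which is the mechanism behind the reduced packet loss ratio reported in the simulations.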

Proceedings ArticleDOI
22 Apr 2008
TL;DR: Two rateless OAP protocols are designed and implemented, both of which replace the data transfer mechanism of the established OAP Deluge protocol with rateless analogs, which significantly improve OAP in such environments by drastically reducing the need for packet rebroadcasting.
Abstract: Over-the-air programming (OAP) is a fundamental service in sensor networks that relies upon reliable broadcast for efficient dissemination. As such, existing OAP protocols become decidedly inefficient (with respect to energy, communication or delay) in unreliable broadcast environments, such as those with relatively high node density or noise. In this paper, we consider OAP approaches based on rateless codes, which significantly improve OAP in such environments by drastically reducing the need for packet rebroadcasting. We thus design and implement two rateless OAP protocols, rateless Deluge and ACKless Deluge, both of which replace the data transfer mechanism of the established OAP Deluge protocol with rateless analogs. Experiments with Tmote Sky motes on single-hop networks with packet loss rates of 7% show these protocols to save significantly in communication over regular Deluge (roughly 15-30% savings in the data plane, and 50-80% in the control plane), and multi-hop experiments reveal similar trends. Simulations further show that our new protocols scale better than standard Deluge (in terms of communication and energy) to high network density. TinyOS code for our implementation can be found at http://nislab.bu.edu.
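The rateless principle these protocols build on can be sketched in a few lines (random linear combinations over GF(2); this is a generic illustration, not Deluge's wire format): the sender streams random XOR combinations of the k fragments of a page, and any k linearly independent combinations suffice to decode, so no specific lost packet ever has to be individually re-requested.

```python
import random

def encode(fragments, rng):
    # One coded packet: random GF(2) coefficient vector plus the XOR of
    # the selected fragments (each fragment is an int here for brevity).
    k = len(fragments)
    coeffs = [rng.getrandbits(1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero packet
    payload = 0
    for c, f in zip(coeffs, fragments):
        if c:
            payload ^= f
    return coeffs, payload

def decode(received, k):
    # Gaussian elimination over GF(2); returns None until rank k.
    pivots = [None] * k
    for c, p in received:
        c = c[:]
        for col in range(k):
            if not c[col]:
                continue
            if pivots[col] is None:
                pivots[col] = (c, p)
                break
            pc, pp = pivots[col]
            c = [a ^ b for a, b in zip(c, pc)]
            p ^= pp
    if any(v is None for v in pivots):
        return None
    frags = [0] * k
    for col in reversed(range(k)):
        c, p = pivots[col]
        for j in range(col + 1, k):
            if c[j]:
                p ^= frags[j]
        frags[col] = p
    return frags
```

Under loss, the receiver simply keeps listening until rank k is reached; there is no per-packet NACK/rebroadcast cycle, which is where the control-plane savings come from.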

Journal ArticleDOI
TL;DR: It is shown that passivity can be maintained in the face of varying delay and packet loss but that it depends fundamentally on the mechanism used to handle missing packets.
Abstract: In this brief, we propose a passivity-based framework for control of bilateral teleoperators under time-varying delays and data loss. The usual scattering formalism which guarantees passivity for any constant time delay is extended in several important ways to handle adverse network dynamics. Communication management modules (CMM) are proposed to reconstruct the scattering variables while guaranteeing passivity of the bilateral teleoperator and asymptotic stability of the master/slave velocities under time-varying delays and data losses. The results are also extended to the discrete domain, in particular to the case where communication between the master and slave robots occurs over a packet-switched network. We show that passivity can be maintained in the face of varying delay and packet loss but that it depends fundamentally on the mechanism used to handle missing packets. Our framework unifies several existing results in the continuous and discrete time domain. We develop novel algorithms for the CMM which not only preserve passivity and stability, but have been shown through experiments to improve tracking performance in a single-degree-of-freedom teleoperator system.
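For reference, the scattering formalism the brief extends maps the power variables (master force $F_m$ and velocity $\dot{x}_m$) into wave variables; this is the standard formulation from the teleoperation literature, with $b > 0$ the characteristic wave impedance:

```latex
u_m = \frac{F_m + b\,\dot{x}_m}{\sqrt{2b}}, \qquad
v_m = \frac{F_m - b\,\dot{x}_m}{\sqrt{2b}},
\qquad\text{so that}\qquad
F_m\,\dot{x}_m = \tfrac{1}{2}\left(u_m^{2} - v_m^{2}\right).
```

Passivity of the communication channel then reduces to showing that the wave energy leaving it never exceeds the wave energy that entered; the CMMs in the brief reconstruct the scattering variables so this inequality survives varying delay and dropped packets.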

Proceedings ArticleDOI
01 Sep 2008
TL;DR: A directional flooding-based routing protocol, called DFR, which relies on a packet flooding technique to increase the reliability and addresses a well-known void problem by allowing at least one node to participate in forwarding a packet.
Abstract: Unlike terrestrial sensor networks, underwater sensor networks (UWSNs) have different characteristics such as a long propagation delay, a narrow bandwidth and high packet loss. Hence, existing path setup-based routing protocols proposed for terrestrial sensor networks are not applicable in the underwater environment. For example, they take much time when establishing a path between source and destination nodes due to the long propagation delay. In addition, the path establishment requires much overhead of control messages. Moreover, the dynamic and high packet loss degrades reliability, which invokes more retransmissions. Even though exiting routing protocols such as VBF were proposed to improve the reliability, they did not take into account the link quality. That is, there is no guarantee that packets reach the sink safely especially when a link is error-prone. In this paper, we therefore propose a directional flooding-based routing protocol, called DFR. Basically, DFR relies on a packet flooding technique to increase the reliability. However, the number of nodes which flood a packet is controlled in order to prevent a packet from flooding over the whole network and the nodes to forward the packet are decided according to the link quality. In addition, DFR also addresses a well-known void problem by allowing at least one node to participate in forwarding a packet. Our simulation study using ns-2 proves that DFR is more suitable for UWSNs especially when links are prone to packet loss.

Patent
03 May 2008
TL;DR: In this article, a method and system for transmitting packets in a packet switching network is presented. Packets received by a packet processor may be prioritized based on the urgency to process them.
Abstract: A method and system for transmitting packets in a packet switching network. Packets received by a packet processor may be prioritized based on the urgency to process them. Packets that are urgent to be processed may be referred to as real-time packets. Packets that are not urgent to be processed may be referred to as non-real-time packets. Real-time packets have a higher priority to be processed than non-real-time packets. A real-time packet may either be discarded or transmitted into a real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time queue congestion conditions. A non-real-time packet may either be discarded or transmitted into a non-real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time and non-real-time queue congestion conditions.
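The described admission logic can be sketched as follows (queue limits and the way value priority scales them are our assumptions; the patent does not fix concrete numbers): real-time packets get strict service priority, and a packet of either class is discarded when its queue is more congested than its value priority allows.

```python
from collections import deque

class PacketSwitch:
    # Toy two-class queueing sketch: a real-time queue served first,
    # and a non-real-time queue served only when the former is empty.
    def __init__(self, rt_limit=8, nrt_limit=32):
        self.rt, self.nrt = deque(), deque()
        self.rt_limit, self.nrt_limit = rt_limit, nrt_limit

    def enqueue(self, packet, real_time, value_priority):
        # value_priority in (0, 1]: higher-value packets tolerate a
        # fuller queue before being discarded.
        if real_time:
            if len(self.rt) < self.rt_limit * value_priority:
                self.rt.append(packet)
                return True
        else:
            if len(self.nrt) < self.nrt_limit * value_priority:
                self.nrt.append(packet)
                return True
        return False  # packet discarded

    def dequeue(self):
        if self.rt:
            return self.rt.popleft()
        return self.nrt.popleft() if self.nrt else None
```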

Book ChapterDOI
11 Jun 2008
TL;DR: Results show that LiveNet is able to accurately reconstruct network topology, determine bandwidth usage and routing paths, identify hot-spot nodes, and disambiguate sources of packet loss observed at the application level.
Abstract: We describe LiveNet, a set of tools and analysis methods for reconstructing the complex behavior of a deployed sensor network. LiveNet is based on the use of multiple passive packet sniffers co-located with the network, which collect packet traces that are merged to form a global picture of the network's operation. The merged trace can be used to reconstruct critical aspects of the network's operation that cannot be observed from a single vantage point or with simple application-level instrumentation. We address several challenges: merging multiple sniffer traces, determining sniffer coverage, and inference of missing information for routing path reconstruction. We perform a detailed validation of LiveNet's accuracy and coverage using a 184-node sensor network testbed, and present results from a real-world deployment involving physiological monitoring of patients during a disaster drill. Our results show that LiveNet is able to accurately reconstruct network topology, determine bandwidth usage and routing paths, identify hot-spot nodes, and disambiguate sources of packet loss observed at the application level.

Journal ArticleDOI
TL;DR: Evalvid-RA’s capabilities in performing close-to-true rate adaptive codec operation with low complexity to enable the simulation of large networks with many adaptive media sources on a single computer are presented.
Abstract: Due to the increasing deployment of conversational real-time applications like VoIP and videoconferencing, the Internet is today facing new challenges. Low end-to-end delay is a vital QoS requirement for these applications, and the best effort Internet architecture does not support this natively. The delay and packet loss statistics are directly coupled to the aggregated traffic characteristics when link utilization is close to saturation. In order to investigate the behavior and quality of such applications under heavy network load, it is therefore necessary to create genuine traffic patterns. Trace files of real compressed video and audio are text files containing the number of bytes per video and audio frame. These can serve as material to construct mathematical traffic models. They can also serve as traffic generators in network simulators since they determine the packet sizes and their time schedule. However, to inspect perceived quality, the compressed binary content is needed to ensure decoding of received media. The EvalVid streaming video tool-set enables this using a sophisticated reassembly engine. Nevertheless, there has been a lack of research solutions for rate adaptive media content. The Internet community fears a congestion collapse if the usage of non-adaptive media content continues to grow. This paper presents a solution named Evalvid-RA for the simulation of true rate adaptive video. The solution generates real rate adaptive MPEG-4 streaming traffic, using the quantizer scale for adjusting the sending rate. A feedback based VBR rate controller is used at simulation time, supporting TFRC and a proprietary congestion control system named P-AQM. Example ns-2 simulations of TFRC and P-AQM demonstrate Evalvid-RA's capabilities in performing close-to-true rate adaptive codec operation with low complexity to enable the simulation of large networks with many adaptive media sources on a single computer.
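The quantizer-scale mechanism can be sketched simply (the controller form and thresholds below are our assumptions; Evalvid-RA's actual VBR controller is more sophisticated): when congestion feedback lowers the allowed sending rate, raise the MPEG-4 quantizer scale (coarser quantization, fewer bits per frame); when headroom appears, lower it again.

```python
def adapt_quantizer(q, measured_rate, target_rate, q_min=2, q_max=31):
    # Toy rate controller: nudge the quantizer scale one step per
    # feedback interval toward the target rate from TFRC/P-AQM.
    if measured_rate > target_rate:
        q = min(q + 1, q_max)   # over budget: quantize more coarsely
    elif measured_rate < 0.9 * target_rate:
        q = max(q - 1, q_min)   # headroom: improve quality
    return q
```

Because the adjustment is driven by transport-layer feedback each interval, the simulated source produces genuinely rate-adaptive traffic rather than replaying a fixed trace.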

Patent
01 Dec 2008
TL;DR: In this paper, the authors present a system and methods for accelerating network packet processing for devices configured to process network traffic at relatively high data rates, which includes a hardware-accelerated packet processing module that handles in-sequence network packets and a software-based processing module for handling out-of-sequence and exception case network packets.
Abstract: Disclosed is a system and methods for accelerating network packet processing for devices configured to process network traffic at relatively high data rates. The system incorporates a hardware-accelerated packet processing module that handles in-sequence network packets and a software-based processing module that handles out-of-sequence and exception case network packets.

Proceedings Article
16 Apr 2008
TL;DR: This work presents a simulation environment for protocols with improved performance under benign conditions that combines a declarative networking system with a robust network simulator and shows that Zyzzyva outperforms protocols like PBFT and Q/U under most but not all conditions, indicating that one-size-fits-all protocols may be hard if not impossible to design in practice.
Abstract: Much recent work on Byzantine state machine replication focuses on protocols with improved performance under benign conditions (LANs, homogeneous replicas, limited crash faults), with relatively little evaluation under typical, practical conditions (WAN delays, packet loss, transient disconnection, shared resources). This makes it difficult for system designers to choose the appropriate protocol for a real target deployment. Moreover, most protocol implementations differ in their choice of runtime environment, crypto library, and transport, hindering direct protocol comparisons even under similar conditions. We present a simulation environment for such protocols that combines a declarative networking system with a robust network simulator. Protocols can be rapidly implemented from pseudocode in the high-level declarative language of the former, while network conditions and (measured) costs of communication packages and crypto primitives can be plugged into the latter. We show that the resulting simulator faithfully predicts the performance of native protocol implementations, both as published and as measured in our local network. We use the simulator to compare representative protocols under identical conditions and rapidly explore the effects of changes in the costs of crypto operations, workloads, network conditions and faults. For example, we show that Zyzzyva outperforms protocols like PBFT and Q/U under most but not all conditions, indicating that one-size-fits-all protocols may be hard if not impossible to design in practice.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: CARS, a novel context-aware rate selection algorithm that makes use of context information to systematically address the above challenges, while maximizing the link throughput, is designed, implemented and evaluated.
Abstract: Traffic querying, road sensing and mobile content delivery are emerging application domains for vehicular networks whose performance depends on the throughput these networks can sustain. Rate adaptation is one of the key mechanisms at the link layer that determine this performance. Rate adaptation in vehicular networks faces the following key challenges: (1) due to the rapid variations of the link quality caused by fading and mobility at vehicular speeds, the transmission rate must adapt fast in order to be effective, (2) during infrequent and bursty transmission, the rate adaptation scheme must be able to estimate the link quality with few or no packets transmitted in the estimation window, (3) the rate adaptation scheme must distinguish losses due to environment from those due to hidden-station induced collision. Our extensive outdoor experiments show that the existing rate adaptation schemes for 802.11 wireless networks underutilize the link capacity in vehicular environments. In this paper, we design, implement and evaluate CARS, a novel context-aware rate selection algorithm that makes use of context information (e.g. vehicle speed and distance from neighbor) to systematically address the above challenges, while maximizing the link throughput. Our experimental evaluation in real outdoor vehicular environments with different mobility scenarios shows that CARS adapts to changing link conditions at high vehicular speeds faster than existing rate-adaptation algorithms. Our scheme achieves significantly higher throughput, up to 79%, in all the tested scenarios, and is robust to packet loss due to collisions, improving the throughput by up to 256% in the presence of hidden stations.

Proceedings ArticleDOI
02 Jun 2008
TL;DR: This paper designs and analyzes path-quality monitoring protocols that reliably raise an alarm when the packet-loss rate and delay exceed a threshold, even when an adversary tries to bias monitoring results by selectively delaying, dropping, modifying, injecting, or preferentially treating packets.
Abstract: Edge networks connected to the Internet need effective monitoring techniques to drive routing decisions and detect violations of Service Level Agreements (SLAs). However, existing measurement tools, like ping, traceroute, and trajectory sampling, are vulnerable to attacks that can make a path look better than it really is. In this paper, we design and analyze path-quality monitoring protocols that reliably raise an alarm when the packet-loss rate and delay exceed a threshold, even when an adversary tries to bias monitoring results by selectively delaying, dropping, modifying, injecting, or preferentially treating packets. Despite the strong threat model we consider in this paper, our protocols are efficient enough to run at line rate on high-speed routers. We present a secure sketching protocol for identifying when packet loss and delay degrade beyond a threshold. This protocol is extremely lightweight, requiring only 250-600 bytes of storage and periodic transmission of a comparably sized IP packet to monitor billions of packets. We also present secure sampling protocols that provide faster feedback and accurate round-trip delay estimates, at the expense of somewhat higher storage and communication costs. We prove that all our protocols satisfy a precise definition of secure path-quality monitoring and derive analytic expressions for the trade-off between statistical accuracy and system overhead. We also compare how our protocols perform in the client-server setting, when paths are asymmetric, and when packet marking is not permitted.
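The sketching idea can be illustrated with a toy second-moment sketch (this is a generic construction for illustration, not the paper's exact protocol or parameters): both endpoints hash every packet into a short keyed counter vector, and the squared distance between the two sketches estimates how many packets were dropped or modified in transit. Without the shared key, an adversary cannot tell which drops the sketch will notice.

```python
import hashlib

C = 64  # number of counters; the real protocol needs only a few hundred bytes

def bucket_sign(key, packet):
    # Keyed hash maps each packet to a bucket and a +/-1 sign.
    h = hashlib.sha256(key + packet).digest()
    return h[0] % C, 1 if h[1] % 2 else -1

def add(sketch, key, packet):
    i, s = bucket_sign(key, packet)
    sketch[i] += s

def discrepancy(tx_sketch, rx_sketch):
    # Squared L2 distance between sender and receiver sketches;
    # its expectation grows with the number of missing/modified packets.
    return sum((a - b) ** 2 for a, b in zip(tx_sketch, rx_sketch))
```

Periodically the receiver sends its sketch back; a discrepancy above threshold raises the alarm, at a storage cost independent of the number of packets monitored.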

Proceedings ArticleDOI
01 Oct 2008
TL;DR: Modifying the well-known GPSR routing protocol with the concept of lifetime achieved a 20% to 40% increase in the packet delivery rate and a significant improvement in packet delivery ratio for different "HELLO" message intervals when compared to GPSR.
Abstract: A modification of the well-known GPSR routing protocol, based on the concept of lifetime, is proposed. The lifetime is calculated between the node and each of its neighbors, and a lifetime timer is set to this value. The timer helps in determining the quality of the link and the duration of the neighbor's existence. During the next-hop selection process, the node selects the neighbor that is closest to the destination with good link quality and a non-zero lifetime timer value, in contrast to GPSR. This results in appropriate selection of the next-hop node in a highly mobile and noisy environment, thus reducing the packet loss. The simulation is conducted for two scenarios where the source and destination are travelling in the same and opposite directions. The results showed that GPSR with Lifetime achieved a 20% to 40% increase in the packet delivery rate and a significant improvement in packet delivery ratio for different "HELLO" message intervals when compared to GPSR.
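A minimal sketch of the modified next-hop rule: filter neighbors to those with a non-expired lifetime timer and acceptable link quality, then apply GPSR's greedy geographic choice. The data layout and the 0.5 link-quality cutoff are assumptions for the example, not values from the paper.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(neighbors, dest, now):
    """Greedy GPSR-style forwarding with a lifetime filter (sketch).

    neighbors: list of dicts with 'pos' (x, y), 'link_quality' in [0, 1],
    and 'lifetime_expiry' (absolute time until which the neighbor is
    predicted to remain in range, derived from relative motion)."""
    alive = [n for n in neighbors
             if n["lifetime_expiry"] > now and n["link_quality"] >= 0.5]
    if not alive:
        return None  # full GPSR would fall back to perimeter mode here
    # among usable neighbors, pick the one geographically closest to dest
    return min(alive, key=lambda n: dist(n["pos"], dest))
```

The key difference from plain GPSR is the `alive` filter: the geographically closest neighbor is skipped when its link is poor or about to break.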

Proceedings Article
02 Jun 2008
TL;DR: It is shown that packet copying is the bottleneck in high-throughput packet forwarding and that moving it off the critical path nearly doubles end-to-end throughput; the results suggest that other mechanisms, such as multi-route forwarding, may be a fruitful way to further improve multi-hop throughput.
Abstract: Recent work in sensor network energy optimization has shown that batch-and-send networks can significantly reduce network energy consumption. Batch-and-send networks rely on effective batch data transport protocols, but the throughput of state-of-the-art protocols is low. We present conditional immediate transmission, a novel packet forwarding mechanism, with which we achieve a 109 kbit/s raw data throughput over a 6-hop multi-channel 250 kbit/s 802.15.4 network; 97% of the theoretical upper bound. We show that packet copying is the bottleneck in high-throughput packet forwarding and that by moving packet copying off the critical path, we nearly double the end-to-end throughput. Our results can be seen as an upper bound on the achievable throughput over a single-route, multi-channel, multi-hop 802.15.4 network. While it might be possible to slightly improve our performance, we are sufficiently close to the theoretical upper bound for such work to be of limited value. Rather, our results suggest that other mechanisms, such as multi-route forwarding, may be a fruitful way to further improve multi-hop throughput.

Journal ArticleDOI
TL;DR: It is argued that the availability of larger buffers in the network enables IPTV to better offer new services (in particular, time-shifted TV, network personal video recorder, and video-on-demand) than the competing platforms.
Abstract: Currently, digital television is gradually replacing analogue TV. Although these digital TV services can be delivered via various broadcast networks (e.g., terrestrial, cable, satellite), Internet Protocol TV over broadband telecommunication networks offers much more than traditional broadcast TV. Not only can it improve the quality that users experience with this linear programming TV service, but it also paves the way for new TV services, such as video-on-demand, time-shifted TV, and network personal video recorder services, because of its integral return channel and the ability to address individual users. This article first provides an overview of a typical IPTV network architecture and some basic video coding concepts. Based on these, we then explain how IPTV can increase the linear programming TV quality experienced by end users by reducing channel-change latency and mitigating packet loss. For the latter, forward error correction and automatic repeat request techniques are discussed, whereas for the former a solution based on a circular buffer strategy is described. This article further argues that the availability of larger buffers in the network enables IPTV to better offer new services (in particular, time-shifted TV, network personal video recorder, and video-on-demand) than the competing platforms.
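The circular-buffer strategy for reducing channel-change latency can be sketched as follows: a network element retains the most recent frames of each channel and, when a client joins, bursts from the latest I-frame so the client can start decoding immediately instead of waiting for the next random access point in the multicast stream. Class and field names here are hypothetical.

```python
from collections import deque

class ChannelBuffer:
    """Circular buffer of a channel's most recent frames (sketch).

    deque(maxlen=...) silently evicts the oldest frame on overflow,
    which is exactly the circular-buffer behavior needed here."""

    def __init__(self, capacity=250):
        self.frames = deque(maxlen=capacity)  # entries: (frame_id, is_iframe)

    def push(self, frame_id, is_iframe):
        self.frames.append((frame_id, is_iframe))

    def join_burst(self):
        """On a channel change, return the buffered tail starting at the
        most recent I-frame; the server bursts these to the new client."""
        buf = list(self.frames)
        for i in range(len(buf) - 1, -1, -1):
            if buf[i][1]:
                return buf[i:]
        return []  # no random access point buffered yet
```

The burst length (and hence the extra bandwidth spike) is bounded by the GOP size, which is why larger network buffers make the scheme more effective.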

Journal ArticleDOI
TL;DR: This paper considers the discrete-time linear quadratic Gaussian (LQG) optimal control problem for control applications over lossy data networks.
Abstract: This paper is concerned with control applications over lossy data networks. Sensor data is transmitted to an estimation-control unit over a network, and control commands are issued to subsystems over the same network. Sensor and control packets may be randomly lost according to a Bernoulli process. In this context, the discrete-time linear quadratic Gaussian (LQG) optimal control problem is considered. It is known that in the scenario described above, and for protocols for which there is no acknowledgment of successful delivery of control packets (e.g. UDP-like protocols), the LQG optimal controller is in general nonlinear. However, the simplicity of a linear sub-optimal solution is attractive for a variety of applications. Accordingly, this paper characterizes the optimal linear static controller and compares its performance to the case when there is acknowledgment of delivery of packets (e.g. TCP-like protocols). Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
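A toy simulation illustrates why packet loss hurts a fixed linear controller in the UDP-like (no acknowledgment) setting: the estimator must propagate its model without knowing whether the last command was actually applied. The scalar plant, gain, and noise level below are invented for illustration and are not the paper's setup.

```python
import random

def simulate(p_loss_ctrl, p_loss_sens, T=5000, seed=1):
    """Scalar toy plant x' = a*x + b*u + w with i.i.d. Bernoulli packet
    loss. Zero input is applied when the control packet is lost
    (UDP-like: the controller gets no acknowledgment), and the estimate
    is propagated through the model when the sensor packet is lost.
    Returns the average quadratic state cost."""
    random.seed(seed)
    a, b, k = 0.9, 1.0, -0.8  # k: a fixed (sub-optimal) linear gain
    x, xhat, cost = 1.0, 1.0, 0.0
    for _ in range(T):
        u = k * xhat
        applied = u if random.random() > p_loss_ctrl else 0.0
        x = a * x + b * applied + random.gauss(0.0, 0.1)
        if random.random() > p_loss_sens:
            xhat = x  # idealized: perfect measurement when packet arrives
        else:
            # propagate the model assuming u was applied -- the controller
            # cannot know otherwise without an acknowledgment
            xhat = a * xhat + b * u
        cost += x * x
    return cost / T
```

Running it with and without losses shows the cost gap that motivates comparing UDP-like and TCP-like (acknowledged) protocols.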

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A new queue-based congestion control protocol with priority support (QCCP-PS) uses the queue length as an indication of congestion degree; it achieves a priority close to the ideal and near-zero packet loss probability.
Abstract: New applications made possible by rapid improvements and miniaturization in hardware have motivated recent developments in wireless multimedia sensor networks (WMSNs). Multimedia applications produce high volumes of data that require high transmission rates, so multimedia traffic is usually high speed. This may cause congestion in the sensor nodes, leading to impairments in the quality of service (QoS) of multimedia applications. Thus, to meet the QoS requirements of multimedia applications, a reliable and fair transport protocol is mandatory. An important function of the transport layer in WMSNs is congestion control. In this paper, we present a new queue-based congestion control protocol with priority support (QCCP-PS), using the queue length as an indication of congestion degree. The rate assignment to each traffic source is based on its priority index as well as its current congestion degree. Simulation results show that the proposed QCCP-PS protocol can detect congestion better than previous mechanisms. Furthermore, it achieves a priority close to the ideal and near-zero packet loss probability, which makes it an efficient congestion control protocol for multimedia traffic in WMSNs. As congestion wastes scarce energy through a large number of retransmissions and packet drops, the proposed QCCP-PS protocol can save energy at each node, given the reduced number of retransmissions and packet losses.
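The rate-assignment idea (priority-weighted shares, further reduced by each source's own congestion degree) can be sketched as below. Function names and the exact weighting are assumptions for illustration, not the paper's formulas.

```python
def congestion_degree(queue_len, queue_cap):
    """Queue occupancy as the congestion indicator (the QCCP-PS premise)."""
    return queue_len / queue_cap

def assign_rates(parent_capacity, children):
    """Split a node's available forwarding rate among its child sources.

    children: list of (priority, queue_len, queue_cap) tuples.
    Each child's share is proportional to its priority index and is
    backed off as its own queue fills up."""
    total_priority = sum(p for p, _, _ in children) or 1
    rates = []
    for p, qlen, qcap in children:
        share = parent_capacity * p / total_priority
        share *= 1.0 - congestion_degree(qlen, qcap)
        rates.append(share)
    return rates
```

Priority-proportional splitting gives high-priority multimedia flows a larger share, while the occupancy back-off keeps queues from overflowing, which is what drives the near-zero loss in the simulations.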

Journal ArticleDOI
TL;DR: The scheme makes use of a packet prioritization strategy that orders video packets based on their contribution to reducing the expected distortion of the received video sequence, and significantly outperforms content-independent packet scheduling schemes.
Abstract: Demand for multimedia services, such as video streaming over wireless networks, has grown dramatically in recent years. The downlink transmission of multiple video sequences to multiple users over a shared resource-limited wireless channel, however, is a daunting task. Among the many challenges in this area are the time-varying channel conditions, limited available resources, such as bandwidth and power, and the different transmission requirements of different video content. This work takes into account the time-varying nature of the wireless channels, as well as the importance of individual video packets, to develop a cross-layer resource allocation and packet scheduling scheme for multiuser video streaming over lossy wireless packet access networks. Assuming that accurate channel feedback is not available at the scheduler, random channel losses combined with complex error concealment at the receiver make it impossible for the scheduler to determine the actual distortion of the sequence at the receiver. Therefore, the objective of the optimization is to minimize the expected distortion of the received sequence, where the expectation is calculated at the scheduler with respect to the packet loss probability in the channel. The expected distortion is used to order the packets in the transmission queue of each user, and then gradients of the expected distortion are used to efficiently allocate resources across users. Simulations show that the proposed scheme performs significantly better than a conventional content-independent scheme for video transmission.
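The packet-ordering step can be sketched as sorting each user's transmission queue by expected distortion reduction per transmitted bit, with the expectation taken over the channel's loss probability. The per-packet fields and the per-bit normalization below are illustrative simplifications of the paper's gradient-based scheme.

```python
def expected_distortion_reduction(pkt, p_loss):
    """Expected drop in receiver distortion if this packet is sent now:
    its distortion contribution is realized only when the packet
    survives the channel (probability 1 - p_loss)."""
    return (1.0 - p_loss) * pkt["distortion_reduction"]

def schedule_queue(packets, p_loss):
    """Content-aware scheduling sketch: order a user's queue by expected
    distortion reduction per bit, so scarce channel resources go to the
    packets that matter most for reconstructed video quality."""
    return sorted(
        packets,
        key=lambda pkt: expected_distortion_reduction(pkt, p_loss) / pkt["bits"],
        reverse=True,
    )
```

A content-independent scheduler would serve these packets in arrival order; the content-aware ordering is what lets an important reference frame jump ahead of a large but less valuable one.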

Proceedings ArticleDOI
12 May 2008
TL;DR: An overview of the model, of its integration into a multimedia model predicting audio-visual quality, and of its application to service monitoring are provided, and a performance analysis shows a high correlation with the results of different subjective video quality perception tests.
Abstract: The paper presents a parameter-based model for predicting the perceived quality of transmitted video for IPTV applications. The core model we derived can be applied both to service monitoring and network or service planning. In its current form, the model covers H.264 and MPEG-2 coded video (standard and high definition) transmitted over IP-links. The model includes factors like the coding bit-rate, the packet loss percentage and the type of packet loss handling used by the codec. The paper provides an overview of the model, of its integration into a multimedia model predicting audio-visual quality, and of its application to service monitoring. A performance analysis is presented showing a high correlation with the results of different subjective video quality perception tests. An outlook highlights future model extensions.
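The shape of such a parameter-based model can be illustrated with an invented formula (NOT the published model or its coefficients): predicted quality saturates with coding bit-rate and decays with the packet-loss percentage, modulated by how well the codec's loss handling conceals errors.

```python
import math

def predicted_mos(bitrate_kbps, loss_pct, concealment_factor=1.0):
    """Illustrative parametric quality model on the 1-5 MOS scale.

    concealment_factor > 1 models weak packet-loss handling (quality
    drops faster with loss); < 1 models strong concealment. All
    coefficients are invented for this sketch."""
    # coding quality: saturating function of bit-rate
    q_coding = 1.0 + 4.0 * (1.0 - math.exp(-bitrate_kbps / 2000.0))
    # transmission impairment: exponential decay with loss percentage
    q = 1.0 + (q_coding - 1.0) * math.exp(-concealment_factor * loss_pct)
    return max(1.0, min(5.0, q))
```

A model of this form needs no access to the decoded video, only header-level parameters, which is what makes it suitable for large-scale service monitoring.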

Patent
16 Apr 2008
TL;DR: In this article, the authors propose a method for dynamically interleaving streams: greater amounts of interleaving are introduced as a stream is transmitted, independently of any source block structure, to spread losses or errors in the channel over a much larger period of time within the original stream than if interleaving were not introduced, provide superior protection against packet loss or packet corruption when used with FEC coding, and allow content zapping and content transition times to be reduced to a minimum.
Abstract: A communications system can provide methods of dynamically interleaving streams, including methods for dynamically introducing greater amounts of interleaving as a stream is transmitted, independently of any source block structure, to spread out losses or errors in the channel over a much larger period of time within the original stream than if interleaving were not introduced, provide superior protection against packet loss or packet corruption when used with FEC coding, provide superior protection against network jitter, and allow content zapping time and content transition time to be reduced to a minimum. Streams may be partitioned into sub-streams, delivering the sub-streams to receivers along different paths through a network and receiving concurrently different sub-streams at a receiver sent from potentially different servers. When used in conjunction with FEC encoding, the methods include delivering portions of an encoding of each source block from potentially different servers.
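The dynamic-interleaving idea can be sketched with a block interleaver whose depth grows as the stream progresses: shallow interleaving at the start keeps zapping latency low, while deeper interleaving later spreads burst losses over a wider span of the original stream. The doubling schedule and block shapes below are assumptions for the example, not the patented method.

```python
def interleave(packets, depth):
    """Simple block interleaver: fill rows of `depth` packets, then read
    out column by column, so adjacent channel losses land on packets that
    were far apart in the original order."""
    rows = [packets[i:i + depth] for i in range(0, len(packets), depth)]
    out = []
    for col in range(depth):
        for row in rows:
            if col < len(row):
                out.append(row[col])
    return out

def dynamic_interleave(packets, start_depth=1, max_depth=8):
    """Grow the interleaving depth as the stream progresses: early blocks
    are nearly passthrough (fast start-up), later blocks are spread
    widely (better burst-loss protection)."""
    out, i, depth = [], 0, start_depth
    while i < len(packets):
        block = packets[i:i + depth * depth]  # up to `depth` rows of `depth`
        out.extend(interleave(block, depth))
        i += len(block)
        depth = min(max_depth, depth * 2)
    return out
```

Note the transform is a pure permutation: every packet is sent exactly once, only the transmission order changes.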

Journal ArticleDOI
TL;DR: Analyzing the theoretically achievable as well as the actually achieved quality of IP-based voice calls using Skype shows to what extent Skype over UMTS is able to keep pace with existing mobile telephony systems and how it reacts to different network characteristics.