scispace - formally typeset

Showing papers on "Packet loss published in 2010"


Journal ArticleDOI
TL;DR: A newly developed NCS model that captures varying transmission intervals, varying delays and communication constraints is presented, together with an explicit construction of a continuum of Lyapunov functions that guarantees stability of the NCS in the presence of these network phenomena.
Abstract: There are many communication imperfections in networked control systems (NCS) such as varying transmission delays, varying sampling/transmission intervals, packet loss, communication constraints and quantization effects. Most of the available literature on NCS focuses on only some of these aspects, while ignoring the others. In this paper we present a general framework that incorporates communication constraints, varying transmission intervals and varying delays. Based on a newly developed NCS model including all these network phenomena, we will provide an explicit construction of a continuum of Lyapunov functions. Based on this continuum of Lyapunov functions we will derive bounds on the maximally allowable transmission interval (MATI) and the maximally allowable delay (MAD) that guarantee stability of the NCS in the presence of communication constraints. The developed theory includes recently improved results for delay-free NCS as a special case. After considering stability, we also study semi-global practical stability (under weaker conditions) and performance of the NCS in terms of Lp gains from disturbance inputs to controlled outputs. The developed results lead to tradeoff curves between MATI, MAD and performance gains that depend on the used protocol. These tradeoff curves provide quantitative information that supports the network designer when selecting appropriate networks and protocols guaranteeing stability and a desirable level of performance, while being robust to specified variations in delays and transmission intervals. The complete design procedure will be illustrated using a benchmark example.

827 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: By studying the interaction between smartphone traffic and the radio power management policy, it is found that the power consumption of the radio can be reduced by 35% with minimal impact on the performance of packet exchanges.
Abstract: Using data from 43 users across two platforms, we present a detailed look at smartphone traffic. We find that browsing contributes over half of the traffic, while each of email, media, and maps contribute roughly 10%. We also find that the overhead of lower layer protocols is high because of small transfer sizes. For half of the transfers that use transport-level security, header bytes correspond to 40% of the total. We show that while packet loss is the main factor that limits the throughput of smartphone traffic, larger send buffers at Internet servers can improve the throughput of a quarter of the transfers. Finally, by studying the interaction between smartphone traffic and the radio power management policy, we find that the power consumption of the radio can be reduced by 35% with minimal impact on the performance of packet exchanges.

438 citations


Journal ArticleDOI
TL;DR: The development of the EEE standard and how energy savings resulting from the adoption of EEE may exceed $400 million per year in the U.S. alone are described and results show that packet coalescing can significantly improve energy efficiency while keeping absolute packet delays to tolerable bounds are presented.
Abstract: Ethernet is the dominant wireline communications technology for LANs with over 1 billion interfaces installed in the U.S. and over 3 billion worldwide. In 2006 the IEEE 802.3 Working Group started an effort to improve the energy efficiency of Ethernet. This effort became IEEE P802.3az Energy Efficient Ethernet (EEE) resulting in IEEE Std 802.3az-2010, which was approved September 30, 2010. EEE uses a Low Power Idle mode to reduce the energy consumption of a link when no packets are being sent. In this article, we describe the development of the EEE standard and how energy savings resulting from the adoption of EEE may exceed $400 million per year in the U.S. alone (and over $1 billion worldwide). We also present results from a simulation-based performance evaluation showing how packet coalescing can be used to improve the energy efficiency of EEE. Our results show that packet coalescing can significantly improve energy efficiency while keeping absolute packet delays to tolerable bounds. We are aware that coalescing may cause packet loss in downstream buffers, especially when using TCP/IP. We explore the effects of coalescing on TCP/IP flows with an ns-2 simulation, note that coalescing is already used to reduce packet processing load on the system CPU, and suggest open questions for future work. This article will help clarify what can be expected when EEE is deployed.

345 citations
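The packet coalescing idea evaluated in the EEE article can be sketched in a few lines: hold outgoing packets until either a count threshold or a timeout fires, so the link exits Low Power Idle once per batch rather than once per packet. The thresholds and event-driven flush logic below are illustrative assumptions, not parameters from IEEE Std 802.3az-2010 or the article's simulations.

```python
# Hypothetical sketch of packet coalescing: packets are buffered until
# either `max_count` accumulate or `timeout` elapses since the first
# arrival in the batch, reducing the number of link wake-ups.
def coalesce(arrivals, max_count, timeout):
    """arrivals: sorted packet arrival times. Returns the list of
    (release_time, batch_size) pairs and the per-packet delays."""
    batches, delays = [], []
    buf = []

    def flush(release):
        batches.append((release, len(buf)))
        delays.extend(release - t for t in buf)
        buf.clear()

    for t in arrivals:
        if buf and t > buf[0] + timeout:
            flush(buf[0] + timeout)      # timer fired before this arrival
        buf.append(t)
        if len(buf) == max_count:
            flush(t)                     # count threshold reached
    if buf:
        flush(buf[0] + timeout)          # drain the final partial batch
    return batches, delays
```

Larger `max_count` or `timeout` values save more wake-ups, while the timer bounds each packet's extra delay at `timeout`, which is exactly the energy/latency tradeoff the article quantifies.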


Journal ArticleDOI
TL;DR: This work models the input and output missing data as two separate Bernoulli processes characterised by probabilities of missing data; a missing output estimator is then designed, and a recursive algorithm for parameter estimation is developed by modifying the Kalman filter-based algorithm.
Abstract: We consider the problem of parameter estimation and output estimation for systems in a transmission control protocol (TCP) based network environment. As a result of networked-induced time delays and packet loss, the input and output data are inevitably subject to randomly missing data. Based on the available incomplete data, we first model the input and output missing data as two separate Bernoulli processes characterised by probabilities of missing data, then a missing output estimator is designed, and finally we develop a recursive algorithm for parameter estimation by modifying the Kalman filter-based algorithm. Under the stochastic framework, convergence properties of both the parameter estimation and output estimation are established. Simulation results illustrate the effectiveness of the proposed algorithms.

242 citations
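The core mechanism of the paper, estimation when measurements are randomly missing, can be illustrated with a scalar toy filter. The system, gains and Bernoulli-loss semantics below are illustrative assumptions, not the paper's algorithm: when a sample is lost, the filter simply skips the measurement update, which is equivalent to substituting the predicted output.

```python
# Hedged sketch: scalar Kalman filtering over a lossy link. gamma_k marks
# whether measurement y_k arrived (a Bernoulli process); on a loss the
# filter performs only the time update.
def lossy_kalman(ys, gammas, a=0.9, q=0.01, r=0.1):
    """ys: measurements; gammas: booleans, True if y_k was received.
    Returns the sequence of state estimates."""
    xhat, P = 0.0, 1.0
    estimates = []
    for y, got in zip(ys, gammas):
        xhat, P = a * xhat, a * a * P + q    # time update
        if got:                               # measurement update only if received
            K = P / (P + r)
            xhat += K * (y - xhat)
            P *= (1 - K)
        estimates.append(xhat)
    return estimates
```

With no losses the estimate converges to the true output; as the loss probability grows, the error covariance `P` stays larger between updates, mirroring the convergence analysis in the paper.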


Journal ArticleDOI
TL;DR: In this paper, the authors considered the consensus problem for a team of second-order mobile agents communicating via a network with noise, variable delays and occasional packet losses and proposed an approach to designing consensus protocol and numerical examples are given to illustrate the results.
Abstract: The consensus problem is considered for a team of second-order mobile agents communicating via a network with noise, variable delays and occasional packet losses. A queuing mechanism is applied and the switching process of the interaction topology of the network is modeled as a Bernoulli random process. In such a framework, a necessary and sufficient condition is presented for the mean-square robust consensus. Moreover, a necessary and sufficient condition of the solvability of the mean-square robust consensus problem is established. An approach to designing consensus protocol is proposed and numerical examples are given to illustrate the results.

212 citations


01 Jan 2010
TL;DR: An approach to designing consensus protocol is proposed and numerical examples are given to illustrate the results.
Abstract: The consensus problem is considered for a team of second-order mobile agents communicating via a network with noise, variable delays and occasional packet losses. A queuing mechanism is applied and the switching process of the interaction topology of the network is modeled as a Bernoulli random process. In such a framework, a necessary and sufficient condition is presented for the mean-square robust consensus. Moreover, a necessary and sufficient condition of the solvability of the mean-square robust consensus problem is established. An approach to designing consensus protocol is proposed and numerical examples are given to illustrate the results.

199 citations


Journal ArticleDOI
TL;DR: This paper develops a CC scheme in which control decisions are made based on global information and a DC scheme that enables distributed actuators to make control decisions locally, and it proposes a method for reducing the packet-loss rate.
Abstract: This paper considers joint problems of control and communication in wireless sensor and actuator networks (WSANs) for building-environment control systems. In traditional control systems, centralized control (CC) and distributed control (DC) are two major approaches. However, little work has been done in comparing the two approaches in joint problems of control and communication, particularly in WSANs serving as components of control loops. In this paper, we develop a CC scheme in which control decisions are made based on global information and a DC scheme which enables distributed actuators to make control decisions locally. We also develop methods that enable wireless communications among system devices compatible with the control strategies, and propose a method for reducing the packet-loss rate. We compare the two schemes in simulations across several respects. Simulation results show that the DC can achieve control performance comparable to that of the CC, while the DC is more robust against packet loss and has lower computational complexity than the CC. Furthermore, the DC has shorter actuation latency than the CC under certain conditions.

185 citations


Journal ArticleDOI
TL;DR: Simulation results show that the EBGR scheme significantly outperforms existing protocols in wireless sensor networks with highly dynamic network topologies and extends to lossy sensor networks to provide energy-efficient routing in the presence of unreliable communication links.
Abstract: Geographic routing is an attractive localized routing scheme for wireless sensor networks (WSNs) due to its desirable scalability and efficiency. Maintaining neighborhood information for packet forwarding can achieve a high efficiency in geographic routing, but may not be appropriate for WSNs in highly dynamic scenarios where the network topology changes frequently due to node mobility and availability. We propose a novel online routing scheme, called Energy-efficient Beaconless Geographic Routing (EBGR), which can provide loop-free, fully stateless, energy-efficient sensor-to-sink routing at a low communication overhead without the help of prior neighborhood knowledge. In EBGR, each node first calculates its ideal next-hop relay position on the straight line toward the sink based on the energy-optimal forwarding distance, and each forwarder selects the neighbor closest to its ideal next-hop relay position as the next-hop relay using the Request-To-Send/Clear-To-Send (RTS/CTS) handshaking mechanism. We establish the lower and upper bounds on hop count and the upper bound on energy consumption under EBGR for sensor-to-sink routing, assuming no packet loss and no failures in greedy forwarding. Moreover, we demonstrate that the expected total energy consumption along a route toward the sink under EBGR approaches the lower bound as the node deployment density increases. We also extend EBGR to lossy sensor networks to provide energy-efficient routing in the presence of unreliable communication links. Simulation results show that our scheme significantly outperforms existing protocols in wireless sensor networks with highly dynamic network topologies.

166 citations
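EBGR's two central steps, computing an energy-optimal forwarding distance and picking the neighbor closest to the ideal relay position on the line toward the sink, can be sketched using a common first-order radio model (electronics energy per bit plus amplifier energy proportional to distance to the power alpha). The radio constants and the model itself are illustrative assumptions, not the paper's exact derivation.

```python
import math

# Hedged sketch of EBGR-style relay selection under an assumed
# first-order radio model. optimal_hop_distance minimises energy per
# metre of progress; select_relay picks the neighbor closest to the
# ideal relay position (assumes node != sink).
def optimal_hop_distance(e_elec=50e-9, e_amp=100e-12, alpha=2):
    """Energy-optimal per-hop distance for cost 2*e_elec + e_amp*d^alpha."""
    return (2 * e_elec / (e_amp * (alpha - 1))) ** (1 / alpha)

def select_relay(node, sink, neighbors, d_opt):
    dx, dy = sink[0] - node[0], sink[1] - node[1]
    dist = math.hypot(dx, dy)
    step = min(d_opt, dist)                      # do not overshoot the sink
    ideal = (node[0] + dx / dist * step, node[1] + dy / dist * step)
    # neighbor closest to the ideal next-hop relay position
    return min(neighbors, key=lambda n: math.hypot(n[0] - ideal[0],
                                                   n[1] - ideal[1]))
```

In the actual protocol this selection is resolved distributedly via RTS/CTS timers rather than by a node holding a neighbor table, which is what makes the scheme beaconless and stateless.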


Journal ArticleDOI
TL;DR: Compared to typical routing algorithms in sensor networks and the traditional ant-based algorithm, this new algorithm has better convergence and provides significantly better QoS for multiple types of services in wireless multimedia sensor networks.

148 citations


Journal ArticleDOI
TL;DR: A detailed survey of the field of buffer management policies in the context of packet transmission for network switches is provided, describing various models of the problem that have been studied, and summarizing the known results.
Abstract: Over the past decade, there has been great interest in the study of buffer management policies in the context of packet transmission for network switches. In a typical model, a switch receives packets on one or more input ports, with each packet having a designated output port through which it should be transmitted. An online policy must consider bandwidth limits on the rate of transmission, memory constraints impacting the buffering of packets within a switch, and variations in packet properties used to differentiate quality of service. With so many constraints, a switch may not be able to deliver all packets, in which case some will be dropped. In the online algorithms community, researchers have used competitive analysis to evaluate the quality of an online policy in maximizing the value of those packets it is able to transmit. In this article, we provide a detailed survey of the field, describing various models of the problem that have been studied, and summarizing the known results.

139 citations
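To make the model concrete, here is a textbook greedy policy for a single bounded buffer with valued packets, an illustration of the kind of online policy the survey analyses, not a specific policy from it: when the buffer of size B overflows, the least valuable packet is dropped, and each time slot transmits the most valuable buffered packet.

```python
import heapq

# Toy online buffer management policy (illustrative, not from the survey).
def greedy_buffer(events, B):
    """events: list of ('arrive', value) or ('send',) in time order.
    Returns (total transmitted value, number of dropped packets)."""
    heap, sent, drops = [], 0.0, 0       # min-heap keeps the cheapest on top
    for ev in events:
        if ev[0] == 'arrive':
            heapq.heappush(heap, ev[1])
            if len(heap) > B:
                heapq.heappop(heap)      # buffer full: drop least valuable
                drops += 1
        elif heap:                        # 'send': transmit the max value
            best = max(heap)
            heap.remove(best)
            heapq.heapify(heap)
            sent += best
    return sent, drops
```

Competitive analysis then compares `sent` against the value an offline optimum could transmit on the same event sequence.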


Journal ArticleDOI
TL;DR: This technical note investigates the design of sliding mode control subject to packet losses, and using the stochastic Lyapunov method, the state trajectories are shown to enter into (in mean square) a neighborhood of the specified sliding surface.
Abstract: This technical note investigates the design of sliding mode control subject to packet losses. It is assumed that there exists a communication network in the feedback loop, and the dropout of data packets may occur. First, an estimation method is proposed to compensate for the packet dropout. Subsequently, a discrete-time integral sliding surface involving the dropout probability is introduced and a sliding mode controller is designed. By using the stochastic Lyapunov method, the state trajectories are shown to enter (in mean square) a neighborhood of the specified sliding surface. Meanwhile, the stability of the sliding mode dynamics is also ensured. Finally, a numerical simulation example is provided.

Journal ArticleDOI
TL;DR: An analytical framework is developed to investigate the impacts of network dynamics on the user perceived video quality and proposes adaptive playout buffer management schemes to optimally manage the threshold of video playback towards the maximal user utility.
Abstract: We develop an analytical framework to investigate the impacts of network dynamics on the user-perceived video quality. Our investigation stands from the end user's perspective by analyzing the receiver playout buffer. Specifically, we model the playback buffer at the receiver by a G/G/1/∞ and a G/G/1/N queue, respectively, with arbitrary patterns of packet arrival and playback. We then examine the transient queue length of the buffer using the diffusion approximation. We obtain the closed-form expressions of the video quality in terms of the start-up delay, fluency of video playback and packet loss, and represent them by the network statistics, i.e., the average network throughput and delay jitter. Based on the analytical framework, we propose adaptive playout buffer management schemes to optimally manage the threshold of video playback towards the maximal user utility, according to different quality-of-service requirements of end users. The proposed framework is validated by extensive simulations.
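The start-up-delay versus fluency tradeoff that the framework captures analytically can be seen in a toy tick-based simulation (the G/G/1 queues and diffusion approximation are replaced here by a discrete clock, and the threshold semantics are an assumption of this sketch): playback starts once the buffer holds `threshold` packets, one packet is consumed per tick, and an empty buffer mid-playback counts as a freeze.

```python
# Minimal playout-buffer sketch: returns (start-up delay in ticks,
# number of playback freezes). Arrival pattern is arbitrary, as in the
# paper's G/G model.
def playout(arrivals_per_tick, threshold):
    buf, started, startup_delay, freezes = 0, False, None, 0
    for tick, a in enumerate(arrivals_per_tick):
        buf += a
        if not started:
            if buf >= threshold:
                started = True
                if startup_delay is None:
                    startup_delay = tick     # first time playback starts
            continue
        if buf == 0:
            freezes += 1                     # freeze: rebuffer to threshold
            started = False
        else:
            buf -= 1                         # play one packet per tick
    return startup_delay, freezes
```

Raising the threshold increases start-up delay but absorbs more jitter before freezing, which is precisely the knob the adaptive buffer management schemes tune.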

Journal ArticleDOI
TL;DR: An analysis of the real-time performance that can be achieved in quality-of-service (QoS)-enabled 802.11 networks has been carried out and a detailed analysis of latencies and packet loss ratios for a typical enhanced distributed channel access (EDCA) infrastructure wireless local area network (WLAN).
Abstract: Nowadays, wireless communication technologies are being employed in an ever increasing number of different application areas, including industrial environments. Benefits deriving from such a choice are manifold and include, among the others, reduced deployment costs, enhanced flexibility and support for mobility. Unfortunately, because of a number of reasons that have been largely debated in the literature, wireless systems cannot be thought of as a means able to fully replace wired networks in production plants, in particular, when real-time behavior is a key issue. In this paper, an analysis of the real-time performance that can be achieved in quality-of-service (QoS)-enabled 802.11 networks has been carried out. In particular, a detailed analysis of latencies and packet loss ratios for a typical enhanced distributed channel access (EDCA) infrastructure wireless local area network (WLAN) is presented, obtained through numerical simulations. A number of aspects that may affect suitability for the use in control systems have been taken into account, including the Transmission Opportunity (TXOP) mechanism, the internal architecture of the AP, the use of a time-division multiple access (TDMA)-based communication scheme as well as the adoption of broadcast communications.

Journal ArticleDOI
TL;DR: A WBAN-specific dynamic power control mechanism is developed that performs adaptive body posture inference for optimal power assignments and is experimentally evaluated and compared with a number of static and dynamic power assignment schemes.
Abstract: This article explores on-body energy management mechanisms in the context of emerging wireless body area networks. In severely resource-constrained systems such as WBANs, energy can usually be traded for packet delay, loss, and system throughput, whenever applicable. Using experimental results from a prototype wearable sensor network, the article first characterizes the dynamic nature of on-body links with varying body postures. A literature review follows to examine the relevant transmission power control mechanisms for ensuring a balance between energy consumption and packet loss on links between body-mounted sensors. Specific emphasis is put on approaches that are customized for TPC via tracking of postural node mobility. Then the article develops a WBAN-specific dynamic power control mechanism that performs adaptive body posture inference for optimal power assignments. Finally, performance of the mechanism is experimentally evaluated and compared with a number of static and dynamic power assignment schemes.

Journal ArticleDOI
TL;DR: To trade sensor energy expenditure for state estimation accuracy, a predictive control algorithm is developed which, in an online fashion, determines the transmission power levels and codebooks to be used by the sensors.
Abstract: We study state estimation via wireless sensors over fading channels. Packet loss probabilities depend upon time-varying channel gains, packet lengths and transmission power levels of the sensors. Measurements are coded into packets by using either independent coding or distributed zero-error coding. At the gateway, a time-varying Kalman filter uses the received packets to provide the state estimates. To trade sensor energy expenditure for state estimation accuracy, we develop a predictive control algorithm which, in an online fashion, determines the transmission power levels and codebooks to be used by the sensors. To further conserve sensor energy, the controller is located at the gateway and sends coarsely quantized power increment commands, only whenever deemed necessary. Simulations based on real channel measurements illustrate that the proposed method gives excellent results.

Journal ArticleDOI
TL;DR: The model was developed using data from high encoding-rate videos, and designed for high-quality video transported over a mostly reliable network; however, the experiments show the model is applicable to different encoding rates.
Abstract: In this paper, we propose a generalized linear model for video packet loss visibility that is applicable to different group-of-picture structures. We develop the model using three subjective experiment data sets that span various encoding standards (H.264 and MPEG-2), group-of-picture structures, and decoder error concealment choices. We consider factors not only within a packet, but also in its vicinity, to account for possible temporal and spatial masking effects. We discover that the factors of scene cuts, camera motion, and reference distance are highly significant to the packet loss visibility. We apply our visibility model to packet prioritization for a video stream; when the network gets congested at an intermediate router, the router is able to decide which packets to drop such that visual quality of the video is minimally impacted. To show the effectiveness of our visibility model and its corresponding packet prioritization method, experiments are done to compare our perceptual-quality-based packet prioritization approach with existing Drop-Tail and Hint-Track-inspired cumulative-MSE-based prioritization methods. The result shows that our prioritization method produces videos of higher perceptual quality for different network conditions and group-of-picture structures. Our model was developed using data from high encoding-rate videos, and designed for high-quality video transported over a mostly reliable network; however, the experiments show the model is applicable to different encoding rates.

BookDOI
15 Aug 2010
TL;DR: In this paper, game theory is used to address a wide range of issues in wireless communications and examines how it can be employed in infrastructure-based wireless networks and multihop networks to reduce power consumption, improve system capacity, decrease packet loss, and enhance network resilience.
Abstract: Originally invented to explain complicated economic behaviors, game theory is poised to become a fundamental technique in the field of wireless communications and networks. This book explains how game theory can be used to address a wide range of issues in wireless communications and examines how it can be employed in infrastructure-based wireless networks and multihop networks to reduce power consumption, improve system capacity, decrease packet loss, and enhance network resilience. The authors demonstrate how to effectively apply the game theoretic model to handle issues of resource allocation, congestion control, attack, routing, energy management, packet forwarding, and MAC.

Patent
30 Mar 2010
TL;DR: In this article, a method for forwarding multi-destination packets through a network device is described, in which the network device forwards the multi-destination packet to one or more servers based on a bit value carried in the packet's virtual network tag.
Abstract: In one embodiment, a method includes receiving a multi-destination packet at a switch in communication with a plurality of servers through a network device, identifying a port receiving the multi-destination packet at the switch or a forwarding topology for the multi-destination packet, selecting a bit value based on the identified port or forwarding topology, inserting the bit value into a field in a virtual network tag in the multi-destination packet, and forwarding the multi-destination packet with the virtual network tag to the network device. The network device is configured to forward the multi-destination packet to one or more of the servers based on the bit value in the multi-destination packet. An apparatus for forwarding multi-destination packets is also disclosed.

Journal ArticleDOI
TL;DR: A simple and accurate stochastic model for the steady-state throughput of a TCP NewReno bulk data transfer as a function of round-trip time and loss behavior, formulated using a flexible two-parameter loss model that can better represent the diverse packet loss scenarios encountered by TCP on the Internet.
Abstract: This paper develops a simple and accurate stochastic model for the steady-state throughput of a TCP NewReno bulk data transfer as a function of round-trip time and loss behavior. Our model builds upon extensive prior work on TCP Reno throughput models but differs from these prior works in three key aspects. First, our model introduces an analytical characterization of the TCP NewReno fast recovery algorithm. Second, our model incorporates an accurate formulation of NewReno's timeout behavior. Third, our model is formulated using a flexible two-parameter loss model that can better represent the diverse packet loss scenarios encountered by TCP on the Internet. We validated our model by conducting a large number of simulations using the ns-2 simulator and by conducting emulation and Internet experiments using a NewReno implementation in the BSD TCP/IP protocol stack. The main findings from the experiments are: 1) the proposed model accurately predicts the steady-state throughput for TCP NewReno bulk data transfers under a wide range of network conditions; 2) TCP NewReno significantly outperforms TCP Reno in many of the scenarios considered; and 3) using existing TCP Reno models to estimate TCP NewReno throughput may introduce significant errors.
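For context, the classic Reno throughput approximations that the NewReno model builds upon can be written in a few lines. These are the well-known square-root law and the PFTK-style full model with timeouts, shown only as the baseline the paper refines; they are not the paper's two-parameter NewReno model.

```python
import math

# Baseline TCP Reno steady-state throughput approximations (bytes/s).
# p is the loss event rate, b the number of packets per ACK.
def reno_throughput(mss, rtt, p, rto=None, b=2):
    if rto is None:
        # square-root law: MSS / (RTT * sqrt(2bp/3))
        return mss / (rtt * math.sqrt(2 * b * p / 3))
    # PFTK-style model including retransmission timeouts
    denom = rtt * math.sqrt(2 * b * p / 3) + rto * min(
        1, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    return mss / denom
```

Accounting for timeouts always lowers the predicted throughput, and the paper's key point is that Reno-based formulas like these can introduce significant errors when applied to NewReno transfers.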

01 Jan 2010
TL;DR: In this article, the authors present a collection protocol for sensor networks called SenseCode, which employs network coding to gracefully introduce a configurable amount of redundant information into the network, thereby decreasing end-to-end packet error rate.
Abstract: Designing a communication protocol for sensor networks often involves obtaining the right trade-off between energy efficiency and end-to-end packet error rate. In this article, we show that network coding provides a means to elegantly balance these two goals. We present the design and implementation of SenseCode, a collection protocol for sensor networks—and, to the best of our knowledge, the first such implemented protocol to employ network coding. SenseCode provides a way to gracefully introduce a configurable amount of redundant information into the network, thereby decreasing end-to-end packet error rate in the face of packet loss. We compare SenseCode to the best (to our knowledge) existing alternative and show that it reduces end-to-end packet error rate in highly dynamic environments, while consuming a comparable amount of network resources. We have implemented SenseCode as a TinyOS module and evaluate it through extensive TOSSIM simulations.
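The redundancy mechanism behind SenseCode can be illustrated with a toy random linear network code over GF(2) (the protocol's actual field, packet format and TinyOS implementation may differ): k source packets are combined with random bit coefficients into n > k coded packets, and any k received packets with linearly independent coefficient vectors recover the originals by Gaussian elimination, which is what buys tolerance to packet loss.

```python
import random

# Toy random linear network coding over GF(2). Packets are ints used as
# bit vectors, so GF(2) addition is XOR.
def encode(sources, n, rng):
    coded = []
    for _ in range(n):
        coeffs = [rng.randint(0, 1) for _ in sources]
        payload = 0
        for c, s in zip(coeffs, sources):
            if c:
                payload ^= s
        coded.append((coeffs, payload))
    return coded

def decode(received, k):
    """Gaussian elimination over GF(2); returns the k source packets,
    or None if the received coefficient vectors have rank < k."""
    rows = [(c[:], p) for c, p in received]
    solved = [None] * k
    for col in range(k):                 # forward elimination
        pivot = next((i for i, (c, _) in enumerate(rows) if c[col]), None)
        if pivot is None:
            return None                  # rank deficient: cannot decode
        pc, pp = rows.pop(pivot)
        for i, (c, p) in enumerate(rows):
            if c[col]:
                rows[i] = ([a ^ b for a, b in zip(c, pc)], p ^ pp)
        solved[col] = (pc, pp)
    for col in reversed(range(k)):       # back-substitution
        c, p = solved[col]
        for j in range(col + 1, k):
            if c[j]:
                p ^= solved[j][1]
        solved[col] = (c, p)
    return [p for _, p in solved]
```

Sending n > k coded packets gives a configurable amount of redundancy: the sink can lose up to n - k packets (with high probability) and still decode, which is the graceful degradation the article reports.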

Journal ArticleDOI
TL;DR: The proposed SOSA scheme can decrease the probability of packet losses in the discontinuous spectrum environment and improve the spectrum efficiency, and the practical issues encountered by an SU in a wireless environment are considered.
Abstract: In cognitive radio (CR) networks, the ability to capture a frequency slot for transmission in an idle channel has a significant impact on the spectrum efficiency and quality of service (QoS) of a secondary user (SU). The radio frequency (RF) front-ends of an SU have limited bandwidth for spectrum sensing, with the target frequency bands dispersed in a discontinuous manner. This results in the SU having to sense multiple target frequency bands in a short period of time before selecting an appropriate idle channel for transmission. This paper addresses this technical challenge by proposing a selective opportunistic spectrum access (SOSA) scheme. With the aid of statistical data and traffic prediction techniques, our SOSA scheme can estimate the probability of a channel appearing idle based on the statistics and choose the best spectrum-sensing order to maximize spectrum efficiency and maintain an SU's connection. In this way, the SOSA scheme can preserve the QoS of an SU while improving the system efficiency. In contrast to previous work, we consider the practical issues encountered by an SU in a wireless environment, such as discontinuous target frequency bands and limited spectrum-sensing ability. We examine the spectrum-sensing scheme in terms of packet loss ratio (PLR) and throughput. The simulation results show that the proposed SOSA scheme can decrease the probability of packet losses in the discontinuous spectrum environment and improve the spectrum efficiency.
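The intuition behind choosing a spectrum-sensing order can be shown with a small sketch (the paper's estimator uses statistical data and traffic prediction; here the per-channel idle probabilities are simply given as inputs): sensing channels in decreasing order of estimated idle probability minimises the expected number of sensing steps before an idle channel is found.

```python
# Illustrative sensing-order sketch, not the SOSA algorithm itself.
def sensing_order(idle_prob):
    """Channel indices sorted by decreasing estimated idle probability."""
    return sorted(range(len(idle_prob)), key=lambda c: idle_prob[c],
                  reverse=True)

def expected_steps(idle_prob, order):
    """Expected sensing steps until the first idle channel, counting the
    all-busy case as len(order) steps (channels assumed independent)."""
    e, p_all_busy = 0.0, 1.0
    for steps, c in enumerate(order, 1):
        e += steps * p_all_busy * idle_prob[c]
        p_all_busy *= 1 - idle_prob[c]
    return e + len(order) * p_all_busy
```

Fewer expected sensing steps means a transmission slot is captured sooner, which is how a better sensing order translates into lower packet loss ratio for the SU.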

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper summarises and evaluates the performance of current packet capturing solutions based on commodity hardware, identifies bottlenecks and pitfalls within the capturing stack of FreeBSD and Linux, and proposes improvements to the operating system's capturing processes that reduce packet loss, and evaluates their impact on capturing performance.
Abstract: Capturing network traffic with commodity hardware has become a feasible task: Advances in hardware as well as software have boosted off-the-shelf hardware to performance levels that some years ago were the domain of expensive special-purpose hardware. However, the capturing hardware still needs to be driven by a well-performing software stack in order to minimise or avoid packet loss. Improving the capturing stack of Linux and FreeBSD has been an extensively covered research topic in the past years. Although the majority of the proposed enhancements have been backed by evaluations, these have mostly been conducted on different hardware platforms and software versions, which renders a comparative assessment of the various approaches difficult, if not impossible. This paper summarises and evaluates the performance of current packet capturing solutions based on commodity hardware. We identify bottlenecks and pitfalls within the capturing stack of FreeBSD and Linux, and give explanations for the observed effects. Based on our experiments, we provide guidelines for users on how to configure their capturing systems for optimal performance and we also give hints on debugging bad performance. Furthermore, we propose improvements to the operating system's capturing processes that reduce packet loss, and evaluate their impact on capturing performance.

Journal ArticleDOI
TL;DR: In this article, the authors show that the triple goal of hierarchical fidelity levels, robustness against wireless packet loss and efficient security can be achieved by exploiting the algebraic structure of network coding.
Abstract: Emerging practical schemes indicate that algebraic mixing of different packets by means of random linear network coding can increase the throughput and robustness of streaming services over wireless networks. However, concerns with the security of wireless video, in particular when only some of the users are entitled to the highest quality, have uncovered the need for a network coding scheme capable of ensuring different levels of confidentiality under stringent complexity requirements. We show that the triple goal of hierarchical fidelity levels, robustness against wireless packet loss and efficient security can be achieved by exploiting the algebraic structure of network coding. The key idea is to limit the encryption operations to a critical set of network coding coefficients in combination with multi-resolution video coding. Our contributions include an information-theoretic security analysis of the proposed scheme, a basic system architecture for hierarchical wireless video with network coding and simulation results.

Book ChapterDOI
TL;DR: The aim of this chapter is to survey the main research lines in a comprehensive manner for stability analysis and controller design for so-called networked control systems.
Abstract: The presence of a communication network in a control loop induces many imperfections such as varying transmission delays, varying sampling/transmission intervals, packet loss, communication constraints and quantization effects, which can degrade the control performance significantly and even lead to instability. Various techniques have been proposed in the literature for stability analysis and controller design for these so-called networked control systems. The aim of this chapter is to survey the main research lines in a comprehensive manner.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the ability of the phase-control system to tolerate communications latency in a 50kVA diesel generator in a synchronous islanded power network with IP communications.
Abstract: Synchronous islanded operation involves continuously holding an islanded power network in virtual synchronism with the main power system to aid paralleling and avoid potentially damaging out-of-synchronism reclosure. This requires phase control of the generators in the island and the transmission of a reference signal from a secure location on the main power system. Global positioning system (GPS) time-synchronized phasor measurements transmitted via an Internet protocol (IP) are used for the reference signal. However, while offering low cost and a readily available solution for distribution networks, IP communications have variable latency and are susceptible to packet loss, which can make time-critical control applications difficult. This paper investigates the ability of the phase-control system to tolerate communications latency. Phasor measurement conditioning algorithms that can tolerate latency are used in the phase-control loop of a 50-kVA diesel generator.

Patent
29 Jul 2010
TL;DR: In this article, a service node can use a classification result to process other packets in the same packet flow, such that all packets of a flow do not need to be sent to an application node for processing.
Abstract: Packets are encapsulated and sent from a service node (e.g., packet switching device) using one or more services applied to a packet by an application node (e.g., a packet switching device and/or computing platform such as a Cisco ASR 1000) to generate a result, which is used by the service node to process packets of a flow of packets to which the packet belonged. An example of a service applied to a packet is a classification service, such as, but not limited to, using deep packet inspection on the packet to identify a classification result. The service node can, for example, use this classification result to process other packets in the same packet flow, such that all packets of a flow do not need to be, nor typically are, sent to an application node for processing.
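The per-flow caching described above can be sketched as follows. This is an illustrative model, not the patented implementation; the class and method names are assumptions.

```python
class ServiceNode:
    """Caches a per-flow classification result so that only the first packet
    of a flow is punted to the application node; later packets of the same
    flow are processed locally from the cache."""

    def __init__(self, application_node_classify):
        self._flows = {}                      # 5-tuple -> classification result
        self._classify = application_node_classify

    def process(self, pkt):
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        result = self._flows.get(key)
        if result is None:
            # First packet of the flow: send to the application node
            # (e.g., for deep packet inspection) and cache the result.
            result = self._classify(pkt)
            self._flows[key] = result
        return result
```

The design choice is the usual fast-path/slow-path split: the expensive service runs once per flow, and the service node's forwarding path only pays a hash lookup per packet.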

Proceedings ArticleDOI
07 Jun 2010
TL;DR: A Reduced-reference quality metric for 3D depth map transmission using the extracted edge information is proposed, motivated by the fact that the edges and contours of the depth map can represent different depth levels and hence can be used in quality evaluations.
Abstract: Due to the technological advancement of 3D video technologies and the availability of other supportive services such as high bandwidth communication links, introduction of immersive video services to the mass market is imminent. However, in order to provide better service to demanding customers, the transmission system parameters need to be changed “on the fly”. Measured 3D video quality at the receiver side can be used as feedback information to fine tune the system. However, measuring 3D video quality using Full-reference quality metrics will not be feasible due to the need for the original 3D video sequence at the receiver side. Therefore, this paper proposes a Reduced-reference quality metric for 3D depth map transmission using the extracted edge information. This work is motivated by the fact that the edges and contours of the depth map can represent different depth levels and hence can be used in quality evaluations. Performance of the method is evaluated across a range of Packet Loss Rates (PLRs) and shows acceptable results compared to its counterpart Full-reference quality metric.
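A reduced-reference edge comparison of the kind this abstract describes can be sketched simply: the sender transmits a compact edge signature of the depth map, and the receiver compares it with the edges of the decoded depth map. The gradient operator, threshold, and scoring below are illustrative assumptions, not the paper's actual metric.

```python
def edge_map(img, thr=30):
    """Binary edge map of a depth map (list of pixel rows) using a simple
    central-difference gradient with magnitude threshold thr."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            edges[y][x] = 1 if abs(gx) + abs(gy) >= thr else 0
    return edges

def rr_edge_score(ref_edges, rec_edges):
    """Fraction of edge pixels that agree between the reference signature
    and the received depth map (1.0 = intact, 0.0 = edges destroyed)."""
    match = total = 0
    for r_row, d_row in zip(ref_edges, rec_edges):
        for r, d in zip(r_row, d_row):
            if r or d:
                total += 1
                match += r & d
    return match / total if total else 1.0
```

Only the edge map needs to travel alongside the stream, which is what makes the metric "reduced-reference": the full original depth sequence never has to reach the receiver.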

Proceedings ArticleDOI
15 Mar 2010
TL;DR: Results obtained from the ns-2 network simulator show that the proposed protocols have potential for significantly improving end-to-end throughput; at 1% and 5% packet loss rates, one of the proposed protocols showed about 21% and 95% increases in end-to-end throughput for a file transfer application.
Abstract: Cognitive radio networks, or CogNets, pose several new challenges to transport layer protocols because of the many unique features of the cognitive-radio-based devices used to build them. CogNets not only inherit all the features of wireless networks; their link connections are also intermittent and discontinuous. Existing transport layer protocols are too slow to respond quickly enough to utilize available link capacity. Furthermore, existing self-timed transport layer protocols are neither designed for nor able to provide efficient, reliable end-to-end transport service in CogNets, where wide round trip delay variations naturally occur. We (i) identify the requirements of transport layer protocols for CogNets, (ii) propose a generic architecture for implementing a family of protocols that fulfill these requirements, and (iii) design, implement, and evaluate a family of best-effort transport protocols for serving delay-tolerant applications. Results obtained from the ns-2 network simulator show that the proposed protocols have potential for significantly improving end-to-end throughput. For instance, at 1% and 5% packet loss rates, one of the proposed protocols showed about 21% and 95% increases in end-to-end throughput for a file transfer application.

Proceedings ArticleDOI
09 Jan 2010
TL;DR: A client-driven video transmission scheme which utilizes multiple HTTP/TCP streams that is largely insensitive to unanticipated packet loss and thereby reduces throughput fluctuations and can easily be deployed in existing network infrastructures.
Abstract: TCP-based video streaming encounters difficulties in unreliable networks with unanticipated packet loss. In combination with high round trip times, the effective throughput deteriorates rapidly and TCP connection resets or stalls may occur. In this paper, we propose a client-driven video transmission scheme which utilizes multiple HTTP/TCP streams. The scheme is largely insensitive to unanticipated packet loss and thereby reduces throughput fluctuations. Since it is based on HTTP, the scheme can easily be deployed in existing network infrastructures. It fosters scalability on the server side by shifting complexity from the server to the clients. Certain features of request-response schemes allow maintaining fairness, despite using multiple HTTP streams. Making use of TCP, the scheme inherently adapts to congested network links.
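The client-driven, multi-stream idea can be sketched with standard HTTP byte-range requests: the client partitions the video into chunks, spreads the `Range` requests across several TCP connections, and reassembles the responses in its playout buffer. This is a structural sketch under assumed names, not the paper's actual scheduler.

```python
def partition_ranges(total_size, n_streams, chunk_size):
    """Assign successive byte-range requests round-robin to n parallel
    HTTP/TCP streams; each (start, end) pair would become a
    'Range: bytes=start-end' request header."""
    streams = [[] for _ in range(n_streams)]
    for i, start in enumerate(range(0, total_size, chunk_size)):
        end = min(start + chunk_size, total_size) - 1
        streams[i % n_streams].append((start, end))
    return streams

def reassemble(total_size, responses):
    """Merge (start, end, body) responses, possibly arriving out of order
    across streams, into the playout buffer."""
    buf = bytearray(total_size)
    for start, end, body in responses:
        buf[start:end + 1] = body
    return bytes(buf)
```

Because a loss event stalls only one of the connections, the aggregate delivery rate fluctuates less than a single TCP stream would, which matches the insensitivity to packet loss the abstract claims.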

Journal ArticleDOI
TL;DR: Subjective testing results partially uphold the commonly held claim that H.264 provides quality similar to MPEG-2 at no more than half the bit rate for the coding-only case, but the advantage of H. 264 diminishes with increasing bit rate and all but disappears when one reaches about 18 Mbps.
Abstract: The intent of H.264 (MPEG-4 Part 10) was to achieve equivalent quality to previous standards (e.g., MPEG-2) at no more than half the bit-rate. H.264 is commonly felt to have achieved this objective. This document presents results of an HDTV subjective experiment that compared the perceptual quality of H.264 to MPEG-2. The study included both the coding-only impairment case and a coding plus packet loss case, where the packet loss was representative of a well managed network (0.02% random packet loss rate). Subjective testing results partially uphold the commonly held claim that H.264 provides quality similar to MPEG-2 at no more than half the bit rate for the coding-only case. However, the advantage of H.264 diminishes with increasing bit rate and all but disappears when one reaches about 18 Mbps. For the packet loss case, results from the study indicate that H.264 suffers a large decrease in quality whereas MPEG-2 undergoes a much smaller decrease.