
Showing papers on "Packet loss" published in 2011


Journal ArticleDOI
TL;DR: A distributed event-triggering scheme is proposed in which a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold; each subsystem is able to make broadcast decisions using its locally sampled data.
Abstract: This paper examines event-triggered data transmission in distributed networked control systems with packet loss and transmission delays. We propose a distributed event-triggering scheme, where a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold. In this scheme, a subsystem is able to make broadcast decisions using its locally sampled data. It can also locally predict the maximal allowable number of successive data dropouts (MANSD) and the state-based deadlines for transmission delays. Moreover, the designer's selection of the local event for a subsystem only requires information on that individual subsystem. Our analysis applies to both linear and nonlinear subsystems. Designing local events for a nonlinear subsystem requires us to find a controller that ensures the subsystem is input-to-state stable. For linear subsystems, the design problem becomes a linear matrix inequality feasibility problem. With the assumption that the number of each subsystem's successive data dropouts is less than its MANSD, we show that if the transmission delays are zero, the resulting system is finite-gain Lp stable. If the delays are bounded by given deadlines, the system is asymptotically stable. We also show that those state-based deadlines for transmission delays are always greater than a positive constant.

1,134 citations
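
The event-triggering rule above is easy to sketch. A minimal illustration, assuming a plain norm threshold on the local state error (the paper's actual threshold is state-dependent and derived from the stability analysis):

```python
import numpy as np

def should_broadcast(x_current, x_last_broadcast, threshold):
    """Event trigger: broadcast when the local state error exceeds a threshold.

    x_current: latest locally sampled state of the subsystem
    x_last_broadcast: state value sent in the previous broadcast
    threshold: designer-chosen bound (in the paper it depends only on
               the subsystem's own local data, not on its neighbors)
    """
    error = np.linalg.norm(x_current - x_last_broadcast)
    return error > threshold

# Illustrative loop: each subsystem samples locally and decides on its own.
x_hat = np.zeros(2)  # state held by the neighbors since the last broadcast
for x in [np.array([0.1, 0.0]), np.array([0.4, 0.3]), np.array([0.45, 0.31])]:
    if should_broadcast(x, x_hat, threshold=0.3):
        x_hat = x  # broadcast to neighbors and update the held value
```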


Proceedings ArticleDOI
23 May 2011
TL;DR: Analysis of the relationship among three levels of quality of service (QoS) of HTTP video streaming reveals that the frequency of rebuffering is the main factor responsible for the variations in the QoE.
Abstract: HTTP video streaming, such as Flash video, is widely deployed to deliver stored media. Owing to TCP's reliable service, the picture and sound quality would not be degraded by network impairments, such as high delay and packet loss. However, the network impairments can cause rebuffering events which would result in jerky playback and deform the video's temporal structure. These quality degradations could adversely affect users' quality of experience (QoE). In this paper, we investigate the relationship among three levels of quality of service (QoS) of HTTP video streaming: network QoS, application QoS, and user QoS (i.e., QoE). Our ultimate goal is to understand how the network QoS affects the QoE of HTTP video streaming. Our approach is to first characterize the correlation between the application and network QoS using analytical models and empirical evaluation. The second step is to perform subjective experiments to evaluate the relationship between application QoS and QoE. Our analysis reveals that the frequency of rebuffering is the main factor responsible for the variations in the QoE.

479 citations


Proceedings ArticleDOI
23 Feb 2011
TL;DR: A receiver-driven rate adaptation method for HTTP/TCP streaming that deploys a step-wise increase/aggressive decrease method to switch up/down between the different representations of the content that are encoded at different bitrates is presented.
Abstract: Recently, HTTP has been widely used for the delivery of real-time multimedia content over the Internet, such as in video streaming applications. To combat the varying network resources of the Internet, rate adaptation is used to adapt the transmission rate to the varying network capacity. A key research problem of rate adaptation is to identify network congestion early enough and to probe the spare network capacity. In adaptive HTTP streaming, this problem becomes challenging because of the difficulties in differentiating between the short-term throughput variations, incurred by the TCP congestion control, and the throughput changes due to more persistent bandwidth changes. In this paper, we propose a novel rate adaptation algorithm for adaptive HTTP streaming that detects bandwidth changes using a smoothed HTTP throughput measured based on the segment fetch time (SFT). The smoothed HTTP throughput, instead of the instantaneous TCP transmission rate, is used to determine if the bitrate of the current media matches the end-to-end network bandwidth capacity. Based on the smoothed throughput measurement, this paper presents a receiver-driven rate adaptation method for HTTP/TCP streaming that deploys a step-wise increase/aggressive decrease method to switch up/down between the different representations of the content that are encoded at different bitrates. Our rate adaptation method does not require any transport-layer information, such as round trip time (RTT) and packet loss rates, which are available only at the TCP layer. Simulation results show that the proposed rate adaptation algorithm quickly adapts to match the end-to-end network capacity and also effectively controls buffer underflow and overflow.

455 citations
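
The step-wise increase / aggressive decrease logic can be sketched as follows, with the smoothed segment-fetch-time (SFT) throughput as the switching signal; the smoothing constant and switch-up margin are hypothetical placeholders, not values from the paper:

```python
def smoothed_throughput(prev_smooth, segment_bits, fetch_time, alpha=0.8):
    """EWMA of per-segment HTTP throughput (segment size / segment fetch time)."""
    sample = segment_bits / fetch_time
    return alpha * prev_smooth + (1 - alpha) * sample

def adapt_bitrate(bitrates, current_idx, smooth_tput, margin=1.2):
    """Step-wise increase, aggressive decrease between encoded representations.

    bitrates: available representations, sorted ascending (bits/s)
    current_idx: index of the representation currently being fetched
    """
    if smooth_tput < bitrates[current_idx]:
        # Aggressive decrease: drop straight to the highest sustainable rate.
        while current_idx > 0 and smooth_tput < bitrates[current_idx]:
            current_idx -= 1
    elif (current_idx + 1 < len(bitrates)
          and smooth_tput > margin * bitrates[current_idx + 1]):
        # Step-wise increase: move up one representation at a time.
        current_idx += 1
    return current_idx
```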


01 Jun 2011
TL;DR: This work focuses on the classical Gilbert-Elliott model, whose second-order statistics are derived over arbitrary time scales and used to fit packet loss processes of traffic traces measured in the IP backbone of Deutsche Telekom.
Abstract: The estimation of quality for real-time services over telecommunication networks requires realistic models for impairments and failures during transmission. We focus on the classical Gilbert-Elliott model, whose second-order statistics are derived over arbitrary time scales and used to fit packet loss processes of traffic traces measured in the IP backbone of Deutsche Telekom. The results show that simple Markov models are appropriate to capture the observed loss patterns.

263 citations
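
The Gilbert-Elliott model referenced above is a two-state Markov chain with a per-state loss probability. A minimal simulator, with arbitrary example parameters rather than the fitted Deutsche Telekom values:

```python
import random

def gilbert_elliott(n, p_gb=0.01, p_bg=0.3, loss_good=0.001, loss_bad=0.5):
    """Simulate a packet loss trace from the two-state Gilbert-Elliott model.

    p_gb: P(good -> bad) per packet; p_bg: P(bad -> good) per packet
    loss_good / loss_bad: loss probability inside each state
    Returns a list of booleans, True = packet lost.
    """
    state_bad, trace = False, []
    for _ in range(n):
        loss_p = loss_bad if state_bad else loss_good
        trace.append(random.random() < loss_p)
        # State transition for the next packet.
        if state_bad:
            state_bad = random.random() >= p_bg   # stay bad with prob 1-p_bg
        else:
            state_bad = random.random() < p_gb    # go bad with prob p_gb
    return trace

losses = gilbert_elliott(100_000)
print("overall loss rate:", sum(losses) / len(losses))
```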


Journal ArticleDOI
TL;DR: This paper presents an energy-efficient opportunistic routing strategy, denoted as EEOR; extensive simulations in TOSSIM show that EEOR performs better than the well-known ExOR protocol in terms of energy consumption, packet loss ratio, and average delivery delay.
Abstract: Opportunistic routing has been shown to improve network throughput by allowing nodes that overhear a transmission and are closer to the destination to participate in forwarding packets, i.e., to join the forwarder list. The nodes in the forwarder list are prioritized, and a lower-priority forwarder discards the packet if it has already been forwarded by a higher-priority forwarder. One challenging problem is to select and prioritize the forwarder list such that a certain network performance metric is optimized. In this paper, we focus on selecting and prioritizing the forwarder list to minimize the energy consumed by all nodes. We study both cases where the transmission power of each node is fixed or dynamically adjustable. We present an energy-efficient opportunistic routing strategy, denoted as EEOR. Our extensive simulations in TOSSIM show that our protocol EEOR performs better than the well-known ExOR protocol (when adapted to sensor networks) in terms of energy consumption, packet loss ratio, and average delivery delay.

216 citations
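
The select-and-prioritize step can be conveyed with a toy sketch. The cost metric below is a placeholder; EEOR's actual objective is expected total energy consumption, and its selection procedure is more involved:

```python
def prioritize_forwarders(neighbors, max_forwarders=3):
    """Pick and rank a forwarder list; lower cost = higher priority.

    neighbors: list of (node_id, p_receive, cost_to_dest), where p_receive
    is the probability the node overhears the transmission and cost_to_dest
    is a placeholder expected-energy metric (not EEOR's exact objective).
    """
    ranked = sorted(neighbors, key=lambda n: n[2])
    return [node_id for node_id, _, _ in ranked[:max_forwarders]]

def should_forward(my_priority, priorities_already_forwarded):
    """A lower-priority forwarder discards the packet once any
    higher-priority forwarder (smaller index) has already sent it."""
    return not any(p < my_priority for p in priorities_already_forwarded)

flist = prioritize_forwarders([("a", 0.9, 2.0), ("b", 0.7, 1.1), ("c", 0.5, 3.5)])
print(flist)                      # ['b', 'a', 'c']
print(should_forward(2, {0}))     # False: priority-0 node already forwarded
```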


Patent
14 Feb 2011
TL;DR: In this paper, a network device includes a network chip having a number of network ports for receiving and transmitting packets, and the network chip includes logic to decapsulate a packet received from a tunnel, mark the packet with a handle associated with an originating network device of the packet using information from an encapsulation header, and forward the marked packet to a checking functionality having a destination address different from an original destination address.
Abstract: A network, network devices, and methods are described for marked packet forwarding. A network device includes a network chip having a number of network ports for receiving and transmitting packets. The network chip includes logic to decapsulate a packet received from a tunnel, mark the packet with a handle associated with an originating network device of the packet using information from an encapsulation header, and forward the marked packet to a checking functionality having a destination address different from an original destination address of the packet.

188 citations


Journal ArticleDOI
05 Apr 2011
TL;DR: The proposed mobility-based clustering (MBC) protocol outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment.
Abstract: In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster head based on its residual energy and mobility. A non-cluster-head node aims at link stability with a cluster head during clustering according to the estimated connection time. Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple access (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a join request message to join a new cluster and avoid further packet loss when it has lost or is about to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment.

177 citations
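
A toy version of the self-election step, weighting residual energy against mobility; the election rule and parameters here are illustrative assumptions, not the formulas defined in the MBC paper:

```python
import random

def elect_self_as_head(residual_energy, initial_energy, speed, max_speed,
                       p_opt=0.1):
    """Probabilistic self-election: high residual energy and low mobility
    raise the chance that a node becomes cluster head this round.

    p_opt: target fraction of cluster heads (hypothetical parameter).
    """
    energy_factor = residual_energy / initial_energy   # in [0, 1]
    stability_factor = 1.0 - speed / max_speed         # slow node -> close to 1
    return random.random() < p_opt * energy_factor * stability_factor

def pick_cluster(head_candidates):
    """Join the head offering the longest estimated connection time.

    head_candidates: list of (head_id, estimated_connection_time); in MBC
    the estimate is derived from relative position and speed."""
    return max(head_candidates, key=lambda h: h[1])[0]

if elect_self_as_head(0.8, 1.0, speed=2.0, max_speed=10.0):
    print("node becomes cluster head this round")
print(pick_cluster([("h1", 4.2), ("h2", 9.7)]))   # 'h2'
```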


Journal ArticleDOI
TL;DR: The RACS scheme prolongs network lifetime while employing a simple and distributed scheme that eliminates the need for scheduling, and is suitable for long-term deployment of large underwater networks.
Abstract: Inspired by the theory of compressed sensing and employing random channel access, we propose a distributed energy-efficient sensor network scheme denoted by Random Access Compressed Sensing (RACS). The proposed scheme is suitable for long-term deployment of large underwater networks, in which saving energy and bandwidth is of crucial importance. During each frame, a randomly chosen subset of nodes participates in the sensing process and then shares the channel using random access. Due to the nature of random access, packets may collide at the fusion center. To account for the packet loss that occurs due to collisions, the network design employs the concept of sufficient sensing probability: with this probability, sufficiently many data packets - as required for field reconstruction based on compressed sensing - are received. The RACS scheme prolongs network lifetime while employing a simple and distributed scheme that eliminates the need for scheduling.

161 citations
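
The participation step of RACS is straightforward to sketch: each frame, every node independently senses and transmits with the sufficient sensing probability, with no scheduling. The probability value below is a placeholder:

```python
import random

def frame_participants(node_ids, p_sense):
    """Per-frame random participation: each node senses and transmits with
    probability p_sense, independently and without any schedule.

    In RACS, p_sense is chosen so that, after random-access collisions, the
    fusion center still receives enough packets to reconstruct the sensed
    field via compressed sensing."""
    return [n for n in node_ids if random.random() < p_sense]

# Example: 500-node network, hypothetical sufficient sensing probability 0.2.
active = frame_participants(range(500), p_sense=0.2)
print(len(active), "nodes transmit this frame")
```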


Patent
18 Apr 2011
TL;DR: In this article, a data extraction unit extracts first destination information from the header of the packet and generates second destination information that conforms to the recognized communication protocol, based on which the processing unit determines an egress interface to which the packet is to be forwarded.
Abstract: An apparatus for forwarding packets includes a packet processing pipeline having a processing unit that processes packets compliant with a recognized communication protocol. A first port coupled to the packet processing pipeline is configured to receive a packet that does not comply with the recognized communication protocol and has a header that conforms to a second communication protocol. A data extraction unit extracts first destination information from the header of the packet and, based on the first destination information, generates second destination information that conforms to the recognized communication protocol. The processing unit determines, based on the second destination information, an egress interface to which the packet is to be forwarded.

150 citations


Proceedings ArticleDOI
17 Oct 2011
TL;DR: This paper presents a multi-parallel intrusion detection architecture tailored for high speed networks that parallelizes network traffic processing and analysis at three levels, using multi-queue NICs, multiple CPUs, and multiple GPUs.
Abstract: Network intrusion detection systems are faced with the challenge of identifying diverse attacks, in extremely high speed networks. For this reason, they must operate at multi-Gigabit speeds, while performing highly-complex per-packet and per-flow data processing. In this paper, we present a multi-parallel intrusion detection architecture tailored for high speed networks. To cope with the increased processing throughput requirements, our system parallelizes network traffic processing and analysis at three levels, using multi-queue NICs, multiple CPUs, and multiple GPUs. The proposed design avoids locking, optimizes data transfers between the different processing units, and speeds up data processing by mapping different operations to the processing units where they are best suited. Our experimental evaluation shows that our prototype implementation based on commodity off-the-shelf equipment can reach processing speeds of up to 5.2 Gbit/s with zero packet loss when analyzing traffic in a real network, whereas the pattern matching engine alone reaches speeds of up to 70 Gbit/s, which is an almost four times improvement over prior solutions that use specialized hardware.

144 citations


Patent
16 Mar 2011
TL;DR: In this paper, techniques for measuring packet data unit (PDU) loss in an L2 virtual private network (L2VPN) service, such as a VPLS instance, are described.
Abstract: In general, techniques are described for measuring packet data unit (PDU) loss in an L2 virtual private network (L2VPN) service, such as a VPLS instance. In one example of the techniques, provider edge (PE) routers that participate in the L2VPN measure known unicast and multicast PDU traffic at the service endpoints for the instance to determine unicast PDU loss within the service provider network. As the routers learn the outbound service (i.e., core-facing) interfaces and outbound local (i.e., customer-facing) interfaces for L2 addresses of customer devices that issue packets to the VPLS instance, the routers establish respective unicast transmit and receipt counters for the service endpoints that serve the customer devices. In another example, PE routers that participate in the L2VPN measure multicast PDU traffic at the service endpoints for the instance and account for internal replication by intermediate service nodes to determine multicast PDU loss within the service.

Journal ArticleDOI
TL;DR: It is revealed that, contrary to existing thought, the inactive frames of VoIP streams are more suitable for data embedding than the active frames of the streams; that is, steganography in the inactive audio frames attains a largerData embedding capacity than that in the active audio frames under the same imperceptibility.
Abstract: This paper describes a novel high-capacity steganography algorithm for embedding data in the inactive frames of low bit rate audio streams encoded by the G.723.1 source codec, which is used extensively in Voice over Internet Protocol (VoIP). This study reveals that, contrary to existing thought, the inactive frames of VoIP streams are more suitable for data embedding than the active frames of the streams; that is, steganography in the inactive audio frames attains a larger data embedding capacity than that in the active audio frames under the same imperceptibility. By analyzing the concealment of steganography in the inactive frames of low bit rate audio streams encoded by the G.723.1 codec at 6.3 kb/s, the authors propose a new algorithm for steganography in different speech parameters of the inactive frames. Performance evaluation shows that embedding data in various speech parameters led to different levels of concealment. An improved voice activity detection algorithm is suggested for detecting inactive audio frames, taking packet loss into account. Experimental results show our proposed steganography algorithm not only achieved perfect imperceptibility but also gained a high data embedding rate of up to 101 bits/frame, indicating that the data embedding capacity of the proposed algorithm is much larger than those of previously suggested algorithms.

Patent
06 Apr 2011
TL;DR: In this article, packet deduplication systems and methods are disclosed for in-line removal of duplicate network packets in network packet streams operating at high speeds (e.g., 1-10 Gbps and above).
Abstract: Systems and methods are disclosed for in-line removal of duplicate network packets in network packet streams operating at high speeds (e.g., 1-10 Gbps and above). A hash generator applies at least one hash algorithm to incoming packets to form one or more different hash values. The packet deduplication systems and methods then use the one or more hash values for each incoming packet to identify data stored for previously received packets and use the identified data to determine if incoming packets are duplicate packets. Duplicate packets are then removed from the output packet stream, thereby reducing duplicate packets for downstream processing. A deduplication window can further be utilized to limit the amount of data stored for previous packets based upon one or more parameters, such as an amount of time that has passed and/or a number of packets for which data has been stored. These parameters can also be selected, configured, and/or adjusted to achieve desired operational objectives.
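
A minimal sketch of the hash-and-window idea described above, assuming a packet-count-bounded window (the patent also allows time-based windows and configurable parameters):

```python
import hashlib
from collections import OrderedDict

class Deduplicator:
    """Drop packets whose hash was seen within the last `window` packets."""

    def __init__(self, window=10_000):
        self.window = window
        self.seen = OrderedDict()           # hash -> None, in arrival order

    def is_duplicate(self, packet: bytes) -> bool:
        digest = hashlib.sha256(packet).digest()
        if digest in self.seen:
            return True                     # duplicate: drop from the stream
        self.seen[digest] = None
        if len(self.seen) > self.window:    # age out the oldest entry
            self.seen.popitem(last=False)
        return False

dedup = Deduplicator(window=4)
stream = [b"a", b"b", b"a", b"c", b"d", b"e", b"f", b"a"]
out = [p for p in stream if not dedup.is_duplicate(p)]
print(out)   # b"a" passes again once it has aged out of the window
```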

Journal ArticleDOI
TL;DR: Stochastic stability results are derived and a noise-shaping model of the closed loop system is provided that is employed for performance analysis by using rate-distortion theory.
Abstract: We study a control architecture for linear time-invariant plants with random disturbances and where a network is placed between the controller output and the plant input. The network imposes a constraint on the expected bit-rate and is affected by random independent and identically distributed (i.i.d.) dropouts. Dropout-rates and acknowledgments of receipt are not available at the controller side. To achieve robustness with respect to i.i.d. dropouts, the controller transmits data packets containing quantized plant input predictions. These are provided by an appropriate optimal entropy coded dithered lattice vector quantizer. Within this context, we derive stochastic stability results and provide a noise-shaping model of the closed loop system. This model is employed for performance analysis by using rate-distortion theory.

Journal ArticleDOI
TL;DR: It was observed that the proposed CBR-Mobile protocol improves the packet delivery ratio, energy consumption, delay, and fairness in a mobility environment compared to the LEACH-Mobile and AODV protocols.
Abstract: Mobility of sensor nodes in wireless sensor networks (WSNs) has posed new challenges, particularly in packet delivery ratio and energy consumption. Some real applications impose combined environments of fixed and mobile sensor nodes in the same network, while others demand a completely mobile sensor environment. Packet loss that occurs due to mobility of the sensor nodes is one of the main challenges, and it comes in parallel with energy consumption. In this paper, we use a cross-layer design between the medium access control (MAC) and network layers to overcome these challenges. Thus, a cluster-based routing protocol for mobile sensor nodes (CBR-Mobile) is proposed. CBR-Mobile is a mobility- and traffic-adaptive protocol. The timeslots assigned to mobile sensor nodes that have moved out of the cluster or have no data to send are reassigned to incoming sensor nodes within the cluster region. The protocol introduces two simple databases to achieve mobility and traffic adaptivity. The proposed protocol sends data to cluster heads in an efficient manner based on received signal strength. In the CBR-Mobile protocol, cluster-based routing collaborates with a hybrid MAC protocol to support mobility of sensor nodes. Scheduled timeslots are used to send data messages, while contention timeslots are used to send join registration messages. The performance of the proposed CBR-Mobile protocol is evaluated using MATLAB; it was observed that the proposed protocol improves the packet delivery ratio, energy consumption, delay, and fairness in a mobility environment compared to the LEACH-Mobile and AODV protocols.

Book ChapterDOI
06 Mar 2011
TL;DR: The first homomorphic network coding signatures in the standard model are described: the security proof does not use random oracles and, at the same time, the scheme allows signing individual vectors on-the-fly and has constant per-packet overhead in terms of signature size.
Abstract: Network coding is known to provide improved resilience to packet loss and increased throughput. Unlike traditional routing techniques, it allows network nodes to perform transformations on packets they receive before transmitting them. For this reason, packets cannot be authenticated using ordinary digital signatures, which makes it difficult to hedge against pollution attacks, where malicious nodes inject bogus packets into the network. To address this problem, recent works introduced signature schemes that allow signing linear subspaces (namely, verification can be made w.r.t. any vector of that subspace) and that are well-suited to the network coding scenario. Currently known network coding signatures in the standard model are not homomorphic, in that the signer is forced to sign all vectors of a given subspace at once. This paper describes the first homomorphic network coding signatures in the standard model: the security proof does not use random oracles and, at the same time, the scheme allows signing individual vectors on-the-fly and has constant per-packet overhead in terms of signature size. The construction is based on the dual encryption technique introduced by Waters (Crypto'09) to prove the security of hierarchical identity-based encryption schemes.

Journal ArticleDOI
TL;DR: Simulations for the scalable video coding (SVC) extension of the H.264/AVC standard showed that the proposed method for unequal error protection with a Fountain code required a smaller transmission bit budget to achieve high-quality video.
Abstract: Application-layer forward error correction (FEC) is used in many multimedia communication systems to address the problem of packet loss in lossy packet networks. One powerful form of application-layer FEC is unequal error protection which protects the information symbols according to their importance. We propose a method for unequal error protection with a Fountain code. When the information symbols were partitioned into two protection classes (most important and least important), our method required a smaller transmission bit budget to achieve low bit error rates compared to the two state-of-the-art techniques. We also compared our method to the two state-of-the-art techniques for video unicast and multicast over a lossy network. Simulations for the scalable video coding (SVC) extension of the H.264/AVC standard showed that our method required a smaller transmission bit budget to achieve high-quality video.
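
The flavor of unequal error protection with a rateless code can be illustrated with a weighted-selection sketch: each encoded symbol XORs source symbols drawn with a bias toward the most-important class, so those symbols get more coverage, hence more protection, at the same overhead. This is a generic UEP-LT-style illustration, not the paper's specific construction:

```python
import random
from functools import reduce

def uep_encode_symbol(mib, lib, degree=3, p_mib=0.7):
    """Build one encoded symbol as an XOR of `degree` source symbols, drawing
    each from the most-important block (mib) with probability p_mib and from
    the least-important block (lib) otherwise.

    Biasing selection toward mib is the UEP idea: its symbols appear in more
    encoded symbols and so survive loss with higher probability. (A real
    rateless code would also transmit the selection seed for decoding.)
    """
    picks = [random.choice(mib) if random.random() < p_mib else
             random.choice(lib) for _ in range(degree)]
    return reduce(lambda a, b: a ^ b, picks)

mib = [0x11, 0x22, 0x33]          # most-important source symbols (toy bytes)
lib = [0x44, 0x55, 0x66, 0x77]    # least-important source symbols
encoded = [uep_encode_symbol(mib, lib) for _ in range(10)]
```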

Proceedings ArticleDOI
24 Aug 2011
TL;DR: The string stability of CACC is discussed, and its performance under various packet loss ratios, beacon sending frequencies, and time headways is evaluated in simulations.
Abstract: Recent development in wireless technology enables communication between vehicles. The concept of Co-operative Adaptive Cruise Control (CACC) — which uses wireless communication between vehicles — aims at string stable behaviour in a platoon of vehicles. “String stability” means any non-zero position, speed, and acceleration errors of an individual vehicle in a string do not amplify when they propagate upstream. In this paper, we will discuss the string stability of CACC and evaluate its performance with various packet loss ratios, beacon sending frequencies and time headway in simulations. The simulation framework is built up with a controller prototype, a traffic simulator, and a network simulator.
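
The string-stability requirement quoted above can be stated compactly; one common formalization, for illustration:

```latex
% String stability: a follower's error signal is no larger than its
% predecessor's, e.g. in the L_2 or L_\infty sense,
\| e_{i+1} \| \le \| e_i \| \quad \text{for all vehicles } i,
% equivalently, the error transfer function from vehicle i to vehicle i+1
% satisfies
\left| \frac{E_{i+1}(j\omega)}{E_i(j\omega)} \right| \le 1
\quad \text{for all } \omega .
```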

Journal ArticleDOI
TL;DR: It is shown that the optimal estimation performance critically depends on the channel accessing probabilities of the network nodes and the packet loss probability, and that the optimal filters can be obtained by solving recursive Lyapunov and Riccati equations.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: An analytical model of the packet loss probability and delay of an IEEE 802.15.4 network is presented, and it is shown that the optimal traffic load is similar whether the communication throughput or the control cost is optimized.
Abstract: A framework for the joint design of wireless networks and controllers is proposed. Multiple control systems are considered, where the sensor measurements are transmitted to the controller over the IEEE 802.15.4 protocol. The essential issues of wireless networked control systems (NCSs) are investigated to provide an abstraction of the wireless network for a co-design approach. We first present an analytical model of the packet loss probability and delay of an IEEE 802.15.4 network. Through optimal control techniques we derive the control cost as a function of the packet loss probability and delay. Simulation results show the feasible control performance. It is shown that the optimal traffic load is similar whether the communication throughput or the control cost is optimized. The co-design approach is based on a constrained optimization problem, for which the objective function is the energy consumption of the network and the constraints are the packet loss probability and delay, which are derived from the desired control cost. The co-design is illustrated through a numerical example.
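
The shape of the co-design problem, minimizing network energy subject to loss and delay constraints derived from the desired control cost, can be sketched with scipy. The three models below (energy, loss, and delay as functions of a single MAC parameter x) are made-up placeholders, not the paper's IEEE 802.15.4 analysis:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical network models in one MAC parameter x (e.g. a wake-up rate);
# the paper derives these from an analytical IEEE 802.15.4 model instead.
energy = lambda x: 0.5 * x[0]              # more activity -> more energy
loss   = lambda x: np.exp(-3.0 * x[0])     # more activity -> less loss
delay  = lambda x: 1.0 / (0.1 + x[0])      # more activity -> less delay

loss_max, delay_max = 0.05, 4.0            # bounds set by desired control cost

res = minimize(
    energy, x0=[1.0],
    constraints=[{"type": "ineq", "fun": lambda x: loss_max - loss(x)},
                 {"type": "ineq", "fun": lambda x: delay_max - delay(x)}],
    bounds=[(0.01, 10.0)],
)
print(res.x)   # cheapest operating point meeting the control requirements
```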

Proceedings ArticleDOI
19 Aug 2011
TL;DR: It is argued that passively monitoring the transport-level statistics in the server's network stack is a better approach and provides a much more accurate view of the performance of the network paths than what is possible with server logs alone.
Abstract: Content distribution networks (CDNs) need to make decisions, such as server selection and routing, to improve performance for their clients. The performance may be limited by various factors such as packet loss in the network, a small receive buffer at the client, or constrained server CPU and disk resources. Conventional measurement techniques are not effective for distinguishing these performance problems: application-layer logs are too coarse-grained, while network-level traces are too expensive to collect all the time. We argue that passively monitoring the transport-level statistics in the server's network stack is a better approach. This paper presents a tool for monitoring and analyzing TCP statistics, and an analysis of a CoralCDN node in PlanetLab for six weeks. Our analysis shows that more than 10% of connections are server-limited at least 40% of the time, and many connections are limited by the congestion window despite no packet loss. Still, we see that clients in 377 Autonomous Systems (ASes) experience persistent packet loss. By separating network congestion from other performance problems, our analysis provides a much more accurate view of the performance of the network paths than what is possible with server logs alone.
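
On Linux, the transport-level statistics the paper advocates monitoring are exposed per connection via the kernel's tcp_info structure; one convenient consumer is the iproute2 `ss -ti` command. A hedged sketch that shells out and filters a few fields (output formatting varies across ss versions):

```python
import re
import subprocess

def tcp_connection_stats():
    """Parse `ss -ti` output into per-connection stat dicts.

    Relies on the iproute2 `ss` tool (Linux). Field availability and
    formatting differ between versions, so treat this as a sketch.
    """
    out = subprocess.run(["ss", "-ti"], capture_output=True, text=True).stdout
    stats = []
    for line in out.splitlines():
        m = re.search(r"rtt:([\d.]+)/[\d.]+", line)   # rtt:avg/mdev in ms
        if m:
            retrans = re.search(r"retrans:\d+/(\d+)", line)  # total retransmits
            cwnd = re.search(r"cwnd:(\d+)", line)
            stats.append({
                "rtt_ms": float(m.group(1)),
                "retrans": int(retrans.group(1)) if retrans else 0,
                "cwnd": int(cwnd.group(1)) if cwnd else None,
            })
    return stats

for conn in tcp_connection_stats():
    print(conn)   # e.g. spot loss-limited vs cwnd-limited connections
```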

Journal ArticleDOI
TL;DR: This work proposes simple, easily deployable protocol improvements in terms of utilizing as much range information as possible, reducing range broadcasts by piggybacking, compressing the range information, tuning the broadcast frequency, and combining multiple packets using network coding.
Abstract: Cooperative positioning (CP) can potentially improve the accuracy of vehicle location information, which is vital for several road safety applications. Although concepts of CP have been introduced, the efficiency of CP under real-world vehicular communication constraints is largely unknown. Our simulations reveal that the frequent exchange of large amounts of range information required by existing CP schemes not only increases the packet collision rate of the vehicular network but reduces the effectiveness of the CP as well. To address this issue, we propose simple, easily deployable protocol improvements in terms of utilizing as much range information as possible, reducing range broadcasts by piggybacking, compressing the range information, tuning the broadcast frequency, and combining multiple packets using network coding. Our results demonstrate that, even under dense traffic conditions, these protocol improvements achieve a twofold reduction in packet loss rates and increase the positioning accuracy of CP by 40%.

Proceedings ArticleDOI
01 Nov 2011
TL;DR: In this paper, the authors propose burst forwarding, a generic packet forwarding technique that combines low power consumption with high throughput for multi-purpose wireless networks; it uses radio duty cycling to keep energy consumption low, recovers efficiently from interference, and inherently supports both single streams and cross-traffic.
Abstract: As sensor networks move towards general-purpose low-power wireless networks, there is a need to support both traditional low-data rate traffic and high-throughput transfer. To attain high throughput, existing protocols monopolize the network resources and keep the radio on for all nodes involved in the transfer, leading to poor energy efficiency. This becomes progressively problematic in networks with packet loss, which inevitably occur in any real-world deployment. We present burst forwarding, a generic packet forwarding technique that combines low power consumption with high throughput for multi-purpose wireless networks. Burst forwarding uses radio duty cycling to maintain a low power consumption, recovers efficiently from interference, and inherently supports both single streams and cross-traffic. We experimentally evaluate our mechanism under heavy interference and compare it to PIP, a state-of-the-art sensornet bulk transfer protocol. Burst forwarding gracefully adapts radio duty cycle both to the level of interference and to traffic load, keeping a low and nearly constant energy cost per byte when carrying TCP traffic.

Posted Content
TL;DR: This paper introduces two new network coding signature schemes that are provably secure in the standard model, rely on standard assumptions, and are in the same efficiency class as previous solutions based on random oracles.
Abstract: Network Coding is a routing technique where each node may actively modify the received packets before transmitting them. While this departure from passive networks improves throughput and resilience to packet loss, it renders transmission susceptible to pollution attacks, where misbehaving nodes can maliciously alter the messages in transit. Nodes cannot use standard signature schemes to authenticate the modified packets: this would require knowledge of the original sender's signing key. Network coding signature schemes offer a cryptographic solution to this problem. Very roughly, such signatures allow signing vector spaces (or rather bases of such spaces). Furthermore, these signatures are homomorphic: given signatures on a set of vectors, it is possible to create signatures for any linear combination of these vectors. Designing such schemes is a difficult task, and the few existing constructions either rely on random oracles or are rather inefficient. In this paper we introduce two new network coding signature schemes. Both of our schemes are provably secure in the standard model, rely on standard assumptions, and are in the same efficiency class as previous solutions based on random oracles.

Proceedings ArticleDOI
30 Aug 2011
TL;DR: It is found that the NetEm behaviour conforms to expectations for the emulation of delay and packet loss without correlation, however, in the case of jitter emulation, the actual realized jitter is lower than the given input value.
Abstract: In this paper we have evaluated the main functionalities of NetEm, a popular Linux-based network emulator, which we have used to stress test the performance of the Games@Large distributed gaming system. We have performed a number of tests on different NetEm functionalities in order to evaluate their practical performance conformity and validity versus the NetEm description and theoretical expectations. We have found that the NetEm behaviour conforms to expectations for the emulation of delay and packet loss without correlation. However, in the case of jitter emulation, the actual realized jitter is lower than the given input value. This is important to be aware of when using NetEm for application testing. This paper also provides a baseline methodology for network emulation tool validation.
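
For reference, the NetEm functionalities evaluated here are driven through the standard `tc` command. A sketch of wrapping the usual invocations from Python; the interface name and impairment values are examples:

```python
import subprocess

def netem_apply(iface, delay_ms=100, jitter_ms=10, loss_pct=1.0):
    """Configure NetEm on `iface`: fixed delay, jitter, and random loss.

    Equivalent shell command:
      tc qdisc add dev IFACE root netem delay 100ms 10ms loss 1%
    (Requires root privileges and the sch_netem kernel module.)
    """
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True)

def netem_clear(iface):
    """Remove the emulation qdisc again."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

# netem_apply("eth0")   # e.g. before a Games@Large-style stress test
# netem_clear("eth0")
```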

01 Oct 2011
TL;DR: This framework can be used to define Content Delivery Protocols that provide Forward Error Correction for streaming media delivery or other packet flows; such protocols can support any FEC scheme (and associated FEC codes) compliant with the requirements defined in this document.
Abstract: This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss. The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media. This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows. Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) which is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined which are not specific to a particular FEC scheme, and FEC schemes can be defined which are not specific to a particular Content Delivery Protocol.
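
As a concrete toy instance of the framework's idea, here is the simplest possible application-layer FEC scheme: one XOR repair packet per block of equal-length source packets, recovering any single loss in the block. Content Delivery Protocols defined under the framework would use stronger FEC schemes (e.g., Reed-Solomon or Raptor codes):

```python
def xor_repair(block):
    """Build one repair packet as the XOR of k equal-length source packets."""
    repair = bytearray(len(block[0]))
    for pkt in block:
        for i, b in enumerate(pkt):
            repair[i] ^= b
    return bytes(repair)

def recover(block_with_one_loss, repair):
    """Recover the single missing packet (marked None) in a block."""
    missing = bytearray(repair)
    for pkt in block_with_one_loss:
        if pkt is not None:
            for i, b in enumerate(pkt):
                missing[i] ^= b
    return bytes(missing)

block = [b"pkt1", b"pkt2", b"pkt3"]
r = xor_repair(block)
assert recover([b"pkt1", None, b"pkt3"], r) == b"pkt2"
```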

Journal ArticleDOI
TL;DR: A new slow start algorithm, called Hybrid Start (HyStart), is proposed that finds a "safe" exit point for slow start at which it can terminate and safely advance to the congestion avoidance phase without causing heavy packet loss.
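
HyStart finds its exit point with two heuristics; the delay-increase one leaves slow start when the current round's minimum RTT rises a threshold above the baseline. A simplified sketch (the clamping constants are placeholders, not the exact values of the published algorithm):

```python
def hystart_delay_exit(baseline_rtt_ms, round_min_rtt_ms,
                       eta_min=4.0, eta_max=16.0):
    """Delay-increase heuristic: leave slow start once the current round's
    minimum RTT exceeds the baseline by a clamped threshold eta.

    baseline_rtt_ms: smallest RTT observed so far on the connection
    round_min_rtt_ms: minimum RTT sampled during the current RTT round
    In the real algorithm eta is derived from the baseline RTT and clamped;
    the constants here are simplified placeholders.
    """
    eta = min(max(baseline_rtt_ms / 8.0, eta_min), eta_max)
    return round_min_rtt_ms >= baseline_rtt_ms + eta

# Example: baseline 40 ms; a round whose minimum RTT is 48 ms triggers the
# exit to congestion avoidance before slow start overshoots and causes loss.
print(hystart_delay_exit(40.0, 48.0))   # True
```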

Journal ArticleDOI
TL;DR: It is shown that the optimal mean square estimation error that can be achieved under packet loss, referred to as the infinite bandwidth filter (IBF), cannot be reached using a limited bandwidth channel; this paper proposes novel mathematical tools to derive analytical upper and lower bounds for the expected estimation error covariance of the MF and the IBF strategies assuming identical sensors.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: A novel enhancement of the well-known ALOHA random access mechanism, called Constant Rate Assignment (CRA), is presented that largely extends the achievable throughput compared to traditional ALOHA and provides significantly lower packet loss rates.
Abstract: In this paper, a novel enhancement of the well-known ALOHA random access mechanism is presented that largely extends the achievable throughput compared to traditional ALOHA and provides significantly lower packet loss rates. The novel mechanism, called Constant Rate Assignment (CRA), is based on transmitting multiple replicas of a packet in an unslotted ALOHA system and applying interference cancellation techniques. In this paper, the methodology for this new random access technique is presented, also with respect to existing Interference Cancellation (IC) techniques. Moreover, numerical results for performance comparison with state-of-the-art random access mechanisms, such as Contention Resolution Diversity Slotted ALOHA (CRDSA), are provided. Finally, the benefit of strong forward error correction codes for the performance of CRA is shown.
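
The interference-cancellation principle behind CRA is easiest to see in its slotted relative, CRDSA, which the paper compares against: each user transmits replicas in random slots, and any slot holding exactly one surviving replica yields a decoded packet, whose replicas are then cancelled everywhere. A small simulation of that slotted mechanism (not CRA's unslotted variant):

```python
import random

def crdsa_round(n_users, n_slots, replicas=2, max_iters=20):
    """Simulate one CRDSA frame with iterative interference cancellation.

    Capture model: a slot is decodable only when exactly one not-yet-
    cancelled replica remains in it. Returns the set of decoded users.
    """
    placements = {u: random.sample(range(n_slots), replicas)
                  for u in range(n_users)}
    decoded = set()
    for _ in range(max_iters):
        progress = False
        for slot in range(n_slots):
            occupants = [u for u, slots in placements.items()
                         if slot in slots and u not in decoded]
            if len(occupants) == 1:          # clean slot: decode the user,
                decoded.add(occupants[0])    # cancelling all its replicas
                progress = True
        if not progress:
            break
    return decoded

ok = crdsa_round(n_users=60, n_slots=100)
print(f"decoded {len(ok)}/60 users")
```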

Proceedings ArticleDOI
01 Dec 2011
TL;DR: In this article, a framework that uses OpenFlow to handle transient link failures is proposed, in which the data and control planes are separated by shifting management to a remote centralized controller while the forwarders simply forward packets based on a flow table.
Abstract: Due to the default settings of OSPF, the network takes several tens of seconds to recover from a failure. During the convergence, the network service is disrupted. This paper proposes a framework that uses OpenFlow to handle transient link failures. OpenFlow separates the data and control planes by shifting management to a remote centralized controller; the forwarders simply forward packets based on a flow table. We incorporate OpenFlow technology into existing routers. The routing information base (RIB) and the flow table constitute a two-layer forwarding structure. When a link failure happens, the controller computes backup paths for the destinations affected by the failed link and sends the related flow table entries to the OpenFlow forwarders. All packets received by the forwarders are first compared against the flow table entries. If a packet matches the flow table, the corresponding actions of the flow entries are taken; otherwise, the packet is processed by the RIB. After the network recovers from the failure, the controller removes the temporary flow table entries used to bypass the failed link, and the RIB takes over packet forwarding. Experimental results demonstrate that the proposed framework achieves low packet loss under link failure.
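
The two-layer forwarding structure described above, flow table first with the RIB as fallback, can be sketched as follows; the exact-match table and field names are simplifying assumptions:

```python
class TwoLayerForwarder:
    """Flow table (exact-match on destination here) consulted before the RIB."""

    def __init__(self):
        self.flow_table = {}   # dst -> backup egress port (from controller)
        self.rib = {}          # dst -> normal egress port (from OSPF)

    def forward(self, dst):
        # Flow entries installed to bypass a failed link take precedence.
        if dst in self.flow_table:
            return self.flow_table[dst]
        return self.rib.get(dst)            # normal OSPF-computed path

fwd = TwoLayerForwarder()
fwd.rib = {"10.0.1.0/24": "port1", "10.0.2.0/24": "port2"}

# Link behind port1 fails: the controller installs a temporary backup entry...
fwd.flow_table["10.0.1.0/24"] = "port3"
assert fwd.forward("10.0.1.0/24") == "port3"

# ...and removes it after OSPF reconverges; the RIB takes over again.
del fwd.flow_table["10.0.1.0/24"]
assert fwd.forward("10.0.1.0/24") == "port1"
```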