
Showing papers on "Packet loss" published in 2004


Proceedings ArticleDOI
30 Aug 2004
TL;DR: The causes of packet loss in a 38-node urban multi-hop 802.11b network are analyzed to gain an understanding of their relative importance, of how they interact, and of the implications for MAC and routing protocol design.
Abstract: This paper analyzes the causes of packet loss in a 38-node urban multi-hop 802.11b network. The patterns and causes of loss are important in the design of routing and error-correction protocols, as well as in network planning. The paper makes the following observations. The distribution of inter-node loss rates is relatively uniform over the whole range of loss rates; there is no clear threshold separating "in range" and "out of range." Most links have relatively stable loss rates from one second to the next, though a small minority have very bursty losses at that time scale. Signal-to-noise ratio and distance have little predictive value for loss rate. The large number of links with intermediate loss rates is probably due to multi-path fading rather than attenuation or interference. The phenomena discussed here are all well-known. The contributions of this paper are an understanding of their relative importance, of how they interact, and of the implications for MAC and routing protocol design.

1,135 citations


Proceedings ArticleDOI
03 Nov 2004
TL;DR: An analysis of data from a second-generation sensor network deployed during the summer and autumn of 2003 sheds light on a number of design issues, from network deployment, through selection of power sources, to optimization of routing decisions.
Abstract: Habitat and environmental monitoring is a driving application for wireless sensor networks. We present an analysis of data from a second-generation sensor network deployed during the summer and autumn of 2003. During a 4-month deployment, these networks, consisting of 150 devices, produced unique datasets for both systems and biological analysis. This paper focuses on nodal and network performance, with an emphasis on lifetime, reliability, and the static and dynamic aspects of single and multi-hop networks. We compare the results collected to expectations set during the design phase: we were able to accurately predict lifetime of the single-hop network, but we underestimated the impact of multi-hop traffic overhearing and the nuances of power source selection. While initial packet loss data was commensurate with lab experiments, over the duration of the deployment, reliability of the backend infrastructure and the transit network had a dominant impact on overall network performance. Finally, we evaluate the physical design of the sensor node based on deployment experience and a post mortem analysis. The results shed light on a number of design issues, from network deployment, through selection of power sources, to optimization of routing decisions.

1,056 citations


Proceedings ArticleDOI
07 Mar 2004
TL;DR: This work presents a new congestion control scheme that alleviates RTT unfairness while supporting TCP friendliness and bandwidth scalability, and uses two window size control policies called additive increase and binary search increase.
Abstract: High-speed networks with large delays present a unique environment where TCP may have a problem utilizing the full bandwidth. Several congestion control proposals have been suggested to remedy this problem. The existing protocols consider mainly two properties: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from standard TCP flows while utilizing the full bandwidth of high-speed networks. This work presents another important constraint, namely, RTT (round-trip time) unfairness, where competing flows with different RTTs may consume vastly unfair bandwidth shares. Existing schemes have a severe RTT unfairness problem because the congestion window increase rate gets larger as the window grows, ironically the very reason that makes them more scalable. RTT unfairness for high-speed networks occurs distinctly with drop-tail routers for flows with large congestion windows, where packet loss can be highly synchronized. After identifying the RTT unfairness problem of existing protocols, this work presents a new congestion control scheme that alleviates RTT unfairness while supporting TCP friendliness and bandwidth scalability. The proposed congestion control algorithm uses two window size control policies called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures linear RTT fairness as well as good scalability. Under small congestion windows, binary search increase supports TCP friendliness. The simulation results confirm these properties of the protocol.

984 citations
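
A minimal Python sketch of the two window-growth policies named in this abstract, binary search increase and additive increase, may help make the mechanism concrete. The constants S_MAX, S_MIN, and BETA are illustrative assumptions, not the paper's tuning, and the real protocol adds further refinements such as max probing.

    # Sketch of BIC-style growth: binary search increase near the last-known
    # maximum window, additive increase when far below it.
    S_MAX = 32.0   # largest per-RTT increment, in segments (assumed)
    S_MIN = 0.01   # smallest useful increment (assumed)
    BETA = 0.125   # multiplicative decrease factor (assumed)

    def next_window(cwnd: float, w_max: float) -> float:
        """One RTT of growth toward w_max, the window at the last loss."""
        if cwnd < w_max:
            step = (w_max - cwnd) / 2.0      # binary search toward midpoint
        else:
            step = cwnd - w_max              # probe past the old maximum
        # Clamping to S_MAX is what turns large steps into additive increase.
        return cwnd + min(max(step, S_MIN), S_MAX)

    def on_loss(cwnd: float) -> tuple:
        """Multiplicative decrease; remember where the loss occurred."""
        return cwnd * (1.0 - BETA), cwnd

    cwnd, w_max = 100.0, 1000.0
    for _ in range(20):                      # twenty loss-free RTTs
        cwnd = next_window(cwnd, w_max)
    print(f"window after 20 RTTs: {cwnd:.1f}")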


Proceedings ArticleDOI
30 Aug 2004
TL;DR: This paper deduced typical real world values of packet loss and latency experienced on the Internet by monitoring numerous operational UT2003 game servers and designed maps that isolated the fundamental first person shooter interaction components of movement and shooting, and conducted numerous user studies under controlled network conditions.
Abstract: The growth in the popularity of interactive network games has increased the importance of a better understanding of the effects of packet loss and latency on user performance. While previous work on network games has studied user tolerance for high latencies and has studied the effects of latency on user performance in real-time strategy games, to the best of our knowledge, there has been no systematic study of the effects of loss and latency on user performance. In this paper we study user performance for Unreal Tournament 2003 (UT2003), a popular first person shooter game, under varying amounts of packet loss and latency. First, we deduced typical real world values of packet loss and latency experienced on the Internet by monitoring numerous operational UT2003 game servers. We then used these deduced values of loss and latency in a controlled networked environment that emulated various conditions of loss and latency, allowing us to monitor UT2003 at the network, application and user levels. We designed maps that isolated the fundamental first person shooter interaction components of movement and shooting, and conducted numerous user studies under controlled network conditions. We find that typical ranges of packet loss have no impact on user performance or on the quality of game play. The levels of latency typical for most UT2003 Internet servers, while sometimes unpleasant, do not significantly affect the outcome of the game. Since most first person shooter games typically consist of generic player actions similar to those that we tested, we believe that these results have broader implications.

353 citations


Journal ArticleDOI
TL;DR: This paper shows how adaptive media playout (AMP), the variation of the playout speed of media frames depending on channel conditions, allows the client to buffer less data, thus introducing less delay, for a given buffer underflow probability.
Abstract: When media is streamed over best-effort networks, media data is buffered at the client to protect against playout interruptions due to packet losses and random delays. While the likelihood of an interruption decreases as more data is buffered, the latency that is introduced increases. In this paper we show how adaptive media playout (AMP), the variation of the playout speed of media frames depending on channel conditions, allows the client to buffer less data, thus introducing less delay, for a given buffer underflow probability. We proceed by defining models for the streaming media system and the random, lossy, packet delivery channel. Our streaming system model buffers media at the client, and combats packet losses with deadline-constrained automatic repeat request (ARQ). For the channel, we define a two-state Markov model that features state-dependent packet loss probability. Using the models, we develop a Markov chain analysis to examine the tradeoff between buffer underflow probability and latency for AMP-augmented video streaming. The results of the analysis, verified with simulation experiments, indicate that AMP can greatly improve the tradeoff, allowing reduced latencies for a given buffer underflow probability.

249 citations
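
The AMP idea reduces to a small control rule: stretch playout while the client buffer is below a watermark. A sketch follows, with an assumed watermark, slowdown factor, and two-state Markov loss channel; none of these numbers come from the paper.

    import random

    NORMAL_FPS = 25.0
    SLOWDOWN = 0.75       # play at 75% speed when the buffer is low (assumed)
    LOW_WATERMARK = 20    # buffer threshold, in frames (assumed)

    def playout_rate(buffered: float) -> float:
        """AMP rule: slow playout while the buffer is below the watermark."""
        return NORMAL_FPS * (SLOWDOWN if buffered < LOW_WATERMARK else 1.0)

    # Toy run against a two-state (good/bad) Markov loss channel.
    random.seed(1)
    state, buf, underflows = "good", 30.0, 0
    for _ in range(10_000):                      # one tick per frame interval
        p_loss = 0.02 if state == "good" else 0.5   # state-dependent loss
        if random.random() > p_loss:
            buf += 1.0                           # a frame arrived in time
        if state == "good":
            state = "bad" if random.random() < 0.01 else "good"
        else:
            state = "good" if random.random() < 0.10 else "bad"
        drain = playout_rate(buf) / NORMAL_FPS   # frames consumed this tick
        if buf >= drain:
            buf -= drain
        else:
            buf, underflows = 0.0, underflows + 1
    print("underflow ticks:", underflows)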


Journal ArticleDOI
TL;DR: Results from simulations show that, in a congestion-free network with a 1% random wireless packet loss rate, TCP-Jersey achieves 17% and 85% improvements in goodput over TCP-Westwood and TCP-Reno, respectively; experiments using the NS-2 network simulator, including a congested network where the TCP flow competes with VoIP flows, also show that the scheme maintains fair and friendly behavior with respect to other TCP flows.
Abstract: Improving the performance of the transmission control protocol (TCP) in wireless Internet protocol (IP) communications has been an active research area. The performance degradation of TCP in wireless and wired-wireless hybrid networks is mainly due to its lack of the ability to differentiate the packet losses caused by network congestions from the losses caused by wireless link errors. In this paper, we propose a new TCP scheme, called TCP-Jersey, which is capable of distinguishing the wireless packet losses from the congestion packet losses, and reacting accordingly. TCP-Jersey consists of two key components, the available bandwidth estimation (ABE) algorithm and the congestion warning (CW) router configuration. ABE is a TCP sender side addition that continuously estimates the bandwidth available to the connection and guides the sender to adjust its transmission rate when the network becomes congested. CW is a configuration of network routers such that routers alert end stations by marking all packets when there is a sign of an incipient congestion. The marking of packets by the CW configured routers helps the sender of the TCP connection to effectively differentiate packet losses caused by network congestion from those caused by wireless link errors. This paper describes the design of TCP-Jersey, and presents results from experiments using the NS-2 network simulator. Results from simulations show that in a congestion-free network with a 1% random wireless packet loss rate, TCP-Jersey achieves 17% and 85% improvements in goodput over TCP-Westwood and TCP-Reno, respectively; in a congested network where a TCP flow competes with VoIP flows, with a 1% random wireless packet loss rate, TCP-Jersey achieves 9% and 76% improvements in goodput over TCP-Westwood and TCP-Reno, respectively. Our experiments of multiple TCP flows show that TCP-Jersey maintains the fair and friendly behavior with respect to other TCP flows.

238 citations
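
The ABE component can be pictured as a per-ACK rate estimator. The sketch below is a generic time-sliding-window estimator of the kind ABE builds on, not the paper's exact formula; the RTT value and segment timing are assumptions.

    class AckRateEstimator:
        """Sender-side available-bandwidth estimate, updated on each ACK."""
        def __init__(self, rtt: float):
            self.rtt = rtt          # smoothed round-trip time, seconds
            self.rate = 0.0         # estimated bandwidth, bytes/second
            self.last_ack = 0.0

        def on_ack(self, now: float, acked_bytes: int) -> float:
            # Blend one RTT's worth of the old estimate with the bytes
            # acknowledged since the previous ACK.
            dt = now - self.last_ack
            self.rate = (self.rtt * self.rate + acked_bytes) / (dt + self.rtt)
            self.last_ack = now
            return self.rate

    est = AckRateEstimator(rtt=0.1)
    t = 0.0
    for _ in range(200):            # ACKs for 1460-byte segments, 10 ms apart
        t += 0.01
        bw = est.on_ack(t, 1460)
    print(f"estimated bandwidth: {bw / 1000:.1f} kB/s")  # settles near 146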


Patent
29 Oct 2004
TL;DR: In this paper, a scalable layered video coding scheme that encodes video data frames into multiple layers, including a base layer of comparatively low quality video and multiple enhancement layers of increasingly higher quality video, adds error resilience to the enhancement layer.
Abstract: A scalable layered video coding scheme that encodes video data frames into multiple layers, including a base layer of comparatively low quality video and multiple enhancement layers of increasingly higher quality video, adds error resilience to the enhancement layer. Unique resynchronization marks are inserted into the enhancement layer bitstream in headers associated with each video packet, headers associated with each bit plane, and headers associated with each video-object-plane (VOP) segment. Following transmission of the enhancement layer bitstream, the decoder tries to detect errors in the packets. Upon detection, the decoder seeks forward in the bitstream for the next known resynchronization mark. Once this mark is found, the decoder is able to begin decoding the next video packet. With the addition of many resynchronization marks within each frame, the decoder can recover very quickly and with minimal data loss in the event of a packet loss or channel error in the received enhancement layer bitstream. The video coding scheme also facilitates redundant encoding of header information from the higher-level VOP header down into lower level bit plane headers and video packet headers. Header extension codes are added to the bit plane and video packet headers to identify whether the redundant data is included.

237 citations


Proceedings ArticleDOI
04 Oct 2004
TL;DR: It is shown that the right combination of primitives can yield more than 99% reliability with low overhead, overcoming different kinds of failures and providing a viable alternative to end-to-end retransmission over multiple hops.
Abstract: Many applications in wireless sensor networks, including structure monitoring, require collecting all data without loss from the nodes. End-to-end retransmission, which is used in the Internet for reliable transport, becomes very inefficient in wireless sensor networks, since wireless communication and constrained resources pose new challenges. We look at factors affecting reliability, and search for efficient combinations of the possible options. Information redundancy, such as retransmission and erasure codes, can be used. Route fix, which tries an alternative next hop after some failures, also reduces packet loss. We implemented and evaluated these options on a real test bed of Berkeley Mica2Dot motes. Our experimental results show that each option overcomes different kinds of failures. Link-level retransmission is efficient but limited in achieving reliability. Erasure coding enables very high reliability by tolerating packet losses. Route fix responds to link failures quickly. Previous work had found it difficult to increase reliability past a certain threshold. We show that the right combination of primitives can yield more than 99% reliability with low overhead, providing a viable alternative to end-to-end retransmission over multiple hops.

222 citations
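
The erasure-coding observation has a simple combinatorial core: expand k data packets into n coded packets and delivery succeeds whenever any k arrive. A sketch under assumed parameters and independent losses (real sensor-network losses are often bursty, which the paper's measurements address):

    from math import comb

    def delivery_probability(n: int, k: int, p_loss: float) -> float:
        """P(at least k of n packets survive independent loss p_loss)."""
        p = 1.0 - p_loss
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # 8 data packets, 4 redundant packets, 10% per-packet loss (assumed):
    print(f"uncoded, all 8 of 8 needed: {delivery_probability(8, 8, 0.10):.3f}")
    print(f"coded, any 8 of 12 suffice: {delivery_probability(12, 8, 0.10):.3f}")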


Patent
21 Apr 2004
TL;DR: In this article, a technique for controlling traffic in a computer network includes modifying a packet generated by a first computer, but is modified to be forwarded to a third computer, which allows for scanning of early generated packets, redirection of selected packets and routing of packets from a computer in general.
Abstract: In one embodiment, a technique for controlling traffic in a computer network includes modifying a packet generated by a first computer. The packet may be intended for a second computer, but is modified to be redirected to a third computer. The packet may be processed in the third computer prior to being forwarded from the third computer to the second computer. The packet may be scanned for viruses at the third computer, for example. Among other advantages, the technique allows for scanning of early generated packets, redirection of selected packets, and routing of packets from a computer in general.

211 citations


Journal ArticleDOI
TL;DR: It is proved that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm, and the competitive ratio of any on-line algorithm for a uniform bounded-delay buffer is bounded away from 1, independent of the delay size.
Abstract: We consider two types of buffering policies that are used in network switches supporting Quality of Service (QoS). In the FIFO type, packets must be transmitted in the order in which they arrive; the constraint in this case is the limited buffer space. In the bounded-delay type, each packet has a maximum delay time by which it must be transmitted, or otherwise it is lost. We study the case of overloads resulting in packet loss. In our model, each packet has an intrinsic value, and the goal is to maximize the total value of transmitted packets. Our main contribution is a thorough investigation of some natural greedy algorithms in various models. For the FIFO model we prove tight bounds on the competitive ratio of the greedy algorithm that discards packets with the lowest value when an overflow occurs. We also prove that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm. This algorithm can be as much as 1.5 times better than the tail-drop greedy policy, which drops the latest lowest-value packets. In the bounded-delay model we show that the competitive ratio of any on-line algorithm for a uniform bounded-delay buffer is bounded away from 1, independent of the delay size. We analyze the greedy algorithm in the general case and in three special cases: delay bound 2, link bandwidth 1, and only two possible packet values. Finally, we consider the off-line scenario. We give efficient optimal algorithms and study the relation between the bounded-delay and FIFO models in this case.

194 citations
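
The two greedy FIFO policies compared above differ only in which lowest-value packet is discarded on overflow. A sketch with an assumed capacity and packet values; it shows the mechanical difference between the policies, not the competitive-ratio result itself.

    from collections import deque

    def enqueue(buf: deque, capacity: int, value: int, drop_earliest: bool):
        """Greedy FIFO policy: on overflow, drop a lowest-value packet."""
        buf.append(value)
        if len(buf) > capacity:
            lowest = min(buf)
            if drop_earliest:
                idx = buf.index(lowest)     # earliest lowest-value packet
            else:                           # latest lowest-value packet
                idx = len(buf) - 1 - list(reversed(buf)).index(lowest)
            del buf[idx]

    arrivals = [1, 9, 1, 8]                 # packet values (assumed)
    for drop_earliest in (True, False):
        buf = deque()
        for v in arrivals:
            enqueue(buf, capacity=3, value=v, drop_earliest=drop_earliest)
        print("drop earliest" if drop_earliest else "drop latest  ", list(buf))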


Journal ArticleDOI
TL;DR: In this article, the causes of packet loss in a 38-node urban multi-hop 802.11b network are analyzed; the patterns and causes of loss are important in the design of routing and error-correction protocols.
Abstract: This paper analyzes the causes of packet loss in a 38-node urban multi-hop 802.11b network. The patterns and causes of loss are important in the design of routing and error-correction protocols, as...

Patent
Steven L. Grobman1
30 Sep 2004
TL;DR: The authors present a system and method for protecting network communication flow in a virtualized platform, using packet encoding/certification in the network stack.
Abstract: In some embodiments, the invention involves protecting network communications in a virtualized platform. An embodiment of the present invention is a system and method relating to protecting network communication flow using packet encoding/certification and the network stack. One embodiment uses a specialized engine or driver in the network stack to encode packets before they are sent to the physical network controller. The network controller may use a specialized driver to decode the packets, or have a hardware implementation of a decoder. If the decoded packet is certified, the packet is transmitted. Otherwise, the packet is dropped. An embodiment of the present invention utilizes virtualization architecture to implement the network communication paths. Other embodiments are described and claimed.

Journal ArticleDOI
TL;DR: This work proposes a receiver-driven protocol for simultaneous video streaming from multiple senders to a single receiver in order to achieve higher throughput, and to increase tolerance to packet loss and delay due to network congestion.
Abstract: With the explosive growth of video applications over the Internet, many approaches have been proposed to stream video effectively over packet switched, best-effort networks. We propose a receiver-driven protocol for simultaneous video streaming from multiple senders to a single receiver in order to achieve higher throughput, and to increase tolerance to packet loss and delay due to network congestion. Our receiver-driven protocol employs a novel rate allocation algorithm (RAA) and a packet partition algorithm (PPA). The RAA, run at the receiver, determines the sending rate for each sender by taking into account available network bandwidth, channel characteristics, and a prespecified, fixed level of forward error correction, in such a way as to minimize the probability of packet loss. The PPA, run at the senders based on a set of parameters estimated by the receiver, ensures that every packet is sent by one and only one sender, and at the same time, minimizes the startup delay. Using both simulations and Internet experiments, we demonstrate the effectiveness of our protocol in reducing packet loss.
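
A sketch of the packet-partition idea under strong simplifying assumptions (per-sender rates and one-way delays fixed and known): each packet goes to exactly one sender, namely the one that can deliver it earliest. The real PPA works from receiver-estimated parameters and also minimizes startup delay; this greedy rule is only illustrative.

    def partition(num_packets: int, pkt_bytes: int,
                  rates: list, delays: list) -> list:
        """Assign each packet to the sender that would deliver it earliest."""
        busy_until = [0.0] * len(rates)   # when each sender is next free
        owner = []
        for _ in range(num_packets):
            # Estimated arrival time if sender i takes this packet next.
            eta = [busy_until[i] + pkt_bytes / rates[i] + delays[i]
                   for i in range(len(rates))]
            i = min(range(len(rates)), key=lambda j: eta[j])
            busy_until[i] += pkt_bytes / rates[i]
            owner.append(i)
        return owner

    # Two senders: 80 kB/s with 50 ms delay, 40 kB/s with 20 ms delay.
    print(partition(10, pkt_bytes=1000, rates=[80e3, 40e3],
                    delays=[0.05, 0.02]))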

Proceedings ArticleDOI
01 Jan 2004
TL;DR: This work presents a cross-layer framework for the joint design of wireless networks and distributed controllers, and illustrates this framework by a cross-layer optimization of the link layer, MAC layer, and sample period selection in an inverted pendulum system.
Abstract: We present a cross-layer framework for the joint design of wireless networks and distributed controllers. The design objective is to optimize the control performance. This control performance is a complex function of the network parameters, such as throughput, packet delay and packet loss probabilities. The goal of optimizing the control performance imposes implicit tradeoffs on the wireless network design as opposed to the explicit tradeoffs typical in wireless data and voice applications. Specifically, the tradeoffs between network throughput, time delay and packet loss probability are intricate and implicit in the control performance index, which complicates network optimization. We show that this optimization requires a cross-layer design framework. We first present this framework for a broad class of distributed control applications. We then illustrate this framework by a cross-layer optimization of the link layer, MAC layer, and sample period selection in an inverted pendulum system. Our results indicate that cross-layer design significantly improves the performance and stability of the controller.

Journal ArticleDOI
TL;DR: This work presents three methods to estimate mean squared error (MSE) due to packet losses directly from the video bitstream, ranging from one that uses only network-level measurements to one that extracts sequence-specific information including spatio-temporal activity and the effects of error propagation.
Abstract: We consider monitoring the quality of compressed video transmitted over a packet network from the perspective of a network service provider. Our focus is on no-reference methods, which do not access the original signal, and on evaluating the impact of packet losses on quality. We present three methods to estimate mean squared error (MSE) due to packet losses directly from the video bitstream. NoParse uses only network-level measurements (like packet loss rate), QuickParse extracts the spatio-temporal extent of the impact of the loss, and FullParse extracts sequence-specific information including spatio-temporal activity and the effects of error propagation. Our simulation results with MPEG-2 video subjected to transport packet losses illustrate the performance possible using the three methods.

Proceedings ArticleDOI
07 Mar 2004
TL;DR: This work proposes multiple TFRC connections as an end-to-end rate control solution for wireless video streaming and shows that this approach not only avoids modifications to the network infrastructure or network protocol, but also results in full utilization of the wireless channel.
Abstract: Rate control is an important issue in video streaming applications for both wired and wireless networks. A widely accepted rate control method in wired networks is equation-based rate control (Sally Floyd et al., Aug. 2000), in which the TCP-friendly rate is determined as a function of packet loss rate, round-trip time and packet size. This approach, also known as TFRC, assumes that packet loss in wired networks is primarily due to congestion, and as such is not applicable to wireless networks in which the bulk of packet loss is due to error at the physical layer. We propose multiple TFRC connections as an end-to-end rate control solution for wireless video streaming. We show that this approach not only avoids modifications to the network infrastructure or network protocol, but also results in full utilization of the wireless channel. NS-2 simulations and experiments over a 1xRTT CDMA wireless data network are carried out to validate and characterize the performance of our proposed approach.
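
The "equation" in equation-based rate control is the standard TCP throughput equation of RFC 3448: the rate is a function of packet size s, round-trip time R, and loss event rate p. A sketch follows, using the RFC's t_RTO = 4R shortcut; the paper's proposal then amounts to running several such connections in parallel so the aggregate rate fills the wireless channel despite error-induced loss.

    from math import sqrt

    def tfrc_rate(s: float, rtt: float, p: float, b: float = 1.0) -> float:
        """TCP-friendly rate in bytes/s (TCP throughput equation, RFC 3448)."""
        t_rto = 4.0 * rtt                       # recommended approximation
        denom = (rtt * sqrt(2.0 * b * p / 3.0)
                 + t_rto * 3.0 * sqrt(3.0 * b * p / 8.0) * p
                   * (1.0 + 32.0 * p ** 2))
        return s / denom

    # 2% loss, 200 ms RTT, 1460-byte packets: one flow vs. four flows.
    one = tfrc_rate(s=1460, rtt=0.2, p=0.02)
    print(f"one flow: {one / 1e3:.1f} kB/s, four: {4 * one / 1e3:.1f} kB/s")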

Patent
09 Sep 2004
TL;DR: A method is presented for optimizing the throughput of TCP/IP applications by aggregating user application data and consolidating multiple TCP/IP connection streams into a single optimized stream for delivery to a destination application.
Abstract: A method for optimizing the throughput of TCP/IP applications by aggregating user application data and consolidating multiple TCP/IP connection streams into a single optimized stream for delivery to a destination application. Optimization of the internet protocol uses a packet interceptor to intercept packets from a source application, a packet driver to aggregate the intercepted packets, a data mover to transport the aggregated packets to another data mover at the destination, a destination packet driver to disaggregate the transported aggregated packets, and a destination end processor to deliver the disaggregated IP packets to the destination application.

Journal ArticleDOI
TL;DR: This work takes a broad look at the problem of enhancing TCP performance under corruption losses and provides a taxonomy of potential practical classes of mitigations that TCP end-points and intermediate network elements can cooperatively use to decrease the performance impact of corruption-based loss.

Journal ArticleDOI
TL;DR: TP-Planet replaces the inefficient slow start algorithm with a novel Initial State algorithm, which allows the capture of link resources in a very fast and controlled manner, and decouples congestion decisions from single packet losses in order to avoid the erroneous congestion decisions due to high link errors.
Abstract: Space exploration missions are crucial for acquisition of information about space and the Universe. The entire success of a mission is directly related to the satisfaction of its communications needs. For this goal, the challenges posed by the InterPlaNetary (IPN) Internet need to be addressed. Current transmission control protocols (TCPs) have very poor performance in the IPN Internet, which is characterized by extremely high propagation delays, link errors, asymmetrical bandwidth, and blackouts. The window-based congestion control, which injects a new packet into the network upon an ACK reception, is responsible for such performance degradation due to high propagation delay. Slow start algorithms of the existing TCPs further contribute to the performance degradation by wasting long time periods to reach the actual data rate. Moreover, wireless link errors amplify the problem by misleading the TCP source to unnecessarily throttle the congestion window. The recovery from erroneous window decrease takes a certain amount of time, which is proportional to the round-trip time (RTT) and further decreases the network performance. In this paper, a reliable transport protocol (TP-Planet) is presented for data traffic in the IPN Internet. It is intended to address the challenges and to achieve high throughput performance and reliable data transmission on deep-space links of the IPN Backbone Network. TP-Planet deploys a rate-based additive-increase multiplicative-decrease (AIMD) congestion control, whose AIMD parameters are tuned to help avoid throughput degradation. TP-Planet replaces the inefficient slow start algorithm with a novel Initial State algorithm, which allows the capture of link resources in a very fast and controlled manner. A new congestion detection and control mechanism is developed, which decouples congestion decisions from single packet losses in order to avoid the erroneous congestion decisions due to high link errors. In order to reduce the effects of blackout conditions on the throughput performance, TP-Planet incorporates the blackout state procedure into the protocol operation. The bandwidth asymmetry problem is addressed by the adoption of delayed selective acknowledgment (SACK). Simulation experiments show that the TP-Planet significantly improves the throughput performance and addresses the challenges posed by the IPN Backbone Network.
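
Stripped of the protocol machinery, the congestion control described above is a rate-based AIMD loop whose decrease fires only on an explicit congestion verdict, never on an individual packet loss (on deep-space links most losses are link errors). A sketch with illustrative constants:

    def aimd_step(rate: float, congestion_detected: bool,
                  alpha: float = 1_000.0, beta: float = 0.5) -> float:
        """One control interval of rate-based AIMD (alpha, beta assumed)."""
        # A plain packet loss never reaches this function; only the
        # protocol's congestion-detection verdict triggers the decrease.
        return rate * beta if congestion_detected else rate + alpha

    rate = 10_000.0                              # starting rate, bytes/s
    trace = [False] * 10 + [True] + [False] * 5  # one congestion event
    for verdict in trace:
        rate = aimd_step(rate, verdict)
    print(f"rate after the trace: {rate:.0f} bytes/s")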

Proceedings ArticleDOI
07 Mar 2004
TL;DR: A passive methodology for TCP performance evaluation over general packet radio service (GPRS) networks is presented that relies on traffic monitoring at the GPRS ingress/egress router interface to analyze TCP performance and demonstrate the applicability of the method.
Abstract: In this paper, a passive methodology for TCP performance evaluation over general packet radio service (GPRS) networks is presented that relies on traffic monitoring at the GPRS ingress/egress router interface (Gi). Based on the IP and TCP headers of the packets, we estimate the end-to-end performance of TCP connections, such as connection setup behavior and data transfer goodput. In order to identify the effects behind the measured performance, the introduced algorithms estimate round-trip delays, packet loss ratios, available channel rates, and throughput, and carry out bottleneck analysis. Large-scale GPRS measurements in seven countries are presented to analyze TCP performance and demonstrate the applicability of the method. The effects of different TCP parameters such as maximum segment size, selective acknowledgements, timestamp usage and receiver window size are also quantified. GPRS measurement results are compared to a wireline dial-up network to identify the effects specific to the wireless environment.

Patent
21 Dec 2004
TL;DR: In this paper, a method for encoding network data, such as Internet Protocol (IP) data, into a format for transmission over a satellite system is described, where the network data is configured in a packet having a data block and header information.
Abstract: A method for encoding network data, such as Internet Protocol (IP) data, into a format for transmission over a satellite system is described. The network data is configured in a packet having a data block and header information. The network data packet is encoded into a variable-length multi-packet transport (MPT) frame. The MPT frame comprises a data frame to hold data and header information. The IP packet is inserted in its entirety into the data frame of the MPT frame. The variable-length MPT frame is then encoded into one or more fixed-length MPT packets. Each MPT packet has a data fragment block comprising a portion of the MPT frame and associated header information to designate what portion of the MPT frame is contained in the data fragment block. The MPT packets are sized to be embedded as a specific-size payload of the satellite packet that is transmitted over a satellite network. Using this method, data received over a data network (i.e., Ethernet or Internet) in large network data packets are broken into smaller packets defined by the multi-packet transport. These smaller packets are then inserted as the data payload within standard fixed-size packets suitable for transmission across a particular distribution medium, such as a satellite network. The network data remains independent of the underlying network and can be easily extracted at the receiver for use by computer applications.
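
The fragment/reassemble step is easy to sketch. The header layout (frame id, offset, length) and the 188-byte fixed packet size are assumptions chosen for illustration, not the patent's actual format.

    import struct

    MPT_PACKET_SIZE = 188            # assumed fixed size (MPEG-TS-like)
    HEADER = struct.Struct(">IHH")   # frame id, fragment offset, length
    FRAG_DATA = MPT_PACKET_SIZE - HEADER.size

    def fragment(frame_id: int, frame: bytes) -> list:
        """Cut one variable-length MPT frame into fixed-size MPT packets."""
        packets = []
        for off in range(0, len(frame), FRAG_DATA):
            chunk = frame[off:off + FRAG_DATA]
            pkt = HEADER.pack(frame_id, off, len(chunk)) + chunk
            packets.append(pkt.ljust(MPT_PACKET_SIZE, b"\x00"))  # pad
        return packets

    def reassemble(packets: list) -> bytes:
        """Receiver side: order fragments by offset, rebuild the frame."""
        out = bytearray()
        offset_of = lambda p: HEADER.unpack(p[:HEADER.size])[1]
        for pkt in sorted(packets, key=offset_of):
            _, off, length = HEADER.unpack(pkt[:HEADER.size])
            out[off:off + length] = pkt[HEADER.size:HEADER.size + length]
        return bytes(out)

    ip_packet = bytes(range(256)) * 4            # a 1024-byte "IP packet"
    pkts = fragment(frame_id=7, frame=ip_packet)
    assert reassemble(pkts) == ip_packet
    print(f"{len(ip_packet)} bytes -> {len(pkts)} fixed-size MPT packets")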

Patent
Tom Mcbeath1
07 Jun 2004
TL;DR: A system for testing a segment of a data-packet network has a first probe connected substantially at one end of the segment, a second probe connected substantially at the opposite end, and a process application distributed to each probe.
Abstract: A system for testing a segment of a data-packet network has a first probe connected substantially at one end of the segment; a second probe connected substantially at the opposite end of the segment from the location of the first probe; and a process application distributed to each probe. The first and second probes collect data from and time-stamp data packets as they pass, forming first and second records of the individual packets, whereupon the second-formed records of each packet are compared with the first records of each packet for record matching, time-stamp comparison, and test result processing.

Proceedings ArticleDOI
20 Jun 2004
TL;DR: Preliminary results show that the proposed algorithm can achieve the optimum perceived voice quality compared with other algorithms under all network conditions considered.
Abstract: Perceived voice quality is an important metric in VoIP applications. The quality is mainly affected by network impairments such as delay, jitter and packet loss. A playout buffer at the receiving side can be used to compensate for the effects of jitter, based on a tradeoff between delay and loss. The main aim in this paper is to find an efficient perceived quality prediction method for perceptual optimization of the playout buffer. The contributions of the paper are three-fold. First, we propose an efficient new method for predicting voice quality for buffer design/optimization. The method can also be used for voice quality monitoring and for QoS control. In the method, nonlinear regression models are derived for a variety of codecs (e.g. G.723.1/G.729/AMR/iLBC) with the aid of ITU PESQ and the E-model. Second, we propose the use of minimum overall impairment as a criterion for buffer optimization. This criterion is more efficient than using the traditional maximum mean opinion score (MOS). Third, we show that the delay characteristics of voice over IP traffic are better characterized by a Weibull distribution than a Pareto or an exponential distribution. Based on the new voice quality prediction model, the Weibull delay distribution model and the minimum impairment criterion, we propose a perceptual optimization buffer algorithm. Preliminary results show that the proposed algorithm can achieve the optimum perceived voice quality compared with other algorithms under all network conditions considered.
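
The minimum-overall-impairment criterion can be sketched in a few lines: for each candidate playout deadline d, packets later than d count as lost, and a delay term plus a loss term is minimized over d. The Weibull parameters and impairment weights below are illustrative assumptions; the paper fits its models to ITU PESQ and the E-model.

    from math import exp

    def late_fraction(d_ms: float, scale: float = 60.0,
                      shape: float = 1.5) -> float:
        """P(network delay > d) under a Weibull delay model (assumed fit)."""
        return exp(-((d_ms / scale) ** shape))

    def total_impairment(d_ms: float, w_delay: float = 0.024,
                         w_loss: float = 30.0) -> float:
        # Delay impairment grows with the deadline; loss impairment grows
        # with the fraction of packets that miss it.
        return w_delay * d_ms + w_loss * late_fraction(d_ms)

    best = min(range(20, 400, 5), key=total_impairment)
    print(f"deadline minimizing total impairment: about {best} ms")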

Journal ArticleDOI
TL;DR: A family of new algorithms for rate-fidelity optimal packetization of scalable source bit streams with uneven error protection does away with the expediency of fractional bit allocation, a limitation of some existing algorithms.
Abstract: In this paper, we present a family of new algorithms for rate-fidelity optimal packetization of scalable source bit streams with uneven error protection. In the most general setting, where no assumption is made on the probability function of packet loss or on the rate-fidelity function of the scalable code stream, one of our algorithms can find the globally optimal solution to the problem in O(N²L²) time, compared to a previously obtained O(N³L²) complexity, where N is the number of packets and L is the packet payload size. If the rate-fidelity function of the input is convex, the time complexity can be reduced to O(NL²) for a class of erasure channels, including channels for which the probability function of losing n packets is monotonically decreasing in n and independent erasure channels with packet erasure rate no larger than N/(2(N + 1)). Furthermore, our O(NL²) algorithm for the convex case can be modified to find an approximate solution for the general case. All of our algorithms do away with the expediency of fractional bit allocation, a limitation of some existing algorithms.

Patent
13 Dec 2004
TL;DR: In this article, a compression context for a plurality of packets (130) is established with a receiving device (102b). Each of these packets is associated with one or more reliable multicast protocols.
Abstract: A compression context for a plurality of packets (130) is established with a receiving device (102b). Each of these packets is associated with one or more reliable multicast protocols, such as the Layered Coding Transport (LCT) protocol, the Asynchronous Layered Coding (ALC) protocol, the FLUTE protocol, the MUPPET protocol, and the NACK-Oriented Reliable Multicast (NORM) protocol. Upon establishment of the compression context, a compressed packet is generated for one of the plurality of packets and transmitted to the receiving device. The compressed packet has a reduced number of bits in its header. Upon receipt, a decompressor (116) of the receiving device decompresses the compressed packet based on the compression context.

Journal ArticleDOI
TL;DR: This paper proposes a new partitioning approach that results in an FSMC model with tractable queueing performance; the approach utilizes Jakes' level-crossing analysis, the distribution of the received SNR, and the elegant analytical structure of Mitra's producer-consumer fluid queueing model.
Abstract: Finite-state Markov chain (FSMC) models have often been used to characterize the wireless channel. The fitting is typically performed by partitioning the range of the received signal-to-noise ratio (SNR) into a set of intervals (states). Different partitioning criteria have been proposed in the literature, but none of them was targeted to facilitating the analysis of the packet delay and loss performance over the wireless link. In this paper, we propose a new partitioning approach that results in an FSMC model with tractable queueing performance. Our approach utilizes Jakes' level-crossing analysis, the distribution of the received SNR, and the elegant analytical structure of Mitra's producer-consumer fluid queueing model. An algorithm is provided for computing the various parameters of the model, which are then used in deriving closed-form expressions for the effective bandwidth (EB) subject to packet loss and delay constraints. Resource allocation based on the EB is key to improving the perceived capacity of the wireless medium. Numerical investigations are carried out to study the interactions among various key parameters, verify the adequacy of the analysis, and study the impact of error control parameters on the allocated bandwidth for guaranteed packet loss and delay performance.
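
For contrast with the queueing-aware criterion proposed above, a sketch of the conventional construction it improves on: partition the exponentially distributed received SNR of a Rayleigh channel into equal-probability states and estimate adjacent-state transitions from the Jakes level-crossing rate. Doppler frequency, mean SNR, state count, and packet time are assumed values.

    from math import exp, log, sqrt, pi

    F_D, MEAN_SNR, K, T = 10.0, 10.0, 4, 0.01  # Hz, linear SNR, states, s

    # Thresholds giving each state stationary probability 1/K under the
    # exponential SNR distribution P(snr < g) = 1 - exp(-g / MEAN_SNR).
    thresholds = [-MEAN_SNR * log(1.0 - k / K) for k in range(1, K)]

    def crossing_rate(g: float) -> float:
        """Jakes level-crossing rate (crossings/s) at SNR threshold g."""
        return sqrt(2.0 * pi * g / MEAN_SNR) * F_D * exp(-g / MEAN_SNR)

    for k, g in enumerate(thresholds):
        p_up = crossing_rate(g) * T / (1.0 / K)  # rate * T / state prob
        print(f"state {k} -> state {k + 1}: p ~ {p_up:.3f} "
              f"(threshold {g:.2f})")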

Patent
27 Oct 2004
TL;DR: A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network is described.
Abstract: A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision making apparatus and congestion avoidance and dequeue management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: This work develops a model which captures the impact of quantization and packet loss on the overall video quality and proposes to limit the number of routes to overcome the limitations of multi-path congestion-based partitioning schemes.
Abstract: We analyze the benefits of optimal multi-path routing on video streaming, in a band-limited ad hoc network. In such environments, the actions of each node can potentially impact the overall network conditions. Thus optimal routing solutions which seek to minimize congestion are attractive as they make use of the resources efficiently. For low-latency video streaming, we propose to limit the number of routes to overcome the limitations of such solutions. To predict the performance in terms of rate and distortion, we develop a model which captures the impact of quantization and packet loss on the overall video quality. Simulations are performed to illustrate the advantages of the multi-path congestion-based partitioning scheme and confirm the validity of the model.

Patent
17 Nov 2004
TL;DR: In this paper, the authors present a method for managing a data stream encoded according to a digital transmission protocol and configured for broadcasting to a consumer network device within a broadband communications network, where a message relating to the data stream is encapsulated within a transport layer data packet.
Abstract: A method (400) for managing a data stream (30) encoded according to a digital transmission protocol and configured for broadcasting to a consumer network device (14) within a broadband communications network (10). A message relating to the data stream is encapsulated (402) within a transport layer data packet (502). The packet has a destination port number field. A value associated with a predetermined parameter of the digital transmission protocol is created (404) within the field. Based on the value, the packet is forwarded (406) to the consumer network device according to a network layer protocol. When the forwarded message is received by the consumer network device, the consumer network device processes the data stream based on the message, and establishes an application layer communication socket based on the destination port number value. The socket is usable to receive further messages associated with the predetermined parameter of the digital transmission protocol.

Patent
Hicham Hatime1
08 Jan 2004
TL;DR: In this article, a roundtrip time parameter of TCP/IP communications networks may be adjusted according to actual measured response time of successful packet transmission and used as a timeout value to detect a possible packet loss sooner than previous techniques.
Abstract: Systems and methods for improving network throughput in a packetized network communication environment. In one aspect hereof, delays associated with retransmission of a packet are reduced to improve total performance and bandwidth utilization on the communication network medium. A round-trip time parameter of TCP/IP communications networks may be adjusted according to the actual measured response time of a successful packet transmission and used as a timeout value to detect a possible packet loss sooner than previous techniques. Another aspect hereof provides for dynamically adjusting the estimated available bandwidth of the communication medium. The estimated available bandwidth is then useful to better avoid congestion on the medium and the resulting packet loss.
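
The mechanism family this patent describes, deriving the loss-detection timeout from measured round-trip samples, is easiest to see in the standard smoothed-RTT estimator of RFC 6298; the sketch below shows that public baseline, not the patent's own adjustment rule, which the abstract does not specify.

    def update_rto(srtt, rttvar, sample, alpha=0.125, beta=0.25):
        """Fold one RTT sample into (srtt, rttvar, rto), per RFC 6298."""
        if srtt is None:                     # first measurement
            srtt, rttvar = sample, sample / 2.0
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
            srtt = (1 - alpha) * srtt + alpha * sample
        rto = max(srtt + 4.0 * rttvar, 1.0)  # one-second floor per the RFC
        return srtt, rttvar, rto

    srtt = rttvar = None
    for sample in (0.20, 0.22, 0.19, 0.45, 0.21):  # measured RTTs, seconds
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"srtt = {srtt:.3f} s, rto = {rto:.3f} s")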