
Showing papers on "Packet loss" published in 1999


Journal ArticleDOI
TL;DR: The prevalence of unusual network events such as out-of-order delivery and packet replication is characterized, and a robust receiver-based algorithm for estimating "bottleneck bandwidth" is discussed that addresses deficiencies discovered in techniques based on "packet pair".
Abstract: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20000 TCP bulk transfers between 35 Internet sites. Because we traced each 100-kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behavior due to the different directions of the Internet paths, which often exhibit asymmetries. We: (1) characterize the prevalence of unusual network events such as out-of-order delivery and packet replication; (2) discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair;" (3) investigate patterns of packet loss, finding that loss events are not well modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and (4) analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.

913 citations
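
As a point of reference for the "packet pair" technique the abstract mentions, here is a minimal Python sketch of the underlying relationship (the paper's receiver-based algorithm is considerably more robust; the function name and the use of a median are illustrative choices, not the paper's): packets of size S sent back-to-back leave the bottleneck spaced S/B seconds apart, so the bandwidth B can be estimated from the observed gap.

    # Minimal packet-pair sketch: back-to-back packets of size S bytes leave
    # the bottleneck spaced S/B seconds apart, so B ~ S / observed gap.
    def packet_pair_estimate(arrival_times, packet_size):
        """Estimate bottleneck bandwidth (bytes/s) from receiver timestamps of
        packets sent back-to-back; the median gap resists queueing noise."""
        gaps = sorted(t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:]))
        return packet_size / gaps[len(gaps) // 2]

    # 1000-byte packets arriving ~8 ms apart suggest a ~125 kbyte/s bottleneck.
    print(packet_pair_estimate([0.000, 0.008, 0.016, 0.024], 1000))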


01 Jan 1999
TL;DR: This note describes a proposed addition of ECN (Explicit Congestion Notification) to IP, and describes what modifications would be needed to TCP to make it ECN-capable.
Abstract: This note describes a proposed addition of ECN (Explicit Congestion Notification) to IP. TCP is currently the dominant transport protocol used in the Internet. We begin by describing TCP's use of packet drops as an indication of congestion. Next we argue that with the addition of active queue management (e.g., RED) to the Internet infrastructure, where routers detect congestion before the queue overflows, routers are no longer limited to packet drops as an indication of congestion. Routers could instead set a Congestion Experienced (CE) bit in the packet header of packets from ECN-capable transport protocols. We describe when the CE bit would be set in the routers, and describe what modifications would be needed to TCP to make it ECN-capable. Modifications to other transport protocols (e.g., unreliable unicast or multicast, reliable multicast, other reliable unicast transport protocols) could be considered as those protocols are developed and advance through the standards process.

808 citations
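
To make the proposal concrete, a hedged sketch of a RED-style queue that marks rather than drops packets from ECN-capable transports follows; the thresholds, queue weight, and marking curve are illustrative stand-ins, not values from the note.

    import random

    class EcnRedQueue:
        """Sketch of active queue management with ECN: once the average queue
        length signals congestion, ECN-capable packets get the CE bit set
        instead of being dropped; non-ECN packets still fall back to drops."""
        def __init__(self, min_th=5, max_th=15, weight=0.002):
            self.queue, self.avg = [], 0.0
            self.min_th, self.max_th, self.weight = min_th, max_th, weight

        def enqueue(self, pkt):
            # exponentially weighted average of the instantaneous queue length
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if self.avg > self.min_th:
                p = min(1.0, (self.avg - self.min_th) / (self.max_th - self.min_th))
                if random.random() < p:
                    if pkt.get('ect'):       # ECN-capable transport?
                        pkt['ce'] = True     # set Congestion Experienced bit
                    else:
                        return False         # drop as the congestion signal
            self.queue.append(pkt)
            return True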


Proceedings ArticleDOI
21 Mar 1999
TL;DR: It is found that a large memory size is necessary for on-line loss estimation, and that a sliding window average provides a more accurate estimate of the average loss rate than exponential smoothing for the same effective memory size.
Abstract: Understanding and modelling packet loss in the Internet is especially relevant for the design and analysis of delay-sensitive multimedia applications. We present analysis of 128 hours of end-to-end unicast and multicast packet loss measurement. From these we selected 76 hours of stationary traces for further analysis. We consider the dependence as seen in the autocorrelation function of the original loss data as well as the dependence between good run lengths and loss run lengths. The correlation timescale is found to be 1000 ms or less. We evaluate the accuracy of three models of increasing complexity: the Bernoulli model, the 2-state Markov chain model and the k-th order Markov chain model. Out of the 38 trace segments considered, the Bernoulli model was found to be accurate for 7 segments, and the 2-state model was found to be accurate for 10 segments. A Markov chain model of order 2 or greater was found to be necessary to accurately model the rest of the segments. For the case of adaptive applications which track loss, we address two issues of on-line loss estimation: the required memory size and whether to use exponential smoothing or a sliding window average to estimate average loss rate. We find that a large memory size is necessary and that the sliding window average provides a more accurate estimate for the same effective memory size.

648 citations
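
The on-line estimation question the abstract raises can be made concrete with the two estimators it compares; a minimal sketch, where the window size and smoothing constant are illustrative rather than the paper's values:

    from collections import deque

    def sliding_window_rate(loss_indications, window=1000):
        """Loss rate over the last `window` indications (1 = lost, 0 = received)."""
        buf = deque(maxlen=window)
        for x in loss_indications:
            buf.append(x)
            yield sum(buf) / len(buf)

    def ewma_rate(loss_indications, alpha=0.002):
        """Exponentially smoothed loss rate; alpha fixes the effective memory."""
        est = 0.0
        for x in loss_indications:
            est = (1 - alpha) * est + alpha * x
            yield est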


Journal ArticleDOI
TL;DR: By appropriately marking packets at overloaded resources and by charging a fixed small amount for each mark received, end-nodes are provided with the necessary information and the correct incentive to use the network efficiently.

586 citations


Proceedings ArticleDOI
21 Mar 1999
TL;DR: This paper shows that the effectiveness of RED depends, to a large extent, on the appropriate parameterization of the RED queue, and proposes and experiments with more adaptive RED gateways which parameterize themselves based on the traffic mix.
Abstract: The congestion control mechanisms used in TCP have been the focus of numerous studies and have undergone a number of enhancements. However, even with these enhancements, TCP connections still experience alarmingly high loss rates, especially during times of congestion. To alleviate this problem, the IETF is considering active queue management mechanisms, such as random early detection (RED), for deployment in the network. In this paper, we first show that the effectiveness of RED depends, to a large extent, on the appropriate parameterization of the RED queue. We then show that there is no single set of RED parameters that work well under different congestion scenarios. In light of this observation, we propose and experiment with more adaptive RED gateways which parameterize themselves based on the traffic mix. The results show that traffic cognizant parameterization of RED gateways can effectively reduce packet loss, while maintaining high link utilizations under a range of network loads.

547 citations
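
A minimal sketch of the self-parameterizing idea (the multiplicative rule and all constants are illustrative, not the paper's): scale RED's maximum marking probability up when the average queue keeps exceeding the upper threshold, and down when the queue stays below the lower one.

    def adapt_max_p(max_p, avg_queue, min_th, max_th):
        """Traffic-cognizant RED sketch: adjust marking aggressiveness from the
        observed average queue length."""
        if avg_queue > max_th:            # RED too gentle for the current load
            return min(0.5, max_p * 2.0)
        if avg_queue < min_th:            # RED too aggressive for the load
            return max(0.001, max_p / 2.0)
        return max_p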


Journal ArticleDOI
TL;DR: It is found that in the presence of massive packet reordering transmission control protocol (TCP) performance can be profoundly affected and that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
Abstract: It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.

434 citations
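
As a small illustration of how reordering can be counted at a receiver (an illustrative sketch, not the paper's measurement methodology):

    def count_reordered(sequence_numbers):
        """Count arrivals carrying a sequence number below the highest seen so
        far -- a simple receiver-side measure of reordering."""
        highest, reordered = -1, 0
        for s in sequence_numbers:
            if s < highest:
                reordered += 1
            else:
                highest = s
        return reordered

    print(count_reordered([1, 2, 4, 3, 5]))  # -> 1 (packet 3 arrived late)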


Proceedings ArticleDOI
21 Sep 1999
TL;DR: It is shown via simulations that this new carrier-sense multiple access (CSMA) protocol provides a higher throughput than its single-channel counterpart by reducing the packet loss due to collisions, and that the use of channel reservation provides better performance than multichannel CSMA with purely random idle channel selection.
Abstract: We describe a new carrier-sense multiple access (CSMA) protocol for multihop wireless networks, sometimes also called ad hoc networks. The CSMA protocol divides the available bandwidth into several channels and selects an idle channel randomly for packet transmission. It also employs a notion of "soft" channel reservation as it gives preference to the channel that was used for the last successful transmission. We show via simulations that this multichannel CSMA protocol provides a higher throughput compared to its single channel counterpart by reducing the packet loss due to collisions. We also show that the use of channel reservation provides better performance than multichannel CSMA with purely random idle channel selection.

402 citations
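
The "soft" reservation is easy to sketch; a hedged illustration of the channel-selection step only, since the protocol's actual sensing and timing logic is not shown here:

    import random

    def pick_channel(idle_channels, last_success):
        """Soft channel reservation: prefer the channel of the last successful
        transmission if it is idle; otherwise pick an idle channel at random."""
        if last_success in idle_channels:
            return last_success
        return random.choice(sorted(idle_channels)) if idle_channels else None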


Proceedings ArticleDOI
30 Aug 1999
TL;DR: An end-system architecture centered around a Congestion Manager (CM) that ensures proper congestion behavior and allows applications to easily adapt to network congestion, and concludes that the CM provides a useful and pragmatic framework for building adaptive Internet applications.
Abstract: This paper presents a novel framework for managing network congestion from an end-to-end perspective. Our work is motivated by trends in traffic patterns that threaten the long-term stability of the Internet. These trends include the use of multiple independent concurrent flows by Web applications and the increasing use of transport protocols and applications that do not adapt to congestion. We present an end-system architecture centered around a Congestion Manager (CM) that ensures proper congestion behavior and allows applications to easily adapt to network congestion. Our framework integrates congestion management across all applications and transport protocols. The CM maintains congestion parameters and exposes an API to enable applications to learn about network characteristics, pass information to the CM, and schedule data transmissions. Internally, it uses a window-based control algorithm, a scheduler to regulate transmissions, and a lightweight protocol to elicit feedback from receivers. We describe how TCP and an adaptive real-time streaming audio application can be implemented using the CM. Our simulation results show that an ensemble of concurrent TCP connections can effectively share bandwidth and obtain consistent performance, without adversely affecting other network flows. Our results also show that the CM enables audio applications to adapt to congestion conditions without having to perform congestion control or bandwidth probing on their own. We conclude that the CM provides a useful and pragmatic framework for building adaptive Internet applications.

383 citations
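
A toy sketch of the CM pattern the abstract describes, a shared window plus a callback-style API; all method names here are hypothetical, since the paper defines its own API:

    class CongestionManager:
        """Toy end-system congestion manager: one window-based controller
        shared by an ensemble of flows (method names hypothetical)."""
        def __init__(self):
            self.cwnd, self.outstanding, self.waiters = 4, 0, []

        def request_send(self, app_callback):
            """An application asks to send; the CM calls back when the shared
            window has room, which is how transmissions get scheduled."""
            if self.outstanding < self.cwnd:
                self.outstanding += 1
                app_callback()
            else:
                self.waiters.append(app_callback)

        def feedback(self, lost):
            """Receiver feedback drives one window update all flows inherit."""
            self.outstanding = max(0, self.outstanding - 1)
            self.cwnd = max(1, self.cwnd // 2) if lost else self.cwnd + 1
            while self.waiters and self.outstanding < self.cwnd:
                self.outstanding += 1
                self.waiters.pop(0)()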


Proceedings ArticleDOI
21 Mar 1999
TL;DR: A simple algorithm is obtained that optimizes a subjective measure of quality as opposed to an objective one, incorporates the constraints of rate control and playout delay adjustment schemes, and adapts to varying loss conditions in the network.
Abstract: Excessive packet loss rates can dramatically decrease the audio quality perceived by users of Internet telephony applications. Previous results suggest that error control schemes using forward error correction (FEC) are good candidates for decreasing the impact of packet loss on audio quality. However, the FEC scheme must be coupled to a rate control scheme. Furthermore, the amount of redundant information used at any given point in time should also depend on the characteristics of the loss process at that time (it would make no sense to send much redundant information when the channel is loss free), on the end to end delay constraints (destinations typically have to wait longer to decode the FEC as more FEC information is used), on the quality of the redundant information, etc. However, given all these constraints, it is not clear how to choose the "best" possible redundant information. We address this issue, and illustrate the approach using an FEC scheme for packet audio standardized in the IETF. We show that the problem of finding the best redundant information can be expressed mathematically as a constrained optimization problem for which we give explicit solutions. We obtain from these solutions a simple algorithm with very interesting features, namely (i) the algorithm optimizes a subjective measure (such as the audio quality perceived at a destination) as opposed to an objective measure of quality (such as the packet loss rate at a destination), (ii) it incorporates the constraints of rate control and playout delay adjustment schemes, and (iii) it adapts to varying loss conditions in the network (estimated online with RTCP feedback). We have been using the algorithm together with a TCP-friendly rate control scheme, and we have found it to provide very good audio quality even over paths with high and varying loss rates. We present simulation and experimental results to illustrate its performance.

377 citations
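
The constrained-optimization framing can be illustrated with a toy enumeration; the utility functions, rates, and option names below are invented stand-ins for the paper's subjective quality measure:

    def best_fec(options, loss_rate, rate_budget):
        """Pick the redundancy option maximizing expected quality subject to a
        rate constraint. options: name -> (extra_rate, quality_fn)."""
        feasible = {name: fn(loss_rate)
                    for name, (rate, fn) in options.items() if rate <= rate_budget}
        return max(feasible, key=feasible.get) if feasible else None

    options = {
        'none':      (0.0, lambda p: 1.0 - p),       # quality falls with loss
        'low_fec':   (0.2, lambda p: 1.0 - p ** 2),  # survives single losses
        'heavy_fec': (0.5, lambda p: 1.0 - p ** 3),  # survives double losses
    }
    print(best_fec(options, loss_rate=0.1, rate_budget=0.3))  # -> 'low_fec'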


Journal ArticleDOI
TL;DR: A theoretical framework is derived by which the Internet packet loss behavior can be directly related to the picture quality perceived at the receiver and it is demonstrated how this framework can be used to select appropriate parameter values for the overall system design.
Abstract: In this article we describe and investigate an Internet video streaming system based on a scalable video coder combined with unequal error protection that maintains an acceptable picture quality over a wide range of connection qualities. The proposed approach does not require any specific support from the network layer and is especially suited for Internet multicast applications where different users are perceiving different transmission conditions and no feedback channel can be employed. We derive a theoretical framework for the overall system by which the Internet packet loss behavior can be directly related to the picture quality perceived at the receiver. We demonstrate how this framework can be used to select appropriate parameter values for the overall system design. Experimental results show how the presented system achieves a gracefully degrading picture quality for packet losses up to 30%.

296 citations


01 Sep 1999
TL;DR: This memo defines a metric for one-way packet loss across Internet paths.
Abstract: This memo defines a metric for one-way packet loss across Internet paths. [STANDARDS-TRACK]

01 Jan 1999
TL;DR: In this article, a simple congestion control law for high-speed data networks is proposed to guarantee stability of network queues and full utilization of network links in a general network topology and traffic scenario during both transient and steady-state conditions.
Abstract: High-speed communication networks are characterized by large bandwidth-delay products. This may have an adverse impact on the stability of closed-loop congestion control algorithms. In this paper, classical control theory and Smith's principle are proposed as key tools for designing an effective and simple congestion control law for high-speed data networks. Mathematical analysis shows that the proposed control law guarantees stability of network queues and full utilization of network links in a general network topology and traffic scenario during both transient and steady-state conditions. In particular, no data loss is guaranteed using buffers with any capacity, whereas full utilization of links is ensured using buffers with capacity at least equal to the bandwidth-delay product. The control law is transformed to a discrete-time form and is applied to ATM networks. Moreover, a comparison with the ERICA algorithm is carried out. Finally, the control law is transformed to a window form and is applied to the Internet. The resulting control law surprisingly reveals that today's Transmission Control Protocol/Internet Protocol implements a Smith predictor for congestion control. This provides a theoretical insight into the congestion control mechanism of TCP/IP along with a method to modify and improve this mechanism in a way that is backward compatible. © 1999 Elsevier Science Ltd. All rights reserved.
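
A schematic discrete-time sketch of a Smith-predictor rate controller in the spirit of the abstract; the gain, names, and exact form are illustrative, not the paper's law:

    def smith_rate(q_target, q_now, in_flight, capacity, k=0.5):
        """Command a sending rate proportional to the free buffer space minus
        the data already in flight; the in-flight term is the Smith-predictor
        compensation for the large bandwidth-delay product."""
        predicted_backlog = q_now + in_flight
        return max(0.0, min(capacity, k * (q_target - predicted_backlog)))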

Journal ArticleDOI
TL;DR: In this article, the authors propose a control law for high-speed data networks that guarantees stability of network queues and full utilization of network links in a general network topology and traffic scenario during both transient and steady-state conditions.

Patent
28 Jun 1999
TL;DR: In this paper, the authors define two short message protocols, one of which relies on a statistical model and the other of which uses positive acknowledgement to track receipt of transmitted packets by each intended recipient.
Abstract: In a network with a sending system networked to at least one receiving system, it is sometimes desirable to transfer relatively short messages between the sending system and one or more receiving systems in a highly reliable yet highly efficient manner. The present invention defines two short message protocols, one of which relies on a statistical model and the other of which uses positive acknowledgement to track receipt of transmitted packets by intended recipient. The statistical reliability mode is based on the observation that for each packet in a message that is transmitted, the probability that at least one packet of the message is received by a given system increases. Thus, in the statistical reliability mode messages are divided into a guaranteed minimum number of packets, with additional packets being added if the message length is insufficient to fill the minimum number of packets. The positive reliability mode of the present invention periodically sets an acknowledgement flag in the packets transmitted for a message. Receiving systems send an acknowledgement in response to receipt of that packet. The sending system tracks receipt of acknowledgements by intended recipient and retransmits any unacknowledged packets so as to positively assure the packets are received. Receiving systems send negative acknowledgements to request retransmission of missing packets. Negative acknowledgement suppression is implemented at both the sender and receiver to prevent a flood of negative acknowledgements from overwhelming the network. Packets are transmitted by the sending system at a transmission rate selected to avoid any adverse impact on the packet loss rate of the network.
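
The observation behind the statistical mode is simple probability; a worked sketch, assuming independent losses, with illustrative numbers:

    def p_at_least_one(n_packets, loss_rate):
        """Probability that at least one of n independently transmitted
        packets of a message reaches a given receiver."""
        return 1.0 - loss_rate ** n_packets

    # At 10% loss, padding a message out to a 4-packet minimum already gives
    # a 0.9999 chance that at least one packet arrives.
    print(p_at_least_one(4, 0.10))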

Proceedings ArticleDOI
21 Mar 1999
TL;DR: The network routing messages exchanged between core Internet backbone routers are examined to show that as a result of specific router vendor software changes suggested by earlier analysis, the volume of Internet routing updates has decreased by an order of magnitude.
Abstract: This paper examines the network routing messages exchanged between core Internet backbone routers. Internet routing instability, or the rapid fluctuation of network reachability information, is an important problem currently facing the Internet engineering community. High levels of network instability can lead to packet loss, increased network latency and time to convergence. At the extreme, high levels of routing instability have led to the loss of internal connectivity in wide-area, national networks. In an earlier study of inter-domain routing, we described widespread, significant pathological behaviour in the routing information exchanged between backbone service providers at the major US public Internet exchange points. These pathologies included several orders of magnitude more routing updates in the Internet core than anticipated, large numbers of duplicate routing messages, and unexpected frequency components between routing instability events. The work described in this paper extends our earlier analysis by identifying the origins of several of these observed pathological Internet routing behaviours. We show that as a result of specific router vendor software changes suggested by our earlier analysis, the volume of Internet routing updates has decreased by an order of magnitude. We also describe additional router software changes that can decrease the volume of routing updates exchanged in the Internet core by an additional 30 percent or more. We conclude with a discussion of trends in the evolution of Internet architecture and policy that may lead to a rise in Internet routing instability.

01 Jan 1999
TL;DR: It is demonstrated that algorithms embedded in the end-systems are able to synthesize a telephony-like service by blocking calls at times when the load in the packet network is high.
Abstract: We describe how a packet network with a simple pricing mechanism and no connection acceptance control may be used to carry a telephony-like service with low packet loss and some call blocking. The packet network uses packet marking to indicate congestion and end-systems are charged a fixed small amount per mark received. The end-systems are thus provided with the information and the incentive to use the packet network efficiently. We demonstrate that algorithms embedded in the end-systems are able to synthesize a telephony-like service by blocking calls at times when the load in the packet network is high.

Proceedings Article
11 Oct 1999
TL;DR: This paper explores using the TCP protocol to provide more accurate network measurements than traditional tools, while still preserving their near-universal applicability.
Abstract: Understanding wide-area network characteristics is critical for evaluating the performance of Internet applications. Unfortunately, measuring the end-to-end network behavior between two hosts can be problematic. Traditional ICMP-based tools, such as ping, are easy to use and work universally, but produce results that are limited and inaccurate. Measurement infrastructures, such as NIMI, can produce highly detailed and accurate results, but require specialized software to be deployed at both the sender and the receiver. In this paper we explore using the TCP protocol to provide more accurate network measurements than traditional tools, while still preserving their near-universal applicability. Our first prototype, a tool called sting, is able to accurately measure the packet loss rate on both the forward and reverse paths between a pair of hosts. We describe the techniques used to accomplish this, how they were validated, and present our preliminary experience measuring the packet loss rates to and from a variety of Web servers.
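
Why separating the two directions matters can be seen with a little arithmetic; this illustrates the motivation rather than sting's technique itself, since a round-trip probe fails if either direction drops it:

    def round_trip_loss(p_forward, p_reverse):
        """Loss rate a ping-style probe observes: the probe must survive both
        the forward and the reverse path."""
        return 1.0 - (1.0 - p_forward) * (1.0 - p_reverse)

    # 5% forward and 20% reverse loss look like ~24% to ping; a TCP-based tool
    # such as sting can attribute the loss to each direction separately.
    print(round_trip_loss(0.05, 0.20))  # ~0.24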

Proceedings ArticleDOI
08 Nov 1999
TL;DR: It is shown how learning can support intelligent behavior of cognitive packets in cognitive packet networks, in which intelligent capabilities for routing and flow control are concentrated in the packets, rather than in the nodes and protocols.
Abstract: We propose cognitive packet networks (CPN) in which intelligent capabilities for routing and flow control are concentrated in the packets, rather than in the nodes and protocols. Cognitive packets within a CPN route themselves. They are assigned goals before entering the network and pursue these goals adaptively. Cognitive packets learn from their own observations about the network and from the experience of other packets with whom they exchange information via mailboxes. Cognitive packets rely minimally on routers. This paper describes CPN and shows how learning can support intelligent behavior of cognitive packets.

Patent
18 Feb 1999
TL;DR: In this article, a system and method for improving the efficiency of packet-based networks by using aggregate packets is described, which can reduce the transmission time when there are multiple packets being sent to common destinations because the interpacket time may be reduced.
Abstract: A system and method for improving the efficiency of a packet-based network by using aggregate packets are described. One example method involves determining which network devices support aggregate packets. If a first packet is received on a route that supports aggregate packets, it is then held for a short period. During this short period, if an additional packet is received that shares at least one common route element that also supports aggregate packets with the first packet, the first packet and the additional packet are combined into a single larger aggregate packet. This can reduce the transmission time when there are multiple packets being sent to common destinations because the inter-packet time may be reduced. Additionally, in some networks, this technique allows the bandwidth of a common medium to be more fully used because more of the packets will be closer to the maximum size allowed.
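
A hedged sketch of the hold-and-merge step described above; the field names, hold time, and size cap are illustrative, and route elements are modeled as sets:

    import time

    def aggregate(first_pkt, next_pkt, hold_s=0.005, max_size=1500):
        """Hold the first packet briefly and fold in later packets that share
        an aggregate-capable route element, up to a size cap. next_pkt() is a
        non-blocking source returning a packet dict or None."""
        deadline = time.monotonic() + hold_s
        bundle, size = [first_pkt], len(first_pkt['payload'])
        while time.monotonic() < deadline:
            pkt = next_pkt()
            if pkt is None:
                continue
            shares_route = bool(pkt['route'] & first_pkt['route'])  # set overlap
            if shares_route and size + len(pkt['payload']) <= max_size:
                bundle.append(pkt)
                size += len(pkt['payload'])
        return bundle  # packets to combine into one aggregate packet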

Patent
15 Oct 1999
TL;DR: In this article, the authors present a method and apparatus for monitoring packet loss activity in an Internet Protocol (IP) network clustering system which can provide a useful discrete and tangible mechanism for controlled failover of the TCP/IP network cluster system.
Abstract: The present invention is a method and apparatus for monitoring packet loss activity in an Internet Protocol (IP) network clustering system which can provide a useful discrete and tangible mechanism for controlled failover of the TCP/IP network cluster system. An adaptive interval value is determined as a function of the average packet loss in the system, and this adaptive interval value is used to determine when a cluster member must send the next keepalive message to all other cluster members; the keepalive message is used to determine network packet loss.
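
A minimal sketch of such an adaptive interval (the scaling rule and all constants are illustrative, not the patent's formula): keepalives are sent more frequently as the measured average loss grows, keeping failover detection timely.

    def keepalive_interval(avg_loss, base_s=1.0, min_s=0.25, max_s=5.0):
        """Map the cluster's average packet loss to the delay before the next
        keepalive message; higher loss -> more frequent keepalives."""
        return max(min_s, min(max_s, base_s * (1.0 - avg_loss)))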

Patent
08 Feb 1999
TL;DR: In this paper, a high-speed rule processing method for packet filtering is presented, where the rules are divided into N orthogonal dimensions that comprise aspects of each packet that may be examined and tested, each of the dimensions are then divided into a set of dimension rule ranges.
Abstract: As Internet packet flow increases, the demand for high speed packet filtering has grown. The present invention introduces a high-speed rule processing method that may be used for packet filtering. The method pre-processes a set of packet filtering rules such that the rules may be searched in parallel by a set of independent search units. Specifically, the rules are divided into N orthogonal dimensions that comprise aspects of each packet that may be examined and tested. Each of the N dimensions is then divided into a set of dimension rule ranges. Each rule range is assigned a value that specifies the rules that may apply in that range. The rule preprocessing is completed by creating a search structure to be used for classifying a packet into one of the rule ranges in each of the N dimensions. Each search structure may be used by an independent search unit such that all N dimensions may be searched concurrently. The packet processing method of the present invention activates the N independent search units to search the N search structures created during pre-processing. The output of each of the N search structures is then logically combined to select a rule to be applied.
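
A compact sketch of the per-dimension range search and the final combination; the bitmask encoding and boundary conventions are illustrative, and the patent runs the N searches in parallel units rather than in a loop:

    import bisect

    def classify(packet_fields, dim_ranges):
        """For each dimension, locate the rule range containing the packet's
        field value, then AND the ranges' rule bitmasks: bit i survives only
        if rule i can match in every dimension. dim_ranges holds, per
        dimension, a sorted list of (upper_bound, rule_bitmask) pairs with the
        last upper bound acting as a catch-all."""
        applicable = ~0  # start with all rules possible
        for value, ranges in zip(packet_fields, dim_ranges):
            idx = bisect.bisect_left([ub for ub, _ in ranges], value)
            applicable &= ranges[idx][1]
        return applicable

    dims = [
        [(1023, 0b01), (65535, 0b11)],   # dimension 0: e.g., destination port
        [(6, 0b11), (255, 0b01)],        # dimension 1: e.g., protocol number
    ]
    print(classify((8080, 6), dims))     # -> 3 (both rules can match)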

Proceedings ArticleDOI
Henning Sanneck, Georg Carle
27 Dec 1999
TL;DR: This paper provides means for a comprehensive characterization of loss processes by employing a model that captures loss burstiness and distances between loss bursts, and serves as a framework in which packet loss metrics existing in the literature can be described as model parameters and thus integrated into the loss process characterization.
Abstract: For the same long-term loss ratio, different loss patterns lead to different application-level Quality of Service (QoS) perceived by the users (short-term QoS). While basic packet loss measures like the mean loss rate are widely used in the literature, much less work has been devoted to capturing a more detailed characterization of the loss process. In this paper, we provide means for a comprehensive characterization of loss processes by employing a model that captures loss burstiness and distances between loss bursts. Model parameters can be approximated based on run-lengths of received/lost packets. We show how the model serves as a framework in which packet loss metrics existing in the literature can be described as model parameters and thus integrated into the loss process characterization. Variations of the model with different complexity are introduced, including the well-known Gilbert model as a special case. Finally we show how our loss characterization can be used by applying it to actual Internet loss traces.
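
For the two-state (Gilbert) special case, the run-length approximation the abstract mentions is direct; a minimal sketch, assuming geometric run lengths:

    def gilbert_params(loss_runs, good_runs):
        """Estimate Gilbert-model transitions from observed run lengths:
        p = P(good -> loss) and q = P(loss -> good) are the reciprocals of the
        mean good-run and loss-run lengths."""
        p = 1.0 / (sum(good_runs) / len(good_runs))
        q = 1.0 / (sum(loss_runs) / len(loss_runs))
        mean_loss_rate = p / (p + q)  # stationary probability of the loss state
        return p, q, mean_loss_rate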

Patent
22 Sep 1999
TL;DR: A communication protocol is proposed that provides link-level and media access control (MAC) level functions for wireless ad hoc networks, is robust to mobility and other dynamics, and scales to dense networks.
Abstract: A communication protocol that provides link-level and media access control (MAC) level functions for wireless (e.g., ad-hoc) networks, is robust to mobility and other dynamics, and scales to dense networks. In a mobile or otherwise dynamic network, any control-packet collisions will be only temporary and fair. In a dense network, the network performance degrades gracefully, ensuring that only a certain percentage of the common channel is consumed with control packets. The integrated protocol allows packets (e.g., data scheduling control packets) to be scheduled in a collision-free and predictable manner (known to all neighbors); multicast packets can be reliably scheduled, as can streams of delay- or delay-jitter-sensitive traffic. Further, using an optional network code, the scheduling of control packets can appear to observers to be randomized.

Journal ArticleDOI
TL;DR: This paper presents a rate-distortion optimized mode selection method for packet lossy environments that takes into account the network conditions and the error concealment method used at the decoder.
Abstract: Reliable transmission of compressed video in a packet lossy environment cannot be achieved without error recovery mechanisms. We describe an effective method for increasing error resilience of video transmission over packet lossy networks such as the Internet. Intra coding (without reference to a previous picture) is a well-known technique for eliminating temporal error propagation in a predictive video coding system. Randomly intra coding blocks increases error resilience to packet loss. However, when the error concealment used by the decoder is known, intra encoding following a method that optimizes the tradeoffs between compression efficiency and error resilience is a better alternative. In this paper, we present a rate-distortion optimized mode selection method for packet lossy environments that takes into account the network conditions and the error concealment method used at the decoder. We present results for different packet loss rates and typical packet sizes of the Internet, that illustrate the advantages of the proposed method.
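
A schematic sketch of loss-aware mode selection in this spirit; the distortion bookkeeping is heavily simplified and every field in the structure is a stand-in, as the paper's model is more detailed:

    def choose_modes(block_stats, loss_rate, lam):
        """Per block, pick intra or inter coding by minimizing expected
        distortion + lambda * rate, where the expectation folds in the packet
        loss rate and the decoder's concealment distortion."""
        decisions = {}
        for blk, s in block_stats.items():
            best_mode, best_cost = None, float('inf')
            for mode in ('intra', 'inter'):
                d = ((1 - loss_rate) * s[mode]['dist']
                     + loss_rate * s['conceal_dist'])
                if mode == 'inter':  # inter blocks also inherit propagated error
                    d += (1 - loss_rate) * s['propagated_dist']
                cost = d + lam * s[mode]['rate']
                if cost < best_cost:
                    best_mode, best_cost = mode, cost
            decisions[blk] = best_mode
        return decisions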

Patent
29 Sep 1999
TL;DR: In this paper, a programmable network element (400) operates on packet traffic flowing through the element in accordance with a gateway program (404, 405, 406) which is dynamically uploaded into the network element or unloaded from it via a mechanism separate from the actual packet traffic as the element operates.
Abstract: A programmable network element (400) operates on packet traffic flowing through the element in accordance with a gateway program (404, 405, 406) which is dynamically uploaded into the network element or unloaded from it via a mechanism separate from the actual packet traffic as the element operates. Such programmable network element can simultaneously operate on plural packet flows with different or the same programs being applied to each flow. A dispatcher (402) provides a packet filter (403) with a set of rules provided by one or more of the dynamically loaded and invoked programs. These rules define, for each program, the characteristics of those packets flowing through the network element that are to be operated upon in some manner. A packet that flows from the network through the filter and satisfies one or more of such rules is sent by the packet filter to the dispatcher. The dispatcher, in accordance with one of the programs, either sends the packet to the program for manipulation by the program itself, or manipulates the packet itself in a manner instructed by the program. The processed packet is sent back through the filter to the network for routing to its destination.

Patent
02 Aug 1999
TL;DR: A TCP-aware agent sublayer (TAS) is proposed to minimize the effects of faults over an air link of a wireless transmission channel utilizing Transmission Control Protocol (TCP).
Abstract: Disclosed is a system for minimizing the effects of faults over an air link of a wireless transmission channel utilizing Transmission Control Protocol (TCP). The system includes a TCP-Aware Agent Sublayer (TAS) in a protocol stack, which has a mechanism for caching both TCP packets during forward transmission and acknowledgment (ACK) return packets. The caching mechanism is located near a wireless link of the wireless transmission channel. The system also includes a link monitoring agent coupled to the TAS. The link monitoring agent monitors the condition of the wireless transmission channel for an occurrence of a predefined fault. Once a predefined fault is detected, a system response is implemented based on the type of fault encountered. When the fault is an air link packet loss, an associated packet is immediately retransmitted from the cache, and when the fault is a temporary disconnect, a congestion window of the TCP source is closed.

Patent
Juin-Hwey Chen
30 Mar 1999
TL;DR: In this paper, a scalable and low-complexity adaptive transform coding method for speech and general audio signals is presented; the method is particularly suitable for Internet Protocol (IP)-based multimedia communications.
Abstract: High-quality, low-complexity and low-delay scalable and embedded system and method are disclosed for coding speech and general audio signals. The invention is particularly suitable in Internet Protocol (IP)-based multimedia communications. Adaptive transform coding, such as a Modified Discrete Cosine Transform, is used, with multiple small-size transforms in a given signal frame to reduce the coding delay and computational complexity. In a preferred embodiment, for a chosen sampling rate of the input signal, one or more output sampling rates may be decoded with varying degrees of complexity. Multiple sampling rates and bit rates are supported due to the scalable and embedded coding approach underlying the present invention. Further, a novel adaptive frame loss concealment approach is used to reduce the distortion caused by packet loss in communications using IP networks.

Journal ArticleDOI
01 Aug 1999
TL;DR: A solution to the problem of separating congestion-induced packet loss from other types of packet loss is presented and analysed using simulation techniques.
Abstract: Most commonly used data transfer protocols assume that every packet loss is an indication of network congestion. This interaction between the error recovery and the congestion control procedures results in a low utilisation of the link when there is an appreciable rate of losses due to link errors. This issue is significant for wireless links, particularly those with a long propagation delay. A solution to this problem, the inability to separate congestion-induced packet loss from other types of packet loss, is presented and analysed using simulation techniques.

Proceedings ArticleDOI
21 Mar 1999
TL;DR: This work presents MTCP, a congestion control scheme for large-scale reliable multicast that incorporates several novel features, and proposes new techniques that can effectively handle instances of congestion occurring simultaneously at various parts of a multicast tree.
Abstract: We present MTCP, a congestion control scheme for large-scale reliable multicast. Congestion control for reliable multicast is important because of its wide applications in multimedia and collaborative computing, yet nontrivial, because of the potentially large number of receivers involved. Many schemes have been proposed to handle the recovery of lost packets in a scalable manner; but there is little work on the design and implementation of congestion control schemes for reliable multicast. We propose new techniques that can effectively handle instances of congestion occurring simultaneously at various parts of a multicast tree. Our protocol incorporates several novel features: (1) hierarchical congestion status reports that distribute the load of processing feedback from all receivers across the multicast group, (2) the relative time delay (RTD) concept which overcomes the difficulty of estimating round-trip times in tree-based multicast environments, (3) window-based control that prevents the sender from transmitting faster than packets leave the bottleneck link on the multicast path through which the sender's traffic flows, (4) a retransmission window that regulates the flow of repair packets to prevent local recovery from causing congestion, and (5) a selective acknowledgment scheme that prevents independent (i.e., non-congestion-related) packet loss from reducing the sender's transmission rate. We have implemented MTCP both on UDP in SunOS 5.6 and on the simulator ns, and we have conducted extensive Internet experiments and simulation to test the scalability and inter-fairness properties of the protocol. The encouraging results we have obtained support our confidence that TCP-like congestion control for large-scale reliable multicast is within our grasp.

Patent
Pawan Goyal, Gisli Hjalmtysson
09 Apr 1999
TL;DR: In this paper, a method and apparatus for communicating information in a network is described, where a packet for the information is generated at a first network device and a flow label is assigned to the packet.
Abstract: A method and apparatus for communicating information in a network is described. A packet for the information is generated at a first network device. The first network device assigns a flow label to the packet. The flow label indicates that the packet is part of a particular sequence of packets. The first network device also assigns a direction to the packet by, for example, setting a bit in the flow label. The packet is then sent to a second network device through at least one intermediate network device. This process is continued for the entire sequence of packets. The intermediate network device actually routes the packets to the second network device. The intermediate network device receives the packets at an input port. A flow label is identified for each packet. The intermediate network device determines whether a flow table has an entry for the flow label. If there is no present entry for the flow label in the flow table, an entry for the flow label is created. If there is an entry for the flow label, an output port associated with the flow label is obtained. The intermediate network device then sends the packet to the output port. This continues at each intermediate network device until each packet reaches the second network device.
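
The intermediate device's per-packet work reduces to a small lookup; a hedged sketch with illustrative names:

    def route_packet(pkt, flow_table, compute_output_port):
        """Look up the packet's flow label; on first sight, route once and
        remember the output port, so later packets of the sequence follow it."""
        label = pkt['flow_label']
        if label not in flow_table:
            flow_table[label] = compute_output_port(pkt)
        return flow_table[label]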