
Showing papers on "Packet loss" published in 2002


01 Jan 2002
TL;DR: The TESLA (Timed Efficient Stream Loss-tolerant Authentication) broadcast authentication protocol is presented, an efficient protocol with low communication and computation overhead, which scales to large numbers of receivers, and tolerates packet loss.
Abstract: One of the main challenges of securing broadcast communication is source authentication, or enabling receivers of broadcast data to verify that the received data really originates from the claimed source and was not modified en route. This problem is complicated by mutually untrusted receivers and unreliable communication environments where the sender does not retransmit lost packets. This article presents the TESLA (Timed Efficient Stream Loss-tolerant Authentication) broadcast authentication protocol, an efficient protocol with low communication and computation overhead, which scales to large numbers of receivers, and tolerates packet loss. TESLA is based on loose time synchronization between the sender and the receivers. Despite using purely symmetric cryptographic functions (MAC functions), TESLA achieves asymmetric properties. We discuss a PKI application based purely on TESLA, assuming that all network nodes are loosely time synchronized.

958 citations
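The core of TESLA is a one-way key chain combined with delayed key disclosure: a packet is MACed with a key that is still secret when the packet arrives and is revealed one or more intervals later, so receivers buffer the packet until the disclosed key can be checked against a pre-committed chain anchor. A minimal sketch of those mechanics, assuming SHA-256 for the chain and HMAC-SHA-256 for the per-packet MAC (the protocol itself does not fix these primitives):

```python
import hashlib
import hmac

def make_key_chain(seed: bytes, length: int) -> list[bytes]:
    # One-way chain K[i] = H(K[i+1]); keys are used and later disclosed in
    # increasing index order, so knowing K[i] never reveals a later key.
    chain = [seed]                       # the seed acts as the final key
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()                      # chain[0] is the anchor committed up front
    return chain

def mac_packet(interval_key: bytes, payload: bytes) -> bytes:
    # Purely symmetric authentication of one packet with its interval key.
    return hmac.new(interval_key, payload, hashlib.sha256).digest()

def verify_disclosed_key(anchor: bytes, key: bytes, index: int) -> bool:
    # Any later-disclosed key is validated by hashing it back to the anchor,
    # which is why missing earlier disclosures (packet loss) does no harm.
    for _ in range(index):
        key = hashlib.sha256(key).digest()
    return hmac.compare_digest(key, anchor)
```

The asymmetry comes from timing rather than public-key operations: loose time synchronization lets a receiver reject any packet whose MAC key could already have been disclosed when the packet arrived.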


Journal ArticleDOI
TL;DR: Stochastic fair BLUE (SFB), a queue management algorithm which can identify and rate-limit nonresponsive flows using a very small amount of state information, is proposed and evaluated; BLUE is shown to perform significantly better than RED, both in terms of packet loss rates and buffer size requirements in the network.
Abstract: In order to stem the increasing packet loss rates caused by an exponential increase in network traffic, the IETF has been considering the deployment of active queue management techniques such as RED (random early detection) (see Floyd, S. and Jacobson, V., IEEE/ACM Trans. Networking, vol.1, p.397-413, 1993). While active queue management can potentially reduce packet loss rates in the Internet, we show that current techniques are ineffective in preventing high loss rates. The inherent problem with these algorithms is that they use queue lengths as the indicator of the severity of congestion. In light of this observation, a fundamentally different active queue management algorithm, called BLUE, is proposed, implemented and evaluated. BLUE uses packet loss and link idle events to manage congestion. Using both simulation and controlled experiments, BLUE is shown to perform significantly better than RED, both in terms of packet loss rates and buffer size requirements in the network. As an extension to BLUE, a novel technique based on Bloom filters (see Bloom, B., Commun. ACM, vol.13, no.7, p.422-6, 1970) is described for enforcing fairness among a large number of flows. In particular, we propose and evaluate stochastic fair BLUE (SFB), a queue management algorithm which can identify and rate-limit nonresponsive flows using a very small amount of state information.

587 citations
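BLUE's defining move is to drive the marking probability from events rather than queue length: increase it when the buffer overflows, decrease it when the link goes idle. A minimal sketch, with the step sizes and the freeze_time hold-off chosen purely for illustration:

```python
import random

class Blue:
    def __init__(self, inc: float = 0.02, dec: float = 0.002,
                 freeze_time: float = 0.1):
        self.p = 0.0                    # current marking/drop probability
        self.inc, self.dec = inc, dec
        self.freeze_time = freeze_time  # minimum seconds between updates
        self.last_update = float("-inf")

    def on_packet_loss(self, now: float) -> None:
        # Buffer overflow: congestion is being signaled too gently.
        if now - self.last_update > self.freeze_time:
            self.p = min(1.0, self.p + self.inc)
            self.last_update = now

    def on_link_idle(self, now: float) -> None:
        # Idle link: congestion is being signaled too aggressively.
        if now - self.last_update > self.freeze_time:
            self.p = max(0.0, self.p - self.dec)
            self.last_update = now

    def should_mark(self) -> bool:
        return random.random() < self.p
```

SFB extends this idea by hashing flows into Bloom-filter bins, each with its own such estimator, so nonresponsive flows saturate their bins and can be rate-limited.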


Patent
01 Feb 2002
TL;DR: In this paper, a peer-to-peer probing/network quality of service (QoS) analysis system utilizes a UDP-based probing tool for determining latency, bandwidth, and packet loss ratio between peers in a network.
Abstract: A peer-to-peer (P2P) probing/network quality of service (QoS) analysis system utilizes a UDP-based probing tool for determining latency, bandwidth, and packet loss ratio between peers in a network. The probing tool enables network QoS probing between peers that connect through a network address translator. The list of peers to probe is provided by a connection server based on prior probe results and an estimate of the network condition. The list includes those peers which are predicted to have the best QoS with the requesting peer. Once the list is obtained, the requesting peer probes the actual QoS to each peer on the list, and returns these results to the connection server. P2P probing in parallel using a modified packet-pair scheme is utilized. If anomalous results are obtained, a hop-by-hop probing scheme is utilized to determine the QoS of each link. In such a scheme, differential destination measurement is utilized.

305 citations
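The packet-pair principle behind the probing tool: two equal-size packets sent back-to-back are serialized one behind the other at the bottleneck, so the receiver-side gap between them approximates the bottleneck's per-packet service time. A hedged sketch (the probe size and the median filter are illustrative choices, not details from the patent):

```python
import statistics

def packet_pair_bandwidth(arrival_gaps_s: list[float],
                          probe_bytes: int = 1500) -> float:
    # arrival_gaps_s: receiver-side inter-arrival gaps, one per probe pair.
    # The median over many pairs suppresses gaps stretched or compressed
    # by cross traffic on the path.
    gaps = [g for g in arrival_gaps_s if g > 0]
    return probe_bytes * 8 / statistics.median(gaps)   # bits per second
```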


Proceedings ArticleDOI
02 May 2002
TL;DR: In this article, an approach for reducing packet losses in optical burst switched networks is proposed based on the concept of burst segmentation, where instead of dropping the entire burst during contention, the burst may be broken into multiple segments, and only the overlapping segments are dropped.
Abstract: We address the issue of contention resolution in optical burst switched networks, and we introduce an approach for reducing packet losses which is based on the concept of burst segmentation. In burst segmentation, rather than dropping the entire burst during contention, the burst may be broken into multiple segments, and only the overlapping segments are dropped. The segmentation scheme is investigated by simulation in conjunction with a deflection scheme, and it is shown that segmentation with deflection can achieve a significantly reduced packet loss rate.

279 citations
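The segment-drop decision itself is simple. Below is a sketch of the tail-dropping variant, in which the burst already in service keeps every segment that completes before the contending burst arrives; the (start, end) tuple layout is a hypothetical representation, and the deflection scheme the paper pairs with segmentation is omitted:

```python
def tail_segmentation(in_service: list[tuple[float, float]],
                      contention_time: float):
    # in_service: (start, end) times of each segment of the burst on the
    # output channel. Instead of dropping the whole burst on contention,
    # drop only the segments that overlap the contending burst.
    kept = [seg for seg in in_service if seg[1] <= contention_time]
    dropped = [seg for seg in in_service if seg[1] > contention_time]
    return kept, dropped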


Patent
11 Apr 2002
TL;DR: In this paper, a transmit timer is incorporated within the sender device, and host-level statistics are exploited for a plurality of connections between a sender and receiver, which can reduce or eliminate the bursty data transmission commonly associated with conventional TCP architectures.
Abstract: Improved data transport and management within a network communication system may be achieved by utilizing a transmit timer incorporated within the sender device and exploiting host-level statistics for a plurality of connections between a sender and receiver. The period of the transmit timer may be periodically adjusted based on a ratio of the smoothed round-trip time and the smoothed congestion window, thereby reducing or eliminating bursty data transmission commonly associated with conventional TCP architectures. For applications having a plurality of connections between a sender and a receiver that share a common channel, such as web applications, the congestion window and smoothed round trip time estimates for all active connections may be used to initialize new connections and allocate bandwidth among existing connections. This aspect of the present invention may reduce the destructive interference that may occur as different connections compete with one another to maximize the bandwidth of each connection without regard to other connections serving the same application. Error recovery may also be improved by incorporating a short timer and a long timer that are configured to reduce the size of the congestion window and the corresponding transmission rate in response to a second packet loss within a predefined time period in order to increase resilience to random packet loss.

216 citations
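The pacing rule described in the abstract reduces to a one-line computation: spreading one congestion window of packets evenly over one smoothed RTT. A sketch under that reading (variable names are mine, not the patent's):

```python
def transmit_timer_period(srtt_s: float, cwnd_packets: float) -> float:
    # One packet is sent per timer tick, so a period of srtt / cwnd paces
    # the whole window across an RTT instead of releasing it in a burst.
    return srtt_s / max(cwnd_packets, 1.0)
```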


Patent
13 Sep 2002
TL;DR: In this article, a method and apparatus for client-side detection of network congestion in a best-effort packet network comprising streaming media traffic is disclosed. But this method is limited to streaming media services.
Abstract: A method and apparatus for client-side detection of network congestion in a best-effort packet network comprising streaming media traffic is disclosed. Said method and apparatus provide for quality streaming media services in a congested network with constrained bandwidth over the last-mile link. A client media buffer detects at least one level of congestion and signals a server to enact at least one error mechanism. Preferred error mechanisms include packet retransmissions, stream prioritization, stream acceleration, changes in media compression rate, and changes in media resolution. Said method and apparatus allow distributed management of network congestion for networks comprising multiple clients and carrying significant streaming media traffic.

201 citations


Patent
09 Sep 2002
TL;DR: In this article, a method and system for security monitoring in a computer network has a packet sink with filtering and data analysis capabilities, the packet sink is a default destination for data packets having an address unrecognized by the network routers.
Abstract: A method and system for security monitoring in a computer network has a packet sink with filtering and data analysis capabilities. The packet sink is a default destination for data packets having an address unrecognized by the network routers. At the packet sink, the packets are filtered and statistical summaries about the data traffic are created. The packet sink then forwards the data to a monitor, the information content depending on the level of traffic in the network.

195 citations


Patent
03 Jan 2002
TL;DR: In this article, a system and method is presented for generating and transmitting false packets along with a true packet to hide or obscure the actual message traffic.
Abstract: A system and method for generating and transmitting false packets along with a true packet to thereby hide or obscure the actual message traffic. A new extension header having a plurality of fields is positioned in the hierarchy of Internet protocol headers that control passage of the false packets and the true packet through the network. A sending host computer generates a plurality of false packets for each true packet and transmits the false packets and the true packet containing the Internet protocol headers and the extension header over the network. The new extension header is decrypted and re-encrypted at each host that handles a message packet; it controls the random re-encryption of the true packet body at random hosts and the random generation of false packets at each host visited by a true packet, at the recipient of the true packet, and at any host that receives a false packet.

183 citations


Journal ArticleDOI
TL;DR: In this article, the authors derive gradient estimators for packet loss and workload-related performance metrics with respect to threshold parameters and demonstrate their use in buffer control problems where their SFM-based estimators are evaluated based on data from an actual system.
Abstract: This work uses stochastic fluid models (SFMs) for control and optimization (rather than performance analysis) of communication networks, focusing on problems of buffer control. We derive gradient estimators for packet loss and workload-related performance metrics with respect to threshold parameters. These estimators are shown to be unbiased and directly observable from a sample path without any knowledge of underlying stochastic characteristics, including traffic and processing rates (i.e., they are nonparametric). This renders them computable in online environments and easily implementable for network management and control. We further demonstrate their use in buffer control problems where our SFM-based estimators are evaluated based on data from an actual system.

180 citations


Proceedings ArticleDOI
05 Jul 2002
TL;DR: A threshold-based burst assembly scheme is proposed in conjunction with a burst segmentation policy to provide QoS in optical burst switched networks, and it is shown that there is an optimal value of burst threshold that minimizes packet loss for given network parameters.
Abstract: In this paper, we propose a threshold-based burst assembly scheme in conjunction with a burst segmentation policy to provide QoS in optical burst switched (OBS) networks. Bursts are assembled at the network edge by collecting packets that have the same QoS requirements. Once the number of packets in a burst reaches a threshold value, the burst is sent into the network. We investigate various burst assembly strategies which differentiate bursts by utilizing different threshold values or assigning different burst priorities to bursts that contain packets with differing QoS requirements. The primary objective of this work is to find the optimal threshold values for various classes of bursts. We show through simulation that there is an optimal value of burst threshold that minimizes packet loss for given network parameters.

177 citations
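A minimal sketch of the threshold-based assembler: packets accumulate per QoS class, and a burst launches the moment its class threshold is hit. The per-class threshold values are the tuning knob the paper optimizes; the ones implied here are placeholders:

```python
from collections import defaultdict

class BurstAssembler:
    def __init__(self, thresholds: dict[int, int]):
        self.thresholds = thresholds     # e.g. {0: 40, 1: 80} packets per class
        self.queues = defaultdict(list)

    def add_packet(self, qos_class: int, packet: bytes):
        # Packets with the same QoS requirements share an edge queue.
        q = self.queues[qos_class]
        q.append(packet)
        if len(q) >= self.thresholds[qos_class]:
            self.queues[qos_class] = []  # burst complete: reset the queue
            return q                     # caller sends this burst into the core
        return None                      # still assembling
```

Intuitively, small thresholds produce many small bursts and more contentions, while large thresholds make each contention loss costlier, which is why an intermediate optimum can exist.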


Proceedings ArticleDOI
18 Aug 2002
TL;DR: Several techniques for reducing packet loss are explored, including quality-based routing and passive acknowledgment, and an empirical evaluation of the effect of these techniques on packet loss and data freshness is presented.
Abstract: While it is often suggested that moderate-scale ad hoc sensor networks are a promising approach to solving real-world problems, most evaluations of sensor network protocols have focused on simulation, rather than real-world, experiments. In addition, most experimental results have been obtained at limited scale. This paper describes a practical application of moderate-scale ad hoc sensor networks. We explore several techniques for reducing packet loss, including quality-based routing and passive acknowledgment, and present an empirical evaluation of the effect of these techniques on packet loss and data freshness.

01 Jan 2002
TL;DR: The effects of packet loss on the quality of MPEG-4 video are quantified, an analytical model is developed, and a mechanism for recovering lost data using postprocessing techniques at the receiver is proposed.
Abstract: While there is an increasing demand for streaming video applications on the Internet, various network characteristics make the deployment of these applications more challenging than traditional TCP-based applications like email and the Web. Packet loss can be detrimental to compressed video with interdependent frames because errors potentially propagate across many frames. While latency requirements do not permit retransmission of all lost data, we leverage the characteristics of MPEG-4 to selectively retransmit only the most important data in the bitstream. When latency constraints do not permit retransmission, we propose a mechanism for recovering this data using postprocessing techniques at the receiver. We quantify the effects of packet loss on the quality of MPEG-4 video, develop an analytical model to explain these effects, present a system to adaptively deliver MPEG-4 video in the face of packet loss and variable Internet conditions, and evaluate the effectiveness of the system under various network conditions.
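The selective-retransmission idea rests on MPEG-4's frame interdependence: a lost reference frame corrupts every frame predicted from it, so those packets are worth a retransmission while others are not. A hypothetical priority sketch (the I > P > B ranking is my illustration of "most important data", not the paper's exact policy):

```python
FRAME_RANK = {"I": 0, "P": 1, "B": 2}    # reference frames matter most

def schedule_retransmissions(lost, budget: int):
    # lost: list of (seq_number, frame_type) for lost packets; resend the
    # most important frames first until the latency budget is exhausted.
    ordered = sorted(lost, key=lambda p: FRAME_RANK[p[1]])
    return ordered[:budget]
```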

01 Dec 2002
TL;DR: This memo describes the use of Forward Error Correction (FEC) codes to efficiently provide and/or augment reliability for one-to-many reliable data transport using IP multicast.
Abstract: This memo describes the use of Forward Error Correction (FEC) codes to efficiently provide and/or augment reliability for one-to-many reliable data transport using IP multicast. One of the key properties of FEC codes in this context is the ability to use the same packets containing FEC data to simultaneously repair different packet loss patterns at multiple receivers. Different classes of FEC codes and some of their basic properties are described and terminology relevant to implementing FEC in a reliable multicast protocol is introduced. Examples are provided of possible abstract formats for packets carrying FEC.
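The "same repair packet fixes different losses at different receivers" property is easiest to see with the simplest possible FEC code, a single XOR parity packet over a block of equal-length packets (practical reliable-multicast FEC uses stronger erasure codes, but the property is the same):

```python
def parity_packet(block: list[bytes]) -> bytes:
    # XOR of all k data packets in the block (all the same length).
    out = bytearray(len(block[0]))
    for pkt in block:
        for i, b in enumerate(pkt):
            out[i] ^= b
    return bytes(out)

def recover_missing(survivors: list[bytes], parity: bytes) -> bytes:
    # XORing the parity with the k-1 surviving packets reproduces the one
    # missing packet -- whichever packet it was, at whichever receiver.
    return parity_packet(survivors + [parity])
```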

Proceedings ArticleDOI
12 May 2002
TL;DR: This work describes a novel method for authenticating multicast packets that is robust against packet loss and compares the technique with four other previously proposed schemes using analytical and empirical results.
Abstract: We describe a novel method for authenticating multicast packets that is robust against packet loss. Our main focus is to minimize the size of the communication overhead required to authenticate the packets. Our approach is to encode the hash values and the signatures with Rabin's Information Dispersal Algorithm (IDA) to construct an authentication scheme that amortizes a single signature operation over multiple packets. This strategy is especially efficient in terms of space overhead, because just the essential elements needed for authentication (i.e., one hash per packet and one signature per group of packets) are used in conjunction with an erasure code that is space optimal. To evaluate the performance of our scheme, we compare our technique with four other previously proposed schemes using analytical and empirical results. Two different bursty loss models are considered in the analyses.
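The amortization at the heart of the scheme, before the erasure-coding step: hash every packet, sign the hash list once. A skeleton with the IDA dispersal omitted (`sign` is a placeholder for a real signing primitive):

```python
import hashlib

def sign_block(packets: list[bytes], sign):
    # One hash per packet, one signature per block; the paper then encodes
    # the hash list and signature with Rabin's IDA so that any sufficiently
    # large subset of received packets still allows verification under loss.
    hashes = [hashlib.sha256(p).digest() for p in packets]
    return hashes, sign(b"".join(hashes))
```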

Journal ArticleDOI
M. Laor, L. Gendel
TL;DR: The results show that even a small percentage of packets reordered by at least three positions in a backbone link can cause significant degradation of application throughput, and long flows are affected most.
Abstract: Packet reordering in the Internet is a well-known phenomenon. As the delay and speed of backbone links continue to increase, what used to be a negligible amount of packet reordering may now, combined with some level of dropped packets, cause multiple invocations of fast recovery within a TCP window. This may result in a significant drop in link utilization and hence in application throughput. What adds to the difficulty is that packet reordering is a silent problem. It may result in significant application throughput degradation while leaving little to no trace. In this article we try to measure and quantify the effect that reordering packets in a backbone link multiplexing multiple TCP flows has on application throughput. Different operating systems and delay values as well as various types of flow mixes were tested in a laboratory setup. The results show that even a small percentage of packets reordered by at least three positions in a backbone link can cause significant degradation of application throughput. Long flows are affected most. Due to the potential impact of this phenomenon, minimization of packet reordering as well as mitigating the effect algorithmically should be considered.
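Why three positions is the critical displacement: a packet arriving d positions late lets d higher-numbered packets arrive first, each eliciting a duplicate ACK, and standard TCP fires fast retransmit at three duplicates. A simplified counter (it assumes displaced packets are isolated and nothing is actually lost):

```python
def spurious_fast_retransmits(arrival_order: list[int],
                              dupthresh: int = 3) -> int:
    # arrival_order: permutation of 0..n-1 giving the order in which
    # sequence numbers were received. Every packet displaced by at least
    # dupthresh positions can trigger a needless fast recovery.
    return sum(1 for pos, seq in enumerate(arrival_order)
               if pos - seq >= dupthresh)
```

For example, spurious_fast_retransmits([1, 2, 3, 0, 4]) returns 1: packet 0 arrived three positions late, so the three arrivals above the hole would each have produced a duplicate ACK.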

Patent
26 Dec 2002
TL;DR: In this paper, a method is proposed for deriving packet delivery statistics from a UDP stream simulating a service level provided by a VoIP network, where the second pre-defined interval is substantially larger than the first pre-defined interval.
Abstract: A method includes deriving packet delivery statistics from a User Datagram Protocol (UDP) stream simulating a service level provided by a Voice over Internet Protocol (VoIP) network and transmitted across the VoIP network at a first pre-defined interval, and processing the derived packet delivery statistics at a second pre-defined interval to generate network performance statistics for the VoIP network, where the second pre-defined interval is substantially larger than the first pre-defined interval.

Proceedings ArticleDOI
06 Nov 2002
TL;DR: Using packet traces from a tier-1 ISP backbone, this work explains how routing loops manifest in packet traces, characterizes them in terms of the packet types caught in the loop, the loop sizes, and the loop durations, and analyzes the impact of routing loops on network performance in terms of loss and delay.
Abstract: Routing loops are caused by inconsistencies in routing state among a set of routers. They occur in perfectly engineered networks, and have a detrimental effect on performance. They impact end-to-end performance through increased packet loss and delay for packets caught in the loop, and through increased link utilization and corresponding delay and jitter for packets that traverse the link but are not caught in the loop. Using packet traces from a tier-1 ISP backbone, we first explain how routing loops manifest in packet traces. We characterize routing loops in terms of the packet types caught in the loop, the loop sizes, and the loop durations. Finally, we analyze the impact of routing loops on network performance in terms of loss and delay.

Journal ArticleDOI
01 Sep 2002
TL;DR: This work models multiple connections maintained in the congestion avoidance regime by the RED mechanism, and introduces a mean-field approximation to one such RED system as the number of flows tends to infinity.
Abstract: Active queue management schemes like RED (random early detection) have been suggested when multiple TCP sessions are multiplexed through a bottleneck buffer. The idea is to detect congestion before the buffer overflows and packets are lost. When the queue length reaches a certain threshold, RED schemes drop/mark incoming packets with a probability that increases as the queue size increases. The objectives are an equitable distribution of packet loss, reduced delay and delay variation, and improved network utilization. Here we model multiple connections maintained in the congestion avoidance regime by the RED mechanism. The window sizes of each TCP session evolve like independent dynamical systems coupled by the queue length at the buffer. We introduce a mean-field approximation to one such RED system as the number of flows tends to infinity. The deterministic limiting system is described by a transport equation. The numerical solution of the limiting system is found to provide a good description of the evolution of the distribution of the window sizes, the average queue size, the average loss rate per connection and the total throughput. TCP with RED or tail-drop may exhibit limit cycles, and this causes unnecessary packet delay variation and variable loss rates. The root cause of these limit cycles is the hysteresis due to the round trip time delay in reacting to a packet loss.
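For reference, the RED mechanism the mean-field model captures is the canonical piecewise-linear marking curve (the threshold values here are illustrative):

```python
def red_mark_probability(avg_queue: float, min_th: float = 5.0,
                         max_th: float = 15.0, max_p: float = 0.1) -> float:
    # No marking below min_th, linear ramp to max_p between the thresholds,
    # certain drop/mark at or above max_th. Each arriving packet is
    # dropped/marked with this probability.
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

The limit cycles discussed above arise because each window reacts to this curve only a round-trip time after the queue moved, a delayed feedback loop.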

Patent
10 Dec 2002
TL;DR: In this paper, a data communications system is provided to dynamically change the error processing between an ARQ function and an FEC function in accordance with the network status, thus enabling high-quality data playback.
Abstract: A data communications system is provided to dynamically change the error processing between an ARQ function and an FEC function in accordance with the network status, thus enabling high-quality data playback. In packet transmission, error correction control is performed on the basis of the network status monitored by a network monitoring unit. The error control mode is switched between FEC-based error control and ARQ-based error control (retransmission request processing) in accordance with packet loss or error occurrence on the network, and packet transmission is performed. If the RTT is short, error correction based on ARQ is selected. If the RTT is long, error correction not based on ARQ but on FEC is selected. Such dynamic error correction control is achieved.
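The mode-selection rule in the abstract reduces to comparing the RTT against how long a retransmission is allowed to take. A sketch of that decision (the playout-margin comparison is an illustrative stand-in for the patent's actual criterion):

```python
def select_error_control(rtt_s: float, playout_margin_s: float) -> str:
    # A retransmitted packet costs roughly one extra RTT, so ARQ only helps
    # when that fits inside the receiver's playout margin; otherwise
    # proactive FEC is the better recovery mode.
    return "ARQ" if rtt_s < playout_margin_s else "FEC"
```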

Proceedings ArticleDOI
28 Sep 2002
TL;DR: The network survivability is perceived as a composite measure consisting of both network failure duration and failure impact on the network, and the excess packet loss due to failures is taken as the survivability performance measure.
Abstract: Network survivability reflects the ability of a network to continue to function during and after failures. Our purpose in this paper is to propose a quantitative approach to evaluate network survivability. We perceive the network survivability as a composite measure consisting of both network failure duration and failure impact on the network. A wireless ad-hoc network is analyzed as an example, and the excess packet loss due to failures (ELF) is taken as the survivability performance measure. To obtain ELF, we adopt a two phase approach consisting of the steady-state availability analysis and transient performance analysis. Assuming Markovian property for the system, this measure is obtained by solving a set of Markov models. By utilizing other analysis paradigms, our approach in this paper may also be applied to study the survivability performance of more complex systems.

Proceedings ArticleDOI
19 May 2002
TL;DR: A new marking technique is introduced by which a sequence of packets sent along the same path can be traced back to its source using only a single bit in the packet header; the technique is effective even when b=1.
Abstract: There has been considerable recent interest in probabilistic packet marking schemes for the problem of tracing a sequence of network packets back to an anonymous source. An important consideration for such schemes is the number of packet header bits that need to be allocated to the marking protocol. Let b denote this value. All previous schemes belong to a class of protocols for which b must be at least log n, where n is the number of bits used to represent the path of the packets. In this paper, we introduce a new marking technique for tracing a sequence of packets sent along the same path. This new technique is effective even when b=1. In other words, the sequence of packets can be traced back to their source using only a single bit in the packet header. With this scheme, the number of packets required to reconstruct the path is O(2^{2n}), but we also show that Ω(2^n) packets are required for any protocol where b=1. We also study the tradeoff between b and the number of packets required. We provide a protocol and a lower bound that together demonstrate that for the optimal protocol, the number of packets required (roughly) increases exponentially with n, but decreases doubly exponentially with b. The protocol we introduce is simple enough to be useful in practice. We also study the case where the packets are sent along k different paths. For this case, we demonstrate that any protocol must use at least log(2k-1) header bits. We also provide a protocol that requires ⌈log(2k+1)⌉ header bits in some restricted scenarios. This protocol introduces a new coding technique that may be of independent interest.

01 Aug 2002
TL;DR: This document defines two derived metrics "loss distance" and "loss period", and the associated statistics that together capture loss patterns experienced by packet streams on the Internet.
Abstract: Using the base loss metric defined in RFC 2680, this document defines two derived metrics "loss distance" and "loss period", and the associated statistics that together capture loss patterns experienced by packet streams on the Internet. The Internet exhibits certain specific types of behavior (e.g., bursty packet loss) that can affect the performance seen by the users as well as the operators. The loss pattern or loss distribution is a key parameter that determines the performance observed by the users for certain real-time applications such as packet voice and video. For the same loss rate, two different loss distributions could potentially produce widely different perceptions of performance.
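Both derived metrics fall out of a single pass over the sequence numbers of lost packets. A sketch following the RFC's definitions (loss distance: sequence gap between successive losses; loss period: a maximal run of consecutive losses):

```python
def loss_pattern(lost_seqs: list[int]):
    # lost_seqs: sorted sequence numbers of lost packets.
    distances, period_lengths = [], []
    run = 1
    for prev, cur in zip(lost_seqs, lost_seqs[1:]):
        distances.append(cur - prev)       # loss distance
        if cur == prev + 1:
            run += 1                       # still inside one loss period
        else:
            period_lengths.append(run)
            run = 1
    if lost_seqs:
        period_lengths.append(run)
    return distances, period_lengths
```

For losses at sequence numbers [3, 4, 9] this yields distances [1, 5] and loss periods of length [2, 1]; the same loss rate looks very different to a real-time application depending on how long the periods are.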

Proceedings ArticleDOI
12 May 2002
TL;DR: This work uses the Gilbert loss model to infer that changing the packet interval affects loss burstiness, which in turn influences forward error correction (FEC) performance, and performs subjective listening tests based on Mean Opinion Score to evaluate the effect of bursty loss on VoIP perceived quality.
Abstract: Packet loss degrades the perceived quality of voice over IP (VoIP). In addition, packet loss in the Internet tends to come in bursts, which may further degrade audio quality. Using the Gilbert loss model, we infer that changing the packet interval affects loss burstiness, which in turn influences forward error correction (FEC) performance. Next, we perform subjective listening tests based on Mean Opinion Score (MOS) to evaluate the effect of bursty loss on VoIP perceived quality. Then, we compare the perceived quality achieved by two major loss repair methods: FEC and low bit-rate redundancy (LBR). Our MOS test results show that FEC is much preferred over LBR. In addition, our MOS results reveal that, under bursty loss, FEC quality is much better with a moderately large packet interval. Finally, because FEC introduces an extra delay proportional to the packet interval, we present a method of optimizing the packet interval to maximize FEC MOS by considering the delay impairment in ITU's E-model standard.
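The Gilbert model used in the analysis is a two-state Markov chain; a small simulator makes the burstiness knob visible (the parameter values are illustrative):

```python
import random

def gilbert_loss_trace(n: int, p: float = 0.02, q: float = 0.4) -> list[bool]:
    # Good -> Bad with probability p; Bad -> Good with probability q;
    # packets sent in the Bad state are lost. The long-run loss rate is
    # p / (p + q), while the mean loss-burst length is 1 / q -- so
    # shrinking q makes loss burstier at the same average rate, exactly
    # the regime where FEC over adjacent packets breaks down.
    bad = False
    trace = []
    for _ in range(n):
        bad = (random.random() < p) if not bad else (random.random() >= q)
        trace.append(bad)
    return trace
```

This is also why a larger packet interval helps FEC under bursty loss: spacing packets further apart makes it less likely that a whole FEC block falls inside a single Bad-state sojourn, at the cost of the extra delay the E-model analysis weighs.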

Journal ArticleDOI
TL;DR: This paper reviews several recent advances in channel-adaptive video streaming that will benefit the design of future video streaming systems, considers three architectures for wireless video, and discusses the utility of the reviewed techniques for each architecture.
Abstract: Despite the well-known challenges of variations in throughput, delay, and packet loss over the Internet, video streaming has experienced phenomenal growth, owing to the extensive research in video coding and transmission. In this paper, we review several recent advances for channel-adaptive video streaming that, we believe, will benefit the design of video streaming systems in the future. Employed in different components of the system, these techniques have the common objective of providing efficient, robust, scalable, and low-latency streaming video. Firstly, by allowing the client to control the rate at which it consumes data, adaptive media playout can be used to reduce receiver buffering and therefore average latency, and provide limited rate scalability. Secondly, rate-distortion optimized packet scheduling, a transport technique, provides a flexible framework to determine the best packet to send, given the channel behaviors, the packets' deadlines, their transmission histories, the distortion reduction associated with sending each packet, and the interpacket dependencies. Thirdly, at the source encoder, channel-adaptive packet-dependency control can greatly improve the error resilience of streaming video and reduce latency. Finally, we address the specific additional challenges for wireless video streaming. We consider three architectures for wireless video and discuss the utility of the reviewed techniques for each architecture.

Journal ArticleDOI
TL;DR: This paper addresses the problem of building optical packet switches that are able to effectively cope with variable length packet traffic and quality of service management, and are therefore able to support IP traffic.
Abstract: This paper addresses the problem of building optical packet switches that are able to effectively cope with variable length packet traffic and quality of service management, and are therefore able to support IP traffic. The paper aims at showing that the availability of dense wavelength division multiplexing is crucial. By suitably exploiting the wavelength dimension, a multistage fiber delay line buffer can be implemented, with fine granularity and long delay, with an architecture of limited complexity. This is necessary to fulfill the buffering requirements of variable length packets. Furthermore, the wavelength domain is proved to be more effective than the time domain to manage different levels of quality of service. Algorithms are presented that are specifically designed for this environment, showing that they can effectively differentiate the packet loss probability between three priority classes.

Journal ArticleDOI
TL;DR: This work presents MTCP, a congestion control scheme for large-scale reliable multicast that incorporates several novel features, and proposes new techniques that can effectively handle instances of congestion occurring simultaneously at various parts of a multicast tree.

Proceedings ArticleDOI
07 Aug 2002
TL;DR: Simulation results show that the implemented QoS control schemes, which use feedback from the agents to control transmission rate and robustness, provide better video quality than a simple rate control mechanism.
Abstract: We propose a QoS control architecture for mobile multimedia streaming in which RTP monitoring agents report QoS information to media servers. The RTP monitoring agents lie midway between wired networks and radio links, monitor RTP packets sent from media servers to mobile terminals, and report quality information to media servers so that the servers can realize network-adaptive QoS control. By analyzing the information sent from the agents, the servers can distinguish quality degradation caused by network congestion from that caused by radio link errors, and can improve service quality by controlling the transmission rate and robustness against packet loss. Simulation results show that our implemented QoS control schemes, which use feedback from the agents to control transmission rate and robustness, provide better video quality than a simple rate control mechanism.

Proceedings Article
14 Mar 2002
TL;DR: Under realistic conditions, PLP provides strong differentiation between congestion and wireless types of loss based on distinguishable RTT distributions, and an HMM is trained so observed RTTs can be mapped to model states that represent either congestion loss or wireless loss.
Abstract: End-to-end differentiation between wireless and congestion loss can equip TCP control so it operates effectively in a hybrid wired/wireless environment. Our approach integrates two techniques: packet loss pairs (PLP) and Hidden Markov Modeling (HMM). A packet loss pair is formed by two back-to-back packets, where one packet is lost while the second packet is successfully received. The purpose is for the second packet to carry the state of the network path, namely the round trip time (RTT), at the time the other packet is lost. Under realistic conditions, PLP provides strong differentiation between congestion and wireless types of loss based on distinguishable RTT distributions. An HMM is then trained so observed RTTs can be mapped to model states that represent either congestion loss or wireless loss. Extensive simulations confirm the accuracy of our HMM-based technique in classifying the cause of a packet loss. We also show the superiority of our technique over the Vegas predictor, which was recently found to perform best and which exemplifies other existing loss labeling techniques.
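Stripped of the HMM machinery, the PLP signal can be read with a fixed RTT threshold: congestion losses happen when queues (and hence RTTs) are high, wireless losses do not. A deliberately crude classifier in that spirit (the paper trains an HMM over the observed RTTs instead of using a fixed cut, so this is illustrative only):

```python
def classify_loss(rtt_at_loss: float, rtt_min: float, rtt_max: float,
                  cut: float = 0.5) -> str:
    # rtt_at_loss: RTT carried by the surviving packet of a loss pair,
    # i.e. the path state at the moment its twin was lost.
    threshold = rtt_min + cut * (rtt_max - rtt_min)
    return "congestion" if rtt_at_loss >= threshold else "wireless"
```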

Journal ArticleDOI
TL;DR: The experimental results obtained revealed that it is feasible to augment existing wireless computers with ad hoc networking capability, and that route discovery time in ad hoc wireless networks is more dependent on channel conditions and route length than on variations in beaconing intervals.
Abstract: Adaptive and self-organizing wireless networks are gaining in popularity. Several media access and routing protocols have been proposed for such networks, and their performance has been evaluated based on simulations. In this paper, we evaluate the practicality of realizing an ad hoc wireless network and investigate performance issues. Several mobile computers were enhanced with ad hoc routing capability and deployed in an outdoor environment, and the communication performance associated with ad hoc communications was evaluated. These computers periodically send beacons to their neighbors to declare their presence. We examined the impact of varying packet size, beaconing interval, and route hop count on route discovery time, communication throughput, end-to-end delay, and packet loss. We also performed mobility experiments and evaluated the route reconstruction time incurred. File transfer times associated with sending information reliably (via TCP) over multihop wireless links are also presented. The experimental results obtained revealed that it is feasible to augment existing wireless computers with ad hoc networking capability. End-to-end performance in ad hoc routes is less affected by beaconing intervals than by packet size or route length. Similarly, communication throughput is more dependent on packet size and route length, with the exception of very high beaconing frequencies. Packet loss, on the other hand, is not significantly affected by packet size, route length, or beaconing frequency. Finally, route discovery time in ad hoc wireless networks is more dependent on channel conditions and route length than on variations in beaconing intervals.

Patent
08 May 2002
TL;DR: In this paper, the effects of port congestion and of congestion control operations are limited to the terminal group concerned, and congestion notification packets may be transmitted from a single port or from a plurality of ports of a switching hub, in accordance with the detected degree of congestion.
Abstract: In flow control apparatus formed of a network of switching hubs in a hierarchy configuration for performing data packet transfer within each of a plurality of respectively separate groups of terminals, based on use of group identifiers contained in the data packets, occurrence of congestion of the output section of a port of a switching hub is judged respectively separately for each of the terminal groups, and one or more congestion notification packets for effecting a pause in data packet transmission are generated and transmitted from that switching hub, directed only to one or more terminals of a group relating to the congestion. As a result, the effects of port congestion and of congestion control operations are limited to the terminal group concerned. Congestion can be judged for a terminal group based on a level of utilization of a port output buffer that is used only by that group, or based on a rate of flow of data from that group into a port output buffer which is used in common for all terminal groups, and congestion notification packets may be transmitted from a single port or from a plurality of ports of a switching hub, in accordance with the detected degree of congestion.