
Showing papers on "Packet loss published in 1995"


Journal ArticleDOI
TL;DR: This paper describes the additions and modifications to the standard Internet protocol stack (TCP/IP) to improve end-to-end reliable transport performance in mobile environments and implements a routing protocol that enables low-latency handoff to occur with negligible data loss.
Abstract: TCP is a reliable transport protocol tuned to perform well in traditional networks where congestion is the primary cause of packet loss. However, networks with wireless links and mobile hosts incur significant losses due to bit-errors and hand-offs. This environment violates many of the assumptions made by TCP, causing degraded end-to-end performance. In this paper, we describe the additions and modifications to the standard Internet protocol stack (TCP/IP) to improve end-to-end reliable transport performance in mobile environments. The protocol changes are made to network-layer software at the base station and mobile host, and preserve the end-to-end semantics of TCP. One part of the modifications, called the snoop module, caches packets at the base station and performs local retransmissions across the wireless link to alleviate the problems caused by high bit-error rates. The second part is a routing protocol that enables low-latency handoff to occur with negligible data loss. We have implemented this new protocol stack on a wireless testbed. Our experiments show that this system is significantly more robust at dealing with unreliable wireless links than normal TCP; we have achieved throughput speedups of up to 20 times over regular TCP and handoff latencies over 10 times shorter than other mobile routing protocols.
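The snoop module's local-recovery idea can be sketched in a few lines (an illustrative simplification, not the authors' implementation; the class name and the cumulative-ack convention are assumptions):

```python
class SnoopCache:
    """Base-station cache: buffer TCP segments heading to the mobile and
    retransmit them locally when a duplicate ack hints at a wireless loss."""

    def __init__(self):
        self.cache = {}       # seq -> payload, not yet acked by the mobile
        self.last_ack = -1
        self.dup_acks = 0

    def on_data(self, seq, payload):
        # keep a copy before forwarding over the wireless link
        self.cache[seq] = payload

    def on_ack(self, ack):
        """Return (seq, payload) to retransmit locally, or None."""
        if ack > self.last_ack:
            for seq in [s for s in self.cache if s <= ack]:
                del self.cache[seq]       # cumulatively acked: drop copies
            self.last_ack = ack
            self.dup_acks = 0
            return None
        self.dup_acks += 1
        nxt = ack + 1
        if nxt in self.cache:
            # loss was on the wireless hop: resend from the cache instead
            # of letting the fixed-network sender time out and back off
            return nxt, self.cache[nxt]
        return None
```

Suppressing the duplicate ack (returning the cached segment instead of forwarding the ack) is what preserves TCP's end-to-end semantics while hiding the wireless loss from the sender.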

729 citations


Journal ArticleDOI
TL;DR: This work shows how current TCP implementations introduce unacceptably long pauses in communication during cellular handoffs, and proposes an end-to-end fast retransmission scheme that can reduce these pauses to levels more suitable for human interaction.
Abstract: We explore the performance of reliable data communication in mobile computing environments. Motion across wireless cell boundaries causes increased delays and packet losses while the network learns how to route data to a host's new location. Reliable transport protocols like TCP interpret these delays and losses as signs of network congestion. They consequently throttle their transmissions, further degrading performance. We quantify this degradation through measurements of protocol behavior in a wireless networking testbed. We show how current TCP implementations introduce unacceptably long pauses in communication during cellular handoffs (800 ms and longer), and propose an end-to-end fast retransmission scheme that can reduce these pauses to levels more suitable for human interaction (200 ms). Our work makes clear the need for reliable transport protocols to differentiate between motion-related and congestion-related packet losses and suggests how to adapt these protocols to perform better in mobile computing environments.

607 citations


Journal ArticleDOI
TL;DR: The authors investigate two packet-discard strategies that alleviate the effects of fragmentation and introduce early packet discard, a strategy in which the switch drops whole packets prior to buffer overflow that prevents fragmentation and restores throughput to maximal levels.
Abstract: Investigates the performance of Transmission Control Protocol (TCP) connections over ATM networks without ATM-level congestion control and compares it to the performance of TCP over packet-based networks. For simulations of congested networks, the effective throughput of TCP over ATM can be quite low when cells are dropped at the congested ATM switch. The low throughput is due to wasted bandwidth as the congested link transmits cells from "corrupted" packets, i.e., packets in which at least one cell is dropped by the switch. The authors investigate two packet-discard strategies that alleviate the effects of fragmentation. Partial packet discard, in which remaining cells are discarded after one cell has been dropped from a packet, somewhat improves throughput. They introduce early packet discard, a strategy in which the switch drops whole packets prior to buffer overflow. This mechanism prevents fragmentation and restores throughput to maximal levels.
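The early packet discard decision can be sketched as follows (illustrative names and cell format, not the paper's simulator; drop state is cleared when a packet's last cell passes):

```python
class EpdSwitch:
    """ATM output buffer with Early Packet Discard: if the first cell of a
    new packet arrives while occupancy is above a threshold, the entire
    packet is refused, so the link never carries a corrupted packet."""

    def __init__(self, capacity, threshold):
        self.capacity = capacity      # buffer size in cells
        self.threshold = threshold    # EPD trigger level
        self.buffer = []
        self.dropping = set()         # packets being discarded in full

    def on_cell(self, packet_id, is_first, is_last=False):
        admitted = True
        if is_first and len(self.buffer) >= self.threshold:
            self.dropping.add(packet_id)      # EPD: refuse the whole packet
        if packet_id in self.dropping:
            admitted = False
        elif len(self.buffer) >= self.capacity:
            self.dropping.add(packet_id)      # overflow mid-packet: fall
            admitted = False                   # back to partial discard
        else:
            self.buffer.append(packet_id)
        if is_last:
            self.dropping.discard(packet_id)  # packet over: clear drop state
        return admitted
```

The threshold leaves headroom so that packets already partially admitted can usually finish, which is exactly what prevents the fragmentation the abstract describes.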

432 citations


Proceedings ArticleDOI
01 Oct 1995
TL;DR: The notion of Log-Based Receiver-reliable Multicast (LBRM) communication is introduced, and a collection of log-based receiver reliable multicast optimizations are described and evaluated that provide an efficient, scalable protocol for high-performance simulation applications.
Abstract: Reliable multicast communication is important in large-scale distributed applications. For example, reliable multicast is used to transmit terrain and environmental updates in distributed simulations. To date, proposed protocols have not supported these applications' requirements, which include wide-area data distribution, low-latency packet loss detection and recovery, and minimal data and management overhead within fine-grained multicast groups, each containing a single data source. In this paper, we introduce the notion of Log-Based Receiver-reliable Multicast (LBRM) communication, and we describe and evaluate a collection of log-based receiver-reliable multicast optimizations that provide an efficient, scalable protocol for high-performance simulation applications. We argue that these techniques provide value to a broader range of applications and that the receiver-reliable model is an appropriate one for communication in general.

391 citations


Journal ArticleDOI
TL;DR: A new taxonomy for congestion control algorithms in packet switching computer networks based on control theory is proposed, which provides a coherent framework for the comparative study of existing algorithms and offers clues toward the development of new congestion control strategies.
Abstract: The authors propose a new taxonomy for congestion control algorithms in packet switching computer networks based on control theory. They view a network as a large, distributed control system, in which a congestion control scheme is a (distributed) control policy executed at each node (host or switch) of the network in order to maintain a certain level of stable operating conditions. This taxonomy provides a coherent framework for the comparative study of existing algorithms and offers clues toward the development of new congestion control strategies.

197 citations


Patent
18 Sep 1995
TL;DR: In this article, a single chip router for a multiplex communication network comprises a packet memory for storing data packets, a Reduced Instruction Set Computer (RISC) processor for converting the packets between a Local Area Network (LAN) protocol and a Wide Area Network (WAN) protocol, a LAN interface and a WAN interface.
Abstract: A single chip router for a multiplex communication network comprises a packet memory for storing data packets, a Reduced Instruction Set Computer (RISC) processor for converting the packets between a Local Area Network (LAN) protocol and a Wide Area Network (WAN) protocol, a LAN interface and a WAN interface. A Direct Memory Access (DMA) controller transfers packets between the packet memory and the LAN and WAN interfaces. A packet attribute memory stores attributes of the data packets, and an attribute processor performs a non-linear hashing algorithm on an address of a packet being processed for accessing a corresponding attribute of said packet in the packet attribute memory. An address window filter identifies the address of a packet being processed by examining only a predetermined portion of said address, and can comprise a dynamic window filter or a static window filter.

128 citations


Book ChapterDOI
19 Apr 1995
TL;DR: It is shown using measurements over the Internet as well as analytic modeling that the number of consecutively lost audio packets is small unless the network load is very high, which indicates that open loop error control mechanisms based on forward error correction would be adequate to reconstruct most lost audio packets.
Abstract: We consider the problem of distributing audio data over networks such as the Internet that do not provide support for real-time applications. Experiments with such networks indicate that audio quality is mediocre in large part because of excessive audio packet losses. In this paper, we show using measurements over the Internet as well as analytic modeling that the number of consecutively lost audio packets is small unless the network load is very high. This indicates that open loop error control mechanisms based on forward error correction would be adequate to reconstruct most lost audio packets.
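Because losses are mostly isolated, a single XOR parity packet per block already lets the receiver rebuild one missing packet — the simplest instance of the open-loop forward error correction the authors advocate (a generic sketch, not their scheme):

```python
def xor_parity(packets):
    """Parity packet for a block of equal-length payloads: byte-wise XOR."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet in a block (None marks the loss):
    XOR of the parity with every surviving packet yields the lost one."""
    missing = received.index(None)
    rec = bytearray(parity)
    for j, p in enumerate(received):
        if j != missing:
            for i, b in enumerate(p):
                rec[i] ^= b
    return bytes(rec)
```

With bursts longer than one packet per block, this scheme fails, which is why the paper's finding that consecutive losses are rare is the key enabling observation.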

126 citations


Patent
14 Mar 1995
TL;DR: In this paper, a method and apparatus for transmitting data in a packet radio communication system having data sources, destinations and intermediate repeaters is described, where a repeat count in the protocol is decremented each time a packet is retransmitted, until the repeat count reaches zero, at which time the packet is discarded.
Abstract: A method and apparatus for transmitting data in a packet radio communication system having data sources, destinations and intermediate repeaters. Under one packet protocol, a repeat count is decremented each time a packet is retransmitted, until the repeat count reaches zero, at which time the packet is discarded. Under another packet protocol, a sequence number is used to prevent duplicate packets from being received, by requiring that the sequence number fall within a sequence number window at each device, the window being incremented each time a packet is received. The sequence number is also used to trigger the retransmission of packets which are lost, at which time the sequence number windows in the affected devices are reset to allow transmission of the lost packet.
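The repeat-count rule is effectively a hop/retry budget; a minimal sketch (field names are illustrative, not from the patent):

```python
def forward(packet):
    """Repeater forwarding rule: each retransmission spends one unit of the
    packet's repeat count; at zero the packet is discarded rather than
    forwarded again, bounding how long it can circulate."""
    if packet['repeat_count'] == 0:
        return None                                   # budget exhausted: drop
    # return a fresh copy with the count decremented (original left intact)
    return dict(packet, repeat_count=packet['repeat_count'] - 1)
```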

94 citations


Patent
05 Oct 1995
TL;DR: A network failure simulation tool for automated software testing under software control is presented in this paper, which intercepts packets being sent or received by a computer on a network by redirecting the packets from a network I/O architecture to substitute packet handlers.
Abstract: A network failure simulation tool provides simulation of a network failure suitable for automated software testing under software control. The tool intercepts packets being sent or received by a computer on a network by redirecting the packets from a network I/O architecture to substitute packet handlers. The tool also resumes normal network operation by again directing packets through actual packet handlers of the computer's network I/O architecture. Commands are provided for controlling suspension and resumption of network operation by the tool from an automated software testing program.

86 citations


Proceedings ArticleDOI
01 Oct 1995
TL;DR: Deterministic and statistical bounds on queue size and packet delay in isolated variable-rate communication server-nodes are derived, including cases of single-input and multiple-input under first-come-first-serve queueing.
Abstract: In most network models for quality of service support, the communication links interconnecting the switches and gateways are assumed to have fixed bandwidth and zero error rate. This assumption of steadiness, especially in a heterogeneous internetworking environment, might be invalid owing to subnetwork multiple-access mechanisms, link-level flow/error control, and user mobility. Techniques are presented in this paper to characterize and analyze work-conserving communication nodes with varying output rate. In the deterministic approach, the notion of "fluctuation constraint," analogous to the "burstiness constraint" for traffic characterization, is introduced to characterize the node. In the statistical approach, the variable-rate output is modelled as an "exponentially bounded fluctuation" process in a way similar to the "exponentially bounded burstiness" method for traffic modelling. Based on these concepts, deterministic and statistical bounds on queue size and packet delay in isolated variable-rate communication server-nodes are derived, including cases of single-input and multiple-input under first-come-first-served queueing. Queue size bounds are shown to be useful for buffer requirement and packet loss probability estimation at individual nodes. Our formulations also facilitate the computation of end-to-end performance bounds across a feedforward network of variable-rate server-nodes. Several numerical examples of interest are given in the discussion.

75 citations


Patent
10 Feb 1995
TL;DR: In this article, a method for playing out packets, such as voice or video packets, received through a packet network subject to variable transmission delays is described, where incoming packets are received in a delay buffer and a predetermined delay applied to the first packet of a sequence of packets.
Abstract: A method is described for playing out packets, such as voice or video packets, received through a packet network subject to variable transmission delays. The incoming packets are received in a delay buffer and a predetermined delay is applied to the first packet of a sequence of packets. A variable delay is applied to subsequent packets to produce an appropriate constant play-out rate to reproduce the desired output. The fill level of the delay buffer is monitored, and the predetermined delay applied to the first packet of a following sequence of packets is adjusted to maintain the fill level within desired limits, minimizing the risk of said buffer underflowing or overflowing.
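A minimal sketch of the fill-level-driven adjustment (thresholds, step size, and names are illustrative, not from the patent):

```python
class PlayoutBuffer:
    """Jitter-absorbing play-out buffer: a fixed delay is applied to the
    first packet of each talkspurt, and the buffer's fill level steers the
    delay chosen for the next talkspurt."""

    def __init__(self, initial_delay, low=2, high=8):
        self.delay = initial_delay    # first-packet delay, in packet times
        self.low, self.high = low, high
        self.buffer = []

    def enqueue(self, packet):
        self.buffer.append(packet)

    def dequeue(self):
        # an empty buffer at play-out time is an underflow (audible gap)
        return self.buffer.pop(0) if self.buffer else None

    def next_talkspurt_delay(self):
        """Adjust the first-packet delay to keep the fill level in bounds."""
        if len(self.buffer) < self.low:
            self.delay += 1           # underflow risk: buffer more
        elif len(self.buffer) > self.high:
            self.delay -= 1           # overflow/latency risk: buffer less
        return self.delay
```

Adjusting only at talkspurt boundaries is what keeps the play-out rate constant within each spurt, as the abstract requires.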

Patent
09 May 1995
TL;DR: In this article, a multicast network system comprises a data network which provides a medium for data transfer, and a media receiver is coupled to the network and receives the control packet (20) and the media packets (34) from the data network to process the control packets (20), and produce a media output.
Abstract: A multicast network system comprises a data network which provides a medium for data transfer. A media source having a control packet (20) and media packets (34) coupled to the data network broadcasts the control packet (20) and the media packets (34) to the data network and rebroadcasts the control packet (20) in conjunction with the media packets (34) to the data network. A media receiver is coupled to the network and receives the control packet (20) and the media packets (34) from the data network to process the control packet (20) and the media packets (34) to produce a media output.

Patent
Tsang-Ling Sheu
11 Oct 1995
TL;DR: In this paper, a fault-tolerant bridge/router with a distributed switch-over mechanism is proposed, which can tolerate any single failures and does not rely on network reconfiguration.
Abstract: A fault-tolerant bridge/router ("brouter") with a distributed switch-over mechanism of the present invention can tolerate any single failure and does not rely on network reconfiguration (or alternative paths), and therefore substantially improves system reliability/availability. The fault-tolerant brouter utilizes a plurality of processing elements communicating through a multiple-bus switching fabric. Each processing element can effectively support two ports, each port providing an interface to an individual LAN. Each LAN is then linked to two different ports on two different processing elements, respectively, thereby providing processing element redundancy. If a processing element fails, bridging/routing functions can be performed by the other, redundant processing element. The functions are switched using the switch-over mechanism. Because the switch-over mechanism is distributed, no centralized control mechanism is required. The fault-tolerant brouter of the present invention prevents packet loss, so that a source station does not have to resend packets blocked by a failed processing element, and provides transparency to end stations, so that packet recovery is independent of the networking protocols implemented. In addition, due to the redundancy of the processing elements for each LAN, traffic from unlike LANs with different media speeds can be evenly balanced. In this manner, the fault-tolerant brouter of the present invention provides significant improvement in system reliability and availability.

Patent
26 Dec 1995
TL;DR: In this paper, a method, apparatus, and computer program product are provided for generating test packets to be used in developing network protocol devices, including a packet shell generation facility and a generic command language interface mechanism and a packet management function generator.
Abstract: A method, apparatus, and computer program product are provided for generating test packets to be used in developing network protocol devices. According to the method of the invention, a first computer system is provided that includes a packet shell generation facility. The packet shell generation facility includes a generic command language interface mechanism and a packet management function generator. Using the packet shell generation facility, test packets are generated for use in testing various aspects of network protocol devices.

Patent
Vijay Kapoor
04 Dec 1995
TL;DR: In this paper, a congestion status bit is set when congestion is imminent or occurring; a congestion counter is incremented each time a data packet is received while the bit is set, decremented each time a predetermined time interval expires, and used as an index into a credit table that determines how many packets the source may send.
Abstract: Method (100) executed by congestion controller (26) in each node (20, 30, 40) of a network (10) determines whether congestion is going to occur or is occurring in a particular node (20, 30, 40). If congestion is imminent or occurring, a congestion status bit is set. Each time a data packet is received and the congestion indicator bit is set, congestion controller (26) increments a congestion counter. Each time a predetermined time interval expires, congestion controller (26) decrements the congestion counter. By using the congestion counter as an index into a credit table, a credit value is determined by destination node (31) that represents how many data packets are permitted to be sent from source node (30) to destination node (31). The credit value is sent to the source node via a message.
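The counter-and-credit-table mechanism can be sketched as follows (table values and the counter cap are illustrative, not from the patent):

```python
class CongestionController:
    """Per-node controller: the counter rises with each packet that arrives
    while the congestion bit is set, falls on a timer tick, and indexes a
    credit table that caps how many packets the source may send."""

    CREDIT_TABLE = [8, 4, 2, 1, 0]    # credits by congestion level

    def __init__(self):
        self.congested = False        # the congestion status bit
        self.counter = 0

    def on_packet(self):
        if self.congested:
            # clamp so the counter stays a valid table index
            self.counter = min(self.counter + 1, len(self.CREDIT_TABLE) - 1)

    def on_timer(self):
        self.counter = max(self.counter - 1, 0)    # congestion is draining

    def credit(self):
        """Credit value to return to the source node in a message."""
        return self.CREDIT_TABLE[self.counter]
```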

Proceedings ArticleDOI
15 May 1995
TL;DR: Concord, which provides an integrated solution for single and multiple stream synchronization problems, defines a single framework to deal with both problems, and operates under the influence of parameters which can be supplied by the application involved.
Abstract: Synchronizing different data streams from multiple sources simultaneously at a receiver is one of the basic problems involved in multimedia distributed systems. This requirement stems from the nature of packet based networks which can introduce end-to-end delays that vary both within and across streams. We present a new algorithm called Concord, which provides an integrated solution for these single and multiple stream synchronization problems. It is notable because it defines a single framework to deal with both problems, and operates under the influence of parameters which can be supplied by the application involved. In particular these parameters are used to allow a trade-off between the packet loss rates, total end-to-end delay and skew for each of the streams. For applications like conferencing this is used to reduce delay by determining the minimum buffer delay/size required.

Journal ArticleDOI
01 Jun 1995
TL;DR: A new, more general analytic model of the knockout switch is presented, which enables us to evaluate the knockout switch under nonuniform traffic and incorporates the effects of a concentrator and a shared buffer on the packet loss probability.
Abstract: The knockout switch is a nonblocking, high-performance switch suitable for broadband packet switching. It allows packet losses, but the probability of a packet loss can be kept extremely small in a cost-effective way. The performance of the knockout switch was analyzed under uniform traffic. In this paper, we present a new, more general analytic model of the knockout switch, which enables us to evaluate the knockout switch under nonuniform traffic. The new model also incorporates the effects of a concentrator and a shared buffer on the packet loss probability. Numerical results for nonuniform traffic patterns of interest are presented.
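Under the uniform-traffic model the abstract starts from, the concentrator's loss probability has a classical closed form; a sketch of its computation (N inputs, at most L cells accepted per output per slot, input load p):

```python
from math import comb

def knockout_loss(N, L, p):
    """Uniform-traffic loss probability of a knockout concentrator: each
    input carries a cell with probability p, destinations are uniform, so
    a given output sees each input's cell with probability p/N; cells
    beyond the L accepted per slot are knocked out."""
    q = p / N
    # expected number of knocked-out cells per slot at one output ...
    lost = sum((k - L) * comb(N, k) * q**k * (1 - q)**(N - k)
               for k in range(L + 1, N + 1))
    # ... divided by the expected arrivals per slot (N * q = p)
    return lost / p
```

The binomial tail shrinks very fast in L, which is why a small concentrator (L of 8 or so) already makes the loss probability negligible.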

01 Nov 1995
TL;DR: This work experimentally and quantitatively examines the spatial and temporal correlation in packet loss among participants in a multicast session and derives a Markov chain characterization of temporal loss correlation.
Abstract: The recent success of multicast applications such as Internet teleconferencing illustrates the tremendous potential of applications built upon wide-area multicast communication services. A critical issue for such multicast applications and the higher layer protocols that support them is the manner in which packet losses occur within the multicast network. In this paper we present and analyze packet loss data collected via experiments run on 14 multicast-capable hosts at 11 geographically distinct locations in Europe and the US and connected via the MBone. In this work we experimentally and quantitatively examine the spatial and temporal correlation in packet loss among participants in a multicast session. Our results show that there is a significant spatial correlation in loss among the multicast sites. We also find a fairly significant amount of burst loss (consecutive losses) at a site. We also use these empirical measurements to derive a Markov chain characterization of temporal loss correlation.
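A two-state (Gilbert-style) Markov chain of the kind used to characterize temporal loss correlation can be fitted to a 0/1 loss trace by counting transitions (a generic sketch, not the paper's exact estimator):

```python
def fit_gilbert(trace):
    """Fit a two-state Markov loss model to a loss trace (1 = lost):
    estimate p = P(loss | previous received) and q = P(received | previous
    lost) from transition counts; 1/q is then the mean burst-loss length."""
    n01 = n0 = n10 = n1 = 0
    for prev, cur in zip(trace, trace[1:]):
        if prev == 0:
            n0 += 1
            n01 += cur == 1     # received -> lost transition
        else:
            n1 += 1
            n10 += cur == 0     # lost -> received transition
    p = n01 / n0 if n0 else 0.0
    q = n10 / n1 if n1 else 0.0
    return p, q
```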

Book
01 Jan 1995
TL;DR: The paper discusses different aspects of protocol validation, some verification tools based on the finite state formalism, and the basic limitations of the finite state modelling of protocols.
Abstract: A finite state model for the specification and validation of communication protocols is considered. The concept of “direct coupling” between interacting finite state components is used to describe a hierarchical structure of protocol layers. The paper discusses different aspects of protocol validation, some verification tools based on the finite state formalism, and the basic limitations of the finite state modelling of protocols. An “empty medium abstraction” is proposed for reducing the complexity of the overall system description. The concept of “adjoint states” can be useful for summarizing the relative synchronization between the communicating system components. These concepts are applied to the analysis of a simple alternating bit protocol, and to the X.25 call set-up and clearing procedures. The analysis of X.25 shows that the procedures are stable in respect to intermittent perturbations in the synchronization of the interface introduced for different reasons, including occasional packet loss. However, on very rare occasions, an undesirable cyclic behaviour could be encountered.

Journal ArticleDOI
TL;DR: Simulation results for evaluation of the performance of the K-winner network controller with 10 neurons are presented to study the constraints of the "frozen state" as well as those of same initial state.
Abstract: A novel approach to solving the output contention in packet switching networks with synchronous switching mode is presented. A contention controller has been designed based on the K-winner-take-all neural-network technique with a speedup factor to achieve a real-time computation of a nonblocking switching high-speed high-capacity packet switch without packet loss. Simulation results for evaluation of the performance of the K-winner network controller with 10 neurons are presented to study the constraints of the "frozen state" as well as those of same initial state. An optoelectronic contention controller constructed from a K-winner neural network is proposed.

Journal ArticleDOI
TL;DR: An efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches.
Abstract: This article describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch that has a buffer of finite capacity. Each stream is designated as being of either high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.

Journal ArticleDOI
Yoram Ofek, Moti Yung
TL;DR: The MetaNet is asynchronous, distributed, and designed for transmission of fixed size cells or variable size packets, and can be viewed as a "general-topology buffer-insertion architecture with fairness," thus generalizing the MetaRing architecture.
Abstract: The MetaNet is a scalable local area network (LAN) architecture with an arbitrary topology and a switch at each node (i.e., a switch based LAN). Its design provides on one hand a service in which any node can try to transmit asynchronously in a bursty manner without reservation as much as it can (as in traditional LAN), and on the other hand the network access and flow control ensure the following properties: (1) no packet loss due to congestion, (2) fair access to the network, (3) no deadlocks, and (4) self-routing with broadcast. The switching over this network requires only a (5) single buffer per input link. The MetaNet is asynchronous, distributed, and designed for transmission of fixed size cells or variable size packets. It can be viewed as a "general-topology buffer-insertion architecture with fairness," thus generalizing the MetaRing architecture.

Journal ArticleDOI
TL;DR: Through analysis and simulation, the authors show that pipeline banyan has a better throughput and packet loss performance when compared with other banyan-type switch architectures.
Abstract: Proposes a new fast packet switch architecture: the pipeline banyan. It has a control plane and a number of parallel data planes which are of the same banyan topology. Packet headers are self-routed through the control plane to their destinations. As a result, they establish the corresponding routing paths in the data planes. The data planes do not need to make routing decisions, hence their complexity can be significantly reduced. The pipeline banyan can give close to 100% maximum throughput and can deliver packets in sequential order. Through analysis and simulation, the authors show that the pipeline banyan has better throughput and packet loss performance when compared with other banyan-type switch architectures.

Patent
Jurgen Fritz Rosengren
23 Feb 1995
TL;DR: In this article, the authors propose to associate a time window (LTW) with a data packet and accommodate position information (31-36) in the packet about the position of said data packet within said window.
Abstract: Method and device for transmitting data packets (P11, P12, P13) from an input stream (TS1) of data packets into an output stream (TS) of packets. When a plurality of transport streams (TS1, TS2) is multiplexed, packet jitter may be introduced to such an extent that decoder buffers can overflow or underflow. This is avoided by associating a time window (LTW) with a data packet and accommodating position information (31-36) in the packet about the position of said data packet within said window.

Proceedings ArticleDOI
01 May 1995
TL;DR: NIFDY, a network interface that uses admission control to reduce congestion and ensures that packets are received by a processor in the order in which they were sent, even if the underlying network delivers the packets out of order is presented.
Abstract: In this paper we present NIFDY, a network interface that uses admission control to reduce congestion and ensures that packets are received by a processor in the order in which they were sent, even if the underlying network delivers the packets out of order. The basic idea behind NIFDY is that each processor is allowed to have at most one outstanding packet to any other processor unless the destination processor has granted the sender the right to send multiple unacknowledged packets. Further, there is a low upper limit on the number of outstanding packets to all processors. We present results from simulations of a variety of networks (meshes, tori, butterflies, and fat trees) and traffic patterns to verify NIFDY's efficacy. Our simulations show that NIFDY increases throughput and decreases overhead. The utility of NIFDY increases as a network's bisection bandwidth decreases. When combined with the increased payload allowed by in-order delivery, NIFDY increases total bandwidth delivered for all networks. The resources needed to implement NIFDY are small and constant with respect to network size.
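The sender-side admission rule — at most one outstanding packet per destination until that destination grants a larger window — can be sketched as follows (class and method names are illustrative):

```python
class AdmissionControl:
    """Per-sender state for a NIFDY-style interface: the default window to
    any destination is 1 unacknowledged packet; an ack may carry a grant
    that enlarges the window for that destination."""

    def __init__(self):
        self.window = {}        # dest -> granted window (default 1)
        self.outstanding = {}   # dest -> unacked packet count

    def can_send(self, dest):
        return self.outstanding.get(dest, 0) < self.window.get(dest, 1)

    def on_send(self, dest):
        self.outstanding[dest] = self.outstanding.get(dest, 0) + 1

    def on_ack(self, dest, granted=None):
        self.outstanding[dest] -= 1
        if granted is not None:
            self.window[dest] = granted   # destination opened the window
```

The default window of one is what throttles a sender into a congested destination before any queue in the network overflows.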

Journal ArticleDOI
T. E. Tedijanto, Raif O. Onvural, D. C. Verma, Levent Gun, Roch Guerin
TL;DR: This paper describes the path selection function in Networking BroadBand Services (NBBS), which is IBM's architecture for high-speed, multimedia networks, and develops a heuristic solution based on the Bellman-Ford algorithm, which has a polynomial order of complexity.
Abstract: This paper describes the path selection function in Networking BroadBand Services (NBBS), which is IBM's architecture for high-speed, multimedia networks. The distinguishing feature of a multimedia network is its ability to integrate different applications with different traffic characteristics and service requirements in the network, such as voice, video, and data. In order to meet their service requirements, it is necessary for the network to provide unique quality-of-service (QOS) guarantees to each application. QOS guarantees, specified as multiple end-to-end performance objectives, translate into path and link constraints in the shortest path routing problem. For a general cost function, shortest path routing subject to path constraints is known to be an NP-complete problem. The NBBS path selection algorithm, a heuristic solution based on the Bellman-Ford algorithm, has a polynomial order of complexity. The algorithm finds a minimum hop path satisfying an end-to-end delay (or delay variation) constraint, that in most cases also optimizes a load balancing function. To reduce the number of path constraints, other QOS requirements such as packet loss ratio are implemented as a link constraint. The notion of primary and secondary links is used to minimize the long-term overall call blocking probability by dynamically limiting the hop count of a given path. The path selection algorithm developed for point-to-point connections is described first, followed by its extension to the case of point-to-multipoint connections.
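The core idea — a Bellman-Ford sweep over hop counts that stops at the first hop count meeting the delay bound — can be sketched as follows (a simplification of the published heuristic; it ignores load balancing and the other QOS constraints):

```python
def min_hop_within_delay(n, edges, src, dst, max_delay):
    """Minimum-hop path subject to an end-to-end delay constraint.

    edges: list of (u, v, delay) directed links over nodes 0..n-1.
    After h relaxation rounds, best[v] holds the least total delay over
    paths of at most h hops; the first h at which best[dst] meets the
    bound is the minimum feasible hop count."""
    INF = float('inf')
    best = [INF] * n
    best[src] = 0.0
    for h in range(1, n):
        nxt = best[:]
        for u, v, d in edges:
            if best[u] + d < nxt[v]:
                nxt[v] = best[u] + d
        best = nxt
        if best[dst] <= max_delay:
            return h, best[dst]     # (min hop count, delay achieved)
    return None                     # no path satisfies the delay bound
```

Sweeping hop counts outward is what makes hop count the primary objective and delay a constraint, rather than the other way around.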

Patent
30 Mar 1995
TL;DR: In an inverse transport processor of the type which directs video payloads of a packet signal to buffer memory space, apparatus is included for writing a media error code at sequential first memory address locations in the memory ahead of each packet payload as discussed by the authors.
Abstract: In an inverse transport processor of the type which directs video payloads of a packet signal to buffer memory space, apparatus is included for writing a media error code at sequential first memory address locations in the memory ahead of each packet payload. Concurrently, a processor examines the current packet to determine whether it occurs in proper sequence. If a packet is lost, the payload is written to memory at subsequent sequential address locations, leaving the error code in place. If there is no packet loss, the sequential first memory address locations are simply overwritten by the packet payload to excise the undesired media error code. Media error codes can thus be inserted in the packet payload stream without creating timing obstacles for the system designer.
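The write-ahead trick described in the patent abstract can be sketched in a few lines: an error code is speculatively written at the next buffer address before each payload, then overwritten if the packet arrives in sequence, or left in place as an explicit loss marker if it does not. The class name, marker bytes, and sequencing scheme below are hypothetical, invented for illustration.

```python
ERROR_CODE = b"\xff\xfe"               # hypothetical media-error marker

class InverseTransportBuffer:
    """Toy model of the patent's buffering scheme."""

    def __init__(self):
        self.mem = bytearray()
        self.expected_seq = 0

    def write_packet(self, seq, payload):
        mark = len(self.mem)           # first memory address for this packet
        self.mem += ERROR_CODE         # pre-write the media error code
        if seq == self.expected_seq:
            # in sequence: payload overwrites (excises) the error code
            self.mem[mark:] = payload
        else:
            # packet(s) lost: keep the code, append payload after it
            self.mem += payload
        self.expected_seq = seq + 1
```

The point of the design is that the marker insertion costs no extra decision time on the critical path: the code is always written, and the common in-sequence case erases it for free.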

Proceedings ArticleDOI
02 Apr 1995
TL;DR: As long as a mobile moves from one cell to another but stays in the same region, the protocol avoids loss of packets and preserves order of transmission, which increases the performance of the transport layer protocol by minimizing the need to retransmit packets.
Abstract: This paper proposes a distributed handover protocol for a micro-cell, packet-switched mobile network. In such a network, users move from one cell to another very often, and each change of location may result in misrouted and lost packets. The purpose of the new protocol is to minimize these consequences of location changes: as long as a mobile moves between cells within the same region, the protocol avoids packet loss and preserves the order of transmission. It thereby improves the performance of the transport-layer protocol by minimizing the need to retransmit packets.
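The abstract does not give the mechanism, but order preservation across a handover is commonly realized with a resequencing buffer at the new base station: packets forwarded from the old cell and packets arriving directly are held until they can be released in sequence-number order. A minimal sketch of that generic idea (the class and API are hypothetical, not from the paper):

```python
class ResequencingBuffer:
    """Holds out-of-order packets and releases them in sequence order."""

    def __init__(self, next_expected=0):
        self.next_expected = next_expected
        self.pending = {}              # seq -> packet held out of order

    def receive(self, seq, packet):
        """Returns the list of packets now deliverable in order."""
        self.pending[seq] = packet
        out = []
        while self.next_expected in self.pending:
            out.append(self.pending.pop(self.next_expected))
            self.next_expected += 1
        return out
```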

Journal ArticleDOI
TL;DR: The simulation results show that, under certain conditions, best-effort TCP traffic may experience as much as 2% cell loss, and the probability of cell and packet loss decreases logarithmically with increased buffer size.
Abstract: This paper reports the findings of a simulation study of the queueing behavior of "best-effort" traffic in the presence of constant bit-rate and variable bit-rate isochronous traffic. In this study, best-effort traffic refers to ATM cells that support communications between host end systems executing various applications and exchanging information using TCP/IP. The performance measures considered are TCP cell loss, TCP packet loss, mean cell queueing delay, and mean cell queue length. Our simulation results show that, under certain conditions, best-effort TCP traffic may experience as much as 2% cell loss. Our results also show that the probability of cell and packet loss decreases logarithmically with increased buffer size.
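The qualitative finding that loss falls sharply with buffer size is easy to reproduce with a toy discrete-time simulation: bursty best-effort cells share a finite FIFO served at one cell per slot, and drops are counted. The traffic model and parameters below are invented for illustration and are not those of the paper.

```python
import random

def cell_loss_ratio(buffer_cells, slots=200_000, seed=1):
    """Fraction of cells dropped at a finite FIFO served at one cell
    per slot, fed by bursts of 8 cells at an offered load of ~0.95."""
    random.seed(seed)
    queue = arrived = dropped = 0
    for _ in range(slots):
        if random.random() < 0.95 / 8:  # a burst of 8 cells arrives
            for _ in range(8):
                arrived += 1
                if queue < buffer_cells:
                    queue += 1
                else:
                    dropped += 1
        if queue:
            queue -= 1                  # serve one cell this slot
    return dropped / arrived if arrived else 0.0
```

Running this with increasing buffer sizes shows the loss ratio decaying roughly geometrically in the buffer size, consistent with the logarithmic relationship the abstract reports.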

Proceedings ArticleDOI
Ahmed E. Kamal1
02 Apr 1995
TL;DR: The objective of this paper is to study the performance of this layer and the effectiveness of the EOP indicator, and an approximate analytical model is constructed in which the blocking of a tagged source is kept track of in an exact manner.
Abstract: The ATM adaptation layer type 5 (AAL5) is a new adaptation layer protocol for the ATM layer of broadband integrated services digital networks. Among several features, it is unique in that it includes an end-of-packet (EOP) indicator in the header of the ATM cell (the ATM-layer-user-to-ATM-layer-user (AUU) parameter in the payload type (PT) field). It was previously suggested (Armitage, 1993) that this indicator can be used to reduce buffer occupancy by dropping cells from packets that are already incomplete. The objective of this paper is to study the performance of this layer and the effectiveness of the EOP indicator. The performance measures of interest are the probability of packet loss and the mean packet delay. An approximate analytical model is constructed in which the blocking of a tagged source is tracked exactly, while the remaining sources are modeled approximately. The accuracy of the model is enhanced through an iterative approach. A simulation model is also constructed to assess the accuracy of the approximate model.
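The cell-dropping policy the abstract evaluates, discarding the rest of a packet once one of its cells has been lost, can be sketched as a per-VC state bit that is set on the first drop and cleared by the EOP indicator. The class, virtual-channel identifiers, and capacity below are hypothetical, and real switches handle refinements (such as forwarding the EOP cell of a damaged packet) that this sketch omits.

```python
class PartialPacketDiscardQueue:
    """Finite cell queue that, once any cell of a packet is dropped,
    also discards the packet's remaining cells on that virtual channel."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []
        self.corrupted = set()        # VCs whose current packet lost a cell

    def enqueue(self, vc, eop, cell):
        """eop mirrors AAL5's end-of-packet (AUU) bit in the PT field.
        Returns True if the cell was queued, False if it was dropped."""
        accept = vc not in self.corrupted and len(self.queue) < self.capacity
        if accept:
            self.queue.append((vc, cell))
        else:
            self.corrupted.add(vc)
        if eop:
            self.corrupted.discard(vc)  # next packet on this VC starts clean
        return accept

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```

The buffer-occupancy benefit comes from the `vc in self.corrupted` test: cells that could never form a complete packet are refused even when space is available, leaving room for cells of packets that can still be reassembled.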