
Showing papers on "Packet loss published in 1993"


Proceedings ArticleDOI
01 Oct 1993
TL;DR: The measured round trip delays of small UDP probe packets sent at regular time intervals are used to analyze the end-to-end packet delay and loss behavior in the Internet and find that the losses of probe packets are essentially random unless the probe traffic uses a large fraction of the available bandwidth.
Abstract: We use the measured round trip delays of small UDP probe packets sent at regular time intervals to analyze the end-to-end packet delay and loss behavior in the Internet. By varying the interval between probe packets, it is possible to study the structure of the Internet load over different time scales. In this paper, the time scales of interest range from a few milliseconds to a few minutes. Our observations agree with results obtained by others using simulation and experimental approaches. For example, our estimates of Internet workload are consistent with the hypothesis of a mix of bulk traffic with larger packet size, and interactive traffic with smaller packet size. We observe compression (or clustering) of the probe packets, rapid fluctuations of queueing delays over small intervals, etc. Our results also show interesting and less expected behavior. For example, we find that the losses of probe packets are essentially random unless the probe traffic uses a large fraction of the available bandwidth. We discuss the implications of these results on the design of control mechanisms for the Internet.
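The measurement idea above can be sketched compactly: match send and receive timestamps of probes sent at a fixed interval, compute round-trip delays, and flag receiver-side gaps much smaller than the send interval as probe compression. This is an illustrative reconstruction, not the authors' analysis code; the function name and the 0.5-interval clustering threshold are assumptions.

```python
def analyze_probes(send_times, recv_times, interval):
    """Given matched send/receive timestamps of probe packets sent every
    `interval` time units, return the round-trip delays and the number of
    'compressed' gaps: receiver-side spacings well below the send interval
    (the 0.5 * interval threshold is an illustrative choice)."""
    rtts = [recv - sent for sent, recv in zip(send_times, recv_times)]
    compressed = sum(
        1 for a, b in zip(recv_times, recv_times[1:])
        if (b - a) < 0.5 * interval
    )
    return rtts, compressed
```

For example, with times in milliseconds and probes sent every 1000 ms, a third probe delayed 900 ms followed by a fourth delayed only 50 ms arrives clustered: one compressed gap.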

789 citations


Journal ArticleDOI
TL;DR: Estimates of Internet workload are consistent with the hypothesis of a mix of bulk traffic with larger packet size, and interactive traffic with smaller packet size and a phenomenon of compression of the probe packets similar to the acknowledgement compression phenomenon recently observed in TCP.
Abstract: We use the measured round trip delays of small UDP probe packets sent at regular time intervals to characterize the end-to-end packet delay and loss behavior in the Internet. By varying the interval between probe packets, it is possible to study the structure of the Internet load over different time scales. In this paper, the time scales of interest range from a few milliseconds to a few minutes. Our observations agree with results obtained by others using simulation and experimental approaches. For example, our estimates of Internet workload are consistent with the hypothesis of a mix of bulk traffic with larger packet size, and interactive traffic with smaller packet size. The interarrival time distribution for Internet packets is consistent with an exponential distribution. We also observe a phenomenon of compression (or clustering) of the probe packets similar to the acknowledgement compression phenomenon recently observed in TCP. Our results also show interesting and less expected behavior. For example, we find that the losses of probe packets are essentially random when the probe traffic uses a small fraction of the available bandwidth.

419 citations


Patent
Hamid Ahmadi1, Roch Guerin1, Levent Gun1
16 Jul 1993
TL;DR: In this article, the authors propose a link traffic metric which represents the effective capacity of each link in the network which participates in the packet connection route and calculate leaky bucket parameters which govern the access of packets to the network once the connection is set up.
Abstract: In a packet communications network, the addition or deletion of a connection to the network by a user is governed by a link traffic metric which represents the effective capacity of each link in the network which participates in the packet connection route. The link metric is calculated in real-time and updated by simple vector addition or subtraction. Moreover, this link metric is also used to calculate leaky bucket parameters which govern the access of packets to the network once the connection is set up. A packet network using these link metrics and metric generation techniques provides maximum packet throughput while, at the same time, preserving grade of service guarantees.
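The leaky-bucket access control mentioned above can be sketched as a token bucket; in the patent the rate and burst parameters would be derived from the link metric, whereas here they are illustrative values supplied by the caller.

```python
class LeakyBucket:
    """Leaky-bucket access control sketch. `rate` is the token
    replenishment rate (tokens per time unit) and `burst` the bucket
    depth; both would be computed from the link metric in the patent."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def admit(self, now, size=1):
        # Replenish tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True    # packet conforms: admit to the network
        return False       # non-conforming: reject (or mark/delay)
```

A bucket with rate 1 and depth 2 admits two back-to-back packets, rejects a third, and admits again once a token has accumulated.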

339 citations


Journal ArticleDOI
TL;DR: The results show that forward error correction schemes become less efficient due to the bursty nature of the packet loss processes; real-time traffic might be more sensitive to network congestion than was previously assumed; and the retransmission probability of ATM messages has been overestimated by the use of the independence assumption.
Abstract: The packet loss process in a single-server queueing system with a finite buffer capacity is analyzed. The model used addresses the packet loss probabilities for packets within a block of a consecutive sequence of packets. An analytical approach is presented that yields efficient recursions for the computation of the distribution of the number of lost packets within a block of packets of fixed or variable size for several arrival models and several numbers of sessions. Numerical examples are provided to compare the distribution obtained with that obtained using the independence assumption to compute the loss probabilities of packets within a block. The results show that forward error correction schemes become less efficient due to the bursty nature of the packet loss processes; real-time traffic might be more sensitive to network congestion than was previously assumed; and the retransmission probability of ATM messages has been overestimated by the use of the independence assumption.
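The flavor of such a recursion can be illustrated with a two-state (Gilbert) loss model, computing the exact distribution of the number of lost packets in a block by dynamic programming. This is a stand-in for the kind of recursion the paper develops, not its actual algorithm; all names and parameters here are assumptions.

```python
def block_loss_dist(n, p_gb, p_bg):
    """Distribution of the number of lost packets in a block of n packets
    under a two-state Markov (Gilbert) loss model: every packet seen in
    the Bad state is lost. p_gb = P(Good -> Bad), p_bg = P(Bad -> Good).
    The chain starts in its stationary distribution."""
    pi_b = p_gb / (p_gb + p_bg)          # stationary P(Bad)
    # dp[state][k] = P(k losses so far, chain currently in `state`)
    dp = {'G': [0.0] * (n + 1), 'B': [0.0] * (n + 1)}
    dp['G'][0] = 1 - pi_b                # first packet delivered (Good)
    dp['B'][1] = pi_b                    # first packet lost (Bad)
    for _ in range(n - 1):               # remaining n-1 packets
        nxt = {'G': [0.0] * (n + 1), 'B': [0.0] * (n + 1)}
        for k in range(n + 1):
            g, b = dp['G'][k], dp['B'][k]
            nxt['G'][k] += g * (1 - p_gb) + b * p_bg
            if k + 1 <= n:
                nxt['B'][k + 1] += g * p_gb + b * (1 - p_bg)
        dp = nxt
    return [dp['G'][k] + dp['B'][k] for k in range(n + 1)]
```

With p_gb = p_bg = 0.5 the model degenerates to independent loss and reproduces the binomial distribution; with bursty parameters the probability of losing the whole block exceeds what the independence assumption predicts, which is the paper's central observation.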

200 citations


Patent
27 Apr 1993
TL;DR: In this paper, a congestion prediction circuit is proposed to predict the future packet transfer rate in a packet integrated network having both variable rate and fixed rate terminal nodes, and a congestion prediction signal is output or a rate increase request indication is deleted when it is predicted that the packet transfer rate will exceed a permissible value.
Abstract: Packet transfer is controlled by using an acceleration rate of packet transfers or by using a packet transfer rate acceleration ratio to predict that congestion will occur at a prescribed time in the future. Congestion avoidance in packet integrated networks is thereby achieved in a network having both variable rate terminal nodes and fixed rate terminal nodes. A future packet transfer rate is predicted in a congestion prediction circuit on the basis of a pre-established upper limit for the packet transfer acceleration or acceleration ratio. When it is predicted that the packet transfer rate will exceed a permissible value, a congestion prediction signal is output or a rate increase request indication is deleted. The invention prevents packets from being discarded in the packet network, allows buffer memory capacity of nodes in the network to be decreased, and avoids the generation of new packets when signal congestion is predicted.
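The prediction step amounts to linear extrapolation of the transfer rate using its recent acceleration. A minimal sketch under stated assumptions: the function name, the two-sample acceleration estimate, and the clamp to non-negative acceleration are all illustrative choices, not the patent's circuit.

```python
def congestion_predicted(rate_samples, dt, horizon, limit):
    """Predict congestion `horizon` time units ahead by extrapolating the
    current packet transfer rate with its most recent acceleration
    (difference of the last two samples over dt). Only rate increases
    are extrapolated; decreases are clamped to zero acceleration."""
    accel = (rate_samples[-1] - rate_samples[-2]) / dt
    predicted = rate_samples[-1] + max(accel, 0.0) * horizon
    return predicted > limit
```

A rate climbing from 100 to 120 packets/s in one second, extrapolated five seconds ahead, crosses a 200 packets/s limit; a gentler climb does not.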

123 citations


11 Jan 1993
TL;DR: Dynamic feedback control of the priority partition based on network load conditions is shown to be effective, even with substantial feedback delay, and a new traffic source model results from combining the marginal distribution with long-range dependence.
Abstract: Packet switched communications services with real-time delay constraints, such as voice and video, combine the established fields of digital signal processing and data communications networking. Each field is outlined, and new open problems due to the combination are identified. A simulation study investigates layered coding using a packet voice Markov chain source model. Information is partitioned into two or more priority layers to protect important components from loss. A parameter, $\alpha$, identifies the proportion of traffic placed in each priority. Packet loss rates for each priority and $\alpha$ are used to compute the signal to noise ratio, which is a more appropriate performance measure for voice and video services than loss rates alone. Dynamic feedback control of the priority partition based on network load conditions is shown to be effective, even with substantial feedback delay. Feedback, in conjunction with priority, provides graceful service degradation with increasing load and loss rate. A queueing analysis of this system is also investigated using a two-dimensional Markov chain source model that represents both load and $\alpha$, which vary dynamically. The simulation and analytic models are compared. Two methods for reducing the numerical complexity are given. A two-hour long empirical sample of variable rate video is derived by applying a simple intraframe video compression code to an action movie. Statistical characteristics are measured, including an accurate model for the heavy-tailed marginal distribution of video frame bandwidth. A statistical property called long-range dependence is described, measured, and shown to be significant for this data. A new traffic source model results from combining the marginal distribution with long-range dependence. Extensive trace driven simulations characterize network queueing behavior and allocation of bandwidth/buffer resources. 
Statistical multiplexing gain of variable rate video is evaluated as well as the advantage due to multiplexing video with data services. We discuss the implications of this traffic analysis for the design of congestion control mechanisms for integrated packet networks. We close with some comments on the ramifications of advancing electronic hardware speed and complexity for multi-media communications.

120 citations


Patent
13 Dec 1993
TL;DR: In this article, a hierarchical addressing technique is employed in a packet communications system to enhance flexibility in storing and referencing packet information, which permits packet message data and certain packet control data to be stored in memory locations without having to be duplicated at a different memory location prior to transmission of the packet.
Abstract: A hierarchical addressing technique is employed in a packet communications system to enhance flexibility in storing and referencing packet information. This method permits packet message data and certain packet control data to be stored in memory locations without having to be duplicated at a different memory location prior to transmission of the packet. This method is preferably employed in a ring configuration in which a series of packets have addressing mechanisms which point sequentially to each other to form a ring of packets received or to be transmitted.

99 citations


Patent
24 Aug 1993
TL;DR: In this article, a congestion control system for packet communications networks is presented in which access to the network is controlled to prevent congestion, and the overall red packet rate is limited to prevent red packet saturation of the network.
Abstract: A congestion control system for packet communications networks in which access to the network is controlled to prevent such congestion. Packets within the pre-specified statistical description of each packet source are marked as high priority ("green" packets) while packets exceeding the pre-specified characteristics are marked with a lower priority ("red" packets). The overall red packet rate is limited to prevent red packet saturation of the network. The introduction of red packets into the network is subjected to a degree of hysteresis to provide better interaction with higher layer error recovery protocols. The amount of hysteresis introduced into the red packet marking can be fixed or varied, depending on the statistics of the incoming data packets at the entry point to the network.
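The green/red marking with hysteresis can be sketched as a token bucket whose marker, once it turns red, stays red until the bucket refills past a high watermark. This is an illustrative reading of the patent, not its specification; the class name, parameters, and watermark semantics are assumptions.

```python
class Marker:
    """Green/red packet marking with hysteresis (sketch). Packets within
    the token-bucket profile are marked green; once a packet is marked
    red, marking stays red until the bucket refills to the `hi`
    watermark, giving higher-layer recovery protocols a stable signal."""
    def __init__(self, rate, depth, hi):
        self.rate, self.depth, self.hi = rate, depth, hi
        self.tokens, self.last, self.red_mode = depth, 0.0, False

    def mark(self, now, size=1):
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.red_mode and self.tokens >= self.hi:
            self.red_mode = False      # hysteresis: exit red mode only at the watermark
        if not self.red_mode and self.tokens >= size:
            self.tokens -= size
            return 'green'
        self.red_mode = True
        return 'red'
```

In the trace below, the packet at time 1 would be green without hysteresis (one token is available), but the marker stays red until the bucket recovers to the watermark at time 3.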

67 citations


Patent
22 Mar 1993
TL;DR: In this article, the authors propose methods and devices for handling a buffer (11) in packet networks, particularly in regard to the loss and delay of packets, where packets are supposed to belong to predetermined or implicitly given loss priority classes and delay priority classes.
Abstract: Methods and devices are proposed for handling a buffer (11) in packet networks, particularly in regard to the loss and delay of packets. The packets are supposed to belong to predetermined or implicitly given loss priority classes and delay priority classes. When a packet arrives at the buffer (11) the class of the packet is determined, both for loss and delay. For each loss priority class there is a predetermined threshold value (T1, TBusy, TIdle, T3, TL), and the total filling level (M) of the buffer, i.e. the total number of packets stored, is compared to the threshold value of the loss priority class to which the received packet belongs. If said threshold value (T1, TBusy, TIdle, T3, TL) is larger than said filling level (M), the packet is buffered in order to be forwarded; otherwise it is lost. In determining whether said packet is to be buffered or lost, the delay priority of the packet is not taken into account. In the forwarding of packets from the buffer (11), packets belonging to higher delay priority classes are chosen before packets belonging to lower delay priority classes, in such a way that packets belonging to all delay priority classes can be warranted a specific minimum service level. This is achieved by associating each such class with a maximum time period within which at least one packet of the delay class considered will be forwarded, if such a packet is available in the buffer (11).
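The separation of loss decisions (admission against a per-class threshold on total occupancy) from delay decisions (service order) can be sketched as follows; class names and threshold values are illustrative, and the minimum-service-level timers of the patent are omitted for brevity.

```python
from collections import deque

class PriorityBuffer:
    """Sketch of the proposed buffer: admission consults only the
    packet's loss-priority threshold against total occupancy M, while
    forwarding serves higher delay-priority classes first (here, a
    smaller delay-class number means higher priority)."""
    def __init__(self, thresholds):
        self.thresholds = thresholds   # loss class -> threshold T
        self.queues = {}               # delay class -> FIFO queue

    def occupancy(self):
        return sum(len(q) for q in self.queues.values())

    def enqueue(self, pkt, loss_class, delay_class):
        # Buffer the packet only if its loss-class threshold exceeds M.
        if self.thresholds[loss_class] > self.occupancy():
            self.queues.setdefault(delay_class, deque()).append(pkt)
            return True
        return False                   # packet is lost

    def dequeue(self):
        for dc in sorted(self.queues):
            if self.queues[dc]:
                return self.queues[dc].popleft()
        return None
```

With thresholds {'hi': 3, 'lo': 1}, a low-loss-priority packet arriving at occupancy 1 is dropped while high-priority packets are still admitted, and dequeueing follows delay priority regardless of arrival order.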

66 citations


Journal ArticleDOI
TL;DR: It is proved that the time required to complete transmission of a packet in a set is bounded by its route length plus the number of other packets in the set, which holds for any greedy algorithm, even in the case of different starting times and different route lengths.

53 citations


Book ChapterDOI
27 Sep 1993
TL;DR: In this article, the authors discuss the QOS requirements of different applications and survey recent developments in the areas of call admission, link scheduling, and the interaction between the provision of QOS and call routing and traffic monitoring and policing.
Abstract: Increases in bandwidths and processing capabilities of future packet switched networks will give rise to a dramatic increase in the types of applications using them. Many of these applications will require guaranteed quality of service (QOS) such as a bound on the maximum end-to-end packet delay and/or on the probability of packet loss. This poses exciting challenges to network designers. In this paper we discuss the QOS requirements of different applications and survey recent developments in the areas of call admission, link scheduling, and the interaction between the provision of QOS and call routing and traffic monitoring and policing. We identify what some of the important issues are in these areas and point out important directions for future research efforts.

Patent
20 Oct 1993
TL;DR: In this article, an address window filter identifies the address of a packet being processed by examining only a predetermined portion of said address, and can comprise a dynamic window filter or a static window filter.
Abstract: A single chip hub for an electronic communication network comprises a packet memory for storing data packets, a Reduced Instruction Set Computer (RISC) processor for processing the packets, and a plurality of media access interfaces. A Direct Memory Access (DMA) controller transfers packets between the packet memory and the interfaces. A packet attribute memory stores attributes of the data packets, and an attribute processor performs a non-linear hashing algorithm on an address of a packet being processed for accessing a corresponding attribute of said packet in the packet attribute memory. An address window filter identifies the address of a packet being processed by examining only a predetermined portion of said address, and can comprise a dynamic window filter or a static window filter.

Journal ArticleDOI
TL;DR: A simple medium access protocol, POPSMAC, based on using both a tunable transmitter and a tunable receiver is proposed and analyzed, and is suitable for MAN/WAN applications in which network synchronization is difficult.
Abstract: A simple medium access protocol, POPSMAC, based on using both a tunable transmitter and a tunable receiver is proposed and analyzed in this paper. The feasibility and architectures of using this protocol to establish a passive optical packet-switched metropolitan/wide area network (MAN/WAN) are also investigated. In POPSMAC, each transmission consists of a small header packet over the signaling wavelength of the destination receiver, and a data packet over the distinct wavelength of the transmitter. The receiver is tuned to the transmitter wavelength after successfully receiving the header packet and learning there is an incoming data packet. Both connection-oriented and datagram traffic can be supported by this protocol. A performance study shows that the maximum average throughput of each transmitter or receiver approaches 50% with a small header packet size (as compared to the data packet). This performance is achieved without requiring network synchronization and is thus suitable for MAN/WAN applications in which network synchronization is difficult.

Patent
14 Dec 1993
TL;DR: In this paper, a routing system in a multimedia integrated network formed of nodes connected by links is provided for transmitting various media such as voice, image and data in a packet format, where the respective nodes forming the integrated network output the packets in an optimum output direction so that conditions required by various media and reliability of communication are satisfied.
Abstract: A routing system in a multimedia integrated network formed of nodes connected by links is provided for transmitting various media such as voice, image and data in a packet format. The respective nodes forming the integrated network output the packets in an optimum output direction so that conditions required by various media and reliability of communication are satisfied. Each node includes an interconnection type neural network for determining the packet output direction. An external stimulus input unit outputs an external stimulus to the neurons in the neural network in response to a present state of the integrated network, such as a packet delay time and a packet loss ratio for respective links, and a condition required by the media such as an allowable packet loss ratio. Therefore, the packet is output in an optimum direction which is adaptive to the present state of the integrated network and which satisfies a condition required by the media.

Journal ArticleDOI
TL;DR: A space-division, nonblocking packet switch with data concentration and output buffering is proposed that exhibits very good delay-throughput performance over a wide range of input traffic.
Abstract: A space-division, nonblocking packet switch with data concentration and output buffering is proposed. The performance of the switch is evaluated with respect to packet loss probability, the first and second moments of the equilibrium queue length and waiting time, throughput, and buffer overflow probability. Numerical results indicate that the switch exhibits very good delay-throughput performance over a wide range of input traffic. The switch compares favorably with some previously proposed switches in terms of fewer basic building elements used to attain the same degree of output buffering.

Proceedings ArticleDOI
28 Mar 1993
TL;DR: The steady-state performance of a generic high-speed queueing system with priority traffic is studied using Markov modulated fluid models, and it is found that the waiting time is more sensitive to changes of alpha than the buffer contents.
Abstract: The steady-state performance of a generic high-speed queueing system with priority traffic is studied using Markov modulated fluid models. The focus is on systems with traffic partitioned into two priorities according to a splitting factor alpha, so that low priority traffic can be discarded when congestion occurs. The results are used to further study issues of congestion control with priority traffic. In particular, the expected values, waiting times, variances of the buffer contents, and tail distributions are evaluated. It is found that the waiting time is more sensitive to changes of alpha than the buffer contents. This shows that for time-constrained traffic, the priority assignment is important for properly balancing the tradeoff between packet loss and delays. This model can be considered as a direct extension of the fluid-flow model to include priorities.

Patent
13 Dec 1993
TL;DR: The NI-Bus as discussed by the authors is an improved network interface architecture for a packet switch that allows for the combination of both voice and data in a single switch using a common packet structure.
Abstract: An improved network interface architecture for a packet switch provides for the combination of both voice and data in a single switch using a common packet structure. It allows for the dynamic allocation of bandwidth based on system loading. This includes not only bandwidth within the voice or data areas of the frame, but also between the voice and data portions. The network interface (NI) provides a mechanism (the NI-Bus) of passing all packets through the Network Interface or allowing the packet devices to directly transfer packets between one another. The bandwidth allocation can easily be changed because the control and data memories are synchronized to one another. The network interface architecture, according to the invention, allows for the data packets and the control of bandwidth allocation to be controlled by a single switching device. It synchronizes the transfer of the data and the allocation of bus bandwidth. The control of the packet devices can be controlled at a very high bit rate such as, for example, 40 Mbps. It also allows packet devices to directly transfer packets. It allows for easy re-allocation of bandwidth through the use of the NI Base Registers.

Journal ArticleDOI
TL;DR: A model of a switching component in a packet switching network is considered, and a tight bound on the size of the largest buffer required under this policy is obtained, and Longest Queue First is shown to require less storage than Exhaustive Round Robin and First Come First Served in preventing packet overflow.

Proceedings ArticleDOI
01 Oct 1993
TL;DR: This paper presents a distributed, end-to-end congestion control protocol for use in high-traffic packet switched networks, represented as a stochastic single-server queue, with arrival rates being the control variables.
Abstract: This paper presents a distributed, end-to-end congestion control protocol for use in high-traffic packet switched networks. The network is represented as a stochastic single-server queue, with arrival rates being the control variables. A time-stamp based measure of network state called warp is defined, and it is shown to be an estimator of network utilization. Congestion is modeled explicitly using unimodal load-service rate functions, and its monotonicity property is exploited to yield characterizations of stability and optimality. A protocol based on "perfect" information is analyzed, whose prowess is then shown to be emulated by one which only uses locally computable, delayed information. The main effect of a unimodal load-service function is to induce a division of the phase space into stable and unstable regions, the optimal operating point being its "boundary." Protocols are devised for dealing with each regime separately, rate adjustment protocol being the control that guides the system to the optimal operating point. Proactive rate protocol and reactive rate protocol deal with the issue of the optimal operating point being near to the unstable zone. Protocols for handling fairness and structural perturbation augment the basic suite. The analysis is supported by simulations showing the global dynamical properties of the system.

Book ChapterDOI
03 Nov 1993
TL;DR: Frame-Induced Packet Discarding is proposed, in which, upon detection of loss of a threshold number of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame.
Abstract: In order to provide efficient frame loss guarantees for video communication over ATM-like fast packet switched networks, we propose a simple to implement, yet effective, strategy called Frame-Induced Packet Discarding (FIPD), in which, upon detection of loss of a threshold number (determined by an application's video encoding scheme) of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame. Performance simulations are shown to demonstrate the efficacy of the FIPD strategy; networks employing FIPD exhibit close to two-fold increase in the number of video channels that they can support.
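The FIPD decision rule is simple enough to sketch directly: track per-frame losses and, once the threshold is reached, discard that frame's remaining packets since they can no longer yield a decodable frame. The representation of packets as (frame_id, lost) pairs is an assumption for illustration, not the paper's simulation model.

```python
def fipd_filter(packets, threshold):
    """Frame-Induced Packet Discarding sketch. `packets` is a list of
    (frame_id, lost) pairs in arrival order; `threshold` is the number
    of losses (set by the video encoding scheme) beyond which a frame
    is unusable. Returns the frame ids of packets actually forwarded."""
    losses = {}
    forwarded = []
    for frame_id, lost in packets:
        losses.setdefault(frame_id, 0)
        if lost:
            losses[frame_id] += 1
            continue
        if losses[frame_id] >= threshold:
            continue                   # frame already unusable: discard
        forwarded.append(frame_id)
    return forwarded
```

With a threshold of 2, a frame that has already lost two packets has its surviving packets discarded as well, freeing capacity for other frames; with a threshold of 3 the same packets are forwarded.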

01 Jan 1993
TL;DR: The research shows that non-work-conserving rate-controlled service disciplines have several advantages that make them suitable for supporting guaranteed performance communication in a high speed networking environment, and presents a taxonomy and framework for studying and comparing service disciplines in integrated-services networks.
Abstract: We first present a taxonomy and framework for studying and comparing service disciplines in integrated-services networks. Given the framework, we show the limitations of several existing solutions, and propose a new class of service policies called rate-controlled service disciplines. This class of service disciplines may be non-work-conserving, i.e., a server may be idle even when there are packets to be transmitted. Our research shows that non-work-conserving rate-controlled service disciplines have several advantages that make them suitable for supporting guaranteed performance communication in a high speed networking environment. In particular, rate-controlled service disciplines can provide end-to-end per-connection deterministic and statistical performance guarantees in very general networking environments. Unlike existing solutions, which only apply to simple network environments, rate-controlled service disciplines also apply to internetworking environments. Moreover, unlike existing solutions, rate-controlled service disciplines can provide guarantees in arbitrary feed-forward and feedback networks. The key feature of a rate-controlled service discipline is the separation of the server into two components: a rate-controller and a scheduler. This separation has several distinct advantages: it decouples the allocation of bandwidths and delay bounds, uniformly distributes the allocation of buffer space inside the network to prevent packet loss, and allows arbitrary combinations of rate-control policies and packet scheduling policies. Rate-controlled service disciplines provide a general framework under which most of the existing non-work-conserving disciplines can be naturally expressed. One discipline in this class, called Rate-Controlled Static Priority (RCSP), is particularly suitable for providing performance guarantees in high speed networks. 
To increase the average utilization of the network by real-time traffic, we present new admission control conditions for deterministic service, and new stochastic traffic models for statistical service. Compared to previous admission control algorithms for deterministic service, our solution ensures that deterministic services can be guaranteed even when the sum of the peak data rates of all connections exceeds the link speed. When the traffic is bursty, the new algorithm results in a multifold increase in the number of accepted connections. Further, to better characterize bursty traffic, we propose a traffic model that captures the interval-dependent behavior of traffic sources. To test our algorithms in real environments, we have designed and implemented the Real-Time Internet Protocol, or RTIP. (Abstract shortened by UMI.)

Proceedings ArticleDOI
23 May 1993
TL;DR: It is proposed that a single connection at the transport layer be implemented as multiple source routes in the network layer, resulting in a balanced loading of network resources, and the use of the JBQ rule to effect the required traffic splitting.
Abstract: It is proposed that a single connection at the transport layer be implemented as multiple source routes in the network layer, resulting in a balanced loading of network resources. The paths are not necessarily of equal length. The problem of traffic bifurcation at the source, which achieves path splitting, is solved by computing the flows on all the links in the network to minimize a given objective function, such as average delay or packet loss probability. The use of the Join-Biased Queue (JBQ) rule to effect the required traffic splitting is proposed. The superiority of the JBQ rule over other schemes is demonstrated. It is shown that the values obtained for the flows depend on the objective function being optimized. A bound on the size of the destination resequencing buffer, necessary for packets that are received out of order, is computed.
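A join-biased-queue rule can be sketched as choosing, per packet, the path minimizing queue length plus a per-path bias; the biases would steer the long-run split toward the flows computed by the optimization. This is a guess at the rule's shape from its name and role in the abstract, not the paper's definition, and the bias values are illustrative.

```python
def jbq_route(queue_lengths, biases):
    """Join-Biased-Queue sketch: send the next packet on the path whose
    current queue length plus bias is smallest. With zero biases this
    reduces to plain join-the-shortest-queue."""
    return min(range(len(queue_lengths)),
               key=lambda i: queue_lengths[i] + biases[i])
```

A bias of -3 on the second path makes it preferred even when its queue is longer, shifting the traffic split toward that path.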

Journal ArticleDOI
TL;DR: Forward error control is a strong candidate for inclusion in high-speed network protocols for delay and loss sensitive applications because of the performance gains described above.
Abstract: In this paper we analyze and provide simulation results for the use of forward error control (FEC) to improve the delay-throughput performance of packetized high-speed networks. The major source of errors in high-speed networks is expected to be buffer overflow during congested conditions, resulting in lost packets. A single lost or errored packet will have to be retransmitted, or may even cause the window it belongs to to be retransmitted, causing a large delay. In high-speed networks, as in satellite and deep-space systems, the effect of retransmission delays is greatly amplified by the small ratio of packet duration to propagation delay. Consequently, the performance of delay sensitive applications, such as distributed processing, will be degraded by the retransmissions associated with the conventional error detection and retransmission (ARQ) protocols. FEC can be used to make the performance of the end-to-end system much less sensitive to packet loss. The result is a significant increase in network throughput and an associated decrease in delay, since retransmissions are avoided. Also, reliable transmission might permit simpler higher-layer processing. Because of the performance gains described above, the use of FEC is a strong candidate for inclusion in high-speed network protocols for delay and loss sensitive applications.
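The simplest FEC scheme of this kind appends one parity packet per group, the byte-wise XOR of the group, which lets the receiver reconstruct any single lost packet without retransmission. A minimal sketch; the paper's analysis covers more general codes (e.g. Reed-Solomon style) that tolerate multiple losses per group.

```python
def xor_parity(packets):
    """Build the parity packet for a group of equal-length packets:
    the byte-wise XOR of all of them."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Reconstruct the single missing packet (the None entry) by XORing
    the parity packet with every packet that did arrive."""
    missing = bytearray(parity)
    for p in received:
        if p is not None:
            for i, byte in enumerate(p):
                missing[i] ^= byte
    return bytes(missing)
```

Because XOR is its own inverse, folding the surviving packets back into the parity leaves exactly the lost packet, so the round trip that a retransmission would cost is avoided.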

01 Jan 1993
TL;DR: A congestion control algorithm is proposed and analyzed that aims to discard packets, as early on their path as possible, if they stand little chance of reaching their destination in time; dropping late and almost-late packets improves the likelihood that other packets will make their deadline.
Abstract: Henning G. Schulzrinne, "Reducing and Characterizing Packet Loss for High-Speed Computer Networks with Real-Time Services," Ph.D. dissertation, University of Massachusetts, May 1993; B.S., Technische Hochschule Darmstadt (Federal Republic of Germany); M.S., University of Cincinnati. Directed by Professor James F. Kurose. Higher bandwidths in computer networks have made applications with real-time constraints, such as command and control and interactive voice and video communication, feasible. We describe two congestion control mechanisms that utilize properties of real-time applications. First, many real-time applications, such as voice and video, can tolerate some loss due to signal redundancy. We propose and analyze a congestion control algorithm that aims to discard packets, as early on their path as possible, if they stand little chance of reaching their destination in time. Dropping late and almost-late packets improves the likelihood that other packets will make their deadline. Secondly, in real-time systems with fixed deadlines, no improvement in performance is gained by arriving before the deadline. Thus, packets that are late and have many hops to travel are given priority over those with time to spare and close to their destination, by introducing a hop-laxity priority measure. Simulation results show marked improvements in loss performance. The implementation of the algorithm
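The two mechanisms can be sketched as one-liners: a hop-laxity score for scheduling (slack per remaining hop, smaller meaning more urgent) and an early-discard test that drops a packet once it cannot make its deadline even at best-case per-hop delay. These are plausible readings of the abstract; the thesis's exact definitions may differ, and all names are assumptions.

```python
def hop_laxity(deadline, now, hops_remaining):
    """Hop-laxity sketch: remaining slack divided by remaining hops.
    Packets with smaller values are closer to missing their deadline
    relative to the work left, so they are served first."""
    return (deadline - now) / max(hops_remaining, 1)

def should_drop(deadline, now, hops_remaining, min_hop_delay):
    """Early-discard sketch: drop a packet that cannot reach its
    destination in time even under the best-case per-hop delay."""
    return now + hops_remaining * min_hop_delay > deadline
```

A packet with 4 units of slack and 4 hops to go (laxity 1.0) is served before one with 10 units of slack and 5 hops (laxity 2.0); a packet with 5 hops left, 1 unit of minimum delay per hop, and only 2 units of slack is discarded early.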

Patent
29 Mar 1993
TL;DR: For each message packet transmitted on one of the input trunks during the course of a virtual connection, a message packet group is formed by multiplication, containing a plurality of identical message packets corresponding in number to the plurality of redundant switching matrices.
Abstract: Message packets are transmitted via a packet switching equipment which comprises at least two redundant switching matrices to output ports connected thereto, the message packets comprising a packet header identifying a respective virtual connection and being transmitted on input trunks according to an asynchronous transmission method during the course of virtual connections. For each of the message packets transmitted on one of the input trunks during the course of a virtual connection, a message packet group is formed by multiplication, having a plurality of identical message packets corresponding in number to the plurality of redundant switching matrices. An identical auxiliary identifier that changes for successive message packet groups is thereby attached to each of the message packets of a message packet group. The message packets of a message packet group are subsequently separately transmitted via the redundant switching matrices in the direction toward the output port considered for the respective virtual connection. After such a transmission, only one of the message packets of a message packet group is forwarded to the output port with reference to the auxiliary identifier respectively attached to the message packets.
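The duplicate-suppression step at the output port can be sketched as follows; the auxiliary identifier's encoding and all names are hypothetical, since the patent does not fix them here.

```python
class OutputPort:
    """Forward exactly one copy of each message packet group arriving
    over the redundant switching matrices."""
    def __init__(self):
        self.last_aux_id = None

    def receive(self, aux_id, payload):
        if aux_id == self.last_aux_id:
            return None            # copy from another matrix: suppress
        self.last_aux_id = aux_id  # first copy of a new group: forward
        return payload
```

Because the auxiliary identifier changes between successive groups, comparing against the last forwarded identifier is enough to distinguish a redundant copy from the next group, even when one matrix is slower than the other.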

Proceedings ArticleDOI
David Tipper1, J. Hammond1, S. Sharma1, A. Khetan1, K. Balakrishnan1, Sunil K. Menon1 
28 Mar 1993
TL;DR: The results of a study to determine the effects of link failures on network performance are presented and a bounding relationship is developed whereby a network node can determine whether or not congestion will occur as the result of traffic restoration after a failure.
Abstract: The results of a study to determine the effects of link failures on network performance are presented. The network studied is a virtual-circuit-based packet-switched wide area network. A generic queuing framework is developed to study the effect of failures, and the subsequent traffic restoration, on network performance. In general, the congestion resulting after a failure is a transient phenomenon. Hence, a numerical-method-based nonstationary queuing analysis is conducted in order to quantify the effects of failures in terms of the transient behavior of queue lengths and packet loss probabilities. A bounding relationship is developed whereby a network node can determine whether or not congestion will occur as the result of traffic restoration after a failure.
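The transient nature of post-failure congestion can be illustrated with a fluid approximation of a single queue. The rates and the restoration profile below are made-up inputs for illustration, not the paper's numerical model.

```python
def transient_queue(arrival_rates, service_rate, dt=1.0):
    """Fluid approximation: the backlog grows while the instantaneous
    arrival rate (native plus rerouted traffic) exceeds capacity, then
    drains once the rate drops back below it."""
    q, history = 0.0, []
    for lam in arrival_rates:
        q = max(0.0, q + (lam - service_rate) * dt)
        history.append(q)
    return history
```

The paper's bounding relationship is in a similar spirit: a node compares its offered load after restoration against its capacity to predict whether congestion will occur at all.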

Patent
21 Jun 1993
TL;DR: In this article, an improved network interface architecture for a packet switch provides for the combination of both voice and data in a single switch using a common packet structure, allowing for the dynamic allocation of bandwidth based on system loading.
Abstract: An improved network interface architecture for a packet switch provides for the combination of both voice and data in a single switch using a common packet structure. It allows for the dynamic allocation of bandwidth based on system loading. This includes not only bandwidth within the voice or data areas of the frame, but also between the voice and data portions. The network interface (NI) provides a method (the NI-Bus) of passing all packets through the Network Interface or allowing the packet devices to directly transfer packets between one another. The bandwidth allocation can easily be changed because the control and data memories are synchronized to one another. The network interface architecture, according to the invention, allows for the data packets and the control of bandwidth allocation to be controlled by a single switching device. It synchronizes the transfer of the data and the allocation of bus bandwidth. The packet devices can be controlled at a very high bit rate such as, for example, 40 Mbps. It also allows packet devices to directly transfer packets. It allows for easy re-allocation of bandwidth through the use of the NI Base Registers.

Proceedings ArticleDOI
28 Mar 1993
TL;DR: It is found that the mean and variance of packet delay through an ATM switch grow linearly with burst size, and that the delay distribution can be closely approximated by a normal distribution.
Abstract: A system which uses multiple asynchronous transfer mode (ATM) virtual circuits operating in parallel in order to connect two WAN hosts at gigabit speeds is studied. Packets in parallel channels can bypass each other, so reordering of packets before delivery to the host is required. Performance parameters of this system, including ATM channel delay, packet loss, and resequencing delay, are analyzed, using a model for an ATM channel that multiplexes ATM virtual circuits carrying bursty and nonbursty traffic. It is found that the mean and variance of packet delay through an ATM switch grow linearly with burst size, and that the delay distribution can be closely approximated by a normal distribution. It is shown that packet loss is log-linear in the ratio of buffer size to burst size, and for maximum bursts larger than 50 cells, a buffer size of twice the maximum burst size is sufficient to achieve packet loss probabilities less than 10^-9. Resequencing delay is shown to be insensitive to burst size, but the variance is large and grows linearly with burst size.
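The resequencing step that the analysis accounts for can be sketched as a hold-and-release buffer; per-packet sequence numbers and all names are illustrative assumptions.

```python
class Resequencer:
    """Hold packets that bypassed each other on parallel ATM virtual
    circuits and release them to the host in sequence order."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}

    def push(self, seq, payload):
        self.pending[seq] = payload
        released = []
        # Release the longest in-order run starting at next_seq.
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released
```

The resequencing delay studied in the paper is exactly the time a packet spends in `pending` waiting for a slower sibling on another virtual circuit.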

01 Dec 1993
TL;DR: In this article, the impact of the long propagation delay on the performance of closed-loop reactive control was investigated and a scheme to overcome the problem was proposed, which uses a global feedback signal to regulate the packet arrival rate of ground stations.
Abstract: NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate and thus it is necessary to integrate a congestion control mechanism as part of the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on the performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of ground stations. In this scheme, the satellite continuously broadcasts the status of its output buffer and the ground stations respond by selectively discarding packets or by tagging the excessive packets as low-priority. The two schemes are evaluated by theoretical queuing analysis and simulation. The former is used to analyze the simplified model and to determine the basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes the closed-loop congestion control less responsive. The broadcast information can only be used to extract statistical information. The discarding scheme needs carefully chosen status information and reduction function, and normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective since it tolerates more uncertainties and allows a larger margin of error in status information. It can protect the high-priority packets from excessive loss and fully utilize the downlink bandwidth at the same time.
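A ground station's reaction to the broadcast buffer status might look like the sketch below. The two thresholds and the three actions are illustrative assumptions; the study evaluates discarding and tagging as separate schemes rather than one combined policy.

```python
def ground_station_action(occupancy, capacity,
                          tag_threshold=0.7, discard_threshold=0.9):
    """Map the satellite's broadcast output-buffer occupancy to a
    local action: send normally, tag new packets low-priority, or
    discard them before uplink."""
    fill = occupancy / capacity
    if fill >= discard_threshold:
        return 'discard'
    if fill >= tag_threshold:
        return 'tag_low_priority'
    return 'send'
```

Because the broadcast status is half a round trip old by the time it is acted on, thresholds must be conservative; tagging tolerates stale information better than outright discarding, which matches the study's conclusion.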

Patent
28 Sep 1993
TL;DR: In this paper, the authors propose an internetworking system for exchanging packets of information between networks, which consists of a network interface module for connecting a network to the system, receiving packets from the network in a native packet format used by the network and converting each received native packet to a packet having a generic format common to all networks connected to the network.
Abstract: An internetworking system for exchanging packets of information between networks, the system comprising a network interface module for connecting a network to the system, receiving packets from the network in a native packet format used by the network and converting each received native packet to a packet having a generic format common to all networks connected to the system, and converting each of the generic packets to the native packet format for transmission to the network; a communication channel for carrying the generic packets to and from the network interface module, the channel having bandwidth; a first processing module for controlling dynamic allocation and deallocation of the channel bandwidth to the network connected to the system via the network interface module; and a second processing module for receiving all of the generic packets put on the channel by the network interface module, determining a destination network interface module for each of the generic packets on the channel, determining whether each of the generic packets needs to be bridged to the destination network interface module, and transmitting each of the generic packets determined to need bridging to the destination network interface module via the channel.
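The native-to-generic conversion at a network interface module can be sketched minimally; the generic format's fields are hypothetical, since the patent does not fix a layout here.

```python
from dataclasses import dataclass

@dataclass
class GenericPacket:
    """Hypothetical common internal format shared by all networks
    attached to the communication channel."""
    src_module: str
    dst_module: str
    payload: bytes

def to_generic(native_frame, src_module, dst_module):
    """Wrap a native frame on ingress at a network interface module."""
    return GenericPacket(src_module, dst_module, native_frame)

def to_native(packet):
    """Unwrap on egress for transmission in the destination network's
    native format."""
    return packet.payload
```

Carrying one common format on the channel means the bridging module only ever inspects generic headers; each interface module alone knows its network's native framing.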