
Showing papers on "Packet loss published in 1990"


Patent
04 Sep 1990
TL;DR: In this article, the authors propose a method for connecting a network so that TCP/IP and OSI 8473 packets may be routed in the same domain, where all the routers share link state information by using a common link state packet format (such as the ISO 10589 format).
Abstract: A method for connecting a network so that TCP/IP and OSI 8473 packets may be routed in the same domain. The independence of the addresses is maintained: one device in the network may be assigned only a TCP/IP address, and another device may be assigned only an ISO 8473 address. Furthermore, all of the routers share link state information by using a common link state packet format (such as the ISO 10589 format); thus routes through the network may be computed without regard for the protocols supported by the routers along the route. Where necessary, packets are encapsulated and forwarded through routers that do not support the protocol of the packet. In some disclosed embodiments, all of the routers in a given area support a given protocol (or, in fact, have identical capabilities, in which case encapsulation is not required). In these embodiments, the encapsulation is performed by suitable modifications to each router's packet forwarding procedures. In other disclosed embodiments, these topological restrictions are removed, and the network is expanded to support additional protocols. In these embodiments, the Dijkstra algorithm is also modified to generate information on how to encapsulate and forward packets through the network.

512 citations


Proceedings ArticleDOI
03 Jun 1990
TL;DR: A technique for fiber-optic networks based on forward-error correction (FEC) that allows the destination to reconstruct missing data packets by using redundant parity packets that the source adds to each block of data packets is presented.
Abstract: A technique for fiber-optic networks based on forward-error correction (FEC) that allows the destination to reconstruct missing data packets by using redundant parity packets that the source adds to each block of data packets is presented. Methods for generating several types of parity packets are described, along with decoding techniques and their implementations. Algorithms are presented for packet interleaving and selective rejection of packets from node buffers, both of which disperse missing packets among many blocks, thereby reducing the required coding complexity. Performance evaluation, by both analytic and simulation models, shows that this technique can result in a reduction of up to three orders of magnitude in the packet loss rate.
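The simplest instance of the parity-packet idea can be sketched with one XOR parity packet per block, which lets the receiver rebuild a single lost packet; the function names (`make_parity`, `recover`) are illustrative, not taken from the paper:

```python
def make_parity(block):
    """XOR all equal-length packets in a block into one parity packet."""
    parity = bytearray(len(block[0]))
    for pkt in block:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild at most one missing packet (marked None): XOR-ing the
    parity packet with every packet that did arrive yields the lost one."""
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return list(received)
    if len(missing) > 1:
        raise ValueError("a single parity packet can recover only one loss per block")
    rebuilt = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                rebuilt[i] ^= b
    out = list(received)
    out[missing[0]] = bytes(rebuilt)
    return out
```

The interleaving and selective-rejection algorithms in the paper serve to spread consecutive losses across blocks so that each block sees at most one missing packet, which is exactly the regime this single-parity scheme can handle.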

263 citations


Journal ArticleDOI
TL;DR: The author's recent proposals, namely, timeout-based congestion control, a DECbit scheme and a delay-based scheme for congestion avoidance are described, and areas for future research are suggested.
Abstract: Myths about congestion control are examined, and an explanation of why the trend toward cheaper memory, higher-speed links, and higher-speed processors has intensified the need to solve the congestion problem is provided. A number of proposed solutions are described, and a classification of congestion problems as well as their solutions is presented. The reasons why the problem is so difficult are identified, and the protocol design decisions that affect the design of a congestion control scheme are discussed. The author's recent proposals, namely, timeout-based congestion control, a DECbit scheme and a delay-based scheme for congestion avoidance are described, and areas for future research are suggested.

233 citations


Patent
28 Aug 1990
TL;DR: In this article, a congestion avoidance method for packet data communication is proposed, in which each node measures the round-trip delay occurring when it sends data to a destination and receives an acknowledgement.
Abstract: A packet data communication system employs a congestion avoidance method in which each node measures the round-trip delay occurring when it sends data to a destination and receives an acknowledgement. This delay is measured for different load levels, and a comparison of these delays is used to determine whether to increase or decrease the load level. The load level can be adjusted by adjusting the window size (number of packets sent into the network) or by adjusting the packet rate (packets per unit time). The objective is operation at the knee in the throughput vs. traffic curve, so that the data throughput is high and the round trip delay is low. Control is accomplished at each node individually, without intervention by the router or server, so system overhead is not increased.
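One probe step of such a delay-comparison scheme can be sketched as follows; this is an illustrative delay-gradient rule in the spirit of the patent, not its exact claimed algorithm:

```python
def adjust_window(window, rtt, prev_window, prev_rtt):
    """If round-trip delay grew faster than the window (delay rising
    superlinearly with load), the connection is past the knee of the
    throughput-vs-traffic curve, so back off; otherwise grow by one
    packet. A sketch, not the patented rule verbatim."""
    if window != prev_window and prev_rtt > 0:
        gradient = (rtt - prev_rtt) / (window - prev_window)
        if gradient > rtt / window:  # delay per extra packet exceeds average delay
            return max(1, window - 1)
    return window + 1
```

Because the decision uses only the sender's own delay measurements, it matches the patent's point that control happens at each node individually, with no router or server intervention.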

227 citations


Journal ArticleDOI
TL;DR: In order to reduce the time delays as well as multiplexer memory requirements in packet voice systems, a family of congestion control schemes is proposed based on the selective discarding of packets whose loss will produce the least degradation in quality of the reconstructed voice signal.
Abstract: In order to reduce the time delays as well as multiplexer memory requirements in packet voice systems, a family of congestion control schemes is proposed. They are all based on the selective discarding of packets whose loss will produce the least degradation in quality of the reconstructed voice signal. A mathematical model of the system is analyzed and queue length distributions are derived. These are used to compute performance measures, including mean waiting time and fractional packet loss. Performance curves for some typical systems are presented, and it is shown that the control procedures can achieve significant improvement over uncontrolled systems, reducing the mean waiting time and total packet loss (at transmitting and receiving ends). Congestion control with a resume level is also analyzed, showing that without increasing the fractional packet loss, the mean and variance of the queue can be reduced by selecting an appropriate resume level. The performance improvements are confirmed by the results of some informal subjective testing.
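The discard-threshold-plus-resume-level mechanism can be sketched with a simple hysteresis queue; the class name and parameters (`discard_at`, `resume_at`) are illustrative, not from the paper:

```python
class SelectiveDiscardQueue:
    """Sketch of threshold-based selective discard with a resume level:
    once the queue reaches `discard_at`, low-importance (discardable)
    packets are dropped until the queue drains back to `resume_at`."""

    def __init__(self, discard_at, resume_at):
        self.q = []
        self.discard_at = discard_at
        self.resume_at = resume_at
        self.discarding = False

    def arrive(self, packet, important):
        # Hysteresis: enter discard mode above the threshold,
        # leave it only once the queue has drained to the resume level.
        if len(self.q) >= self.discard_at:
            self.discarding = True
        elif len(self.q) <= self.resume_at:
            self.discarding = False
        if self.discarding and not important:
            return False  # packet selectively discarded
        self.q.append(packet)
        return True

    def serve(self):
        return self.q.pop(0) if self.q else None
```

The gap between `discard_at` and `resume_at` is what the paper analyzes: a well-chosen resume level reduces the mean and variance of the queue without increasing the fractional packet loss.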

176 citations


Journal ArticleDOI
TL;DR: It is shown that this reduces the complexity of protocol processing by removing many of the procedures required to recover from network inadequacies such as bit errors, packet loss, and out-of-sequence packets and makes it more amenable to parallel processing.
Abstract: The design, analysis, and implementation of an end-to-end transport protocol that is capable of high throughput consistent with the evolving high-speed physical networks based on fiber-optic transmission lines and high-capacity switches are presented. Unlike current transport protocols in which changes in control/state information are exchanged between the two communicating entities only when some significant event occurs, this protocol exchanges relevant and full state information periodically and frequently. It is shown that this reduces the complexity of protocol processing by removing many of the procedures required to recover from network inadequacies such as bit errors, packet loss, and out-of-sequence packets and makes it more amenable to parallel processing. Also, to increase channel utilization in the presence of high-speed, long-latency networks and to support datagrams, an efficient implementation of the selective repeat method of error control is incorporated in the protocol. An implementation using a Motorola 68030-based multiprocessor as a front-end processor is described. The current implementation can comfortably handle 10-15 kpackets/s.

176 citations


Proceedings ArticleDOI
03 Jun 1990
TL;DR: A leaky-bucket-type scheme operating on a session basis that limits the session's average rate and the burstiness is proposed, combined with an optimistic bandwidth usage scheme which works by marking packets into two different colors, green and red.
Abstract: The authors suggest and investigate a general input congestion control scheme that takes into account a broad spectrum of network issues. As a preventive congestion control strategy, a leaky-bucket-type scheme operating on a session basis that limits the session's average rate and the burstiness is proposed. This restrictive control is combined with an optimistic bandwidth usage scheme which works by marking packets into two different colors, green and red. The packets are marked so that the average green packet rate entering the network is at the reserved average rate. The average red packet rate represents traffic in excess of this guaranteed average rate and is sent to further utilize unused bandwidth in the network. Both types of packets are further filtered by a spacer which limits the peak rate at which the packets enter the network. The marked packets are then sent into the network, where they are treated according to their color, using at each intermediate node a simple threshold policy.
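The green/red marking step can be sketched as a token bucket that fills at the reserved average rate, with the bucket depth bounding burstiness; the class name `MarkingLeakyBucket` and its parameters are illustrative, not from the paper:

```python
class MarkingLeakyBucket:
    """Sketch: tokens accumulate at the reserved rate up to `burst`.
    A packet that finds a token is marked green (conforming); otherwise
    it is marked red (excess traffic, sent to use spare bandwidth)."""

    def __init__(self, rate, burst):
        self.rate = rate      # tokens per unit time (reserved average rate)
        self.burst = burst    # bucket depth (allowed burstiness)
        self.tokens = burst
        self.last = 0.0

    def mark(self, now):
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return "green"
        return "red"
```

In the paper's full scheme a spacer then limits the peak rate of both colors, and each intermediate node applies a threshold policy that favors green packets under congestion.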

171 citations


01 Apr 1990
TL;DR: This paper describes a packet video system implementation in which commercial codecs were adapted to exploit the benefits of packet switching while addressing the problems: clock synchronization was obviated by asynchronous operation, and packet loss was reduced by bandwidth reservation and forward error correction.
Abstract: Packet switching technology promises to allow improvement of video quality by efficiently supporting variable-rate video coding. Its inherent multiplexing of multiple streams also allows more efficient multi-destination delivery for N-way conferencing. However, most commercial video codecs are designed to work with circuits, not packets, in part because these benefits are accompanied by some problems. This paper describes a packet video system implementation in which commercial codecs were adapted to exploit the benefits of packet switching while addressing the problems as follows: 1) clock synchronization was obviated by asynchronous operation; 2) delay was reduced by bandwidth reservation and fast packet forwarding; and 3) packet loss was reduced by bandwidth reservation and forward error correction. An overview of the system is followed by sections addressing each of the problems and benefits, plus future directions for expansion of the system.

154 citations


Patent
Kinoshita Taizo1, Eto Yoshizumi1
22 Oct 1990
TL;DR: In this paper, a video signal transmitting method is proposed, comprising a sending method that forms packets into error-correction-protected packet blocks, and a receiving method that recomposes, from a plurality of received packets, the same packet block as that composed on the sending side.
Abstract: A video signal transmitting method comprising: a sending method including the steps of: dividing an analog video signal in a unit corresponding to an integer multiple of the number of bits composing an information field of one packet in packet transmission and converting into a digital video signal, forming the digitized video signal into a plurality of packets, forming a plurality of packets into a first packet block in M lines×N columns, adding an error correction code which corrects a longitudinal error of data in a first packet block as a second packet block in P lines×N columns in an (M+1)th line and thereafter, and sending a packet; and a receiving method including the steps of: recomposing the same packet block as that composed on a sending side from a plurality of received packets, recovering a video signal with packet loss information from an exchange and an error correction code formed into a packet, and regenerating an analog video signal from a digital video signal after recovery.

140 citations


Proceedings ArticleDOI
03 Jun 1990
TL;DR: A strategy for congestion-free communication in packet networks is proposed, which provides guaranteed services per connection with no packet loss and an end-to-end delay which is a constant plus a small bounded jitter term and provides an attractive solution for the transmission of real-time traffic in packets networks.
Abstract: The process of packet clustering in a network with well-regulated input traffic is studied. Based on this study, a strategy for congestion-free communication in packet networks is proposed. The strategy provides guaranteed services per connection with no packet loss and an end-to-end delay which is a constant plus a small bounded jitter term. Therefore, it provides an attractive solution for the transmission of real-time traffic in packet networks. The strategy is composed of an admission policy imposed per connection at the source node and a particular queuing scheme, called stop-and-go queuing, practiced at the switching nodes. The admission policy requires the packet stream of each connection to possess a certain smoothness property upon arrival to the network, while the queuing scheme eliminates the process of packet clustering and thereby preserves the smoothness property as packets travel inside the network. Implementation of the stop-and-go queuing is simple, with little processing overhead and minor hardware modifications to the conventional FIFO (first in, first out) queuing structure.
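The admission policy's smoothness requirement can be illustrated as a per-frame count check; `is_smooth` and its framing are a sketch of the idea (no more than a fixed number of packets per frame), not the paper's exact (r, T)-smoothness definition:

```python
from collections import Counter

def is_smooth(arrival_times, frame, max_per_frame):
    """Return True if no more than max_per_frame packets arrive
    within any single frame of length `frame` (a smoothness sketch)."""
    counts = Counter(int(t // frame) for t in arrival_times)
    return all(c <= max_per_frame for c in counts.values())
```

Stop-and-go queuing then preserves this property inside the network: packets arriving during one frame are held and forwarded only in the next frame, so clusters cannot build up hop by hop.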

111 citations


Journal ArticleDOI
TL;DR: NEST is particularly useful as a tool to study the performance behavior of real (or realistically modeled) distributed systems in response to simulated complex dynamic network behaviors.
Abstract: The Network Simulation Testbed (NEST) is a graphical environment for simulation and rapid-prototyping of distributed networked systems and protocols. Designers of distributed networked systems require the ability to study the system's operations under a variety of simulated network scenarios. For example, designers of a routing protocol need to study the steady-state performance features of the mechanism as well as its dynamic response to failure of links or switching nodes. Similarly, designers of a distributed transaction processing system need to study the performance of the system under a variety of load models as well as its response to failure conditions. NEST provides a complete environment for modeling, execution and monitoring of distributed systems of arbitrary complexity.
NEST is embedded within a standard UNIX environment. A user develops a simulation model of a communication network using a set of graphical tools provided by the NEST generic monitor tools. Node functions (e.g., routing protocol) as well as communication link behaviors (e.g., packet loss or delay features) are typically coded by the user in C; in theory, any high-level block-structured language could be supported for this function. These procedures provided by the user are linked with the simulated network model and executed efficiently by the NEST simulation server. The user can reconfigure the simulation scenario either through graphical interaction or under program control. The results of an execution can be graphically monitored through custom monitors, developed using NEST graphical tools.
NEST may thus be used to conduct simulation studies of arbitrary distributed networked systems. However, unlike pure simulation tools, NEST may also be used as an environment for rapid prototyping of distributed systems and protocols. The actual code of the systems developed in this manner can be used at any development stage as the node functions for a simulation. The behavior of the system may be examined under a variety of simulated scenarios. For example, in the development of a routing protocol for a mobile packet radio network, it is possible to examine the speed with which the routing protocol responds to changes in the topology, and the probability and expected duration of a routing loop. The actual code of the routing protocol may be embedded as node functions within NEST. The only modifications of the code will involve use of NEST calls upon the simulated network to send, receive or broadcast a message. Thus NEST is particularly useful as a tool to study the performance behavior of real (or realistically modeled) distributed systems in response to simulated complex dynamic network behaviors. Such dynamic response is typically beyond the scope of analytical techniques restricted to modeling steady-state equilibrium behaviors.
Traditional approaches to simulation are either language-based or model-based. Language-based approaches (e.g., Simula, Simscript) provide users with specialized programming language constructs to support modeling and simulation. The key advantage of these approaches is their generality of application. These approaches, however, are fundamentally limited as tools to study complex distributed systems. First, they separate the tasks of modeling and simulation from those of design and development. A designer of a network protocol is required to develop the code in one environment using one language (e.g., C), while simultaneously developing a consistent simulation model (e.g., in Simscript). The distinctions between the simulation model and the actual system may be significant enough to reduce the effectiveness of simulation. This is particularly true for complex systems involving a long design cycle and significant changes. Second, these approaches require the modeler to efficiently manage the complexity of scheduling distributed system models (under arbitrary network scenarios).
Model-based approaches (e.g., queuing-network simulators such as IBM's RESQ [12]) provide users with extensive collections of tools supporting a particular simulation-modeling technique. The key advantage of model-based approaches is the efficiency with which they may handle large-scale simulations by utilizing model-specific techniques (e.g., fast algorithms to solve complex queuing network models). Their key disadvantage is a narrower scope of applications and questions that they may answer. For example, it is not possible within a pure queuing-network model to model and analyze complex transient behaviors (e.g., formation of routing loops in a mobile packet radio network). The model-based approach, like the language-based approaches, suffers from having simulation/testing separated from design/development. It has the additional important disadvantage of requiring users to develop an in-depth understanding of the modeling techniques. Designers of distributed database transaction systems are often unfamiliar with queuing models.
NEST pursues a different approach to simulation studies: extending a networked operating system environment to support simulation modeling and efficient execution. This environment-based approach to simulation shares the generality of its modeling power with language-based approaches. NEST may be used to model arbitrary distributed interacting systems. NEST also shares with the language-based approach an internal execution architecture that accomplishes very efficient scheduling of a large number of processes. However, unlike language-based approaches, the user does not need to be concerned with management of complex simulation scheduling problems. Furthermore, NEST does not require the user to master or use a separate simulation language facility; the processes of design, development and simulation are fully integrated. The user can study the behavior of the actual system being developed (at any level of detail) under arbitrary simulated scenarios. The routing protocol designer, for example, can attach the routing protocol designed (actual code with minor adjustments) to a NEST simulation and study the system behavior. As the system changes through the design process, new simulation studies may be conducted by attaching the new code to the same simulation models. NEST can thus be used as an integral part of the design process along with other tools (e.g., for debugging).
Like model-based approaches, NEST is specifically targeted toward a limited scope of applications: distributed networked systems. NEST supports a built-in customizable communication network model. However, this scope has been sufficiently broad to support studies ranging from low-level communication protocols to complex distributed transaction processing systems, avionic systems and even manufacturing processes.
The environment-based approach to simulation offers a few important attractions to users: 1) simulation is integrated with the range of tools supported by the environment, so the user can utilize graphics, statistical packages, debuggers and other standard tools of choice in the simulation study, and simulation can become an integral part of a standard development process; 2) users need not develop extensive new skills or knowledge to pursue simulation studies; and 3) standard features of the environment can be used to enhance the range of applicability.
NEST simulation is configured as a network server with monitors as clients. The client/server model permits multiple remote accesses to a shared testbed, which can be very important in supporting a large-scale multisite project. In this article we describe the architecture of NEST, illustrate its use, and describe some aspects of NEST implementation. We also feature its design and provide examples of NEST applications.

Patent
27 Aug 1990
TL;DR: In this article, a methodology for traffic control administered locally at each individual node in a packet network is described, where the nodes apply the "back-pressure" of congestion on one another and cooperate to smooth traffic and alleviate the accumulation of packets at any single node.
Abstract: A methodology is disclosed for traffic control administered locally at each individual node in a packet network. Packet level controls at a node pertain to whether to admit a new packet into the network and whether to momentarily detain passing-through packets. The nodes apply the "back-pressure" of congestion on one another and thereby cooperate to smooth traffic and alleviate the accumulation of packets at any single node. The decision process in packet level control involves two basic operations. First, the address of the packet is translated into a binary word via a static routing table and then, secondly, the binary word and control data representative of dynamic traffic information are operated on logically; traffic control decisions, such as detain a passing-through packet or permit entry of a packet into the network, are based primarily on the result of the logical operation. In setting up a real-time call, a stream of scout packets representative of the real-time packets is screened for entry into the network as influx packets. If all scout packets are permitted to enter the network in a given time period, then the real-time data packets are cleared for propagation.

Patent
27 Nov 1990
TL;DR: In this article, a wireless in-building telecommunications system for voice and data communications is disclosed having at least one node (101) arranged for linking to the PSTN (151), and a multiplicity of user modules (103) (UM's) linked to the node via a shared RF communications path (107).
Abstract: A wireless in-building telecommunications system for voice and data communications is disclosed having at least one node (101) arranged for linking to the PSTN (151) and to at least one digital information source (153, 155, 157, 159), and a multiplicity of user modules (103) (UMs) linked to the node via a shared RF communications path (107). Each UM is coupled to a voice telephone instrument (127) and to one or more data terminals (165). The UMs communicate with the node by exchanging fast packets via the common RF path (107). The node also includes a fast-packet-switched mechanism controlled by a bandwidth allocating scheme to prevent collisions of packets as they are transmitted between the various units (101, 103) (nodes and/or user modules) that may be accessing the RF path (107). Also disclosed is a method for allocating the required bandwidth to each of the users of the common communications path in a wireless in-building telephone system. The invention provides for the combination of both voice and data in a single switch using a common packet structure. It allows for the dynamic allocation of bandwidth based on system loading. This includes not only bandwidth within the voice or data areas of the frame, but also between the voice and data portions. It also synchronizes the transfer of the data and the allocation of bus bandwidth.

Patent
03 Oct 1990
TL;DR: In this paper, a congestion control strategy for a packet switching network (10) comprises an admission policy which controls the admission of packets into the network and a stop-and-go queuing strategy (50) at the network nodes.
Abstract: A congestion control strategy for a packet switching network (10) comprises an admission policy which controls the admission of packets into the network (10) and a stop-and-go queuing strategy (50) at the network nodes (n'''). The congestion control strategy utilizes multiple frame sizes (Fig. 6A) so that certain types of connections can be provided with small queuing delays while other types of connections can be allocated bandwidth using small incremental bandwidth units.

Patent
Norimasa Kudo1
02 Jul 1990
TL;DR: In this article, a packet communication system in which packets are arranged to form a packet queue, the packets in the queue are sequentially and selectively transmitted, and the selection of one of the packets of the packet queue to be immediately transmitted is determined through simple operation.
Abstract: A packet communication system in accordance with the present invention wherein, when the system receives packets each made up of a predetermined unit of data from a plurality of terminals, these packets are arranged to form a packet queue, the packets in the packet queue are sequentially and selectively transmitted, and the selection of one of the packets of the packet queue to be immediately transmitted is determined through a simple operation, whereby high-speed processing can be realized and discardable packets in the packet queue can be selectively discarded with a high degree of freedom.

Proceedings ArticleDOI
Kotikalapudi Sriram1
16 Apr 1990
TL;DR: The author describes a novel bandwidth allocation method called the (T1, T2) scheme, which efficiently integrates packetized voice and data traffic and implements a voice block dropping scheme in which the less significant bits in voice packets are dropped during periods of congestion.
Abstract: The author describes a novel bandwidth allocation method called the (T1, T2) scheme. It efficiently integrates packetized voice and data traffic. Voice and data packets are queued separately, and the (T1, T2) scheme is used to facilitate dynamic bandwidth sharing and mutual overload protection between the two queues. T1 and T2 are time limits for transmitting voice and data packets continually while the voice and data queues are visited alternately. The scheme guarantees bandwidths to voice and data in the proportion of their respective time slice allocations, T1 and T2. However, the bandwidth allocations are flexible in the sense that whenever one queue is exhausted, the transmission is immediately moved over to the other queue if it has a packet waiting to be served. The system also implements a voice block dropping scheme in which the less significant bits in voice packets are dropped during periods of congestion. The author presents results based on a simulation model which illustrate that the two schemes together provide the desired performance in terms of very good voice quality, low delay and packet loss, efficient use of transmission bandwidth, and protection in overload.
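The alternating time-slice discipline can be sketched as follows, assuming unit service time per packet; the function name and signature are illustrative, not from the paper:

```python
def t1_t2_schedule(voice, data, t1, t2, service_time=1):
    """Sketch of the (T1, T2) discipline: serve the voice queue for at
    most t1 time units, then the data queue for at most t2, moving to
    the other queue immediately when the current one empties."""
    order = []
    while voice or data:
        t = 0
        while voice and t < t1:          # voice gets up to t1 per visit
            order.append(voice.pop(0))
            t += service_time
        t = 0
        while data and t < t2:           # data gets up to t2 per visit
            order.append(data.pop(0))
            t += service_time
    return order
```

The t1:t2 ratio fixes the guaranteed bandwidth split, while the early switch on an empty queue provides the flexible sharing the paper describes.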

Journal ArticleDOI
TL;DR: The experiments show that both time-delay and buffer-occupancy distributions of multiplexed video sources display a marked bimodal behavior, which does not seem to depend on the buffer size.
Abstract: Real-time traffic measurements on MAGNET II, an integrated network testbed based on asynchronous time sharing, are reported. The quality of service is evaluated by monitoring the buffer-occupancy distribution, the packet time-delay distribution, the packet loss, and the gap distribution of the consecutively lost packets. The experiments show that both time-delay and buffer-occupancy distributions of multiplexed video sources display a marked bimodal behavior, which does not seem to depend on the buffer size. The reliance of the network designer on traffic sources that do not exhibit substantial correlations can lead to implementations with serious congestion problems. For asynchronous-time-sharing-based networks with different traffic classes, the impact of a traffic class on the performance of the other classes tends to be diminished when compared to single-class-based asynchronous transfer mode (ATM) networks.

Proceedings ArticleDOI
28 May 1990
TL;DR: A congestion avoidance and control scheme that monitors the incoming traffic to each destination and provides rate-based feedback information to the sources of bursty traffic so that sources of traffic can adjust their packet rates to match the network capacity is described.
Abstract: A congestion avoidance and control scheme that monitors the incoming traffic to each destination and provides rate-based feedback information to the sources of bursty traffic so that sources of traffic can adjust their packet rates to match the network capacity is described. The congestion avoidance mechanism at nodes on the periphery of the network controls incoming traffic so that it does not exceed the capacity of paths to different destinations. The congestion control mechanism at each node monitors the performance of adjacent links and generates rate control messages that warn the sources of traffic before congestion develops. Some existing schemes are reviewed, and the congestion avoidance and control scheme and its applicability to various transport protocols are discussed. Experiments show that the scheme is effective in preventing congestion inside the network and that it manages to restrict the traffic on any overloaded path to 80%-90% of its capacity.

Patent
18 Oct 1990
TL;DR: In this paper, a protocol conversion system for X.25 and X.32 communications is presented, which is applicable to couple a data communication network having an X.32 correspondence portion to another data communication network.
Abstract: An X.25 protocol apparatus can communicate with another X.25 apparatus or an X.32 apparatus through a telephone network, an ISDN network, or a PBX network, by attaching a protocol conversion system to the X.25 apparatus. The conversion system has a pair of signal identification portions (1, 6) for separating a received packet into a data packet and a control packet, a call process portion (3, 12) for dial signal process, and an address table (4) and a packet edition portion (5) for address conversion of a control packet. A data packet is forwarded from one signal identification portion to another signal identification portion through a direct path (53). A control packet is forwarded to a control packet identification portion (2) for control process which includes dial signal process, address conversion, and protocol sequences. This system is applicable to couple a data communication network which has an X.32 correspondence portion to another data communication network.

Proceedings ArticleDOI
03 Jun 1990
TL;DR: The choice of parameters for the MMPP captures aspects of the long-term correlation in the arrival process in a more intuitive manner and computes loss more accurately than previous approaches for computing the M MPP parameters.
Abstract: The three performance models studied differ primarily in the manner in which the superposition of the voice sources (i.e. the arrival process) is modeled. The first approach models the superimposed voice sources as a renewal process. The second approach is based on modeling the superimposed voice sources as a Markov modulated Poisson process (MMPP). The choice of parameters for the MMPP captures aspects of the long-term correlation in the arrival process in a more intuitive manner and computes loss more accurately than previous approaches for computing the MMPP parameters. A fluid flow approximation for the superposition is evaluated on the basis of the technique of D. Anick et al. (1982). For all three approaches, the case of multiplexing voice sources over a T1-rate link is considered. Both the new MMPP model and the fluid flow approximation can provide accurate loss predictions for parameter ranges of practical interest. The modeling of buffer overflow for general arrival processes is addressed, and modeling approaches for analyzing finite-buffer multiplexers with general arrival and service processes in a network environment are outlined.

Proceedings ArticleDOI
03 Jun 1990
TL;DR: By analysis and simulation of a multistage virtual circuit, it is shown that this approach can cut voice-tolerable loss rates in half for high loads and the simple case of using the same local deadline throughout the network performs nearly as well as taking reduced interior traffic into consideration and optimizing loss performance over a set of heterogeneous local deadlines.
Abstract: An investigation is conducted of the possibility of locally controlling short-term congestion for loss-tolerant but delay-sensitive traffic (such as packet voice) through selective discarding of packets based on the virtual work found by a packet on arrival at a queue (local deadlines). By analysis and simulation of a multistage virtual circuit, it is shown that this approach can cut voice-tolerable loss rates in half for high loads. It is also shown that the simple case of using the same local deadline throughout the network performs nearly as well as taking reduced interior traffic into consideration and optimizing loss performance over a set of heterogeneous local deadlines. As an example, the issue of establishing control parameters at call-setup time is also considered. >
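The local-deadline discard rule above can be sketched as follows: a packet is dropped on arrival if the virtual work (unfinished service) it finds in the queue already exceeds its local delay budget, since it would be useless to the delay-sensitive receiver anyway. This is our simplification of the idea, with illustrative names and units.

```python
def deadline_queue(arrival_times, service_time, local_deadline):
    """Selective discard on arrival based on virtual work.

    A packet arriving when the queue's unfinished work exceeds
    local_deadline is discarded immediately rather than queued.
    Returns (admitted, discarded) arrival-time lists.
    """
    work = 0.0       # unfinished work currently in the queue
    t_prev = 0.0
    admitted, discarded = [], []
    for t in arrival_times:
        work = max(0.0, work - (t - t_prev))  # server drains work over time
        t_prev = t
        if work > local_deadline:
            discarded.append(t)     # would miss its delay budget: drop now
        else:
            admitted.append(t)
            work += service_time
    return admitted, discarded
```

With a burst of simultaneous arrivals, only the packets that can still meet the deadline are admitted, which is the short-term congestion relief the paper studies.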

Proceedings ArticleDOI
02 Dec 1990
TL;DR: It is shown that bursty traffic can degrade switch performance significantly and that it is difficult to circumvent the degradation by merely restricting the offered traffic load, and methods that alleviate head-of-line blocking are not effective in lowering packet loss probability under bursty traffic.
Abstract: The packet loss probabilities of several alternative input-buffered and output-buffered switch designs with finite buffer space are investigated. The effects of bursty traffic, modeled by geometrically distributed active and idle periods, are explored. It is shown that bursty traffic can degrade switch performance significantly and that it is difficult to circumvent the degradation by merely restricting the offered traffic load. For input-buffered switches, methods that alleviate head-of-line blocking are not effective in lowering packet loss probability under bursty traffic. Packet loss probability is more sensitive under bursty traffic to the specific contention resolution scheme adopted than it is under uniform random traffic. Several interesting, and perhaps unexpected, results are revealed: under bursty traffic, output queueing may have higher loss probabilities than input queueing; under bursty traffic, speeding up the switch operation may result in higher loss probabilities than allocating multiple output ports to each output address; and if buffers are not shared in a fair manner, sharing buffers can make performance worse at high traffic loads than not sharing them. >
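The traffic model used above — geometrically distributed active and idle periods — can be generated with a two-state on/off source. The sketch below is a generic instance of that model (parameter names are ours); the mean burst length is 1/p_end_active slots.

```python
import random

def bursty_source(p_end_active, p_end_idle, slots, seed=0):
    """Per-slot arrival indicators from an on/off source.

    Active and idle period lengths are geometrically distributed:
    each active slot ends the burst with probability p_end_active,
    each idle slot ends the silence with probability p_end_idle.
    """
    rng = random.Random(seed)
    active = False
    out = []
    for _ in range(slots):
        if active:
            out.append(1)                  # one packet per active slot
            if rng.random() < p_end_active:
                active = False
        else:
            out.append(0)
            if rng.random() < p_end_idle:
                active = True
    return out
```

Feeding such correlated streams into a finite-buffer switch model, instead of independent Bernoulli arrivals, is what exposes the loss-probability degradation the paper reports.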

Patent
Yoshiro Osaki1
26 Dec 1990
TL;DR: In this paper, a packet exchange network comprises a plurality of packet exchange nodes connected through trunk lines and terminals connected to the packet exchange node each of which is provided a packet storage buffer for storing received packets.
Abstract: A packet exchange network comprises a plurality of packet exchange nodes connected through trunk lines, and terminals connected to the packet exchange nodes, each node being provided with a packet storage buffer for storing received packets. When the buffer cannot store every incoming packet due to traffic congestion, some packets are discarded from the buffer. At call setup, the number of relay packet exchange nodes through which the packets will pass is examined and registered, and the discard rate applied at each relay node is adjusted on the basis of this registered node count, so that all packets experience substantially the same end-to-end discard rate regardless of the number of relay nodes traversed.
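One way to realize the equal-discard-rate goal above is to choose a per-node discard probability from the registered hop count so that the end-to-end survival probability is the same for every path length. The formula below is our reading of that requirement, not quoted from the patent.

```python
def per_node_discard_rate(target_rate, n_relay_nodes):
    """Per-node discard probability for an n-hop path.

    Chosen so that 1 - (1 - p)**n equals target_rate, i.e. a packet
    crossing n relay nodes sees the same end-to-end discard rate as
    a packet crossing one.
    """
    return 1.0 - (1.0 - target_rate) ** (1.0 / n_relay_nodes)
```

A packet registered with a longer path thus gets a proportionally gentler discard rate at each relay node it crosses.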

Proceedings ArticleDOI
16 Apr 1990
TL;DR: A congestion control strategy for broadband packet networks is proposed, aimed at connection-oriented services and provides congestion-free communications with guaranteed throughput and constant end-to-end delay per connection.
Abstract: A congestion control strategy for broadband packet networks is proposed. The strategy is aimed at connection-oriented services and provides congestion-free communications with guaranteed throughput and constant end-to-end delay per connection. It is composed of an admission policy imposed per connection at the source node, and a particular queuing scheme practised at the switching nodes. The admission policy requires the packet stream of each connection to possess a certain smoothness property upon arrival at the network, while the queuing scheme preserves this property as packets travel inside the network. Implementation of this strategy is simple, with little processing overhead and only minor hardware modifications to the conventional FIFO (first-in, first-out) queuing structure. Uniform application of the strategy to all of the services in a network may result in low transmission utilization. In order to obtain statistical multiplexing gain, less conservative traffic management schemes could be incorporated with the proposed strategy into the same network. >
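A smoothness property of the kind the admission policy requires can be checked directly on a packet stream. The sketch below uses one common (rate, burst) definition — in any interval of length t, at most burst + rate*t packets arrive; the paper's exact property may differ, and the names are illustrative.

```python
def is_smooth(arrival_times, rate, burst):
    """Check a (rate, burst) smoothness property on a sorted stream.

    Returns True iff every window [t_i, t_j] of the stream contains
    at most burst + rate * (t_j - t_i) packets.
    """
    n = len(arrival_times)
    for i in range(n):
        for j in range(i, n):
            window = arrival_times[j] - arrival_times[i]
            if (j - i + 1) > burst + rate * window:
                return False
    return True
```

A source node would admit a connection's packets only while the stream stays smooth; the switching nodes' queuing scheme then preserves the property hop by hop.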

Proceedings ArticleDOI
16 Apr 1990
TL;DR: The protocol is shown to provide excellent delay performance for individual segments owing to its distributed first-come, first-served access capability, and it is observed that network and traffic parameters strongly influence resulting performance and that a general comparison can be misleading.
Abstract: In the distributed queue dual bus (DQDB) protocol, a user packet is partitioned into several fixed-size segments. The protocol is shown to provide excellent delay performance for individual segments owing to its distributed first-come, first-served access capability. The performance of the DQDB protocol is examined by both approximate analysis and simulation. The results show that packet delays are generally independent of network size and dependent only on packet size. Comparisons with a token ring network are made. It is observed that network and traffic parameters strongly influence resulting performance and that a general comparison can be misleading. >
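The distributed first-come, first-served behavior of DQDB comes from each node's request counter: when a node queues a segment, it copies the counter into a countdown and lets that many empty slots pass downstream (one per outstanding upstream request) before seizing one. The sketch below shows only that countdown step, simplified to ignore new requests arriving during the countdown; names are ours.

```python
def dqdb_access_slot(empty_slots, request_counter):
    """Index of the forward-bus slot a node seizes for its segment.

    empty_slots: per-slot booleans for the forward bus (True = empty).
    request_counter: requests counted from the reverse bus when the
    segment was queued. The node lets that many empty slots pass,
    then takes the next one. Returns None if none remains.
    """
    countdown = request_counter
    for idx, empty in enumerate(empty_slots):
        if not empty:
            continue                 # busy slot: neither usable nor counted
        if countdown == 0:
            return idx               # seize this empty slot
        countdown -= 1               # let it pass for a prior request
    return None
```

Because every node defers to exactly the requests queued before its own, access is granted in (approximately) global arrival order, which is the source of the delay performance noted above.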

Proceedings ArticleDOI
03 Jun 1990
TL;DR: A probe-ack contention resolution scheme is adopted to select the conflict-free paths for the packets to be transferred through the interconnection network, and a number of packets are limited such that the buffer overflow at the output queues never takes place.
Abstract: A broadband packet switch is described whose interconnection network is based on a Batcher-banyan structure. The switch, which is able to transfer up to M packets per slot to a given switch outlet, is provided with packet buffers at both input and output ports. A probe-ack contention resolution scheme is adopted to select the conflict-free paths for the packets to be transferred through the interconnection network. At the same time, this scheme limits access to the interconnection network so that buffer overflow at the output queues never takes place. Restricting the packet loss so that it occurs only at input queues simplifies the control procedures and makes it possible to jointly optimize the packet storage capability at input and output queues. >
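The effect of the probe-ack phase can be sketched as a per-slot grant step: each input probes for its destination outlet, each outlet acknowledges at most M probes, and only acknowledged inputs transmit — so no output queue can receive more than M packets in a slot. This is a behavioral simplification, not the Batcher-banyan hardware mechanism itself.

```python
def probe_ack(requests, m_per_output):
    """One probe-ack round: grant at most m_per_output probes per outlet.

    requests: (input_port, output_port) pairs in input-port order.
    Returns the list of input ports granted permission to transmit;
    the rest keep their packets in the input queues.
    """
    grants_left = {}
    granted = []
    for inp, out in requests:
        left = grants_left.get(out, m_per_output)
        if left > 0:
            grants_left[out] = left - 1
            granted.append(inp)
    return granted
```

Since losses are confined to the input queues by construction, buffer dimensioning can be optimized there, as the abstract notes.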

Journal ArticleDOI
TL;DR: Over short intervals of time, circuit emulation type sources submit packets in a deterministic and periodic fashion, and when the buffer and the packet periods are equal, delays become very regular, and calls may lose several packets in succession.
Abstract: Over short intervals of time, circuit emulation type sources submit packets in a deterministic and periodic fashion. If the buffer is finite, then it is possible that the content of the buffer will also behave deterministically and cyclically. Under fairly general assumptions, such a phenomenon may occur for various control strategies such as packet or block dropping. When the buffer and the packet periods are equal, delays become very regular, and calls may lose several packets in succession.
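The deterministic, cyclic buffer behavior described above is easy to reproduce: if several periodic sources share a common period and the buffer is finite, the buffer content (and any overflow loss) repeats exactly once per period, so the same calls lose packets in successive cycles. The sketch below is an illustrative instance with our own parameter names.

```python
def buffer_trajectory(n_sources, period, buffer_size, slots):
    """Finite buffer fed by n periodic sources with a common period.

    All sources emit one packet at the start of each period; the
    buffer drains one packet per slot. Returns (per-slot queue
    lengths, total packets lost to overflow).
    """
    queue = 0
    trace = []
    losses = 0
    for t in range(slots):
        if t % period == 0:              # all sources emit together
            losses += max(0, queue + n_sources - buffer_size)
            queue = min(buffer_size, queue + n_sources)
        queue = max(0, queue - 1)        # serve one packet per slot
        trace.append(queue)
    return trace, losses
```

The trajectory is exactly periodic, so the overflow hits the same phase of every cycle — the regular delays and consecutive losses the paper describes.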

Patent
Masao Akata1
14 Dec 1990
TL;DR: An asynchronous transfer mode switching network system as discussed by the authors relays packets stored in packet buffer units (201 to 20n) to output ports (241 to 24m) designated by the packets, and a time slot scheduling unit assigns time slots to the packets stored at the buffer units for preventing the packets from collision in a space division switching unit.
Abstract: An asynchronous transfer mode switching network system relays packets stored in packet buffer units (201 to 20n) to the output ports (241 to 24m) designated by the packets. A time slot scheduling unit (25) assigns time slots to the packets upon their arrival at the packet buffer units, preventing collisions in a space division switching unit (23). Each packet buffer unit sequentially writes new packets into its memory locations but reads them out non-sequentially, in the time slots assigned by the time slot scheduling unit, so that the throughput of the space division switching unit is improved.
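The slot-assignment idea above can be sketched as a greedy earliest-fit scheduler: each arriving packet is given the earliest slot in which both its input port and its requested output port are free, so no two packets ever meet in the space-division stage. This greedy rule is our simplification, not necessarily the patent's exact algorithm.

```python
def schedule_slots(packets):
    """Earliest-fit collision-free slot assignment.

    packets: (input_port, output_port) pairs in arrival order.
    Returns one time slot per packet such that no slot carries two
    packets from the same input or to the same output.
    """
    busy_in, busy_out = set(), set()
    slots = []
    for inp, out in packets:
        t = 0
        while (inp, t) in busy_in or (out, t) in busy_out:
            t += 1                       # advance past conflicting slots
        busy_in.add((inp, t))
        busy_out.add((out, t))
        slots.append(t)
    return slots
```

Reading the buffers out of arrival order — each packet in its assigned slot — is what lets the switch avoid the head-of-line blocking of plain FIFO input queues.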

Patent
10 Sep 1990
TL;DR: In this article, a resolution phase is used to reduce packet collisions on the transmission channel by employing several time intervals of the resolution phase, which may be divided into several rounds of packet collisions, and each station transmits its next packet after the end of an ongoing transmission if its residual packet number is smaller than the residual packet numbers of the ongoing transmission.
Abstract: A process is disclosed whereby each of a plurality of stations, transmitting messages segmented into multiple sequential packets, is able to access a transmission channel of a local communications network. A residual packet number, representing the number of packets remaining in the message after the current packet has been transmitted, is associated with each message. The number of packet collisions on the transmission channel is reduced by employing a resolution phase, which may be divided into several time intervals. Stations transmit messages with certain residual packet numbers during predetermined time intervals of the resolution phase. A station transmits its next packet after the end of the ongoing transmission if its residual packet number is smaller than the residual packet number of the ongoing message transmission.
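The final rule above — only stations with a residual packet number smaller than the ongoing message's may transmit next — can be expressed as a simple eligibility filter. The sketch below is our rendering of that rule with illustrative names.

```python
def next_transmitters(residuals, ongoing_residual):
    """Stations eligible to transmit after the ongoing packet ends.

    residuals: per-station residual packet numbers (packets still to
    send; 0 means the station has finished its message).
    Only stations with 0 < residual < ongoing_residual may contend,
    which thins the contending set and reduces collisions.
    """
    return [i for i, residual in enumerate(residuals)
            if 0 < residual < ongoing_residual]
```

Dividing the resolution phase into intervals keyed to residual-number ranges then separates the remaining contenders in time.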

Proceedings ArticleDOI
16 Apr 1990
TL;DR: Experiments show that both time delay and buffer occupancy distributions of multiplexed video sources display a marked bimodal behavior, which does not seem to depend on the buffer size.
Abstract: Real-time traffic measurements on MAGNET II, an integrated network testbed based on asynchronous time sharing (ATS), are reported. The quality of service is evaluated by monitoring the buffer occupancy distribution, the packet time delay distribution, the packet loss, and the gap distribution of consecutively lost packets. Experiments show that both the time delay and buffer occupancy distributions of multiplexed video sources display a marked bimodal behavior, which does not seem to depend on the buffer size. Reliance by the network designer on traffic source models that do not exhibit substantial correlations can lead to implementations with serious congestion problems. For ATS-based networks with different traffic classes, the impact of a traffic class on the performance of the other classes tends to be diminished when compared with single-class-based ATM (asynchronous transfer mode) networks. >