
Showing papers in "IEEE/ACM Transactions on Networking" in 1993


Journal ArticleDOI
TL;DR: RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP; they have no bias against bursty traffic and avoid the global synchronization of many connections decreasing their windows at the same time.
Abstract: The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.

6,198 citations
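
As a rough illustration of the marking rule this abstract describes, here is a minimal Python sketch using the commonly cited RED parameters (EWMA weight w_q, thresholds min_th/max_th, maximum probability max_p); the values are illustrative and the count-based spacing of marks used in the full algorithm is omitted.

    import random

    def update_avg(avg, queue_len, w_q=0.002):
        # Exponentially weighted moving average of the instantaneous queue size.
        return (1 - w_q) * avg + w_q * queue_len

    def red_mark(avg, min_th=5, max_th=15, max_p=0.02):
        # Below min_th: never mark; at or above max_th: always mark/drop.
        if avg < min_th:
            return False
        if avg >= max_th:
            return True
        # In between, mark with a probability that grows linearly with the average.
        p = max_p * (avg - min_th) / (max_th - min_th)
        return random.random() < p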


Journal ArticleDOI
Abhay Parekh, Robert G. Gallager
TL;DR: Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments.
Abstract: Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers. The inherent flexibility of the service discipline is exploited to analyze broad classes of networks. When only a subset of the sessions are leaky bucket constrained, we give succinct per-session bounds that are independent of the behavior of the other sessions and also of the network topology. However, these bounds are only shown to hold for each session that is guaranteed a backlog clearing rate that exceeds the token arrival rate of its leaky bucket. A much broader class of networks, called consistent relative session treatment (CRST) networks, is analyzed for the case in which all of the sessions are leaky bucket constrained. First, an algorithm is presented that characterizes the internal traffic in terms of average rate and burstiness, and it is shown that all CRST networks are stable. Next, a method is presented that yields bounds on session delay and backlog given this internal traffic characterization. The links of a route are treated collectively, yielding tighter bounds than those that result from adding the worst-case delays (backlogs) at each of the links in the route. The bounds on delay and backlog for each session are efficiently computed from a universal service curve, and it is shown that these bounds are achieved by "staggered" greedy regimes when an independent sessions relaxation holds. Propagation delay is also incorporated into the model. Finally, the analysis of arbitrary topology GPS networks is related to Packet GPS networks (PGPS). The PGPS scheme was first proposed by Demers, Shenker and Keshav (1991) under the name of weighted fair queueing. For small packet sizes, the behavior of the two schemes is seen to be virtually identical, and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments.

3,967 citations
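
For orientation, a small numeric sketch of the single-node ingredients this analysis builds on: each session i is constrained by a leaky bucket (sigma_i, rho_i), and GPS guarantees it a rate g_i proportional to its weight phi_i; in the single-node case the worst-case delay is bounded by sigma_i/g_i whenever g_i >= rho_i. The weights, rates, and capacities below are made-up illustrative numbers, not values from the paper.

    def gps_guaranteed_rate(phi, C):
        # GPS guarantees session i at least the fraction phi_i / sum(phi) of the link capacity C.
        total = sum(phi.values())
        return {i: C * w / total for i, w in phi.items()}

    # Leaky-bucket parameters (sigma = burst in bits, rho = sustained rate in b/s);
    # purely illustrative values.
    sessions = {"a": (8000.0, 1e5), "b": (16000.0, 3e5)}
    phi = {"a": 1.0, "b": 3.0}
    C = 1e6  # link capacity, b/s

    g = gps_guaranteed_rate(phi, C)
    for i, (sigma, rho) in sessions.items():
        assert g[i] >= rho, "bound requires backlog-clearing rate >= token arrival rate"
        print(i, "worst-case single-node GPS delay <=", sigma / g[i], "seconds")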


Journal ArticleDOI
Anwar Elwalid, Debasis Mitra
TL;DR: It is shown that for general Markovian traffic sources it is possible to assign a notional effective bandwidth to each source that is an explicitly identified, simply computed quantity with provably correct properties in the natural asymptotic regime of small loss probabilities.
Abstract: A prime instrument for controlling congestion in a high-speed network is admission control, which limits calls and guarantees a grade of service determined by delay and loss probability in the multiplexer. It is shown that for general Markovian traffic sources it is possible to assign a notional effective bandwidth to each source that is an explicitly identified, simply computed quantity with provably correct properties in the natural asymptotic regime of small loss probabilities. It is the maximal real eigenvalue of a matrix that is directly obtained from the source characteristics and the admission criterion, and for several sources it is simply additive. Both fluid and point process models are considered. Numerical results show that the acceptance set for heterogeneous classes of sources is closely approximated and conservatively bounded by the set obtained from the effective bandwidth approximation. The bandwidth-reducing properties of the leaky bucket regulator are exhibited numerically.

759 citations


Journal ArticleDOI
TL;DR: The authors present heuristics for constructing multicast trees for communication that requires bounded end-to-end delay along the paths from source to each destination and minimum cost of the multicast tree, where edge cost and edge delay can be independent metrics.
Abstract: The authors present heuristics for constructing multicast trees for communication that requires bounded end-to-end delay along the paths from source to each destination and minimum cost of the multicast tree, where edge cost and edge delay can be independent metrics. The problem of computing such a constrained multicast tree is NP-complete. It is shown that the heuristics demonstrate good average-case behavior in terms of cost, as determined by simulations on a large number of graphs.

695 citations
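
A naive greedy heuristic in the same spirit as (but much simpler than) the heuristics the abstract refers to: for each destination, pick the cheaper of the min-cost and min-delay paths that meets the delay bound, and merge the chosen paths. The graph and the candidate-path rule are illustrative assumptions, and merging per-destination paths yields a subgraph rather than a provably cheap tree.

    import heapq

    def dijkstra(adj, src, key):
        # adj[u] = list of (v, cost, delay); key(cost, delay) picks the metric to minimize.
        dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, cost, delay in adj.get(u, []):
                nd = d + key(cost, delay)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        return prev

    def extract_path(prev, src, dst):
        p = [dst]
        while p[-1] != src:
            p.append(prev[p[-1]])
        return p[::-1]

    def metrics(adj, p):
        edge = {(u, v): (c, d) for u in adj for v, c, d in adj[u]}
        return (sum(edge[e][0] for e in zip(p, p[1:])),
                sum(edge[e][1] for e in zip(p, p[1:])))

    def constrained_multicast(adj, src, dests, delay_bound):
        # For each destination, keep the cheaper of {min-cost path, min-delay path}
        # among those meeting the delay bound, and merge the edges.
        edges = set()
        for t in dests:
            candidates = []
            for key in (lambda c, d: c, lambda c, d: d):
                p = extract_path(dijkstra(adj, src, key), src, t)
                candidates.append((metrics(adj, p), p))
            feasible = [cp for cp in candidates if cp[0][1] <= delay_bound]
            if not feasible:
                raise ValueError("no delay-feasible path to %s" % t)
            best = min(feasible)[1]
            edges.update(zip(best, best[1:]))
        return edges

    # Tiny illustrative graph: adj[u] = [(v, cost, delay), ...] (directed edges).
    adj = {
        "s": [("a", 1, 4), ("b", 4, 1)],
        "a": [("t", 1, 4)],
        "b": [("t", 4, 1)],
        "t": [],
    }
    print(constrained_multicast(adj, "s", ["t"], delay_bound=3))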


Journal ArticleDOI
TL;DR: The authors show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic and show that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a linear constraint on the number of sources.
Abstract: The authors show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic. More precisely, it is shown that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a linear constraint on the number of sources. That is, for a small loss probability one can assume that each source transmits at a fixed rate called its effective bandwidth. When traffic parameters are known, effective bandwidths can be calculated and may be used to obtain a circuit-switched style call acceptance and routing algorithm for ATM networks. The important feature of the effective bandwidth of a source is that it is a characteristic of that source and the acceptable loss probability only. Thus, the effective bandwidth of a source does not depend on the number of sources sharing the buffer or the model parameters of other types of sources sharing the buffer.

592 citations
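
One standard way to compute such an effective bandwidth for a Markov fluid source (a sketch of the general recipe, not necessarily the exact formulation in this paper): for a source with generator Q and per-state rates r, take the maximal real eigenvalue of Q + delta*diag(r) and divide by the decay rate delta. The on-off source below is illustrative.

    import numpy as np

    def effective_bandwidth(Q, rates, delta):
        # Maximal real eigenvalue of Q + delta*diag(r), divided by delta.
        M = Q + delta * np.diag(rates)
        return max(np.linalg.eigvals(M).real) / delta

    # Illustrative on-off Markov fluid: off->on rate a, on->off rate b, peak rate h.
    a, b, h = 0.5, 1.0, 1.0
    Q = np.array([[-a, a],
                  [b, -b]])
    rates = np.array([0.0, h])

    # Should range from the mean rate (small delta) up toward the peak rate (large delta).
    for delta in (0.1, 1.0, 10.0):
        print(delta, effective_bandwidth(Q, rates, delta))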


Journal ArticleDOI
TL;DR: For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions, and it is shown that this Nash equilibrium point possesses interesting monotonicity properties.
Abstract: The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions.

591 citations
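
A small best-response iteration for the two-node, parallel-links setting the abstract mentions, assuming two users who each split a fixed demand over two links and evaluate an M/M/1-style delay cost (an illustrative cost function, not the paper's model); the split it settles into is a candidate Nash equilibrium.

    def delay(total_flow, capacity):
        # M/M/1-style latency; infinite when the link is saturated.
        return float("inf") if total_flow >= capacity else 1.0 / (capacity - total_flow)

    def best_response(my_demand, other_flow, caps, grid=2000):
        # The user picks its split x on link 0 (and my_demand - x on link 1)
        # to minimize its own flow-weighted delay, given the other user's flows.
        best = None
        for k in range(grid + 1):
            x = my_demand * k / grid
            flows = (x, my_demand - x)
            cost = sum(f * delay(f + o, c)
                       for f, o, c in zip(flows, other_flow, caps) if f > 0)
            if best is None or cost < best[0]:
                best = (cost, flows)
        return best[1]

    caps = (3.0, 2.0)                    # link capacities (illustrative)
    demands = (1.0, 1.5)                 # each user's total flow demand
    flows = [(0.5, 0.5), (0.75, 0.75)]   # initial splits

    for _ in range(50):                  # iterate best responses toward a fixed point
        flows[0] = best_response(demands[0], flows[1], caps)
        flows[1] = best_response(demands[1], flows[0], caps)
    print(flows)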


Journal ArticleDOI
TL;DR: The role of pricing policies in multiple service class networks is studied and it is found that it is possible to set the prices so that users of every application type are more satisfied with the combined cost and performance of a network with service-class-sensitive prices.
Abstract: The role of pricing policies in multiple service class networks is studied. An abstract formulation of service disciplines and pricing policies is presented that allows the interplay between them in determining overall network performance to be described more clearly. Effective multiclass service disciplines allow networks to focus resources on performance-sensitive applications, while effective pricing policies allow the benefits of multiple service classes to be spread around to all users. Furthermore, the incentives formed by service disciplines and pricing policies must be carefully tuned so that user self-interest leads to optimal overall network performance. These concepts are illustrated through simulation of several simple example networks. It is found that it is possible to set the prices so that users of every application type are more satisfied with the combined cost and performance of a network with service-class-sensitive prices.

518 citations


Journal ArticleDOI
TL;DR: Results show that the new algorithms for transmission scheduling in multihop broadcast radio networks perform consistently better than earlier methods, both theoretically and experimentally.
Abstract: Algorithms for transmission scheduling in multihop broadcast radio networks are presented. Both link scheduling and broadcast scheduling are considered. In each instance, scheduling algorithms are given that improve upon existing algorithms both theoretically and experimentally. It is shown that tree networks can be scheduled optimally and that arbitrary networks can be scheduled so that the schedule is bounded by a length that is proportional to a function of the network thickness times the optimum. Previous algorithms could guarantee only that the schedules were bounded by a length no worse than the maximum node degree times optimum. Since the thickness is typically several orders of magnitude less than the maximum node degree, the algorithms presented represent a considerable theoretical improvement. Experimentally, a realistic model of a radio network is given and the performance of the new algorithms is studied. These results show that, for both types of scheduling, the new algorithms (experimentally) perform consistently better than earlier methods.

511 citations
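
A much simpler greedy broadcast-scheduling sketch than the thickness-based algorithms described above, shown only to make the problem concrete: each node gets the earliest TDMA slot not already used within its two-hop neighborhood, so its transmission can be received collision-free by all neighbors. The topology is illustrative.

    def two_hop_neighbors(adj, u):
        # Nodes within two hops of u (excluding u itself).
        one = set(adj[u])
        two = {w for v in one for w in adj[v]}
        return (one | two) - {u}

    def greedy_broadcast_schedule(adj):
        slot = {}
        # Schedule higher-degree nodes first; a common greedy ordering heuristic.
        for u in sorted(adj, key=lambda v: -len(adj[v])):
            taken = {slot[v] for v in two_hop_neighbors(adj, u) if v in slot}
            s = 0
            while s in taken:
                s += 1
            slot[u] = s
        return slot

    # Small illustrative topology (symmetric adjacency lists).
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
    print(greedy_broadcast_schedule(adj))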


Journal ArticleDOI
TL;DR: The results show that, under appropriately selected control gains, a stable (nonoscillatory) operation of store-and-forward packet switching networks with feedback congestion control is possible.
Abstract: The paper addresses a rate-based feedback approach to congestion control in packet switching networks where sources adjust their transmission rate in response to feedback information from the network nodes. Specifically, a controller structure and system architecture are introduced and the analysis of the resulting closed loop system is presented. Conditions for asymptotic stability are derived. A design technique for the controller gains is developed and an illustrative example is considered. The results show that, under appropriately selected control gains, a stable (nonoscillatory) operation of store-and-forward packet switching networks with feedback congestion control is possible.

357 citations
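
A minimal discrete-time sketch of the rate-based feedback idea, assuming a single bottleneck that advertises a rate based on its queue length and a source that smooths toward the delayed feedback; the controller structure and gains here are illustrative stand-ins, not the ones analyzed in the paper.

    def simulate(steps=300, capacity=10.0, beta=0.1, gain=0.5, fb_delay=3):
        # Source starts too fast; the bottleneck advertises a rate based on its queue.
        rate, queue = 15.0, 0.0
        in_flight = [capacity] * fb_delay        # feedback messages on their way back
        for _ in range(steps):
            queue = max(0.0, queue + rate - capacity)
            advertised = max(0.0, capacity - beta * queue)
            in_flight.append(advertised)
            feedback = in_flight.pop(0)          # feedback arrives after a delay
            rate = (1.0 - gain) * rate + gain * feedback
        return rate, queue

    print(simulate())   # rate should settle near the bottleneck capacity, queue near zero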


Journal ArticleDOI
TL;DR: A family of distributed algorithms for the dynamic computation of the shortest paths in a computer network or internet is presented, validated, and analyzed, and these algorithms are shown to converge in finite time after an arbitrary sequence of link cost or topological changes.
Abstract: A family of distributed algorithms for the dynamic computation of the shortest paths in a computer network or internet is presented, validated, and analyzed. According to these algorithms, each node maintains a vector with its distance to every other node. Update messages from a node are sent only to its neighbors; each such message contains a distance vector of one or more entries, and each entry specifies the length of the selected path to a network destination, as well as an indication of whether the entry constitutes an update, a query, or a reply to a previous query. The new algorithms treat the problem of distributed shortest-path routing as one of diffusing computations, which was first proposed by Dijkstra and Scholten (1980). They improve on a number of algorithms introduced previously. The new algorithms are shown to converge in finite time after an arbitrary sequence of link cost or topological changes, to be loop-free at every instant, and to outperform all other loop-free routing algorithms previously proposed from the standpoint of the combined temporal, message, and storage complexities.

302 citations
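
For contrast, a toy synchronous distance-vector (distributed Bellman-Ford) update of the kind these algorithms improve upon; DUAL's feasibility conditions, queries, and replies, which are what provide loop freedom, are not shown. The topology and costs are illustrative.

    def distance_vector(links, rounds=10):
        # links: dict {(u, v): cost}; assumed symmetric for this example.
        nodes = {u for e in links for u in e}
        neighbors = {u: [v for (a, v) in links if a == u] for u in nodes}
        dist = {u: {u: 0.0} for u in nodes}           # each node's distance vector
        for _ in range(rounds):
            # Each node recomputes its vector from the vectors its neighbors advertise.
            new = {}
            for u in nodes:
                vec = {u: 0.0}
                for v in neighbors[u]:
                    for dest, d in dist[v].items():
                        cand = links[(u, v)] + d
                        if cand < vec.get(dest, float("inf")):
                            vec[dest] = cand
                new[u] = vec
            dist = new
        return dist

    links = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 2.0, (2, 1): 2.0, (0, 2): 5.0, (2, 0): 5.0}
    print(distance_vector(links)[0])   # node 0's distances: {0: 0.0, 1: 1.0, 2: 3.0}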


Journal ArticleDOI
TL;DR: A method for evaluating the performance of packet switching communication networks under a fixed, session-based, routing strategy is proposed, based on properly bounding the probability distribution functions of the system input processes.
Abstract: A method for evaluating the performance of packet switching communication networks under a fixed, session-based, routing strategy is proposed. The approach is based on properly bounding the probability distribution functions of the system input processes. The suggested bounds, which are decaying exponentials, possess three convenient properties. When the inputs to an isolated network element are all bounded, they result in bounded outputs and assure that the delays and queues in this element have exponentially decaying distributions. In some network settings, bounded inputs result in bounded outputs. Natural traffic processes can be shown to satisfy such bounds. Consequently, this method enables the analysis of various previously intractable setups. Sufficient conditions are provided for the stability of such networks, and upper bounds on network performance parameters are derived.

Journal ArticleDOI
Limin Hu
TL;DR: Simulations based on well-controlled topologies (sparse topologies) show that the pairwise code-assignment scheme requires far fewer codes than transmitter-based code assignment, while maintaining similar throughput performance.
Abstract: Code-division multi-access (CDMA) techniques allow many users to transmit simultaneously in the same band without substantial interference by using approximately orthogonal (low cross-correlation) spread-spectrum waveforms. Two-phase algorithms have been devised to assign and reassign spread-spectrum codes to transmitters, to receivers, and to pairs of stations in a large dynamic packet radio network in polynomial time. The purpose of the code assignments is to spatially reuse spreading codes to reduce the possibility of packet collisions and to react dynamically to topological changes. These two-phase algorithms minimize the time complexity in the first phase and minimize the number of control packets needed to be exchanged in the second phase. Therefore, they can start the network operation in a short time, then switch to the second phase with the goal of adapting to topological changes. A pairwise code-assignment scheme is proposed to assign codes to edges. Simulations based on well-controlled topologies (sparse topologies) show that the scheme requires far fewer codes than transmitter-based code assignment, while maintaining similar throughput performance.

Journal ArticleDOI
TL;DR: A model based on arrival rate histograms for characterizing the behavior of an ATM buffer when it is carrying variable bit rate video traffic indicates that while the presence of strong correlations is an important characteristic of video traffic, the actual form of that correlation is not.
Abstract: The authors introduce a model based on arrival rate histograms for characterizing the behavior of an ATM buffer when it is carrying variable bit rate video traffic. Traffic smoothing on a frame-by-frame basis allows a quasistatic approximation that accurately predicts results such as buffer occupancy distributions and cell loss rates. Convolving individual source histograms allows prediction of the queueing performance of a multiplexed stream. The approximation is investigated in more detail by modeling video as a Markov modulated Poisson process. It is shown that the multiplexer system is nearly completely decomposable (NCD). NCD systems have a well-known approximate solution, which is identical to the histogram approximation. Error bounds for the NCD approximation are also known and are reasonably tight. Results indicate that while the presence of strong correlations is an important characteristic of video traffic, the actual form of that correlation is not.
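
A small sketch of the histogram step described above: convolving per-source arrival-rate histograms gives the aggregate rate distribution, from which the probability that the aggregate rate exceeds the service rate can be read off (a crude stand-in for the full buffer analysis). The histograms, bin sizes, and service rate are illustrative.

    import numpy as np

    # Per-source histograms over arrival-rate bins (units/frame); index k = k units/frame.
    # Probabilities are made-up illustrative values summing to 1.
    source_a = np.array([0.2, 0.5, 0.3])          # mostly 1 unit/frame
    source_b = np.array([0.1, 0.3, 0.4, 0.2])     # burstier

    # The rate histogram of the multiplexed stream is the convolution of the sources'.
    aggregate = np.convolve(source_a, source_b)
    for _ in range(8):                            # multiplex eight more copies of source_b
        aggregate = np.convolve(aggregate, source_b)

    service_rate = 20                             # units/frame the multiplexer can drain
    overload_prob = aggregate[service_rate + 1:].sum()
    print("P(aggregate rate > service rate) ~", overload_prob)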

Journal ArticleDOI
TL;DR: A reduced load approximation for estimating point-to-point blocking probabilities in loss networks (e.g., circuit switched networks) with state-dependent routing is considered and results for six-node and 36-node asymmetric networks are given.
Abstract: A reduced load approximation (also referred to as an Erlang fixed point approximation) for estimating point-to-point blocking probabilities in loss networks (e.g., circuit switched networks) with state-dependent routing is considered. In this approximation scheme, the idle capacity distribution for each link in the network is approximated, assuming that these distributions are independent from link to link. This leads to a set of nonlinear fixed-point equations which can be solved by repeated substitutions. The accuracy and the computational requirements of the approximation procedure for a particular routing scheme, namely least loaded routing, are examined. Numerical results for six-node and 36-node asymmetric networks are given. A novel reduced load approximation for multirate networks with state-dependent routing is also presented.
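
To show the flavor of the repeated-substitution step, here is a minimal reduced-load sketch for fixed (not state-dependent) routing: per-link blocking probabilities are computed from the Erlang B formula applied to thinned offered loads and iterated to a fixed point. The paper's scheme additionally models idle-capacity distributions and least-loaded routing; the loads and capacities below are illustrative.

    def erlang_b(load, circuits):
        # Erlang B blocking probability via the standard recursion.
        b = 1.0
        for k in range(1, circuits + 1):
            b = load * b / (k + load * b)
        return b

    def reduced_load(routes, capacity, iters=100):
        # routes: list of (offered_load, [link, ...]); capacity: circuits per link.
        B = {l: 0.0 for l in capacity}                 # per-link blocking estimates
        for _ in range(iters):
            load = {l: 0.0 for l in capacity}
            for a, links in routes:
                for l in links:
                    # Offered load thinned by blocking on the route's other links.
                    thin = a
                    for m in links:
                        if m != l:
                            thin *= (1.0 - B[m])
                    load[l] += thin
            B = {l: erlang_b(load[l], capacity[l]) for l in capacity}
        return B

    capacity = {"ab": 10, "bc": 10}
    routes = [(6.0, ["ab"]), (5.0, ["bc"]), (2.0, ["ab", "bc"])]
    print(reduced_load(routes, capacity))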

Journal ArticleDOI
TL;DR: The authors have developed intermedia synchronization techniques for multimedia on-demand retrieval over integrated networks in the absence of global clocks and present strategies by which the multimedia server can adaptively control the feedback transmission rate from each mediaphone to minimize the associated overheads.
Abstract: The authors have developed intermedia synchronization techniques for multimedia on-demand retrieval over integrated networks in the absence of global clocks. In these techniques, multimedia servers use lightweight messages called feedback units transmitted by media display sites (such as audiophones and videophones, generically referred to as mediaphones) to detect asynchronies among those sites. They present strategies by which the multimedia server can adaptively control the feedback transmission rate from each mediaphone, so as to minimize the associated overheads without permitting the asynchrony to exceed tolerable limits. They compare the performance of various resynchronization policies such as conservative, aggressive, and probabilistic. Performance evaluation of the feedback techniques indicates that their overheads are negligible; for a typical audio/video playback environment, the feedback frequency was about one in a hundred.

Journal ArticleDOI
TL;DR: The authors explore a new concept of spectral characterization of wide-band input processes in high-speed networks, and use elements of DC, sinusoidal, rectangular pulse, triangle pulse, and their superpositions to represent various input correlation properties.
Abstract: The authors explore a new concept of spectral characterization of wide-band input processes in high-speed networks. It helps them to localize wide-band sources in a subspace, especially in the low-frequency band, which has a dominant impact on queueing performance. They choose simple periodic chains for the input rate process construction. Analogous to input functions in signal processing, they use elements of DC, sinusoidal, rectangular pulse, triangle pulse, and their superpositions, to represent various input correlation properties. The corresponding input power spectrum is defined in the discrete-frequency domain. In principle, a continuous spectral function of a stationary random input process can be asymptotically approached by its discrete version as one sufficiently reduces the discrete-frequency intervals. An understanding of the queue response to the input spectrum will provide a great deal of knowledge to develop advanced network traffic measurement theory, and help to introduce effective network resource allocation policies. The new relation between queue length and input spectrum is a fruitful starting point for further research.

Journal ArticleDOI
TL;DR: A widely-applicable technique for integrating protocols that not only improves performance, but also preserves the modularity of protocol layers by automatically integrating independently expressed protocols is introduced.
Abstract: Integrating protocol data manipulations is a strategy for increasing the throughput of network protocols. The idea is to combine a series of protocol layers into a pipeline so as to access message data more efficiently. This paper introduces a widely applicable technique for integrating protocols. This technique not only improves performance, but also preserves the modularity of protocol layers by automatically integrating independently expressed protocols. The paper also describes a prototype integration tool, and studies the performance limits and scalability of protocol integration.

Journal ArticleDOI
TL;DR: A unique way to understand the effect of second- and higher-order input statistics on queues is offered, and new concepts of traffic measurement, network control, and resource allocation are developed for high-speed networks in the frequency domain.
Abstract: Queueing performance in a richer, heterogeneous input environment is studied. A unique way to understand the effect of second- and higher-order input statistics on queues is offered, and new concepts of traffic measurement, network control, and resource allocation are developed for high-speed networks in the frequency domain. The technique applies to the analysis of queue response to the individual effects of input power spectrum, bispectrum, trispectrum, and input-rate steady-state distribution. The study provides clear evidence that of the four input statistics, the input power spectrum is most essential to queueing analysis. Furthermore, input power in the low-frequency band has a dominant impact on queueing performance, whereas high-frequency power to a large extent can be neglected.

Journal ArticleDOI
TL;DR: The authors derive the network's adjustment scheme and the users' decision rule and establish their optimality; the solution leads to an alternative approach to service provisioning in an ATM network, in which the network offers its bandwidth and buffers directly for rent and users freely purchase resources to meet their desired quality.
Abstract: The authors formulate and solve a problem of allocating resources among competing services differentiated by user traffic characteristics and maximum end-to-end delay. The solution leads to an alternative approach to service provisioning in an ATM network, in which the network offers its bandwidth and buffers directly for rent and users freely purchase resources to meet their desired quality. Users make their decisions based on their own traffic parameters and delay requirements, and the network sets prices for those resources. The procedure is iterative in that the network periodically adjusts prices based on monitored user demand, and is decentralized in that only local information is needed for individual users to determine resource requests. The authors derive the network's adjustment scheme and the users' decision rule and establish their optimality. Since the approach does not require the network to know user traffic and delay parameters, it does not require traffic policing on the part of the network.
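
A sketch of the price-adjustment loop the abstract describes, with a made-up aggregate demand function standing in for the users' decision rules: the network raises the price of a resource when monitored demand exceeds its supply and lowers it otherwise. Names, demand curves, and step sizes are illustrative assumptions.

    def adjust_prices(supply, demand_fn, steps=200, step_size=0.05):
        # supply: {resource: capacity}; demand_fn(prices) -> {resource: total demand}.
        prices = {r: 1.0 for r in supply}
        for _ in range(steps):
            demand = demand_fn(prices)
            for r in supply:
                # Raise the price when demand exceeds capacity, lower it otherwise.
                prices[r] = max(0.0, prices[r] + step_size * (demand[r] - supply[r]))
        return prices

    # Illustrative aggregate demand: users buy less bandwidth/buffer as prices rise.
    def demand(prices):
        return {"bandwidth": 30.0 / (1.0 + prices["bandwidth"]),
                "buffer": 12.0 / (1.0 + prices["buffer"])}

    supply = {"bandwidth": 10.0, "buffer": 8.0}
    print(adjust_prices(supply, demand))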

Journal ArticleDOI
TL;DR: The authors motivate the need for protocol implementations as user-level libraries, place their approach in the context of previous work, and describe an implementation on Mach workstations connected not only to traditional Ethernet but also to a more modern network, the DEC SRC AN1.
Abstract: Traditionally, network software has been structured in a monolithic fashion with all protocol stacks executing either within the kernel or in a single trusted user-level server. This organization is motivated by performance and security concerns. However, considerations of code maintenance, ease of debugging, customization, and the simultaneous existence of multiple protocols argue for separating the implementations into more manageable user-level libraries of protocols. The present paper describes the design and implementation of transport protocols as user-level libraries. The authors begin by motivating the need for protocol implementations as user-level libraries and placing their approach in the context of previous work. They then describe their alternative to monolithic protocol organization, which has been implemented on Mach workstations connected not only to traditional Ethernet, but also to a more modern network, the DEC SRC AN1. Based on the authors' experience, they discuss the implications for host-network interface design and for overall system structure to support efficient user-level implementations of network protocols.

Journal ArticleDOI
TL;DR: The interaction of congestion control with the partitioning of source information into components of varying importance for variable-bit-rate packet voice and packet video is investigated, along with variations such as further partitioning of low-priority information into multiple priorities.
Abstract: The interaction of congestion control with the partitioning of source information into components of varying importance for variable-bit-rate packet voice and packet video is investigated. High-priority transport for the more important signal components results in substantially increased objective service quality. Using a Markov chain voice source model with simple PCM speech encoding and a priority queue, simulation results show a signal-to-noise ratio improvement of 45 dB with two priorities over an unprioritized system. Performance is sensitive to the fraction of traffic placed in each priority, and the optimal partition depends on network loss conditions. When this partition is optimized dynamically, quality degrades gracefully over a wide range of load values. Results with DCT-encoded speech and video samples show similar behavior. Variations are investigated, such as further partitioning of the low-priority information into multiple priorities. A simulation with delay added to represent other network nodes shows general insensitivity to delay of network feedback information. A comparison is made between dropping packets on buffer overflow and timeout based on service requirements.

Journal ArticleDOI
TL;DR: A methodology that uses IS dynamically within each regeneration cycle to drive the system back to the regeneration state after an accurate estimate has been obtained is discussed, and a statistically based technique for optimizing IS parameter values for simulations of queueing systems, including complex systems with bursty arrival processes, is formulated.
Abstract: Importance sampling (IS) is a powerful method for reducing simulation run times when estimating the probabilities of rare events in communication systems using Monte Carlo simulation and is made feasible and effective for the simulation of networks of queues by regenerative techniques. However, using the most favorable IS settings very often makes the length of regeneration cycles infinite or impractically long. To address this problem, a methodology that uses IS dynamically within each regeneration cycle to drive the system back to the regeneration state after an accurate estimate has been obtained is discussed. A statistically based technique for optimizing IS parameter values for simulations of queueing systems, including complex systems with bursty arrival processes, is formulated. A deterministic variant of stochastic simulated annealing (SA), called mean field annealing (MFA), is used to minimize statistical estimates of the IS estimator variance. The technique is demonstrated by evaluating blocking probabilities.
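
A self-contained reminder of the basic importance-sampling idea the paper builds on (not its regenerative, annealing-tuned scheme): sample from a tilted distribution under which the rare event is common and reweight by the likelihood ratio. The rare event here is an exponential exceeding a large threshold, so the exact answer is available for checking.

    import math, random

    def is_estimate(threshold, n=100_000, rate=1.0, tilted_rate=0.1):
        # Estimate P(X > threshold) for X ~ Exp(rate) by sampling X from Exp(tilted_rate)
        # and weighting each qualifying sample by the likelihood ratio f(x)/g(x).
        total = 0.0
        for _ in range(n):
            x = random.expovariate(tilted_rate)
            if x > threshold:
                total += (rate * math.exp(-rate * x)) / (tilted_rate * math.exp(-tilted_rate * x))
        return total / n

    t = 15.0
    print("IS estimate:", is_estimate(t))
    print("exact      :", math.exp(-t))   # ~3.1e-7; naive Monte Carlo would need ~1e8 samples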

Journal ArticleDOI
Matthias Kaiserswerth
TL;DR: The design and implementation of a multiprocessor-based communication subsystem called the Parallel Protocol Engine (PPE), for the parallel and pipelined execution of protocols, are described and the problems concerning the soft- and hardware interface to the PPE are presented.
Abstract: The design and implementation of a multiprocessor-based communication subsystem called the Parallel Protocol Engine (PPE), for the parallel and pipelined execution of protocols, are described. As an example, an implementation of the ISO 8802-2.2 logical link control on a four-processor version of the PPE is presented and analyzed. The performance of the implementation is more than 16,000 type-2 information protocol data units/s, commensurate with emerging high-speed networks that operate in the 100-Mb/s range. The overall end-to-end performance of an integrated system, where two workstations, each equipped with a PPE, communicate over a high-speed link, is analyzed, and the problems concerning the software and hardware interface to the PPE are presented.

Journal ArticleDOI
TL;DR: The authors describe the successful optimizations that were done, along with measurements that show a UDP performance improvement of 25% to 35% on CISC and RISC systems, and an overall kernel improvement of between 12% and 18%.
Abstract: As an experiment in protocol optimizations, the authors undertook to improve the performance of a stateless protocol, namely the user datagram protocol (UDP) in the 4.3 BSD Unix kernel. The authors describe the successful optimizations that were done, along with measurements that show a UDP performance improvement of between 25% and 35% on CISC and RISC systems, and an overall kernel improvement of between 12% and 18%.

Journal ArticleDOI
TL;DR: The authors reexamine connection establishment in the context of a high-speed packet network, introduce a protocol for connection establishment/takedown that is appropriate for such a network, and explain its advantages over previously proposed protocols.
Abstract: Protocols for establishing, maintaining, and terminating connections in packet-switched networks have been studied, and numerous standards have been developed to address this problem. The authors reexamine connection establishment in the context of a high-speed packet network, introduce a protocol for connection establishment/takedown that is appropriate for such a network, and explain its advantages over previously proposed protocols. The main features of the proposed protocol are: fast bandwidth reservation in order to avoid reservation conflicts as much as possible, guaranteed release of the reserved bandwidth even under node and link failures, and soft recovery from processor failures, which allows the maintenance of existing connections under processor failure provided the switch and links do not fail. The underlying model that is used is the PARIS/plaNET network, but the protocol can be adapted to other fast packet networking architectures as well.

Journal ArticleDOI
TL;DR: In order to support applications, such as teleorchestra, that involve a large number of participants, hierarchical mixing architectures are proposed, and it is shown that they are an order of magnitude more scalable than purely centralized or distributed architectures.
Abstract: The problem of media mixing that arises in teleconferencing applications such as teleorchestra is addressed. The mixing algorithm presented minimizes the difference between generation times of the media packets that are being mixed together in the absence of globally synchronized clocks, but in the presence of jitter in communication delays on packet switched networks. In order to support applications, such as teleorchestra, that involve a large number of participants, hierarchical mixing architectures are proposed, and it is shown that they are an order of magnitude more scalable than purely centralized or distributed architectures. Furthermore, mechanisms for minimizing the delays incurred by mixing in various communication architectures are presented. The mixing algorithms are implemented on a network of workstations connected by Ethernets, and the performance of various mixing architectures is experimentally evaluated. The results reveal the maximum number of participants that can be supported in a conference.

Journal ArticleDOI
TL;DR: An architecture for the copy network that is an integral part of multicast ATM switches that makes use of the property that the broadcast banyan network (BBN) is nonblocking if the active inputs are cyclically concentrated and the outputs are monotone is described.
Abstract: An architecture for the copy network that is an integral part of multicast ATM switches is described. The architecture makes use of the property that the broadcast banyan network (BBN) is nonblocking if the active inputs are cyclically concentrated and the outputs are monotone. In the architecture, by employing a token ring reservation scheme, the outputs of the copy network are reserved before a multicast cell is replicated. By the copy principle, the number of copies requested by a multicast call is not limited by the size of the copy network so that very large multicast switches can be configured in a modular fashion. The sequence of cells is preserved in the structure. Though physically separated, buffers within the copy network are completely shared, so that the throughput can reach 100%, and the cell delay and the cell loss probability can be made to be very small. The cell delay is estimated analytically and by computer simulation, and the results of both are found to agree with each other. The relationship between the cell loss probability under various traffic parameters and buffer sizes is studied by computer simulation.

Journal ArticleDOI
TL;DR: Morpheus optimization techniques reduce per-layer overhead on time-critical operations to a few assembler instructions even though the protocol stack is not determined until run time, supporting divide-and-conquer simplification of the programming task by minimizing the penalty for decomposing complex protocols into combinations of simpler protocols.
Abstract: Morpheus is a special-purpose programming language that facilitates the efficient implementation of communication protocols. Protocols are divided into three categories, called shapes, so that they can inherit code and data structures based on their category. The programmer implements a particular protocol by refining the inherited structure. Morpheus optimization techniques reduce per-layer overhead on time-critical operations to a few assembler instructions even though the protocol stack is not determined until run time. This supports divide-and-conquer simplification of the programming task by minimizing the penalty for decomposing complex protocols into combinations of simpler protocols.

Journal ArticleDOI
TL;DR: The use of a set of independent observers to detect faults in communication systems that are modeled by finite-state machines is proposed and has the potential to be incorporated into the fault management system for a high-speed communication system.
Abstract: There is a pressing need for network management systems capable of handling faults. The use of a set of independent observers to detect faults in communication systems that are modeled by finite-state machines is proposed. An algorithm for constructing these observers and a fast real-time fault detection mechanism used by each observer are given. Since these observers run in parallel and independently, one immediate benefit is that of graceful degradation: one failed observer will not cause collapse of the fault management system. In addition, each observer has a simpler structure than the original system and can be operated at higher speed. This approach has the potential to be incorporated into the fault management system for a high-speed communication system.

Journal ArticleDOI
Duan-Shin Lee, Bhaskar Sengupta
TL;DR: A threshold-based priority scheme in which a tuning parameter is used to provide adequate quality of service to real-time traffic while providing the best possible service to the non-real- time traffic is proposed.
Abstract: A threshold-based priority scheme in which a tuning parameter is used to provide adequate quality of service to real-time traffic while providing the best possible service to the non-real-time traffic is proposed. The priority scheme is a generalization of the static priority scheme and the one-limited scheme and is more flexible than both. For this scheme, the authors carry out a queueing analysis and obtain the joint distribution of the queue-lengths. They show by numerical examples how the parameter of this scheme can be tuned dynamically, so that the tuning function can be integrated with the call admission policy. >
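
A sketch of how such a threshold rule might look in a slot-by-slot server (illustrative; the scheme analyzed in the paper is more general): real-time traffic gets the slot when its backlog exceeds the threshold T or nothing else is waiting, otherwise non-real-time traffic is served; T = 0 reduces to static priority, and larger T gives non-real-time traffic more service.

    from collections import deque

    def serve_slot(rt_queue, nrt_queue, threshold):
        # Give the slot to real-time traffic when its backlog exceeds the threshold
        # (or there is nothing else to send); otherwise serve non-real-time traffic.
        if rt_queue and (len(rt_queue) > threshold or not nrt_queue):
            return rt_queue.popleft()
        if nrt_queue:
            return nrt_queue.popleft()
        return None

    rt, nrt = deque(["r1", "r2"]), deque(["d1", "d2", "d3"])
    for _ in range(5):
        print(serve_slot(rt, nrt, threshold=1))   # expected order: r1, d1, d2, d3, r2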