Showing papers in "IEEE ACM Transactions on Networking in 1994"
••
TL;DR: It is demonstrated that Ethernet LAN traffic is statistically self-similar, that none of the commonly used traffic models is able to capture this fractal-like behavior, and that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks.
Abstract: Demonstrates that Ethernet LAN traffic is statistically self-similar, that none of the commonly used traffic models is able to capture this fractal-like behavior, that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks, and that aggregating streams of such traffic typically intensifies the self-similarity ("burstiness") instead of smoothing it. These conclusions are supported by a rigorous statistical analysis of hundreds of millions of high quality Ethernet traffic measurements collected between 1989 and 1992, coupled with a discussion of the underlying mathematical and statistical properties of self-similarity and their relationship with actual network behavior. The authors also present traffic models based on self-similar stochastic processes that provide simple, accurate, and realistic descriptions of traffic scenarios expected during B-ISDN deployment.
5,567 citations
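The fractal-like behavior the paper measures is often checked with the variance-time method: aggregate the trace over blocks of size m and watch how the variance of the aggregated series decays with m. A minimal, illustrative sketch (not the paper's actual analysis pipeline; the aggregation levels and sample size are arbitrary choices):

```python
import math, random

def aggregated_variance(series, levels):
    """Variance of the m-aggregated series for each block size m."""
    out = []
    for m in levels:
        blocks = [sum(series[i:i + m]) / m
                  for i in range(0, len(series) - m + 1, m)]
        mean = sum(blocks) / len(blocks)
        out.append(sum((b - mean) ** 2 for b in blocks) / len(blocks))
    return out

def hurst_estimate(series, levels=(1, 2, 4, 8, 16, 32)):
    """Least-squares slope of log var(m) vs log m; H = 1 + slope/2."""
    xs = [math.log(m) for m in levels]
    ys = [math.log(v) for v in aggregated_variance(series, levels)]
    n, sx, sy = len(xs), sum(xs), sum(ys)
    slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
            (n * sum(x * x for x in xs) - sx * sx)
    return 1 + slope / 2

random.seed(1)
iid = [random.random() for _ in range(4096)]
print(round(hurst_estimate(iid), 2))  # near 0.5 for short-range-dependent traffic
```

For self-similar traffic with long-range dependence the variance decays more slowly than 1/m, the slope is shallower than -1, and H comes out well above 0.5; the i.i.d. series here lands near 0.5.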
••
TL;DR: In this paper, the authors analyzed 3 million TCP connections that occurred during 15 wide-area traffic traces, collected at five "stub" networks and two internetwork gateways, providing a diverse look at wide area traffic.
Abstract: Analyzes 3 million TCP connections that occurred during 15 wide-area traffic traces. The traces were gathered at five "stub" networks and two internetwork gateways, providing a diverse look at wide-area traffic. The author derives analytic models describing the random variables associated with TELNET, NNTP, SMTP, and FTP connections. To assess these models, the author presents a quantitative methodology for comparing their effectiveness with that of empirical models such as Tcplib [Danzig and Jamin, 1991]. The methodology also allows one to determine which random variables show significant variation from site to site, over time, or between stub networks and internetwork gateways. Overall, the author finds that the analytic models provide good descriptions, and generally model the various distributions as well as empirical models.
582 citations
••
TL;DR: A robust scheduling protocol is proposed which is unique in providing a topology transparent solution to scheduled access in multi-hop mobile radio networks and remains robust in the presence of mobile nodes.
Abstract: Transmission scheduling is a key design problem in packet radio networks, relevant to TDMA and CDMA systems. A large number of topology-dependent scheduling algorithms are available, in which changes of topology inevitably require recomputation of transmission schedules. The need for constant adaptation of schedules to mobile topologies entails significant, sometimes insurmountable, problems: the protocol overhead of schedule recomputation, the performance penalty of suspending transmissions during schedule reorganization, and the exchange of control messages and new schedule broadcasts. Furthermore, if the topology changes faster than the rate at which new schedules can be recomputed and distributed, the network can suffer a catastrophic failure. The authors propose a robust scheduling protocol which is unique in providing a topology transparent solution to scheduled access in multi-hop mobile radio networks. The proposed solution adds the main advantages of random access protocols to scheduled access. Like random access, it is robust in the presence of mobile nodes. Unlike random access, however, it does not suffer from inherent instability and performance deterioration due to packet collisions. Unlike current scheduled access protocols, the transmission schedules of the proposed solution are independent of topology changes, and channel access is inherently fair and traffic adaptive.
385 citations
••
TL;DR: The authors study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization, and show that synchronization can be avoided by the addition of randomization to the traffic sources and quantify how much randomization is necessary.
Abstract: The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertently become synchronized. In particular, the authors study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization. Using simulations and analysis, they study the process of synchronization and show that the transition from unsynchronized to synchronized traffic is not one of gradual degradation but is instead a very abrupt 'phase transition': in general, the addition of a single router will convert a completely unsynchronized traffic stream into a completely synchronized one. They show that synchronization can be avoided by the addition of randomization to the traffic sources and quantify how much randomization is necessary. In addition, they argue that the inadvertent synchronization of periodic processes is likely to become an increasing problem in computer networks.
265 citations
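The randomization remedy can be as small as jittering each timer instead of firing on a fixed period. A sketch of the idea (the 50% spread is an illustrative choice, not the paper's recommended amount of randomization):

```python
import random

def jittered_intervals(base_period, n, spread=0.5, seed=0):
    """Draw each routing-update interval uniformly from
    [(1 - spread) * T, (1 + spread) * T] instead of using exactly T,
    which breaks the phase-locking that couples neighboring routers."""
    rng = random.Random(seed)
    return [base_period * (1 + rng.uniform(-spread, spread)) for _ in range(n)]

intervals = jittered_intervals(30.0, 1000)
print(min(intervals) >= 15.0, max(intervals) <= 45.0)  # True True
```

The mean interval stays at the nominal period, so the average update rate is unchanged; only the phase relationship between routers is randomized.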
••
TL;DR: De Bruijn graphs are proposed as logical topologies for multihop lightwave networks and as physical topologies for wavelength routing networks consisting of all-optical routing nodes interconnected by point-to-point fiber links; a physical topology based on a de Bruijn graph can support a large number of stations using a relatively small number of wavelengths.
Abstract: Proposes de Bruijn graphs as logical topologies for multihop lightwave networks. After deriving bounds on the throughput and delay performance of any logical topology, the authors compute the throughput and delay performance of de Bruijn graphs for two different routing schemes and compare it with their bounds and the performance of shufflenets. For a given maximum nodal in- and out-degree and average number of hops between stations, a logical topology based on a de Bruijn graph can support a larger number of stations than a shufflenet, and this number is close to the maximum that can be supported by any topology. The authors also propose de Bruijn graphs as good physical topologies for wavelength routing lightwave networks consisting of all-optical routing nodes interconnected by point-to-point fiber links. The worst-case loss experienced by a transmission is proportional to the maximum number of hops (diameter). For a given maximum nodal in- and out-degree and diameter, a physical topology based on a de Bruijn graph can support a large number of stations using a relatively small number of wavelengths.
174 citations
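The topology in question is easy to generate: in a de Bruijn graph B(d, D) with d^D nodes, node n links to (n*d + a) mod d^D for each symbol a, giving in- and out-degree d and diameter D. A small sketch (the integer node numbering is one common convention):

```python
def de_bruijn_edges(d, D):
    """Directed edges of the de Bruijn graph B(d, D): node n links to
    (n*d + a) mod d**D for a = 0..d-1, so every node has in- and
    out-degree d and any node is reachable in at most D hops."""
    N = d ** D
    return [(n, (n * d + a) % N) for n in range(N) for a in range(d)]

edges = de_bruijn_edges(2, 3)   # 8 nodes, out-degree 2
print(len(edges))  # 16
```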
••
TL;DR: In this article, the authors formulate and solve a problem of allocating resources among competing services differentiated by user traffic characteristics and maximum end-to-end delay, and derive the network's adjustment scheme and the users' decision rule and establish their optimality.
Abstract: The authors formulate and solve a problem of allocating resources among competing services differentiated by user traffic characteristics and maximum end-to-end delay. The solution leads to an alternative approach to service provisioning in an ATM network, in which the network directly offers its bandwidth and buffers for rent and users freely purchase resources to meet their desired quality. Users make their decisions based on their own traffic parameters and delay requirements, and the network sets prices for those resources. The procedure is iterative in that the network periodically adjusts prices based on monitored user demand, and decentralized in that only local information is needed for individual users to determine resource requests. The authors derive the network's adjustment scheme and the users' decision rule and establish their optimality. Since the approach does not require the network to know user traffic and delay parameters, it does not require traffic policing on the part of the network.
145 citations
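The iterative price adjustment can be pictured as a standard tatonnement step; this is a generic sketch of that idea, not the paper's derived update rule (the linear form and step size are assumptions for illustration):

```python
def adjust_prices(prices, demand, capacity, step=0.1):
    """One round of a decentralized price update: raise the price of
    any resource whose monitored demand exceeds its capacity, lower it
    otherwise, and never let a price go below zero."""
    return [max(0.0, p + step * (d - c))
            for p, d, c in zip(prices, demand, capacity)]

p = adjust_prices([1.0, 1.0], demand=[12.0, 7.0], capacity=[10.0, 10.0])
print([round(x, 2) for x in p])  # [1.2, 0.7]
```

Users react to the new prices with new resource requests, the network monitors the resulting demand, and the loop repeats; only locally observable quantities enter the update.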
••
TL;DR: A new cutoff priority cellular radio system that allows finite queueing of both new and handoff calls is investigated, considering the reneging of queued new calls due to caller impatience and the dropping of queued handoff calls that move out of the handoff area before the handoff completes.
Abstract: Queueing of new or handoff calls can minimize blocking probabilities or increase total carried traffic. This paper investigates a new cutoff priority cellular radio system that allows finite queueing of both new and handoff calls. We consider the reneging from the system of queued new calls due to caller impatience, and the dropping of queued handoff calls by the system as they move out of a handoff area before the handoff is successfully completed. We use signal-flow graphs and Mason's formula to obtain the blocking probabilities of new and handoff calls and the average waiting times. Moreover, an optimal cutoff parameter and appropriate queue sizes for new and handoff calls are numerically determined so that a proposed overall blocking probability is minimized.
140 citations
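Stripped of the queueing the paper adds, the cutoff priority idea reduces to a birth-death chain whose arrival rate drops once the cutoff is reached. A baseline sketch under exponential holding-time assumptions (all parameter values are illustrative, and this omits the paper's queues, reneging, and dropping):

```python
def cutoff_priority_blocking(lam_new, lam_h, mu, C, cutoff):
    """Stationary blocking probabilities of a cutoff-priority cell with
    no queueing: new calls are admitted only while fewer than `cutoff`
    channels are busy, handoffs while fewer than C are busy."""
    # Unnormalized birth-death probabilities p[k] for k busy channels.
    p = [1.0]
    for k in range(C):
        rate = (lam_new + lam_h) if k < cutoff else lam_h
        p.append(p[-1] * rate / ((k + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    b_new = sum(p[cutoff:])   # new call blocked at or above the cutoff
    b_handoff = p[C]          # handoff blocked only when all C are busy
    return b_new, b_handoff

bn, bh = cutoff_priority_blocking(lam_new=8.0, lam_h=2.0, mu=1.0, C=12, cutoff=10)
print(bn > bh)  # True: handoffs see lower blocking than new calls
```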
••
TL;DR: It is demonstrated that the 2DRR scheduler can be implemented using simple logic components, thereby allowing a very high-speed implementation, and its performance is compared with that of the input and output queueing configurations, showing that the scheme achieves the same saturation throughput as output queueing.
Abstract: Presents a new scheduler, the two-dimensional round-robin (2DRR) scheduler, that provides high throughput and fair access in a packet switch that uses multiple input queues. We consider an architecture in which each input port maintains a separate queue for each output. In an N/spl times/N switch, our scheduler determines which of the queues in the total of N/sup 2/ input queues are served during each time slot. We demonstrate the fairness properties of the 2DRR scheduler and compare its performance with that of the input and output queueing configurations, showing that our scheme achieves the same saturation throughput as output queueing. The 2DRR scheduler can be implemented using simple logic components, thereby allowing a very high-speed implementation.
138 citations
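The core observation behind 2DRR is that each diagonal of the N x N request matrix touches distinct inputs and distinct outputs, so granting along a diagonal is conflict-free. A simplified one-slot sketch (the actual scheduler's pattern sequence for rotating diagonals is more elaborate than this plain rotation):

```python
def two_d_round_robin(requests, slot):
    """One slot of a simplified 2DRR pass: scan the N diagonals of the
    request matrix, granting a cell wherever a queue is non-empty and
    both the input and the output are still free; the scan order
    rotates with the slot number for fairness."""
    N = len(requests)
    in_free, out_free = [True] * N, [True] * N
    grants = []
    for k in range(N):
        d = (k + slot) % N                # rotate diagonal order each slot
        for i in range(N):
            j = (i + d) % N
            if requests[i][j] and in_free[i] and out_free[j]:
                grants.append((i, j))
                in_free[i] = out_free[j] = False
    return grants

req = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(two_d_round_robin(req, slot=0))  # [(0, 0), (1, 1), (2, 2)]
```

Because every grant consumes one free input and one free output, the result is a matching; rotating the diagonal order spreads service evenly over the N^2 queues.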
••
TL;DR: This paper proposes to model a single video source as a Markov renewal process whose states represent different bit rates, and proposes two novel goodness-of-fit metrics which are directly related to the specific performance aspects that the model wants to predict.
Abstract: Models for predicting the performance of multiplexed variable bit rate video sources are important for engineering a network. However, models of a single source are also important for parameter negotiations and call admittance algorithms. In this paper we propose to model a single video source as a Markov renewal process whose states represent different bit rates. We also propose two novel goodness-of-fit metrics which are directly related to the specific performance aspects that we want to predict from the model. The first is a leaky bucket contour plot which can be used to quantify the burstiness of any traffic type. The second measure applies only to video traffic and measures how well the model can predict the compressed video quality.
134 citations
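The first metric, the leaky bucket contour, sweeps (rate, depth) pairs and records how the traffic violates each bucket. One point of such a contour can be sketched as follows (the slot-based drain and the overflow-counting convention are assumptions for illustration):

```python
def leaky_bucket_overflows(arrivals, rate, depth):
    """Count overflowing arrivals for one (rate, depth) point of a
    leaky-bucket contour: each slot the bucket drains `rate` units,
    then the slot's arrivals are added; anything above `depth` counts
    as an overflow and is clipped."""
    level, overflows = 0.0, 0
    for a in arrivals:
        level = max(0.0, level - rate) + a
        if level > depth:
            overflows += 1
            level = depth
    return overflows

print(leaky_bucket_overflows([5, 0, 7, 1, 9], rate=3, depth=6))  # 2
```

Plotting the overflow count over a grid of rates and depths gives the contour; burstier traffic needs larger depths at the same rate to stay overflow-free.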
••
TL;DR: The ability to synchronize over arbitrary topologies, the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the application to tailor synchronization calculations to its service requirements are presented.
Abstract: Presents an adaptive flow synchronization protocol that permits synchronized delivery of data to and from geographically distributed sites. Applications include inter-stream synchronization, synchronized delivery of information in a multisite conference, and synchronization for concurrency control in distributed computations. The contributions of this protocol in the area of flow synchronization are the ability to synchronize over arbitrary topologies, the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the application to tailor synchronization calculations to its service requirements. The authors take advantage of network protocols capable of maintaining network clock synchronization in the millisecond range.
133 citations
••
TL;DR: A basic teletraffic model with applications to integrated multirate services on ATM and wireless systems is analyzed, and a uniform asymptotic approximation (UAA) for the blocking probabilities is derived that is uniformly effective over the complete range of loadings, simple to calculate, and accurate even for relatively small systems.
Abstract: We consider a basic teletraffic model, which has applications to integrated multirate services on ATM and wireless systems. In the "finite-sources" version of the model an unbuffered resource with C channels is shared by heterogeneous sources which alternate between arbitrarily distributed random periods in the on and off states, and in the on state require a fixed number of channels. If a source does not find enough free channels when it turns on, then it is blocked and the burst is lost. In the "infinite-sources" version of the model requests for connections form Poisson streams and connections hold fixed numbers of channels for random periods. The stationary distribution of the system has a product form and the insensitivity property. Our main results for the finite-sources model are for the asymptotic scaling in which C and the number of sources of each type are large. The central result is a uniform asymptotic approximation (UAA) for the blocking probabilities. It is uniformly effective for the complete range of loadings, simple to calculate, and gives accurate results even for relatively small systems. The UAA is also specialized to the overloaded, critical and underloaded regimes. For the admission control of the system we calculate its Erlang capacity, i.e., the set of combinations of sources of various types such that the blocking probabilities for all types do not exceed specified values. For the first two regimes we obtain the boundaries of the admissible sets in the form of hyperplanes, and thus the effective bandwidths of sources of each type. For the underloaded regime the boundary is nonlinear and we obtain a convenient parameterized characterization. Finally, various numerical results are presented.
••
TL;DR: The authors consider all-to-all transmission schedules, which are defined to be ones that schedule a packet transmission between each input-output pair, and present upper and lower bounds for the minimum length of such schedules.
Abstract: Considers a broadcast-and-select, wavelength division multiplexed (WDM), optical communication network that is packet switched and time slotted. The amount of time it takes transmitters and receivers to tune from one wavelength to another is assumed to be T slots. The authors consider all-to-all transmission schedules, which are defined to be ones that schedule a packet transmission between each input-output pair. They present upper and lower bounds for the minimum length of such schedules. In particular, if each of N inputs has a tunable transmitter and each of N outputs has a tunable receiver, then the minimum length is between (N + o(N))(√T + 1) and (N + o(N))√T. This provides some insight into the relationship between packet delay and T. The authors also consider schedules that do not allow packet transmissions while a transmitter or receiver is tuning from one wavelength to another.
••
TL;DR: An efficient algorithm is presented that finds a system of VP routes for a given set of VP terminators and VP capacity demands and numerical results show that the proposed solution carries the potential for a near optimal allocation of VPs.
Abstract: The virtual path (VP) concept is known to be a powerful transport mechanism for ATM networks. This paper deals with the optimization of the virtual path system from a bandwidth utilization perspective. While previous research on VP management has basically assumed that bandwidth in ATM networks is unlimited, emerging technologies and applications are changing this premise. In many networks, such as wireless, bandwidth is always at a premium. In wired networks, with increasing user access speeds, fewer than a dozen broadband connections can saturate even a Gigabit link. We present an efficient algorithm that finds a system of VP routes for a given set of VP terminators and VP capacity demands. This solution is motivated by the need to minimize the load, or reduce congestion, generated by the VPs on individual links. A nontrivial performance guarantee is proven for the quality of the proposed solution, and numerical results show that the proposed solution carries the potential for a near optimal allocation of VPs.
••
TL;DR: Two metrics based on e_i(P), the number of linear extensions of partial-order P in the presence of i lost objects, are proposed as complexity measures of different combinations of partial order and reliability.
Abstract: Investigates a partial-order connection (POC) service/protocol. Unlike classic transport services that deliver objects either in the exact order transmitted or according to no particular order, POC provides a partial-order service, i.e. a service that requires some, but not all, objects to be received in the order transmitted. Two versions of POC are proposed: reliable, which requires that all transmitted objects are eventually delivered, and unreliable, which permits the service to lose a subset of the objects. In the unreliable version, objects are more finely categorized into one of three reliability classes depending on their temporal value. Two metrics based on e_i(P), the number of linear extensions of partial-order P in the presence of i lost objects, are proposed as complexity measures of different combinations of partial order and reliability. Formulae for calculating e_i(P) are derived when P is series-parallel. A formal specification of a POC protocol, written in Estelle, is presented and discussed. This specification was designed and validated using formal description tools and provides a basis for future implementations.
••
TL;DR: The paper introduces the notion of cell-blocking, wherein a fuzzy thresholding function, based on Zadeh's (1965) fuzzy set theory, is utilized to deliberately refuse entry to a fraction of incoming cells from other switches.
Abstract: High-performance cell-based communications networks have been conceived to carry asynchronous traffic sources and support a continuum of transport rates ranging from low bit-rate to high bit-rate traffic. When a number of bursty traffic sources inject cells, the network is inevitably subject to congestion. Traditional approaches to congestion management include admission control algorithms, smoothing functions, and the use of finite-sized buffers with queue management techniques. Most queue management schemes, reported in the literature, utilize "fixed" thresholds to determine when to permit or refuse entry of cells into the buffer. The aim is to achieve a desired tradeoff between the number of cells carried through the network, propagation delays of the cells, and the number of discarded cells. While binary thresholds are excessively restrictive, the rationale underlying the use of a large number of priorities appears to be ad hoc, unnatural, and unclear. The paper introduces the notion of cell-blocking, wherein a fuzzy thresholding function, based on Zadeh's (1965) fuzzy set theory, is utilized to deliberately refuse entry to a fraction of incoming cells from other switches. The blocked cells must be rerouted by the sending switch to other switches and, in the process, they may incur delays. The fraction of blocked cells is a continuous function of the current buffer occupancy level, unlike the abrupt all-or-nothing behavior of a fixed threshold. The fuzzy cell-blocking scheme is simulated on a computer. Fuzzy queue management adapts superbly to sharp changes in cell arrival rates and maximum burstiness of bursty traffic sources.
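A continuous blocking fraction in the spirit of the fuzzy threshold can be as simple as a ramp membership function (the paper's actual membership shape is not reproduced here; the breakpoints are illustrative):

```python
def blocking_fraction(occupancy, low, high):
    """Fraction of incoming cells refused as a continuous function of
    buffer occupancy: 0 below `low`, 1 above `high`, linear between.
    A generic ramp in the spirit of a fuzzy threshold, replacing the
    all-or-nothing step of a fixed threshold."""
    if occupancy <= low:
        return 0.0
    if occupancy >= high:
        return 1.0
    return (occupancy - low) / (high - low)

print(blocking_fraction(70, low=50, high=90))  # 0.5
```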
••
TL;DR: A novel extension of LOTOS, one of the two formal specification languages that were standardized by ISO, is presented, specifically conceived to integrate performance analysis and formal verification.
Abstract: Performance analysis and formal correctness verification of computer communication protocols and distributed systems have traditionally been considered as two separate fields. However, their integration can be achieved by using formal description techniques as paradigms for the development of performance models. This paper presents a novel extension of LOTOS, one of the two formal specification languages that were standardized by ISO. The extension is specifically conceived to integrate performance analysis and formal verification. The extended language syntax and semantics are formally defined, along with a mapping from extended specifications to performance models. The mapping preserves the specified observable behavior. Two simple examples, a stop-and-wait protocol and a time-sharing system, are used to concretely demonstrate the new approach and to validate it.
••
TL;DR: The problem of optimal buffer space priority control in an ATM network node is studied, and three classes of policies that schedule the buffer allocation in some optimal manner are identified.
Abstract: Studies the problem of optimal buffer space priority control in an ATM network node. The buffer of a transmission link is shared among the cells of several traffic classes waiting for transmission through the link. When the number of cells to be stored in the buffer exceeds the available buffer space, certain cells have to be dropped. Different traffic classes have different sensitivities to cell losses. By appropriately selecting the classes of cells which are dropped or blocked in case of overflow, one can have the more sensitive classes suffer smaller cell losses. Depending on the control that can be exercised on the system, three classes of policies are distinguished. In each one, policies that schedule the buffer allocation in some optimal manner are identified.
••
TL;DR: The authors propose a passive protected DCS self-healing network (PPDSHN) architecture using a passive protection cross-connect network for network protection; it may apply not only to centralized and distributed control DCS network architectures but also to asynchronous, SONET, and ATM DCS networks.
Abstract: The self-healing mesh network architecture using digital cross-connect systems (DCSs) is a crucial part of an integrated network restoration system. The conventional DCS self-healing networks using logical channel protection may require a large amount of spare capacity for network components (such as DCSs) and may not restore services fast enough (e.g., within 2 s). The authors propose a passive protected DCS self-healing network (PPDSHN) architecture using a passive protection cross-connect network for network protection. For the PPDSHN architecture, network restoration is performed in the optical domain and is controlled by electronic working DCS systems. Some case studies have suggested that the proposed PPDSHN architecture may restore services within a two-second objective with less equipment cost than the conventional DCS self-healing network architecture in high-demand metropolitan areas for local exchange carrier networks. The proposed PPDSHN architecture may apply not only to the centralized and distributed control DCS network architectures but also to asynchronous, SONET, and ATM DCS networks. Transparency of line rates and transmission formats makes the PPDSHN network even more attractive when network evolution is a key concern of network planning.
••
TL;DR: The paper describes several improvements to a nonblocking copy network proposed previously for multicast packet switching that provide a complete solution to some system problems inherent in multicasting.
Abstract: The paper describes several improvements to a nonblocking copy network proposed previously for multicast packet switching. The improvements provide a complete solution to some system problems inherent in multicasting. The input fairness problem caused by overflow is solved by a cyclic running adder network (CRAN), which can calculate running sums of copy requests starting from any input port. The starting point can change adaptively in every time slot based on the overflow condition of the previous time slot. The CRAN also serves as a multicast traffic controller to regulate the overall copy requests. The throughput of a multicast switch can be improved substantially if partial service of copy requests is implemented when overflow occurs. Call-splitting can also be implemented by the CRAN in a straightforward manner. Nonuniform distribution of replicated packets at the outputs of the copy network may affect the performance of the following routing network. This output fairness problem due to underflow is solved by cyclically shifting the copy packets in every time slot. An approximate queueing model is developed to analyze the performance of this improved copy network. It shows that if the loading on each output of the copy network is maintained below 80%, the average packet delay in an input buffer would be less than two time slots.
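The quantity the CRAN computes is a cyclic prefix sum over the copy requests, starting at a port that rotates after an overflow slot. In sequential form (the hardware computes this with an adder network in parallel, not a loop):

```python
def cyclic_running_sums(copy_requests, start):
    """Running sums of copy requests beginning at input `start` and
    wrapping cyclically around the input ports; comparing these sums
    against the fanout budget tells each port whether its request
    fits, overflows, or can be partially served."""
    N = len(copy_requests)
    sums, acc = [], 0
    for k in range(N):
        acc += copy_requests[(start + k) % N]
        sums.append(acc)
    return sums

print(cyclic_running_sums([2, 1, 3, 2], start=2))  # [3, 5, 7, 8]
```

Rotating `start` each slot is what restores input fairness: the port that was cut off by overflow last slot is served first in the next.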
••
TL;DR: This paper describes techniques and presents a general framework for gathering and harnessing widely distributed information in a diverse and growing Internet environment; Netfind gathers information from 17 different types of sources, providing a particularly thorough demonstration of an information gathering architecture.
Abstract: The Internet is quickly becoming an indispensable means of communication and collaboration, based on applications such as electronic mail, remote information retrieval, and multimedia conferencing. A fundamental problem for such applications is supporting resource discovery in a fashion that keeps pace with the Internet's exponential growth in size and diversity. Netfind is a scalable tool that locates current electronic mail addresses and other information about Internet users. Since the time we first deployed Netfind in 1990, it has evolved considerably, making use of more types of information sources, as well as more sophisticated mechanisms to gather and cross-correlate information. In this paper, we describe these techniques, and present a general framework for gathering and harnessing widely distributed information in a diverse and growing Internet environment. At present, Netfind gathers information from 17 different types of sources, providing a particularly thorough demonstration of an information gathering architecture.
••
TL;DR: The paper describes three models for the analysis of multistage banyan interconnection networks in which the switching elements are provided with a buffer shared among all the inlets and outlets of the element.
Abstract: The paper develops the analysis of multistage banyan interconnection networks in which the switching elements are provided with a buffer shared among all the inlets and outlets of the element. The packet transfer within the network takes place according to the absence or presence of backpressure signals between adjacent stages. In the latter case, four different modes of operating backpressure have been studied: local and global backpressure with acknowledgment or grant backward signaling. The paper describes three models for the analysis of these networks under random traffic. The models are based on an increasing degree of accuracy (and hence of complexity) in the representation of the state of the generic switching element. The accuracy of these models in evaluating network performance is also assessed by comparison with the results given by previously proposed models.
••
TL;DR: The main contribution of the paper is the identification and evaluation of buffering policies which preserve packet ordering and guarantee the performance (loss probability) of high priority packets, irrespective of the traffic intensity and arrival patterns of low priority packets.
Abstract: Studies buffering policies which provide different loss priorities to packets/cells while preserving packet ordering (space priority disciplines). These policies are motivated by the possible presence, within the same connection, of packets with different loss probability requirements or guarantees, e.g., voice and video coders or rate control mechanisms. The main contribution of the paper is the identification and evaluation of buffering policies which preserve packet ordering and guarantee the performance (loss probability) of high priority packets, irrespective of the traffic intensity and arrival patterns of low priority packets. Such policies are termed protective policies. The need for such policies arises from the difficulty of accurately characterizing and sizing low priority traffic, which can generate large and unpredictable traffic variations over short periods of time. The authors review previously proposed buffer admission policies and determine whether they satisfy such "protection" requirements. Furthermore, they also identify and design new policies which, for a given level of protection, maximize low priority throughput.
••
TL;DR: A new family of error detection codes called weighted sum codes is described, which combines powerful error detection properties (as good as the CRC) with attractive implementation properties; one variant also offers commutative processing.
Abstract: Describes a new family of error detection codes called weighted sum codes. These codes are preferred over four existing codes (CRC, Fletcher checksum, Internet checksum, and XTP CXOR), because they combine powerful error detection properties (as good as the CRC) with attractive implementation properties. One variant, WSC-1, has efficient software and hardware implementations, while a second variant, WSC-2, is almost as efficient in software (still significantly better than CRC) and offers commutative processing (that enables efficient out-of-order, parallel, and incremental update processing).
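For contrast with the new codes, one of the four baselines is short enough to state exactly. Fletcher-16 over bytes keeps two running sums, the second of which effectively weights earlier bytes by their position:

```python
def fletcher16(data):
    """Fletcher-16 checksum of a byte sequence: s1 accumulates the
    bytes, s2 accumulates the successive values of s1, both modulo
    255; the second sum makes the code sensitive to byte order."""
    s1 = s2 = 0
    for b in data:
        s1 = (s1 + b) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

print(hex(fletcher16(b"abcde")))  # 0xc8f0
```

The position-dependence of s2 is what a plain ones-complement sum (the Internet checksum) lacks, and it is also what the weighted sum codes generalize.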
••
TL;DR: The authors present an efficient discrete optimization algorithm that meets these goals while incorporating the prevailing traffic conditions, and propose a new metric to evaluate the solution quality, the so-called suboptimal quality, obtained by deriving the solution's relative position in the state space according to the performance order.
Abstract: One of the major challenges in the virtual topology design of a WDM star based system is to incorporate in the optimization process both realistic objective functions and real system behavior. The authors present an efficient discrete optimization algorithm that meets these goals while incorporating the prevailing traffic conditions. They simulate the real system and then approximate the objective function by a short term simulation. The optimization process is based on an ordinal optimization approach, i.e., it is insensitive to the approximation of the objective function obtained by short term simulation. Another crucial issue in virtual topology design is how to evaluate the quality of the solution obtained by the algorithm. They propose a new metric to evaluate the solution quality, the so-called suboptimal quality, obtained by deriving the solution's relative position in the state space according to the performance order. The experiments presented in the paper attest to the quality (efficiency and robustness) of the optimization algorithm and its suitability to solve the wavelength assignment problem.
••
TL;DR: The study indicates that, by selective packet discarding, one can properly change the distribution of the overall loss rate spectrum among the multiplexed traffic streams, which can significantly improve the transmission quality of individual services.
Abstract: Uses the well-known technique of power spectral representation to characterize the second-order statistics of packet loss during congestion at a statistical multiplexer. Unlike the steady-state statistics, the second-order statistics measure the time-correlation behavior of loss. Typically, low-frequency loss power is associated with long periods of highly consecutive loss. The effect of the loss rate power spectrum on transmission quality is service-dependent. The study indicates that, by selective packet discarding, one can properly change the distribution of the overall loss rate spectrum among the multiplexed traffic streams, which can significantly improve the transmission quality of individual services. Moreover, with the design of multi-level buffer overload control, one can tune the loss rate steady-state distribution of different priority streams to a piecewise step function, which minimizes the packet loss impact on service qualities. >
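The low-frequency/high-frequency distinction can be made concrete with a periodogram of a binary loss indicator. This is a generic spectral estimate, not the paper's model; two streams with the same average loss rate but different burstiness separate clearly in the spectrum:

```python
import cmath

def periodogram(x):
    """Periodogram of a sequence after mean removal: |DFT|^2 / N.
    For a 0/1 loss indicator, power at low frequencies signals long
    runs of consecutive loss; power at high frequencies signals
    isolated, spread-out losses."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    spec = []
    for k in range(n // 2 + 1):
        s = sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, v in enumerate(xc))
        spec.append(abs(s) ** 2 / n)
    return spec

# Two loss patterns with the same 50% loss rate:
bursty = [1] * 8 + [0] * 8      # one long loss burst
alternating = [1, 0] * 8        # isolated, evenly spread losses

pb = periodogram(bursty)
pa = periodogram(alternating)

# The burst concentrates power at low frequencies; the alternating
# pattern concentrates it at the highest frequency.
assert pb[1] > pa[1]
assert pa[-1] > pb[-1]
```

This is why steady-state loss rate alone is a poor quality predictor: both streams lose half their packets, yet their spectra, and hence their perceived service quality, differ sharply.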
••
TL;DR: A new measure of fault tolerance is described, along with Markov-chain techniques for calculating upper and lower bounds on the fault tolerance of this network topology quickly and efficiently; the resulting bounds describe network fault tolerance more precisely than previously published techniques.
Abstract: The paper analyzes the fault tolerance of a class of double-loop networks referred to as forward-loop backward-hop (FLBH), in which each node is connected via unidirectional links to the node one hop in front of it and to the node S hops in back of it for some S. A new measure of fault tolerance is described, along with techniques based on Markov chains to calculate upper and lower bounds on the fault tolerance of this network topology quickly and efficiently. The results of these calculations provide a more precise description of network fault tolerance than has been achieved with previously published techniques. >
••
TL;DR: The paper develops the analysis of multistage banyan interconnection networks in which the switching elements are provided with a buffer shared among all the inlets and outlets of the element.
Abstract: For pt.I see ibid., vol.2, no.4, p.398-410 (1994). The paper develops the analysis of multistage banyan interconnection networks in which the switching elements are provided with a buffer shared among all the inlets and outlets of the element. Two different internal protocols are considered for the transfer of packets from stage to stage, based on the presence or absence of interstage backpressure to signal the occurrence of buffer saturation conditions. The analytical model is based on representing the switching element state by means of only two variables. Two kinds of offered traffic patterns are considered: a bursty balanced pattern and an unbalanced uncorrelated pattern. In the latter case, the load above the average is assumed to be addressed either to a single network outlet or to a given group of network outlets. >
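Self-routing through a banyan-class fabric can be illustrated with a shuffle-exchange (Omega) network, where the 2x2 switch at stage k is set by the k-th most significant bit of the destination address. This is a generic sketch of banyan self-routing, not this paper's specific buffered switching element model:

```python
def omega_route(n, src, dst):
    """Trace a packet through a log2(n)-stage shuffle-exchange (Omega)
    banyan network. At each stage the line index is perfect-shuffled
    (rotate left), then the 2x2 element picks its upper or lower output
    according to the next destination bit, MSB first (self-routing)."""
    bits = n.bit_length() - 1
    p = src
    path = [p]
    for k in range(bits):
        p = ((p << 1) & (n - 1)) | (p >> (bits - 1))  # perfect shuffle
        b = (dst >> (bits - 1 - k)) & 1               # destination bit
        p = (p & ~1) | b                              # exchange: port b
        path.append(p)
    return path

# Every (src, dst) pair in an 8x8 fabric is delivered to the right line.
n = 8
assert all(omega_route(n, s, d)[-1] == d
           for s in range(n) for d in range(n))
```

After `bits` shuffle-exchange stages the source bits are fully rotated out and replaced by the destination bits, which is why no routing tables are needed anywhere in the fabric.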
••
TL;DR: An ATM call interworking with the public switched telephone network may experience echo problems due to impedance mismatch, so two echo control designs are proposed to assure the quality of voice services by reducing round-trip echo delay.
Abstract: An ATM call interworking with the public switched telephone network may experience echo problems due to impedance mismatch. The authors propose two echo control designs to assure the quality of voice services by reducing round-trip echo delay. Performance of these two echo control designs is then analyzed. >
••
TL;DR: The proposed Multinet switch is a self-routing multistage switch with partially shared internal buffers capable of achieving 100% throughput under uniform traffic; it provides incoming ATM cells with multiple paths while preserving cell order, thus avoiding the out-of-order cell sequence problem.
Abstract: A new ATM switch architecture is presented. Our proposed Multinet switch is a self-routing multistage switch with partially shared internal buffers capable of achieving 100% throughput under uniform traffic. Although it provides incoming ATM cells with multiple paths, the cell sequence is maintained throughout the switch fabric, thus eliminating the out-of-order cell sequence problem. Cells contending for the same output addresses are buffered internally according to a partially shared queueing discipline. In a partially shared queueing scheme, buffers are partially shared to accommodate bursty traffic and to limit the performance degradation that may occur in a completely shared system, where a small number of calls may unfairly hog the entire buffer space. Although the hardware complexity in terms of the number of crosspoints is similar to that of input queueing switches, the Multinet switch has throughput and delay performance similar to that of output queueing switches. >
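The partially shared queueing idea can be sketched as an admission rule: each output owns some dedicated slots and may borrow from a common pool once they are full. This is an illustrative model, not the Multinet switch's exact discipline:

```python
class PartiallySharedBuffer:
    """Partial buffer sharing: each output gets `dedicated` private
    slots plus access to a pool of `shared` slots. Sharing absorbs
    bursts; the dedicated part keeps one hot output from hogging the
    entire buffer space, the unfairness of completely shared systems.
    (Illustrative sketch, not the Multinet switch's exact scheme.)"""

    def __init__(self, outputs, dedicated, shared):
        self.qlen = [0] * outputs
        self.dedicated = dedicated
        self.shared_left = shared

    def enqueue(self, out):
        if self.qlen[out] < self.dedicated:
            self.qlen[out] += 1           # use a private slot
            return True
        if self.shared_left > 0:
            self.shared_left -= 1         # borrow from the shared pool
            self.qlen[out] += 1
            return True
        return False                      # cell dropped

buf = PartiallySharedBuffer(outputs=4, dedicated=2, shared=3)

# A burst of 6 cells to output 0: 2 dedicated + 3 shared accepted, 1 dropped.
accepted = [buf.enqueue(0) for _ in range(6)]
assert accepted == [True] * 5 + [False]

# Other outputs keep their dedicated slots even with the pool exhausted.
assert buf.enqueue(1)
```

Setting `shared = 0` degenerates to pure output queueing per port; making `dedicated = 0` with a large pool recovers the completely shared system the abstract warns about.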
••
TL;DR: A theoretical analysis of the fault coverage of conformance test sequences for communication protocols specified as finite state machines, together with a technique for generating test sequences that provide guaranteed maximal fault coverage for the conformance testing of communication protocols.
Abstract: A theoretical analysis of the fault coverage of conformance test sequences for communication protocols specified as finite state machines is presented. Faults of different types are considered, and their effect on testing is analyzed. The interaction between faults of different categories and the impact it has on conformance testing is investigated. Fault coverage is defined for the testing of both incompletely-specified machines (ISMs) and completely-specified machines (CSMs). An algorithm is presented to generate test sequences with maximal fault coverage for the testing of ISMs. It is then augmented for the testing of CSMs, and finally a technique is presented for generating test sequences which provides guaranteed maximal fault coverage for the conformance testing of communication protocols. >