
Showing papers in "IEEE ACM Transactions on Networking in 1995"


Journal ArticleDOI
TL;DR: It is found that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson.
Abstract: Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. We evaluate 24 wide area traces, investigating a number of wide area TCP arrival processes (session and connection arrivals, FTP data connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. We find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib interarrivals preserves burstiness over many time scales; and that FTP data connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTP data traffic. Finally, we offer some results regarding how our findings relate to the possible self-similarity of wide area traffic. >
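
The contrast the paper draws between Poisson-like session arrivals and bursty packet arrivals can be illustrated with a small simulation: the index of dispersion of window counts stays near one for a Poisson process but grows with the aggregation window for a heavy-tailed arrival process. This is a generic sketch with invented rates, not an analysis of the paper's traces; the Pareto interarrivals merely stand in for an empirical bursty distribution such as Tcplib's.

```python
import random

def interarrivals(n, draw):
    """Generate n interarrival times using the supplied sampler."""
    return [draw() for _ in range(n)]

def counts_per_window(gaps, window):
    """Bin an arrival process (given by its interarrival gaps) into fixed windows."""
    counts, t, c, edge = [], 0.0, 0, window
    for g in gaps:
        t += g
        while t > edge:
            counts.append(c)
            c, edge = 0, edge + window
        c += 1
    return counts

def index_of_dispersion(counts):
    """Variance/mean of window counts: ~1 for Poisson, >1 and growing for bursty traffic."""
    m = sum(counts) / len(counts)
    return sum((x - m) ** 2 for x in counts) / len(counts) / m

random.seed(1)
poisson = interarrivals(50_000, lambda: random.expovariate(1.0))
bursty = interarrivals(50_000, lambda: random.paretovariate(1.5))  # heavy-tailed stand-in

for w in (1, 10, 100):
    print(w,
          round(index_of_dispersion(counts_per_window(poisson, w)), 2),
          round(index_of_dispersion(counts_per_window(bursty, w)), 2))
```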

3,915 citations


Journal ArticleDOI
TL;DR: It is argued that controlled link-sharing is an essential component that can provide gateways with the flexibility to accommodate emerging applications and network protocols.
Abstract: Discusses the use of link-sharing mechanisms in packet networks and presents algorithms for hierarchical link-sharing. Hierarchical link-sharing allows multiple agencies, protocol families, or traffic types to share the bandwidth on a link in a controlled fashion. Link-sharing and real-time services both require resource management mechanisms at the gateway. Rather than requiring a gateway to implement separate mechanisms for link-sharing and real-time services, the approach in the paper is to view link-sharing and real-time service requirements as simultaneous, and in some respect complementary, constraints at a gateway that can be implemented with a unified set of mechanisms. While it is not possible to completely predict the requirements that might evolve in the Internet over the next decade, the authors argue that controlled link-sharing is an essential component that can provide gateways with the flexibility to accommodate emerging applications and network protocols. >
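
A toy illustration of the hierarchical link-sharing idea (in the spirit of the class hierarchy described above, not the paper's actual gateway mechanisms): each interior node splits its bandwidth among its children in proportion to configured shares, and an idle child's share is redistributed to its active siblings. The agency/traffic-type hierarchy and all figures are invented.

```python
def subtree_active(node):
    """A class is active if it, or any descendant leaf class, has traffic to send."""
    children = node.get("children")
    if not children:
        return node.get("active", False)
    return any(subtree_active(c) for c in children)

def allocate(node, bandwidth):
    """Recursively split `bandwidth` among a node's active children in proportion
    to their configured shares; an idle subtree's share is redistributed to its
    active siblings under the same parent."""
    children = node.get("children")
    if not children:
        return {node["name"]: bandwidth if node.get("active") else 0.0}
    active = [c for c in children if subtree_active(c)]
    total = sum(c["share"] for c in active) or 1.0
    out = {}
    for c in children:
        part = bandwidth * c["share"] / total if c in active else 0.0
        out.update(allocate(c, part))
    return out

# Hypothetical link shared by two agencies, each carrying two traffic types.
link = {"name": "link", "children": [
    {"name": "agencyA", "share": 0.7, "children": [
        {"name": "A-realtime", "share": 0.3, "active": True},
        {"name": "A-bulk", "share": 0.7, "active": False}]},
    {"name": "agencyB", "share": 0.3, "children": [
        {"name": "B-realtime", "share": 0.5, "active": True},
        {"name": "B-bulk", "share": 0.5, "active": True}]},
]}
print(allocate(link, 45.0))  # 45 Mb/s link, arbitrary figure
```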

1,181 citations


Journal ArticleDOI
TL;DR: The authors derive an upper bound on the carried traffic of connections for any routing and wavelength assignment (RWA) algorithm in a reconfigurable optical network and quantifies the amount of wavelength reuse achievable in large networks as a function of the number of wavelengths, number of edges, and number of nodes for randomly constructed networks as well as de Bruijn networks.
Abstract: Considers routing connections in a reconfigurable optical network using WDM. Each connection between a pair of nodes in the network is assigned a path through the network and a wavelength on that path, such that connections whose paths share a common link in the network are assigned different wavelengths. The authors derive an upper bound on the carried traffic of connections (or equivalently, a lower bound on the blocking probability) for any routing and wavelength assignment (RWA) algorithm in such a network. The bound scales with the number of wavelengths and is achieved asymptotically (when a large number of wavelengths is available) by a fixed RWA algorithm. The bound can be used as a metric against which the performance of different RWA algorithms can be compared for networks of moderate size. The authors illustrate this by comparing the performance of a simple shortest-path RWA (SP-RWA) algorithm via simulation relative to the bound. They also derive a similar bound for optical networks using dynamic wavelength converters, which are equivalent to circuit-switched telephone networks, and compare the two cases. Finally, they quantify the amount of wavelength reuse achievable in large networks using the SP-RWA via simulation as a function of the number of wavelengths, number of edges, and number of nodes for randomly constructed networks as well as de Bruijn networks. They also quantify the difference in wavelength reuse between two different optical node architectures. >
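
A minimal sketch in the spirit of the SP-RWA algorithm compared above: route each request on a shortest path and assign the lowest-indexed wavelength that is free on every link of that path. The ring topology, the requests, and the first-fit tie-breaking are illustrative assumptions, not the paper's exact algorithm or its bound computation.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path in an undirected graph given as {node: set(neighbors)}."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return list(reversed(path))
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def sp_rwa(adj, requests, num_wavelengths):
    """Route each request on a shortest path; assign the first wavelength free on
    every link of that path, or block the request."""
    used = set()                       # (link, wavelength) pairs in use
    assignments, blocked = [], 0
    for src, dst in requests:
        path = shortest_path(adj, src, dst)
        links = [frozenset(e) for e in zip(path, path[1:])]
        for w in range(num_wavelengths):
            if all((l, w) not in used for l in links):
                used.update((l, w) for l in links)
                assignments.append((src, dst, w))
                break
        else:
            blocked += 1               # no wavelength free on the whole path
    return assignments, blocked

# Six-node ring and a few connection requests (arbitrary example data).
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(sp_rwa(ring, [(0, 3), (1, 4), (2, 5), (0, 2)], num_wavelengths=2))
```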

1,046 citations


Journal ArticleDOI
TL;DR: Specific improvements developed for NTP Version 3 are described which have resulted in increased accuracy, stability and reliability in both local-area and wide-area networks and certain enhancements to the Unix operating system kernel software are described to realize submillisecond accuracies with fast workstations and networks.
Abstract: The Network Time Protocol (NTP) is widely deployed in the Internet to synchronize computer clocks to each other and to international standards via telephone modem, radio and satellite. The protocols and algorithms have evolved over more than a decade to produce the present NTP Version 3 specification and implementations. Most of the estimated deployment of 100,000 NTP servers and clients enjoy synchronization to within a few tens of milliseconds in the Internet of today. This paper describes specific improvements developed for NTP Version 3 which have resulted in increased accuracy, stability and reliability in both local-area and wide-area networks. These include engineered refinements of several algorithms used to measure time differences between a local clock and a number of peer clocks in the network, as well as to select the best subset from among an ensemble of peer clocks and combine their differences to produce a local clock accuracy better than any in the ensemble. This paper also describes engineered refinements of the algorithms used to adjust the time and frequency of the local clock, which functions as a disciplined oscillator. The refinements provide automatic adjustment of algorithm parameters in response to prevailing network conditions, in order to minimize network traffic between clients and busy servers while maintaining the best accuracy. Finally, this paper describes certain enhancements to the Unix operating system kernel software in order to realize submillisecond accuracies with fast workstations and networks.
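
The clock-filtering and selection algorithms described above operate on per-peer offset and delay samples obtained from the standard NTP on-wire calculation; given the four timestamps of one request/response exchange, a sample is computed as follows (the timestamps in the example are invented).

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP on-wire calculation.

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time
    (t1, t4 on the client clock; t2, t3 on the server clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Example with made-up timestamps (seconds): the client is ~5 ms behind the server.
print(ntp_offset_delay(100.000, 100.015, 100.016, 100.021))
# offset ~ +0.005 s, delay ~ 0.020 s
```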

313 citations


Journal ArticleDOI
TL;DR: A graph based network model is proposed that takes into account the dependencies among the different objects in the telecommunication environment and a novel approach to estimate the domain of an alarm and an algorithm for fault diagnosis is designed and analyzed.
Abstract: A single fault in a large communication network may result in a large number of fault indications (alarms) making the isolation of the primary source of failure a difficult task. The problem becomes worse in cases of multiple faults. In this paper we present an approach for modelling the problem of fault diagnosis. We propose a graph based network model that takes into account the dependencies among the different objects in the telecommunication environment and a novel approach to estimate the domain of an alarm. Based on that model, we design an algorithm for fault diagnosis and analyze its performance with respect to the accuracy of the fault hypotheses it provides. We also propose and analyze a fault diagnosis algorithm suitable for systems for which an independent failure assumption is valid. Finally, we examine the importance of the information of dependency between objects for the fault diagnosis process.
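
A minimal sketch of how a dependency-graph model can be used to rank fault hypotheses: a candidate fault explains an alarm if the alarmed object transitively depends on it, and candidates are preferred when they cover the observed alarms without implying many unalarmed objects. The graph, the scoring rule, and the example are illustrative assumptions, not the paper's algorithm.

```python
def fault_candidates(depends_on, alarms):
    """Rank candidate primary faults by how well they explain the observed alarms.

    depends_on: {object: set of objects it directly depends on}.
    A fault at x can explain an alarm at a if a transitively depends on x."""
    def domain(x):
        # objects whose operation depends, directly or indirectly, on x
        out, frontier = set(), {x}
        while frontier:
            out |= frontier
            frontier = {a for a, deps in depends_on.items()
                        if deps & frontier and a not in out}
        return out

    scored = [(len(alarms & domain(x)), -len(domain(x) - alarms), x)
              for x in depends_on]
    return [x for _, _, x in sorted(scored, reverse=True)]

# Toy dependency model: two terminals depend on a mux, which depends on a link.
deps = {"link": set(), "mux": {"link"}, "termA": {"mux"}, "termB": {"mux"}}
print(fault_candidates(deps, alarms={"mux", "termA", "termB"})[0])  # 'mux'
```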

284 citations


Journal ArticleDOI
Scott Shenker
TL;DR: It is shown that no service discipline can guarantee optimal efficiency, that the traditional FIFO service discipline guarantees none of the remaining desirable properties (fairness, uniqueness, and easy accessibility of the selfish operating points), and that a service discipline called fair share guarantees all of them.
Abstract: This paper discusses congestion control from a game-theoretic perspective. There are two basic premises: 1) Users are assumed to be independent and selfish. 2) Central administrative control is exercised only at the network switches. The operating points resulting from selfish user behavior depend crucially on the service disciplines implemented in network switches. This effect is investigated in a simple model consisting of a single exponential server shared by many Poisson sources. We discuss the extent to which one can guarantee, through the choice of switch service disciplines, that these selfish operating points will be efficient and fair. We also discuss to what extent the choice of switch service disciplines can ensure that these selfish operating points are unique and are easily and rapidly accessible by simple self optimization techniques. We show that no service discipline can guarantee optimal efficiency. As for the other properties, we show that the traditional FIFO service discipline guarantees none of these properties, but that a service discipline called fair share guarantees all of them. While the treatment utilizes game-theoretic concepts, no previous knowledge of game theory is assumed.

250 citations


Journal ArticleDOI
TL;DR: Surprisingly, the authors find that, for a wide range of parameters, the blocking performance of the lightwave network is almost the same as that of the ideal centralized switch.
Abstract: Presents a heuristic algorithm for effectively assigning a limited number of wavelengths among the access stations of a multihop network wherein the physical medium consists of optical fiber segments which interconnect wavelength-selective optical switches. Such a physical medium permits the limited number of wavelengths to be re-used among the various fiber links, thereby offering very high aggregate capacity. Although the optical connectivity among the access station can be altered by changing the states of the various optical switches, the resulting optical connectivity pattern is constrained by the limitation imposed at the physical level. The authors also study two routing schemes, used to route requests for virtual connections. The heuristic is tested on a realistic traffic model, and the call blocking performance of new requests for virtual connections is studied through extensive simulations and compared against the blocking performance of an ideal infinite capacity centralized switch (lowest possible call blocking caused exclusively by congestion on the finite capacity user input/output links, never by the switch fabric itself). Surprisingly, the authors find that, for a wide range of parameters, the blocking performance of the lightwave network is almost the same as that of the ideal centralized switch. From these results, they conclude that the heuristic algorithm is effective and the routing scheme is efficient. >

228 citations


Journal ArticleDOI
TL;DR: The authors investigate the problem of assigning orthogonal codes to stations so as to eliminate the hidden terminal interference, and show that this problem is NP-complete, and thus computationally intractable, even for very restricted but very realistic network topologies.
Abstract: Hidden terminal interference is caused by the (quasi-) simultaneous transmission of two stations that cannot hear each other, but are both received by the same destination station. This interference lowers the system throughput and increases the average packet delay. Some random access protocols that reduce this interference have been proposed, e.g., BTMA protocol. However, the hidden terminal interference can be totally avoided only by means of code division multiple access (CDMA) schemes. In the paper, the authors investigate the problem of assigning orthogonal codes to stations so as to eliminate the hidden terminal interference. Since the codes share the fixed channel capacity allocated to the network in the design stage, their number must not exceed a given bound. The authors seek assignments that minimize the number of codes used. They show that this problem is NP-complete, and thus computationally intractable, even for very restricted but very realistic network topologies. Then, they present optimal algorithms for further restricted topologies, as well as fast suboptimal centralized and distributed heuristic algorithms. The results of extensive simulation set up to derive the average performance of the proposed heuristics on realistic network topologies are presented. >
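
A simple greedy heuristic in the spirit of the suboptimal centralized algorithms mentioned above: since hidden-terminal interference involves stations within two hops of each other, each station is given the smallest code index not already used within distance two. This is a generic sketch (the paper's specific heuristics and orderings are not reproduced), and the chain topology is just an example.

```python
def greedy_code_assignment(adj):
    """Assign each station the smallest code not used within two hops.

    adj: {station: set(neighbors)} of the hearing graph. Returns {station: code};
    the number of distinct codes used is an upper bound on the optimum, which
    the paper shows is NP-hard to determine in general."""
    code = {}
    for s in sorted(adj, key=lambda v: len(adj[v]), reverse=True):  # densest first
        two_hop = set(adj[s])
        for n in adj[s]:
            two_hop |= adj[n]
        two_hop.discard(s)
        taken = {code[t] for t in two_hop if t in code}
        c = 0
        while c in taken:
            c += 1
        code[s] = c
    return code

# Five-station chain: stations two apart are classic hidden terminals.
chain = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 4} for i in range(5)}
print(greedy_code_assignment(chain))  # three codes suffice on a chain
```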

208 citations


Journal ArticleDOI
TL;DR: Numerical results for a simple one-dimensional mobility model show that the optimal scheme may provide significant savings when compared to the standard approach even when the latter is optimized by suitably choosing the registration area size on a per-user basis.
Abstract: In personal communications applications, users communicate via wireless with a wireline network. The wireline network tracks the current location of the user, and can therefore route messages to a user regardless of the user's location. In addition to its impact on signaling within the wireline network, mobility tracking requires the expenditure of wireless resources as well, including the power consumption of the portable units carried by the users and the radio bandwidth used for registration and paging. Ideally, the mobility tracking scheme used for each user should depend on the user's call and mobility pattern, so the standard approach, in which all cells in a registration area are paged when a call arrives, may be wasteful of wireless resources. In order to conserve these resources, the network must have the capability to page selectively within a registration area, and the user must announce his or her location more frequently. We propose and analyze a simple model that captures this additional flexibility. Dynamic programming is used to determine an optimal announcing strategy for each user. Numerical results for a simple one-dimensional mobility model show that the optimal scheme may provide significant savings when compared to the standard approach even when the latter is optimized by suitably choosing the registration area size on a per-user basis. Ongoing research includes computing numerical results for more complicated mobility models and determining how existing system designs might be modified to incorporate our approach.
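
A compact sketch of the kind of dynamic program described above, for a one-dimensional random-walk user: the state is the paging uncertainty accumulated since the last announcement, the action is whether to register now, and registration cost is traded off against expected paging cost over a finite horizon. The cost model and all parameter values are invented for illustration.

```python
def optimal_announcing(T, k_max, reg_cost, page_cost, p_call):
    """Finite-horizon DP minimising expected registration plus paging cost.

    State k: paging uncertainty (user within k cells of the last report).
    Each slot a call arrives with probability p_call and the network pages
    2k+1 cells; the random walk then grows the uncertainty by one cell.
    Action: announce the location now (reset k to 0) or stay silent."""
    stage = lambda k: p_call * page_cost * (2 * k + 1)
    V = [0.0] * (k_max + 1)          # terminal values at the horizon
    policy = []
    for t in range(T - 1, -1, -1):
        newV, register_if = [0.0] * (k_max + 1), []
        for k in range(k_max + 1):
            grow = min(k + 1, k_max)
            silent = stage(k) + V[grow]
            announce = reg_cost + stage(0) + V[min(1, k_max)]
            newV[k] = min(silent, announce)
            register_if.append(announce < silent)
        V, policy = newV, [register_if] + policy
    return V, policy

V, policy = optimal_announcing(T=50, k_max=20, reg_cost=1.0, page_cost=0.1, p_call=0.2)
# The optimal rule is a threshold: announce once the uncertainty k is large enough.
print([k for k, reg in enumerate(policy[0]) if reg][:3])
```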

199 citations


Journal ArticleDOI
TL;DR: It is shown that modulating the source rate of a video encoder based on congestion signals from the network has two major benefits: the quality of the video transmission degrades gracefully when the network is congested and the transmission capacity is used efficiently.
Abstract: We show that modulating the source rate of a video encoder based on congestion signals from the network has two major benefits: the quality of the video transmission degrades gracefully when the network is congested and the transmission capacity is used efficiently. Source rate modulation techniques have been used in the past in designing fixed rate video encoders used over telephone networks. In such constant bit rate encoders, the source rate modulation is done using feedback information about the occupancy of a local buffer. Thus, the feedback information is available instantaneously to the encoder. In the scheme proposed, the feedback may be delayed by several frames because it comes from an intermediate switching node of a packet switched network. The paper shows the proposed scheme performs quite well despite this delay in feedback. We believe the use of such schemes will simplify the architecture used for supporting real time video services in future nationwide gigabit networks.
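
A toy version of the feedback loop described above (not the paper's actual encoder control law): the encoder holds a target bit rate, backs it off multiplicatively when a delayed congestion signal arrives, recovers additively otherwise, and maps the allowed rate to a quantizer step. All constants are invented.

```python
class FeedbackRateController:
    """Adjust a video encoder's target rate from delayed congestion feedback.

    Illustrative constants only; the actual control law and the congestion
    signalling used in the paper are not reproduced here."""
    def __init__(self, max_rate_kbps, min_rate_kbps=64):
        self.max_rate = max_rate_kbps
        self.min_rate = min_rate_kbps
        self.rate = max_rate_kbps

    def on_feedback(self, congested):
        if congested:
            self.rate = max(self.min_rate, self.rate * 0.8)  # multiplicative back-off
        else:
            self.rate = min(self.max_rate, self.rate + 50)   # additive recovery

    def quantizer_step(self):
        """Coarser quantisation (larger step) when the allowed rate is lower."""
        return max(1, int(31 * self.min_rate / self.rate))

ctl = FeedbackRateController(max_rate_kbps=2000)
# The congestion signal arrives several frames late; here we simply replay a trace.
for congested in [False, False, True, True, False, False, False]:
    ctl.on_feedback(congested)
    print(round(ctl.rate), ctl.quantizer_step())
```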

188 citations


Journal ArticleDOI
TL;DR: The authors formulate the problem exactly as an integer programming problem and propose a heuristic solution for this problem and show that it performs extremely well.
Abstract: Considers a problem of network design of personal communication services (PCS). The problem is to assign cells to the switches of a PCS network in an optimum manner. The authors consider two types of costs. One is the cost of handoffs between cells. The other is the cost of cabling (or trunking) between a cell site and its associated switch. The problem is constrained by the call volume that each switch can handle. The authors formulate the problem exactly as an integer programming problem. They also propose a heuristic solution for this problem and show that it performs extremely well. >
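
One simple greedy heuristic of the general kind referred to above (the paper's own heuristic and its exact integer program are not reproduced): assign cells one at a time to the feasible switch that minimizes cabling cost plus the handoff cost incurred with already-assigned neighbouring cells, subject to switch capacity. All data in the example are invented.

```python
def assign_cells(cells, switches, cable_cost, handoff_cost, call_volume, capacity):
    """Greedy cell-to-switch assignment.

    cable_cost[c][s]   : cost of cabling cell c to switch s
    handoff_cost[c][d] : handoff traffic cost between cells c and d
    call_volume[c]     : load cell c puts on its switch
    capacity[s]        : call-handling capacity of switch s"""
    assignment, load = {}, {s: 0.0 for s in switches}
    # Place heavily loaded cells first so capacity is less likely to bind late.
    for c in sorted(cells, key=lambda c: -call_volume[c]):
        best_s, best_cost = None, float("inf")
        for s in switches:
            if load[s] + call_volume[c] > capacity[s]:
                continue
            cost = cable_cost[c][s]
            for d, asgn in assignment.items():
                if asgn != s:                      # inter-switch handoffs cost extra
                    cost += handoff_cost[c].get(d, 0.0)
            if cost < best_cost:
                best_s, best_cost = s, cost
        if best_s is None:
            raise ValueError(f"no switch has capacity for cell {c}")
        assignment[c] = best_s
        load[best_s] += call_volume[c]
    return assignment

# Tiny example: 3 cells, 2 switches (all figures invented).
print(assign_cells(
    ["c1", "c2", "c3"], ["s1", "s2"],
    cable_cost={"c1": {"s1": 1, "s2": 4}, "c2": {"s1": 2, "s2": 2}, "c3": {"s1": 5, "s2": 1}},
    handoff_cost={"c1": {"c2": 3}, "c2": {"c1": 3, "c3": 3}, "c3": {"c2": 3}},
    call_volume={"c1": 10, "c2": 10, "c3": 10},
    capacity={"s1": 20, "s2": 20}))
```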

Journal ArticleDOI
TL;DR: A new approach for spare-capacity assignment in mesh-type self-healing networks which use reconfigurable network elements as transmission hubs is presented to demonstrate, for cases where hop limits can feasibly be kept low, its superiority over other algorithms published in this area.
Abstract: This paper presents a new approach for spare-capacity assignment in mesh-type self-healing networks which use reconfigurable network elements as transmission hubs. Under this approach, the total weighted cost of spare capacity is reduced by taking into account all of the network's eligible restoration routes which do not violate a predetermined hop-limit value. The process derived considers a given set of possible failure scenarios, which include single-link, multi-link and node failures, and is adaptive to accommodate several practical considerations such as integrality of spare channels and modularity of transmission systems. This process is generally composed of two parts: part 1 relies on a linear-programming formulation (min-max) from which a lower-bound solution of spare cost is found; part 2 rounds up the solution of part 1 to a feasible solution and uses a series of max-flow tests aimed at tightening the rounded-up assignment to a practical optimum. For small and moderate size networks a mixed-integer-programming formulation of part 1 can be used to obtain optimal results. A network which has already been studied in the literature is analyzed to illustrate the approach developed and to demonstrate, for cases where hop limits can feasibly be kept low, its superiority over other algorithms published in this area.

Journal ArticleDOI
TL;DR: It is found that while different video sequences require different TD parameters, the following trends hold for all sequences examined: increasing the delay in the video system decreases the necessary peak rate and significantly increases the number of calls that can be carried by the network.
Abstract: This paper examines the problem of video transport over ATM networks using knowledge of both video system design and broadband networks. The following issues are addressed: video system delay caused by internal buffering, traffic descriptors (TD) for video, and call admission. We find that while different video sequences require different TD parameters, the following trends hold for all sequences examined. First, increasing the delay in the video system decreases the necessary peak rate and significantly increases the number of calls that can be carried by the network. Second, as an operational traffic descriptor for video, the leaky-bucket algorithm appears to be superior to the sliding-window algorithm. And finally, with a delay in the video system, the statistical multiplexing gain from VBR over CBR video is upper bounded by roughly a factor of four, and to obtain a gain of about 2.0 can require the operational traffic descriptor to have a window or bucket size on the order of a thousand cells. We briefly discuss how increasing the complexity of the video system may enable the size of the bucket or window to be reduced. >
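
Conformance to the leaky-bucket traffic descriptor favoured by the paper can be checked in a few lines; the cell arrival times and the (rate, bucket) parameters below are invented examples.

```python
def leaky_bucket_conforms(arrival_times, rate, bucket_size):
    """True if every cell conforms to a leaky-bucket descriptor (rate, bucket_size).

    Token interpretation: tokens accumulate at `rate` up to `bucket_size`;
    each cell consumes one token and is non-conforming if none is available."""
    tokens, last = bucket_size, 0.0
    for t in arrival_times:
        tokens = min(bucket_size, tokens + (t - last) * rate)
        last = t
        if tokens < 1.0:
            return False
        tokens -= 1.0
    return True

# A short burst followed by a steady stream (times in cell slots, invented).
cells = [0.0, 0.1, 0.2, 0.3] + [1.0 + 0.5 * i for i in range(10)]
print(leaky_bucket_conforms(cells, rate=2.0, bucket_size=4))  # True
print(leaky_bucket_conforms(cells, rate=2.0, bucket_size=2))  # False
```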

Journal ArticleDOI
TL;DR: A delay guarantee for the virtual clock service discipline (inspired by time division multiplexing) is presented and proved; the concept of an active flow is introduced, and the guarantee is formally stated as a theorem.
Abstract: In a packet switching network, each communication channel is statistically shared among many traffic flows that belong to different end-to-end sessions. We present and prove a delay guarantee for the virtual clock service discipline (inspired by time division multiplexing). The guarantee has several desirable properties, including the following firewall property: the guarantee to a flow is unaffected by the behavior of other flows sharing the same server. There is no assumption that sources are flow controlled or well behaved. We first introduce and define the concept of an active flow. The delay guarantee is then formally stated as a theorem. We show how to obtain delay bounds from the delay guarantee of a single server for different specifications.
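
A minimal virtual clock scheduler sketch consistent with the discipline analysed above: each flow has a reserved rate, each arriving packet is stamped max(now, flow's virtual clock) + length/rate, and packets are served in increasing stamp order. The flow rates and packet sizes below are arbitrary.

```python
import heapq

class VirtualClockServer:
    def __init__(self, rates):
        self.rates = rates                        # flow -> reserved rate (bits/s)
        self.vclock = {f: 0.0 for f in rates}
        self.queue = []                           # (stamp, seq, flow, length)
        self.seq = 0

    def enqueue(self, flow, length_bits, now):
        """Stamp the packet with the flow's virtual transmission finishing time."""
        vc = max(self.vclock[flow], now) + length_bits / self.rates[flow]
        self.vclock[flow] = vc
        heapq.heappush(self.queue, (vc, self.seq, flow, length_bits))
        self.seq += 1

    def dequeue(self):
        """Serve the packet with the smallest virtual clock stamp."""
        if self.queue:
            stamp, _, flow, length = heapq.heappop(self.queue)
            return flow, length, stamp
        return None

# Two flows with reserved rates 1 Mb/s and 3 Mb/s sharing one server.
s = VirtualClockServer({"a": 1e6, "b": 3e6})
for _ in range(3):
    s.enqueue("a", 8000, now=0.0)
    s.enqueue("b", 8000, now=0.0)
while (p := s.dequeue()):
    print(p)   # flow "b" receives roughly three times the early service
```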

Journal ArticleDOI
TL;DR: The paper argues that key distribution may require substantially different approaches in different network environments and shows that the proposed family of protocols offers a flexible palette of compatible solutions addressing many different networking scenarios.
Abstract: An essential function for achieving security in computer networks is reliable authentication of communicating parties and network components. Such authentication typically relies on exchanges of cryptographic messages between the involved parties, which in turn implies that these parties be able to acquire shared secret keys or certified public keys. Provision of authentication and key distribution functions in the primitive and resource-constrained environments of low-function networking mechanisms, portable, or wireless devices presents challenges in terms of resource usage, system management, ease of use, efficiency, and flexibility that are beyond the capabilities of previous designs such as Kerberos or X.509. This paper presents a family of light-weight authentication and key distribution protocols suitable for use in the low layers of network architectures. All the protocols are built around a common two-way authentication protocol. The paper argues that key distribution may require substantially different approaches in different network environments and shows that the proposed family of protocols offers a flexible palette of compatible solutions addressing many different networking scenarios. The mechanisms are minimal in cryptographic processing and message size, yet they are strong enough to meet the needs of secure key distribution for network entity authentication. The protocols presented have been implemented as part of comprehensive security subsystem prototype called KryptoKnight. >
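
A generic nonce-and-MAC mutual authentication exchange in the spirit of the two-way protocol described above. This is not KryptoKnight's actual message format or cryptographic functions, only an illustration of mutual authentication under a shared key with minimal cryptographic processing and small messages.

```python
import hashlib
import hmac
import os

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# A and B share a secret key (in KryptoKnight-style systems this would come
# from the key-distribution protocols; here it is simply generated for the demo).
key = os.urandom(32)

# 1) A -> B : A's identity and a fresh nonce
na = os.urandom(16)

# 2) B -> A : B's nonce and a MAC binding both nonces and B's identity
nb = os.urandom(16)
msg2 = mac(key, na, nb, b"B")

# 3) A verifies msg2, then replies with a MAC proving its own knowledge of the key
assert hmac.compare_digest(msg2, mac(key, na, nb, b"B"))
msg3 = mac(key, na, nb, b"A")

# B verifies A's reply; both sides have now authenticated each other.
assert hmac.compare_digest(msg3, mac(key, na, nb, b"A"))
print("mutual authentication completed")
```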

Journal ArticleDOI
TL;DR: This study proposes a simple, effective method for link capacity allocation and network control using on-line observation of traffic flow in the low-frequency band and explores a new direction for measurement-based traffic control in high-speed networks.
Abstract: We study link capacity allocation for a finite buffer system to transmit multimedia traffic. The queueing process is simulated with real video traffic. Two key concepts are explored in this study. First, the link capacity requirement at each node is essentially captured by its low-frequency input traffic (filtered at a properly selected cut-off frequency). Second, the low-frequency traffic stays intact as it travels through a finite-buffer system without significant loss. Hence, one may overlook the queueing process at each node for network-wide traffic flow in the low-frequency band. We propose a simple, effective method for link capacity allocation and network control using on-line observation of traffic flow in the low-frequency band. The study explores a new direction for measurement-based traffic control in high-speed networks. >
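
A sketch of the central idea using a simple moving-average low-pass filter on a measured per-interval rate series: capacity is provisioned from the filtered envelope rather than from the raw peaks. The filter, the cut-off (window), the headroom factor, and the synthetic rate samples are all illustrative assumptions, not the paper's measurement procedure.

```python
import random

def low_pass(rates, window):
    """Moving-average low-pass filter over a per-interval rate series."""
    out, acc = [], 0.0
    for i, r in enumerate(rates):
        acc += r
        if i >= window:
            acc -= rates[i - window]
        out.append(acc / min(i + 1, window))
    return out

def capacity_from_low_frequency(rates, window, headroom=1.1):
    """Provision capacity from the low-frequency envelope plus a small margin,
    rather than from the instantaneous peak rate."""
    return headroom * max(low_pass(rates, window))

# Synthetic bursty rate samples (Mb/s per 100 ms interval, invented numbers).
random.seed(0)
samples = [20 + (30 if random.random() < 0.2 else 0) + random.uniform(-5, 5)
           for _ in range(600)]
print(round(max(samples), 1), round(capacity_from_low_frequency(samples, window=50), 1))
```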

Journal ArticleDOI
TL;DR: By allowing the length of messages to be variable, a long message can be scheduled with a single control packet transmission, thereby significantly reducing the overhead of control packet transmissions and improving the overall system performance.
Abstract: The design of a medium access control scheme for a single-hop, wavelength-division-multiplexing-(WDM) multichannel local lightwave network poses two major difficulties: relatively large transmitter/receiver tuning overhead and large ratio of propagation delay to packet transmission time. Most schemes proposed so far have ignored the tuning overhead, and they can only schedule fixed-length packet transmissions. To overcome these two difficulties, the authors propose several scheduling algorithms which can reduce the negative impact of tuning overhead and schedule variable-length messages. A separate channel (control channel) is employed for transmission of control packets, and a distributed scheduling algorithm is invoked at each node every time it receives a control packet. By allowing the length of messages to be variable, a long message can be scheduled with a single control packet transmission, instead of fragmenting it into many fixed-length packets, thereby significantly reducing the overhead of control packet transmissions and improving the overall system performance. Three novel scheduling algorithms are proposed, varying in the amount of global information and processing time they need. Two approximate analytical models are formulated to study the effect of tuning time and the effect of having a limited number of data channels. Extensive simulations are conducted. Average message delays are compared for all of the algorithms. >
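
A simplified version of the kind of distributed scheduling rule described above (not any of the paper's three algorithms in detail): every node that hears a control packet runs the same deterministic computation, choosing the earliest time at which the source's transmitter (after tuning), the destination's receiver, and some data channel are all free, so all nodes derive a consistent schedule.

```python
def schedule(msg, tx_free, rx_free, ch_free, tuning_time, now):
    """Pick the earliest feasible (channel, start) for a variable-length message.

    tx_free[src], rx_free[dst]: times the transmitter / receiver become free.
    ch_free[c]: time data channel c becomes free.
    Every node applies this same rule to each control packet it hears."""
    src, dst, length = msg
    best = None
    for c, t_ch in ch_free.items():
        start = max(now, tx_free[src] + tuning_time, rx_free[dst] + tuning_time, t_ch)
        if best is None or start < best[1]:
            best = (c, start)
    c, start = best
    finish = start + length
    tx_free[src] = rx_free[dst] = ch_free[c] = finish
    return c, start, finish

# Two data channels, three nodes, all initially idle; message lengths vary.
tx = {n: 0.0 for n in "ABC"}
rx = dict(tx)
ch = {0: 0.0, 1: 0.0}
for m in [("A", "B", 5.0), ("C", "B", 2.0), ("A", "C", 1.0)]:
    print(m, schedule(m, tx, rx, ch, tuning_time=0.5, now=0.0))
```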

Journal ArticleDOI
TL;DR: It is proved that a connection composed of virtual-clock servers provides an upper bound on delay for leaky bucket constrained sessions, i.e., sessions conforming to a token bucket filter, and it is the same upper bound on delay given by PGPS.
Abstract: Proves that a connection composed of virtual-clock servers provides an upper bound on delay for leaky bucket constrained sessions, i.e., sessions conforming to a token bucket filter. This upper bound on delay is calculated, and it is the same upper bound on delay given by PGPS. The authors also prove that leaky bucket constrained sessions are the only type of sessions for which an upper bound on delay can be provided by servers with an upper bound on link capacity. >

Journal ArticleDOI
TL;DR: Techniques for, and performance measurements of, fast software implementations of the cyclic redundancy check (CRC), weighted sum codes (WSC), one's-complement checksum, Fletcher (1982) checksum, CXOR checksum, and block parity code are discussed.
Abstract: Software implementations of error detection codes are considered to be slow compared to other parts of the communication system. This is especially true for powerful error detection codes such as CRC. However, we have found that powerful error detection codes can run surprisingly fast in software. We discuss techniques for, and measure the performance of, fast software implementation of the cyclic redundancy check (CRC), weighted sum codes (WSC), one's-complement checksum, Fletcher (1982) checksum, CXOR checksum, and block parity code. Instruction count alone does not determine the fastest error detection code. Our results show the computer memory hierarchy also affects performance. Although our experiments were performed on a Sun SPARCstation LX, many of the techniques and conclusions will apply to other processors and error detection codes. Given the performance of various error detection codes, a protocol designer can choose a code with the desired speed and error detection power that is appropriate for his network and application.
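
One of the standard speed techniques measured in the paper is a byte-at-a-time, table-driven CRC. A minimal CRC-32 version using the usual reflected polynomial is sketched below; the table lookup replaces the per-bit loop that makes naive CRC implementations slow.

```python
def make_crc32_table():
    """Precompute the 256-entry lookup table for the reflected CRC-32 polynomial."""
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = make_crc32_table()

def crc32(data, crc=0):
    """Byte-at-a-time table-driven CRC-32: one table lookup and XOR per byte."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

import zlib
msg = b"123456789"
assert crc32(msg) == zlib.crc32(msg)   # 0xCBF43926 for this standard test vector
print(hex(crc32(msg)))
```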

Journal ArticleDOI
TL;DR: The authors define a wide variety of schedules and develop a general framework for analyzing their throughput performance for any number of available wavelengths, any tunability characteristics, and general (potentially nonuniform) traffic patterns.
Abstract: Considers single-hop lightwave networks with stations interconnected using wave division multiplexing. The stations are equipped with tunable transmitters and/or receivers. A predefined, wavelength-time oriented schedule specifies the slots and the wavelengths on which communication between any two pairs of stations is allowed to take place. The authors define a wide variety of schedules and develop a general framework for analyzing their throughput performance for any number of available wavelengths, any tunability characteristics, and general (potentially nonuniform) traffic patterns. They then consider the optimization of schedules given the traffic requirements and present optimization heuristics that give near-optimal results. They also investigate how the number of available wavelengths (channels) affects the system throughput, and develop techniques to efficiently share the available channels among the network stations. As a result, they obtain systems that are easy to scale while having very good performance. >

Journal ArticleDOI
TL;DR: A multi-hour, multi-traffic class network (capacity) design model for providing specified quality-of-service in such dynamically reconfigurable ATM networks based on the observation that statistical multiplexing of virtual circuits for a traffic class in a virtual path leads to decoupling of the network dimensioning problem into the bandwidth estimation problem and the combined virtual path routing and capacity design problem.
Abstract: The virtual path (VP) concept has been gaining attention in terms of effective deployment of asynchronous transfer mode (ATM) networks in recent years. In a recent paper, we outlined a framework and models for network design and management of dynamically reconfigurable ATM networks based on the virtual path concept from a network planning and management perspective. Our approach has been based on statistical multiplexing of traffic within a traffic class by using a virtual path for the class and deterministic multiplexing of different virtual paths, and on providing dynamic bandwidth and reconfigurability through the virtual path concept depending on traffic load during the course of the day. In this paper, we discuss in detail a multi-hour, multi-traffic class network (capacity) design model for providing specified quality-of-service in such dynamically reconfigurable networks. This is based on the observation that statistical multiplexing of virtual circuits for a traffic class in a virtual path, and the deterministic multiplexing of different virtual paths, lead to a decoupling of the network dimensioning problem into the bandwidth estimation problem and the combined virtual path routing and capacity design problem. We discuss how bandwidth estimation can be done, then how the design problem can be solved by a decomposition algorithm by looking at the dual problem and using subgradient optimization. We provide computational results for realistic network traffic data to show the effectiveness of our approach. We show that, for the test problems considered, our approach does between 6% and 20% better than a local shortest-path heuristic. We also show that accounting for network dynamism, i.e., the variation of traffic during the course of a day, through dynamic bandwidth and virtual path reconfiguration can save between 10% and 14% in network design costs compared to a static network based on maximum busy hour traffic.

Journal ArticleDOI
TL;DR: The algorithm developed in Choudhury et al. (1994) for computing (exact) steady-state blocking probabilities for each class in product-form loss networks is extended to cover general state-dependent arrival and service rates, making it possible to consider, for the first time, a wide variety of buffered and unbuffered resource-sharing models with non-Poisson traffic, as may arise with overflows in the context of alternative routing.
Abstract: The algorithm developed in Choudhury et al. (1994) for computing (exact) steady-state blocking probabilities for each class in product-form loss networks is extended to cover general state-dependent arrival and service rates. This generalization allows the authors to consider, for the first time, a wide variety of buffered and unbuffered resource-sharing models with non-Poisson traffic, as may arise with overflows in the context of alternative routing. As before, the authors consider noncomplete-sharing policies involving upper-limit and guaranteed-minimum bounds for the different classes, but in the present paper both bounds are discussed simultaneously. These bounds are important for providing different grades of service with protection against overloads by other classes. The algorithm is based on numerically inverting the generating function of the normalization constant, which is derived in the present paper. Major features of the algorithm are: dimension reduction by elimination of nonbinding resources and by conditional decomposition based on special structure, an effective scaling algorithm to control errors in the inversion, efficient treatment of multiple classes with identical parameters, and truncation of large sums. The authors show that the computational complexity of the inversion approach is usually significantly lower than that of the alternative recursive approach.

Journal ArticleDOI
TL;DR: The authors discuss a class of WDMA networks that are homogeneous in the sense that each node contains both an input/output port and a switch and present a lower bound on the number of wavelengths required for permutation routing as a function of the size and degree of the network.
Abstract: All-optical networks are networks for which all data paths remain optical from input to output. With the rapid development of optical technology, such networks are a viable choice for the high speed wide area networks of the future. Wavelength division multiple access (WDMA) currently provides the most mature technology for all-optical networks. The authors discuss a class of WDMA networks that are homogeneous in the sense that each node contains both an input/output port and a switch. They focus on the permutation routing problem and first present a lower bound on the number of wavelengths required for permutation routing as a function of the size and degree of the network. They use particular topologies, including the multistage perfect shuffle, the de Bruijn graph, and the hypercube, to find achievable upper bounds on the number of required wavelengths.

Journal ArticleDOI
TL;DR: GEMNET can serve as a logical (virtual), packet-switched, multihop topology which can be employed for constructing the next generation of lightwave networks using wavelength-division multiplexing (WDM).
Abstract: GEMNET is a generalization of shuffle-exchange networks and it can represent a family of network structures (including ShuffleNet and de Bruijn graph) for an arbitrary number of nodes. GEMNET employs a regular interconnection graph with highly desirable properties such as small nodal degree, simple routing, small diameter, and growth capability (viz. scalability). GEMNET can serve as a logical (virtual), packet-switched, multihop topology which can be employed for constructing the next generation of lightwave networks using wavelength-division multiplexing (WDM). Various properties of GEMNET are studied. >

Journal ArticleDOI
David McMillan
TL;DR: Results are provided which show how nonpreemptive priority, in conjunction with channel reservation and hysteresis, can provide fast and efficient handover performance for cellular mobile networks.
Abstract: A nonpreemptive priority queueing system incorporating a channel reservation policy and a hysteresis mechanism is considered for use in cellular mobile networks. The aim is to give priority to handover attempts over new call attempts in such a way as to avoid the forced termination of calls in progress without unduly affecting the performance seen by new call attempts. The system model is found to have a matrix-geometric solution with a phase-type distribution for the handover delay and a matrix-exponential distribution for the delay seen by new call attempts. It is also shown that the system can be represented by an M/G/1 queue with multiple vacations. A new result is derived for the M/G/1 queue with multiple vacations and impatient customers, and this allows the analysis to be extended so that new call arrivals are subject to a fixed timeout in queue. Results are provided which show how nonpreemptive priority, in conjunction with channel reservation and hysteresis, can provide fast and efficient handover performance for cellular mobile networks.
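
A crude discrete-time sketch of the channel-reservation idea discussed above (without the handover queue, hysteresis, or the matrix-geometric analysis): handover attempts may seize any free channel, while new calls are admitted only when more than a guard number of channels are free. All arrival and holding parameters are invented.

```python
import random

def simulate(guard, channels, p_new, p_handover, p_depart, slots=100_000, seed=1):
    """Guard-channel admission: handovers may use any free channel; new calls only
    when more than `guard` channels are free. Returns (blocked_new, dropped_handover)."""
    random.seed(seed)
    busy, blocked_new, dropped_ho = 0, 0, 0
    for _ in range(slots):
        if busy and random.random() < p_depart * busy:
            busy -= 1                              # at most one departure per slot
        if random.random() < p_handover:
            if busy < channels:
                busy += 1
            else:
                dropped_ho += 1
        if random.random() < p_new:
            if busy < channels - guard:
                busy += 1
            else:
                blocked_new += 1
    return blocked_new, dropped_ho

for g in (0, 1, 2):
    print(g, simulate(g, channels=10, p_new=0.3, p_handover=0.1, p_depart=0.04))
```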

Journal ArticleDOI
TL;DR: A test sequence generation method is proposed for testing the conformance of a protocol implementation to its specification in a remote testing system where both external synchronization and input/output operation costs are taken into consideration.
Abstract: A test sequence generation method is proposed for testing the conformance of a protocol implementation to its specification in a remote testing system where both external synchronization and input/output operation costs are taken into consideration. The method consists of a set of transformation rules that constructs a duplexU digraph from a given finite state machine (FSM) representation of a protocol specification; and an algorithm that finds a rural postman tour in the duplexU digraph to generate a synchronizable test sequence utilizing multiple UIO sequences. If the protocol satisfies a specific property, namely, the transitions to be tested and the UIO sequences to be employed form a weakly-connected subgraph of the duplexU digraph, the proposed algorithm yields a minimum-cost test sequence. X.25 DTE and ISO Class 0 transport protocols are shown to possess this property. Otherwise, the algorithm yields a test sequence whose cost is within a bound from the cost of the minimum-cost test sequence. The bound for the test sequence generated from the Q.931 network-side protocol is shown to be the cost sum of an input/output operation pair and an external synchronization operation. >

Journal ArticleDOI
TL;DR: A new hierarchical self-healing ring (HSHR) architecture for circuit-switched networks is proposed and the design of HSHR networks is considered; it is shown that the enumeration method, which finds the optimum configuration of an HSHR, can only be used for small networks due to its complexity.
Abstract: Traffic restoration in case of a failure in a circuit-switched telecommunications network involves finding alternate paths for all working paths that are severed by the failure, and rerouting affected traffic on these alternate paths. A new hierarchical self-healing ring (HSHR) architecture for circuit-switched networks is proposed and the design of HSHR networks is considered. A general cost model incorporating both the installation cost and the material cost is used. It is shown that the enumeration method, which finds the optimum configuration of HSHR, can only be used for small networks due to the complexity. Heuristic algorithms to find near-optimum HSHR configurations are presented. The routing and dimensioning of HSHR are also considered. Dimensioning of an HSHR is transformed into dimensioning of single self-healing rings inside the HSHR. Numerical results show that the performance of the heuristic is satisfactory.

Journal ArticleDOI
TL;DR: Numerical examples indicate that by appropriately selecting the partitions, the CSVP method may be used to meet different cell loss rate requirements for the different traffic types.
Abstract: Buffer allocation to provide an efficient and fair use of the available buffer spaces is critically important for ATM networks. A complete sharing with virtual partition (CSVP) strategy for buffer management at a multiplexer or an output port of an output buffered switch is proposed and analyzed. The total buffer space is partitioned based on the relative traffic loads (measured or estimated). Virtual partition allows a newly arriving cell belonging to an oversubscribed type to occupy the spare space of an undersubscribed type, and to be overwritten when necessary. Using a fluid flow approach, a set of partial differential equations with a triangular stability region is established to characterize the dynamics of a system supporting two traffic flows. Under a buffer full condition, the system behavior is described by a set of non-homogeneous ordinary differential equations. The cell loss probability for each traffic type is obtained by solving the ordinary differential equations. Numerical examples indicate that by appropriately selecting the partitions, the CSVP method may be used to meet different cell loss rate requirements for the different traffic types.
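
A discrete sketch of the CSVP admission rule for two traffic types; the paper's analysis is a fluid-flow model, so the push-out details below are one plausible reading for illustration, not the exact policy.

```python
class CSVPBuffer:
    """Complete sharing with virtual partition for two cell types (0 and 1).

    partition[i] is type i's nominal share; their sum is the physical buffer.
    An oversubscribed type may occupy the other type's spare space, but cells
    held in borrowed space can be overwritten when the lender needs the room."""
    def __init__(self, partition):
        self.partition = partition
        self.q = [0, 0]
        self.lost = [0, 0]

    def arrive(self, i):
        j = 1 - i
        if self.q[0] + self.q[1] < sum(self.partition):
            self.q[i] += 1                      # spare space somewhere: accept
        elif self.q[i] < self.partition[i] and self.q[j] > self.partition[j]:
            self.q[j] -= 1                      # overwrite a cell in borrowed space
            self.lost[j] += 1
            self.q[i] += 1
        else:
            self.lost[i] += 1                   # drop the new arrival

    def depart(self, i):
        if self.q[i] > 0:
            self.q[i] -= 1

buf = CSVPBuffer(partition=[6, 4])
for _ in range(12):
    buf.arrive(0)          # type 0 overloads and borrows type 1's spare space
print(buf.q, buf.lost)     # [10, 0] [2, 0]
buf.arrive(1)              # type 1 reclaims room by overwriting a borrowed cell
print(buf.q, buf.lost)     # [9, 1] [3, 0]
```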

Journal ArticleDOI
TL;DR: The authors show that a switch designed to meet the performance requirement for unicast calls will also satisfy multicast calls' performance and propose and analyzes a recursive modular architecture for implementing a large-scale multicast output buffered ATM switch.
Abstract: Proposes and analyzes a recursive modular architecture for implementing a large-scale multicast output buffered ATM switch (MOBAS). A multicast knockout principle, an extension of the generalized knockout principle, is applied in constructing the MOBAS in order to reduce the hardware complexity (e.g., the number of switch elements and interconnection wires) by almost one order of magnitude. In the proposed switch architecture, the four major functions of a multicast switch (cell replication, cell routing, cell contention resolution, and cell addressing) are all performed distributively so that a large switch size is achievable. The architecture of the MOBAS has a regular and uniform structure and, thus, has the advantages of: (1) easy expansion due to the modular structure, (2) high integration density for VLSI implementation, (3) relaxed synchronization for data and clock signals, and (4) building the center switch fabric (i.e., the multicast grouping network) with a single type of chip. A two-stage structure of the multicast output buffered ATM switch (MOBAS) is described. The performance of the switch fabric in terms of cell loss probability is analyzed, and the numerical results are shown. The authors show that a switch designed to meet the performance requirement for unicast calls will also satisfy the performance requirement for multicast calls. A 16×16 ATM crosspoint switch chip based on the proposed architecture has been implemented using CMOS 2-µm technology and tested to operate correctly.

Journal ArticleDOI
TL;DR: An optimal synchronous bandwidth allocation (SBA) scheme named enhanced MCA (EMCA) for guaranteeing synchronous messages with deadlines equal to periods in length is proposed, an enhancement on the previously published MCA scheme.
Abstract: This paper investigates the inherent timing properties of the timed-token medium access control (MAC) protocol necessary to guarantee synchronous message deadlines in a timed token ring network such as the fiber distributed data interface (FDDI), where the timed-token MAC protocol is employed. As a result, an exact upper bound, tighter than previously published, on the elapsed time between any number of successive token arrivals at a particular node has been derived. Based on the exact protocol timing property, an optimal synchronous bandwidth allocation (SBA) scheme named enhanced MCA (EMCA) for guaranteeing synchronous messages with deadlines equal to periods in length is proposed. This scheme is an enhancement of the previously published MCA scheme.
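
Any synchronous bandwidth allocation scheme of this kind must respect the well-known timed-token protocol constraint that the per-node synchronous allocations plus the ring latency fit within the target token rotation time. A small checker for that constraint (the EMCA allocation rule itself is not reproduced) might look as follows, with invented ring parameters.

```python
def protocol_constraint_ok(H, ttrt, ring_latency):
    """Timed-token protocol constraint: the sum of synchronous allocations H_i
    plus the token walk time must not exceed the target token rotation time."""
    return sum(H) + ring_latency <= ttrt

def utilisation(H, ttrt):
    """Fraction of the TTRT committed to synchronous traffic."""
    return sum(H) / ttrt

# Hypothetical FDDI ring: TTRT = 8 ms, ring latency 1 ms, four nodes.
H = [1.5, 2.0, 1.0, 2.5]   # synchronous bandwidth per node, in ms per TTRT
print(protocol_constraint_ok(H, ttrt=8.0, ring_latency=1.0))  # True: 7 + 1 <= 8
print(round(utilisation(H, ttrt=8.0), 2))                      # 0.88
```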