
Showing papers on "Throughput published in 1993"


Journal ArticleDOI
Abhay Parekh1, Robert G. Gallager1
TL;DR: Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments.
Abstract: Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers. The inherent flexibility of the service discipline is exploited to analyze broad classes of networks. When only a subset of the sessions are leaky bucket constrained, we give succinct per-session bounds that are independent of the behavior of the other sessions and also of the network topology. However, these bounds are only shown to hold for each session that is guaranteed a backlog clearing rate that exceeds the token arrival rate of its leaky bucket. A much broader class of networks, called consistent relative session treatment (CRST) networks is analyzed for the case in which all of the sessions are leaky bucket constrained. First, an algorithm is presented that characterizes the internal traffic in terms of average rate and burstiness, and it is shown that all CRST networks are stable. Next, a method is presented that yields bounds on session delay and backlog given this internal traffic characterization. The links of a route are treated collectively, yielding tighter bounds than those that result from adding the worst-case delays (backlogs) at each of the links in the route. The bounds on delay and backlog for each session are efficiently computed from a universal service curve, and it is shown that these bounds are achieved by "staggered" greedy regimes when an independent sessions relaxation holds. Propagation delay is also incorporated into the model. Finally, the analysis of arbitrary topology GPS networks is related to Packet GPS networks (PGPS). The PGPS scheme was first proposed by Demers, Shenker and Keshav (1991) under the name of weighted fair queueing. For small packet sizes, the behavior of the two schemes is seen to be virtually identical, and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments.
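The leaky-bucket constraint underpinning these bounds is compact enough to state in code. A minimal sketch under illustrative names (not from the paper): a session is (σ, ρ)-constrained if its arrivals over every interval [s, t] total at most σ + ρ(t − s); the paper's per-session bounds apply when the session's guaranteed backlog-clearing rate exceeds ρ.

```python
def leaky_bucket_conforming(arrivals, sigma, rho):
    """Check (sigma, rho) leaky-bucket conformance: cumulative arrivals
    over any interval [s, t] must not exceed sigma + rho * (t - s).
    `arrivals` is a list of (time, amount) pairs sorted by time."""
    for i, (s, _) in enumerate(arrivals):
        total = 0.0
        for t, amount in arrivals[i:]:
            total += amount
            if total > sigma + rho * (t - s):
                return False
    return True

# A session shaped to (sigma=2, rho=1): a burst of 2, then 1 unit per second.
ok = leaky_bucket_conforming([(0, 2), (1, 1), (2, 1)], sigma=2, rho=1)
bad = leaky_bucket_conforming([(0, 3), (1, 1)], sigma=2, rho=1)
print(ok, bad)  # True False
```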

3,967 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: The requirements for a cross-domain transfer facility are outlined, the design of the fbuf mechanism that meets these requirements are described, and the impact of fbufs on network performance is experimentally quantified.
Abstract: We have designed and implemented a new operating system facility for I/O buffer management and data transfer across protection domain boundaries on shared memory machines. This facility, called fast buffers (fbufs), combines virtual page remapping with shared virtual memory, and exploits locality in I/O traffic to achieve high throughput without compromising protection, security, or modularity. The goal is to help deliver the high bandwidth afforded by emerging high-speed networks to user-level processes, both in monolithic and microkernel-based operating systems. This paper outlines the requirements for a cross-domain transfer facility, describes the design of the fbuf mechanism that meets these requirements, and experimentally quantifies the impact of fbufs on network performance.

419 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: The authors propose a service discipline, called the rate-controlled static-priority (RCSP) queuing discipline, that provides throughput, delay, delay-jitter, and loss-free guarantees in a connection-oriented packet-switching network.
Abstract: The authors propose a service discipline, called the rate-controlled static-priority (RCSP) queuing discipline, that provides throughput, delay, delay-jitter, and loss-free guarantees in a connection-oriented packet-switching network. The RCSP queuing discipline avoids both time-framing and sorted priority queues; it achieves flexibility in the allocation of delay and bandwidth, as well as simplicity of implementation. The key idea is to separate rate-control and delay-control functions in the design of the server. Applying this separation of functions results in a class of service disciplines of which RCSP is an instance.

377 citations


Journal ArticleDOI
Israel Cidon1, Yoram Ofek1
TL;DR: The design principles of a ring network with spatial bandwidth reuse are described; the combination of a full-duplex ring, spatial reuse, a reliable fairness mechanism, and the exploitation of advances in fiber-optic technology forms the basis for the MetaRing network architecture.
Abstract: The design principles of a ring network with spatial bandwidth reuse are described. A distributed fairness mechanism for this architecture, which uses low latency hardware control signals, is presented. The basic fairness mechanism can be extended for implementing multiple priority levels and integration of asynchronous with synchronous traffic. The ring is full-duplex and has two basic modes of operation: buffer insertion mode for variable-size packets and slotted mode for fixed-size packets or cells. Concurrent access and spatial reuse allow simultaneous transmissions over disjoint segments of a bidirectional ring and can increase the effective throughput by a factor of four or more. The combination of a full-duplex ring, spatial reuse, a reliable fairness mechanism, and the exploitation of advances in fiber-optic technology are the basis for the MetaRing network architecture.

269 citations


Proceedings ArticleDOI
29 Nov 1993
TL;DR: The results identify the mechanisms affecting end-to-end performance when retransmissions are used to get better error performance on a link, and quantify the increased load on the link due to competing retransmission strategies.
Abstract: Considers the performance of transport-layer protocols over networks where one of the links is wireless. A reliable transport protocol is responsible for end-to-end data integrity, and will retransmit lost or errored data. Compared to a link over copper or fiber, a wireless link will have a much higher error rate. Link-layer retransmissions can reduce the error rate on the radio link, but the interaction of link-layer retransmission with end-to-end retransmission can be complicated. The authors have investigated, via analytic, numerical and simulation techniques, the end-to-end effects of link-layer retransmissions in the presence of a reliable end-to-end transport protocol. The results identify the mechanisms affecting end-to-end performance when retransmissions are used to get better error performance on a link. They quantify the increased load on the link due to competing retransmission strategies, and, for a transport protocol modeled on TCP, they identify the region of loss rates where link-layer retransmissions have the undesirable effects of both reducing end-to-end throughput and increasing link utilization in the network segment where bandwidth is the most expensive.

241 citations


Journal ArticleDOI
Limin Hu1
TL;DR: Simulations based on well-controlled topologies (sparse topologies) show that the pairwise code-assignment scheme requires far fewer codes than transmitter-based code assignment, while maintaining similar throughput performance.
Abstract: Code-division multi-access (CDMA) techniques allow many users to transmit simultaneously in the same band without substantial interference by using approximately orthogonal (low cross-correlation) spread-spectrum waveforms. Two-phase algorithms have been devised to assign and reassign spread-spectrum codes to transmitters, to receivers and to pairs of stations in a large dynamic packet radio network in polynomial time. The purpose of the code assignments is to spatially reuse spreading codes to reduce the possibility of packet collisions and to react dynamically to topological changes. These two-phase algorithms minimize the time complexity in the first phase and minimize the number of control packets needed to be exchanged in the second phase. Therefore, they can start the network operation in a short time, then switch to the second phase with the goal of adapting to topological changes. A pairwise code-assignment scheme is proposed to assign codes to edges. Simulations based on well-controlled topologies (sparse topologies) show that the scheme requires far fewer codes than transmitter-based code assignment, while maintaining similar throughput performance.
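The pairwise (per-edge) assignment can be illustrated with a deliberately simplified greedy sketch: give each link the lowest code not used by any link sharing an endpoint with it. The paper's two-phase algorithms and interference model are richer; this only shows why sparse topologies get by with few codes.

```python
def assign_edge_codes(edges):
    """Greedy stand-in for pairwise (per-edge) code assignment: each
    link gets the lowest code not already used by a link that shares
    one of its two stations (an edge-coloring-style heuristic)."""
    code_of = {}
    for u, v in edges:
        used = {c for (a, b), c in code_of.items() if {a, b} & {u, v}}
        code = 0
        while code in used:
            code += 1
        code_of[(u, v)] = code
    return code_of

# A 5-station sparse topology: a chain 0-1-2-3 plus a branch 1-4.
codes = assign_edge_codes([(0, 1), (1, 2), (2, 3), (1, 4)])
print(codes, max(codes.values()) + 1)  # 3 codes suffice
```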

203 citations


Patent
13 Jul 1993
TL;DR: In this article, a fast packet switch comprising one buffer directly connected between a plurality of input ports and output ports to effect rapid throughput of data packets is presented. A pointer to a location in the buffer is allocated by a buffer manager upon notification of an incoming packet at the receiving input port, and the input port delivers the packet as it is received to the location designated by the pointer.
Abstract: A fast packet switch comprising one buffer directly connected between a plurality of input ports and a plurality of output ports to effect rapid throughput of data packets. A pointer to a location in the buffer is allocated by a buffer manager upon receipt of notification of an incoming packet at the receiving input port and the input port delivers the packet as it is received to the location designated by the pointer. After the data packet is received, the input port delivers the pointer and a destination address for the packet to a router, which selects one of the plurality of output ports based on the destination address. The router queues the pointer in a queue for the selected output port. The output port then retrieves the data packet from the buffer using the pointer to determine the location, and transmits the data packet. After the transmission is complete, the output port returns the pointer to the buffer manager. This packet switch may be pipelined to receive, route, and transmit simultaneously on adjacent data packets.
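The receive/route/transmit pointer flow described above can be sketched compactly. This is an illustrative model, not the patent's implementation; the class and its sizes are assumptions.

```python
from collections import deque

class SharedBufferSwitch:
    """Sketch of the pointer-passing flow: a single shared buffer, a
    buffer manager holding free pointers, and a pointer queue per
    output port.  Names and sizes here are illustrative."""
    def __init__(self, n_slots, n_ports):
        self.buffer = [None] * n_slots
        self.free = deque(range(n_slots))          # buffer manager's pool
        self.out_q = [deque() for _ in range(n_ports)]

    def receive(self, packet, dest_port):
        ptr = self.free.popleft()                  # allocated on notification
        self.buffer[ptr] = packet                  # written as it arrives
        self.out_q[dest_port].append(ptr)          # router queues the pointer

    def transmit(self, port):
        ptr = self.out_q[port].popleft()
        packet = self.buffer[ptr]                  # fetched via the pointer
        self.free.append(ptr)                      # pointer returned after send
        return packet

sw = SharedBufferSwitch(n_slots=4, n_ports=2)
sw.receive("pkt-A", dest_port=1)
sw.receive("pkt-B", dest_port=1)
print(sw.transmit(1), sw.transmit(1))  # pkt-A pkt-B (FIFO per output port)
```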

167 citations


Journal ArticleDOI
TL;DR: A widely-applicable technique for integrating protocols that not only improves performance, but also preserves the modularity of protocol layers by automatically integrating independently expressed protocols is introduced.
Abstract: Integrating protocol data manipulations is a strategy for increasing the throughput of network protocols. The idea is to combine a series of protocol layers into a pipeline so as to access message data more efficiently. This paper introduces a widely-applicable technique for integrating protocols. This technique not only improves performance, but also preserves the modularity of protocol layers by automatically integrating independently expressed protocols. The paper also describes a prototype integration tool, and studies the performance limits and scalability of protocol integration.

159 citations


Journal ArticleDOI
TL;DR: It is shown that compared to conventional network architectures, the Lightnets offer substantial performance gains in terms of increased throughput and smaller buffering requirements.
Abstract: An inherent problem of conventional point-to-point WAN architectures is that they cannot translate optical transmission bandwidth into comparable user-available throughput, due to the limiting electronic processing speed of the switching nodes. This paper presents a solution for WDM-based WANs that addresses this limitation. The proposed Lightnet architecture trades the ample bandwidth obtained by using multiple wavelengths for a reduction in the number of processing stages and a simplification of each switching stage, leading to substantially increased throughputs. The principle of the Lightnet architecture is the construction and use of a virtual topology network in the wavelength domain, embedded in the original network. This paper studies the embedding of virtual networks whose topologies are regular, using algorithms which provide bounds on the number of wavelengths, switch sizes, and average number of switching stages per packet transmission. Algorithms for the embedding of alternative regular topologies are presented and their performance is evaluated. It is shown that, compared to conventional network architectures, the Lightnets offer substantial performance gains in terms of increased throughput and smaller buffering requirements.

154 citations


Journal ArticleDOI
TL;DR: It is shown how an MBAA can be integrated into a single-hop slotted ALOHA packet radio system, and the resulting throughput is analyzed for both finite- and infinite-user populations.
Abstract: The authors consider the use of a multiple-beam adaptive array (MBAA) in a packet radio system. In an MBAA, a given set of antenna elements is used to form several antenna patterns simultaneously. When it is used in a packet radio system, an MBAA can successfully receive two or more overlapping packets at the same time. Each beam captures a different packet by automatically pointing its pattern toward one packet while nulling other, contending packets. It is shown how an MBAA can be integrated into a single-hop slotted ALOHA packet radio system, and the resulting throughput is analyzed for both finite- and infinite-user populations.
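A back-of-the-envelope version of the throughput gain can be computed under a deliberately simplified capture model (an assumption, not the paper's analysis): with Poisson offered load G per slot, assume a slot delivers all k overlapping packets when k is at most the number of beams, and none otherwise.

```python
from math import exp, factorial

def mbaa_slotted_aloha_throughput(G, beams):
    """Expected packets received per slot under Poisson offered load G,
    assuming an MBAA captures all k arrivals when k <= beams and none
    otherwise -- a simplified model; the paper's finite/infinite-
    population analysis is richer."""
    return sum(k * exp(-G) * G ** k / factorial(k) for k in range(1, beams + 1))

# With one beam this reduces to the classical slotted-ALOHA G * e^{-G}:
print(round(mbaa_slotted_aloha_throughput(1.0, beams=1), 4))  # 0.3679
print(round(mbaa_slotted_aloha_throughput(1.0, beams=4), 4))
```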

141 citations


Journal ArticleDOI
TL;DR: Analysis of the design and performance of a workstation's network interface to the 100-Mb/s FDDI token ring reveals that providing a DMA engine for data movement provides significant improvements in throughput.
Abstract: Design issues that affect the performance of network input/output (I/O) are examined by analyzing the design and performance of a workstation's network interface to the 100-Mb/s FDDI token ring. Several design alternatives for partitioning functions between the network interface and the host software are evaluated. A simple model is proposed for looking at the performance of network I/O, and an effective analysis approach for predicting user-perceived throughput is demonstrated. The analysis reveals that, particularly for network interfaces that reside on an I/O bus, providing a DMA engine for data movement provides significant improvements in throughput. However, the designs for the receive and transmit sides are not necessarily symmetrical, and it is shown that host architecture considerations influence the design of each direction differently. The analysis is used to show the potential benefits of having all protocol functions on the network interface and also to point out the potential processing power needed on that network interface.

Journal ArticleDOI
TL;DR: It is argued that the bandwidth of the CPU/memory data path on workstations will remain within the same order of magnitude as the network bandwidth delivered to the workstation, and it is essential that the number of times network data traverses theCPU/ memory data path be minimized.
Abstract: It is argued that the bandwidth of the CPU/memory data path on workstations will remain within the same order of magnitude as the network bandwidth delivered to the workstation. This makes it essential that the number of times network data traverses the CPU/memory data path be minimized. Evidence which suggests that the cache cannot be expected to significantly reduce the number of data movements over this path is reviewed. Hardware and software techniques for avoiding the CPU/memory bottleneck are discussed. It is concluded that naively applying these techniques is not sufficient for achieving good application-to-application throughput; they must also be carefully integrated. Various techniques that can be integrated to provide a high bandwidth data path between I/O devices and application programs are outlined.

Journal ArticleDOI
TL;DR: Media access control protocols for an optically interconnected star-coupled system with preallocated wavelength-division multiple-access channels are discussed and semi-Markov analytic models are developed to investigate the performance of the two protocols.
Abstract: Media access control protocols for an optically interconnected star-coupled system with preallocated wavelength-division multiple-access channels are discussed. The photonic network is based on a passive star-coupled configuration in which high topological connectivity is achieved with low complexity and excellent fault tolerance. The channels are preallocated to the nodes with the proposed approach, and each node has a home channel it uses either for data packet transmission or data packet reception. The performance of a generalized random access protocol is compared to an approach based on interleaved time multiplexing. Semi-Markov analytic models are developed to investigate the performance of the two protocols. The analytic models are validated through extensive simulation. The performance is evaluated in terms of network throughput and packet delay with variations in the number of nodes, data channels, and packet generation rate.

Journal ArticleDOI
D. Banks1, M. Prudence1
TL;DR: The authors discuss the design of a single-copy network architecture, where data is copied directly between the application buffer and the network interface, and report some early results that demonstrate twice the throughput of a conventional network architecture and significantly lower latency.
Abstract: With current low-cost high-performance workstations, application-to-application throughput is limited more by host memory bandwidth than by the cost of protocol processing. Conventional network architectures are inefficient in their use of this memory bandwidth, because data is copied several times between the application and the network. As network speeds increase further, network architectures must be developed that reduce the demands on host memory bandwidth. The authors discuss the design of a single-copy network architecture, where data is copied directly between the application buffer and the network interface. Protocol processing is performed by the host, and transport layer buffering is provided on the network interface. They describe a prototype implementation for the HP Apollo Series 700 workstation family that consists of an FDDI network interface and a modified 4.3BSD TCP/IP protocol stack, and report some early results that demonstrate twice the throughput of a conventional network architecture and significantly lower latency.

Proceedings ArticleDOI
28 Mar 1993
TL;DR: The performance of a statistically multiplexed asynchronous transfer mode (ATM) network supporting a number of such VBR video sources is evaluated, and results confirm that ATM channel efficiencies of approximately 80-90% can be obtained at reasonable cell loss rate and delay levels.
Abstract: A variable-bit-rate (VBR) MPEG video compression encoder is introduced, and the performance of a statistically multiplexed asynchronous transfer mode (ATM) network supporting a number of such VBR video sources is evaluated. Bit-rate characteristics obtained from a detailed simulation are provided for a VBR MPEG encoder for CCIR601 video (operating in the 5-10 Mb/s regime) appropriate for medium-quality multimedia or broadcasting applications. The results presented include bit-rate traces and signal-to-noise-ratio data for typical test sequences, along with summary statistics such as the marginal distribution of frame rate. Data from a study of statistical multiplexing on an ATM network are also given. Simulation results for an ATM statistical multiplexer with N>>1 VBR MPEG sources are presented in terms of key performance measures such as cell loss rate and delay versus throughput. The results confirm that ATM channel efficiencies of approximately 80-90% can be obtained at reasonable cell loss rate and delay levels.

Journal ArticleDOI
TL;DR: An architecture for the copy network that is an integral part of multicast ATM switches that makes use of the property that the broadcast banyan network (BBN) is nonblocking if the active inputs are cyclically concentrated and the outputs are monotone is described.
Abstract: An architecture for the copy network that is an integral part of multicast ATM switches is described. The architecture makes use of the property that the broadcast banyan network (BBN) is nonblocking if the active inputs are cyclically concentrated and the outputs are monotone. In the architecture, by employing a token ring reservation scheme, the outputs of the copy network are reserved before a multicast cell is replicated. By the copy principle, the number of copies requested by a multicast call is not limited by the size of the copy network so that very large multicast switches can be configured in a modular fashion. The sequence of cells is preserved in the structure. Though physically separated, buffers within the copy network are completely shared, so that the throughput can reach 100%, and the cell delay and the cell loss probability can be made to be very small. The cell delay is estimated analytically and by computer simulation, and the results of both are found to agree with each other. The relationship between the cell loss probability under various traffic parameters and buffer sizes is studied by computer simulation.

Journal ArticleDOI
TL;DR: A media-access protocol for high-speed packet-switched multichannel networks that are based on a broadcast topology, such as optical passive star networks using wavelength-division multiple access, is described, which uses the bandwidth efficiently while keeping the processing requirements low.
Abstract: A media-access protocol for high-speed packet-switched multichannel networks that are based on a broadcast topology, such as optical passive star networks using wavelength-division multiple access, is described. The protocol supports connection-oriented traffic with or without bandwidth reservation, as well as datagram traffic to integrate transport-layer functions with the media-access layer. It uses the bandwidth efficiently while keeping the processing requirements low by requiring stations to compute their transmission and reception schedules only at the start and end of each connection. Analysis results show that low blocking probabilities for connections and high network throughput can be achieved.

Proceedings ArticleDOI
01 Oct 1993
TL;DR: In this article, a distributed queueing random access protocol (DQRAP) is proposed for a broadcast channel with three control minislots, where each station maintains two distributed queues: the data transmission queue and the collision resolution queue.
Abstract: For decades there has been a search for a multiple access protocol for a broadcast channel that would provide a performance that approached that of the ideal M/D/1 queue. This ideal performance would provide immediate access at light loads and then seamlessly move to a reservation system at high offered loads. DQRAP (distributed queueing random access protocol) provides a performance which approaches this ideal. Furthermore it is accomplished using as few as three control minislots, which suggests that, aside from establishing new theoretical bounds, DQRAP will be of great practical value. DQRAP requires that channel time be divided into slots, each of which consists of one data slot and m control minislots, and that each station maintain two common distributed queues. One queue is called the data transmission queue, or simply TQ, used to organize the order of data transmission; the other queue is the collision resolution queue, or simply RQ, which is used to resolve collisions and to prevent collisions by new arrivals. The protocol includes data transmission rules, request transmission rules and queuing discipline rules. Modelling and simulation indicate that DQRAP, using as few as 3 minislots, achieves a performance level which approaches that of a hypothetical perfect scheduling protocol, i.e., the M/D/1 system, with respect to throughput and delay. DQRAP could prove useful in packet radio, satellite, broadband cable, cellular voice, WAN, and passive optical networks.
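The ideal M/D/1 reference that DQRAP is measured against has a closed-form mean delay (Pollaczek-Khinchine with deterministic service). A small sketch of that benchmark, with slot time normalized to 1 (the protocol's own delay must come from its analysis or simulation):

```python
def md1_mean_delay(rho, slot=1.0):
    """Mean delay (queueing wait + service) in an M/D/1 queue with
    deterministic service equal to one slot: slot + rho*slot/(2*(1-rho)).
    This is the ideal reference system, not DQRAP itself."""
    assert 0 <= rho < 1, "offered load must be below 1 for stability"
    return slot + rho * slot / (2 * (1 - rho))

# Delay stays near one slot until the load gets high:
for rho in (0.2, 0.5, 0.8):
    print(rho, round(md1_mean_delay(rho), 3))
```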

Proceedings ArticleDOI
23 May 1993
TL;DR: Five different implementations of the XOR strategy and the pure selective repeat strategy are compared in terms of throughput and the analytical and simulative results show that all five XOR strategies yield remarkably better results than thepure selective repeat.
Abstract: Selective repeat automatic-repeat-request (ARQ) protocols for use in a point-to-multipoint communication over broadcast links are considered. In this context, a new approach called XOR strategy, based on the selective repeat strategy for capacity enhancement of a communication channel, is suggested. The idea of the XOR selective repeat strategy is to physically combine several blocks negatively acknowledged (NACKed) by different receivers by XORing (i.e., modulo 2 addition) to minimize the number of retransmissions and to increase throughput. Five different implementations of the XOR strategy and the pure selective repeat strategy are compared in terms of throughput. The XOR protocols differ in the applied algorithms for the XORing of NACKed data blocks. The analytical and simulative results show that, in terms of throughput, all five XOR strategies yield remarkably better results than the pure selective repeat.
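The core trick is easy to demonstrate: XOR the blocks NACKed by different receivers into one retransmission, and each receiver recovers its missing block by XORing with the blocks it already holds. A minimal sketch (illustrative data, not one of the paper's five implementations):

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length data blocks -- the core of the XOR
    strategy: one combined retransmission serves several receivers."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Sender: blocks b1 and b2 were NACKed by different receivers.
b1, b2 = b"hello", b"world"
retx = xor_blocks([b1, b2])            # single combined retransmission

# Receiver A holds b2 and recovers its missing b1; receiver B the reverse.
print(xor_blocks([retx, b2]), xor_blocks([retx, b1]))  # b'hello' b'world'
```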

Journal ArticleDOI
TL;DR: The authors describe several methods for analyzing the queueing behavior of switching networks with flow control and shared buffer switches, and the best of the methods accurately predicts throughput for multistage networks constructed from large switches.
Abstract: The authors describe several methods for analyzing the queueing behavior of switching networks with flow control and shared buffer switches. They compare the various methods on the basis of accuracy and computation speed, where the performance metric of most concern is the maximum throughput. The best of the methods accurately predicts throughput for multistage networks constructed from large switches (8 or more ports).

Journal ArticleDOI
TL;DR: It is found that LRR is an effective way for dealing with mobile jamming in a frequency-hop packet radio network and significant increases in throughput and end-to-end probability of success are obtained with LRR.
Abstract: Adaptive, decentralized routing for frequency-hop packet radio networks with mobile partial-band jamming is investigated. A routing technique called least-resistance routing (LRR) is developed, and various versions of this routing method are examined. LRR uses a quantitative assessment of the interference environment experienced by a radio's receiver to determine a resistance value for that radio. Two components of the interference environment are considered: transmissions from other radios and partial-band jamming. The resistances for each of the radios in a particular path are combined to form the path resistance, and packets are forwarded on the path with the smallest resistance. Comparisons are made between different versions of LRR and between LRR and previously developed adaptive routing techniques. It is found that LRR is an effective way of dealing with mobile jamming in a frequency-hop packet radio network. Significant increases in throughput and end-to-end probability of success are obtained with LRR.
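The path-selection step reduces to a shortest-path computation over node resistances. A sketch of that core idea with plain Dijkstra (the graph, resistance values, and combining rule here are illustrative; the paper's resistance metrics and LRR variants are richer):

```python
import heapq

def least_resistance_path(adj, resistance, src, dst):
    """Forward on the path whose summed node resistances are smallest.
    `adj` maps each radio to its neighbors; `resistance` maps each
    radio to its interference-derived resistance value."""
    dist = {src: resistance[src]}
    prev = {}
    heap = [(dist[src], src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + resistance[v]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Radio B sits in a jammed region, so its resistance is high.
adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
res = {"A": 1, "B": 5, "C": 1, "D": 1}
print(least_resistance_path(adj, res, "A", "D"))  # (['A', 'C', 'D'], 3)
```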

Journal ArticleDOI
TL;DR: It is shown that this local fairness algorithm can exploit the throughput advantage offered by spatial bandwidth reuse better than a global fairness algorithm and achieves the optimal throughput result predicted by the known Max-Min fairness definition.
Abstract: The authors present an algorithm to provide local fairness for ring and bus networks with spatial bandwidth reuse. Spatial bandwidth reuse can significantly increase the effective throughput delivered by the network. The proposed algorithm can be applied to any dual ring or bus architecture such as MetaRing. In the dual bus configuration, when transporting ATM cells, the local fairness algorithm can be implemented using two generic flow control (GFC) bits in the ATM cell header. In the performance evaluation it is shown that this local fairness algorithm can exploit the throughput advantage offered by spatial bandwidth reuse better than a global fairness algorithm. This is accomplished because it ensures fair use of network resources among nodes that are competing for the same subset of links, while permitting free access to noncongested parts of the network. The performance advantage of the local fairness scheme is demonstrated by simulating the system under various traffic scenarios and comparing the results to that of the MetaRing SAT-based global fairness algorithm. It is also shown that under certain traffic patterns, the performance of this algorithm achieves the optimal throughput result predicted by the known Max-Min fairness definition.

Journal ArticleDOI
TL;DR: The authors consider client-server systems in which a set of workstations access a file server over a local area network, based on either a token ring or a CSMA/CD network, modelled by a class of stochastic Petri nets.
Abstract: A client-server system is a distributed system where a server station receives requests from its client stations, processes the requests and returns replies to the requesting stations. The authors consider client-server systems in which a set of workstations access a file server over a local area network. The systems are modelled by a class of stochastic Petri nets. The mean response time, the throughput and the parametric sensitivities are evaluated for a client-server system based on a token ring network and a system based on a CSMA/CD network. These models are different from the prevalent performance models of token ring or CSMA/CD network systems because of the message interdependencies introduced by the client-server structure. An approximate analytic-numeric method rather than simulation is used to solve the models. The solution method and the accuracy of approximation are also discussed.

Patent
21 Jan 1993
TL;DR: In this paper, a two-wire, multi-channel communication system capable of handling the high throughput necessary for effective communication between a central controller aboard a tow vessel and the many sensors deployed along the streamer is presented.
Abstract: For use with marine seismic streamers (22), a two-wire, multi-channel communication system capable of handling the high throughput necessary for effective communication between a central controller (38) aboard a tow vessel (20) and the many sensors (24,26,28) deployed along the streamer (22). The central controller (38) includes an intelligent modem with the capability of transmitting and receiving frequency-modulated message signals on one or more signal lines, such as conventional twisted-pair wires, over a number of individual inbound and outbound frequency channels. In the preferred embodiment, seventeen channels are spread over a frequency band ranging from about 20kHz to 100kHz, thereby making available for communication a bandwidth much wider than available in conventional single-channel streamer communication. In this way, many positioning sensors, such as compasses, depth sensors (24), cable-leveling birds (26), and acoustic-ranging transceivers (28), attached to the streamer (22) and each having a transmitter and receiver tuned to one of the modem's inbound and outbound channels, respectively, can be put in communication with the modem. To take advantage of its high throughput capability, the intelligent modem refers to a stored table of individual sensor parameters, such as sensor type, transmit channel, and receive channel, to schedule an efficient scan of the sensors. As a diagnostic tool, the communication system also monitors the quality and performance of the communication link by measuring and recording such parameters as the transmitted and received signal strengths, signal-to-noise ratios, and number of incorrectly received messages.

01 Jan 1993
TL;DR: This presentation focuses on the class of Conflict Resolution Algorithms, which exhibits very good performance characteristics for ‘‘bursty’’ computer communications traffic, including high capacity, low delay under light traffic conditions, and inherent stability.
Abstract: Multiple Access protocols are distributed algorithms that enable a set of geographically dispersed stations to communicate using a single, common, broadcast channel. We concentrate on the class of Conflict Resolution Algorithms. This class exhibits very good performance characteristics for "bursty" computer communications traffic, including high capacity, low delay under light traffic conditions, and inherent stability. One algorithm in this class achieves the highest capacity among all known multiple-access protocols for the infinite population Poisson model. Indeed, this capacity is not far from a theoretical upper bound. After surveying the most important and influential Conflict Resolution Algorithms, the emphasis in our presentation is shifted to methods for their analysis and results of their performance evaluation. We also discuss some extensions of the basic protocols and performance results for non-standard environments, such as Local Area Networks, satellite channels, channels with errors, etc., providing a comprehensive bibliography.

1. Conflict Resolution Based Random Access Protocols

The ALOHA protocols were a breakthrough in the area of multiple access communications. [1] They delivered, more or less, what they advertised, i.e., low delay for bursty, computer-generated traffic. They suffer, however, from stability problems and low capacity. [2] The next major breakthrough in the area of multiple access communications was the development of random access protocols that resolve conflicts algorithmically. The invention of Conflict Resolution Algorithms (CRAs) is usually attributed to Capetanakis [Capet78, Capet79, Capet79b], and, independently, to Tsybakov and Mikhailov [Tsyba78]. The same idea, but in a slightly different context, was also presented, earlier, by Hayes [Hayes78]. Later, it was recognized [Berge84, Wolf85] that the underlying idea had been known for a long time in the context of Group Testing [Dorfm43, Sobel59, Ungar60].

Group Testing was developed during World War II to speed up the processing of syphilis blood tests. Since the administered test had high sensitivity, it was suggested [Dorfm43] that many blood samples could be pooled together. The result of the test would then be positive if, and only if, there was at least one diseased sample in the pool, in which case individual tests were administered to isolate the diseased samples. Later, it was suggested that, after the first diseased sample was isolated, the remaining samples could again be pooled for further testing. The beginning of a general theory of Group Testing can be found in [Sobel59], where, as pointed out in [Wolf85], a tree search algorithm is suggested, similar to the ones we present in section 1.2. The first application of Group Testing to communications arose when Hayes proposed a new, and more efficient, polling algorithm that he named probing [Hayes78]. Standard polling schemes are unacceptable for large sets of bursty stations because the overhead is proportional to the number of stations in the system, and independent of the amount of traffic. Hayes' main idea was to shorten the polling cycle by having the central controller query subsets of the total population to discover whether these subsets contain stations with waiting packets. If the response is negative, the whole subset is "eliminated" in a single query. If the response is positive, the group is split into two subgroups and the process is continued, recursively, until a single active station is polled. This station is then allowed to transmit some data, which does not have to be in the form of constant-size packets. Clearly, this is a reservation protocol. In subsequent papers Hayes has also considered direct transmission systems. Notice that the controller receives feedback of the form something/nothing (at least one station, or no station with waiting packets).

1.1. Basic Assumptions

The protocols that will be presented in this section have been developed, and analyzed, on the basis of a set of common assumptions [3] that describe a standard environment that is usually called an ALOHA-type channel.

1. Synchronous (slotted) operation: The common-receiver model of a broadcast channel is usually implicitly assumed. Furthermore, messages are split into packets of fixed size. All transmitters are (and remain) synchronized, and may initiate transmissions only at predetermined times, one packet transmission time apart. The time between two successive allowable packet transmission times is called a slot and is usually taken as the time unit. Thus, if more than one packet is transmitted during the same slot, they are "seen" by the receiver simultaneously, and therefore, overlap completely.

2. Errorless channel: If a given slot contains a single packet transmission, then the packet will be received correctly (by the common receiver).

[1] For an introduction to the area of multiple access communications see the books by Bertsekas and Gallager [Berts92, chapter 4] and [Rom90]. Actually, chapter 4 of [Berts92] and chapter 5 of [Rom90] also present good expositions of Conflict Resolution Algorithms.
[2] If no special control is exercised to stabilize the protocols, the term capacity must be taken in the "broader" sense of maximum throughput maintained during considerable periods of time, since the true capacity is zero [Fergu75, Fayol77, Aldou87]. However, having to stabilize the protocols detracts from their initial appeal, which was mainly due to their simplicity.
[3] Some of the protocols can operate with some of the assumptions weakened. When this is the case we point it out during their presentation. In section 6 we discuss protocols and analysis techniques that weaken or modify some of these assumptions.
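The recursive splitting that both probing and the Capetanakis CRA rely on is easy to sketch. Below is a minimal Monte Carlo estimate, illustrative rather than taken from the survey, of how many slots the basic fair-coin tree algorithm needs to resolve a collision among n ready stations (the function name and parameters are invented for the example):

```python
import random

def resolve(n):
    """Slots used by the basic binary-splitting (tree) algorithm to
    resolve a conflict among n simultaneously ready stations."""
    if n <= 1:
        return 1                      # an idle or a success costs one slot
    # collision: each station joins the left subset with probability 1/2,
    # and the two subsets are resolved recursively
    left = sum(random.random() < 0.5 for _ in range(n))
    return 1 + resolve(left) + resolve(n - left)

random.seed(1)
trials, n = 5000, 10
avg = sum(resolve(n) for _ in range(trials)) / trials
# roughly 2.9 slots per resolved packet for the basic algorithm
print(f"average slots to resolve {n} colliders: {avg:.2f}")
```

The same recursion models Hayes' probing when "slot" is read as "query": a negative answer eliminates a whole subset, a positive one triggers the split.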

Proceedings ArticleDOI
01 Sep 1993
TL;DR: A synchronous bandwidth allocation (SBA) scheme is developed that calculates the synchronous bandwidth necessary for each application to satisfy its message-delivery delay requirement, and is complementary to the SBA protocol in the FDDI station management standard SMT 7.2.
Abstract: It is well known that an FDDI token ring network provides a guaranteed throughput for synchronous messages and a bounded medium access delay for each node/station. However, this fact alone cannot effectively support many real-time applications that require the timely delivery of each critical message. The reason for this is that the FDDI guarantees a medium access delay bound to nodes, but not to messages themselves. The message-delivery delays may exceed the medium-access delay bound even if a node transmits synchronous messages at a rate not greater than the guaranteed throughput. We solve this problem by developing a synchronous bandwidth allocation (SBA) scheme which calculates the synchronous bandwidth necessary for each application to satisfy its message-delivery delay requirement. The result obtained in this paper is essential for effective use of FDDI token ring networks in supporting such real-time communication as digital video/audio transmission and distributed control/monitoring.
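The abstract's point, that per-node bandwidth guarantees do not by themselves yield per-message deadlines, can be illustrated with a toy allocation rule. The sketch below uses a simplified rule from the timed-token literature (allocate c/(floor(d/TTRT) - 1), valid only when the deadline d exceeds 2*TTRT) together with the timed-token protocol constraint on total allocation; the stream numbers are hypothetical and this is not the paper's exact SBA scheme:

```python
def feasible(allocs, ttrt, overhead):
    """Timed-token protocol constraint: total synchronous bandwidth
    per target token rotation time (TTRT), plus ring overhead, must
    not exceed the TTRT."""
    return sum(allocs) + overhead <= ttrt

def sba(streams, ttrt):
    """Per-stream synchronous allocation: a message needing c time
    units of transmission before its deadline d sees at least
    floor(d / ttrt) - 1 full token visits, so spreading c over those
    visits meets the deadline (requires d > 2 * ttrt)."""
    return [c / (d // ttrt - 1) for c, d in streams]

streams = [(1.0, 20.0), (2.0, 40.0), (0.5, 10.0)]   # (c, d) in ms, hypothetical
H = sba(streams, ttrt=5.0)
print(H, feasible(H, ttrt=5.0, overhead=1.0))
```

If `feasible` fails, the deadline set simply cannot be supported at that TTRT, which is the kind of schedulability question the paper's scheme answers precisely.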

Journal ArticleDOI
Ilias Iliadis1, W.E. Denzel1
TL;DR: A modest amount of output queuing, in conjunction with appropriate switch speedup, provides significant delay and throughput improvements over pure input queuing.
Abstract: A single-stage nonblocking N*N packet switch with both output and input queuing is considered. The limited queuing at the output ports resolves output port contention partially. Overflow at the output queues is prevented by a backpressure mechanism and additional queuing at the input ports. The impact of the backpressure effect on the switch performance is studied for arbitrary output buffer sizes and in the limit as N goes to infinity. Two different switch models are considered: an asynchronous model with Poisson arrivals and a synchronous model with Bernoulli arrivals. The investigation is based on the average delay and the maximum throughput of the switch. Closed-form expressions for these performance measures are derived for operation with fixed-size packets. The results demonstrate that a modest amount of output queuing, in conjunction with appropriate switch speedup, provides significant delay and throughput improvements over pure input queuing. The maximum throughput is the same for the synchronous and the asynchronous switch model, although the delay is different.
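For context on why pure input queuing needs help, the classical head-of-line (HOL) blocking effect is easy to reproduce: saturate every input and let each output serve one contending HOL packet per slot. The sketch below illustrates that well-known input-queuing baseline, not the paper's backpressure model:

```python
import random

def saturation_throughput(n, slots, seed=0):
    """Saturated n x n switch with FIFO input queues and no output
    queuing: every input always has a packet, destinations uniform,
    and each output serves one contending HOL packet per slot.
    HOL blocking caps throughput near 2 - sqrt(2) ~ 0.586 for large n."""
    rng = random.Random(seed)
    hol = [rng.randrange(n) for _ in range(n)]   # HOL destination per input
    delivered = 0
    for _ in range(slots):
        contenders = {}
        for i, d in enumerate(hol):
            contenders.setdefault(d, []).append(i)
        for d, inputs in contenders.items():
            winner = rng.choice(inputs)          # output picks one input at random
            hol[winner] = rng.randrange(n)       # next packet moves to the head
            delivered += 1                       # losers keep their HOL packet
    return delivered / (slots * n)

print(f"saturation throughput: {saturation_throughput(16, 20000):.3f}")
```

The paper's result is that even limited output buffers plus speedup, coordinated by backpressure, lift the switch well above this input-queuing ceiling.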

Proceedings ArticleDOI
23 May 1993
TL;DR: The proposed scheme is shown to provide equitable access to channel resources for both types of users, yielding improvements in overall system performance while significantly increasing data throughput compared to a system without data packet reservation.
Abstract: The authors propose an integrated packet reservation multiple access (IPRMA) protocol for transmitting both speech and data information. While speech users are allowed to contend for reservation slots on a frame-by-frame basis, data users may reserve multiple slots across a frame to increase throughput. The protocol includes a priority mechanism which ensures that speech users have greater access to idle slots since speech packets have a more demanding delay constraint. The proposed scheme is shown to provide equitable access to channel resources for both types of users, yielding improvements in overall system performance while significantly increasing data throughput compared to a system without data packet reservation.
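A crude sketch of the slot-assignment side of such a scheme, ignoring contention and reservation signalling entirely (the names and the two-slot cap on data users are invented for illustration, not taken from the protocol):

```python
def assign_slots(frame_size, reserved, speech_req, data_req):
    """One frame of an IPRMA-like allocator (simplified): idle slots
    are offered to speech users first (one slot each, since speech
    has the tighter delay constraint); remaining idle slots go to
    data users, who may take several per frame."""
    idle = [s for s in range(frame_size) if s not in reserved]
    grants = {}
    for u in speech_req:                 # speech priority: first claim
        if not idle:
            break
        grants[u] = [idle.pop(0)]
    for u in data_req:                   # data users fill what is left
        if not idle:
            break
        take = min(len(idle), 2)         # hypothetical per-frame cap
        grants[u] = [idle.pop(0) for _ in range(take)]
    return grants

print(assign_slots(8, reserved={0, 3}, speech_req=['s1', 's2'], data_req=['d1']))
```

The priority ordering, speech before data, is the mechanism the abstract credits for keeping speech delay bounded while still letting data users multi-slot for throughput.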

Journal ArticleDOI
TL;DR: The performance of the SNR protocol is studied when it is implemented for end-to-end flow and error control and the efficiency with which this protocol uses the network bandwidth and its achievable throughput is evaluated as a function of certain network and protocol parameters.
Abstract: The performance of the SNR protocol of A. N. Netravali et al. (1990) is studied when it is implemented for end-to-end flow and error control. Using a combination of analysis and simulation, the efficiency with which this protocol uses the network bandwidth and its achievable throughput are evaluated as a function of certain network and protocol parameters. The protocol is enhanced by introducing two windows to decouple the two functions of receiver flow control and network congestion control. This enhancement and the original protocol are compared with the go-back-N (GBN) and one-at-a-time selective-repeat (OSR) retransmission procedures, and both are shown to have significantly higher throughput for a wide range of network conditions. As an example, for a virtual circuit with 60-ms roundtrip delay and a 10^-8 bit error rate, in order to deliver 500 Mb/s of throughput both GBN and OSR require a raw transmission bandwidth of approximately 800 Mb/s, whereas SNR with two windows needs only slightly more than 500 Mb/s of raw bandwidth. Periodic exchange of state can also provide a variety of measures for congestion control in a timely and accurate fashion.
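The bandwidth penalty of go-back-N in the quoted example can be approximated with a back-of-the-envelope model: each packet error forces retransmission of roughly a round-trip window of packets. The sketch below is illustrative only; the packet size and the exact efficiency formula are assumptions, not taken from the paper's analysis:

```python
def gbn_efficiency(ber, pkt_bits, rtt_s, rate_bps):
    """Approximate go-back-N link efficiency: a packet error (prob. p)
    wastes about one round-trip window w of transmissions."""
    p = 1 - (1 - ber) ** pkt_bits          # per-packet error probability
    w = rtt_s * rate_bps / pkt_bits        # packets in flight per round trip
    return (1 - p) / (1 - p + p * w)

# hypothetical numbers in the spirit of the abstract's example
ber, pkt, rtt, rate = 1e-8, 8000, 60e-3, 500e6
eta = gbn_efficiency(ber, pkt, rtt, rate)
print(f"GBN efficiency: {eta:.3f}, "
      f"raw bandwidth for 500 Mb/s goal: {rate / eta / 1e6:.0f} Mb/s")
```

Even this rough model shows why, at high bandwidth-delay products, GBN needs substantially more raw bandwidth than the goodput it delivers, which is the gap the SNR protocol's periodic state exchange closes.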

Proceedings ArticleDOI
29 Nov 1993
TL;DR: A surprising degradation of throughput is shown in unslotted deflection routing networks, compared to slotted networks, and situations where severe congestion occurs are revealed.
Abstract: When the nodes of a communication network have identical input- and output-link capacities, it is possible to use as few as one packet buffer per link if one is willing to deflect (misroute) a subset of simultaneously arriving fixed-length packets from preferred to alternate output links. This scheme, known as deflection routing, can achieve very fast packet switching in regular networks and has been proposed as the basic routing and switching protocol of several all-optical networks. The performance models of deflection-routing networks that have appeared in the literature have assumed that time is slotted and that packets arrive at nodes on time-slot boundaries. In practice, however, slotted operation is difficult to implement in all-optical networks. The present authors evaluate by simulation the performance of deflection routing in unslotted networks. The evaluations show a surprising degradation of throughput in unslotted deflection-routing networks compared to slotted networks, and reveal situations where severe congestion occurs. To overcome these limitations they propose specific control mechanisms for unslotted networks that eliminate congestion and substantially improve network throughput.
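The deflect-on-conflict rule itself is easy to simulate in the slotted setting that the earlier literature assumed (the unslotted behavior the authors study is not captured here). Below is a minimal sketch on an n x n unidirectional torus with east and south links, all modelling choices invented for illustration:

```python
import random

def deflection_throughput(n, slots, seed=0):
    """Slotted deflection routing on an n x n unidirectional torus:
    each node has one east ('E') and one south ('S') output link and
    one packet buffer per link. Under saturation each node forwards
    two packets per slot; if both prefer the same output link, one
    of them is deflected onto the other link."""
    rng = random.Random(seed)
    arriving = {(x, y): [] for x in range(n) for y in range(n)}
    delivered = 0
    for _ in range(slots):
        nxt = {(x, y): [] for x in range(n) for y in range(n)}
        for (x, y), pkts in arriving.items():
            live = [dest for dest in pkts if dest != (x, y)]
            delivered += len(pkts) - len(live)   # packets at destination leave
            while len(live) < 2:                 # saturation: refill free slots
                dest = (rng.randrange(n), rng.randrange(n))
                if dest != (x, y):
                    live.append(dest)
            rng.shuffle(live)                    # random priority between the two
            outs = {'E': None, 'S': None}
            for dest in live:
                dx = (dest[0] - x) % n           # eastward hops still needed
                pref = 'E' if dx > 0 else 'S'
                alt = 'S' if pref == 'E' else 'E'
                link = pref if outs[pref] is None else alt   # deflect if taken
                outs[link] = dest
            for link, dest in outs.items():
                nb = ((x + 1) % n, y) if link == 'E' else (x, (y + 1) % n)
                nxt[nb].append(dest)
        arriving = nxt
    return delivered / (slots * n * n)           # deliveries per node per slot

print(f"per-node throughput: {deflection_throughput(8, 2000):.3f}")
```

Because every deflection on a unidirectional torus costs nearly a full lap in the wrong dimension, even this slotted sketch makes it plausible that the extra conflicts of unslotted operation, which the paper measures, degrade throughput sharply.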