Showing papers on "Throughput" published in 1991


Journal ArticleDOI
TL;DR: A method to analyze the flow of data in a network consisting of the interconnection of network elements is presented and it is shown how regulator elements connected in series can be used to enforce general burstiness constraints.
Abstract: For pt.I see ibid., vol.37, no.1, p.114-31 (1991). A method to analyze the flow of data in a network consisting of the interconnection of network elements is presented. Assuming the data that enters the network satisfies burstiness constraints, burstiness constraints are derived for traffic flowing between network elements. These derived constraints imply bounds on network delay and buffering requirements. By example, it is shown that the use of regulator elements within the network can reduce maximum network delay. It is also found that such a use of regulator elements can enlarge the throughput region where finite bounds for delay are found. Finally, it is shown how regulator elements connected in series can be used to enforce general burstiness constraints.
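
The burstiness constraints in this line of work bound the traffic on every interval by an envelope of the form sigma + rho x (interval length). As a rough, illustrative sketch only (the slot-based timing, variable names, and the particular regulator are assumptions, not the paper's construction), the following checks such a (sigma, rho) constraint on an arrival trace and shows a leaky-bucket style regulator that delays traffic just enough to enforce it:

```python
# Minimal sketch of a (sigma, rho) burstiness constraint and a regulator
# that enforces it by delaying traffic; an illustration only, not the
# paper's own construction.

def satisfies_constraint(arrivals, sigma, rho):
    """arrivals[t] = amount of data arriving in slot t.
    Check that every window of L slots carries at most sigma + rho * L."""
    n = len(arrivals)
    for s in range(n):
        total = 0.0
        for t in range(s, n):
            total += arrivals[t]
            if total > sigma + rho * (t - s + 1):
                return False
    return True

def regulate(arrivals, sigma, rho):
    """Release data no faster than the (sigma, rho) envelope allows,
    buffering the excess (a leaky-bucket style regulator)."""
    bucket = sigma          # available sending credits
    backlog = 0.0
    out = []
    for a in arrivals:
        bucket = min(sigma, bucket + rho)   # credits refill at rate rho
        backlog += a
        sent = min(backlog, bucket)
        bucket -= sent
        backlog -= sent
        out.append(sent)
    return out

bursty = [5, 0, 0, 4, 4, 0, 0, 0]
print(satisfies_constraint(bursty, sigma=3, rho=1))            # False
print(satisfies_constraint(regulate(bursty, 3, 1), 3, 1))      # True
```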

1,007 citations


Journal ArticleDOI
01 Apr 1991
TL;DR: This paper defines the notion of traffic phase in a packet-switched network, describes how phase differences between competing traffic streams can be the dominant factor in relative throughput, and suggests that simply coding a gateway to drop a random packet from its queue on overflow is often sufficient.
Abstract: Much of the traffic in existing packet networks is highly periodic, either because of periodic sources (e.g., real time speech or video, rate control) or because window flow control protocols have a periodic cycle equal to the connection roundtrip time (e.g., a network-bandwidth limited TCP bulk data transfer). Control theory suggests that this periodicity can resonate (i.e., have a strong, non-linear interaction) with deterministic estimation or control algorithms in network gateways. In this paper we define the notion of traffic phase in a packet-switched network and describe how phase differences between competing traffic streams can be the dominant factor in relative throughput. Drop Tail gateways in a TCP/IP network with strongly periodic traffic can result in systematic discrimination against some connections. We demonstrate this behavior with both simulations and theoretical analysis. This discrimination can be eliminated with the addition of appropriate randomization to the network. In particular, analysis suggests that simply coding a gateway to drop a random packet from its queue (rather than the tail) on overflow is often sufficient. We do not claim that Random Drop gateways solve all of the problems of Drop Tail gateways. Biases against bursty traffic and long roundtrip time connections are shared by both Drop Tail and Random Drop gateways. Correcting the bursty traffic bias has led us to investigate a different kind of randomized gateway algorithm that operates on the traffic stream, rather than on the queue. Preliminary results show that the Random Early Detection gateway, a newly developed gateway congestion avoidance algorithm, corrects this bias against bursty traffic. The roundtrip time bias (at least in TCP/IP networks) results from the TCP window increase algorithm, not from the gateway dropping policy, and we briefly discuss changes to the window increase algorithm that could eliminate this bias.
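
As a toy illustration of the two gateway drop policies contrasted in the abstract, the sketch below compares Drop Tail (discard the arriving packet when the queue is full) with Random Drop (evict a randomly chosen queued packet on overflow) for two phase-locked sources; the queue limit, cycle structure, and source names are assumptions made purely for the demonstration.

```python
import random

QUEUE_LIMIT = 8   # assumed buffer size (packets)

def drop_tail(queue, packet, losses):
    """Drop Tail: discard the arriving packet when the queue is full."""
    if len(queue) >= QUEUE_LIMIT:
        losses[packet] += 1
    else:
        queue.append(packet)

def random_drop(queue, packet, losses):
    """Random Drop: on overflow, evict a uniformly random queued packet."""
    if len(queue) >= QUEUE_LIMIT:
        victim = queue.pop(random.randrange(len(queue)))
        losses[victim] += 1
    queue.append(packet)

def run(policy, cycles=2000, service_per_cycle=8):
    """Two phase-locked sources, A's burst always arriving just before B's."""
    random.seed(0)
    queue, losses = [], {"A": 0, "B": 0}
    for _ in range(cycles):
        for src in ["A"] * 5 + ["B"] * 5:    # 10 arrivals but only 8 served per cycle
            policy(queue, src, losses)
        del queue[:service_per_cycle]        # gateway transmits 8 packets per cycle
    return losses

print("Drop Tail  :", run(drop_tail))    # losses fall entirely on the late-arriving B
print("Random Drop:", run(random_drop))  # losses are shared between A and B
```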

365 citations


Journal ArticleDOI
E.L. Hahne
TL;DR: The results suggest that the transmission capacity not used by the small window session will be approximately fairly divided among the large window sessions; as the window size increases, the session throughput rates of round-robin scheduling with windows are shown to approach limits that are perfectly fair in the max-min sense.
Abstract: The author studies a simple strategy, proposed independently by E.L. Hahne and R.G. Gallager (1986) and M.G.H. Katevenis (1987), for fairly allocating link capacity in a point-to-point packet network with virtual circuit routing. Each link offers its packet transmission slots to its user sessions by polling them in round-robin order. In addition, window flow control is used to prevent excessive packet queues at the network nodes. As the window size increases, the session throughput rates are shown to approach limits that are perfectly fair in the max-min sense. If each session has periodic input (perhaps with jitter) or has such heavy demand that packets are always waiting to enter the network, then a finite window size suffices to produce perfectly fair throughput rates. The results suggest that the transmission capacity not used by the small window session will be approximately fairly divided among the large window sessions. The focus is on the worst-case performance of round-robin scheduling with windows.
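
A minimal sketch of the mechanism studied here, assuming a single link, a fixed acknowledgement delay, and heavy demand at every session (all assumptions for illustration, not the paper's model): the link polls sessions in round-robin order, and a session may use a slot only while its number of unacknowledged packets is below its window.

```python
from collections import deque

def round_robin_link(windows, rtt=10, slots=100000):
    """windows: dict session -> window size. Every session is assumed to
    always have packets to send (heavy demand). Returns throughput shares."""
    in_flight = {s: deque() for s in windows}   # send times of unacked packets
    sent = {s: 0 for s in windows}
    order = list(windows)
    nxt = 0
    for t in range(slots):
        # acknowledgements return a fixed rtt after transmission
        for s in order:
            while in_flight[s] and t - in_flight[s][0] >= rtt:
                in_flight[s].popleft()
        # offer the slot to sessions in round-robin order
        for i in range(len(order)):
            s = order[(nxt + i) % len(order)]
            if len(in_flight[s]) < windows[s]:
                in_flight[s].append(t)
                sent[s] += 1
                nxt = (nxt + i + 1) % len(order)
                break
    return {s: sent[s] / slots for s in windows}

# One session with a tiny window and two with large windows: the small-window
# session is throttled by its own window, and the leftover link capacity is
# split roughly evenly between the other two.
print(round_robin_link({"small": 1, "big1": 20, "big2": 20}))
```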

337 citations


Book
02 Jan 1991
TL;DR: In this article, an analysis of the performance of a packet switch based on a single-buffered Banyan network is presented, and the results of this model are combined with models of the buffer controller (finite and infinite buffers).
Abstract: Banyan networks are being proposed for interconnecting memory and processor modules in multiprocessor systems as well as for packet switching in communication networks. This paper describes an analysis of the performance of a packet switch based on a single-buffered Banyan network. A model of a single-buffered Banyan network provides results on the throughput, delay, and internal blocking. Results of this model are combined with models of the buffer controller (finite and infinite buffers). It is shown that for balanced loads, the switching delay is low for loads below maximum throughput (about 45 percent per input link) and the blocking at the input buffer controller is low for reasonable buffer sizes.
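
For intuition about why per-input throughput settles in this range, the sketch below evaluates the classical recurrence for an *unbuffered* banyan built from 2x2 crossbars under uniform random traffic; this is a simpler companion result, not the single-buffered model analyzed in the paper, and the network sizes listed are arbitrary.

```python
# Companion illustration, not the paper's buffered model: the classic
# recurrence for the throughput of an unbuffered banyan made of 2x2
# crossbars under uniform random traffic. p is the probability that a
# stage output carries a packet in a slot; each stage of internal
# blocking erodes the per-input throughput further.

def unbuffered_banyan_throughput(stages, offered_load=1.0):
    p = offered_load
    for _ in range(stages):
        # a 2x2 crossbar output is busy unless both inputs miss it
        p = 1.0 - (1.0 - p / 2.0) ** 2
    return p

for n in (2, 4, 6, 8, 10):     # networks with 4, 16, 64, 256, 1024 ports
    print(f"{n:2d} stages: throughput per input = {unbuffered_banyan_throughput(n):.3f}")
```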

255 citations


Patent
02 Aug 1991
TL;DR: In this article, a buffer reservation and congestion control scheme for multi-cast ATM networks is proposed. It includes a buffer reservation mechanism comprising a state machine associated with each virtual circuit set up through the network; the state machine monitors the number of available buffer slots at a data link and reads an encoding scheme for the cells of a data burst in order to control its switching between active and idle states, thereby controlling the flow of data through the virtual circuit.
Abstract: A bandwidth management and congestion control scheme for a multi-cast ATM network which includes a buffer reservation mechanism comprised of a state machine for association with each virtual circuit set up through the network, the state machine being adapted to monitor the number of available buffer slots at a data link and reading an encoding scheme for cells comprising a burst of data in order to control its switching from an active to an idle state to thereby control the flow of data through the virtual circuit. A state dependent token pool mechanism is associated with each virtual circuit and generates tokens at varying rates which are "used" by transmitted data in order to monitor and control the average data rate passing through a data link over a virtual circuit. By thus monitoring and controlling the peak data rate and average data rate, the bandwidth for each data link is efficiently managed to maximize data throughput and minimize loss of data cells from data bursts. A novel means for determining the availability of capacity on a data link for establishing a virtual circuit is also disclosed which depends on a probability calculation expressed in terms of the average data rate and peak data rate through a network link. This information is available through the buffer reservation mechanism and the token pool mechanism to facilitate the fast calculation required to establish a virtual circuit "on the fly". Various implementation details are also provided.
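
A rough sketch of the token-pool idea described above; the state-dependent rates, pool size, and burst pattern below are assumptions for illustration rather than values taken from the patent.

```python
# Sketch of a state-dependent token pool: tokens accumulate at a rate that
# depends on whether the virtual circuit is active or idle, and each
# transmitted cell consumes one, so sustained throughput is capped at the
# token rate while short bursts up to the pool size pass unhindered.

class TokenPool:
    def __init__(self, rate_active, rate_idle, pool_size):
        self.rates = {"active": rate_active, "idle": rate_idle}
        self.pool_size = pool_size
        self.tokens = pool_size
        self.state = "idle"

    def tick(self):
        """Called once per cell slot: replenish tokens at the current state's rate."""
        self.tokens = min(self.pool_size, self.tokens + self.rates[self.state])

    def admit(self):
        """Try to send one cell while a burst is in progress."""
        self.state = "active"
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True          # cell conforms to the negotiated average rate
        return False             # cell exceeds the rate (mark, delay, or drop)

vc = TokenPool(rate_active=0.5, rate_idle=0.1, pool_size=8)
sent = dropped = 0
for slot in range(100):
    vc.tick()
    if slot % 10 < 6:            # a 6-cell burst every 10 slots (assumed pattern)
        if vc.admit():
            sent += 1
        else:
            dropped += 1
    else:
        vc.state = "idle"        # between bursts the pool refills at the idle rate
print("sent", sent, "dropped", dropped)
```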

245 citations


Journal ArticleDOI
TL;DR: A congestion management strategy for integrated services packet networks that is robust with regard to transmission speed and network size is proposed and is further developed to incorporate several frame sizes into the strategy, thereby providing flexibility in meeting throughput and delay requirements of different applications.
Abstract: A congestion management strategy for integrated services packet networks that is robust with regard to transmission speed and network size is proposed. The strategy supports several classes of services with zero loss and different delay bounds as well as services without stringent loss and delay guarantees. Loss-free and bounded-delay transmission is accomplished by means of an admission policy which ensures smoothness of the traffic at the network edge, and by a service discipline called stop-and-go queueing, which maintains the traffic smoothness throughout the network. Both the admission policy and stop-and-go queueing are based on a time framing concept described elsewhere by the author (IEEE Trans. Commun., vol.39, Dec.1991). This concept is further developed to incorporate several frame sizes into the strategy, thereby providing flexibility in meeting throughput and delay requirements of different applications. Stop-and-go queueing is realizable with a minor modification to a first-in first-out (FIFO) queueing structure.
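
A minimal sketch of the stop-and-go rule, assuming a single frame size, slotted time, and unit link capacity (the frame length and data structures below are illustrative assumptions): packets arriving during frame f become eligible for transmission only when frame f+1 begins, so traffic that enters the network smoothly stays that smooth at every hop.

```python
from collections import deque

T = 4   # frame length in slots (assumed)

def stop_and_go(arrivals, capacity_per_slot=1):
    """arrivals[t] = list of packets arriving in slot t. Returns, per slot,
    the packets transmitted under stop-and-go queueing (FIFO otherwise)."""
    eligible = deque()      # packets whose arrival frame has ended
    holding = deque()       # packets still waiting for their frame boundary
    out = []
    for t, pkts in enumerate(arrivals):
        if t % T == 0:                       # new frame: release the held packets
            eligible.extend(holding)
            holding.clear()
        holding.extend(pkts)
        sent = [eligible.popleft()
                for _ in range(min(capacity_per_slot, len(eligible)))]
        out.append(sent)
    return out

# A burst arriving within one frame is spread over the following frame.
arrivals = [["p1", "p2", "p3"], [], [], [], [], [], [], []]
for slot, sent in enumerate(stop_and_go(arrivals)):
    print(slot, sent)
```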

221 citations


Proceedings ArticleDOI
07 Apr 1991
TL;DR: Simulations based on well-controlled topologies show that the proposed pairwise code-assignment scheme requires far fewer codes than transmitter-based code assignment, while maintaining throughput performance.
Abstract: Two-phase algorithms are devised to assign and reassign spread-spectrum codes to transmitters, to receivers, and to pairs of stations in a large dynamic packet radio network (PRN) using code-division multiple access (CDMA). The algorithms minimize the time complexity in the first phase and minimize the number of control packets that need to be exchanged in the second phase. A new pairwise code-assignment scheme is proposed to assign codes to edges. Simulations based on well-controlled topologies (sparse topologies) show that the proposed scheme requires far fewer codes than transmitter-based code assignment, while maintaining throughput performance.

138 citations


Proceedings ArticleDOI
13 Sep 1991
TL;DR: Three ways are demonstrated to extend bandwidth balancing to multi-priority traffic on a distributed-queue dual-bus network.
Abstract: Bandwidth balancing is a procedure that gradually achieves a fair allocation of bandwidth among simultaneous file transfers on a distributed-queue dual-bus (DQDB) network. Bandwidth balancing was originally designed for traffic of a single priority level. Three ways are demonstrated to extend this procedure to multi-priority traffic.

136 citations


Journal ArticleDOI
TL;DR: This thesis examines the possibility of performing adaptive routing as an approach to further improving upon the performance and reliability of message-passing concurrent computers by exploiting the inherent path redundancy found in richly connected networks in order to perform fault-tolerant routing.
Abstract: Message-passing concurrent computers, also known as multicomputers, such as the Caltech Cosmic Cube [47] and its commercial descendants, consist of many computing nodes that interact with each other by sending and receiving messages over communication channels between the nodes. The communication networks of the second-generation machines, such as the Symult Series 2010 and the Intel iPSC2 [2], employ an oblivious wormhole-routing technique that guarantees deadlock freedom. The network performance of this highly evolved oblivious technique has reached a limit of being capable of delivering, under random traffic, a stable maximum sustained throughput of approximately 45 to 50% of the limit set by the network bisection bandwidth, while maintaining acceptable network latency. This thesis examines the possibility of performing adaptive routing as an approach to further improving upon the performance and reliability of these networks. In an adaptive multipath routing scheme, message trajectories are no longer deterministic, but are continuously perturbed by local message loading. Message packets will tend to follow their shortest-distance routes to destinations in normal traffic loading, but can be detoured to longer but less-loaded routes as local congestion occurs. A simple adaptive cut-through packet-switching framework is described, and a number of fundamental issues concerning the theoretical feasibility of the adaptive approach are studied. Freedom from communication deadlock is achieved by following a coherent channel protocol and by applying voluntary misrouting as needed. Packet deliveries are assured by resolving channel-access conflicts according to a priority assignment. Fairness of network access is assured either by sending round-trip packets or by having each node follow a local injection-synchronization protocol. The performance behavior of the proposed adaptive cut-through framework is studied with stochastic modeling and analysis, as well as through extensive simulation experiments for the 2D and 3D rectilinear networks. Theoretical bounds on various average network-performance metrics are derived for these rectilinear networks. These bounds provide a standard frame of reference for interpreting the performance results. In addition to the potential gain in network performance, the adaptive approach offers the potential for exploiting the inherent path redundancy found in richly connected networks in order to perform fault-tolerant routing. Two convexity-related notions are introduced to characterize the conditions under which our adaptive routing formulation is adequate to provide fault-tolerant routing, with minimal change in routing hardware. The effectiveness of these notions is studied through extensive simulations. The 2D octagonal-mesh network is suggested; it displays excellent fault-tolerant potential under the adaptive routing framework. Both performance and reliability behaviors of the octagonal mesh are studied in detail.

113 citations


Journal ArticleDOI
Jia Chen, T.E. Stern
TL;DR: The model is extended to cover a nonhomogeneous system, where traffic intensity at each input varies and the destination distribution is not uniform; it is seen that input imbalance has a more adverse effect on throughput than output imbalance.
Abstract: A general model is presented to study the performance of a family of space-domain packet switches, implementing both input and output queuing and varying degrees of speedup. Based on this model, the impact of the speedup factor on the switch performance is analyzed. In particular, the maximum switch throughput and the average system delay for any given degree of speedup are obtained. The results demonstrate that the switch can achieve 99% throughput with a modest speedup factor of four. Packet blocking probability for systems with finite buffers can also be derived from this model, and the impact of buffer allocation on blocking probability is investigated. Given a fixed buffer budget, this analysis obtains an optimal placement of buffers among input and output ports to minimize the blocking probability. The model is also extended to cover a nonhomogeneous system, where traffic intensity at each input varies and destination distribution is not uniform. Using this model, the effect of traffic imbalance on the maximum switch throughput is studied. It is seen that input imbalance has a more adverse effect on throughput than output imbalance.
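
A rough slot-level simulation of the speedup idea (assuming uniform Bernoulli traffic at full offered load, FIFO input queues, and a fixed input polling order; a stand-in for, not a reproduction of, the paper's analytical model), showing how increasing the speedup factor lifts throughput from the head-of-line-limited value toward the full line rate.

```python
import random
from collections import deque

def switch_throughput(n, speedup, slots=4000, seed=1):
    rng = random.Random(seed)
    inputs = [deque() for _ in range(n)]
    outputs = [deque() for _ in range(n)]
    delivered = 0
    for _ in range(slots):
        for q in inputs:                      # one new packet per input per slot,
            q.append(rng.randrange(n))        # destined to a uniformly chosen output
        for _ in range(speedup):              # the fabric runs `speedup` times per slot
            claimed = set()
            for q in inputs:
                if q and q[0] not in claimed: # HOL packet moves if its output is free
                    claimed.add(q[0])
                    outputs[q.popleft()].append(1)
        for out in outputs:                   # each output line transmits one packet per slot
            if out:
                out.popleft()
                delivered += 1
    return delivered / (slots * n)

for s in (1, 2, 4):
    print(f"speedup {s}: throughput = {switch_throughput(16, s):.2f}")
```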

96 citations


Journal ArticleDOI
P.E. Green
TL;DR: Applications, the era of single unrepeated links, the characteristics of fiber paths in networks, forms of addressing, overall network throughput capacity, technologies, protocol layers, and making the communication layers invisible are discussed.
Abstract: The author discusses research activities in the area of third-generation (all-optical) fiber-optic networks and where they are heading. Applications, the era of single unrepeated links, the characteristics of fiber paths in networks, forms of addressing, overall network throughput capacity, technologies, protocol layers, and making the communication layers invisible are discussed.

Journal ArticleDOI
TL;DR: The authors derive the optimal locations for erasure nodes and show analytically, for uniform traffic, that only a few erasure nodes are needed to achieve throughput close to twice the nominal network bandwidth.
Abstract: In dual unidirectional bus networks, packets usually occupy fixed-length slots from the sending station to the end of the network. An erasure node is a specialized station which recognizes packets that have passed their destination stations and releases the slots for subsequent use. The authors derive the optimal locations for erasure nodes and show analytically, for uniform traffic, that only a few erasure nodes are needed to achieve throughput close to twice the nominal network bandwidth. The results are tested by simulation of the DQDB (distributed queue dual bus) protocol, which demonstrates a realistic improvement of 40% with only three erasure nodes. Fair access among the stations is improved as well. The authors generalize the analytic results by providing an algorithm for determining the optimal erasure node locations and the throughput improvement, given any arbitrary traffic pattern. The application of this methodology to the related problem of bridged subnetworks is briefly discussed.

Proceedings ArticleDOI
02 Dec 1991
TL;DR: A conflict-free protocol for packet-switched wavelength division multiaccess networks is proposed: with the use of a control channel, each station in the network can obtain packet backlog information of all the other stations, and so packet transmission can be scheduled to avoid destination conflicts.
Abstract: A conflict-free protocol for packet-switched wavelength division multiaccess networks is proposed. With the use of a control channel, each station in the network can obtain packet backlog information of all the other stations, and so packet transmission can be scheduled to avoid destination conflicts. A very fast scheduling algorithm is proposed. Simulation results show that a maximum throughput of 1 can be achieved, as compared to a maximum of 0.63 for protocols without transmission scheduling. This high throughput performance is obtained because the transmission, reception and processing of backlog information and the transmission and reception of data packets are all done simultaneously in a pipeline operation and all destination conflicts are avoided in every slot through scheduling. Under low traffic conditions, the packet delay is calculated to be only one slot (due to scheduling) larger than that of protocols without transmission scheduling.
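
An illustrative greedy scheduler in the spirit of the protocol described above (the longest-queue-first heuristic and the backlog-matrix representation are assumptions; the paper's own algorithm may differ): given the global backlog that the control channel makes visible to every station, pick for each data slot a set of transmissions with no source and no destination repeated, so every destination conflict is avoided.

```python
def schedule_slot(backlog):
    """backlog[s][d] = queued packets at station s for destination d.
    Returns a list of (s, d) transmissions for the next data slot."""
    n = len(backlog)
    used_src, used_dst, chosen = set(), set(), []
    # longest-queue-first is an assumed heuristic
    pairs = sorted(((backlog[s][d], s, d) for s in range(n) for d in range(n)
                    if backlog[s][d] > 0), reverse=True)
    for qlen, s, d in pairs:
        if s not in used_src and d not in used_dst:
            chosen.append((s, d))
            used_src.add(s)
            used_dst.add(d)
    return chosen

backlog = [[0, 3, 1],
           [2, 0, 2],
           [1, 1, 0]]
slot = schedule_slot(backlog)
print(slot)                    # [(0, 1), (1, 2), (2, 0)] -- at most one packet per destination
for s, d in slot:
    backlog[s][d] -= 1
```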

Proceedings ArticleDOI
02 Dec 1991
TL;DR: A buffer management policy, called drop on demand, is proposed which yields a greater switch throughput and lower packet loss probability than previously proposed policies for all input traffic rates.
Abstract: An imbalanced traffic model is presented, and the performance of completely shared buffering and output queuing under imbalanced traffic is studied. It is found that shared buffering does not perform as well as output queuing under this traffic condition. A buffer management policy, called drop on demand, is proposed which yields a greater switch throughput and lower packet loss probability than previously proposed policies for all input traffic rates. The optimal buffer management policy is studied for a class of dynamic allocation schemes with packet purging action. It is found that there exists an optimal stationary policy which can be obtained by solving a linear programming problem.
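
The abstract does not spell out the drop-on-demand rule, so the sketch below adopts one common reading as an assumption: a packet is purged only when the shared buffer is completely full, and the victim comes from the longest output queue, which shields lightly loaded outputs from an overloaded one.

```python
from collections import deque

class SharedBufferSwitch:
    """Completely shared buffer with an assumed drop-on-demand style policy."""

    def __init__(self, n_ports, buffer_size):
        self.queues = [deque() for _ in range(n_ports)]
        self.buffer_size = buffer_size
        self.lost = 0

    def occupancy(self):
        return sum(len(q) for q in self.queues)

    def arrive(self, out_port, packet):
        if self.occupancy() >= self.buffer_size:
            # purge from the longest queue only when the buffer is actually full
            longest = max(self.queues, key=len)
            longest.pop()
            self.lost += 1
        self.queues[out_port].append(packet)

    def serve(self):
        """Each output transmits one packet per slot if it has one."""
        return [q.popleft() if q else None for q in self.queues]

sw = SharedBufferSwitch(n_ports=4, buffer_size=16)
for t in range(1000):
    sw.arrive(0, f"hot-{t}a")     # output 0 is overloaded: 2 arrivals per slot,
    sw.arrive(0, f"hot-{t}b")     # but it can transmit only 1 packet per slot
    if t % 2 == 0:
        sw.arrive(1, f"cool-{t}") # output 1 is lightly loaded
    sw.serve()
print("packets lost:", sw.lost)   # losses fall on the overloaded output's queue
```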

Journal ArticleDOI
TL;DR: Diversity combining and majority-logic decoding are combined to create a simple but powerful hybrid automatic repeat request (ARQ) error control scheme, an ideal choice for high-data-rate error control over both stationary and nonstationary channels.
Abstract: Diversity combining and majority-logic decoding are combined to create a simple but powerful hybrid automatic repeat request (ARQ) error control scheme. Forward-error-correcting (FEC) majority-logic decoders are modified for use in type-I hybrid-ARQ protocols through the identification of reliability information within the decoding process. Diversity combining is then added to reduce the number of retransmissions and their consequent impact on throughput performance. Packet combining has the added benefit of adapting the effective code rate to channel conditions. Excellent reliability performance coupled with a simple high-speed implementation makes the majority-logic system an ideal choice for high-data-rate error control over both stationary and nonstationary channels.
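
A toy illustration of the packet-combining ingredient (the real scheme operates on majority-logic decodable block codes inside an ARQ protocol; the packet length, error rate, and bit-level majority vote below are simplifying assumptions): erroneous copies are retained and combined bit by bit, so the residual error count falls as retransmissions accumulate rather than being thrown away.

```python
import random

def transmit(bits, bit_error_rate, rng):
    """Send one copy of the packet over a binary symmetric channel."""
    return [b ^ (1 if rng.random() < bit_error_rate else 0) for b in bits]

def majority_combine(copies):
    """Bitwise majority vote over all received copies (ties resolved to 0)."""
    return [1 if sum(col) * 2 > len(copies) else 0 for col in zip(*copies)]

rng = random.Random(7)
packet = [rng.randint(0, 1) for _ in range(128)]
ber = 0.03

copies = []                 # a plain ARQ receiver would discard these bad copies
for attempt in range(1, 6):
    copies.append(transmit(packet, ber, rng))
    combined = majority_combine(copies)
    errors = sum(a != b for a, b in zip(combined, packet))
    print(f"copies combined: {attempt}, residual bit errors: {errors}")
```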

Journal ArticleDOI
TL;DR: The authors pose resource allocation problems, present a sensitivity analysis, and provide a glimpse of the possible behavior of communication networks that integrate multiple services using multiple resources.
Abstract: Communication networks that integrate multiple services using multiple resources are considered. In particular, the authors pose resource allocation problems, present a sensitivity analysis, and provide a glimpse of the possible behavior of such networks. The simplest discipline is assumed: a service request is accepted if the necessary resources are available; otherwise it is rejected. Two results are obtained. The first gives the sensitivity of throughput of service requests of type i with respect to offered traffic and service rates of type j. The second result is that the set of vectors of achievable throughput rates is a convex polyhedron given by an explicit set of linear inequalities.

Journal ArticleDOI
TL;DR: Error detection combined with automatic repeat request retransmission is used to provide reliable digital data transmission over a communication channel; results indicate that the throughput of go-back-N is only slightly inferior to that of selective-repeat, mainly due to the burstiness of the channel bit errors.
Abstract: Error detection combined with automatic repeat request retransmission is used to provide reliable digital data transmission over a communication channel. The throughput for a system using go-back-N or selective-repeat protocols with Rayleigh fading in both directions of transmission is approximated by using fade- and interfade-duration statistics of a multipath channel. Results indicate that for a slow-fading channel (e.g. fading rate=1.34 Hz), the throughput of go-back-N is only slightly inferior to that of selective-repeat, mainly due to the burstiness of the channel bit errors.
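
For reference, the textbook throughput expressions for the two protocols under independent packet errors are sketched below; they are a baseline only, since the paper's point is precisely that fade- and interfade-duration statistics (bursty errors) change the comparison.

```python
# Baseline formulas for independent packet errors (not the paper's fading
# model): selective-repeat retransmits only the erroneous packet, while
# go-back-N retransmits the whole window of N outstanding packets, so for
# packet error rate P
#   selective-repeat efficiency: 1 - P
#   go-back-N efficiency:        (1 - P) / (1 - P + N * P)
# Under bursty (fading) errors, many of go-back-N's wasted retransmissions
# would have been lost anyway, which is why the gap narrows in the paper.

def selective_repeat_throughput(p):
    return 1.0 - p

def go_back_n_throughput(p, n):
    return (1.0 - p) / (1.0 - p + n * p)

for p in (0.001, 0.01, 0.05, 0.1):
    print(f"P={p:<6} SR={selective_repeat_throughput(p):.3f} "
          f"GBN(N=10)={go_back_n_throughput(p, 10):.3f}")
```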

Journal ArticleDOI
01 Jul 1991
TL;DR: The general aspects of timing and synchronization in digital communications networks are reviewed and the dominant type of architecture for network synchronization is based on the master-slave hierarchy principle.
Abstract: The general aspects of timing and synchronization in digital communications networks are reviewed. The properties of the links and nodes carrying and switching time-multiplexed digital signals determine the throughput and performance of a network. The continuity and integrity conditions for the digital information flow, where timing faults cause errors and loss of information, determine the requirements on accurate clock signal distribution. The dominant type of architecture for network synchronization is based on the master-slave hierarchy principle.

Journal ArticleDOI
Jun Chen, Roch Guerin
TL;DR: An approximate analysis that is based on independence assumptions and uses an equivalent queueing system to estimate the service capability seen by each input is presented, and an expression for the input queue length distribution is obtained.
Abstract: An N*N nonblocking packet switch with input queues and two priority classes that can be used to support traffic with different requirements is described. The switch operation is slotted and, at each time slot, fixed-size packets arrive at the inputs with distinct Bernoulli distributions for both the high- and low-priority classes. Two policies are discussed. In the first policy, packets of both priority classes are queued when waiting for service. In the second policy, only low-priority packets are queued, and high-priority packets not delivered at the first attempt are dropped from the system. Under both policies, high-priority packets prevail over low-priority packets at the inputs as well as the outputs. An approximate analysis that is based on independence assumptions and uses an equivalent queueing system to estimate the service capability seen by each input is presented. Using this approach, an expression for the input queue length distribution is obtained. The maximum system throughput is derived and shown to exceed that of a single priority switch. Numerical results are compared to simulations and are found to agree.

Journal ArticleDOI
TL;DR: A reliability analysis shows a quantitative measurement of the improvement in fault tolerance as compared with previously presented fault-tolerant networks and a performance analysis and simulation results show that the proposed network has a high level of maximum throughput.
Abstract: The author proposes a self-routing fault-tolerant switching network for asynchronous transfer mode (ATM) switching systems. The network has many subswitches to enhance the fault tolerance of the conventional multistage interconnection network, which only has a unique path. The subswitches provide large numbers of alternative paths between switching stages and allow the network to tolerate multiple faults. The routing algorithm is quite simple. The paths can also be used to route cells when internal cell contention occurs in switching elements. A reliability analysis shows a quantitative measurement of the improvement in fault tolerance as compared with previously presented fault-tolerant networks. A performance analysis and simulation results show that the proposed network has a high level of maximum throughput. In addition, that level of throughput is maintained with reasonable cell delay even though the number of faulty components increases in the network.

Journal ArticleDOI
TL;DR: The performance of type-II hybrid automatic repeat request (ARQ) is compared to that of fixed-rate type-I hybrid ARQ for meteor-burst communications, and it is shown that the throughput is larger for type-II hybrid ARQ than for either fixed-rate type-I hybrid ARQ or ARQ without forward error correction.
Abstract: The performance of type-II hybrid automatic repeat request (ARQ) is compared to that of fixed-rate type-I hybrid ARQ for meteor-burst communications. Maximum throughput is obtained for meteor-burst communications by using a transmission scheme for which the information rate of the code varies in response to the fluctuations in the power received from a meteor trail. For type-II hybrid ARQ, a variation in the code rate is inherent in the coding scheme. On the first transmission that is made for a data block, a code of relatively high rate is used, but if an additional transmission is required, only redundant symbols are sent, and this reduces the overall rate of the code. The performance measure is the throughput per trail, which is defined as the expected number of successfully received information bits for a given meteor trail. The authors also develop an approximation for the average value of the throughput, averaged over the parameters of the meteor trail. Numerical results for Reed-Solomon codes are included to illustrate the relative performance of the various schemes. It is shown that the throughput is larger for type-II hybrid ARQ than for either fixed-rate type-I hybrid ARQ or ARQ without forward-error-correction.

Dissertation
01 Feb 1991
TL;DR: A graphical method for displaying the packet trace is presented which greatly reduces the tediousness of examining a packet trace and the performance of two different implementations of TCP sending data across a particular network path is compared.
Abstract: Examination of a trace of packets collected from the network is often the only method available for diagnosing protocol performance problems in computer networks. This thesis explores the use of packet traces to diagnose performance problems of the transport protocol TCP. Unfortunately, manual examination of these traces can be so tedious that effective analysis is not possible. The primary contribution of this thesis is a graphical method for displaying the packet trace which greatly reduces the tediousness of examining a packet trace. The graphical method is demonstrated by the examination of some packet traces of typical TCP connections. The performance of two different implementations of TCP sending data across a particular network path is compared. Traces many thousands of packets long are used to demonstrate how effectively the graphical method simplifies examination of long complicated traces. In the comparison of the two TCP implementations, the burstiness of the TCP transmitter appeared to be related to the achieved throughput. A method of quantifying this burstiness is presented and its possible relevance to understanding the performance of TCP is discussed.
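
A minimal stand-in for the kind of time-sequence graph the thesis advocates (this is not the thesis's own tool, and the trace format of (time, kind, seq, length, ack) tuples is an assumption): plot each data segment as a vertical bar over its sequence-number range and step the acknowledgement number over time, which makes bursts, stalls, and retransmissions easy to spot.

```python
import matplotlib.pyplot as plt

# (time in s, 'data' or 'ack', sequence number, payload length, ack number);
# a real trace would come from a tool such as tcpdump, parsed into this form
trace = [
    (0.00, "data", 1000, 512, 0),
    (0.01, "data", 1512, 512, 0),
    (0.12, "ack",  0,    0,   1512),
    (0.13, "data", 2024, 512, 0),
    (0.14, "data", 2536, 512, 0),
    (0.25, "ack",  0,    0,   2536),
]

data = [(t, seq, seq + ln) for t, kind, seq, ln, _ in trace if kind == "data"]
acks = [(t, ack) for t, kind, _, _, ack in trace if kind == "ack"]

fig, ax = plt.subplots()
# each data segment is drawn as a vertical bar covering its sequence range
for t, lo, hi in data:
    ax.plot([t, t], [lo, hi], color="black")
ax.step([t for t, _ in acks], [a for _, a in acks], where="post",
        color="green", label="ack number")
ax.set_xlabel("time (s)")
ax.set_ylabel("sequence number")
ax.legend()
plt.show()
```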

Journal ArticleDOI
TL;DR: A comprehensive analysis of a 1-persistent carrier-sense multi-access (CSMA) system using a radio channel with imperfect carrier sensing is presented and discussed and it is shown that a careful optimization of the channel state detector parameters must be performed in order to get good system performance.
Abstract: A comprehensive analysis of a 1-persistent carrier-sense multi-access (CSMA) system using a radio channel with imperfect carrier sensing is presented and discussed. It is shown that a careful optimization of the channel state detector parameters must be performed in order to get good system performance. If the threshold of the detector is too high, the system will tend to behave like an unslotted ALOHA; if the threshold is too low, the system throughput will be zero. It is also shown that for larger values of the average packet rate G, the system throughput is decreased. This is because, for the same probability of correct channel sensing, the probability of incorrect transmissions from the waiting mode is increased.

Proceedings ArticleDOI
01 Apr 1991
TL;DR: A comparison with wormhole routing for various generalized hypercubes and tori shows that scheduled routing is effective in providing a constant throughput when wormhole routing does not and enables pipelining at higher input arrival rates.
Abstract: This paper investigates communication in distributed memory multiprocessors to support task-level parallelism for real-time applications. It is shown that wormhole routing, used in second-generation multicomputers, does not support task-level pipelining because its oblivious contention resolution leads to output inconsistency in which a constant throughput is not guaranteed. We propose scheduled routing, which guarantees constant throughputs by integrating task specifications with flow control. In this routing technique, communication processors provide explicit flow control by independently executing switching schedules computed at compile time. It is deadlock-free, contention-free, does not load the intermediate node memory, and makes use of the multiple equivalent paths between non-adjacent nodes. The resource allocation and scheduling problems resulting from such routing are formulated and related implementation issues are analyzed. A comparison with wormhole routing for various generalized hypercubes and tori shows that scheduled routing is effective in providing a constant throughput when wormhole routing does not and enables pipelining at higher input arrival rates.

Proceedings ArticleDOI
20 May 1991
TL;DR: The slotted ring protocol is preferred over the token ring and token bus protocols for hard real-time systems because of its low maximum message delay and more predictable message delay.
Abstract: Simulation experiments show that the token ring protocol gave a lower average message delay at low transfer rates, but the token bus protocol gave a better overall performance for applications where only average delay is of interest. On the other hand, in hard real-time systems, the criterion of importance is not the average message delay, but the maximum message delay and the ability to meet deadlines. Slotted ring in this case is a much better protocol than the others because of its low maximum message delay and more predictable message delay. Because of this, and because the average performance of the slotted ring remains good as the size or the transfer rate of the network increases, the slotted ring protocol is preferred over the token ring and token bus protocols for hard real-time systems.

Journal ArticleDOI
TL;DR: In this article, a simple and efficient system utilizing the class of Hamming codes in a cascaded manner is proposed to provide high throughput over a wide range of channel bit error probability.
Abstract: A simple and efficient system utilizing the class of Hamming codes in a cascaded manner is proposed to provide high throughput over a wide range of channel bit error probability. Comparisons with other adaptive schemes indicate that the proposed system is superior from the point of view of throughput, while still providing the same order of reliability as an ARQ (automatic repeat request) system. The main feature of this system is that the receiver uses the same decoder for decoding the received information after each transmission while the error-correcting capability of the code increases. As a result, the system complexity is kept to a minimum and the system performance is improved.

Journal ArticleDOI
TL;DR: A switch architecture consisting of parallel planes of low-speed internally blocking switch networks, in conjunction with input and output buffering, is considered; it is desirable from the viewpoint of modularity and hardware cost, especially for large switches.
Abstract: The telecommunications networks of the future are likely to be packet switched networks consisting of wide bandwidth optical fiber transmission media, and large, highly parallel, self-routing switches. Recent considerations of switch architectures have focused on internally nonblocking networks with packet buffering at the switch outputs. These have optimal throughput and delay performance. The author considers a switch architecture consisting of parallel planes of low-speed internally blocking switch networks, in conjunction with input and output buffering. This architecture is desirable from the viewpoint of modularity and hardware cost, especially for large switches. Although this architecture is suboptimal, the throughput shortfall may be overcome by adding extra switch planes. A form of input queuing called bypass queuing can improve the throughput of the switch and thereby reduce the number of switch planes required. An input port controller is described which distributes packets to all switch planes according to the bypass policy, while preserving packet order for virtual circuits. Some simulation results for switch throughput are presented.
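
A toy slot-level comparison of plain FIFO input queueing against the bypass policy mentioned above, assuming uniform traffic, a single switch plane, and an unbounded bypass window (all simplifications relative to the paper's multi-plane architecture): under FIFO an input stays idle whenever its head-of-line packet is blocked, while bypass may send the first queued packet whose output is still free.

```python
import random
from collections import deque

def simulate(n_ports, slots, bypass, load=0.95, seed=0):
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_ports)]
    delivered = 0
    for _ in range(slots):
        for q in queues:                        # Bernoulli arrivals, uniform outputs
            if rng.random() < load:
                q.append(rng.randrange(n_ports))
        taken = set()                           # outputs already claimed this slot
        for q in queues:
            depth = len(q) if bypass else min(len(q), 1)
            for i in range(depth):              # plain FIFO looks only at the head
                if q[i] not in taken:
                    taken.add(q[i])
                    del q[i]
                    delivered += 1
                    break
    return delivered / (slots * n_ports)

print("FIFO   throughput:", round(simulate(16, 3000, bypass=False), 2))  # head-of-line limited, roughly 0.6
print("bypass throughput:", round(simulate(16, 3000, bypass=True), 2))   # close to the 0.95 offered load
```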

Journal ArticleDOI
TL;DR: Simulation is used to estimate the performance of media access control protocols derived from carrier-sense multiple access with collision detection and operating in local area networks comprising several parallel broadcast channels, and results indicate that the multichannel option provides reductions in both the packet delay average and variance.
Abstract: Simulation is used to estimate the performance of media access control (MAC) protocols derived from carrier-sense multiple access with collision detection (CSMA/CD), and operating in local area networks comprising several parallel broadcast channels. The influence of possible protocol and system alternatives on the network performance is discussed, based on results of the packet delay average, variance, mean square, coefficient of variation, and histogram, as well as the packet rejection probability due to lack of buffer space. The delay incurred by multipacket messages is estimated, comparing the single channel to the multichannel option. Numerical results indicate that the multichannel option provides reductions in both the packet delay average and variance, even when stations are only able to simultaneously receive from a subset of channels.

Proceedings ArticleDOI
R.S. Dighe, C.J. May, G. Ramamurthy
07 Apr 1991
TL;DR: The results demonstrate that it is possible to meet the differing needs of traffic types needing datagram transport and those needing virtual-circuit transport by using an intelligent scheduling strategy and a meaningful packet transport protocol.
Abstract: The authors examine the networking environment of the future, pose the technical questions that need to be addressed to provide service assurance, and propose a set of strategies for congestion avoidance. Congestion avoidance is recommended instead of congestion control, which has very limited value in a high-speed network, where the latency in detecting congestion and reacting to it may make the control ineffective. The proposed solution consists of enforcing rate-control at the network edges, bandwidth reservation for continuous bit oriented (CBO) traffic using a unique two-queue strategy, and a scheduler-based packet cross-connect system. Simulation results of a Q+ model of an access node are presented that quantify the performance of this solution and compare it to existing congestion control mechanisms. The results demonstrate that it is possible to meet the differing needs of traffic types needing datagram transport and those needing virtual-circuit transport by using an intelligent scheduling strategy and a meaningful packet transport protocol.

Patent
John G. Waclawsky, Kyra L. Marshall
21 Feb 1991
TL;DR: In this article, an efficient method for predicting the performance of a data communications network operating under a window-based protocol is described, where a state characterizing the dynamic behavior of the data communications networks for consecutive operating cycles is computed.
Abstract: An efficient method is described for predicting the performance of a data communications network operating under a window-based protocol. A state characterizing the dynamic behavior of the data communications network for consecutive operating cycles is computed. A pattern with a particular repetition period is then determined for the state. The number of data packets transmitted during that repetition period is then used to characterize the throughput, transit time and other performance characteristics for the data communications network.
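
A simple stand-in for the prediction step described in the patent (the per-cycle state variables and the short trace below are invented for illustration): compute a state for each operating cycle, find the smallest period with which the tail of the state sequence repeats, and report the packets transmitted per repetition period as the predicted steady-state throughput.

```python
def repetition_period(states, max_period=None):
    """Smallest p such that the tail of `states` is periodic with period p."""
    n = len(states)
    max_period = max_period or n // 2
    for p in range(1, max_period + 1):
        tail = states[n - 2 * p:]
        if tail[:p] == tail[p:]:
            return p
    return None

# assumed state per cycle: (window size, packets in flight, packets sent this cycle)
states = [(1, 1, 1), (2, 2, 2), (4, 4, 4), (8, 6, 6),
          (4, 4, 4), (8, 6, 6), (4, 4, 4), (8, 6, 6)]

p = repetition_period(states)
packets_per_period = sum(s[2] for s in states[-p:])
print(f"period = {p} cycles, predicted throughput = {packets_per_period} packets/period")
```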