Proceedings ArticleDOI

A high-speed router featuring minimal delay variation

29 May 2001-pp 312-316
TL;DR: This paper describes a technique for implementing the switch fabric of a high-speed router (with a throughput in excess of 600 Gb/s based on the current state of the art), with the following properties: delay performance is virtually identical to that of a standard output-buffered switch, and the switch fabric preserves the packet sequence, so that no resequencing is required for segmented packets.
Abstract: This paper describes a technique for implementing the switch fabric of a high-speed router (with a throughput in excess of 600 Gb/s based on the current state of the art), with the following properties. Delay performance is virtually identical to that of a standard output-buffered switch, and the switch fabric preserves the packet sequence, so that no resequencing is required for segmented packets. Clock rates are moderate except at ingress and egress points. This is achieved by distributing traffic across a number of crossbar switches operating at a low bit rate. The techniques used to resolve contention in the crossbar switches are described, and the bottlenecks limiting the capacity of the switch are discussed.
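The abstract's central idea is to run several crossbar planes at a fraction of the line rate and spread the cells across them so that the aggregate throughput is high while the sequence is preserved. The following is a minimal sketch of that idea under a simplifying assumption the paper's hardware would have to enforce: every plane imposes the same delay, so a round-robin write order read back in the same round-robin order keeps cells in sequence. The class and method names are illustrative, not from the paper.

```python
from collections import deque

class ParallelFabric:
    """Toy model of a fabric that spreads fixed-size cells round-robin
    across k low-rate crossbar planes.  If every plane imposes the same
    delay, reading the planes in the same round-robin order preserves
    the original cell sequence, so no resequencing buffer is needed."""

    def __init__(self, k):
        self.k = k
        self.planes = [deque() for _ in range(k)]  # one FIFO per crossbar plane
        self.next_in = 0                           # ingress round-robin pointer
        self.next_out = 0                          # egress round-robin pointer

    def ingress(self, cell):
        # Each plane carries 1/k of the traffic, so it can run at a low bit rate.
        self.planes[self.next_in].append(cell)
        self.next_in = (self.next_in + 1) % self.k

    def egress(self):
        # Read the planes in the same order they were written.
        if not self.planes[self.next_out]:
            return None
        cell = self.planes[self.next_out].popleft()
        self.next_out = (self.next_out + 1) % self.k
        return cell

fabric = ParallelFabric(k=4)
for seq in range(10):
    fabric.ingress(seq)
received = [fabric.egress() for _ in range(10)]
assert received == list(range(10))  # sequence preserved without resequencing
```

The hard part the paper addresses, resolving contention inside each crossbar plane, is not modeled here; this sketch only shows why equal-delay planes plus a fixed distribution order make resequencing unnecessary.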


0-7803-6711-1/01/$10.00 (C) 2001 IEEE 312
Authorized licensed use limited to: DUBLIN CITY UNIVERSITY. Downloaded on July 19,2010 at 09:38:34 UTC from IEEE Xplore. Restrictions apply.

References
Journal ArticleDOI
TL;DR: It is found that in the presence of massive packet reordering transmission control protocol (TCP) performance can be profoundly affected and that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
Abstract: It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.

434 citations

Proceedings ArticleDOI
29 Mar 1998
TL;DR: This work introduces a new algorithm called longest port first (LPF), which is designed to overcome the complexity problems of LQF, and can be implemented in hardware at high speed.
Abstract: Input queueing is becoming increasingly used for high-bandwidth switches and routers. In previous work, it was proved that it is possible to achieve 100% throughput for input-queued switches using a combination of virtual output queueing and a scheduling algorithm called LQF. However, this is only a theoretical result: LQF is too complex to implement in hardware. We introduce a new algorithm called longest port first (LPF), which is designed to overcome the complexity problems of LQF, and can be implemented in hardware at high speed. By giving preferential service based on queue lengths, we prove that LPF can achieve 100% throughput.
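The LPF idea, giving preferential service based on queue lengths, can be sketched as follows. This is only an illustration of the preference order, not the paper's hardware algorithm: LPF weights each input-output pair by its port occupancy (the total backlog at the input plus the total backlog destined for the output) and computes a matching favoring heavy ports; here a simple greedy pass stands in for the real matching stage, and all identifiers are illustrative.

```python
# Greedy sketch of longest-port-first (LPF) preference for an
# input-queued switch with virtual output queues (VOQs).
# q[i][j] = number of cells queued at input i for output j.

def lpf_greedy(q):
    n = len(q)
    row = [sum(q[i]) for i in range(n)]                        # backlog at input i
    col = [sum(q[i][j] for i in range(n)) for j in range(n)]   # backlog for output j
    # Weight each non-empty VOQ by its "port occupancy" row + col.
    edges = [(row[i] + col[j], i, j)
             for i in range(n) for j in range(n) if q[i][j] > 0]
    edges.sort(reverse=True)                 # serve the heaviest ports first
    used_in, used_out, match = set(), set(), {}
    for _, i, j in edges:
        # Each input sends at most one cell and each output receives at
        # most one cell per time slot, so the result must be a matching.
        if i not in used_in and j not in used_out:
            match[i] = j
            used_in.add(i)
            used_out.add(j)
    return match
```

For example, `lpf_greedy([[2, 0], [1, 3]])` matches input 1 to output 1 first (it has the largest port occupancy) and then input 0 to output 0.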

342 citations

Proceedings ArticleDOI
05 Dec 1999
TL;DR: It is shown that, in the case of packet switches, input queuing architectures can provide advantages over output queuing architectures, and novel extensions of known scheduling algorithms are proposed.
Abstract: Input queuing switch architectures must be controlled by a scheduling algorithm, which solves contentions in the transfer of data units from inputs to outputs. Several scheduling algorithms were proposed in the literature for switches operating on fixed-size data units. We consider the case of packet switches, i.e., devices operating on variable-size data units at their interfaces, but internally operating on fixed-size data units, and we propose novel extensions of known scheduling algorithms. We show that, in the case of packet switches, input queuing architectures can provide advantages over output queuing architectures.

24 citations

Book ChapterDOI
TL;DR: A criterion for a three-stage network to be strictly non-blocking is presented, which distinguishes between channel grouping and link speedup as methods of increasing the bandwidth available to calls.
Abstract: A criterion for a three-stage network to be strictly non-blocking is presented which is very general in its application. The criterion distinguishes between channel grouping and link speedup as methods of increasing the bandwidth available to calls. It may be applied to both circuit-switched and packet-switched networks. The non-blocking conditions for various networks are shown to be special cases of the condition presented here.
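The best-known special case of such a strictly non-blocking condition is Clos's classical result for a symmetric three-stage circuit-switched network, which the general criterion above is said to subsume. As a reminder of its form (this specific statement is standard background, not taken from the chapter itself): with n input links per first-stage module and m middle-stage modules, a new call may find at most n - 1 other calls busy on its input module and n - 1 on its output module, each possibly occupying a distinct middle module, so one further middle module guarantees a free path:

```latex
% Strictly non-blocking condition for a symmetric three-stage Clos
% network (unicast circuit switching, no channel grouping or speedup):
m \ge (n - 1) + (n - 1) + 1 = 2n - 1
```

Channel grouping and link speedup, which the chapter distinguishes, each relax this bound by letting one middle-stage route carry more than one unit of traffic.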

13 citations

Journal ArticleDOI
TL;DR: A method is described for performing routing in three-stage asynchronous transfer mode (ATM) switches which feature multiple channels between the switch modules in adjacent stages, which allows cell-level routing to be performed, whereby routes are updated in each time slot.
Abstract: A method is described for performing routing in three-stage asynchronous transfer mode (ATM) switches which feature multiple channels between the switch modules in adjacent stages. The method is suited to hardware implementation using parallelism to achieve a very short execution time. This allows cell-level routing to be performed, whereby routes are updated in each time slot. The algorithm allows a contention-free routing to be performed, so that buffering is not required in the intermediate stage. An algorithm with this property, which preserves the cell sequence, is referred to as a path allocation algorithm. A detailed description of the necessary hardware is presented. This hardware uses a novel circuit to count the number of cells requesting each output module, it allocates a path through the intermediate stage of the switch to each cell, and it generates a routing tag for each cell, indicating the path assigned to it. The method of routing tag assignment described employs a nonblocking copy network. The use of highly parallel hardware reduces the clock rate required of the circuitry, for a given switch size. The performance of ATM switches using this path allocation algorithm has been evaluated by simulation, and is described.
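The counting-and-allocation step described in this abstract can be sketched sequentially, though the paper performs it with parallel hardware counters in a single time slot. The sketch below is a simplified software analogue under illustrative names: count the cells requesting each output module and hand each admitted cell a distinct channel index (its routing tag) within that module's channel group, so no two cells contend for the same intermediate-stage channel and no middle-stage buffering is needed.

```python
# Sketch of per-time-slot path allocation for a three-stage switch
# with channel groups between stages.  requests[c] is the output
# module requested by cell c, listed in cell order.

def allocate_paths(requests, channels_per_group):
    """Return (tags, deferred): tags holds (cell, module, channel)
    routing tags for admitted cells, preserving cell order within each
    output module; deferred lists cells that found their group full
    and must wait at the ingress for the next time slot."""
    count = {}                  # running count of cells per output module
    tags, deferred = [], []
    for cell, module in enumerate(requests):
        c = count.get(module, 0)
        if c < channels_per_group:   # a free channel remains in the group
            tags.append((cell, module, c))
            count[module] = c + 1
        else:                        # group exhausted this slot
            deferred.append(cell)
    return tags, deferred

tags, deferred = allocate_paths([0, 1, 0, 0], channels_per_group=2)
# Cells 0 and 2 get channels 0 and 1 of module 0, in arrival order,
# so the cell sequence per module is preserved; cell 3 is deferred.
```

Because each cell in a group receives a distinct channel index, the resulting routing is contention-free by construction, which is the property that lets the paper omit intermediate-stage buffers.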

3 citations