
Showing papers on "Fast packet switching published in 1992"


Patent
08 Jun 1992
TL;DR: In this paper, an apparatus for forwarding a data packet from a first link to a second link is disclosed, coupled with a plurality of computer networks through ports on the apparatus, and the apparatus maintains a spanning tree list indicating which of the apparatus ports are active.
Abstract: An apparatus for forwarding a data packet from a first link to a second link is disclosed. The apparatus is coupled with a plurality of computer networks through ports on the apparatus. The apparatus maintains a spanning tree list indicating which of the apparatus ports are active. The apparatus receives a packet, and determines if the packet was received from a port that is active. If the packet was received from a port that is not active, the packet is discarded. If the packet is not discarded, the data link source address of the packet is stored in a database within the apparatus for the computer network coupled with the port from which the packet was received. The apparatus then decides, responsive to a contents of a data link destination address field in the packet, whether to forward the packet as a bridge or to forward the packet as a router. If the apparatus forwards the packet as a router, the apparatus sends a redirect message to update the data link layer destination address used by the originating station to contain the data link layer address of the destination station where the destination station is on a link remote from the link of the originating station. For the subsequent packets the apparatus then behaves as a bridge by forwarding the subsequent packets based upon parsing of only the Data Link Header. For forwarding of subsequent packets, the apparatus is advantageously fast, in accordance with bridge operation.
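The decision flow this abstract describes (discard on inactive ports, learn the data link source address, then bridge or route based on the data link destination) can be sketched roughly as below. This is an illustrative Python sketch only; the names (`Forwarder`, `Packet`, `ROUTER_MAC`) are assumptions, not taken from the patent.

```python
# Sketch of the forwarding decision in the abstract: spanning tree
# check, source learning, then bridge vs. route. Illustrative names.

class Packet:
    def __init__(self, src_mac, dst_mac):
        self.src_mac = src_mac  # data link source address
        self.dst_mac = dst_mac  # data link destination address

class Forwarder:
    ROUTER_MAC = "router"  # packets addressed to the apparatus itself get routed

    def __init__(self, active_ports):
        self.active_ports = set(active_ports)  # spanning tree list
        self.learned = {}  # data link source address -> port

    def handle(self, packet, in_port):
        if in_port not in self.active_ports:
            return "discard"                    # port pruned by spanning tree
        self.learned[packet.src_mac] = in_port  # store source in database
        if packet.dst_mac == self.ROUTER_MAC:
            return "route"                      # network layer handling, redirect
        return "bridge"                         # forward on data link header alone

fwd = Forwarder(active_ports={1, 2})
print(fwd.handle(Packet("A", "B"), in_port=3))       # discard: inactive port
print(fwd.handle(Packet("A", "router"), in_port=1))  # route
print(fwd.handle(Packet("A", "B"), in_port=2))       # bridge
```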

150 citations


Journal ArticleDOI
TL;DR: A growable switch architecture is presented that is based on three key principles: a generalized knockout principle exploits the statistical behaviour of packet arrivals and thereby reduces the interconnect complexity, output queuing yields the best possible delay/throughput performance, and distributed intelligence in routing packets through the interconnect fabric eliminates internal path conflicts.
Abstract: The problem of designing a large, high-performance broadband ATM (asynchronous transfer mode) packet switch is discussed. Ways to construct arbitrarily large switches out of modest-size packet switches without sacrificing overall delay/throughput performance are presented. A growable switch architecture is presented that is based on three key principles: a generalized knockout principle exploits the statistical behaviour of packet arrivals and thereby reduces the interconnect complexity, output queuing yields the best possible delay/throughput performance, and distributed intelligence in routing packets through the interconnect fabric eliminates internal path conflicts. Features of the architecture include the guarantee of first-in-first-out packet sequence, broadcast and multicast capabilities, and compatibility with variable-length packets, which avoids the need for packet-size standardization. As a broadband ISDN example, a 2048*2048 configuration with building blocks of 42*16 packet switch modules and 128*128 interconnect modules, both of which fall within existing hardware capabilities, is presented.

145 citations


Patent
28 Apr 1992
TL;DR: In this article, the authors present a method and apparatus for buffering data packets in a data communication controller, which is interfaced with a host processor and includes a control unit for accessing a communication medium.
Abstract: Method and apparatus are disclosed for buffering data packets in a data communication controller. The communication controller is interfaced with a host processor and includes a control unit for accessing a communication medium. Each data packet to be transmitted or received is assigned a packet number. Packet number assignment is carried out by a memory management unit within the communication controller which dynamically allocates to each assigned packet number one or more pages in a data packet buffer memory for the storage of the corresponding data packet. Upon issuing the assigned packet number, the physical addresses of the allocated pages of data packet buffer memory storage space are generated in a manner transparent to both the host processor and the control unit. Upon completion of each data packet loading operation, the corresponding packet number is stored in a packet number queue maintained for subsequent retrieval in order to generate the physical addresses at which the corresponding data packet has been stored. Also disclosed is a mechanism for automatically generating transmit interrupts to the host processor upon the completion of any preselected number of data packet transmissions determined by the host processor.
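The packet-number scheme above can be sketched briefly: the memory management unit hands out a packet number, allocates pages behind it, and keeps the physical addresses hidden from both host and control unit. The class name, page size, and pool size below are invented for illustration, not taken from the patent.

```python
# Sketch of the packet-number / page-allocation scheme in the abstract.
# Page size and page count are illustrative assumptions.

class PacketBufferMMU:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))
        self.pages_of = {}        # packet number -> allocated pages
        self.next_packet_no = 0

    def allocate(self, packet_bytes, page_size=256):
        pages_needed = -(-packet_bytes // page_size)  # ceiling division
        if pages_needed > len(self.free_pages):
            return None                      # buffer memory exhausted
        no = self.next_packet_no
        self.next_packet_no += 1
        self.pages_of[no] = [self.free_pages.pop() for _ in range(pages_needed)]
        return no                # caller sees only the packet number

    def release(self, packet_no):
        self.free_pages.extend(self.pages_of.pop(packet_no))

mmu = PacketBufferMMU(num_pages=8)
no = mmu.allocate(600)   # 600 bytes -> 3 pages of 256 bytes
print(no, len(mmu.pages_of[no]), len(mmu.free_pages))  # 0 3 5
mmu.release(no)
print(len(mmu.free_pages))  # 8
```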

140 citations


Patent
18 Jun 1992
TL;DR: In this article, a network controller receives encrypted data packets in the form of interleaved streams of cells, and stores the received cells in a buffer until the end of each packet is received, at which time the complete packet is decrypted, error checked, and then transmitted to a host computer.
Abstract: A network controller receives encrypted data packets in the form of interleaved streams of cells, and stores the received cells in a buffer until the end of each packet is received, at which time the complete packet is decrypted, error checked, and then transmitted to a host computer. The network controller's buffer includes a data storage array in which data packets are stored as linked lists, and a packet directory having an entry for each data packet stored in the buffer. Each directory entry contains a pointer to the first and last location in the buffer where a corresponding data packet is stored, as well as status information for the data packet. When free space in the network controller's buffer falls below a specified threshold, the network controller transmits selected partial packets to the host computer without decrypting or error checking, and also stores in its packet directory entry for each transmitted partial packet a "partial transfer" status flag. Additional portions of the partial packets may be sent to the host computer with an indication of the packet to which they belong. Upon receiving the end of a data packet that was partially transferred to the host computer, the remainder of the data packet in the packet buffer is transmitted to the host computer, without decrypting or error checking the partial data packet. The host computer then transmits the complete packet through a loopback path in the network controller for decrypting and error checking.

136 citations


Proceedings ArticleDOI
01 May 1992
TL;DR: The authors propose a simple way to dramatically improve the performance of input-queued ATM packet switches beyond the 82% saturation point obtained in previous work; the technique yields a throughput improvement from 65% to 92% without speedup, trunking, or complicated hardware.
Abstract: The authors propose a simple way to dramatically improve the performance of input-queued ATM packet switches beyond the 82% saturation point obtained in previous work. The method is an extension of the independent output-port schedulers technique and is based on the notion of recycled time slots, i.e. reusing time slots normally wasted due to scheduling conflicts. In contrast to previous results, the technique yields a throughput improvement from 65% to 92% without speedup, trunking, or complicated hardware. If input grouping with a group size of four is also employed, then the method can yield up to 95% throughput.

105 citations


Patent
25 Jun 1992
TL;DR: In this article, a technique for reducing latencies in bridge operation, by facilitating cut-through transmission of a receive data packet while the packet is still being received, but without the need for starting or ending delimiters, or packet lengths, in the packet data.
Abstract: A technique for reducing latencies in bridge operation, by facilitating cut-through transmission of a received data packet while the packet is still being received, but without the need for starting or ending delimiters, or packet lengths, in the packet data. The technique can be applied to packets inbound from a network, packets outbound to a network, or packets being looped back to a client to which the bridge is connected. In the technique of the invention, each received packet is stored in a buffer memory and a count is maintained of the number of bits in the received packet. A transmit operation is started as soon as possible, preferably while the packet is still being received, and bytes are retrieved from the buffer memory for transmission. The transmit operation is terminated when a transmit byte count reaches the packet length as determined by the receive byte count. For cut-through operations, the transmit operation is started without knowledge of the packet length, but the packet length is made available to the transmit operation upon completion of the receive operation. For store-and-forward operations, the packet length is stored with the packet in the buffer memory, and retrieved for use in the transmit operation.
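The core of the cut-through idea above (transmit begins before the length is known; the transmitter stops when its byte count reaches the receive byte count) can be sketched as below. Real hardware overlaps receive and transmit; this single-threaded Python approximation merely interleaves them per byte.

```python
# Simplified cut-through forwarding: bytes are transmitted from the
# buffer while reception is still in progress; the receive byte count,
# finalized at end of reception, tells the transmitter when to stop.

from collections import deque

def cut_through(incoming_bytes):
    buffer = deque()
    rx_count = 0            # receive byte count, updated while receiving
    transmitted = []
    for b in incoming_bytes:                      # receive side
        buffer.append(b)
        rx_count += 1
        if buffer:                                # transmit side, interleaved
            transmitted.append(buffer.popleft())
    # reception complete: rx_count is now the authoritative packet length
    while len(transmitted) < rx_count:
        transmitted.append(buffer.popleft())
    return bytes(transmitted)

pkt = b"hello, cut-through"
print(cut_through(pkt) == pkt)  # True
```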

104 citations


Patent
Toshiya Aramaki1
08 Jan 1992
TL;DR: In this paper, a fast packet switching system is proposed in which packet distributers associated with the input ports attach a timeslot number to each received packet and uniformly distribute the packets over parallel packet switches, while packet sequencers at the output ports restore the original packet order from the timeslot numbers.
Abstract: In a fast packet switching system, packet distributers are associated respectively with input ports for receiving successive packets therefrom and attaching a timeslot number to each of the received packets, and uniformly distributing the packets to output terminals of each distributer. Packet switches are provided corresponding in number to the output terminals of each packet distributer. Each packet switch has input terminals corresponding in number to the packet distributers and output terminals corresponding in number to the output ports. The input terminals of each packet switch are coupled to respective output terminals of the distributers for switching a packet from one of its input terminals to one of its output terminals in accordance with a destination address contained in the packet. Packet sequencers are associated respectively with the output ports. Each packet sequencer has input terminals coupled to respective output terminals of the packet switches for examining the timeslot numbers attached to packets from its input terminals and delivering the packets to the associated output port in accordance with the examined timeslot numbers.
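The resequencing step at the output ports can be sketched as below: packets arriving out of order from the parallel switch planes are held until every earlier timeslot number has been delivered. The data structures are illustrative, not from the patent.

```python
# Sketch of a timeslot-number packet sequencer: release packets in
# strict timeslot order, buffering any that arrive early.

import heapq

class Sequencer:
    def __init__(self):
        self.heap = []       # (timeslot number, packet)
        self.next_slot = 0   # next timeslot number to deliver

    def arrive(self, timeslot, packet):
        heapq.heappush(self.heap, (timeslot, packet))

    def deliver_ready(self):
        out = []
        while self.heap and self.heap[0][0] == self.next_slot:
            out.append(heapq.heappop(self.heap)[1])
            self.next_slot += 1
        return out

seq = Sequencer()
seq.arrive(1, "B")            # arrived early via a faster switch plane
print(seq.deliver_ready())    # [] : still waiting for timeslot 0
seq.arrive(0, "A")
print(seq.deliver_ready())    # ['A', 'B']
```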

103 citations


Patent
14 Aug 1992
TL;DR: In this article, a self-routing switching element in a packet switch functions in packet synchronous mode in which a plurality of the incoming packet signals are switched by the switching element concurrently during a common time period.
Abstract: A self-routing switching element in a packet switch functions in a packet synchronous mode in which a plurality of the incoming packet signals are switched by the switching element concurrently during a common time period. For each incoming packet signal received during the common time period, the switching element detects that one of the inputs has an incoming packet signal for transmission to one of the outputs, determines whether the corresponding output module will accept the incoming packet signal, and, responsive to the determination, enables the acceptance of the incoming packet signal by the output module for transmission to the output. The control circuitry is distributed throughout the switching element in output modules, one module for each output from the switching element.

98 citations


Proceedings ArticleDOI
Roch Guerin1
01 May 1992
TL;DR: The authors present an approach to computing access control parameters as a function of both source characteristics and bandwidth allocation in the network, which relies on a fluid-flow model for the source, the leaky bucket rate control system, and the bandwidth allocation process.
Abstract: The authors present an approach to computing access control parameters as a function of both source characteristics and bandwidth allocation in the network. The technique relies on a fluid-flow model for the source, the leaky bucket rate control system, and the bandwidth allocation process. The methodology is suitable for any high-speed packet switching architecture. The approach allows for the real-time setting of the access control at a connection setup. It ensures that the access control mechanism is near transparent as long as connections behave as expected, while protecting the network from most misbehaving connections. Although some extreme cases have the potential to affect the network, they are essentially due to limitations inherent in the leaky bucket itself.
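The kind of leaky bucket conformance test that the computed access control parameters govern can be sketched as below. The `rate` and `bucket_size` values stand in for the parameters the paper derives from source characteristics; the numbers here are arbitrary.

```python
# Minimal leaky-bucket conformance check: each arriving packet consumes
# one token; tokens refill at `rate` up to `bucket_size`. Packets that
# find no token violate the declared traffic parameters.

def leaky_bucket(arrival_times, rate, bucket_size):
    tokens, last = float(bucket_size), 0.0
    conforming = []
    for t in arrival_times:
        tokens = min(bucket_size, tokens + (t - last) * rate)  # refill
        last = t
        if tokens >= 1.0:
            tokens -= 1.0              # packet admitted
            conforming.append(True)
        else:
            conforming.append(False)   # nonconforming packet
    return conforming

# a burst of 3 back-to-back packets against a bucket of 2 at rate 1/s
print(leaky_bucket([0.0, 0.0, 0.0, 5.0], rate=1.0, bucket_size=2))
# [True, True, False, True]
```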

96 citations


Patent
10 Dec 1992
TL;DR: In this paper, a telecommunication switching system having switching nodes that perform adaptive routing by utilizing the fact that the switching nodes are arranged in a first and a second hierarchy is described.
Abstract: A telecommunication switching system having switching nodes that perform adaptive routing by utilizing the fact that the switching nodes are arranged in a first and a second hierarchy. In addition, each switching node maintains routing information based on telephone and switching node numbers which identify the switching nodes. A destination switching node transfers its routing information back to an originating switching node which combines that routing information with its own in order to determine shorter call paths for subsequent call routing. The first hierarchy is a dialing plan hierarchy having groups of switching nodes at each dialing plan level. The second hierarchy is a switching node hierarchy based on the switching node number of each switching node with at least one switching node of the switching node hierarchy being at a different level in the dialing plan hierarchy. In order to route a call, a switching node first routes through levels of switching nodes in the dialing plan hierarchy until a second switching node is encountered which can determine the identification of the destination switching node based on a dialed telephone number. The second switching node then routes the call through the node hierarchy using the identified node number until a path is determined to the destination switching node.

80 citations


Patent
23 Mar 1992
TL;DR: In this paper, a packet reassembly hardware (214) in a packet switch is used to improve overall system throughput during the handling of transmission packets (310) that require reassembly.
Abstract: A packet switching system (100) employs packet reassembly hardware (214) in a packet switch (140) to improve overall system throughput during the handling of transmission packets (310) that require reassembly. In this manner, reassembly is accomplished with minimal processor (110) intervention and without having to duplicate the message data portion (312) of a transmission packet (310) into a different memory location prior to retransmission.

Journal ArticleDOI
TL;DR: A framing congestion control strategy based on a packet admission policy at the edges of the network and on a service discipline called stop-and-go queuing at the switching nodes is described, which provides bounded end-to-end delay and a small and controllable delay-jitter.
Abstract: The problem of congestion control in high-speed networks for multimedia traffic, such as voice and video, is considered. It is shown that the performance requirements of high-speed networks involve delay, delay-jitter, and packet loss. A framing congestion control strategy based on a packet admission policy at the edges of the network and on a service discipline called stop-and-go queuing at the switching nodes is described. This strategy provides bounded end-to-end delay and a small and controllable delay-jitter. The strategy is applicable to packet switching networks in general, including fixed cell length asynchronous transfer mode (ATM), as well as networks with variable-size packets.

Patent
04 Jun 1992
TL;DR: In this paper, a credit manager circuit determines whether a stored packet complies with predetermined traffic parameters such as average arrival rate and maximum burst rate, based on a packet's arrival time and its virtual channel identifier (VCI).
Abstract: A method and system are provided for controlling user traffic to a fast packet switching system using the leaky bucket scheme. Each of the packets (a 53-byte cell) originates at a source of packets and has a virtual channel identifier (VCI). The method includes the step of receiving the packets, each of which has an associated arrival time. The packets are stored at addressable locations in a first memory having a plurality of addressable locations. A second memory stores the addresses of the locations in the first memory in which a packet is not yet stored; these addresses are utilized in the step of storing the received packets. A credit manager circuit determines, based on a packet's arrival time and its VCI, whether a stored packet complies with predetermined traffic parameters such as average arrival rate and maximum burst rate, to obtain a validated packet. The credit manager circuit retrieves the validated packet from the first memory, and the retrieved packet is then transmitted to the packet switching system.
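The two-memory arrangement above (cell storage plus a memory of free addresses) can be sketched as follows. This is an illustrative Python sketch; the `CellStore` name and the slot count are assumptions, not from the patent.

```python
# Sketch of the two-memory buffering scheme: a free-address list (the
# "second memory") supplies the slot where each arriving cell is stored
# in the cell buffer (the "first memory").

class CellStore:
    def __init__(self, slots):
        self.memory = [None] * slots          # first memory: cell storage
        self.free_addrs = list(range(slots))  # second memory: free addresses

    def store(self, cell):
        if not self.free_addrs:
            return None              # no free slot: cell would be dropped
        addr = self.free_addrs.pop()
        self.memory[addr] = cell
        return addr

    def retrieve(self, addr):        # e.g. after the credit manager validates
        cell, self.memory[addr] = self.memory[addr], None
        self.free_addrs.append(addr)
        return cell

store = CellStore(slots=2)
a = store.store("cell-0")
b = store.store("cell-1")
print(store.store("cell-2"))   # None : store full
print(store.retrieve(a))       # cell-0
```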

Proceedings ArticleDOI
01 May 1992
TL;DR: An upper bound of the minimum time division multiple access (TDMA) frame length of any collision-free node assignment protocol in a packet radio network in which a node has multiple reception capacity is derived.
Abstract: The authors derive an upper bound on the minimum time division multiple access (TDMA) frame length of any collision-free node assignment protocol in a packet radio network in which a node has multiple reception capacity. They also derive the optimum TDMA frame length for any fully connected network with large reception capacity. When the total number of nodes in the network is unknown, a heuristic to generate a TDMA protocol with frame length within some upper bound is presented for any network with large reception capacity.
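For flavor, a collision-free slot assignment in the simple unit-reception-capacity case (neighbors may not share a transmission slot) can be produced greedily, as below. This is only an illustrative sketch of the problem setting, not the paper's bound or its heuristic for multiple reception capacity.

```python
# Greedy collision-free TDMA slot assignment for the simple case where
# neighboring nodes must use different slots; the frame length is the
# number of distinct slots used.

def greedy_tdma(adjacency):
    slot = {}
    for node in sorted(adjacency):
        used = {slot[n] for n in adjacency[node] if n in slot}
        s = 0
        while s in used:   # smallest slot not used by an assigned neighbor
            s += 1
        slot[node] = s
    return slot

# a 4-node ring: 0-1-2-3-0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assignment = greedy_tdma(ring)
print(assignment)                    # {0: 0, 1: 1, 2: 0, 3: 1}
print(max(assignment.values()) + 1)  # frame length 2
```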

Patent
22 Oct 1992
TL;DR: In this article, a communication switching system has a plurality of switching nodes in which each switching node upon initialization completely establishes its own internal configuration, including the number and type of switching modules and, within each switching module: type of module control processor, type of internal switching network, printed board carriers, type and number of auxiliary circuits, and type and number of link interfaces.
Abstract: A communication switching system having a plurality of switching nodes in which each switching node upon initialization completely establishes its own internal configuration. This establishment of internal configuration includes the number and type of switching modules and, within each switching module: type of module control processor, type of internal switching network, printed board carriers, type and number of auxiliary circuits, and type and number of link interfaces. Each unit of a switching module reports relevant information to the module control processor; in turn, the module control processor reports its own information and the information of the other units to a node processor (the main processor within a switching node). Further, each switching node and its interfaces automatically identify the other switching nodes connected by communication links and switching paths to those switching nodes. Each interface upon being initialized automatically performs low level initialization operations over the connected link with the far end interface. The interfaces then report the completion of the initialization operations to the connected switching nodes. Upon receiving these reports, each switching node becomes aware of the existence of the links to other switching nodes. Each switching node then exchanges node numbers with the far end switching node connected via interfaces and communication links to establish the hierarchy of switching nodes in the switching system.

Journal ArticleDOI
01 Jan 1992
TL;DR: In this paper, a discrete-time multiserver queueing system with infinite waiting room and general independent arrivals is considered and explicit formulae are derived for such quantities as the probability generating function, the mean value and the variance of the delay.
Abstract: In this paper, a discrete-time multiserver queueing system with infinite waiting room and general independent arrivals is considered. The delay performance of such a system, under a first-come-first-served queueing discipline, is evaluated by means of a purely analytical technique. Specifically, explicit formulae are derived for such quantities as the probability generating function, the mean value and the variance of the delay. The results of the study are useful, for instance, in the context of fast packet switching with output queueing, where the switching elements have their output ports organized in groups, each group corresponding to a separate output queue.
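The system studied above (c servers, one departure per server per slot, general independent arrivals, FCFS) is easy to simulate; the sketch below estimates the mean delay empirically rather than via the paper's closed-form generating-function results. The arrival distribution and parameters are arbitrary illustrative choices.

```python
# Simulation of a discrete-time multiserver FCFS queue: up to c packets
# depart per slot, arrivals are i.i.d. per slot. Delay is measured in
# slots from arrival to departure (minimum 1).

import random

def mean_delay(c, arrivals_per_slot, slots, seed=1):
    random.seed(seed)
    queue = []     # arrival slot of each waiting packet, FCFS order
    delays = []
    for t in range(slots):
        for _ in range(random.choice(arrivals_per_slot)):
            queue.append(t)
        for _ in range(min(c, len(queue))):  # up to c departures per slot
            delays.append(t - queue.pop(0) + 1)
    return sum(delays) / len(delays)

# 2 servers, 0-3 arrivals per slot (mean 1.5 < c = 2): stable queue
d = mean_delay(c=2, arrivals_per_slot=[0, 1, 2, 3], slots=10000)
print(round(d, 2))   # finite mean delay, a little above the 1-slot service time
```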

Proceedings ArticleDOI
14 Jun 1992
TL;DR: A hybrid weighted round-robin and head-of-line priority discipline is shown to provide appropriate allocation of the internodal link bandwidth among different traffic classes, while allowing delay differentiation within a traffic class.
Abstract: The authors focus on the problem of internodal link queueing in integrated fast packet networks. The queueing discipline used to multiplex packets onto an internodal link is an important element in the overall bandwidth management of an integrated fast packet network. The characteristics of and quality of service required by different traffic classes are reviewed and used as the basis for comparing alternative queueing disciplines. A hybrid weighted round-robin and head-of-line priority discipline is shown to provide appropriate allocation of the internodal link bandwidth among different traffic classes, while allowing delay differentiation within a traffic class.

Patent
23 Nov 1992
TL;DR: In this paper, the core and enhancement packets are transmitted in frame relay format and congestion forward (CF) and congestion backwards (CB) markers are used to feed back information of congestion conditions within a network to the packet assembler.
Abstract: A system in which core information, for example in the form of a core block or blocks (C), is transmitted in a core packet (PC), and at least some enhancement information, for example, in the form of enhancement blocks (E), is transmitted in an enhancement packet (PE) which is separate from the core packet (PC) and is discardable to relieve congestion. Preferably, the core and enhancement packets have headers (H) which include a discard eligible marker (DE) to indicate whether or not the associated packet can be discarded. The enhancement blocks (E) may be distributed between the core packet and enhancement packet in accordance with congestion conditions, or the enhancement blocks may be incorporated only in the enhancement packet, and the actual number of enhancement blocks included are varied depending on congestion conditions. Preferably, the packets are transmitted in frame relay format and congestion forward (CF) and congestion backwards (CB) markers are used to feed back information of congestion conditions within a network to the packet assembler (7) forming the core and enhancement packets.


Proceedings ArticleDOI
01 May 1992
TL;DR: The gamma network is enhanced to derive a balanced gamma network by the addition of an extra link, and the performance of the proposed network is analyzed in comparison with existing networks.
Abstract: The gamma network is enhanced to derive a balanced gamma network by the addition of an extra link. The performance of the proposed network is analyzed in comparison with existing networks. The performance of replicated networks and of networks with one internal buffer is investigated. These networks are studied using two assumptions: the common assumption that each destination can accept only one packet in a given cycle, and the assumption that any number of packets can be accepted by a destination. Balanced gamma networks exhibit good performance, enable simple routing schemes, and are modular.

Proceedings ArticleDOI
14 Jun 1992
TL;DR: The implementation and performance of multicast packet switching in a broadband network environment are discussed and a novel scheme called revision scheduling is proposed to mitigate the head of line blocking effect by sequentially combining the one-shot scheduling and the call splitting disciplines.
Abstract: The implementation and performance of multicast packet switching in a broadband network environment are discussed. In terms of scheduling the transmission of the copies of a packet onto output ports, two basic service disciplines have been defined: (1) one-shot scheduling, where all the copies are transmitted in the same time slot; and (2) call splitting, with transmission over several time slots. As subcategories of call splitting, strict-sense call splitting specifies that each packet can send at most one copy to a destination per time slot, while wide-sense call splitting does not carry this restriction. A novel scheme called revision scheduling is proposed to mitigate the head-of-line blocking effect by sequentially combining the one-shot scheduling and call splitting disciplines. Schematic structures are introduced for each category of scheduling, in the form of combinational logic circuits designed to resolve the output contentions corresponding to the call scheduling disciplines by incorporating a cyclic priority access policy. To compare the performance of the various techniques, simulation studies were performed.

Patent
23 Nov 1992
TL;DR: In this article, a packet transmission system comprising a network of one or more nodes (1) each comprising a number of inputs connected via packet switching means to outputs each of which has storage means associated with it in which to store a queue of packets to be outputted, characterized in that each node is provided with timer means (14) to measure the time each packet spends in a queue, and stamping means (12) to add to a time stamp field (TS) of each packet before it is outputted from the node, the time the packet has spent in a
Abstract: A packet transmission system comprising a network of one or more nodes (1) each comprising a number of inputs connected via packet switching means to a number of outputs each of which has storage means (11) associated with it in which to store a queue of packets to be outputted; characterized in that each node (1) is provided with timer means (14) to measure the time each packet spends in a queue of said storage means (11); and stamping means (12) to add to a time stamp field (TS) of each packet before it is outputted from the node, the time the packet has spent in a queue as measured by the timer means (14).
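The time-stamping mechanism claimed above accumulates per-node queueing time in a field of the packet, so the field holds the total queueing delay over the path when the packet arrives. A minimal sketch, with illustrative names and numbers:

```python
# Sketch of per-node delay stamping: each node adds the queueing time it
# imposed (dequeue minus enqueue, as measured by its timer) to the
# packet's time stamp field (TS).

class Packet:
    def __init__(self):
        self.ts = 0   # time stamp field: accumulated queueing time

def node_transit(packet, enqueue_time, dequeue_time):
    packet.ts += dequeue_time - enqueue_time   # timer-measured queue time
    return packet

p = Packet()
node_transit(p, enqueue_time=10, dequeue_time=14)  # 4 units at node 1
node_transit(p, enqueue_time=20, dequeue_time=23)  # 3 units at node 2
print(p.ts)  # 7 : total queueing delay across both nodes
```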

Proceedings ArticleDOI
06 Dec 1992
TL;DR: Inter-MAN handover protocols for the cellular packet switch are developed, based on interconnecting MANs via gateways, which greatly reduces the fixed-network signalling traffic while protecting information in data communications.
Abstract: Inter-MAN handover protocols for the cellular packet switch are developed, based on interconnecting MANs via gateways. Inter-MAN handover protocols utilize the gateway to maintain a connection as a user moves across MAN boundaries. This greatly reduces the fixed-network signalling traffic while protecting information in data communications.

Proceedings ArticleDOI
01 May 1992
TL;DR: The authors present a queuing analysis and a simulation study of banyan switch fabrics based on 2*2 switching elements with crosspoint buffering and indicate that crosspoint buffering provides throughput approaching the offered load under uniform traffic conditions.
Abstract: The authors present a queuing analysis and a simulation study of banyan switch fabrics based on 2*2 switching elements with crosspoint buffering. In particular, the results apply to the PHOENIX switching element based banyan fabrics. The results indicate that crosspoint buffering provides throughput approaching the offered load under uniform traffic conditions. The effect of bursty traffic on the performance of the switch is studied. It is shown that a speedup factor of three or more is required to achieve acceptable delay and packet loss probability. It is also shown that the amount of buffer space required per port increases linearly with the burst size for a desired packet loss performance. For a given burst size the packet loss rate decreases exponentially as the buffer size is increased. The impact of crosspoint buffering and shared buffering in the switching elements on the performance of the banyan fabric is analyzed.
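Self-routing through a banyan fabric of 2*2 elements is simple enough to sketch: at stage k the element forwards the packet on its upper or lower output according to bit k of the destination address, so an N-port fabric needs log2(N) stages. The sketch below shows only the routing decision and ignores buffering and contention, which are the subject of the analysis above.

```python
# Self-routing decision in a banyan fabric of 2*2 elements: the k-th
# stage uses the k-th most significant bit of the destination address
# (0 = upper output, 1 = lower output).

def banyan_route(dest, num_stages):
    return [(dest >> (num_stages - 1 - k)) & 1 for k in range(num_stages)]

# 8-port fabric (3 stages), destination port 5 = 0b101
print(banyan_route(5, 3))  # [1, 0, 1]
```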

Patent
07 Apr 1992
TL;DR: In this paper, a packet rate monitoring circuit is provided between the packet switching unit and the multiplexer, which includes a circuit for monitoring the cell arrival rate, circuit for detecting a packet cell violating the declared value which is transmitted from the terminal, and circuit for adding a mark representative of the violation cell to the violation packet which is detected by the detecting circuit.
Abstract: In a packet communication network in which a plurality of subscriber's lines are connected with a packet switching unit via a multiplexer and each of the terminal units declares the packet rate on call request, a packet rate monitoring circuit is provided between the packet switching unit and the multiplexer. The packet rate monitoring circuit includes a circuit for monitoring the cell arrival rate, a circuit for detecting a packet cell violating the declared value which is transmitted from the terminal, a circuit for adding a mark representative of the violation cell to the violation packet which is detected by the detecting circuit, and a circuit for automatically modifying the declared parameter. The modifying circuit applies the declared parameter, which was automatically modified in accordance with the utilization condition (the number of multiplexings and the line utilization) of the multiplexer disposed between the subscriber's terminals and the rate monitoring circuit, so that the detecting circuit detects the violation packets based upon the modified declared parameter.

Patent
Wolfgang Fischer1, Juergen Storm1
05 May 1992
TL;DR: In switching equipment, message cells of a message cell stream that has a transport bit rate higher by a multiple than the transmission bit rate of switching elements of the switching equipment are distributed over a plurality of switching network inputs corresponding in number to the multiple as discussed by the authors.
Abstract: In switching equipment, message cells of a message cell stream that has a transport bit rate higher by a multiple than the transmission bit rate of switching elements of the switching equipment are distributed over a plurality of switching network inputs corresponding in number to the multiple. The message cells have information attached to them that indicates all modules via which the respective message cells are through-connected to a respective output of the switching network.

Proceedings ArticleDOI
Zon-Yin Shae1, Mon-Song Chen1
06 Dec 1992
TL;DR: Video mixing, which is the simultaneous display of multiple motion videos received from multiple independent sources across a packet switched network, is addressed and a technique that performs direct mixing of JPEG compressed data is introduced.
Abstract: Video mixing, which is the simultaneous display of multiple motion videos received from multiple independent sources across a packet switched network, is addressed. A straightforward approach, which performs mixing at the pixel level, is analyzed and found undesirable because it requires a large amount of expensive memory and decompression capacity. A technique that performs direct mixing of JPEG compressed data is introduced. An efficient structure based on this technique, which needs only a frame's worth of buffering and decompression regardless of the number of video windows and their sizes, is presented. Packet video playback is discussed, and two simple heuristics are proposed.
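The compressed-domain mixing idea can be illustrated at the 8x8-block level: blocks from each source window are copied into the composite frame's block grid without pixel-level decompression. A toy sketch only (the entropy-coding details of real JPEG streams are omitted, and names are illustrative, not the paper's structure):

```python
def mix_compressed_frames(windows, canvas_w, canvas_h):
    """Compose one output frame from several video windows by placing
    already-compressed 8x8 blocks into a block grid, instead of decoding
    every stream to pixels and re-encoding the mixed result."""
    canvas = [[None] * canvas_w for _ in range(canvas_h)]
    for win in windows:  # each window: grid position plus its own blocks
        for r, row in enumerate(win["blocks"]):
            for c, block in enumerate(row):
                canvas[win["y"] + r][win["x"] + c] = block
    return canvas
```

Because only block references move, buffering stays on the order of one composite frame regardless of how many windows are mixed, which is the property the abstract highlights.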

Journal ArticleDOI
Jun Chen, Roch Guerin, T.E. Stern
TL;DR: The output queues of an M*N packet switch are studied using a Markov-modulated flow model that captures the dependency between arrivals at different outputs and reflects the fact that packet arrivals and departures are not instantaneous.
Abstract: The output queues of an M*N packet switch are studied using a Markov-modulated flow model. The switching element is a central server which sequentially routes packets from the inputs to the outputs. The focus is on systems in which the server speed is such that the bulk of the queuing takes place in the output queues. The conventional point process approach neglects the impact of switching and transmission time. An attempt is made to account for these finite system speeds by using a Markov-modulated continuous flow to approximate the arrival process to an output queue. This model captures the dependency between arrivals at different outputs and reflects the fact that packet arrivals and departures are not instantaneous. The output queue content distribution is obtained, for both infinite and finite buffer systems, from the spectral expansion of the solution of a system of differential equations. Numerical examples and comparisons with the results of an M/M/1 approximation are presented.
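For intuition, the single-source special case of such spectral solutions has a well-known closed form (the classical on-off fluid result in the Anick-Mitra-Sondhi tradition, given here for illustration; the paper's M*N model is more general): an on-off source with on-rate r, off-to-on rate a and on-to-off rate b, served at rate c, has buffer-content tail P(X > x) = rho * exp(z*x) with z = a/c - b/(r - c).

```python
import math

def onoff_fluid_tail(a, b, r, c, x):
    """P(buffer content > x) for a single on-off Markov-modulated fluid
    source (on-rate r, off->on rate a, on->off rate b) served at rate c.
    Classical spectral solution: rho * exp(z*x), z = a/c - b/(r - c).
    Requires r > c and utilization rho = (a/(a+b))*(r/c) < 1."""
    rho = (a / (a + b)) * (r / c)
    assert r > c and 0 < rho < 1, "unstable or degenerate parameters"
    z = a / c - b / (r - c)   # the single negative eigenvalue
    return rho * math.exp(z * x)
```

Stability (rho < 1) is exactly what makes z negative, so the tail decays exponentially in the buffer level x.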

Journal ArticleDOI
TL;DR: A layered packet video coding algorithm based on a progressive transmission scheme that provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence is presented.
Abstract: Some of the important characteristics and requirements of packet video are discussed. A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. A network simulator used in testing the scheme is introduced, and simulation results for various conditions are presented.
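The layered/progressive idea can be sketched with a toy bit-plane coder: the base layer carries the most significant bits of each sample, and losing enhancement-layer packets only coarsens the reconstruction rather than breaking it. This is an assumption-laden toy, not the paper's actual coding algorithm:

```python
def encode_layers(samples, n_layers):
    """Split 8-bit samples into n_layers bit-plane groups, most
    significant first, so each later layer only refines earlier ones."""
    bits = 8 // n_layers
    mask = (1 << bits) - 1
    return [[(s >> (8 - (k + 1) * bits)) & mask for s in samples]
            for k in range(n_layers)]

def decode_layers(layers, n_layers, n_samples):
    """Reconstruct from however many layers actually arrived; missing
    enhancement layers degrade quality gracefully instead of failing."""
    bits = 8 // n_layers
    out = [0] * n_samples
    for k, layer in enumerate(layers):
        for i, v in enumerate(layer):
            out[i] |= v << (8 - (k + 1) * bits)
    return out
```

In a network setting the base layer would be sent with higher priority, matching the graceful-degradation behaviour the abstract claims under packet loss.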

Proceedings ArticleDOI
06 Dec 1992
TL;DR: A large-scale broadband self-routing switching network based on the three-stage construction of sort-banyan switch modules is proposed that is robust for all patterns of load on the ports.
Abstract: A large-scale broadband self-routing switching network based on the three-stage construction of sort-banyan switch modules is proposed. The switching network uses only one kind of module, preserves the cell sequencing of a service, and is robust for all patterns of load on the ports. A multistage three-phase algorithm is designed to control the delivery of cells. The switching network is input queued and delivers at most one cell in a cell time slot to each output port from one of the input ports requesting delivery to that output port. The maximum throughput of the switching network is shown to be 0.458, which is about 78% of that of a single-stage sort-banyan switching network. With a buffer size of 20 cells, one can achieve a 40% loading with almost no buffer overflow. Parallelism is easily achieved by having multiple switching planes. It is shown that, with four switching planes, the network is close to being output-queued.
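The single-stage reference point implied by the 0.458 figure (0.458 / 0.78 is about 0.586) is the classical head-of-line blocking limit 2 - sqrt(2) of a saturated input-queued switch with uniform random output addresses. A quick Monte Carlo sketch reproduces it approximately (simulation parameters are illustrative):

```python
import random

def hol_throughput(n_ports, slots, seed=0):
    """Monte Carlo estimate of saturated throughput of an N x N
    input-queued switch with head-of-line (HOL) blocking: every input
    always has a cell, each HOL cell has a uniform random output, and
    each output serves one contending input per slot."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]  # HOL destinations
    delivered = 0
    for _ in range(slots):
        contenders = {}
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inputs in contenders.items():
            winner = rng.choice(inputs)          # one cell per output per slot
            hol[winner] = rng.randrange(n_ports) # winner draws a fresh cell
            delivered += 1                       # losers keep blocking
    return delivered / (slots * n_ports)
```

For large N the estimate settles near 0.586; the paper's 0.458 reflects the additional constraint of routing through three stages of sort-banyan modules.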