
Showing papers on "Fast packet switching" published in 1997


Patent
24 Oct 1997
TL;DR: In this paper, a method for explicit data rate control is introduced into a packet communication environment which does not have data rate supervision, by adding latency to the acknowledgement (ACK) packet and by adjusting the size of the flow control window associated with the packet, in order to directly control the data rate of the source data at the station originating the packet.
Abstract: A method for explicit data rate control is introduced into a packet communication environment (10) which does not have data rate supervision by adding latency to the acknowledgement (ACK) packet and by adjusting the size of the flow control window associated with the packet in order to directly control the data rate of the source data at the station (12 or 14) originating the packet.
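
To make the rate-control idea above concrete, here is a minimal Python sketch (an illustration of the general principle, not the patented mechanism): the steady-state sending rate of a windowed source is roughly window / RTT, so delaying ACKs and shrinking the advertised window both reduce the rate the source can sustain. The function names and the target-rate example are assumptions.

    def effective_rate(window_bytes, rtt_s):
        # Approximate steady-state source rate (bytes/second) for a windowed protocol.
        return window_bytes / rtt_s

    def ack_delay_for_target(window_bytes, base_rtt_s, target_rate):
        # Extra ACK latency (seconds) so that window / (rtt + delay) == target_rate (bytes/s).
        return max(0.0, window_bytes / target_rate - base_rtt_s)

    # Example: throttle a source with a 64 KB window on a 50 ms path to 1 MB/s.
    delay = ack_delay_for_target(64 * 1024, 0.050, 1_000_000)
    print(f"add {delay * 1000:.1f} ms of ACK latency")   # about 15.5 ms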

336 citations


Patent
19 Mar 1997
TL;DR: In this paper, a system for correcting errors in the transmission of data packets between a source and a receiver is proposed; it uses a client unit and a server unit to send a repaired packet stream to the receiver when an error is detected.
Abstract: A system for correcting errors in the transmission of data packets between a source and a receiver. The source sends data packets to the client unit and server unit. The system uses the client unit and the server unit to send a repaired packet stream to a receiver when an error is detected. The client unit detects errors in the packet stream and sends retransmission requests of the lost data packets to the server unit. The server unit retransmits the lost data packet to the client unit, which then corrects the packet stream by inserting the lost packet into the proper time order and transmitting the repaired packet stream to the receiver.
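
A rough Python sketch of the client-side repair step (the sequence numbering and the retransmission callback are assumptions, not details from the patent): detect a gap in the received stream, ask the server unit to resend the missing packet, and reinsert it in the proper order before forwarding the repaired stream to the receiver.

    def repair_stream(received, request_retransmit):
        # received: list of (seq, packet) pairs, possibly with gaps.
        # request_retransmit(seq) -> packet, modelling the server unit's resend.
        ordered = sorted(received)                       # restore time order
        repaired = []
        expected = ordered[0][0]
        for seq, pkt in ordered:
            while expected < seq:                        # gap detected: a packet was lost
                repaired.append((expected, request_retransmit(expected)))
                expected += 1
            repaired.append((seq, pkt))
            expected = seq + 1
        return repaired                                  # stream handed to the receiver

    fixed = repair_stream([(1, "a"), (2, "b"), (4, "d")], lambda s: f"resent-{s}")
    assert [s for s, _ in fixed] == [1, 2, 3, 4]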

274 citations


Proceedings ArticleDOI
27 May 1997
TL;DR: It is shown that the performance of TCP is sensitive to the packet size, and that significant performance improvements are obtained if a good packet size is used.
Abstract: Transmission Control Protocol (TCP) assumes a relatively reliable underlying network where most packet losses are due to congestion. In a wireless network, however, packet losses will occur more often due to unreliable wireless links than due to congestion. When using TCP over wireless links, each packet loss on the wireless link results in congestion control measures being invoked at the source. This causes severe performance degradation. In this paper, we study the effect of: burst errors on wireless links; packet size variation on the wired network; local error recovery by the base station; and explicit feedback by the base station, on the performance of TCP over wireless networks. It is shown that the performance of TCP is sensitive to the packet size, and that significant performance improvements are obtained if a good packet size is used. While local recovery by the base station using link-level retransmissions is found to improve performance, timeouts can still occur at the source, causing redundant packet retransmissions. We propose an explicit feedback mechanism, to prevent these timeouts during local recovery. Results indicate significant performance improvements when explicit feedback from the base station is used. A major advantage of our approaches over existing proposals is that no state maintenance is required at any intermediate host. Experiments are performed using the Network Simulator (NS) from Lawrence Berkeley Labs. The simulator has been extended to incorporate wireless link characteristics.

237 citations


01 Feb 1997
TL;DR: An overview of a novel approach to network layer packet forwarding, called tag switching, in which forwarding is accomplished using simple label-swapping techniques, while the existing network layer routing protocols plus mechanisms for binding and distributing tags are used for control.
Abstract: This document provides an overview of a novel approach to network layer packet forwarding, called tag switching. The two main components of the tag switching architecture - forwarding and control - are described. Forwarding is accomplished using simple label-swapping techniques, while the existing network layer routing protocols plus mechanisms for binding and distributing tags are used for control. Tag switching can retain the scaling properties of IP, and can help improve the scalability of IP networks. While tag switching does not rely on ATM, it can straightforwardly be applied to ATM switches. A range of tag switching applications and deployment scenarios are described.
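
A minimal Python sketch of the label-swapping forwarding component (illustrative only; the table contents and interface names are assumptions, and in a real system the control component would install the entries via routing protocols and tag distribution): forwarding reduces to one exact-match table lookup per packet.

    # tag_table[incoming_tag] -> (outgoing_tag, outgoing_interface)
    tag_table = {
        17: (42, "if1"),
        23: (99, "if2"),
    }

    def forward(incoming_tag, payload):
        # Swap the tag and choose the output interface with a single lookup,
        # instead of a network-layer longest-prefix-match route lookup.
        outgoing_tag, out_if = tag_table[incoming_tag]
        return out_if, outgoing_tag, payload

    print(forward(17, b"ip-packet"))    # ('if1', 42, b'ip-packet')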

186 citations


Patent
30 Jun 1997
TL;DR: In this paper, a system and method for updating packet headers in hardware is presented that maintains the high performance of the network element; a search engine determines the packet type, which may indicate whether the packet can be routed in hardware.
Abstract: A system and method for updating packet headers using hardware that maintains the high performance of the network element. In one embodiment, the system includes an input port process (IPP) that buffers the input packet received and forwards header information to the search engine. The search engine searches a database maintained on the switch element to determine the type of the packet. In one embodiment, the type may indicate whether the packet can be routed in hardware. In another embodiment, the type may indicate whether the packet supports VLANs. The search engine sends the packet type information to the IPP, along with the destination address (DA) to be updated if the packet is to be routed, or a VLAN tag if the packet has been identified to be forwarded to a particular VLAN. The IPP, during transmission of the packet to a packet memory, selectively replaces the corresponding fields, e.g., the DA field or the VLAN tag field; the modified packet is stored in the packet memory. Associated with the packet memory are control fields containing control field information conveyed to the packet memory by the IPP. An output port process (OPP) reads the modified input packet and the control field information, selectively performs additional modifications to the modified input packet, and issues control signals to the output interface (i.e., the MAC). The MAC, based upon the control signals, replaces the source address field with the address of the MAC and generates a CRC that is appended to the end of the packet.

132 citations


Journal ArticleDOI
TL;DR: In this paper, an overview of the characteristics and challenges of optical packet switching is given, illustrating its potential advantages within future nodes and networks and describing basic system functionalities; the opportunities introduced by the ACTS KEOPS project on all-optical packet-switching networks are also highlighted.
Abstract: An overview of the characteristics and challenges of optical packet switching is given, illustrating its potential advantages within future nodes and networks, describing basic system functionalities. The opportunities introduced by the ACTS KEOPS project on all-optical packet-switching networks are highlighted, based partially on the outcome of the RACE ATMOS project, which is also considered in this article.

108 citations


Patent
Yoshihiro Ohba1
10 Sep 1997
TL;DR: In this paper, a packet scheduling scheme is proposed which is capable of improving the fairness characteristic on a short time scale by suppressing the burstiness of traffic compared with conventional weighted fair queueing algorithms such as DRR.
Abstract: A packet scheduling scheme which is capable of improving the fairness characteristic on a short time scale by suppressing the burstiness of traffic compared with conventional weighted fair queueing algorithms such as DRR. Packets are held in a plurality of packet queues by inputting arrived packets into the packet queues. Then, an output packet queue is sequentially selected from the packet queues according to a prescribed criterion based on the amount of packets currently transmittable by each packet queue, such that the output packet queue is selected to be different from the previously selected output packet queue when more than one packet queue satisfies the prescribed criterion. Then, the top packet is output from the sequentially selected output packet queue.
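
The selection rule can be pictured with a short Python sketch (a hedged reading of the abstract, not the patent's exact algorithm; the credit bookkeeping and the replenishment policy are assumptions): among queues that can currently transmit their head packet, prefer one that differs from the queue chosen last time.

    from collections import deque

    class BurstSuppressingScheduler:
        def __init__(self, weights):
            self.queues = [deque() for _ in weights]
            self.credit = [0] * len(weights)   # bytes each queue may currently send
            self.weights = list(weights)       # per-round credit increments
            self.last = None                   # queue picked on the previous turn

        def enqueue(self, qid, packet):
            self.queues[qid].append(packet)

        def _eligible(self):
            return [i for i, q in enumerate(self.queues)
                    if q and self.credit[i] >= len(q[0])]

        def dequeue(self):
            eligible = self._eligible()
            while not eligible and any(self.queues):   # replenish credit, DRR-style
                for i, q in enumerate(self.queues):
                    if q:
                        self.credit[i] += self.weights[i]
                eligible = self._eligible()
            if not eligible:
                return None
            # Prefer a queue different from the previously selected one.
            others = [i for i in eligible if i != self.last]
            qid = others[0] if others else eligible[0]
            packet = self.queues[qid].popleft()
            self.credit[qid] -= len(packet)
            self.last = qid
            return packet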

100 citations


Patent
20 Oct 1997
TL;DR: In this article, a packet processing and packet transfer scheme is proposed that is capable of reducing packet processing overhead by eliminating the need to decrypt and re-encrypt the entire packet when relaying encrypted packets.
Abstract: A packet processing and packet transfer scheme capable of reducing the packet processing overhead by eliminating a need to decrypt and re-encrypt the entire packet at a time of relaying encrypted packets. In a packet processing device for relaying encrypted packets, a packet transferred to the packet processing device is received, where the packet has a packet processing key to be used in a prescribed packet processing with respect to a data portion of the packet, and the packet processing key is encrypted by using a first master key shared between a last device that applied a cipher communication related processing to the packet and the packet processing device. Then, the packet processing key in the received packet is decrypted, without carrying out the prescribed packet processing with respect to the data portion of the packet, and the decrypted packet processing key is re-encrypted by using a second master key shared between a next device to apply the cipher communication related processing to the packet and the packet processing device. Then, the packet with the re-encrypted packet processing key encoded therein is transmitted toward a destination of the received packet.
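
The relay step can be sketched in a few lines of Python using Fernet from the cryptography package purely as a stand-in cipher (the patent does not name an algorithm, and the packet layout here is an assumption): only the small packet processing key is decrypted and re-encrypted at the relay, while the data portion passes through untouched.

    from cryptography.fernet import Fernet

    def relay(packet, prev_master_key, next_master_key):
        # packet = {"enc_key": ..., "data": ...}; "data" stays encrypted end to end.
        packet_key = Fernet(prev_master_key).decrypt(packet["enc_key"])
        packet["enc_key"] = Fernet(next_master_key).encrypt(packet_key)
        return packet    # forwarded toward the destination of the received packet

    k_prev, k_next = Fernet.generate_key(), Fernet.generate_key()
    pkt = {"enc_key": Fernet(k_prev).encrypt(b"per-packet key"), "data": b"ciphertext"}
    relay(pkt, k_prev, k_next)
    assert Fernet(k_next).decrypt(pkt["enc_key"]) == b"per-packet key"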

94 citations


Proceedings ArticleDOI
09 Apr 1997
TL;DR: It is shown that computing the maximum ergodic packet arrival rate is NP-hard, and an upper bound on the maximum ergodic throughput is given in terms of the eigenvalues of matrices related to the path-gain matrix.
Abstract: We consider schemes for reuse-efficient packet access in wireless data networks. We show that computing the maximum ergodic packet arrival rate is NP-hard. We give an upper bound on the maximum ergodic throughput in terms of the eigenvalues of matrices related to the path-gain matrix. We present simple, practical heuristic algorithms which exhibit good throughput and packet delay and report on results of preliminary simulations. More sophisticated algorithms that yield optimal throughput are also presented. A recent result of McKeown, Anantharam and Walrand (1996) on scheduling of input-queued switches is obtained as a by-product.

87 citations


Patent
20 Feb 1997
TL;DR: In this paper, a multiprocessor system includes a plurality of nodes and an interconnect that includes routers; each node includes a reliable packet mover and a fast frame mover.
Abstract: A multiprocessor system includes a plurality of nodes and an interconnect that includes routers. Each node includes a reliable packet mover and a fast frame mover. The reliable packet mover provides packets to the fast frame mover which adds routing information to the packet to form a frame. The route to each node is predetermined. The frame is provided to the routers which delete the route from the routing information. If the frame is lost while being routed, the router discards the frame. If the packet is received at a destination node, the reliable packet mover in that node sends an acknowledgement to the source node if the packet passes an error detection test. The reliable packet mover in the source node resends the packet if it does not receive an acknowledgement in a predetermined time. The fast frame mover randomly selects the route from a plurality of predetermined routes to the destination node according to a probability distribution.

82 citations


Patent
28 Mar 1997
TL;DR: In this article, a packet string having variable packet intervals is converted into one having even packet intervals, with each packet carrying a time stamp as information for reproducing the original packet string.
Abstract: A packet string having variable packet intervals is transmitted by converting it into a packet string having even packet intervals, with each packet carrying a time stamp as information for reproducing the original packet string. When the value of the time stamp, which is decided by adding a specified offset time to the synchronization time, is not smaller than the value of the time stamp attached to the previous packet, the time stamp is attached to the packet and the packet is transmitted. If the value of the time stamp becomes not greater than the value of the time stamp attached to the previous packet, as a result of a shortened offset time due to an increased bit rate of the original packet string, the packet is discarded so as to protect the transmission from being stopped.
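
The stamping rule reads naturally as a small filter; here is a hedged Python sketch (field and function names are assumptions): a packet is stamped with the synchronization time plus the offset and transmitted only if its stamp does not run backwards relative to the previous packet's stamp, otherwise it is discarded rather than stalling the output.

    def stamp_or_discard(packets):
        # packets: iterable of (payload, sync_time, offset_time) tuples.
        prev_stamp = None
        for payload, sync_time, offset_time in packets:
            stamp = sync_time + offset_time
            if prev_stamp is not None and stamp < prev_stamp:
                continue                      # discard instead of stopping transmission
            prev_stamp = stamp
            yield stamp, payload              # transmit with the time stamp attached

    out = list(stamp_or_discard([("p1", 0, 10), ("p2", 5, 10), ("p3", 9, 3), ("p4", 12, 10)]))
    assert [s for s, _ in out] == [10, 15, 22]   # the third packet is discarded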

Patent
Takeshi Aimoto1, Takeki Yazaki1
08 Dec 1997
TL;DR: In this article, a packet switch is described for setting a connection between the transmission source of a packet and its reception destination so as to perform communication; it includes a packet buffer with at least one input port and a plurality of output ports, and a register which holds threshold value information indicating the amount of packet buffer use that causes congestion.
Abstract: A packet switch for setting a connection between a transmission source of a packet and a reception destination thereof so as to perform communication. The invention includes a packet buffer which includes at least one input port and a plurality of output ports. An input packet from the input port is delivered to at least one output port in accordance with address information of the input packet and connection information having been set in the packet switch at the time of setting the connection between the transmission source and the reception destination. A bandwidth management packet for giving notice of a congested state of the packet switch is transferred on the connection. The invention further includes a register which holds threshold value information for indicating an amount of use of the packet buffer that causes congestion, a counter which provides a count representative of a current amount of use of the packet buffer, a comparator which compares the count from the counter and the threshold value information from the register and outputs a result of the comparison, and a congestion decision/notification circuit which writes congestion notification information into the bandwidth management packet based on a result of the comparison by the comparator.
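
A compact Python rendering of the counter/register/comparator arrangement described above (the unit of occupancy and the field names are assumptions): the comparator's result drives whether congestion notification is written into a passing bandwidth management packet.

    class CongestionMonitor:
        def __init__(self, threshold):
            self.threshold = threshold   # "register": buffer usage that indicates congestion
            self.in_use = 0              # "counter": current amount of buffer in use

        def on_enqueue(self):
            self.in_use += 1

        def on_dequeue(self):
            self.in_use -= 1

        def congested(self):
            return self.in_use >= self.threshold   # "comparator"

    def notify(bandwidth_mgmt_packet, monitor):
        # Congestion decision/notification step for a bandwidth management packet.
        if monitor.congested():
            bandwidth_mgmt_packet["congestion_notification"] = True
        return bandwidth_mgmt_packet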

Patent
09 Jan 1997
TL;DR: In this paper, a switched router is used for transmitting packetized data concurrently between a plurality of devices coupled to it; the router is programmed to route packets of data from various source ports to several destination ports.
Abstract: A switched router for transmitting packetized data concurrently between a plurality of devices coupled to the switched router. The devices are coupled to the I/O ports of the switched router. The switched router is then programmed to route packets of data from various source ports to several destination ports. Different packets may be transmitted concurrently through the switched router. The packets are comprised of a command word containing information corresponding to packet routing, data format, size, and transaction identification. Furthermore, the command word may include a destination identification number for routing the packet to a destination device, a source identification number used by a destination device to send back responses, a transaction number to tag requests that require a response, and a packet type value indicating a particular type of packet. In addition, there may be bits within a packet used to indicate a coherent transaction, guarantee bandwidth, an error during transmission, or a sync barrier for write ordering. Other types of packets may include a fetch and operation packet with increment by one, a fetch and operation packet with decrement by one, a fetch and operation packet with clear, a store and operation packet with increment by one, a store and operation packet with decrement by one, a store and operation packet with a logical OR, and a store and operation packet with a logical AND.
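
The abstract lists the command-word fields but not their widths or positions, so the layout below is purely hypothetical; the Python sketch only shows how a destination identification number, source identification number, transaction number, and packet type could be packed into and unpacked from a single word.

    def make_command_word(dest_id, src_id, txn, ptype):
        # Hypothetical layout: 10-bit dest, 10-bit src, 8-bit transaction, 4-bit type.
        assert dest_id < 2**10 and src_id < 2**10 and txn < 2**8 and ptype < 2**4
        return (dest_id << 22) | (src_id << 12) | (txn << 4) | ptype

    def parse_command_word(word):
        return {"dest_id": (word >> 22) & 0x3FF,
                "src_id":  (word >> 12) & 0x3FF,
                "txn":     (word >> 4)  & 0xFF,
                "ptype":   word & 0xF}

    w = make_command_word(dest_id=5, src_id=2, txn=17, ptype=3)
    assert parse_command_word(w) == {"dest_id": 5, "src_id": 2, "txn": 17, "ptype": 3}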

Patent
18 Feb 1997
TL;DR: In this paper, a flow of packets being forwarded by customer premises equipment to an endpoint of a fast packet switching network, which has a plurality of virtual connections provisioned from the endpoint to a plurality of destination endpoints, is inhibited when a parameter indicative of bandwidth usage violates a predetermined threshold.
Abstract: Methods and systems for controlling a flow of packets being forwarded by a customer premises equipment to an endpoint of a fast packet switching network having a plurality of virtual connections provisioned from the endpoint to a plurality of destination endpoints. A parameter indicative of bandwidth usage associated with the endpoint over at least two of the virtual connections is obtained. The flow of packets being forwarded to the endpoint is inhibited when the parameter violates a predetermined threshold.

Journal ArticleDOI
TL;DR: It is shown that, under the specific service discipline introduced here, there exists a set of control gains that result in asymptotic stability of the linearized network model, and that the resulting steady state rate allocation possesses the so-called max-min fairness property.
Abstract: In this paper, an analytical method for the design of a congestion control scheme in packet switching networks is presented. This scheme is particularly suitable for implementation in ATM switches, for the support of the available bit rate (ABR) service in ATM networks. The control architecture is rate-based with a local feedback controller associated with each switching node. The controller itself is a generalization of the standard proportional-plus-derivative controller, with the difference that extra higher-order derivative terms are involved to accommodate the delay present in high-speed networks. It is shown that, under the specific service discipline introduced here, there exists a set of control gains that result in asymptotic stability of the linearized network model. A method for calculating these gains is given. In addition, it is shown that the resulting steady state rate allocation possesses the so-called max-min fairness property. The theoretical results are illustrated by a simulation example, where it is shown that the controller designed, using the methods developed here, works well for both the service discipline introduced in this paper and for the standard FCFS scheme. © 1997 John Wiley & Sons, Ltd.

Patent
Takeshi Aimoto1, Takeki Yazaki1
03 Dec 1997
TL;DR: In this article, the authors proposed a packet level discard control table for packet-level discard priority in the presence of congestion state of an ATM switch in the upstream and a congestion control table in the downstream.
Abstract: An ATM switch having a packet level discard function includes an upstream congestion detection circuit for detecting a congestion state of an ATM switch provided upstream, and a packet level discard control table for holding, at every connection, a packet level discard priority indicating whether the ATM switch provided upstream has the packet level discard function or not; cells transmitted via an ATM switch not having a packet level discard function, or via an ATM switch which is not in the congestion state, are packet-level discarded in preference to other cells. Thus, it is possible to improve the goodput of an ATM network in which ATM switches having a packet level discard function and ATM switches not having a packet level discard function are provided in a mixed state.

Patent
24 Jun 1997
TL;DR: In this paper, a system and method for remotely waking up a device connected to a local area network (LAN) is disclosed, using a special data packet in which the destination address of the packet is embedded at least 16 consecutive times within the data field of the packet.
Abstract: A system and method for remotely waking up a device connected to a local area network (LAN) is disclosed. A special data packet is disclosed wherein the destination address of the packet is embedded at least 16 consecutive times within the data field of the packet. When this particular type of packet is transmitted on the LAN, it is first decoded by the I/O subsystem of the device to determine whether or not it is a remote wake-up packet. After determining that the packet received is a remote wake-up packet, a wake-up enable line is activated, thereby taking the system out of its low power mode and allowing further processing of future received packets.
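
Detecting such a packet is essentially a substring test; the Python sketch below illustrates the check (the constant name and the surrounding framing in the example payload are assumptions, not details taken from the patent).

    REPEAT_COUNT = 16

    def is_wakeup_packet(data_field, device_mac):
        # True if the device's address appears at least 16 consecutive times
        # within the packet's data field.
        return device_mac * REPEAT_COUNT in data_field

    mac = bytes.fromhex("0a1b2c3d4e5f")
    payload = b"\x00" * 8 + mac * 16 + b"\x00" * 4    # assumed example framing
    assert is_wakeup_packet(payload, mac)
    assert not is_wakeup_packet(mac * 15, mac)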

Proceedings ArticleDOI
06 Oct 1997
TL;DR: Tag switching facilitates the development of a routing system that is functionally rich, scalable, suitable for high forwarding performance, and is capable of graceful evolution to address new and emerging requirements in a timely fashion.
Abstract: Tag switching is based on two key ideas: (1) a single forwarding algorithm based on label (tag) swapping, and (2) support for a wide range of forwarding granularities that could be associated with a single tag. The combination of these two ideas facilitates the development of a routing system that is functionally rich, scalable, suitable for high forwarding performance, and capable of graceful evolution to address new and emerging requirements in a timely fashion. © (1997) COPYRIGHT SPIE--The International Society for Optical Engineering.

Patent
Graeme Roy Smith1
05 Jun 1997
TL;DR: In this paper, a data unit receives data packets and delivers them to packet switching circuitry; when each data packet is received, an entry corresponding to the packet concerned is made in the one of the receive queues (RQ) which corresponds to the intended destination of that packet.
Abstract: A data unit receives data packets and delivers them to packet switching circuitry. The data unit stores the received data packets in a memory. The memory has receive queues (RQ0 to RQ63) corresponding respectively to the different possible intended destinations of the received packets in the packet switching circuitry, and when each data packet is received, an entry corresponding to the packet concerned is made in the one of the receive queues (RQ) which corresponds to the intended destination of the packet. A multicast handling section operates, when such a received data packet is a multicast packet having two or more intended destinations, to cause the packet registration means to make an entry corresponding to the multicast packet concerned in each receive queue corresponding to one of those destinations. An output section operates, for each receive queue, to read the entries in the queue concerned in the order in which they were made and, for each entry read, to read out the corresponding data packet from the memory and output it to the packet switching circuitry. Such a data unit is suitable for use in ATM switching apparatus in which the data packets each comprise one or more ATM cells, and can ensure cell sequence integrity for multicast and unicast cells having the same destination. Self-routing cross-connect switching devices for use in such ATM switching apparatus are also disclosed.

Proceedings ArticleDOI
04 May 1997
TL;DR: The results show that the proposed method for admission control and radio resource management in a multispot-beam satellite network can be implemented in a distributed way that takes user mobility, in addition to the service-specific parameters, into account.
Abstract: This paper investigates the strategy that has to be adopted for connection admission control and handoff execution in dynamic satellite networks that use on-board satellite fast packet switching as part of a mobile broadband ISDN network. A new method for admission control and radio resource management in a multispot-beam satellite network is proposed and its performance is examined by a simulation model. The results show that a flexible connection admission control algorithm can be implemented in a distributed way that takes user mobility, in addition to the service-specific parameters, into account.

Patent
16 Nov 1997
TL;DR: In this paper, a data packet switch is disclosed which consists of multiple independent stages, where the different stages of the switch operate without a common centralized controller and the corresponding switching functions are performed locally by each stage based only on the information available locally.
Abstract: A data packet switch in general, and an Asynchronous Transfer Mode switch in particular, employing a plurality of physically separate memory modules operates like a single shared memory switch by allowing sharing of all of the memory modules among all of the inputs and outputs of the switch. The disclosed switching apparatus consists of multiple independent stages, where different stages of the switch operate without a common centralized controller. The disclosed switch removes the performance bottleneck commonly caused by use of a centralized controller in the switching system. Incoming data packets are assigned routing parameters by a parameter assignment circuit based on the packets' output destinations and the current state of the switching system. The routing parameters are then attached as an additional tag to input packets for their propagation through the various stages of the switching apparatus. Packets with the attached routing parameters pass through the different stages of the switching apparatus, and the corresponding switching functions are locally performed by each stage based only on the information available locally. Memory modules, along with their controllers, use locally available information to perform memory operations and related memory management to realize the overall switching function. The switching apparatus and method facilitate sharing of physically separate memory modules without using a centralized memory controller, and provide higher scalability, simplified circuit design, pipelined processing of data packets, and the ability to realize various memory sharing schemes for a plurality of memory modules in the switch.

Journal ArticleDOI
TL;DR: This work focuses on tail dropping (TD) and early packet discard (EPD) as selective cell discard schemes which force the switches to discard some of the arriving cells instead of relaying them, and exactly analyzes the packet loss probability in a system applying these schemes.
Abstract: In transport-layer protocols such as TCP over ATM networks, a packet is discarded when one or more cells of that packet are lost, and the destination node then requires its source to retransmit the corrupted packet. Therefore, once one of the cells constituting a packet is lost, the subsequent cells of the corrupted packet waste network resources. Thus, discarding those cells will enable us to utilize network resources efficiently and will improve the packet loss probability. We focus on tail dropping (TD) and early packet discard (EPD) as selective cell discard schemes which force the switches to discard some of the arriving cells instead of relaying them. We exactly analyze the packet loss probability in a system applying these schemes. Their advantages and limits are then discussed based on numerical results derived through the analysis.
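
To make the two discard schemes concrete, here is a hedged Python sketch of a per-virtual-circuit cell buffer (the buffer model is an assumption made for illustration, not the paper's analytical model): EPD refuses an entire packet at its first cell once occupancy crosses a threshold, while tail dropping, once a cell of a packet has been lost to overflow, discards that packet's remaining cells instead of relaying them.

    class CellBuffer:
        def __init__(self, capacity, epd_threshold):
            self.capacity = capacity
            self.epd_threshold = epd_threshold
            self.occupancy = 0
            self.dropping = set()    # VCs whose current packet is being discarded

        def on_cell(self, vc, first_cell, last_cell, use_epd=True):
            # Return True if the arriving cell is accepted into the buffer.
            if first_cell:
                self.dropping.discard(vc)
                if use_epd and self.occupancy >= self.epd_threshold:
                    self.dropping.add(vc)        # EPD: refuse the whole packet early
            if vc in self.dropping:
                return False
            if self.occupancy >= self.capacity:  # overflow in the middle of a packet
                self.dropping.add(vc)            # TD: drop the rest of this packet too
                return False
            self.occupancy += 1
            if last_cell:
                self.dropping.discard(vc)
            return True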

Journal ArticleDOI
TL;DR: It is shown that 100% goodput can be achieved with substantially smaller buffers than predicted by the worst case analysis, although the required buffer space can be significant when the link speed is substantially higher than the rate of the individual virtual circuits.
Abstract: In a previous paper, one of the authors gave a worst case analysis for the early packet discard (EPD) technique for maintaining packet integrity during overload in ATM switches. This analysis showed that to ensure 100% goodput during overload under worst case conditions requires a buffer with enough storage for one maximum length packet from every active virtual circuit. This paper refines that analysis, using assumptions that are closer to what we expect to see in practice, and examines how EPD performs when the buffer is not large enough to achieve 100% goodput. We show that 100% goodput can be achieved with substantially smaller buffers than predicted by the worst case analysis, although the required buffer space can be significant when the link speed is substantially higher than the rate of the individual virtual circuits. We also show that high goodputs can be achieved with more modest buffer sizes, but that EPD exhibits anomalies with respect to buffer capacity, in that there are situations in which increasing the amount of buffering can cause the goodput to decrease. These results are validated by comparison with simulation.

Proceedings ArticleDOI
08 Jun 1997
TL;DR: An overview of the basic design principles and trade-offs of output-buffer ATM switching is given, and bandwidth scheduling and memory bandwidth requirements are described.
Abstract: In this paper, we give an overview of the basic design principles and trade-offs of output-buffer ATM switching. Output-buffer switches give optimal performance in terms of offering bandwidth guarantees to individual flows. Bandwidth scheduling and memory bandwidth requirements are also described.

Patent
28 Oct 1997
TL;DR: In this paper, improved methods and apparatus to facilitate switching Asynchronous Transfer Mode (ATM) cells through an ATM switching circuit (300) are disclosed, which facilitate the implementation of per virtual connection buffering (302), per virtual connection arbitration (304), and per virtual connection back-pressuring to improve switching efficiency and reduce the complexity and/or costs.
Abstract: Improved methods and apparatus to facilitate switching Asynchronous Transfer Mode (ATM) cells through an ATM switching circuit (300) are disclosed. The improved methods and apparatus facilitate the implementation of per virtual connection buffering (302), per virtual connection arbitration (304) of ATM cells, and/or per virtual connection back-pressuring to improve switching efficiency and/or reduce the complexity and/or costs of the ATM switching circuit (300).

Patent
03 Jul 1997
TL;DR: In this article, the authors propose to change packet distribution destinations and packet distribution conditions simply and dynamically, by realizing a single system image at the network address level, in a control method that distributes packets to processing nodes in a device that connects an external network to a cluster network having plural processing nodes.
Abstract: PROBLEM TO BE SOLVED: To change packet distribution destinations and packet distribution conditions simply and dynamically, by realizing a single system image at the network address level, in a control method that distributes packets to processing nodes in a device that connects an external network to a cluster network having plural processing nodes. SOLUTION: Upon receipt, from a packet reception section 11, of a packet addressed to the address representing the cluster, a pattern matching section 12 of the repeater 1 uses a distribution control table 10 to conduct pattern matching based on the packet's transmission source address, source port, destination port and the like. A hash calculation section 13 obtains the argument of a hash function from the matching result and conducts the hash calculation. A destination node extraction section 14 decides the packet's destination node from the hash result by using the distribution control table 10.
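
A rough Python rendering of the distribution flow (the table layout and the choice of hash are assumptions made for illustration): match the arriving packet against the distribution control table, hash the matched fields, and pick the destination node from the hash result.

    import hashlib

    # distribution control table: (match predicate, fields hashed, candidate nodes)
    distribution_table = [
        (lambda p: p["dst_port"] == 80, ("src_addr", "src_port"), ["node0", "node1", "node2"]),
        (lambda p: True,                ("src_addr",),            ["node3"]),
    ]

    def pick_node(packet):
        for matches, key_fields, nodes in distribution_table:
            if matches(packet):                                    # pattern matching section
                key = "|".join(str(packet[f]) for f in key_fields).encode()
                h = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")  # hash calculation
                return nodes[h % len(nodes)]                       # destination node extraction
        raise ValueError("no matching rule in the distribution control table")

    print(pick_node({"src_addr": "10.0.0.7", "src_port": 4321, "dst_port": 80}))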

Patent
21 Aug 1997
TL;DR: In this article, a method of internal data communication for a packet-switching computer includes the step of configuring internal components of the computer and an internal packet switch bank to communicate using a fixed size local data packet.
Abstract: A method of internal data communication for a packet-switching computer includes the step of configuring internal components of the computer and an internal packet switch bank to communicate using a fixed-size local data packet. The local data packet has a fixed-size header portion and a fixed-size data portion. The local data packet may be directly compatible with an ATM cell. Alternatively, only the data portion size of the local data packet may correspond to the size of the ATM cell data payload or an integer multiple thereof. In order to increase data throughput, the size of the header portion of the local data packet may be less than the size of the ATM cell header.

Patent
12 Dec 1997
TL;DR: In this article, the authors propose to continue communication while eliminating route selection time, even when a terminal moves by a large amount, by permitting data transmission even under conditions where the channel quality of a radio channel or the like is not fixed.
Abstract: PROBLEM TO BE SOLVED: To continue communication while eliminating route selection time, even when a terminal moves by a large amount, by permitting data transmission even under conditions where the channel quality of a radio channel or the like is not fixed. SOLUTION: A transmission source terminal 1a transmits a packet B in which transmission source/destination information is set. Terminals 1b and 1c in the radio wave reach area A1a receive the packet B, check whether the packet can be transferred, and copy-transfer it as a first copy packet C. Terminals 1a, 1d and 1e receive the first copy packet C. Based on the transmission source information, terminal 1a discards the first copy packet C. Terminals 1d and 1e check the transmission destination information and whether the copy packet can be transferred, and copy-transfer the first copy packet C as a second copy packet D. Terminals 1b, 1c and 1d check the transmission destination information and whether the copy packet can be transferred, and discard the second copy packet D, while the transmission destination terminal 1f takes in the second copy packet D based on the transmission destination information. COPYRIGHT: (C)1999,JPO

Journal ArticleDOI
TL;DR: An OTDM multihop prototype network, developed at Princeton University, is presented, and 100 Gbit/s optical packet switching in an 8-node transparent shuffle network is demonstrated, offering extremely high bandwidth and low latency.
Abstract: An OTDM multihop prototype network, developed at Princeton University, is presented. By employing a new self-routing scheme with special address coding suitable for optical packet switching, 100 Gbit/s optical packet switching in an 8-node transparent shuffle network is demonstrated offering extremely high bandwidth and low latency.

Book ChapterDOI
P. Gambini1
01 Jan 1997
TL;DR: A review of the status of research in the field of guided wave photonic packet switched networks is presented, as well as techniques for routing, contention resolution and packet synchronization in the switching nodes.
Abstract: A review of the status of research in the field of guided wave photonic packet switched networks is presented. Network configurations and packet formats proposed so far are described, as well as techniques for routing, contention resolution and packet synchronization in the switching nodes.