
Showing papers on "Fast packet switching published in 2001"


Journal ArticleDOI
TL;DR: A survey of two new technologies still in the experimental stage, optical packet switching and optical burst switching, with comments on their suitability for transporting IP traffic.
Abstract: Wavelength-division multiplexing appears to be the solution of choice for providing a faster networking infrastructure that can meet the explosive growth of the Internet. Several different technologies have been developed so far for the transfer of data over WDM. We survey two new technologies which are still in the experimental stage, optical packet switching and optical burst switching, and comment on their suitability for transporting IP traffic.

413 citations


Patent
04 May 2001
TL;DR: In this article, a system and method for facilitating packet transformation of multi-protocol, multi-flow, streaming data is presented, where packet portions subject to change are temporarily stored, and acted upon through processing of protocol-dependent instructions, resulting in a protocol-dependent modification of the temporarily stored packet information.
Abstract: A system and method for facilitating packet transformation of multi-protocol, multi-flow, streaming data. Packet portions subject to change are temporarily stored, and acted upon through processing of protocol-dependent instructions, resulting in a protocol-dependent modification of the temporarily stored packet information. Validity tags are associated with different segments of the temporarily-stored packet, where the state of each tag determines whether its corresponding packet segment will form part of the resulting modified packet. Only those packet segments identified as being part of the resulting modified packet are reassembled prior to dispatch of the packet.

378 citations
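The validity-tag mechanism described in the abstract lends itself to a compact sketch. Assuming, hypothetically, that the staged packet is held as a list of byte segments with one boolean tag each, reassembly before dispatch keeps only the tagged segments:

```python
def reassemble(segments, validity_tags):
    # Only segments whose validity tag is set form part of the resulting
    # modified packet; untagged segments are dropped before dispatch.
    return b"".join(seg for seg, valid in zip(segments, validity_tags) if valid)
```

Under this sketch, replacing a header field amounts to clearing the tag on the original segment and appending a rewritten copy with its tag set.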


Journal ArticleDOI
TL;DR: This work presents enhancements to two commonly used IP signaling protocols, RSVP and LDP, to support GMPLS and discusses mechanisms for bidirectional LSP setup, and describes the applications of suggested labels.
Abstract: Generalized multiprotocol label switching (GMPLS), also referred to as multiprotocol lambda switching, is a multipurpose control plane paradigm that supports not only devices that perform packet switching, but also devices that perform switching in the time, wavelength, and space domains. The development of GMPLS necessitates enhancements to existing IP signaling and routing protocols. We present enhancements to two commonly used IP signaling protocols, RSVP and LDP, to support GMPLS. We illustrate the concept of hierarchical label switched path setup with an example, discuss mechanisms for bidirectional LSP setup, and describe the applications of suggested labels. We also discuss the important problem of protection and restoration in the GMPLS context. Finally, we describe how various recovery mechanisms can be implemented within the GMPLS framework.

349 citations


Journal ArticleDOI
TL;DR: This article describes how OBS can be applied to the next-generation optical Internet, and in particular how offset times and delayed reservation can help avoid the use of buffers and support quality of service at the WDM layer.
Abstract: In an effort to eliminate the electronic bottleneck, new optical switches/routers (hardware) are being built for the next-generation optical Internet where IP runs over an all-optical WDM layer. However, important issues yet to be addressed in terms of protocols (software) are how to develop a new paradigm that does not require any buffering at the WDM layer, as in circuit switching, and how to eliminate intermediate layers that exist mainly for historical reasons. At the same time, such a paradigm should also efficiently support bursty traffic with high resource utilization as in packet switching. This article surveys design issues related to a new switching paradigm called optical burst switching, which achieves a balance between circuit and packet switching while avoiding their shortcomings. We describe how OBS can be applied to the next-generation optical Internet, and in particular how offset times and delayed reservation can help avoid the use of buffers and support quality of service at the WDM layer.

264 citations
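The offset-time and delayed-reservation ideas can be illustrated with a small numeric sketch. The function names, the fixed per-hop processing delay, and the linear per-class extra offset are illustrative assumptions, not the article's notation:

```python
def base_offset(per_hop_processing, hops):
    # The data burst must never overtake its control packet, so the
    # offset covers control-packet processing time at every hop.
    return per_hop_processing * hops

def class_offset(base, qos_class, extra_per_class):
    # QoS at the WDM layer without buffers: higher service classes add
    # extra offset so their reservations are placed earlier in time.
    return base + qos_class * extra_per_class

def burst_reservation(ctrl_arrival, remaining_offset, burst_len):
    # Delayed reservation: the wavelength is held only for the interval
    # the burst actually occupies, not from control-packet arrival.
    start = ctrl_arrival + remaining_offset
    return (start, start + burst_len)
```

For example, with 10 time units of control processing per hop over 3 hops, the base offset is 30 units, and a burst of length 50 whose control packet reaches a node at time 100 with 30 units of offset remaining reserves the wavelength only over (130, 180).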


Patent
21 Feb 2001
TL;DR: A group packet encapsulation and compression method is proposed in this article, where packets queued at a node configured in accordance with the present invention are classified, grouped, and encapsulated into a single packet as a function of having another such configured node in their path.
Abstract: A group packet encapsulation and (optionally) compression system and method, including an encapsulation protocol increases packet transmission performance between two gateways or host computers by reducing data-link layer framing overhead, reducing packet routing overhead in gateways, compressing packet headers in the encapsulation packet, and increasing loss-less data compression ratio beyond that otherwise achievable in typical systems. Packets queued at a node configured in accordance with the present invention are classified, grouped, and encapsulated into a single packet as a function of having another such configured node in their path. The nodes exchange encapsulation packets, even though the packets within the encapsulation packet may ultimately have different destinations. Compression within an encapsulation packet may be performed on headers, payloads, or both.

161 citations


Patent
20 Dec 2001
TL;DR: In this paper, a system and method are disclosed for transparently proxying a connection to a protected machine, which includes monitoring a communication packet on a network at a proxy machine.
Abstract: A system and method are disclosed for transparently proxying a connection to a protected machine. The method includes monitoring a communication packet on a network at a proxy machine. The communication packet has a communication packet source address, a communication packet source port number, a communication packet destination address, and a communication packet destination port number. The proxy determines whether to intercept the communication packet based on whether the communication packet destination address and the communication packet destination port number correspond to a protected destination address and a protected destination port number stored in a proxy list. The proxy then determines whether to proxy a proxied connection associated with the communication packet based on the communication packet source address and the communication packet source port number. A protected connection is terminated from the proxy machine to a protected machine. The protected machine corresponds to the communication packet destination address and the communication packet destination port number. A response is formed to the communication packet under a network protocol by sending a responsive packet from the proxy machine. The responsive packet has a header having a responsive packet source address and a responsive packet source port number such that the responsive packet source address and the responsive packet source port number are the same as the communication packet destination address and the communication packet destination port number.

151 citations
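The interception test and the spoofed response header described in the abstract reduce to two small decisions. The sketch below is a hypothetical simplification that represents headers as dictionaries:

```python
def should_intercept(pkt, proxy_list):
    # Intercept only when the packet's destination (address, port) pair
    # appears in the proxy list of protected endpoints.
    return (pkt["dst_addr"], pkt["dst_port"]) in proxy_list

def response_header(pkt):
    # Transparent proxying: the reply is sourced from the protected
    # endpoint's own address and port, so the client never sees the proxy.
    return {"src_addr": pkt["dst_addr"], "src_port": pkt["dst_port"],
            "dst_addr": pkt["src_addr"], "dst_port": pkt["src_port"]}
```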


Journal ArticleDOI
TL;DR: This paper analyzes the packet loss and delay performance of an arrayed-waveguide-grating-based (AWG) optical packet switch developed within the EPSRC-funded project WASPNET (wavelength switched packet network).
Abstract: This paper analyzes the packet loss and delay performance of an arrayed-waveguide-grating-based (AWG) optical packet switch developed within the EPSRC-funded project WASPNET (wavelength switched packet network). Two node designs are proposed based on feedback and feed-forward strategies, using sharing among multiple wavelengths to assist in contention resolution. The feedback configuration allows packet priority routing at the expense of using a larger AWG. An analytical framework has been established to compute the packet loss probability and delay under Bernoulli traffic, justified by simulation. A packet loss probability of less than 10^-9 was obtained with a buffer depth per wavelength of 10 for a switch size of 16 inputs-outputs, four wavelengths per input at a uniform Bernoulli traffic load of 0.8 per wavelength. The mean delay is less than 0.5 timeslots at the same buffer depth per wavelength.

139 citations


Journal ArticleDOI
W. Bux1, W.E. Denzel, T. Engbersen, Andreas Herkersdorf, Ronald P. Luijten 
TL;DR: The state of the art and the future of packet processing and switching are reviewed, and architectural and design issues that must be addressed to allow the evolution of packet switch fabrics to terabit-per-second throughput performance are discussed.
Abstract: We provide a review of the state of the art and the future of packet processing and switching. The industry's response to the need for wire-speed packet processing devices whose function can be rapidly adapted to continuously changing standards and customer requirements is the concept of special programmable network processors. We discuss the prerequisites of processing tens to hundreds of millions of packets per second and indicate ways to achieve scalability through parallel packet processing. Tomorrow's switch fabrics, which will provide node-internal connectivity between the input and output ports of a router or switch, will have to sustain terabit-per-second throughput. After reviewing fundamental switching concepts, we discuss architectural and design issues that must be addressed to allow the evolution of packet switch fabrics to terabit-per-second throughput performance.

122 citations


Patent
26 Feb 2001
TL;DR: In this article, a sender converts a non-compressed packet, which is to be transmitted, into a full-header packet including a full header or a header-compressed packet, and sends the converted packet to a receiver. The receiver receives the packet transmitted from the sender, and converts the received packet into a decompressed packet.
Abstract: A sender converts a non-compressed packet, which is to be transmitted, into a full-header packet including a full header or a header-compressed packet including a compressed header, and sends the converted packet to a receiver. The receiver receives the packet transmitted from the sender, and converts the received packet into a decompressed packet. In case the full-header packet or header-compressed packet is lost between the sender and receiver, the receiver keeps the header-compressed packets received during the interval from the packet loss to the next reception of a full-header packet, and decompresses the compressed headers of the kept header-compressed packets on the basis of the contents of the full header of that full-header packet.

109 citations


Journal ArticleDOI
TL;DR: Results show that the video quality can be substantially improved by utilizing the frame error information at the UDP and application layers, and several maximal distance separable (MDS) code-based packet-level error control coding schemes are proposed.
Abstract: Packet video will become a significant portion of emerging and future wireless/Internet traffic. However, network congestion and wireless channel errors yield substantial packet loss and degraded video quality. In this paper, we propose a new complete user datagram protocol (CUDP), which utilizes channel error information obtained from the physical and link layers to assist error recovery at the packet level. We propose several maximal distance separable (MDS) code-based packet-level error control coding schemes and derive analytical formulas to estimate the equivalent video frame loss for different versions of the user datagram protocol (UDP). We validate the proposed packet coding and CUDP protocol using MPEG-coded video under various Internet packet loss and wireless channel profiles. Theoretical and simulation results show that the video quality can be substantially improved by utilizing the frame error information at the UDP and application layers.

108 citations
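As a toy illustration of packet-level MDS erasure coding (not the paper's actual schemes), the simplest MDS code is a single XOR parity packet over k equal-length data packets, a (k+1, k) code that recovers any one lost packet:

```python
def xor_bytes(a, b):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    # (k+1, k) single-parity code: the parity packet is the XOR of all
    # k equal-length data packets.
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover_one(received_packets, parity):
    # A single lost packet equals the parity XORed with every packet
    # that did arrive (each received packet cancels out of the parity).
    out = parity
    for p in received_packets:
        out = xor_bytes(out, p)
    return out
```

Stronger MDS codes such as Reed-Solomon generalize this to recover multiple erasures at the cost of more parity packets.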


Journal ArticleDOI
TL;DR: This article analyses the rationale and technical solutions for the use of optical packet switching techniques for both backbone and metropolitan applications and provides information on state-of-the-art technologies available for medium-term product development.
Abstract: This article analyses the rationale and technical solutions for the use of optical packet switching techniques for both backbone and metropolitan applications. It also provides information on state-of-the-art technologies available for medium-term product development.

Patent
28 Sep 2001
TL;DR: In this article, a gate node reports current position information of a mobile terminal managed by the gate node itself to a communicating edge node, which then routes packets destined for the mobile terminal to a resident edge node, based on the reported cache information, in place of the gate node.
Abstract: A gate node reports current position information of a mobile terminal managed by the gate node itself to a communicating edge node which routes a packet destined for the mobile terminal to the gate node. The communicating edge node then carries out routing of the packet destined for the mobile terminal to a resident edge node based on the reported cache information, in place of the gate node. This achieves high-speed handover and route optimization in a packet communication system, without constraining the packet transfer route.

Patent
James L. Jason1
22 Oct 2001
TL;DR: In this paper, a method of determining a maximum packet size for data packets sent along a network path is proposed, where a sending computer sends a packet to a receiving computer through a sending interface.
Abstract: A method of determining a maximum packet size for data packets sent along a network path. A sending computer sends a packet to a receiving computer through a sending interface. The packet is fragmented during transfer to a receiving interface. The fragments are analyzed at the receiving interface and their size determined. The size of a fragment is compared to a pre-determined maximum packet size, and in response to the comparison, the maximum packet size is changed. The change is then reported to the sending interface and stored in a memory. Subsequent communications from the sending interface to the receiving interface are sent in packets of the size stored in the memory. Because the maximum packet size of a network path can change over time, test packets can be sent periodically to determine the maximum packet size.
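The receiver-side comparison step can be sketched in a few lines. The function name and return convention are hypothetical, not the patent's terminology:

```python
def update_max_packet_size(fragment_sizes, current_max):
    # The largest fragment that survived the path bounds the usable
    # packet size: if it is smaller than the stored maximum, shrink the
    # maximum and signal that the change must be reported to the sender.
    largest = max(fragment_sizes)
    if largest < current_max:
        return largest, True   # new maximum; report back to sender
    return current_max, False  # no change needed
```

Periodic test packets would re-run this check, since the path MTU can change over time as routes change.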


Patent
Shih-Hsiung Ni1
22 Oct 2001
TL;DR: In this article, the authors propose a method for prioritizing packet flows within a switching network, which includes the steps of receiving packets at an input port, stamping the packets with an arrival time, and classifying the packet into a flow, wherein the flow is determined based upon at least a class of service of the packet.
Abstract: The present invention provides a method for prioritizing packet flows within a switching network. The method includes the steps of receiving packets at an input port, stamping the packets with an arrival time, and classifying the packet into a flow, wherein the flow is determined based upon at least a class of service of the packet, assigning the packet to a queuing ring according to the flow of the packet, and maintaining a flow ratio pending within the switch based upon the classification of the packet.
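A minimal sketch of those steps, assuming dictionary-based packets and a hypothetical modular mapping from class of service to queuing ring (the patent does not specify the mapping):

```python
import time
from collections import Counter

class Classifier:
    """Illustrative sketch: stamp arrival time, classify into a flow by
    class of service, assign a queuing ring, and track per-flow counts
    (a stand-in for the pending 'flow ratio' maintained in the switch)."""

    def __init__(self, num_rings):
        self.num_rings = num_rings
        self.counts = Counter()

    def ingest(self, packet):
        packet = dict(packet, arrival=time.monotonic())  # arrival stamp
        flow = packet["cos"]             # flow keyed by class of service
        ring = flow % self.num_rings     # hypothetical ring assignment
        self.counts[flow] += 1
        return packet, ring

    def flow_ratio(self, flow):
        # Fraction of all pending packets that belong to this flow.
        total = sum(self.counts.values())
        return self.counts[flow] / total if total else 0.0
```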

Patent
Tetsumei Tsuruoka1, Yuji Kojima1
17 Sep 2001
TL;DR: A packet processing device capable of restraining overhead and processing packets at high speed is defined in this article, where the internal information handover section hands over internal information of a packet processor.
Abstract: A packet processing device capable of restraining overhead and processing packets at high speed. A packet input section receives a packet, and an internal information handover section hands over internal information of a packet processor. A packet computing section processes the input packet in accordance with the internal information, and a packet output section outputs the processed packet. A communication line connects such packet processors in series.

Patent
22 May 2001
TL;DR: In this article, the authors proposed a centralized control and signaling interworking function (CS-IWF) device that performs call control functions and adminstrative functions and is adapted to interface narrowband and broadband signaling for call processing and control within the ATM switching network.
Abstract: An Asynchronous Transfer Mode (ATM)-based distributed network switching system includes an ATM switching network (26) that dynamically sets up individual switched virtual connections. The system also includes multiple access interworking function (A-IWF) devices each operating as a gateway that enables customer premises devices to directly interface into the distributed ATM switching fabric. The system further includes a centralized control and signaling interworking function (CS-IWF) device that performs call control functions and administrative functions and is adapted to interface narrowband and broadband signaling for call processing and control within the ATM switching network (26). The CS-IWF device (30) may be a server farm.

Patent
09 Jul 2001
TL;DR: In this article, a method of transmitting packet data to a receiver in a communication system using a hybrid ARQ technique is disclosed, in which a quality index of the first transmitted packet, such as its signal-to-noise (Eb/No) ratio, is compared to a predetermined threshold value when an error occurs while decoding the packet.
Abstract: A method of transmitting packet data to a receiver in a communication system using a Hybrid ARQ technique is disclosed. In accordance with one illustrative embodiment of the present invention, a quality index of the first transmitted packet, such as its signal-to-noise (Eb/No) ratio, is compared to a predetermined threshold value when an error occurs while decoding the packet. If the index is greater than or equal to the threshold value, the first transmitted packet is stored in a buffer and the receiver requests transmission of a second, additional packet encoded with a lower code rate. On the other hand, if the index is less than the threshold value, the receiver requests retransmission of the first packet encoded with the same code rate.
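The receiver's decision on a decode error is a simple threshold test. A sketch, with hypothetical names and dB units assumed:

```python
def harq_response(snr_db, threshold_db):
    # On a decode error: if the received packet's quality (e.g. Eb/No)
    # meets the threshold, buffer it for combining and ask for an
    # incremental packet at a lower code rate; otherwise the stored
    # signal is too poor to help, so ask for a plain retransmission.
    if snr_db >= threshold_db:
        return "buffer_and_request_lower_rate"
    return "request_retransmission_same_rate"
```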

Patent
28 Jun 2001
TL;DR: In this paper, a device for switching packets in a network includes a switching core and a plurality of ports coupled to pass the packets from one to another through the switching core, with respect to each packet among the packets switched by the device, a receiving port coupled to receive the packet from a packet source, and a destination port to which the packet is passed for conveyance to a packet destination.
Abstract: A device for switching packets in a network includes a switching core and a plurality of ports, coupled to pass the packets from one to another through the switching core. The ports include, with respect to each packet among the packets switched by the device, a receiving port, coupled to receive the packet from a packet source, and a destination port, to which the packet is passed for conveyance to a packet destination. The ports also include one or more cache memories, respectively associated with one or more of the ports, each of the cache memories being configured to hold a forwarding database cache for reference by the receiving port with which the cache memory is associated in determining the destination port of the packet.

Proceedings ArticleDOI
25 Nov 2001
TL;DR: This paper designs a novel packet classification engine capable of simultaneously processing multi-field searches, especially for IPv6 packets with their relatively long addresses (128 bits).
Abstract: Typically, high-end routers/switches classify a packet by looking at multiple fields of the IP/TCP headers to recognize which flow the packet belongs to. Several packet classification algorithms to accelerate packet processing and reduce the memory requirement have been proposed. But it is not easy to implement these algorithms in hardware so as to look up these multiple fields at the same time. This paper designs a novel packet classification engine capable of simultaneously processing multi-field searches, especially for IPv6 packets with their relatively long addresses (128 bits). To classify IPv6 packets at wire speed, a CLM (CAM-Like Memory)-based hardware architecture is considered and five fields (source IPv6 address, destination IPv6 address, source port, destination port, and protocol) are used as the search key. Evaluation results indicate that, compared with typical market-leading search engines, the proposed hardware architecture provides a 30% speed-up. A compact method is also provided to compress the bit-width required to represent the multiple fields of an IPv6 packet, saving about 20% of the memory space required for the IPv6 rule table.

Patent
Gerald Grand1, Niki Pantelias1, R. Jeff Lee1, Michael Zelnick1, Francisco J. Gomez1 
31 Dec 2001
TL;DR: In this article, a method and system for creating an ethernet-formatted packet from an upstream DOCSIS packet is presented, based on the packet characteristic data, which includes identifiers for the transmitting remote device and the channel over which the transmission is sent.
Abstract: A method and system for creating an Ethernet-formatted packet from an upstream DOCSIS packet. The upstream packet is first received along with packet characteristic data that is contained in physical layer prepend data and in the packet header. A packet tag is then created, based on the packet characteristic data. The packet characteristic data includes identifiers for the transmitting remote device and the channel over which the transmission is sent. Packet characteristic data also includes information about the physical characteristics of the transmission signal, such as the power level and time offset. The packet characteristic data also includes administrative information, such as the minislot count at which the packet is received and whether the packet was received in contention. The packet tag is appended to the payload of the upstream packet. Also appended to the payload are an encapsulation tag and source and destination address headers. The result is a packet in an Ethernet format. The resulting packet can therefore be sent using the Ethernet protocol. The packet includes information that characterizes a DOCSIS packet. In a distributed cable modem termination system, this additional characterizing information can be used by processes further upstream, such as packet classification. An analogous operation can take place with respect to packets going downstream. Here, a DOCSIS packet is formed at an intermediate node, on the basis of a received Ethernet-formatted packet.

Proceedings ArticleDOI
V. Srinivasan1
22 Apr 2001
TL;DR: A new filter matching scheme called entry-pruned tuple search is presented and its advantages over previously presented algorithms are discussed, and an incremental update algorithm based on maintaining an event list that can be applied to many of the previously presented filter matching schemes which did not support incremental updates are presented.
Abstract: Packet classification and fast filter matching have been an important field of research. Several algorithms have been proposed for fast packet classification. We first present a new filter matching scheme called entry-pruned tuple search and discuss its advantages over previously presented algorithms. We then show how this algorithm blends very well with an earlier packet classification algorithm that uses markers and precomputation, to give a blended entry-pruned tuple search with markers and precomputation (EPTSMP). We present performance measurements using several real-life filter databases. For a large real-life database of 1777 filters, our preprocessing times were close to 9 seconds; a lookup takes about 20 memory accesses and the data structure takes about 500 K bytes of memory. Then, we present scenarios that will require various programs/modules to automatically generate and add filters to a filter processing engine. We then consider issues in enabling this. We need policies that govern what filters can be added by different modules. We present our filter policy management architecture. We then show how to support fast filter updates. We present an incremental update algorithm based on maintaining an event list that can be applied to many of the previously presented filter matching schemes which did not support incremental updates. We then describe the event list based incremental update algorithm as it applies to EPTSMP. To stress the generality of the approach, we also describe how our update technique can be used with the packet classification technique based on crossproducing. We conclude with an outline of a hardware implementation of EPTSMP that can handle OC192 rates with 40 byte minimum packet lengths.
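For context, the baseline tuple-space search that entry pruning improves on can be sketched in a few lines: filters sharing a (source-prefix-length, destination-prefix-length) tuple live in one hash table keyed by the masked address bits, and a lookup probes each tuple's table once. This is an illustrative two-field version over 32-bit addresses, not the paper's EPTSMP:

```python
def mask_key(src, dst, slen, dlen):
    # Keep only the first slen/dlen bits of each 32-bit address.
    return (src >> (32 - slen) if slen else 0,
            dst >> (32 - dlen) if dlen else 0)

class TupleSpace:
    """Baseline tuple-space search: one hash table per prefix-length
    tuple, keyed by the masked prefix bits of the filter."""

    def __init__(self):
        self.tuples = {}  # (slen, dlen) -> {masked key -> filter id}

    def add(self, slen, dlen, src, dst, fid):
        table = self.tuples.setdefault((slen, dlen), {})
        table[mask_key(src, dst, slen, dlen)] = fid

    def lookup(self, src, dst):
        # Probe every tuple's table with the packet's addresses masked
        # to that tuple's prefix lengths; collect all matching filters.
        hits = []
        for (slen, dlen), table in self.tuples.items():
            fid = table.get(mask_key(src, dst, slen, dlen))
            if fid is not None:
                hits.append(fid)
        return hits
```

The cost of a lookup is one hash probe per distinct tuple, which is exactly what schemes such as entry pruning and markers with precomputation aim to cut down.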

Patent
Sung-Won Lee1, Soon-Young Yoon1, Seung-Joo Maeng1, Woo-June Kim1, Hong-Seong Chang1, Hoon Chang1 
01 Feb 2001
TL;DR: In this paper, the authors propose a method for assigning packet data to a radio packet data channel of a base station system in response to a packet traffic transmission request for a plurality of mobile stations in a mobile communication system.
Abstract: Disclosed is a method for assigning packet data to be transmitted to a radio packet data channel of a base station system in response to a packet traffic transmission request for a plurality of mobile stations in a mobile communication system. The method comprises collecting the packet traffic transmission requests of the radio packet data channel for the mobile stations; selecting at least one of the mobile stations from the collected packet traffic transmission requests; transmitting to the selected mobile station a channel assignment message including information about a data rate, data transmission durations of the radio packet data channel and start points of the data transmission durations for the selected mobile station; and transmitting the packet data to the selected mobile station at the start points of the data transmission durations at the data rate.

Patent
31 May 2001
TL;DR: In this paper, the authors describe a network switch architecture that switches between line interface cards across a meshed backplane in a protocol independent manner, and provide a non-blocking topology with both input and output queuing and per flow queuing at both ingress and egress.
Abstract: The network switch described herein provides a cell/packet switching architecture that switches between line interface cards across a meshed backplane. In one embodiment, the switching can be accomplished at, or near, line speed in a protocol-independent manner. The protocol-independent switching provides support for various applications including Asynchronous Transfer Mode (ATM) switching, Internet Protocol (IP) switching, Multiprotocol Label Switching (MPLS), Ethernet switching and frame relay switching. The architecture allows the network switch to provision service on a per-port basis. In one embodiment, the network switch provides a non-blocking topology with both input and output queuing and per-flow queuing at both ingress and egress. Per-flow flow control can be provided between ingress and egress scheduling. Strict priority, round robin, weighted round robin and earliest deadline first scheduling can be provided.
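One of the schedulers mentioned, weighted round robin, is easy to sketch: each scheduling frame visits the queues in order and dequeues up to each queue's weight in packets. This is an illustrative Python model, not the switch's implementation:

```python
def weighted_round_robin(queues, weights):
    # One scheduling frame: serve up to `weight` packets from each
    # queue in turn, so bandwidth shares track the weight ratios.
    sent = []
    for queue, weight in zip(queues, weights):
        for _ in range(min(weight, len(queue))):
            sent.append(queue.pop(0))
    return sent
```

With weights 2:1, two queues share the link roughly 2/3 to 1/3 over repeated frames, which is the per-flow fairness property WRR is chosen for.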

Patent
03 Dec 2001
TL;DR: In this paper, a virtual time reference system (VTRS) is used to generate packet virtual time stamps associated with each packet traversing the network, which are then removed from the packets when they leave the network core through an edge conditioner.
Abstract: A method and apparatus for packet scheduling using a virtual time stamp for a high capacity combined input and output queued switching system. A network employs a virtual time reference system (VTRS) to generate packet virtual time stamps associated with each packet traversing the network. The VTRS includes edge conditioners located at the edge of the network that receive unregulated packet traffic and generate regulated packet traffic for a given flow. The edge conditioners also add a packet virtual time stamp to each incoming packet. Core routers within a network core reference the packet virtual time stamps to schedule packet flow. The core routers also update the packet virtual time stamps using virtual delays. The packet virtual time stamps are removed from the packets when the packets leave the network core through an edge conditioner.

Patent
28 Jun 2001
TL;DR: In this article, a packet scheduling apparatus is presented for reducing the transmission delay and jitter that transmitting a low-priority packet causes for a premium packet, and for transmitting the low-priority packet efficiently.
Abstract: A packet scheduling apparatus for reducing the transmission delay and jitter that transmitting a low-priority packet causes for a premium packet, and for transmitting the low-priority packet efficiently. The packet scheduling apparatus comprises a packet input unit (1), a packet queue group (2), a scheduler unit (3), a packet division unit (4), a packet output unit (5) and a packet buffer (6). The packet queue group (2) includes a premium packet queue (21) and a low-priority packet queue (22). The scheduler unit (3) includes a scheduling queue (31) and a scheduler (32). A 'low-priority packet' that would interfere with the transmission of a 'premium packet' is divided by the packet division unit (4) into a plurality of packets short enough to fit within the transmission interval of the 'premium packet', and a schedule is made dynamically on the basis of the transmission interval or load situation of the 'premium packet'.

Proceedings ArticleDOI
15 Oct 2001
TL;DR: It is shown that prediction algorithms in the least mean square error sense perform better in a burst switching network than in a packet switching network, and that linear prediction provides a significant improvement in end-to-end latency with little wasted bandwidth.
Abstract: We show that prediction algorithms in the least mean square error sense perform better in a burst switching network than in a packet switching network. For the latter, further information about the packet arrival distribution within the prediction interval is required. Regarding burst switching, we compare optical burst switching networks with and without linear prediction and conclude that linear prediction provides a significant improvement in end-to-end latency with little wasted bandwidth.

Patent
Arne Simonsson1
05 Jul 2001
TL;DR: In this article, a power control algorithm uses the packet data load to determine a common or equal broadcast or transmitted power level for the channels in the cell in each cell of the radio network.
Abstract: Method and system are disclosed for improving channel quality in a packet data radio network. In each cell of the radio network, the packet data load is measured based on channel utilization and/or packet queue measurements. A power control algorithm uses the packet data load to determine a common or equal broadcast or transmitted power level for the channels in the cell. The common broadcast or transmitted power may subsequently be adjusted on an individual channel basis for channels that fall outside a predefined quality window.

Patent
11 May 2001
TL;DR: In this paper, an optical packet switch (3, 4) is proposed that facilitates efficient provisioning of packet services through a predominantly circuit-switched optical transport network infrastructure (1).
Abstract: The present invention provides an optical packet switch (3, 4) that facilitates efficient provisioning of packet services through a predominantly circuit-switched optical transport network infrastructure (1). In particular, the optical packet switch (3, 4) fits within a network where circuit and packet-switched traffic are transported together through the optical transport network (1). Fast switching is provided for packet traffic where granularity below the wavelength level is required, while slow wavelength switching and routing is facilitated at the same time. Fast switching and packet traffic aggregation for efficient bandwidth utilisation is performed at the edge where the optical transport network (1) interfaces with the IP domain (6), where dynamic and fast wavelength allocation for packet traffic is required.

Proceedings ArticleDOI
30 Sep 2001
TL;DR: An optical packet switching matrix for multi-terabit-class routers/switches is prototyped based on novel integrated optical gate arrays and asynchronous packet-mode receivers and full asynchronous operation with 10 Gbit/s RZ optical packets is demonstrated for the first time.
Abstract: An optical packet switching matrix for multi-terabit-class routers/switches has been prototyped. It is based on novel integrated optical gate arrays and asynchronous packet-mode receivers. Full asynchronous operation with 10 Gbit/s RZ optical packets is demonstrated for the first time.