Topic

Fast packet switching

About: Fast packet switching is a research topic. Over its lifetime, 5,641 publications have been published within this topic, receiving 111,603 citations.


Papers
Patent
20 Nov 2014
TL;DR: In this patent, a system for matching data using flow-based packet data storage includes a communications interface and a processor; the processor identifies a flow between the source and the destination based on the packet.
Abstract: A system for matching data using flow-based packet data storage includes a communications interface and a processor. The communications interface receives a packet sent between a source and a destination. The processor identifies a flow between the source and the destination based on the packet, and determines, using hashes, whether some of the packet's data indicates a potential match to data already in storage. The processor then stores the data from the most likely and second most likely matches, without a packet header, in a block of memory in the storage, based on the flow.

34 citations
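
The patent above follows a common pattern in flow-based deduplication: classify each packet into a flow, hash its payload to check for a potential match against data already in storage, and store unmatched data without its header in a flow-indexed block. The sketch below is a minimal, hypothetical Python illustration of that pattern, assuming a SHA-1 fingerprint index and an in-memory block store; the FlowStore class and its field names are invented for illustration and are not the patented implementation.

```python
import hashlib
from collections import defaultdict

class FlowStore:
    """Hypothetical sketch of flow-indexed, hash-matched payload storage."""

    def __init__(self):
        # Map a flow key (5-tuple) to the payload blocks stored for that flow.
        self.blocks = defaultdict(list)
        # Map payload fingerprints to (flow_key, block_index) for match lookups.
        self.index = {}

    @staticmethod
    def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
        # Identify the flow between the source and the destination.
        return (src_ip, src_port, dst_ip, dst_port, proto)

    def store(self, key, payload: bytes):
        """Check for a potential match by hash; otherwise store the headerless payload."""
        digest = hashlib.sha1(payload).hexdigest()
        match = self.index.get(digest)              # potential match to data in storage
        if match is not None:
            return match                            # data already stored; return its location
        self.blocks[key].append(payload)            # store data (no packet header) per flow
        self.index[digest] = (key, len(self.blocks[key]) - 1)
        return None

# Usage: classify a packet into a flow, then match-or-store its payload.
store = FlowStore()
key = FlowStore.flow_key("10.0.0.1", 4321, "10.0.0.2", 80, "tcp")
print(store.store(key, b"GET /index.html"))   # None -> newly stored
print(store.store(key, b"GET /index.html"))   # (flow, index) -> matched existing data
```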

Journal ArticleDOI
TL;DR: In this paper, wavelength-independent all-optical packet header replacement at 1 Gb/s is demonstrated, with the new header modulated at exactly the same wavelength as the original payload.
Abstract: We experimentally demonstrate wavelength-independent all-optical packet header replacement at 1 Gb/s. The new header is modulated onto a relatively long continuous-wave region that was created by repeatedly replicating the string of "1"s in the original packet's flag. Consequently, the new header is at exactly the same wavelength as the original payload. A power penalty of ~1.5 dB at a bit-error rate of 10^-9 is measured when performing the header replacement.

34 citations

Proceedings ArticleDOI
01 Dec 2015
TL;DR: This paper presents an incrementally deployable, SDN-friendly packet forwarding mechanism called Path Switching that provides the same reduction in forwarding state as source routing while retaining the benefits and use of fixed-size packet headers and existing protocols.
Abstract: The advent of virtualization, containerization, and the Internet of Things (IoT) is leading to an explosive growth in the number of endpoints. Ideally, with Software Defined Networking (SDN), one would like to customize packet handling for each of these endpoints or applications. However, this typically leads to a large growth in forwarding state. This growth is avoided in current networks by using aggregation, which trades off fine-grained control of micro-flows for reduced forwarding state. It is worthwhile to ask whether the benefits of micro-flow control can be retained without a large growth in forwarding state and without using aggregation. In this paper, we describe an incrementally deployable, SDN-friendly packet forwarding mechanism called Path Switching that achieves this by compactly encoding a packet's path through the network in the packet's existing address fields. Path Switching provides the same reduction in forwarding state as source routing while retaining the benefits and use of fixed-size packet headers and existing protocols. We have extended Open vSwitch (OVS) to transparently support Path Switching, as well as an inline service component for folding middlebox services into OVS. The extensions include advanced failover mechanisms such as fast reroute and require no protocol changes, as Path Switching leaves header formats unchanged.

34 citations
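
The central idea of Path Switching, compactly encoding a packet's path in its existing fixed-size address fields so that switches keep almost no per-endpoint forwarding state, can be illustrated with simple bit packing. The sketch below is a hypothetical Python illustration, not the paper's actual encoding: it packs a sequence of output-port numbers into a 48-bit value (the width of a MAC address field), assuming at most 15 usable ports per switch and port numbers starting at 1.

```python
def encode_path(ports, bits_per_hop=4, field_bits=48):
    """Pack a list of output-port numbers into one fixed-size address field."""
    if len(ports) * bits_per_hop > field_bits:
        raise ValueError("path too long for the address field")
    value = 0
    for port in reversed(ports):                # first hop ends up in the low bits
        if not 1 <= port < (1 << bits_per_hop):
            raise ValueError("port number out of range for bits_per_hop")
        value = (value << bits_per_hop) | port
    return value

def next_hop(value, bits_per_hop=4):
    """Each switch reads its output port from the low bits, then shifts them away."""
    port = value & ((1 << bits_per_hop) - 1)
    return port, value >> bits_per_hop

# Usage: a 4-hop path through ports 3, 7, 1, 5 fits easily in a 48-bit field.
encoded = encode_path([3, 7, 1, 5])
while encoded:                                  # a zero remainder marks the end of the path
    port, encoded = next_hop(encoded)
    print("forward out port", port)
```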

Patent
23 Nov 1992
TL;DR: In this patent, the core and enhancement packets are transmitted in frame relay format, and congestion forward (CF) and congestion backward (CB) markers are used to feed back information about congestion conditions within a network to the packet assembler.
Abstract: A system in which core information, for example in the form of a core block or blocks (C), is transmitted in a core packet (PC), and at least some enhancement information, for example in the form of enhancement blocks (E), is transmitted in an enhancement packet (PE) which is separate from the core packet (PC) and is discardable to relieve congestion. Preferably, the core and enhancement packets have headers (H) which include a discard eligible marker (DE) to indicate whether or not the associated packet can be discarded. The enhancement blocks (E) may be distributed between the core packet and the enhancement packet in accordance with congestion conditions, or the enhancement blocks may be incorporated only in the enhancement packet, with the actual number of enhancement blocks included varied depending on congestion conditions. Preferably, the packets are transmitted in frame relay format, and congestion forward (CF) and congestion backward (CB) markers are used to feed back information about congestion conditions within a network to the packet assembler (7) forming the core and enhancement packets.

33 citations
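
The patent above describes a layered transport scheme: core blocks always travel in a protected packet, enhancement blocks go into a separate packet marked discard-eligible, and the number of enhancement blocks can be varied using congestion information fed back from the network. The sketch below is a hypothetical Python rendering of that assembly logic, assuming a simple halving policy under congestion; the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    discard_eligible: bool                  # DE marker carried in the packet header
    blocks: list = field(default_factory=list)

def assemble(core_blocks, enh_blocks, congestion_backward: bool):
    """Build a core packet plus a discardable enhancement packet.

    When the backward congestion marker fed back to the assembler is set,
    fewer enhancement blocks are included, so the network can shed load by
    dropping only the discard-eligible packet.
    """
    core = Packet(discard_eligible=False, blocks=list(core_blocks))
    keep = len(enh_blocks) // 2 if congestion_backward else len(enh_blocks)
    enh = Packet(discard_eligible=True, blocks=list(enh_blocks[:keep]))
    return core, enh

# Usage: congestion feedback halves the enhancement data; the core is untouched.
core, enh = assemble(["C1", "C2"], ["E1", "E2", "E3", "E4"], congestion_backward=True)
print(core.blocks, enh.blocks, enh.discard_eligible)   # ['C1', 'C2'] ['E1', 'E2'] True
```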

Proceedings ArticleDOI
26 Mar 2007
TL;DR: This paper uses a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application, and proposes an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps.
Abstract: Network processors today consist of multiple parallel processors (microengines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application. Our study indicates that, in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the built-in packet ordering schemes in the IXP processor by up to 35%.

33 citations
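
The "packet sort" scheme in the paper above restores output order despite packets being processed concurrently on parallel microengines. A standard way to get that behaviour (offered here as an illustration, not necessarily the paper's exact mechanism) is a reorder buffer keyed by sequence numbers: packets that finish out of order are held until every earlier packet has been released. The sketch below is a minimal, hypothetical Python version.

```python
class ReorderBuffer:
    """Release processed packets strictly in sequence-number order."""

    def __init__(self):
        self.next_seq = 0       # next sequence number allowed to leave
        self.pending = {}       # packets that finished processing out of order

    def complete(self, seq, packet):
        """Called when a microengine finishes a packet; returns packets ready to transmit."""
        self.pending[seq] = packet
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released

# Usage: packets 0..3 finish out of order but are transmitted as 0, 1, 2, 3.
rob = ReorderBuffer()
for seq in [2, 0, 3, 1]:
    print(seq, "->", rob.complete(seq, f"pkt{seq}"))
```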


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations (88% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Wireless: 133.4K papers, 1.9M citations (87% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Wireless ad hoc network: 49K papers, 1.1M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2022    2
2019    1
2018    6
2017    49
2016    99
2015    159