scispace - formally typeset
Topic

Fast packet switching

About: Fast packet switching is a research topic. Over its lifetime, 5641 publications have been published within this topic, receiving 111603 citations.


Papers
Journal ArticleDOI
11 Dec 2014
TL;DR: In this article, a 16×16 mesh, 112b data, 256 voltage/clock domain NoC with source-synchronous operation, hybrid packet/circuit-switched flow control, and ultra-low-voltage optimizations is fabricated in 22nm tri-gate CMOS.
Abstract: Energy-efficient networks-on-chip (NoCs) are key enablers for exa-scale computation by shifting power budget from communication toward computation. As core counts scale into the 100s, on-chip interconnect fabrics must support increasing heterogeneity and voltage/clock domains. Synchronous NoCs require either a single clock distributed globally or clock-crossing data FIFOs between clock domains [1]. A global clock requires costly full-chip margining and significant power and area for clock distribution, while synchronizing data FIFOs add power, performance, and area overhead per clock crossing. Source-synchronous NoCs mitigate these penalties by forwarding a local clock along with each packet, but still suffer from high data storage power due to packet switching. Circuit switching removes intra-route data storage, but suffers from low network utilization due to serialized channel setup and data transfer [2]. Hybrid packet/circuit switching parallelizes these operations for higher network utilization. A 16×16 mesh, 112b data, 256 voltage/clock domain NoC with source-synchronous operation, hybrid packet/circuit-switched flow control, and ultra-low-voltage optimizations is fabricated in 22nm tri-gate CMOS [3] to enable: i) 20.2Tb/s total throughput at 0.9V, 25°C, ii) a 2.7× increase in bisection bandwidth to 2.8Tb/s and 93% reduction in circuit-switched latency at 407ps/hop through source-synchronous operation, iii) a 62% latency improvement and 55% increase in energy efficiency to 7.0Tb/s/W through circuit switching, iv) a peak energy efficiency of 18.3Tb/s/W for near-threshold operation at 430mV, 25°C, and v) ultra-low-voltage operation down to 340mV with router power scaling to 363μW.
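The latency argument above (hybrid switching pipelines channel setup with data transfer, where pure circuit switching serializes them) can be sketched with a toy per-hop model. The hop count and per-hop delays below are illustrative assumptions, not measurements from the 22nm chip:

```python
# Toy latency model contrasting pure circuit switching (serialized
# setup, then data) with hybrid packet/circuit switching (setup
# request pipelined with the data that follows it).
# All parameter values are illustrative assumptions.

HOPS = 8       # route length in the 16x16 mesh (assumed)
T_SETUP = 1.0  # per-hop channel-setup delay (arbitrary units)
T_DATA = 1.0   # per-hop data-forwarding delay (arbitrary units)

def circuit_switched_latency(hops: int) -> float:
    """Setup must complete end-to-end before any data moves,
    so the two phases run back-to-back."""
    return hops * T_SETUP + hops * T_DATA

def hybrid_latency(hops: int) -> float:
    """The setup request is itself a small packet; data trails it
    by one hop, so setup and transfer overlap in a pipeline."""
    return hops * T_SETUP + T_DATA
```

Under these assumptions an 8-hop route costs 16 time units fully serialized but only 9 when pipelined, which is the qualitative effect behind the reported circuit-switched latency reduction.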

69 citations

Journal ArticleDOI
TL;DR: The architecture of the torus-topology OPS and agile OCS intra-DC network is presented, together with a new flow management concept in which an instantaneous on-demand optical path, the so-called Express Path, is established; the power consumption and throughput of a conventional fat-tree topology are also compared with those of the N-dimensional torus topology.
Abstract: We review our work on an intra-data center (DC) network based on co-deployment of optical packet switching (OPS) and optical circuit switching (OCS), conducted within the framework of a five-year-long national R&D program in Japan (~March 2016). First, preceding works relevant to optical switching technologies in intra-DC networks are briefly reviewed. Next, we present the architecture of our torus-topology OPS and agile OCS intra-DC network, together with a new flow management concept, in which an instantaneous on-demand optical path, the so-called Express Path, is established. Then, our hybrid optoelectronic packet router (HOPR), which handles 100 Gbps (25 Gbps × 4-wavelength) optical packets, and its enabling device and sub-system technologies are presented. The HOPR aims at a high energy efficiency of 0.09 W/Gbps and latency in the 100 ns regime. Next, we provide the contention resolution strategies in the OPS and agile OCS network and present the performance analysis with simulation results, followed by a discussion of the power consumption of intra-DC networks. We compare the power consumption and throughput of a conventional fat-tree topology with the N-dimensional torus topology. Finally, for further power saving, we propose a new scheme that shuts off HOPR buffers according to server operation status.
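The Express Path idea above (long-lived or heavy flows get an on-demand optical circuit, while everything else stays on the packet-switched fabric) can be sketched as a toy classifier. The threshold value and flow representation are assumptions for illustration, not from the paper:

```python
# Toy flow classifier for the "Express Path" concept: elephant flows
# are promoted to an on-demand OCS circuit; the rest ride the OPS
# fabric. The byte threshold is an assumed, illustrative value.

EXPRESS_BYTES = 10 * 1024**2  # 10 MB elephant-flow threshold (assumed)

def route_flow(bytes_seen: int) -> str:
    """Return the switching layer for a flow, given bytes observed so far."""
    return "OCS-express-path" if bytes_seen >= EXPRESS_BYTES else "OPS"
```

In a real controller the promotion decision would also account for circuit setup latency and contention on the torus, which the simulation results in the paper analyze.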

69 citations

Patent
Tsuyoshi Miura1, Shin Fujita1
25 May 2004
TL;DR: In this paper, a retransmission request controlling unit controls the timing of transmission of a retransmission request to a packet transmitting apparatus from a retransmission request transmitting unit, according to whether an error correcting unit can restore the lost packet within a predetermined time period when loss of the packet is detected.
Abstract: A packet error correcting apparatus includes a retransmission request controlling unit controlling a timing of transmission of a retransmission request to a packet transmitting apparatus from a retransmission request transmitting unit according to whether an error correcting unit can restore the lost packet within a predetermined time period when loss of the packet is detected. In a packet receiving apparatus supporting both Forward Error Correction (FEC) and Automatic Repeat reQuest (ARQ), it is possible to control a timing of transmission of a retransmission request when packet loss occurs, thereby to regenerate video and/or voice with the most suitable delay time while suppressing transmission of unnecessary retransmission requests.
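The decision rule described above, suppress the ARQ request whenever FEC can restore the lost packet before the playout deadline, reduces to a small predicate. The function name and parameters below are assumptions for illustration, not from the patent:

```python
# Sketch of the retransmission-request timing decision: on packet
# loss, send an ARQ request only if FEC cannot recover the packet
# within the playout deadline. Names and units are assumed.

def should_request_retransmission(fec_recoverable: bool,
                                  fec_recovery_ms: float,
                                  deadline_ms: float) -> bool:
    """True if an ARQ retransmission request should be sent for a lost packet."""
    if fec_recoverable and fec_recovery_ms <= deadline_ms:
        return False  # FEC will restore it in time; an ARQ request is redundant
    return True       # FEC cannot help in time; fall back to retransmission
```

This is what lets the receiver regenerate video/voice with a suitable delay while avoiding unnecessary retransmission requests.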

68 citations

Patent
07 May 1999
TL;DR: In this paper, the authors propose a method and apparatus to limit the throughput rate of non-adapting aggressive flows on a packet-by-packet basis. The mapping is based on a subset of the packet's header data, giving an approximation of per-flow management.
Abstract: A method and apparatus to limit the throughput rate of non-adapting aggressive flows on a packet-by-packet basis. Each packet of an input flow is mapped to an entry in a flow table for each output queue. The mapping is based on a subset of the packet's header data, giving an approximation of per-flow management. Each entry contains a credit value. On packet reception, the credit value is compared to zero; if there are no credits, the packet is dropped. Otherwise, the size of the packet is compared to the credit value. If sufficient credits exist (i.e., size is less than or equal to credits), the credit value is decremented by the size of the packet in cells and the processing proceeds according to conventional methods, including but not limited to those disclosed in the co-pending DBL Application, incorporated herewith by reference in its entirety. If, however, the size of the packet exceeds the available credits, the credit value is set to zero and the packet is dropped. A periodic task adds credits to each flow table entry up to a predetermined maximum. The processing rate of each approximated flow is thus maintained to the rate determined by the number of credits present at each enqueuing decision, up to the allowed maximum. The scheme operates independently of packet flow type, providing packet-specific means for rapidly discriminating well-behaved flows that adapt to congestion situations signaled by packet drop from aggressive, non-adapting flows and managing throughput bandwidth accordingly. Bandwidth is shared fairly among well-behaved flows, large and small, and time-critical (low latency) flows, thereby protecting all from non-adapting aggressive flows.
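The credit scheme above can be sketched in a few lines: a header hash approximates per-flow state, an enqueue check charges credits in cells, and a periodic task refills each entry up to a cap. All constants, names, and the hash choice below are illustrative assumptions, not details from the patent:

```python
# Minimal sketch of the per-flow credit scheme described above.
# A hash of selected header fields approximates per-flow state.
import zlib

TABLE_SIZE = 1024   # flow-table entries per output queue (assumed)
MAX_CREDITS = 64    # cap applied by the periodic refill task (assumed)
CELL_SIZE = 48      # bytes per cell when charging credits (assumed)

credits = [MAX_CREDITS] * TABLE_SIZE

def flow_index(header: bytes) -> int:
    """Map a subset of the packet header to a flow-table entry."""
    return zlib.crc32(header) % TABLE_SIZE

def enqueue(header: bytes, size_bytes: int) -> bool:
    """Return True if the packet may be enqueued, False if dropped."""
    i = flow_index(header)
    cost = -(-size_bytes // CELL_SIZE)  # packet size in cells, rounded up
    if credits[i] == 0 or cost > credits[i]:
        credits[i] = 0                  # oversize packet zeroes the entry
        return False                    # drop: aggressive flow throttled
    credits[i] -= cost
    return True

def refill():
    """Periodic task: add credits to every entry, capped at the maximum."""
    for i in range(TABLE_SIZE):
        credits[i] = min(credits[i] + 8, MAX_CREDITS)  # +8 per tick (assumed)
```

A well-behaved flow that backs off after a drop rarely exhausts its credits between refills, while a non-adapting flow pins its entry at zero and is throttled to the refill rate.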

68 citations

Patent
30 Mar 1988
TL;DR: In this paper, a method and apparatus for allocating bandwidth in a broadband packet switching network are disclosed, which utilizes channel groups, which may be defined as a set of parallel packet channels that act as a single data link connection between packet switches.
Abstract: A method and apparatus for allocating bandwidth in a broadband packet switching network are disclosed. The invention utilizes channel groups (112) which may be defined as a set of parallel packet channels that act as a single data link connection between packet switches (110). In accordance with the invention, bandwidth is allocated in two steps. At virtual circuit setup time, bandwidth is reserved in particular channel groups. At transmission time packets are assigned to individual channels within the groups, illustratively, using a coordination mechanism in communication with the input ports of the appropriate packet switch. The bandwidth allocation technique, known as multichannel bandwidth allocation, leads to increased throughput and reduced packet loss probabilities.
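The two-step allocation above can be sketched as a small data structure: aggregate bandwidth is reserved on a channel group at virtual-circuit setup, and individual channels are assigned per packet at transmission time. The class, round-robin coordination stand-in, and values below are assumptions for illustration:

```python
# Toy model of multichannel bandwidth allocation: reserve on the
# group at circuit setup, pick a channel per packet at transmit time.

class ChannelGroup:
    """A set of parallel channels acting as a single data link."""
    def __init__(self, n_channels: int, capacity_per_channel: float):
        self.capacity = n_channels * capacity_per_channel
        self.reserved = 0.0
        self.n_channels = n_channels
        self._next = 0  # round-robin pointer (stands in for coordination)

    def setup_circuit(self, bandwidth: float) -> bool:
        """Step 1 (virtual-circuit setup): reserve aggregate bandwidth
        on the group if capacity allows."""
        if self.reserved + bandwidth > self.capacity:
            return False
        self.reserved += bandwidth
        return True

    def assign_channel(self) -> int:
        """Step 2 (transmission time): assign one packet to an individual
        channel within the group."""
        ch = self._next
        self._next = (self._next + 1) % self.n_channels
        return ch
```

Because a circuit only needs its bandwidth to fit in the group's aggregate, not in any single channel, load spreads across the parallel channels, which is the intuition behind the higher throughput and lower loss probability claimed above.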

68 citations


Network Information
Related Topics (5)
Network packet
159.7K papers, 2.2M citations
88% related
Wireless network
122.5K papers, 2.1M citations
87% related
Wireless
133.4K papers, 1.9M citations
87% related
Wireless sensor network
142K papers, 2.4M citations
85% related
Wireless ad hoc network
49K papers, 1.1M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2019    1
2018    6
2017    49
2016    99
2015    159