Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Journal ArticleDOI
TL;DR: The Knockout Switch as discussed by the authors uses a fully interconnected switch fabric topology (i.e., each input has a direct path to every output) so that no switch blocking occurs where packets destined for one output interfere with (block or delay) packets going to different outputs.
Abstract: A new, high-performance packet-switching architecture, called the Knockout Switch, is proposed. The Knockout Switch uses a fully interconnected switch fabric topology (i.e., each input has a direct path to every output) so that no switch blocking occurs where packets destined for one output interfere with (i.e., block or delay) packets going to different outputs. It is only at each output of the switch that one encounters the unavoidable congestion caused by multiple packets simultaneously arriving on different inputs all destined for the same output. Taking advantage of the inevitability of lost packets in a packet-switching network, the Knockout Switch uses a novel concentrator design at each output to reduce the number of separate buffers needed to receive simultaneously arriving packets. Following the concentrator, a shared buffer architecture provides complete sharing of all buffer memory at each output and ensures that all packets are placed on the output line on a first-in first-out basis. The Knockout Switch architecture has low latency, and is self-routing and nonblocking. Moreover, its simple interconnection topology allows for easy modular growth along with minimal disruption and easy repair for any fault. Possible applications include interconnects for multiprocessing systems, high-speed local and metropolitan area networks, and local or toll switches for integrated traffic loads.
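As a rough illustration of the knockout principle described above, the sketch below models a single output port: a concentrator accepts at most L simultaneously arriving packets per slot and "knocks out" (drops) the rest, while a shared FIFO buffer drains one packet per slot onto the output line. The number of inputs, the knockout value L, the arrival probability, and the slot count are illustrative assumptions, not values from the paper.

```python
import random
from collections import deque

# Toy model of one Knockout Switch output port: N inputs may each deliver a
# packet in the same time slot, the concentrator keeps at most L of them, and
# a shared FIFO buffer puts one packet per slot onto the output line.
def simulate_output_port(n_inputs=32, knockout_l=8, p_arrival=0.05,
                         slots=100_000, seed=1):
    rng = random.Random(seed)
    fifo = deque()
    dropped = delivered = offered = 0
    for _ in range(slots):
        # Packets arriving this slot that are destined for this output.
        arrivals = [i for i in range(n_inputs) if rng.random() < p_arrival]
        offered += len(arrivals)
        # Concentrator: accept at most L simultaneous packets, knock out the rest.
        accepted = arrivals[:knockout_l]
        dropped += len(arrivals) - len(accepted)
        fifo.extend(accepted)      # complete buffer sharing at the output
        if fifo:
            fifo.popleft()         # one packet leaves the output line (FIFO order)
            delivered += 1
    return {"offered": offered, "delivered": delivered,
            "knocked_out": dropped, "loss_rate": dropped / max(offered, 1)}

print(simulate_output_port())
```

In this toy model, raising the knockout value L lowers the loss rate at the cost of a wider concentrator and more buffer write ports, which is the dimensioning trade-off the concentrator design is meant to manage.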

589 citations

Proceedings ArticleDOI
01 Oct 1997
TL;DR: This study investigates a novel multicast technique, called Skyscraper Broadcasting (SB), for video-on-demand applications; SB achieves the low latency of PB while using only 20% of the buffer space required by PPB.
Abstract: We investigate a novel multicast technique, called Skyscraper Broadcasting (SB), for video-on-demand applications. We discuss the data fragmentation technique, the broadcasting strategy, and the client design. We also show the correctness of our technique, and derive mathematical equations to analyze its storage requirement. To assess its performance, we compare it to the latest designs known as Pyramid Broadcasting (PB) and Permutation-Based Pyramid Broadcasting (PPB). Our study indicates that PB offers excellent access latency. However, it requires very large storage space and disk bandwidth at the receiving end. PPB is able to address these problems. However, this is accomplished at the expense of a larger access latency and more complex synchronization. With SB, we are able to achieve the low latency of PB while using only 20% of the buffer space required by PPB.
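To make the latency/storage trade-off concrete, here is a hedged sketch of skyscraper-style fragmentation: segment sizes grow from a small first segment but are capped at a width W, so the client's worst-case wait is one first-segment period while its buffer is bounded by the largest (capped) segment. The doubling growth rule, the cap value, and all numbers are illustrative assumptions and are not the segment-size series defined in the paper.

```python
# Illustrative sketch of capped ("skyscraper-shaped") segment sizes for
# periodic-broadcast VOD; the real SB series differs from this toy growth rule.
def segment_sizes(num_segments, width_cap):
    sizes, s = [], 1
    for _ in range(num_segments):
        sizes.append(min(s, width_cap))
        s *= 2  # hypothetical growth rule, for illustration only
    return sizes

def analyze(video_len_minutes=120, num_segments=10, width_cap=52):
    sizes = segment_sizes(num_segments, width_cap)
    unit = video_len_minutes / sum(sizes)   # minutes of video per size unit
    access_latency = sizes[0] * unit        # wait at most one first-segment period
    buffer_bound = width_cap * unit         # cap bounds the client buffer
    return {"segment_sizes": sizes,
            "worst_case_access_latency_min": round(access_latency, 2),
            "client_buffer_bound_min_of_video": round(buffer_bound, 2)}

print(analyze())
```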

581 citations

Proceedings ArticleDOI
17 Aug 2015
TL;DR: TIMELY is the first delay-based congestion control protocol for use in the datacenter, and it achieves its results despite having an order of magnitude fewer RTT signals than earlier delay-based schemes such as Vegas.
Abstract: Datacenter transports aim to deliver low latency messaging together with high throughput. We show that simple packet delay, measured as round-trip times at hosts, is an effective congestion signal without the need for switch feedback. First, we show that advances in NIC hardware have made RTT measurement possible with microsecond accuracy, and that these RTTs are sufficient to estimate switch queueing. Then we describe how TIMELY can adjust transmission rates using RTT gradients to keep packet latency low while delivering high bandwidth. We implement our design in host software running over NICs with OS-bypass capabilities. We show using experiments with up to hundreds of machines on a Clos network topology that it provides excellent performance: turning on TIMELY for OS-bypass messaging over a fabric with PFC lowers 99th-percentile tail latency by 9X while maintaining near line-rate throughput. Our system also outperforms DCTCP running in an optimized kernel, reducing tail latency by 13X. To the best of our knowledge, TIMELY is the first delay-based congestion control protocol for use in the datacenter, and it achieves its results despite having an order of magnitude fewer RTT signals (due to NIC offload) than earlier delay-based schemes such as Vegas.
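The core idea, adjusting the sending rate from the RTT gradient, can be sketched as follows: increase additively when the smoothed RTT is falling (queues draining) and back off multiplicatively in proportion to the gradient when it is rising (queues building). The constants (additive step, backoff factor, EWMA weight, minimum RTT) are illustrative assumptions, and the sketch omits other details of the full algorithm.

```python
# Minimal sketch of RTT-gradient-based rate control in the spirit of TIMELY;
# constants and units are illustrative, not the paper's parameters.
class GradientRateController:
    def __init__(self, rate_mbps=1000.0, min_rtt_us=20.0,
                 additive_step=10.0, beta=0.8, ewma_alpha=0.1):
        self.rate = rate_mbps
        self.min_rtt = min_rtt_us
        self.step = additive_step
        self.beta = beta
        self.alpha = ewma_alpha
        self.prev_rtt = None
        self.rtt_diff = 0.0

    def on_rtt_sample(self, rtt_us):
        if self.prev_rtt is None:
            self.prev_rtt = rtt_us
            return self.rate
        # Smooth the per-sample RTT change, then normalize by the minimum RTT.
        new_diff = rtt_us - self.prev_rtt
        self.prev_rtt = rtt_us
        self.rtt_diff = (1 - self.alpha) * self.rtt_diff + self.alpha * new_diff
        gradient = self.rtt_diff / self.min_rtt
        if gradient <= 0:
            self.rate += self.step                             # queues draining: probe up
        else:
            self.rate *= max(0.0, 1 - self.beta * gradient)    # queues building: back off
        return self.rate

# Example: feed a few RTT samples (microseconds) and observe the rate update.
ctrl = GradientRateController()
for rtt in [20, 22, 25, 24, 21, 20]:
    ctrl.on_rtt_sample(rtt)
print(round(ctrl.rate, 1))
```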

442 citations

Posted Content
01 Jan 2012
TL;DR: This paper presents a block cipher that is optimized with respect to latency when implemented in hardware; for this cipher, decryption with one key corresponds to encryption with a related key, a property of independent interest whose soundness against generic attacks is proven.
Abstract: This paper presents a block cipher that is optimized with respect to latency when implemented in hardware. Such ciphers are desirable for many future pervasive applications with real-time security needs. Our cipher, named PRINCE, allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. The fully unrolled fashion in which such algorithms need to be implemented calls for innovative design choices. The number of rounds must be moderate and rounds must have short delays in hardware. At the same time, the traditional need that a cipher has to be iterative with very similar round functions disappears, an observation that increases the design space for the algorithm. An important further requirement is that realizing decryption and encryption results in minimum additional costs. PRINCE is designed in such a way that the overhead for decryption on top of encryption is negligible. More precisely, for our cipher it holds that decryption for one key corresponds to encryption with a related key. This property, which we refer to as α-reflection, is of independent interest, and we prove its soundness against generic attacks.
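The α-reflection property can be illustrated with a toy construction that exhibits it: wrapping an involutive core between whitening keys k and k XOR α makes decryption under k identical to encryption under k XOR α. This is a hedged 16-bit sketch, not PRINCE itself; the core function, the constant, and the keying are all illustrative assumptions.

```python
# Toy illustration of alpha-reflection: decrypt(., k) == encrypt(., k ^ ALPHA).
ALPHA = 0xC0AC  # arbitrary toy constant, not PRINCE's alpha

def involutive_core(x):
    # 16-bit bit reversal is an involution: applying it twice is the identity.
    return int(f"{x:016b}"[::-1], 2)

def encrypt(x, k):
    return involutive_core(x ^ k) ^ (k ^ ALPHA)

def decrypt(y, k):
    return involutive_core(y ^ k ^ ALPHA) ^ k

# Check the reflection property on a sample of inputs and keys.
for x in range(0, 1 << 16, 997):
    k = (x * 31337) & 0xFFFF
    y = encrypt(x, k)
    assert decrypt(y, k) == x
    assert decrypt(y, k) == encrypt(y, k ^ ALPHA)
print("alpha-reflection holds on the toy construction")
```

The reflection comes from the symmetry of the whitening: swapping encryption for decryption only shifts the key by the fixed constant α, so a single hardware datapath can serve both directions.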

439 citations

Journal ArticleDOI
TL;DR: In an interactive VR gaming arcade case study, it is shown that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.
Abstract: VR is expected to be one of the killer applications in 5G networks. However, many technical bottlenecks and challenges need to be overcome to facilitate its wide adoption. In particular, VR requirements in terms of high throughput, low latency, and reliable communication call for innovative solutions and fundamental research cutting across several disciplines. In view of the above, this article discusses the challenges and enablers for ultra-reliable and low-latency VR. Furthermore, in an interactive VR gaming arcade case study, we show that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.

405 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 92% related
Server: 79.5K papers, 1.4M citations, 91% related
Wireless: 133.4K papers, 1.9M citations, 90% related
Wireless sensor network: 142K papers, 2.4M citations, 90% related
Wireless network: 122.5K papers, 2.1M citations, 90% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227