Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published on this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Proceedings Article
01 Jan 2003
TL;DR: The truncation error of a two-pass decoder is analyzed for phonetic speech recognition under very demanding latency constraints, i.e., for applications that require a look-ahead length of less than 100 ms.
Abstract: The truncation error for a two-pass decoder is analyzed in a problem of phonetic speech recognition for very demanding latency constraints (look-ahead length < 100ms) and for applications where ...

18 citations

Journal ArticleDOI
TL;DR: A taxonomy as well as specific evaluation criteria are proposed to classify research across different domains addressing low latency service delivery, offering lessons learned and prospects on emerging use cases such as Extended Reality (XR), in which novel trends will play a major role.
Abstract: The advent of softwarized networks has enabled the deployment of chains of virtual network and service components on computational resources from the cloud up to the edge, creating a continuum of virtual resources. The next generation of low latency applications (e.g., Virtual Reality (VR), autonomous cars) adds even more stringent requirements to the infrastructure, calling for considerable advancements towards cloud-native micro-service-based architectures. This article presents a comprehensive survey on ongoing research aiming to effectively support low latency services throughout their execution lifetime in next-generation networks. The current state-of-the-art is critically reviewed to identify the most promising trends that will strongly impact the full applicability and high performance of low latency services. This article proposes a taxonomy as well as specific evaluation criteria to classify research across different domains addressing low latency service delivery. Current architectural paradigms such as Multi-access Edge Computing (MEC) and Fog Computing (FC) alongside novel trends on communication networks are discussed. Among these, the integration of Machine Learning (ML) and Artificial intelligence (AI) is introduced as a key research field in current literature towards autonomous network management. A discussion on open challenges and future research directions on low-latency service delivery leads to the conclusion, offering lessons learned and prospects on emerging use cases such as Extended Reality (XR), in which novel trends will play a major role.

18 citations

Proceedings ArticleDOI
10 Jun 2014
TL;DR: The effectiveness of the modular architecture is validated by showing that this scheduler, named the High-throughput Twin Fair Scheduler (HFS), outperforms one of the most accurate and efficient integrated schedulers available in the literature.
Abstract: Providing QoS guarantees, boosting throughput and saving energy over wireless links is a challenging task, especially in emergency networks, where all of these features are crucial during a disaster event. A common solution is using a single, integrated scheduler that deals both with the QoS guarantees and the wireless link issues. Unfortunately, such an approach is not flexible and does not allow any of the existing high-quality schedulers for wired links to be used without modifications. We address these issues through a modular architecture which permits the use of existing packet schedulers for wired links over wireless links, as they are, and at the same time allows the flexibility to adapt to different channel conditions. We validate the effectiveness of our modular architecture by showing, through formal analysis as well as experimental results, that simply by combining existing schedulers, this architecture yields a new scheduler with the following features: execution time and energy consumption close to those of a plain Deficit Round Robin; accurate fairness and low latency; and the ability to set the desired trade-off between throughput boosting and granularity of service guarantees by changing a single parameter. In particular, we show that this scheduler, which we named the High-throughput Twin Fair Scheduler (HFS), outperforms one of the most accurate and efficient integrated schedulers available in the literature.

17 citations
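The Deficit Round Robin (DRR) scheduler that the abstract above uses as its efficiency baseline works by granting each backlogged flow a fixed byte "quantum" per round and letting a flow transmit only while its accumulated deficit covers the head-of-line packet. A minimal single-node sketch in Python (class and method names are illustrative, not taken from the paper):

```python
from collections import deque

class DeficitRoundRobin:
    """Minimal Deficit Round Robin sketch: per-flow FIFO queues plus a
    byte-credit counter ("deficit") replenished by a quantum each round."""

    def __init__(self, quantum):
        self.quantum = quantum   # bytes of credit granted per flow per round
        self.queues = {}         # flow id -> deque of packet sizes (bytes)
        self.deficit = {}        # flow id -> accumulated byte credit

    def enqueue(self, flow, size):
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.deficit[flow] = 0
        self.queues[flow].append(size)

    def serve_round(self):
        """Run one DRR round; return the (flow, size) packets sent."""
        sent = []
        for flow, q in self.queues.items():
            if not q:
                self.deficit[flow] = 0   # idle flows accumulate no credit
                continue
            self.deficit[flow] += self.quantum
            # Send packets while the flow's credit covers the next packet.
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
        return sent
```

A flow with a 700-byte packet and a 500-byte quantum must wait a round to build enough credit, which is how DRR achieves byte-level fairness with O(1) work per packet.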

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This work presents a complete highly-optimized infrastructure that implements low-latency system components in C++ using High-Level Synthesis (HLS) and develops a framework that enables HFT algorithm developers to implement their trading algorithms in a high-level programming language and rapidly integrate it to the rest of the system.
Abstract: High-Frequency Trading (HFT) systems require extremely low latency in response to market updates. This motivates the use of Field-Programmable Gate Arrays (FPGAs) to accelerate different system components such as the network stack, financial protocol parsing, order book handling and even custom trading algorithms. However, the long cycle of developing and verifying FPGA designs makes it challenging for HFT software developers to deploy such highly dynamic systems, especially with their limited hardware design expertise. We present a complete, highly optimized infrastructure that implements low-latency system components in C++ using High-Level Synthesis (HLS). We also develop a framework that enables HFT algorithm developers to implement their trading algorithms in a high-level programming language and rapidly integrate them into the rest of the system. We implemented our HLS-based system on a Xilinx Kintex UltraScale FPGA running at 156 MHz. Our on-board measurements show an end-to-end round-trip latency of less than 870 ns, which is comparable to that achieved by prior RTL-based implementations but requires less system development time and effort.

17 citations
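The sub-microsecond round-trip figure above comes from on-board FPGA measurements; in software, the same end-to-end round-trip metric is commonly estimated by timestamping a request/response loop and taking the median to suppress scheduling jitter. A rough, hedged Python sketch over a loopback TCP echo server (all names are illustrative, and software loopback RTTs land in the tens of microseconds, orders of magnitude above the hardware numbers quoted):

```python
import socket
import threading
import time

def _run_echo_server(server_sock):
    """Echo back whatever a single client sends, until it disconnects."""
    conn, _ = server_sock.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

def measure_rtt_ns(trials=50, payload=b"x" * 64):
    """Median request/response round-trip time over loopback TCP, in ns."""
    server = socket.socket()                  # IPv4 TCP by default
    server.bind(("127.0.0.1", 0))             # OS picks a free port
    server.listen(1)
    threading.Thread(target=_run_echo_server, args=(server,),
                     daemon=True).start()

    samples = []
    with socket.create_connection(server.getsockname()) as s:
        # Disable Nagle's algorithm so small requests are not delayed.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(trials):
            t0 = time.perf_counter_ns()
            s.sendall(payload)
            s.recv(len(payload))  # 64 B arrives in one segment on loopback
            samples.append(time.perf_counter_ns() - t0)
    samples.sort()
    return samples[len(samples) // 2]  # median resists scheduler outliers
```

Reporting the median (or a high percentile) rather than the mean is standard practice in latency work, since the distribution is heavy-tailed.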

Journal ArticleDOI
TL;DR: A power-efficient control link algorithm is developed that establishes low-latency control links with reduced power consumption; numerical results demonstrate the low latency and high reliability of the links it establishes, ultimately suggesting the technical and economic feasibility of the software-defined LEO satellite network.
Abstract: The low earth orbit (LEO) satellite network can benefit from software-defined networking (SDN) by lightening forwarding devices and improving service diversity. In order to apply SDN to the network, however, reliable SDN control links should be associated from satellite gateways to satellites, with the wireless and mobile properties of the network taken into account. Since these characteristics affect both control link association and gateway power allocation, we define a new cross-layer SDN control link problem. To the best of our knowledge, this is the first attempt to explore the cross-layer control link problem for the software-defined satellite network. A logically centralized SDN control framework constrained by maximum total power is introduced to enhance gateway power efficiency for control link setup. Based on the power control analysis of the problem, a power-efficient control link algorithm is developed, which establishes low-latency control links with reduced power consumption. Along with the sensitivity analysis of the proposed control link algorithm, numerical results demonstrate the low latency and high reliability of control links established by the algorithm, ultimately suggesting the feasibility, both technical and economic, of the software-defined LEO satellite network.

17 citations


Network Information
Related Topics (5)
Network packet
159.7K papers, 2.2M citations
92% related
Server
79.5K papers, 1.4M citations
91% related
Wireless
133.4K papers, 1.9M citations
90% related
Wireless sensor network
142K papers, 2.4M citations
90% related
Wireless network
122.5K papers, 2.1M citations
90% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227