Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published on this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Journal ArticleDOI
TL;DR: This paper comprehensively surveys recent advances in edge caching in RANs, covering the key techniques and their performance, and presents an advanced hierarchical edge cache structure.
Abstract: The edge cache is an effective way to reduce the heavy traffic load and the end-to-end latency in radio access networks (RANs) for supporting a number of critical Internet of Things (IoT) services and applications. It has been verified to provide high spectral efficiency (SE), high energy efficiency (EE), and low latency. Along with several key techniques that have been applied, such as device-to-device communication and predictive caching, edge cache techniques in RANs for IoT are becoming diversified. This paper comprehensively surveys recent advances in edge caching in RANs, including the key techniques and their corresponding performance. In particular, the key techniques are presented from the viewpoints of the deployment location of edge caches, content placement strategy, and coded caching. An advanced hierarchical edge cache structure is presented, and the main impacts of the key techniques on SE, EE, and latency are summarized. Several open issues and challenges are identified as well to spur future investigations, including the joint optimization of radio and cache resources, edge caching combined with mobile edge computing and network intelligence, and privacy and security.
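To make the content placement idea concrete, here is a minimal sketch (not from the survey itself) of the simplest placement strategy it covers: pinning the most popular contents at the edge under a Zipf request model. The Zipf exponent, catalogue size, cache sizes, and edge/core latencies below are hypothetical values chosen for illustration.

```python
# Minimal sketch of popularity-based ("cache most popular") content
# placement at an edge node. All parameters are hypothetical.

def zipf_popularity(n_contents: int, alpha: float = 0.8) -> list[float]:
    """Zipf-distributed request probabilities, a common traffic model."""
    weights = [1.0 / rank ** alpha for rank in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def average_latency(cache_size: int, n_contents: int = 1000,
                    edge_ms: float = 5.0, core_ms: float = 50.0) -> float:
    """Expected delivery latency when the cache_size most popular
    contents sit at the edge and everything else comes from the core."""
    popularity = zipf_popularity(n_contents)
    hit_ratio = sum(popularity[:cache_size])  # most-popular placement
    return hit_ratio * edge_ms + (1.0 - hit_ratio) * core_ms

for size in (10, 50, 100):
    print(f"cache size {size:3d}: avg latency {average_latency(size):.1f} ms")
```

Even this toy model shows the survey's central point: a small cache of popular content at the edge cuts average latency well below the core round trip.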

59 citations

Proceedings ArticleDOI
24 May 2015
TL;DR: The design of a 1 mW, 10 ns-latency mixed-signal system in 0.18 μm CMOS that filters out uncorrelated background activity in event-based neuromorphic sensors, targeting embedded neuromorphic visual and auditory systems where low average power consumption and low latency are critical.
Abstract: This paper reports the design of a 1 mW, 10 ns-latency mixed-signal system in 0.18 μm CMOS which enables filtering out uncorrelated background activity in event-based neuromorphic sensors. Background activity (BA) in the output of dynamic vision sensors is caused by thermal noise and junction leakage current acting on switches connected to floating nodes in the pixels. The reported chip generates a pass flag for spatiotemporally correlated events for post-processing, reducing the communication/computation load and improving the information rate. A chip with a 128×128 array of 20×20 μm² cells has been designed. Each filter cell combines programmable spatial subsampling with a temporal window based on current integration. Power gating minimizes power consumption by activating the threshold-detection and communication circuits only in the cell receiving an input event. This correlation filter chip targets embedded neuromorphic visual and auditory systems, where low average power consumption and low latency are critical.
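A software analogue of the chip's filtering rule, as a rough sketch (the actual implementation is an analog circuit; the event format, window length, and neighbourhood radius here are assumptions): an event earns the pass flag only if some pixel in its spatial neighbourhood fired within the temporal window.

```python
# Software analogue of the spatiotemporal correlation (background
# activity) filter; parameters are assumptions for the sketch.
import numpy as np

def filter_events(events, width=128, height=128,
                  window_us=10_000, radius=1):
    """events: time-ordered iterable of (t_us, x, y).
    Returns only the events correlated with a recent neighbour."""
    last_seen = np.full((height, width), -np.inf)  # last event time per pixel
    passed = []
    for t, x, y in events:
        x0, x1 = max(0, x - radius), min(width, x + radius + 1)
        y0, y1 = max(0, y - radius), min(height, y + radius + 1)
        # Pass flag: some pixel in the neighbourhood fired recently.
        if (t - last_seen[y0:y1, x0:x1] <= window_us).any():
            passed.append((t, x, y))
        last_seen[y, x] = t  # record the event's timestamp either way
    return passed

# An event 1 ms after a neighbouring pixel passes; isolated ones do not.
print(filter_events([(0, 5, 5), (1_000, 5, 6), (9_000_000, 90, 90)]))
```

The chip performs the same test per event with analog current integration instead of stored timestamps, which is what makes the 10 ns latency and sub-milliwatt power possible.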

59 citations

Journal ArticleDOI
TL;DR: An optimization problem is formulated that maximizes the number of URLLC services supported by the system by optimizing time and frequency resources and the prediction horizon; simulations show that the tradeoff between user-experienced delay and reliability can be improved significantly via prediction and communication co-design.
Abstract: Ultra-reliable and low-latency communications (URLLC) are considered one of the three new application scenarios in fifth-generation cellular networks. In this work, we aim to reduce the user-experienced delay through prediction and communication co-design, where each mobile device predicts its future states and sends them to a data center in advance. Since predictions are not error-free, we consider both prediction errors and packet losses in communications when evaluating the reliability of the system. We then formulate an optimization problem that maximizes the number of URLLC services supported by the system by optimizing time and frequency resources and the prediction horizon. Simulation results verify the effectiveness of the proposed method and show that the tradeoff between user-experienced delay and reliability can be improved significantly via prediction and communication co-design. Furthermore, we carried out a remote-control experiment in a virtual factory and validated our prediction and communication co-design concept with practical mobility data generated by a real tactile device.
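The core tradeoff can be illustrated with a back-of-the-envelope sketch (the linear prediction-error model and all numbers are illustrative assumptions, not the paper's): a longer prediction horizon hides more of the end-to-end delay but adds prediction error on top of packet loss.

```python
# Sketch of the delay/reliability tradeoff in prediction and
# communication co-design. Error model and numbers are assumed.

def experienced_delay(e2e_delay_ms: float, horizon_ms: float) -> float:
    """Delay the user perceives after prediction hides part of it."""
    return max(0.0, e2e_delay_ms - horizon_ms)

def overall_error(p_loss: float, horizon_ms: float,
                  err_per_ms: float = 1e-6) -> float:
    """Failure if the packet is lost OR the prediction is wrong;
    prediction error is modelled as growing with the horizon."""
    p_pred = min(1.0, err_per_ms * horizon_ms)
    return 1.0 - (1.0 - p_loss) * (1.0 - p_pred)

for h_ms in (0.0, 5.0, 10.0):
    print(f"horizon {h_ms:4.1f} ms: "
          f"delay {experienced_delay(10.0, h_ms):4.1f} ms, "
          f"error prob {overall_error(1e-5, h_ms):.2e}")
```

Choosing the horizon (alongside time and frequency resources) so that the service count is maximized under both constraints is exactly the optimization the paper formulates.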

58 citations

Journal ArticleDOI
TL;DR: In this paper, a novel experienced deep reinforcement learning (deep-RL) framework is proposed to provide model-free resource allocation for ultra-reliable low-latency communication in 6G (URLLC-6G) in the downlink of a wireless network.
Abstract: In this paper, a novel experienced deep reinforcement learning (deep-RL) framework is proposed to provide model-free resource allocation for ultra-reliable low-latency communication in 6G (URLLC-6G) in the downlink of a wireless network. The goal is to guarantee high end-to-end reliability and low end-to-end latency, under explicit data rate constraints, for each wireless user without any models of or assumptions on the users’ traffic. In particular, in order to enable the deep-RL framework to account for extreme network conditions and operate in highly reliable systems, a new approach based on generative adversarial networks (GANs) is proposed. This GAN approach is used to pre-train the deep-RL framework on a mix of real and synthetic data, thus creating an experienced deep-RL framework that has been exposed to a broad range of network conditions. The proposed deep-RL framework is applied to a multi-user orthogonal frequency division multiple access (OFDMA) resource allocation system. Formally, this URLLC-6G resource allocation problem in OFDMA systems is posed as a power minimization problem under reliability, latency, and rate constraints. To solve this problem using experienced deep-RL, first, the rate of each user is determined. Then, these rates are mapped to the resource block and power allocation vectors of the studied wireless system. Finally, the end-to-end reliability and latency of each user are used as feedback to the deep-RL framework. It is then shown that at the fixed point of the deep-RL algorithm, the reliability and latency of the users are near-optimal. Moreover, for the proposed GAN approach, a theoretical limit for the generator output is analytically derived. Simulation results show how the proposed approach can achieve near-optimal performance within the rate-reliability-latency region, depending on the network and service requirements. The results also show that the proposed experienced deep-RL framework is able to remove the transient training time that makes conventional deep-RL methods unsuitable for URLLC-6G. Moreover, during extreme conditions, the proposed experienced deep-RL agent is shown to recover instantly, while a conventional deep-RL agent takes several epochs to adapt to new extreme conditions.
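The final mapping step can be illustrated with a toy single-user calculation (a simplification under assumed parameters, not the paper's solver): given a user's rate, invert the Shannon capacity formula to find the transmit power that meets that rate over a candidate number of resource blocks.

```python
# Toy single-user version of the rate -> (resource blocks, power)
# mapping. Block bandwidth, noise density, and channel gain are
# made-up values for the sketch.

RB_HZ = 180e3   # bandwidth of one OFDMA resource block (assumed)
N0 = 1e-17      # noise power spectral density in W/Hz (assumed)

def power_for(rate_bps: float, n_blocks: int, gain: float) -> float:
    """Invert C = B*log2(1 + g*p/(B*N0)) for the power p meeting rate C."""
    bw = n_blocks * RB_HZ
    snr = 2.0 ** (rate_bps / bw) - 1.0
    return snr * bw * N0 / gain

def best_allocation(rate_bps: float, gain: float, max_blocks: int):
    """(power, blocks) pair minimizing transmit power for one user."""
    return min((power_for(rate_bps, b, gain), b)
               for b in range(1, max_blocks + 1))

power_w, blocks = best_allocation(2e6, gain=1e-9, max_blocks=10)
print(f"{blocks} blocks, {power_w * 1e3:.1f} mW")
```

In this single-user toy, more blocks always reduce power; the hard part the deep-RL framework learns is sharing a finite set of blocks across users while keeping every user's reliability and latency constraints satisfied.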

58 citations

Journal ArticleDOI
TL;DR: This work demonstrates a novel optical time-division multiplexing packet-level system-synchronization and address-comparison technique, which relies on cascaded semiconductor-based optical logic gates operating at 50 Gb/s line rates.
Abstract: We demonstrate a novel optical time-division multiplexing packet-level system-synchronization and address-comparison technique that relies on cascaded semiconductor-based optical logic gates operating at 50 Gb/s line rates. Synchronous global clock distribution is used to achieve fixed-length packet synchronization that is resistant to channel-induced timing delays and straightforward to achieve using a single optical logic gate. Four-bit address processing is achieved using a pulse-position-modulated header input to a single optical logic gate, which provides Boolean XOR functionality, low latency, and stability over time periods of more than 1 h with low switching energy (<100 fJ).
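The Boolean idea behind the address-comparison step is easy to sketch (the paper realizes it optically with a single XOR gate acting on pulse-position-modulated headers; the code below shows only the logic, not the optics): a packet matches when the XOR of its 4-bit header address with the node's own address is all zeros.

```python
# XOR address comparison: accept a packet only when every header bit
# matches the node address, i.e. the XOR of the two words is all-zero.

def address_match(header: int, local: int, width: int = 4) -> bool:
    mask = (1 << width) - 1          # keep only the address bits
    return (header ^ local) & mask == 0

assert address_match(0b1010, 0b1010)        # XOR = 0000 -> match
assert not address_match(0b1010, 0b1110)    # one bit differs -> no match
print("address comparison OK")
```

A single XOR gate suffices because any mismatched bit produces a pulse at the gate output, so "no output energy" directly signals an address match.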

58 citations


Network Information
Related Topics (5)

Network packet: 159.7K papers, 2.2M citations (92% related)
Server: 79.5K papers, 1.4M citations (91% related)
Wireless: 133.4K papers, 1.9M citations (90% related)
Wireless sensor network: 142K papers, 2.4M citations (90% related)
Wireless network: 122.5K papers, 2.1M citations (90% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227