scispace - formally typeset
Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over the lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
20 May 2019
TL;DR: This paper proposes a preamble reservation based 2-step access scheme where grouped devices gain network access via a designated group leader using reserved preambles and demonstrates that the proposed scheme enables ultra-reliable and low latency access for grouped devices while improving the performance of non-grouped devices through proper configuration of parameters.
Abstract: Ultra-reliable low latency communication (URLLC) and massive machine type communications (mMTC) are two of the three major technological pillars in 5G. For medium access in mMTC scenarios, e.g., smart cities, a major bottleneck for achieving reliable access is channel congestion due to LTE-A based random access. Hence, priority-based access schemes are preferred in order to provide reliable and low latency access for mMTC devices in 5G networks. In this paper, we categorize the devices covered inside a cell into grouped and non-grouped sets and propose a preamble reservation based 2-step access scheme where grouped devices gain network access via a designated group leader using reserved preambles. Through analysis and simulations, we demonstrate that the proposed scheme enables ultra-reliable and low latency access for grouped devices while improving the performance of non-grouped devices through proper configuration of parameters.
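The collision-avoidance intuition behind the reserved-preamble scheme can be sketched with a toy slotted random-access simulation. This is not the paper's analytical model; the device counts, preamble counts, and the one-preamble-per-device contention rule are all illustrative assumptions:

```python
import random

def access_success_rate(n_devices, n_preambles, trials=10_000):
    """Fraction of devices whose randomly chosen preamble collides with
    no other device's choice (a standard slotted random-access model)."""
    successes = 0
    for _ in range(trials):
        picks = [random.randrange(n_preambles) for _ in range(n_devices)]
        counts = {}
        for p in picks:
            counts[p] = counts.get(p, 0) + 1
        successes += sum(1 for p in picks if counts[p] == 1)
    return successes / (trials * n_devices)

# Baseline: 40 grouped + 20 non-grouped devices all contend for 60 preambles.
baseline = access_success_rate(60, 60)

# Reservation: the 40 grouped devices are represented by 5 group leaders
# contending on 10 reserved preambles, while the 20 non-grouped devices
# contend on the remaining 50 preambles.
leaders = access_success_rate(5, 10)
non_grouped = access_success_rate(20, 50)
```

Under these assumptions both the group leaders and the non-grouped devices see a higher per-attempt success probability than the shared baseline, which mirrors the paper's qualitative claim that reservation helps both sets of devices.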

24 citations

Journal ArticleDOI
TL;DR: In this article, a low-latency edge caching method is proposed to reduce user access latency and to cache diverse content in the edge network more effectively. A cache model based on base-station cooperation is established, and the delay under different transmission modes is considered.
Abstract: With the increase of mobile terminal equipment and network mass data, users have higher requirements for delay and service quality. To reduce user access latency and more effectively cache diverse content in the edge network, a low-latency edge caching method is proposed. The cache model based on base station cooperation is established and the delay in different transmission modes is considered. Finally, the problem of minimizing latency is transformed into a problem of maximizing cache reward, and a greedy algorithm based on the original dual interior point is used to obtain the strategy of the original problem. Meanwhile, in order to improve service quality and balance communication overhead and migration overhead, a migration method based on balanced communication overhead and migration overhead is proposed. The model that balances communication overhead and migration overhead is established, and the reinforcement learning method is used to obtain a migration scheme that maximizes accumulated revenue. Comparison results show that our caching method can enhance the cache reward and reduce delay. Meanwhile, the migration algorithm can increase service migration revenue and reduce communication overhead.
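The paper's cache-placement step uses a greedy algorithm built on a primal-dual interior-point method; as a much simpler stand-in, a reward-density greedy under a capacity budget conveys the flavor of trading cache space for reward. The item names, sizes, and rewards below are made up for illustration:

```python
def greedy_cache(contents, capacity):
    """Greedily fill an edge cache with the items offering the best
    reward (e.g. popularity * latency saved) per unit of size."""
    ranked = sorted(contents, key=lambda c: c["reward"] / c["size"], reverse=True)
    cached, used = [], 0
    for c in ranked:
        if used + c["size"] <= capacity:
            cached.append(c["name"])
            used += c["size"]
    return cached

items = [
    {"name": "video_a", "size": 4, "reward": 20},  # density 5.0
    {"name": "video_b", "size": 2, "reward": 14},  # density 7.0
    {"name": "page_c",  "size": 1, "reward": 3},   # density 3.0
]
print(greedy_cache(items, capacity=5))  # -> ['video_b', 'page_c']
```

video_a has the largest absolute reward but does not fit once video_b is cached, so the greedy falls back to page_c; this kind of gap from the optimum is why the paper pairs the greedy with an optimization-based relaxation rather than using density ranking alone.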

24 citations

Journal ArticleDOI
Ilias Iliadis1, Cyriel Minkenberg1
TL;DR: An analytical model is presented to investigate the efficiency of the speculative transmission scheme employed in a non-blocking N × NR input-queued crossbar switch with R receivers per output; it shows that the control-path latency can be almost entirely eliminated for loads up to 50%.
Abstract: Low latency is a critical requirement in some switching applications, specifically in parallel computer interconnection networks. The minimum latency in switches with centralized scheduling comprises two components, namely, the control-path latency and the data-path latency, which in a practical high-capacity, distributed switch implementation can be far greater than the cell duration. We introduce a speculative transmission scheme to significantly reduce the average control-path latency by allowing cells to proceed without waiting for a grant, under certain conditions. It operates in conjunction with any centralized matching algorithm to achieve a high maximum utilization and incorporates a reliable delivery mechanism to deal with failed speculations. An analytical model is presented to investigate the efficiency of the speculative transmission scheme employed in a non-blocking N × NR input-queued crossbar switch with R receivers per output. Using this model, performance measures such as the mean delay and the rate of successful speculative transmissions are derived. The results demonstrate that the control-path latency can be almost entirely eliminated for loads up to 50%. Our simulations confirm the analytical results.
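A back-of-the-envelope model shows why speculation pays off at low load. The success probability 1 − load is a crude stand-in for the paper's analytical model, and the latency numbers are arbitrary:

```python
def mean_latency(load, ctrl_latency, data_latency):
    """Toy model of speculative transmission: a cell sent speculatively
    skips the control-path wait. Assume (crudely) the speculation succeeds
    with probability 1 - load; failed speculations fall back to the
    scheduled path and pay the full control-path latency."""
    p_success = max(0.0, 1.0 - load)
    return data_latency + (1.0 - p_success) * ctrl_latency

# At low load almost every speculation succeeds, so latency approaches the
# pure data-path delay; at high load the scheme degenerates toward the
# fully scheduled latency.
light = mean_latency(0.1, ctrl_latency=10.0, data_latency=2.0)
heavy = mean_latency(0.9, ctrl_latency=10.0, data_latency=2.0)
```

Even this toy version captures the headline result's shape: the benefit is largest when contention, and hence the chance of a failed speculation, is low.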

24 citations

Journal ArticleDOI
TL;DR: In this paper, the basic concepts and the potential applications of ultra-reliable and low latency communication (URLLC) are first introduced, and then the state-of-the-art research of URLLC in the physical layer, link layer and the network layer are overviewed.
Abstract: In the upcoming 5G and beyond systems, ultra-reliable and low latency communication (URLLC) has been considered as the key enabler to support diverse mission-critical services, such as industrial automation, remote healthcare, and intelligent transportation. However, the two stringent requirements of URLLC: extremely low latency and ultra-strict reliability have posed great challenges in system designing. In this article, the basic concepts and the potential applications of URLLC are first introduced. Then, the state-of-the-art research of URLLC in the physical layer, link layer and the network layer are overviewed. In addition, some potential research topics and challenges are also identified.

24 citations

Journal ArticleDOI
TL;DR: This paper considers the downlink of a vehicle-to-infrastructure (V2I) system conceived for URLLC based on idealized perfect and realistic imperfect CSI, and shows that the proposed resource allocation scheme significantly reduces the maximum transmission latency, and it is not sensitive to the fluctuation of road-traffic density.
Abstract: To efficiently support safety-related vehicular applications, the ultra-reliable and low-latency communication (URLLC) concept has become an indispensable component of vehicular networks (VNETs). Due to the high mobility of VNETs, exchanging near-instantaneous channel state information (CSI) and making reliable resource allocation decisions based on such short-term CSI evaluations are not practical. In this paper, we consider the downlink of a vehicle-to-infrastructure (V2I) system conceived for URLLC based on idealized perfect and realistic imperfect CSI. By exploiting the benefits of the massive MIMO concept, a two-stage radio resource allocation problem is formulated based on a novel twin-timescale perspective for avoiding the frequent exchange of near-instantaneous CSI. Specifically, based on the prevalent road-traffic density, Stage 1 is constructed for minimizing the worst-case transmission latency on a long-term timescale. In Stage 2, the base station allocates the total power at a short-term timescale according to the large-scale fading CSI encountered for minimizing the maximum transmission latency across all vehicular users. Then, a primary algorithm and a secondary algorithm are conceived for our V2I URLLC system to find the optimal solution of the twin-timescale resource allocation problem, with special emphasis on the complexity imposed. Finally, our simulation results show that the proposed resource allocation scheme significantly reduces the maximum transmission latency, and it is not sensitive to the fluctuation of road-traffic density.
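Stage 2's min-max latency power allocation can be sketched as a one-dimensional bisection: find the smallest common deadline T such that giving each user just enough power to meet T fits within the total power budget. The Shannon-rate latency model and the channel gains below are simplifying assumptions, not the paper's exact formulation:

```python
def minmax_power_allocation(gains, bits, total_power, bandwidth=1e6, tol=1e-6):
    """Bisect on a common deadline T. With latency_k = bits_k / (B * log2(1 + p_k * g_k)),
    the power user k needs to finish within T is p_k = (2**(bits_k / (B * T)) - 1) / g_k.
    Total required power is decreasing in T, so the min-max latency is the
    smallest feasible T."""
    def power_needed(T):
        return sum((2 ** (b / (bandwidth * T)) - 1) / g
                   for g, b in zip(gains, bits))

    lo, hi = 1e-9, 1.0
    while power_needed(hi) > total_power:  # grow the bracket until feasible
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if power_needed(mid) <= total_power:
            hi = mid
        else:
            lo = mid
    return hi

# Doubling the power budget must shrink the achievable worst-case latency.
t_low = minmax_power_allocation([1e-3, 2e-3], [1e4, 1e4], total_power=1.0)
t_high = minmax_power_allocation([1e-3, 2e-3], [1e4, 1e4], total_power=2.0)
```

Because total required power is monotone in the deadline, bisection finds the optimum without any per-user search, which is the kind of low-complexity structure the paper emphasizes for its short-term stage.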

24 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations (92% related)
Server: 79.5K papers, 1.4M citations (91% related)
Wireless: 133.4K papers, 1.9M citations (90% related)
Wireless sensor network: 142K papers, 2.4M citations (90% related)
Wireless network: 122.5K papers, 2.1M citations (90% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227