Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over the lifetime, 3729 publications have been published within this topic receiving 39210 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
03 Apr 2018
TL;DR: An overview of 5G technologies is presented, and open issues and the future scope of 5G, which promises to take mobile communication to a new standard, are highlighted.
Abstract: Fifth generation (5G) mobile networks have revolutionized the communication scenario but are not yet realized globally. As mobile generations have advanced over the years, the features and performance of mobile communication have improved. 5G promises to provide higher data rates, higher capacity, low latency, massive device connectivity, consistent QoE (Quality of Experience), and reduced cost compared with 4G, and to take mobile communication to a new standard. This article presents an overview of 5G technologies and highlights open issues and the future scope of 5G.

17 citations

Journal ArticleDOI
TL;DR: Concepts for a flexible and low-latency-enabling mobile network architecture are presented, along with strategies for staying efficient, and future visions for network management architectures and 5G's impact on economic aspects are discussed.
Abstract: The fifth generation (5G) of mobile networks is envisioned to support new applications having demanding requirements, such as low latency and high reliability, which is the focus of this article along with enhanced traditional mobile broadband and massive sensing. Different approaches have already been proposed to achieve low latency while guaranteeing high reliability. However, the challenge of efficient resource utilization remains. In this article, concepts for a flexible and low-latency-enabling mobile network architecture are presented, along with strategies for staying efficient. The work is put in perspective with respect to ongoing standardization activities. Finally, future visions for network management architectures and 5G's impact on economic aspects are discussed.

17 citations

Journal ArticleDOI
25 Jun 2021
TL;DR: In this article, the authors present key technology enablers for the deployment of ultra-reliable and low-latency communications in 6G networks and discuss potential issues related to system security and spectrum management.
Abstract: With the fifth generation communication system in the phase of deployment, the research community has focused its attention toward a higher frequency spectrum, data rates in terabits per second, latency lower than 1 ms, and ultrahigh reliability. Communication systems having such characteristics are expected to be rolled out in the sixth generation. This article presents key technology enablers for the deployment of ultra-reliable and low-latency communications in 6G networks. In addition, potential issues related to system security and spectrum management in 6G networks are discussed. The significant technological enablers include machine learning, channel estimation, intelligent reflecting surfaces, terahertz communication, spectrum sharing, and system security, which possess the potential to revolutionize the future of wireless communication networks.

17 citations

Patent
23 Jul 2007
TL;DR: A message is stored in non-volatile, low-latency memory together with its destination list and other metadata, and is removed from this storage only when an acknowledgement has been received from every destination indicating successful receipt; if the message remains in memory beyond a time threshold, or if memory resources run low, the message and its associated metadata are migrated to other persistent storage.
Abstract: A method of providing assured message delivery with low latency and high message throughput, in which a message is stored in non-volatile, low-latency memory with an associated destination list and other metadata. The message is removed from this low-latency non-volatile storage only when an acknowledgement has been received from each destination indicating that the message has been successfully received; if the message stays in such memory for a period exceeding a time threshold, or if memory resources are running low, the message and the associated destination list and other metadata are migrated to other persistent storage. The data storage engine can also be used for other high-throughput applications.

17 citations
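
The mechanism described in the patent abstract above lends itself to a short sketch: a message sits in fast non-volatile storage together with its destination list, is deleted once every destination has acknowledged it, and is migrated to slower persistent storage if it ages past a threshold or the fast store fills up. The Python below is a minimal illustration of that flow under those assumptions; the class and method names (AssuredMessageStore, publish, acknowledge, sweep) and all parameters are hypothetical and not taken from the patent.

```python
import time

# Illustrative sketch (not the patented implementation): a message lives in a
# fast "low-latency" store with its pending-destination set, and is dropped only
# when fully acknowledged, or migrated to slower persistent storage when it ages
# out or the fast store fills up.

class AssuredMessageStore:
    def __init__(self, age_threshold_s=60.0, fast_capacity=10_000):
        self.age_threshold_s = age_threshold_s
        self.fast_capacity = fast_capacity
        self.fast = {}        # msg_id -> [message, pending destinations, enqueue time]
        self.persistent = {}  # stand-in for slower persistent storage

    def publish(self, msg_id, message, destinations):
        """Store a message and its destination list in low-latency storage."""
        self.fast[msg_id] = [message, set(destinations), time.monotonic()]
        if len(self.fast) > self.fast_capacity:        # memory pressure: migrate oldest
            oldest = min(self.fast, key=lambda k: self.fast[k][2])
            self.persistent[oldest] = self.fast.pop(oldest)

    def acknowledge(self, msg_id, destination):
        """Record an ack; remove the message once every destination has acked."""
        store = self.fast if msg_id in self.fast else self.persistent
        entry = store.get(msg_id)
        if entry is None:
            return
        entry[1].discard(destination)
        if not entry[1]:                               # all destinations acked
            del store[msg_id]

    def sweep(self):
        """Migrate messages that exceeded the age threshold to persistent storage."""
        now = time.monotonic()
        for msg_id in [k for k, v in self.fast.items()
                       if now - v[2] > self.age_threshold_s]:
            self.persistent[msg_id] = self.fast.pop(msg_id)
```

Retention keyed to acknowledgements is what keeps delivery assured, while the age threshold and capacity check bound how much of the fast non-volatile store a slow or unreachable destination can occupy.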

Journal ArticleDOI
TL;DR: This work proposes novel DRAM set mapping policies that simultaneously reduce the DRAM cache miss rate (via high associativity) and the DRAM cache hit latency (via improved row buffer hit rates), and presents a DRAM Tag cache (DTC) with a novel insertion policy that increases the DTC hit rate.
Abstract: On-chip dynamic random access memory (DRAM) cache has recently been employed in the memory hierarchy to mitigate the widening latency gap between high-speed cores and off-chip memory. Two important parameters are the DRAM cache miss rate (D$-MR) and the DRAM cache hit latency (D$-HL), as they strongly influence performance. These parameters depend upon the DRAM set mapping policy. Recently proposed DRAM set mapping policies are predominantly optimized for either D$-MR or D$-HL. We propose novel DRAM set mapping policies that simultaneously reduce D$-MR (via high associativity) and D$-HL (via improved row buffer hit rates). To further improve D$-HL, we propose a small, low-latency DRAM Tag cache (DTC) structure that can quickly determine whether an access to the DRAM cache will be a hit or a miss. The performance of the proposed DTC depends upon the DTC hit rate, so we present a novel DTC insertion policy that increases it. We investigate the latency and miss rate tradeoffs when designing a DRAM cache hierarchy and analyze the effects of different policies on overall performance. We evaluate our policies on a wide variety of workloads and compare their performance with three recent proposals for on-chip DRAM caches. For a 16-core system, our set mapping policy, together with our DTC and its adaptive DTC insertion policy, improves the harmonic mean instructions-per-cycle throughput by 25.4%, 15.5%, and 7.3% compared to the state of the art, while requiring 55% less storage overhead for DRAM cache hit/miss prediction.

17 citations
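
The DRAM Tag cache (DTC) idea in the abstract above, a small fast structure holding copies of DRAM-cache tags so that hit or miss can be predicted without first probing the DRAM cache, can be sketched as a simple set-associative lookup table. The sketch below uses plain modulo indexing, LRU replacement, and placeholder sizes; it does not reproduce the paper's set mapping or adaptive insertion policies, and all names and parameters are illustrative.

```python
# Illustrative sketch of a DRAM Tag cache (DTC): a small, fast, set-associative
# structure holding copies of DRAM-cache tags so that a hit or miss can be
# predicted without probing the DRAM cache itself. Sizes, mapping, and the LRU
# policy below are placeholders, not the paper's policies.

DTC_SETS = 256     # number of DTC sets (illustrative)
DTC_WAYS = 4       # DTC associativity (illustrative)
BLOCK_BITS = 6     # 64-byte cache blocks

class DramTagCache:
    def __init__(self):
        # each set is an LRU-ordered list of (tag, dram_cache_way) entries
        self.sets = [[] for _ in range(DTC_SETS)]

    def _index(self, addr):
        block = addr >> BLOCK_BITS
        return block % DTC_SETS, block // DTC_SETS   # (set index, tag)

    def lookup(self, addr):
        """Return the stored DRAM-cache way on a DTC hit, else None (predicted miss path)."""
        idx, tag = self._index(addr)
        entries = self.sets[idx]
        for i, (t, way) in enumerate(entries):
            if t == tag:
                entries.insert(0, entries.pop(i))    # promote to MRU
                return way
        return None

    def insert(self, addr, dram_cache_way):
        """Fill the DTC once the DRAM cache has resolved the access."""
        idx, tag = self._index(addr)
        entries = self.sets[idx]
        entries[:] = [(t, w) for (t, w) in entries if t != tag]  # avoid duplicates
        entries.insert(0, (tag, dram_cache_way))
        if len(entries) > DTC_WAYS:
            entries.pop()                            # evict the LRU entry
```

On a DTC hit, the hit/miss outcome and the target way are known without an extra tag probe into the DRAM cache, which is where the hit-latency saving discussed in the abstract comes from.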


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 92% related
Server: 79.5K papers, 1.4M citations, 91% related
Wireless: 133.4K papers, 1.9M citations, 90% related
Wireless sensor network: 142K papers, 2.4M citations, 90% related
Wireless network: 122.5K papers, 2.1M citations, 90% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227