Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
25 Mar 2016
TL;DR: This paper proposes to address the challenge of fine-grain synchronization in shared-memory multiprocessing by using on-chip wireless communication, and shows that WiSync speeds up synchronization substantially.
Abstract: In shared-memory multiprocessing, fine-grain synchronization is challenging because it requires frequent communication. As technology scaling delivers larger manycore chips, such patterns are expected to remain costly to support. In this paper, we propose to address this challenge by using on-chip wireless communication. Each core has a transceiver and an antenna to communicate with all the other cores. This environment supports very low latency global communication. Our architecture, called WiSync, uses a per-core Broadcast Memory (BM). When a core writes to its BM, all the other 100+ BMs get updated in less than 10 processor cycles. We also use a second wireless channel with cheaper transfers to execute barriers efficiently. WiSync supports multiprogramming, virtual memory, and context switching. Our evaluation with simulations of 128-threaded kernels and 64-threaded applications shows that WiSync speeds up synchronization substantially. Compared to advanced conventional synchronization, WiSync attains an average speedup of nearly one order of magnitude for the kernels, and of 1.12 for PARSEC and SPLASH-2.
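
A minimal software sketch of the broadcast-memory idea described above, assuming a shared array guarded by a lock as a stand-in for the hardware BM and the wireless channel; the class and method names are illustrative, not WiSync's interface.

```python
# Illustrative sketch only: a software emulation of per-core broadcast
# memories (BMs). The real WiSync BM is a hardware buffer updated over an
# on-chip wireless channel in under 10 cycles; here a shared list plus a
# lock stands in for the broadcast medium, and all names are assumptions.
import threading

class BroadcastMemory:
    def __init__(self, num_cores):
        self._slots = [0] * num_cores      # one BM word per core
        self._lock = threading.Lock()      # models the single broadcast channel

    def write(self, core_id, value):
        with self._lock:
            self._slots[core_id] = value   # in hardware, every core's BM sees this

    def barrier(self, core_id, epoch):
        # Cheap barrier in the spirit of the second wireless channel:
        # announce the epoch, then spin until every core has caught up.
        self.write(core_id, epoch)
        while True:
            with self._lock:
                if all(v >= epoch for v in self._slots):
                    return

def worker(bm, core_id, epochs):
    for e in range(1, epochs + 1):
        # ... per-core work for this phase would go here ...
        bm.barrier(core_id, e)

if __name__ == "__main__":
    N = 8
    bm = BroadcastMemory(N)
    threads = [threading.Thread(target=worker, args=(bm, i, 100)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("all", N, "cores completed 100 barrier epochs")
```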

35 citations

Journal ArticleDOI
TL;DR: A novel K-nested layered look-ahead method and its corresponding architecture, which combine K trellis steps into one trellis step (where K is the encoder constraint length), are proposed for implementing low-latency, high-throughput Viterbi decoders.
Abstract: In this paper, a novel K-nested layered look-ahead method and its corresponding architecture, which combine K trellis steps into one trellis step (where K is the encoder constraint length), are proposed for implementing low-latency, high-throughput Viterbi decoders. The proposed method guarantees parallel paths between any two trellis states in the look-ahead trellises and distributes the add-compare-select (ACS) computations to all trellis layers. This leads to a regular and simple architecture for the Viterbi decoding algorithm. The look-ahead ACS computation latency of the proposed method increases logarithmically with respect to the look-ahead step (M) divided by the encoder constraint length (K), as opposed to linearly as in prior work. For a 4-state (i.e., K=3) convolutional code, the decoding latency of the Viterbi decoder using the proposed method is reduced by 84%, at the expense of an approximately 22% increase in hardware complexity, compared with the conventional M-step look-ahead method with M=48 (where M is also the level of parallelism). The main advantage of the proposed design is that it has the lowest latency among all known look-ahead Viterbi decoders for a given level of parallelism.
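
For intuition on why look-ahead latency can grow logarithmically rather than linearly, the sketch below combines per-step branch-metric matrices with min-plus (tropical) products in a balanced tree; this is a generic look-ahead illustration, not the paper's K-nested layered architecture, and the 4-state branch metrics are invented.

```python
# Rough sketch of look-ahead ACS combining: each trellis step is a
# branch-metric matrix, consecutive steps are merged with a min-plus
# matrix product, and merging M steps as a balanced tree needs only
# O(log M) sequential combine levels instead of O(M).
INF = float("inf")

def min_plus(A, B):
    """Combine two branch-metric matrices: C[i][j] = min_k A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def combine_steps(mats):
    """Tree-reduce M per-step matrices into one M-step transition matrix."""
    while len(mats) > 1:
        nxt = [min_plus(mats[i], mats[i + 1]) for i in range(0, len(mats) - 1, 2)]
        if len(mats) % 2:          # odd leftover carries to the next level
            nxt.append(mats[-1])
        mats = nxt
    return mats[0]

if __name__ == "__main__":
    # Toy 4-state (K=3) trellis step with arbitrary branch metrics;
    # INF marks state pairs that have no branch between them.
    step = [[1, INF, 2, INF],
            [3, INF, 1, INF],
            [INF, 2, INF, 4],
            [INF, 1, INF, 2]]
    M = 8
    lookahead = combine_steps([step] * M)  # one combined 8-step trellis stage
    print(lookahead[0])                    # best 8-step path metrics from state 0
```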

35 citations

Journal ArticleDOI
TL;DR: ElasticFog, which runs on top of the Kubernetes platform and enables real-time elastic resource provisioning for containerized applications in Fog computing, collects network traffic information in real time and allocates computational resources proportionally to the distribution of network traffic.
Abstract: The recent increase in the number of Internet of Things (IoT) devices has led to the generation of a large amount of data. These data are generally processed by cloud servers because of their high scalability and ability to provide resources on demand. However, processing large amounts of data in the cloud is an impractical solution for the strict requirements of IoT services, such as low latency and high bandwidth. Fog computing, which brings computational resources closer to the IoT devices, has emerged as a suitable solution to mitigate these problems. Resource provisioning and application orchestration are two of the key challenges when running IoT applications in a Fog computing environment. In this article, we present ElasticFog, which runs on top of the Kubernetes platform and enables real-time elastic resource provisioning for containerized applications in Fog computing. Specifically, ElasticFog collects network traffic information in real time and allocates computational resources proportionally to the distribution of network traffic. The experimental results prove that ElasticFog achieves a significant improvement in terms of throughput and network latency compared with the default mechanism in Kubernetes.
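
A rough sketch of the proportional provisioning idea described above, assuming per-node traffic counters and a fixed replica budget; the function and node names are hypothetical, and this is not the ElasticFog code or the Kubernetes API.

```python
# Minimal sketch: split a fixed replica budget across Fog nodes in
# proportion to each node's share of observed network traffic, using
# largest-remainder rounding so the counts sum to the budget.
def allocate_replicas(traffic_by_node, total_replicas):
    total = sum(traffic_by_node.values())
    if total == 0:
        # No traffic observed: spread replicas evenly as a fallback.
        base = total_replicas // len(traffic_by_node)
        return {node: base for node in traffic_by_node}
    shares = {node: traffic_by_node[node] / total * total_replicas
              for node in traffic_by_node}
    alloc = {node: int(shares[node]) for node in shares}
    leftover = total_replicas - sum(alloc.values())
    # Hand out the remaining replicas to the nodes with the largest remainders.
    for node in sorted(shares, key=lambda n: shares[n] - alloc[n], reverse=True)[:leftover]:
        alloc[node] += 1
    return alloc

if __name__ == "__main__":
    traffic = {"fog-node-1": 900, "fog-node-2": 350, "fog-node-3": 750}  # requests/s
    print(allocate_replicas(traffic, total_replicas=10))
    # replica counts roughly proportional to traffic, summing to 10
```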

35 citations

Proceedings ArticleDOI
01 Aug 2018
TL;DR: In this paper, the authors highlight the requirements and mechanisms that are necessary for URLLC in LTE and propose a DCI Duplication method to increase LTE control channel reliability for the IMT-2020 submission of LTE.
Abstract: 5G is envisioned to support three broad categories of services: eMBB, URLLC, and mMTC. URLLC services refer to future applications which require reliable end-to-end data communications while fulfilling ultra-low latency constraints. In this paper, we highlight the requirements and mechanisms that are necessary for URLLC in LTE. Design challenges faced when reducing latency in LTE are shown. The performance of short processing time and frame structure enhancements is analyzed. Our proposed DCI Duplication method to increase LTE control channel reliability is presented and evaluated. The feasibility of achieving low latency and high reliability for the IMT-2020 submission of LTE is shown. We further anticipate the opportunities and technical design challenges in evolving 3GPP's LTE and designing the new 5G NR standard to meet the requirements of novel URLLC services.
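
As a rough illustration of why duplicating control information helps reliability, the sketch below assumes the two DCI copies are missed independently, which is a simplification and not the paper's channel model or evaluation.

```python
# Back-of-the-envelope sketch: if a single DCI transmission is missed with
# probability p and duplicate copies fail independently (a simplifying
# assumption), the duplicated message is missed only when all copies fail.
def miss_probability(p_single, copies=2):
    """Missed-detection probability with independent duplicate transmissions."""
    return p_single ** copies

if __name__ == "__main__":
    p = 1e-3                                              # assumed single-copy miss rate
    print(f"single copy : {miss_probability(p, 1):.1e}")  # 1.0e-03
    print(f"duplicated  : {miss_probability(p, 2):.1e}")  # 1.0e-06
```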

35 citations

Proceedings ArticleDOI
15 Jun 2010
TL;DR: BEAM (Burst-aware Energy-efficient Adaptive MAC) is designed to close the gap between energy-efficiency optimizations and performance optimizations, which are contradicting goals in MAC protocols for Wireless Sensor Networks.
Abstract: Low latency for packet delivery, high throughput, good reactivity, and energy-efficient operation are key challenges that MAC protocols for Wireless Sensor Networks (WSNs) have to meet. Since traffic patterns as well as network load may change during network lifetime, adaptability of the protocol stack, e.g. in terms of duty cycling, and the integration of reliable transport mechanisms are mandatory. So far, given that optimizations for energy efficiency and for performance are contradicting, most proposed MAC protocols have concentrated on either one or the other. In order to close this gap, we designed BEAM (Burst-aware Energy-efficient Adaptive MAC).
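
A toy sketch of the burst-aware adaptivity trade-off, assuming a simple wake-up interval rule; the thresholds and back-off policy are invented for illustration and are not the BEAM protocol specification.

```python
# Toy duty-cycling rule: poll aggressively while a packet burst is being
# observed (low delivery latency), and back off exponentially toward a long
# sleep interval when the channel is idle (low energy consumption).
MIN_INTERVAL_MS = 50      # aggressive polling during bursts
MAX_INTERVAL_MS = 2000    # relaxed polling when idle

def next_wakeup_interval(current_ms, packets_seen_last_wakeup):
    if packets_seen_last_wakeup > 0:
        # Burst detected: react quickly to keep delivery latency low.
        return MIN_INTERVAL_MS
    # Idle: double the sleep interval, capped at the energy-saving maximum.
    return min(current_ms * 2, MAX_INTERVAL_MS)

if __name__ == "__main__":
    interval = MAX_INTERVAL_MS
    traffic = [0, 0, 3, 5, 1, 0, 0, 0, 0]   # packets observed at each wake-up
    for seen in traffic:
        interval = next_wakeup_interval(interval, seen)
        print(f"saw {seen} packet(s) -> next wake-up in {interval} ms")
```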

34 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 92% related
Server: 79.5K papers, 1.4M citations, 91% related
Wireless: 133.4K papers, 1.9M citations, 90% related
Wireless sensor network: 142K papers, 2.4M citations, 90% related
Wireless network: 122.5K papers, 2.1M citations, 90% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227