
Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Journal ArticleDOI
TL;DR: The cloud radio access network (C-RAN) constitutes a promising architecture for next-generation systems and is an ideal platform for supporting network function virtualization (NFV), software-defined networking (SDN), and artificial intelligence (AI).
Abstract: The cloud radio access network (C-RAN) constitutes a promising architecture for next-generation systems. Beneficial centralized signal processing techniques can be realized under the C-RAN framework. Furthermore, given the recent rapid development of cloud computing, this architecture is an ideal platform for supporting network function virtualization (NFV), software-defined networking (SDN), and artificial intelligence (AI).

33 citations

Proceedings ArticleDOI
06 Jun 2021
TL;DR: FastEmit, proposed in this paper, applies latency regularization directly to the per-sequence probability when training transducer models and requires no alignment, making it well suited to sequence-level optimization of transducers for streaming ASR.
Abstract: Streaming automatic speech recognition (ASR) aims to emit each hypothesized word as quickly and accurately as possible. However, emitting fast without degrading quality, as measured by word error rate (WER), is highly challenging. Existing approaches, including Early and Late Penalties [1] and Constrained Alignments [2], [3], penalize emission delay by manipulating per-token or per-frame probability prediction in sequence transducer models [4]. While successful in reducing delay, these approaches suffer from significant accuracy regression and also require additional word alignment information from an existing model. In this work, we propose a sequence-level emission regularization method, named FastEmit, that applies latency regularization directly on per-sequence probability in training transducer models, and does not require any alignment. We demonstrate that FastEmit is more suitable to the sequence-level optimization of transducer models [4] for streaming ASR by applying it to various end-to-end streaming ASR networks, including RNN-Transducer [5], Transformer-Transducer [6], [7], ConvNet-Transducer [8] and Conformer-Transducer [9]. We achieve 150-300 ms latency reduction with significantly better accuracy over previous techniques on a Voice Search test set. FastEmit also improves streaming ASR accuracy from 4.4%/8.9% to 3.1%/7.5% WER, while reducing 90th-percentile latency from 210 ms to only 30 ms on LibriSpeech.

33 citations
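
The FastEmit abstract above works on the per-sequence transducer probability rather than on per-frame alignments. Below is a minimal NumPy sketch of that per-sequence forward pass over a toy transducer lattice, with an illustrative latency weight lambda applied to label-emission transitions; the lattice shapes, the log1p(lambda) weighting and all values are assumptions for illustration, not the paper's exact FastEmit formulation.

import numpy as np

def transducer_log_prob(log_blank, log_label, latency_lambda=0.0):
    """Forward DP for the per-sequence transducer log-probability.

    log_blank      : (T, U+1) log-probs of emitting blank at frame t with u labels done
    log_label      : (T, U)   log-probs of emitting target label u+1 at frame t
    latency_lambda : > 0 upweights label-emission paths, nudging earlier emission
                     (an illustrative stand-in for sequence-level latency regularization)
    """
    T, U_plus_1 = log_blank.shape
    U = U_plus_1 - 1
    alpha = np.full((T, U + 1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            stay = alpha[t - 1, u] + log_blank[t - 1, u] if t > 0 else -np.inf
            emit = -np.inf
            if u > 0:
                emit = alpha[t, u - 1] + log_label[t, u - 1] + np.log1p(latency_lambda)
            alpha[t, u] = np.logaddexp(stay, emit)
    return alpha[T - 1, U] + log_blank[T - 1, U]   # path ends with a final blank

# Toy usage with random (unnormalized) scores; lambda > 0 favors early-emission paths.
rng = np.random.default_rng(0)
T, U = 6, 3
log_blank = np.log(rng.uniform(0.1, 0.9, size=(T, U + 1)))
log_label = np.log(rng.uniform(0.1, 0.9, size=(T, U)))
print(transducer_log_prob(log_blank, log_label))                       # baseline term
print(transducer_log_prob(log_blank, log_label, latency_lambda=0.1))   # regularized variant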

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This paper presents an implementation of a convolutional turbo codec core based on innovative solutions for broadband turbo coding; implemented in a 0.18 µm CMOS technology, it yields a final throughput of up to 80.7 Mb/s.
Abstract: Turbo coding has reached the step in which its astonishing coding gain is already being proven in real applications. Moreover, its applicability to future broadband communications systems is starting to be investigated. In order to be useful in this domain, special turbo codec architectures that cope with low latency, high throughput, low power consumption and high flexibility are needed. This paper presents an implementation of a convolutional turbo codec core based on innovative solutions for those requirements. The combination of a systematic data storage and transfer optimization with high- and low-level architectural solutions yields a final throughput of up to 80.7 Mb/s, a decoding latency of 10 µs and a power consumption of less than 50 nJ/bit. The 14.7 mm² full-duplex, full-parallel core, implemented in a 0.18 µm CMOS technology, is a complete, flexible solution for broadband turbo coding.

33 citations
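
A quick back-of-envelope check of the figures quoted in the abstract (the three input numbers are taken from it; the Little's-law style "bits in flight" reading is our own illustrative interpretation):

throughput_bps = 80.7e6    # reported throughput: 80.7 Mb/s
energy_per_bit = 50e-9     # reported energy: < 50 nJ/bit (treated as an upper bound)
latency_s      = 10e-6     # reported decoding latency: 10 us

power_w        = throughput_bps * energy_per_bit   # <= ~4.0 W at full rate
bits_in_flight = throughput_bps * latency_s        # ~807 bits concurrently in the decode pipeline
print(f"decoder power  <= {power_w:.2f} W")
print(f"bits in flight ~  {bits_in_flight:.0f}")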

Journal ArticleDOI
TL;DR: This article presents the design of a deterministic IIoT core network consisting of many simple deterministic packet switches configured by an SDN control plane, and argues that a future converged Industrial Internet of Things should support both best-effort and deterministic services, with very low latency and jitter.
Abstract: A convergence is occurring in the networking world. Industrial networks currently provide deterministic services in robotic factories and aircraft, while the best-effort Internet of Things provides best-effort services for consumers. We argue that a convergence should occur, and that a future converged Industrial Internet of Things (IIoT) should support both best-effort and deterministic services, with very low latency and jitter. This article presents the design of a deterministic IIoT core network consisting of many simple deterministic packet switches configured by an SDN control plane. The use of deterministic communications can reduce router buffer sizes by a factor of ≥ 1000, and can reduce end-to-end latencies to the speed of light in fiber. A speed-of-light deterministic core network can have a profound impact on virtually all consumer services such as multimedia distribution, e-commerce, and cloud computing or gaming systems. Highly aggregated video streams can be delivered over a deterministic virtual network with very high link utilization (≤ 100 percent), very low packet jitter (≤ 10 μs), and zero congestion. In addition to improving consumer services, a converged deterministic IIoT core network can save billions of dollars per year as a result of significantly improved network utilization and energy efficiency.

33 citations
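
To make the "deterministic service" idea concrete, here is a small Python sketch of a fixed slot schedule on one outgoing link: each flow owns pre-assigned slots in a repeating cycle, so its delay depends only on the slot map, not on other traffic. The flow names, the 10-microsecond slot and the 8-slot cycle are invented for illustration and are not the paper's SDN design.

SLOT_US = 10    # slot duration in microseconds (assumed)
CYCLE   = 8     # slots per repeating cycle (assumed)

# SDN-style controller decision: a static slot map for one outgoing link.
slot_map = {0: "video-A", 1: "video-B", 2: "iiot-ctrl", 3: "video-A",
            4: "video-B", 5: "iiot-ctrl", 6: "best-effort", 7: "best-effort"}

def departure_time_us(flow, arrival_us):
    """Earliest deterministic departure for a packet of `flow` arriving at arrival_us."""
    slot = arrival_us // SLOT_US + 1          # wait for the next slot boundary
    while slot_map[slot % CYCLE] != flow:     # walk forward to the flow's reserved slot
        slot += 1
    return slot * SLOT_US

# Worst-case delay is bounded by the gap between the flow's reserved slots,
# independent of how much traffic the other flows offer (the "zero congestion" property).
for arrival in (3, 57, 123):
    print(f"iiot-ctrl packet arriving at {arrival} us departs at "
          f"{departure_time_us('iiot-ctrl', arrival)} us")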

Proceedings ArticleDOI
28 Jun 2009
TL;DR: Cyclo-join, presented in this paper, is an efficient join algorithm based on continuously pumping data through a ring-structured network; it exploits the resources of all CPUs and distributed main memory available in the network to process queries of arbitrary shape and datasets of arbitrary size.
Abstract: By leveraging modern networking hardware (RDMA-enabled network cards), we can shift priorities in distributed database processing significantly. Complex and sophisticated mechanisms to avoid network traffic can be replaced by a scheme that takes advantage of the bandwidth and low latency offered by such interconnects. We illustrate this phenomenon with cyclo-join, an efficient join algorithm based on continuously pumping data through a ring-structured network. Our approach is capable of exploiting the resources of all CPUs and distributed main-memory available in the network for processing queries of arbitrary shape and datasets of arbitrary size.

33 citations
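
A toy, single-process simulation of the ring idea described above, assuming both relations are pre-partitioned across the nodes: each node keeps its fragment of R resident while the fragments of S hop one node per round, so after N rounds every S tuple has met every R fragment. The node count, tuples and join-key layout are invented for illustration; the actual system pumps fragments over RDMA interconnects.

def cyclo_join(r_fragments, s_fragments, key_r=0, key_s=0):
    """Simulate a ring join: R fragments stay put, S fragments rotate around the ring."""
    n = len(r_fragments)
    assert n == len(s_fragments)
    results = []
    s_current = list(s_fragments)                 # S fragment currently visiting each node
    for _round in range(n):                       # n hops: every S fragment visits every node
        for node in range(n):
            # local hash join of the resident R fragment with the visiting S fragment
            index = {}
            for r in r_fragments[node]:
                index.setdefault(r[key_r], []).append(r)
            for s in s_current[node]:
                for r in index.get(s[key_s], []):
                    results.append(r + s)
        # ship every visiting S fragment one hop along the ring
        s_current = [s_current[(i - 1) % n] for i in range(n)]
    return results

# Three "nodes", each holding one fragment of R and one of S (joined on the first field).
R = [[(1, "a"), (2, "b")], [(3, "c")], [(4, "d"), (1, "e")]]
S = [[(1, "x")], [(2, "y"), (4, "z")], [(3, "w")]]
print(sorted(cyclo_join(R, S)))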


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations (92% related)
Server: 79.5K papers, 1.4M citations (91% related)
Wireless: 133.4K papers, 1.9M citations (90% related)
Wireless sensor network: 142K papers, 2.4M citations (90% related)
Wireless network: 122.5K papers, 2.1M citations (90% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227