Topic
Latency (engineering)
About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.
Papers published on a yearly basis
Papers
01 Jan 2017 · 16 citations
27 May 2020 · TL;DR: A new adaptive bitrate (ABR) scheme, Stallion (STAndard Low-LAtency vIdeo cONtrol), shows a 1.8x increase in bitrate and a 4.3x reduction in the number of stalls.
Abstract: As video traffic continues to dominate the Internet, interest in near-second low-latency streaming has increased. Existing low-latency streaming platforms rely on tens of seconds of video in the buffer to offer a seamless experience. Striving for near-second latency requires the receiver to make quick decisions regarding the download bitrate and the playback speed. To cope with these challenges, we design a new adaptive bitrate (ABR) scheme, Stallion, for STAndard Low-LAtency vIdeo cONtrol. Stallion uses a sliding window to measure the mean and standard deviation of both the bandwidth and the latency. We evaluate Stallion and compare it to the standard DASH DYNAMIC algorithm over a variety of networking conditions. Stallion shows a 1.8x increase in bitrate and a 4.3x reduction in the number of stalls.
16 citations
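The sliding-window idea behind Stallion can be sketched as follows. This is a minimal illustration, not the paper's implementation: the safety factor `k`, the window length, and the bitrate ladder are all assumptions made here for the example.

```python
from collections import deque
from statistics import mean, stdev

class SlidingWindowABR:
    """Sketch of a Stallion-style ABR rule: keep a sliding window of
    throughput samples and pick the highest ladder bitrate not exceeding
    mean - k * stdev. The ladder and k are illustrative assumptions."""

    def __init__(self, window=8, k=1.0, ladder=(300, 750, 1500, 3000, 6000)):
        self.samples = deque(maxlen=window)  # recent throughput samples (kbps)
        self.k = k
        self.ladder = sorted(ladder)

    def observe(self, kbps):
        self.samples.append(kbps)

    def choose_bitrate(self):
        if not self.samples:
            return self.ladder[0]
        mu = mean(self.samples)
        sigma = stdev(self.samples) if len(self.samples) > 1 else 0.0
        safe = mu - self.k * sigma  # discount the mean by observed variability
        eligible = [b for b in self.ladder if b <= safe]
        return eligible[-1] if eligible else self.ladder[0]
```

Discounting the mean by the standard deviation is what lets a near-second-latency player stay conservative on jittery links without permanently sacrificing bitrate on stable ones.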
19 May 2013 · TL;DR: A fully parallel 64K-point radix-4⁴ FFT processor is proposed that significantly reduces intermediate memory at the cost of increased hardware complexity, and achieves reduced latency with comparable throughput and area.
Abstract: In this paper we propose a fully parallel 64K-point radix-4⁴ FFT processor. The radix-4⁴ parallel unrolled architecture uses a novel radix-4 butterfly unit which takes all four inputs in parallel and can selectively produce one of the four outputs. The radix-4⁴ block can take all 256 inputs in parallel and can use the select control signals to generate one of the 256 outputs. The resultant 64K-point FFT processor shows a significant reduction in intermediate memory but with increased hardware complexity. Compared to the state-of-the-art implementation [5], our architecture shows reduced latency with comparable throughput and area. The 64K-point FFT architecture was synthesized in a 130nm CMOS technology, resulting in a throughput of 1.4 GSPS and a latency of 47.7μs at a maximum clock frequency of 350MHz. Compared to [5], the latency is reduced by 303μs with a 50.8% reduction in area.
16 citations
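The selective radix-4 butterfly described in the abstract can be sketched numerically. This computes one chosen output of a 4-point DFT from all four inputs; inter-stage twiddle factors of the full radix-4⁴ pipeline are omitted here for brevity, and this is a behavioral sketch, not the paper's hardware design.

```python
import cmath

def radix4_butterfly_output(x, k):
    """Behavioral sketch of a selective radix-4 butterfly: take all four
    inputs x[0..3] in parallel and produce only the selected output X[k]
    of the 4-point DFT, X[k] = sum_n x[n] * W4^(n*k) with W4 = e^(-j*2*pi/4)."""
    assert len(x) == 4 and k in (0, 1, 2, 3)
    w4 = cmath.exp(-2j * cmath.pi / 4)  # principal 4th root of unity, -j
    return sum(x[n] * w4 ** (n * k) for n in range(4))
```

Producing only the selected output instead of all four is what lets the unrolled 64K-point design avoid storing every intermediate result, which is the source of the intermediate-memory reduction the abstract claims.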
TL;DR: Simulation results show that DEFT can provide a significant goodput gain and application-level E2E delay reduction for the range of interest.
Abstract: Multipath TCP (MPTCP) is a promising solution that can provide high end-to-end throughput in fifth generation (5G) mobile networks. Many next-generation applications will require high throughput and low latency simultaneously, but the current MPTCP congestion control algorithms cannot reliably satisfy this requirement. In this paper, a novel MPTCP congestion control scheme named delay-equalized FAST (DEFT) is proposed to achieve high throughput and low end-to-end (E2E) delay in 5G networks. First, to achieve high throughput, DEFT includes a novel window control algorithm that responds quickly when the state of the millimeter-wave (mmWave) link changes from line-of-sight (LOS) to non-LOS (NLOS) and vice versa. Second, to achieve low E2E delay, DEFT includes a delay-equalizing algorithm which minimizes the additional reordering delay in the receive buffer. The performance of DEFT was evaluated with ns-3 simulations and compared against wVegas, Balia, and delay-adapted LIA. Simulation results show that DEFT provides a significant goodput gain and application-level E2E delay reduction over the range of interest.
16 citations
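The delay-equalizing idea can be sketched in a few lines. This is only an illustration of the principle (matching subflow delays to reduce receive-buffer reordering), not DEFT's actual algorithm: the data structure, step size, and update rule here are all assumptions.

```python
def equalize_delays(subflows, step=1):
    """Sketch of delay equalization across MPTCP subflows: shrink the
    congestion window of any subflow whose delay exceeds the smallest
    delay among the subflows, so packets on different paths arrive closer
    together and reordering delay in the receive buffer shrinks.
    `subflows` maps a subflow name to {'cwnd': segments, 'delay': ms};
    the fixed decrement `step` is an illustrative assumption."""
    min_delay = min(f["delay"] for f in subflows.values())
    for f in subflows.values():
        if f["delay"] > min_delay and f["cwnd"] > 1:
            f["cwnd"] -= step  # drain the slower path's queue toward equal delay
    return subflows
```

The intuition: when one path (e.g. a WiFi subflow) queues packets far longer than another (e.g. an mmWave subflow in LOS), in-order delivery stalls at the receiver until the slow path catches up; equalizing the per-path delays bounds that stall.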
01 Aug 2019 · TL;DR: This paper captures packets from Narrow Band-Internet of Things (NB-IoT) transmission, Unmanned Aerial Vehicle (UAV) control, 4K video, and Facebook access to emulate mMTC, URLLC, eMBB, and Internet traffic in 5G.
Abstract: 5G supports new classes of service, including enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communications (URLLC), and massive Machine Type Communications (mMTC). The Quality of Service (QoS) requirements of these 5G service types differ. In this paper, we capture packets from Narrow Band-Internet of Things (NB-IoT) transmission, Unmanned Aerial Vehicle (UAV) control, 4K video, and Facebook access to emulate mMTC, URLLC, eMBB, and Internet traffic in 5G. With the captured packets, we investigate using machine learning to classify the packets based on payload information. Specifically, a Convolutional Neural Network (CNN) model is used to classify the application packets into the appropriate groups. In addition, this paper studies the effects of parameters such as the kernel number, kernel size, pooling window size, dropout rate, and payload length to find the values that yield high accuracy and low latency.
16 citations
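The parameters studied in the abstract (kernel size, pooling window, payload length) map directly onto the CNN's front end, which can be sketched without a deep-learning framework. This is a minimal illustration of one 1-D convolution over raw payload bytes followed by max pooling; the kernel values and sizes here are invented for the example and are not from the paper.

```python
def conv1d_maxpool(payload, kernel, pool=2):
    """Sketch of a payload-based CNN front end: a single valid 1-D
    convolution (cross-correlation, as in most DL frameworks) over raw
    payload bytes, followed by non-overlapping max pooling of width `pool`.
    Kernel length and pooling window are the tunable parameters the paper
    studies; the values used here are illustrative."""
    conv = [sum(payload[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(len(payload) - len(kernel) + 1)]
    return [max(conv[i:i + pool]) for i in range(0, len(conv) - pool + 1, pool)]
```

Shorter kernels and larger pooling windows shrink the feature map, which is the lever the paper pulls to trade classification accuracy against inference latency.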