Topic
Latency (engineering)
About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications have been published within this topic, receiving 115,409 citations. The topic is also known as: lag.
Papers
21 Oct 2019
TL;DR: Provides an overview of grant-free random access in 5G New Radio and presents two reliability-enhancing solutions that yield significant gains in both reliability and resource efficiency.
Abstract: Ultra-reliable low latency communication requires innovative resource management solutions that can guarantee high reliability at low latency. Grant-free random access, where channel resources are accessed without undergoing assignment through a handshake process, is proposed in 5G New Radio as an important latency-reducing solution. However, this comes at an increased likelihood of collisions resulting from uncoordinated channel access. Novel reliability enhancement techniques are therefore needed. This article provides an overview of grant-free random access in 5G New Radio, focusing on the ultra-reliable low latency communication service class, and presents two reliability-enhancing solutions. The first proposes retransmissions over shared resources, whereas the second proposal incorporates grant-free transmission with non-orthogonal multiple access, where overlapping transmissions are resolved through the use of advanced receivers. Both proposed solutions result in significant performance gains, in terms of reliability as well as resource efficiency. For example, the proposed non-orthogonal multiple access scheme can support a normalized load of more than 1.5 users/slot at packet loss rates of ~10⁻⁵, a significant improvement over conventional grant-free schemes like slotted ALOHA.
95 citations
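The slotted-ALOHA baseline the abstract compares against can be illustrated with a short calculation. This is the textbook model of uncoordinated channel access, not the paper's proposed scheme; the load values below are illustrative:

```python
import math

# Slotted ALOHA baseline: a packet succeeds only if no other user
# transmits in the same slot. With Poisson-distributed offered load
# G (packets/slot), P(success) = e^(-G), so the packet loss rate
# without any collision-resolution mechanism is 1 - e^(-G).
def aloha_loss(load_g: float) -> float:
    return 1.0 - math.exp(-load_g)

# Collisions dominate well before 1.5 packets/slot -- nowhere near
# the ~1e-5 loss targets of URLLC, which motivates retransmission
# pooling and NOMA-style collision resolution.
for g in (0.1, 0.5, 1.5):
    print(f"load {g:.1f} pkt/slot -> loss {aloha_loss(g):.3f}")
```

Even a modest load of 0.5 packets/slot already loses roughly 39% of packets under this baseline, which makes clear why uncoordinated access alone cannot reach URLLC reliability.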
TL;DR: Demonstrates a novel effect of dopamine on saccadic latency, implying that it influences the neural decision process itself, and discusses the effects of L-dopa on neural decision making, where it is postulated to increase the criterion level of evidence required before the decision to move is made.
Abstract: Parkinson's disease (PD) is associated with a loss of central dopaminergic pathways in the brain leading to an abnormality of movement, including saccades. In PD, analysis of saccadic latency distributions, rather than mean latencies, can provide much more information about how the neural decision process that precedes movement is affected by disease or medication. Subject to the constraints of intersubject variation and reproducibility, latency distribution may represent an attractive potential biomarker of PD. Here we report two studies that provide information about these parameters, and demonstrate a novel effect of dopamine on saccadic latency, implying that it influences the neural decision process itself. We performed a detailed cross-sectional study of saccadic latency distributions during a simple step task in 22 medicated patients and 27 age-matched controls. This revealed high intersubject variability and an overlap of PD and control distributions. A second study was undertaken on a different population specifically to investigate the effects of dopamine on saccadic latency distributions in 15 PD patients. L-dopa was found to prolong latency, although the magnitude of the effect varied between subjects. The implications of these observations for the use of saccadic latency distributions as a potential biomarker of PD are discussed, as are the effects of L-dopa on neural decision making, where it is postulated to increase the criterion level of evidence required before the decision to move is made.
95 citations
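The decision process the abstract describes, a signal accumulating evidence until it reaches a criterion level, can be sketched numerically as a rise-to-threshold model in the style of LATER. This is a hedged toy, not the paper's analysis: the rate parameters `mu` and `sigma` and the thresholds are invented for illustration.

```python
import random

# Toy rise-to-threshold model of saccadic latency: a decision signal
# climbs at a normally distributed rate r until it hits criterion
# threshold S, so latency = S / r. Raising S -- as the abstract
# postulates L-dopa does -- prolongs latencies.
def sample_latencies(threshold, mu=5.0, sigma=1.0, n=10_000, seed=0):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        r = rng.gauss(mu, sigma)
        if r > 0:                      # skip non-decision (negative-rate) draws
            out.append(threshold / r)
    return out

base = sample_latencies(threshold=1.0)
raised = sample_latencies(threshold=1.3)   # higher criterion level of evidence
print(sum(base) / len(base), sum(raised) / len(raised))
```

With a fixed seed the two runs see identical rate draws, so the only difference is the threshold, which cleanly isolates the postulated effect: the higher criterion shifts the whole latency distribution rightward.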
26 Feb 2019
TL;DR: Demonstrates that Shinjuku provides significant tail-latency and throughput improvements over IX and ZygOS across a wide range of workload scenarios, achieving up to 6.6× higher throughput and 88% lower tail latency.
Abstract: The recently proposed dataplanes for microsecond-scale applications, such as IX and ZygOS, use non-preemptive policies to schedule requests to cores. For the many real-world scenarios where request service times follow distributions with high dispersion or a heavy tail, they allow short requests to be blocked behind long requests, which leads to poor tail latency.
Shinjuku is a single-address-space operating system that uses hardware support for virtualization to make preemption practical at the microsecond scale. This allows Shinjuku to implement centralized scheduling policies that preempt requests as often as every 5 µs and work well for both light- and heavy-tailed request service time distributions. We demonstrate that Shinjuku provides significant tail latency and throughput improvements over IX and ZygOS for a wide range of workload scenarios. For the case of a RocksDB server processing both point and range queries, Shinjuku achieves up to 6.6× higher throughput and 88% lower tail latency.
94 citations
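The head-of-line blocking problem, and the benefit of microsecond-scale preemption, can be shown with a toy single-core scheduler. The workload mix below is invented for illustration; only the 5 µs quantum echoes the abstract, and real dataplanes are far more involved than this sketch.

```python
from collections import deque

# One core, all requests arrive at t=0 in order: one 100 us request
# followed by ten 1 us requests (a heavy-tailed mix). Times in us.
def run(quantum, services):
    q = deque(enumerate(services))
    t, finish = 0.0, {}
    while q:
        i, rem = q.popleft()
        slice_ = rem if quantum is None else min(rem, quantum)
        t += slice_
        if rem - slice_ > 1e-9:
            q.append((i, rem - slice_))   # preempted: back of the queue
        else:
            finish[i] = t
    return finish

services = [100.0] + [1.0] * 10
fcfs = run(None, services)   # run-to-completion (non-preemptive)
rr = run(5.0, services)      # 5 us quantum, round-robin preemption
print(max(fcfs[i] for i in range(1, 11)))   # worst short-request latency
print(max(rr[i] for i in range(1, 11)))
```

Under run-to-completion every short request waits out the full 100 µs request (worst short latency 110 µs); with a 5 µs quantum the last short finishes at 15 µs, at the cost of slightly delaying the long request. This is the trade-off that preemptive centralized scheduling exploits.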
03 Nov 2014
TL;DR: This paper describes PriorityMeister -- a system that employs a combination of per-workload priorities and rate limits to provide tail latency QoS for shared networked storage, even with bursty workloads.
Abstract: Meeting service level objectives (SLOs) for tail latency is an important and challenging open problem in cloud computing infrastructures. The challenges are exacerbated by burstiness in the workloads. This paper describes PriorityMeister -- a system that employs a combination of per-workload priorities and rate limits to provide tail latency QoS for shared networked storage, even with bursty workloads. PriorityMeister automatically and proactively configures workload priorities and rate limits across multiple stages (e.g., a shared storage stage followed by a shared network stage) to meet end-to-end tail latency SLOs. In real system experiments and under production trace workloads, PriorityMeister outperforms most recent reactive request scheduling approaches, with more workloads satisfying latency SLOs at higher latency percentiles. PriorityMeister is also robust to mis-estimation of underlying storage device performance and contains the effect of misbehaving workloads.
94 citations
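The two mechanisms PriorityMeister combines, per-workload rate limits and priorities, can be sketched as a token bucket feeding a strict-priority queue. The rates, bursts, and workload names below are illustrative assumptions, not values from the system's automatic configuration:

```python
import heapq

# Token bucket: bounds a workload's burstiness by admitting a request
# only when a token is available; tokens refill at a fixed rate.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def admit(self, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False              # over its rate limit: request held back

buckets = {"batch": TokenBucket(rate=10.0, burst=2),
           "interactive": TokenBucket(rate=100.0, burst=10)}
prio = {"interactive": 0, "batch": 1}   # lower number = served first

ready = []                              # (priority, arrival time, workload)
for t, wl in [(0.00, "batch"), (0.00, "batch"), (0.00, "batch"),
              (0.01, "interactive")]:
    if buckets[wl].admit(t):
        heapq.heappush(ready, (prio[wl], t, wl))

print([wl for _, _, wl in sorted(ready)])
```

The third batch request exceeds its burst allowance and is held back, and the admitted interactive request is served ahead of the earlier batch requests: rate limits contain bursty workloads while priorities protect latency-sensitive ones, which is the combination the paper configures automatically end to end.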
19 Mar 2014
TL;DR: By leveraging the server push feature in HTTP 2.0, this work avoids the request explosion problem while lowering latency by reducing the segment duration; the mechanism is implemented in an MPEG Dynamic Adaptive Streaming over HTTP (DASH) prototype.
Abstract: Hypertext Transfer Protocol (HTTP) has been widely adopted as a scalable and efficient protocol for streaming video content over the Internet. HTTP streaming clients receive a manifest file, download the referenced video segments over HTTP, and play them back seamlessly, emulating video streaming. This introduces at least one segment duration of latency, making HTTP streaming unsuitable for live video streaming use cases that require low latencies. The straightforward way to lower live latency, reducing the segment duration, leads to an explosion in the number of HTTP requests, as well as inefficient deployment of assets in HTTP caches. To solve this problem, we develop a low-latency live video streaming technique over HTTP 2.0. In particular, we employ the new server push feature in HTTP 2.0 to stream the live video actively from the web server to the client as soon as the video segments become available. We implement this server-push-based low-latency mechanism in an MPEG Dynamic Adaptive Streaming over HTTP (DASH) prototype. Our experimental results indicate performance gains in live latency using the server push scheme. More importantly, by leveraging the server push feature in HTTP 2.0, we are able to avoid the request explosion problem while lowering latency by reducing the segment duration.
94 citations
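The trade-off the abstract describes is simple arithmetic: shorter segments lower the one-segment-duration floor on live latency but multiply the request rate each client imposes, which is the "request explosion" that server push sidesteps by pushing segments over a single request. The client count and segment durations below are assumed for illustration:

```python
# One GET per client per segment in pull-based HTTP streaming, so the
# server-side request rate scales as clients / segment_duration, while
# the minimum live latency is at least one segment duration.
clients = 10_000
for seg_dur in (10.0, 2.0, 0.5):        # seconds per segment
    pulls_per_sec = clients / seg_dur
    print(f"{seg_dur:>4}s segments: min live latency >= {seg_dur}s, "
          f"{pulls_per_sec:,.0f} requests/s at the server")
```

Going from 10 s to 0.5 s segments cuts the latency floor 20× but raises the request rate 20× for the same audience, from 1,000 to 20,000 requests/s in this sketch.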