Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications have been published on this topic, receiving 115,409 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
18 Apr 2016
TL;DR: This paper proposes PSLO, a framework that supports the Xth percentile latency SLO and the throughput SLO in a consolidated VM environment by precisely coordinating the level of IO concurrency and the arrival rate of each VM issue queue; a prototype is designed and implemented in a real Xen-based VM consolidation environment.
Abstract: It is desirable but challenging to simultaneously support a latency SLO at a pre-defined percentile, i.e., the Xth percentile latency SLO, and a throughput SLO for consolidated VM storage. Ensuring the Xth percentile latency contributes to accurately differentiating service levels in the metric of application-level latency SLO compliance, especially for applications built on multiple VMs. However, Xth percentile latency SLO and throughput SLO enforcement are opposite sides of the same coin due to their conflicting requirements on the level of IO concurrency. To address this challenge, this paper proposes PSLO, a framework supporting the Xth percentile latency and throughput SLOs in a consolidated VM environment by precisely coordinating the level of IO concurrency and the arrival rate for each VM issue queue. Notably, PSLO can take full advantage of the available IO capacity allowed by the SLO constraints to improve throughput or reduce latency on a best-effort basis. We design and implement a PSLO prototype in a real VM consolidation environment created by Xen. Our extensive trace-driven prototype evaluation shows that our system is able to optimize the Xth percentile latency and throughput for consolidated VMs under SLO constraints.
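The Xth percentile latency SLO at the heart of this paper can be made concrete with a short sketch. This is a generic illustration with hypothetical names, not PSLO's implementation:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample value such that at
    least pct percent of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def meets_slo(samples, pct, target):
    """True if the pct-th percentile latency is within the SLO target."""
    return percentile(samples, pct) <= target
```

For example, with per-request latencies in milliseconds, `meets_slo(latencies, 95, 10)` checks a "95th percentile under 10 ms" SLO; enforcing such a target while also meeting a throughput SLO is the tension the paper addresses.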

45 citations

Journal ArticleDOI
TL;DR: This study proposes a novel reinforcement-learning-based GC method that significantly reduces the long-tail latency by 29--36% at the 99.99th percentile compared to state-of-the-art schemes.
Abstract: NAND flash memory is widely used in various systems, ranging from real-time embedded systems to enterprise server systems. Because the flash memory has erase-before-write characteristics, we need flash-memory management methods, i.e., address translation and garbage collection. In particular, garbage collection (GC) incurs long-tail latency, e.g., 100 times higher latency than the average latency at the 99th percentile. Thus, real-time and quality-critical systems fail to meet the given requirements such as deadline and QoS constraints. In this study, we propose a novel method of GC based on reinforcement learning. The objective is to reduce the long-tail latency by exploiting the idle time in the storage system. To improve the efficiency of the reinforcement learning-assisted GC scheme, we present new optimization methods that exploit fine-grained GC to further reduce the long-tail latency. The experimental results with real workloads show that our technique significantly reduces the long-tail latency by 29--36% at the 99.99th percentile compared to state-of-the-art schemes.
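The reinforcement-learning angle can be sketched as a toy tabular Q-learning loop that decides, per idle window, whether to run GC now or defer it. The state buckets, actions, and hyperparameters below are illustrative assumptions, not the paper's design:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning for GC scheduling: in each idle window the
# controller chooses to defer garbage collection (0) or run it now (1).
ACTIONS = (0, 1)

def bucket(idle_ms):
    """Discretize observed idle time into a small state space."""
    return min(idle_ms // 10, 3)  # buckets: 0-9, 10-19, 20-29, 30+ ms

def choose(q, state, eps=0.1):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
```

A plausible reward shaping (again, an assumption) would penalize GC work that collides with incoming host I/O, since that is what inflates the tail, and reward GC that is fully absorbed by idle time.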

45 citations

Proceedings ArticleDOI
27 Jun 2014
TL;DR: This paper investigates the effects of profiling and redundancy on latency when a client has a choice of multiple servers to connect to, using measurements from real experiments and simulations to help designers determine how many servers and which servers to select to reduce latency.
Abstract: As servers are placed in diverse locations in networked services today, it becomes vital to direct a client's request to the best server(s) to achieve both high performance and reliability. In this distributed setting, non-negligible latency and server availability become two major concerns, especially for highly-interactive applications. Profiling latencies and sending redundant data have been investigated as solutions to these issues. The notion of a cloudlet in mobile-cloud computing is also relevant in this context, as the cloudlet can supply these solution approaches on behalf of the mobile. In this paper, we investigate the effects of profiling and redundancy on latency when a client has a choice of multiple servers to connect to, using measurements from real experiments and simulations. We devise and test different server selection and data partitioning strategies in terms of profiling and redundancy. Our key findings are summarized as follows. First, intelligent server selection algorithms help find the optimal group of servers that minimize latency with profiling. Second, we can achieve good performance with relatively simple approaches using redundancy. Our analysis of profiling and redundancy provides insight to help designers determine how many servers and which servers to select to reduce latency.
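Under redundancy, a request sent to several servers completes as soon as the fastest one responds, so a group's effective latency is the minimum across its members. A minimal sketch of picking a group from profiled latency samples (hypothetical helper names, brute force for clarity):

```python
import itertools
import statistics

def expected_min_latency(profiles, subset):
    """Estimate effective latency when a request goes redundantly to every
    server in `subset`: the fastest response wins, so each trial takes the
    minimum across the chosen servers' profiled latency samples."""
    trials = min(len(profiles[s]) for s in subset)
    per_trial = [min(profiles[s][i] for s in subset) for i in range(trials)]
    return statistics.mean(per_trial)

def best_subset(profiles, k):
    """Brute-force the k-server group minimizing estimated latency."""
    servers = list(profiles)
    return min(itertools.combinations(servers, k),
               key=lambda sub: expected_min_latency(profiles, sub))
```

This mirrors the paper's trade-off: profiling supplies the per-server samples, while redundancy (larger k) lowers the expected minimum at the cost of extra traffic.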

45 citations

Proceedings ArticleDOI
05 Jun 2006
TL;DR: A new sleep schedule (Q-MAC) is proposed for query-based sensor networks that provides minimum end-to-end latency with energy-efficient data transmission and reduces energy consumption by activating neighbor nodes only when packets (query and data) are transmitted.
Abstract: Energy management in sensor networks is crucial to prolong the network lifetime. Though existing sleep scheduling algorithms save energy, they lead to a large increase in end-to-end latency. We propose a new sleep schedule (Q-MAC) for query-based sensor networks that provides minimum end-to-end latency with energy-efficient data transmission. Whenever there is no query, the radios of the nodes sleep more using a static schedule. Whenever a query is initiated, the sleep schedule is changed dynamically. Based on the destination's location and packet transmission time, we predict the data arrival time and retain the radio of a particular node, which has forwarded the query packet, in the active state until the data packets are forwarded. Since our dynamic schedule alters the active period of the intermediate nodes in advance by predicting the packet arrival time, data is transmitted to the sink with low end-to-end latency. The objectives of our protocol are to (1) minimize the end-to-end latency by alerting the intermediate nodes in advance using the dynamic schedule and (2) reduce energy consumption by activating neighbor nodes only when packets (query and data) are transmitted. Simulation results show that Q-MAC performs better than S-MAC by reducing the latency up to 80% with minimum energy consumption.
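The arrival-time prediction behind the dynamic schedule can be sketched under simple assumptions (a fixed per-hop transmission time and hypothetical function names; the paper's actual predictor may differ):

```python
def predict_data_arrival(query_time, hops_to_dest, per_hop_ms, processing_ms=0):
    """Predict when the data packet returns to this node: the query must
    travel the remaining hops to the destination and the data must travel
    back, plus any processing delay at the destination."""
    round_trip = 2 * hops_to_dest * per_hop_ms
    return query_time + round_trip + processing_ms

def active_until(query_time, hops_to_dest, per_hop_ms, guard_ms=5):
    """Keep the radio awake until the predicted arrival plus a guard
    interval, instead of sleeping on the static schedule."""
    return predict_data_arrival(query_time, hops_to_dest, per_hop_ms) + guard_ms
```

The point of the prediction is that a forwarding node can stay awake only for this window rather than for the whole duty cycle, which is how Q-MAC cuts latency without giving back the energy savings.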

45 citations

Journal ArticleDOI
TL;DR: Results show that the latency of AP as a function of SPL remains constant regardless of the pattern and degree of hair-cell loss in the cochlea, indicating that the experimental hypothesis cannot be rejected.
Abstract: An experimental hypothesis was derived based on the findings of the clinical simultaneous binaural median plane lateralization test and the literature review of the latency of action potentials (AP). It states that the latency of action potentials as a function of absolute signal intensity (SPL) in cochleae with different degrees of hair‐cell losses is identical to that in normal cochleae at suprathreshold levels for signals with a given spectral content and a fixed rise time. The pathological cochleae were created by injections of kanamycin in guinea pigs and were monitored histopathologically, utilizing the surface preparation technique. The results show that the latency of AP as a function of SPL remains constant regardless of the pattern and degree of hair‐cell loss in the cochlea, and they consequently prove that the hypothesis of this experiment cannot be rejected. For a signal with a given frequency and a fixed rise time, the latency of the onset AP apparently depends only on the magnitude of the displacement of the basilar membrane regardless of the overall magnitudes of CM or AP.

44 citations


Network Information
Related Topics (5)
- The Internet: 213.2K papers, 3.8M citations, 75% related
- Node (networking): 158.3K papers, 1.7M citations, 75% related
- Wireless: 133.4K papers, 1.9M citations, 74% related
- Server: 79.5K papers, 1.4M citations, 74% related
- Network packet: 159.7K papers, 2.2M citations, 74% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022       2
2021     485
2020     529
2019     533
2018     500
2017     405