Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications have been published within this topic, receiving 115,409 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
01 Apr 2014
TL;DR: TOFEC, as discussed by the authors, combines erasure coding, parallel connections to the storage cloud, and limited chunking to significantly improve the delay performance of uploading data to and downloading data from cloud storage.
Abstract: Our paper presents solutions that use erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing the object into a few smaller segments) together to significantly improve the delay performance of uploading and downloading data to and from cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks with increased size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and preventing queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5× lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3× as many requests. Index Terms—FEC, Cloud storage, Queueing, Delay
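The workload-adaptive chunking policy described in this abstract can be sketched in a few lines. The snippet below is a minimal illustration only: the queue-length signal, the thresholds, and the chunk counts are assumptions for the sketch, not parameters from TOFEC, and the erasure-coding (FEC) step is omitted.

```python
# Minimal sketch of TOFEC-style adaptive chunking (illustrative only; the
# thresholds, queue-length signal, and chunk counts below are assumptions,
# not parameters from the paper).

def choose_chunking(queue_length, max_chunks=8, light_threshold=4, heavy_threshold=16):
    """Pick the number of chunks (and hence parallel connections) per request.

    Light workload -> many small chunks, more parallel connections (lower delay).
    Heavy workload -> fewer, larger chunks, fewer connections (less overhead,
                      higher throughput, avoids queueing delay).
    """
    if queue_length <= light_threshold:
        return max_chunks                  # aggressive chunking under light load
    if queue_length >= heavy_threshold:
        return 1                           # fall back to whole-object transfers
    # Back off the chunking level linearly between the two thresholds.
    span = heavy_threshold - light_threshold
    frac = (heavy_threshold - queue_length) / span
    return max(1, round(frac * max_chunks))


def split_into_chunks(blob: bytes, n_chunks: int) -> list[bytes]:
    """Divide an object into n_chunks roughly equal segments; the erasure-coded
    parity chunks (the FEC part of TOFEC) are omitted here for brevity."""
    size = -(-len(blob) // n_chunks)       # ceiling division
    return [blob[i:i + size] for i in range(0, len(blob), size)]


if __name__ == "__main__":
    obj = bytes(1_000_000)
    for q in (0, 8, 20):
        k = choose_chunking(q)
        print(f"queue={q:2d} -> {k} chunk(s) of ~{len(split_into_chunks(obj, k)[0])} bytes")
```

The real system also generates coded parity chunks so that a request completes once any sufficient subset of the parallel transfers finishes; only the workload-adaptive choice of chunking level is shown here.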

93 citations

Journal ArticleDOI
26 Feb 2015-Cell
TL;DR: It is proposed that latency is an evolutionary "bet-hedging" strategy whose frequency has been optimized to maximize lentiviral transmission by reducing viral extinction during mucosal infections.
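The bet-hedging claim has a standard quantitative reading in terms of geometric-mean fitness, which the toy calculation below illustrates. The payoffs, the probability of a harsh transmission environment, and the mixing model are invented for illustration and are not taken from the paper.

```python
# Toy illustration of bet-hedging (not the paper's model): when the payoff of
# active replication occasionally collapses (e.g. a harsh mucosal bottleneck),
# devoting a small fraction of infections to latency can raise long-run
# (geometric-mean) growth even though it lowers the arithmetic mean.
# All payoffs and probabilities below are invented for illustration.
import math

def long_run_growth(p_latent, p_harsh=0.2,
                    active_good=3.0, active_harsh=0.01, latent_payoff=0.9):
    """Expected log-growth per generation when a fraction p_latent of new
    infections enter latency and the rest replicate actively."""
    good = (1 - p_latent) * active_good + p_latent * latent_payoff
    harsh = (1 - p_latent) * active_harsh + p_latent * latent_payoff
    return (1 - p_harsh) * math.log(good) + p_harsh * math.log(harsh)

if __name__ == "__main__":
    for p in (0.0, 0.1, 0.3, 0.5):
        print(f"P(latency) = {p:.1f} -> mean log-growth {long_run_growth(p):+.3f}")
```

With these assumed numbers, a purely active strategy (P(latency) = 0) has negative long-run growth because rare harsh environments are catastrophic, while an intermediate latency probability maximizes the geometric-mean growth rate, which is the qualitative shape of the bet-hedging argument.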

93 citations

Journal ArticleDOI
01 Jan 2011-Methods
TL;DR: This work describes in detail an experimental protocol for the generation of HIV-1 latency using human primary CD4+ T cells, and presents the salient points of other latency models in the field, along with key findings arising from each model.

93 citations

Proceedings ArticleDOI
26 May 2008
TL;DR: The latency for information dissemination in large-scale mobile wireless networks is analyzed and results from percolation theory are used to show that under a constrained i.i.d. mobility model, the scaling behavior of the latency falls into two regimes.
Abstract: In wireless networks, node mobility may be exploited to assist in information dissemination over time. We analyze the latency for information dissemination in large-scale mobile wireless networks. To study this problem, we map a network of mobile nodes to a network of stationary nodes with dynamic links. We then use results from percolation theory to show that under a constrained i.i.d. mobility model, the scaling behavior of the latency falls into two regimes. When the network is not percolated (subcritical), the latency scales linearly with the initial Euclidean distance between the sender and the receiver; when the network is percolated (supercritical), the latency scales sub-linearly with the distance.
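A toy simulation can show how dissemination latency is measured against sender-receiver distance under a constrained i.i.d. mobility model; it does not reproduce the paper's percolation analysis, and the cell layout, node density, and transmission range below are assumptions.

```python
# Minimal simulation sketch of latency vs. sender-receiver distance under a
# constrained i.i.d. mobility model (illustrative only: the cell layout,
# density, and transmission range are assumptions, not the paper's values).
import math, random

def disseminate(num_cells=40, nodes_per_cell=1, tx_range=1.2, max_slots=2000, seed=1):
    """Nodes live on a line of unit cells; each slot, every node is redrawn
    uniformly inside its home cell (constrained i.i.d. mobility). A node that
    holds the message passes it to every node within tx_range. Returns the
    first-reception slot per cell (cell index = distance from the source)."""
    rng = random.Random(seed)
    homes = [c for c in range(num_cells) for _ in range(nodes_per_cell)]
    informed = [home == 0 for home in homes]        # the source lives in cell 0
    first_rx = [math.inf] * num_cells
    first_rx[0] = 0
    for t in range(1, max_slots + 1):
        pos = [h + rng.random() for h in homes]     # i.i.d. reshuffle within each cell
        holders = [p for p, flag in zip(pos, informed) if flag]
        for i, p in enumerate(pos):
            if not informed[i] and any(abs(p - q) <= tx_range for q in holders):
                informed[i] = True
                first_rx[homes[i]] = min(first_rx[homes[i]], t)
        if all(informed):
            break
    return first_rx

if __name__ == "__main__":
    latencies = disseminate()
    for d in (5, 10, 20, 39):
        print(f"distance {d:2d} cells -> first reception at slot {latencies[d]}")
```

Because this toy network is one-dimensional and mostly bridges adjacent cells, the measured latency grows roughly linearly with distance; the sub-linear regime in the paper arises only in the percolated (supercritical) setting analyzed there.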

93 citations

Proceedings ArticleDOI
24 Aug 2015
TL;DR: Hermes, a novel fully polynomial time approximation scheme (FPTAS), is proposed to minimize latency while meeting prescribed resource utilization constraints.
Abstract: With mobile devices increasingly able to connect to cloud servers from anywhere, resource-constrained devices can potentially perform offloading of computational tasks to either improve resource usage or improve performance. It is of interest to find optimal assignments of tasks to local and remote devices that take into account the application-specific profile, availability of computational resources, and link connectivity, and that balance the energy consumption costs of mobile devices against the latency of delay-sensitive applications. Given an application described by a task dependency graph, we formulate an optimization problem to minimize the latency while meeting prescribed resource utilization constraints. Unlike most existing works, which either rely on an integer linear programming formulation that is NP-hard and not applicable to general task dependency graphs for latency metrics, or on intuitively derived heuristics that offer no theoretical performance guarantees, we propose Hermes, a novel fully polynomial time approximation scheme (FPTAS), to solve this problem. Hermes provides a solution with latency no more than (1 + ε) times the minimum while incurring complexity that is polynomial in the problem size and 1/ε. We evaluate the performance using a real data set collected from several benchmarks, and show that Hermes improves the latency by 16% (36% for larger-scale applications) compared to a previously published heuristic and increases CPU computing time by only 0.4% of the overall latency.
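The trade-off that Hermes optimizes can be illustrated with a deliberately simplified example: a serial chain of tasks, each run either on the mobile device or in the cloud, minimizing end-to-end latency subject to a mobile-energy budget. The brute-force search below is not the paper's FPTAS, and all task costs, link delays, and the budget are made-up numbers.

```python
# Simplified illustration of the offloading trade-off that Hermes optimizes
# (this brute-force sketch is NOT the paper's FPTAS; task costs, link delays,
# and the energy budget are made-up numbers for a small serial task chain).
from itertools import product

# (local_time, local_energy, cloud_time) per task; moving the intermediate
# result between devices costs tx_time and tx_energy on the mobile side.
tasks = [(4.0, 3.0, 1.0), (6.0, 5.0, 1.5), (2.0, 1.5, 0.5), (5.0, 4.0, 1.2)]
tx_time, tx_energy = 1.0, 0.5
energy_budget = 8.0

def evaluate(assignment):
    """Total latency and mobile-energy use for one local(0)/cloud(1) assignment."""
    latency = energy = 0.0
    prev = 0                                    # input data starts on the device
    for place, (lt, le, ct) in zip(assignment, tasks):
        if place != prev:                       # ship intermediate data across the link
            latency += tx_time
            energy += tx_energy
        if place == 0:
            latency += lt
            energy += le
        else:
            latency += ct
        prev = place
    if prev == 1:                               # bring the final result back
        latency += tx_time
        energy += tx_energy
    return latency, energy

feasible = []
for a in product((0, 1), repeat=len(tasks)):
    latency, energy = evaluate(a)
    if energy <= energy_budget:
        feasible.append((latency, energy, a))

latency, energy, assignment = min(feasible)
print(f"best assignment (0=local, 1=cloud): {assignment}, "
      f"latency={latency:.1f}, mobile energy={energy:.1f}")
```

The actual Hermes algorithm handles general task dependency graphs and attains latency within (1 + ε) of optimal with complexity polynomial in the problem size and 1/ε; the enumeration above only makes the latency-versus-energy trade-off concrete.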

92 citations


Network Information
Related Topics (5)
The Internet: 213.2K papers, 3.8M citations (75% related)
Node (networking): 158.3K papers, 1.7M citations (75% related)
Wireless: 133.4K papers, 1.9M citations (74% related)
Server: 79.5K papers, 1.4M citations (74% related)
Network packet: 159.7K papers, 2.2M citations (74% related)
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2022    2
2021    485
2020    529
2019    533
2018    500
2017    405