Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications on this topic have received 115,409 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
01 Jan 2018
TL;DR: This work proposes a score-based edge service scheduling algorithm that evaluates both network and computational capabilities of edge nodes and outputs the maximum scoring mapping between services and resources.
Abstract: Edge computing is an emerging technology that aims to include latency-sensitive and data-intensive applications, such as mobile or IoT services, in the cloud ecosystem by placing computational resources at the edge of the network. Close proximity to producers and consumers of data brings significant benefits in latency and bandwidth. However, edge resources are, by definition, limited in comparison to their cloud counterparts; thus, a trade-off exists between deploying a service closest to its users and avoiding resource overload. We propose a score-based edge service scheduling algorithm that evaluates both the network and computational capabilities of edge nodes and outputs the maximum-scoring mapping between services and resources. Our extensive simulation, based on a live video streaming service, demonstrates significant improvements in both network delay and service time. Additionally, we compare edge computing technology with state-of-the-art cloud computing and content delivery network solutions in the context of latency-sensitive and data-intensive applications. Our results show that edge computing enhanced with the suggested scheduling algorithm is a viable solution for achieving high quality of service and responsiveness when deploying such applications.

35 citations
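To make the score-based placement idea above concrete, here is a minimal sketch in Python. The scoring function (inverse latency plus relative CPU headroom) and all node and service parameters are illustrative assumptions, not the paper's actual formula, which the abstract does not specify.

```python
# Hypothetical score-based edge scheduler: greedily place each service on
# the feasible node with the highest combined network/compute score.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float   # network distance from this node to the service's users
    free_cpu: float     # remaining computational capacity (arbitrary units)

@dataclass
class Service:
    name: str
    cpu_demand: float

def score(node: Node, svc: Service) -> float:
    # Combine a network score (inverse latency) with a compute score
    # (relative headroom left after placement); -inf marks infeasible nodes.
    if node.free_cpu < svc.cpu_demand:
        return float("-inf")
    headroom = (node.free_cpu - svc.cpu_demand) / node.free_cpu
    return 1.0 / node.latency_ms + headroom

def schedule(services, nodes):
    placement = {}
    for svc in sorted(services, key=lambda s: -s.cpu_demand):
        best = max(nodes, key=lambda n: score(n, svc))
        if score(best, svc) == float("-inf"):
            continue  # no feasible node: service stays unscheduled
        best.free_cpu -= svc.cpu_demand
        placement[svc.name] = best.name
    return placement

nodes = [Node("edge-1", 5.0, 4.0), Node("edge-2", 12.0, 8.0)]
services = [Service("stream-a", 3.0), Service("stream-b", 2.0)]
print(schedule(services, nodes))  # {'stream-a': 'edge-2', 'stream-b': 'edge-1'}
```

A greedy pass like this captures the trade-off the abstract describes: a nearby node wins only while it retains capacity headroom; once placing a service there would nearly overload it, a farther but less loaded node scores higher.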

Journal ArticleDOI
26 Mar 2019 - mBio
TL;DR: The QUECEL model closely mimics RNA induction profiles seen in cells from well-suppressed HIV patient samples using the envelope detection of in vitro transcription sequencing (EDITS) assay and is a robust and reproducible tool to study the molecular mechanisms underlying HIV latency.
Abstract: The latent HIV reservoir is generated following HIV infection of activated effector CD4 T cells, which then transition to a memory phenotype. Here, we describe an ex vivo method, called QUECEL (quiescent effector cell latency), that mimics this process efficiently and allows production of large numbers of latently infected CD4+ T cells. Naive CD4+ T cells were polarized into the four major T cell subsets (Th1, Th2, Th17, and Treg) and subsequently infected with a single-round reporter virus which expressed GFP/CD8a. The infected cells were purified and coerced into quiescence using a defined cocktail of cytokines, including transforming growth factor beta, interleukin-10 (IL-10), and IL-8, producing a homogeneous population of latently infected cells. Flow cytometry and transcriptome sequencing (RNA-Seq) demonstrated that the cells maintained the correct polarization phenotypes and had withdrawn from the cell cycle. Key pathways and gene sets enriched during the transition from quiescence to reactivation include E2F targets, the G2M checkpoint, estrogen response late gene expression, and c-myc targets. Reactivation of HIV by latency-reversing agents (LRAs) closely mimics the RNA induction profiles seen in cells from well-suppressed HIV patient samples using the envelope detection of in vitro transcription sequencing (EDITS) assay. Since homogeneous populations of latently infected cells can be recovered, the QUECEL model has an excellent signal-to-noise ratio and has been extremely consistent and reproducible in numerous experiments performed during the last 4 years. The ease, efficiency, and accuracy with which the model mimics physiological conditions make QUECEL a robust and reproducible tool for studying the molecular mechanisms underlying HIV latency.

IMPORTANCE: Current primary cell models for HIV latency correlate poorly with the reactivation behavior of patient cells. We have developed a new model, called QUECEL, which generates a large and homogeneous population of latently infected CD4+ memory cells. By purifying HIV-infected cells and inducing cell quiescence with a defined cocktail of cytokines, we have eliminated the largest problems with previous primary cell models of HIV latency: variable infection levels, ill-defined polarization states, and inefficient shutdown of cellular transcription. Latency reversal in the QUECEL model by a wide range of agents correlates strongly with RNA induction in patient samples. This scalable and highly reproducible model of HIV latency will permit detailed analysis of cellular mechanisms controlling HIV latency and reactivation.

35 citations

Patent
14 Nov 2008
TL;DR: In this patent, the average delay experienced by packets in a queue is measured, and this information is then used to change the coefficient of a prediction equation to select the optimal point for achieving time-delay requirements while preserving air-link resources.
Abstract: A method and apparatus for requesting bandwidth in a subscriber station is disclosed. The method dynamically changes the size of the bandwidth request based on a prediction of the number of packets that need to be transmitted. The average delay experienced by packets in a queue is measured, and this information is then used to change the coefficient of a prediction equation. When the experienced average delay is below the agreed-upon QoS latency parameter, or delay target, the method reduces the size of the bandwidth requests by making the prediction equation more conservative. On the other hand, when the experienced delay is above the agreed-upon latency, the algorithm makes the prediction equation more aggressive, increasing the bandwidth requests and reducing the latency for future packets. By modifying the prediction equation based on the measured delay, the method is able to select the optimal point for achieving time-delay requirements while preserving air-link resources.

35 citations
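A hedged sketch of the adaptive loop the patent describes: the coefficient of the prediction equation is nudged after each delay measurement, shrinking requests while the delay target is met and growing them when it is missed. The class, attribute, and parameter names (BandwidthRequester, coeff, step) are hypothetical, as are the step size and bounds.

```python
# Illustrative adaptive bandwidth-request controller (names and constants
# are assumptions, not from the patent text).
class BandwidthRequester:
    def __init__(self, delay_target_ms: float, step: float = 0.05):
        self.delay_target = delay_target_ms
        self.coeff = 1.0   # multiplier on the predicted packet count
        self.step = step   # how quickly the coefficient adapts

    def update(self, avg_delay_ms: float) -> None:
        # Above target: be more aggressive. At or below target: be more
        # conservative, with a floor so requests never vanish entirely.
        if avg_delay_ms > self.delay_target:
            self.coeff += self.step
        else:
            self.coeff = max(0.1, self.coeff - self.step)

    def request_size(self, predicted_packets: int, bytes_per_packet: int) -> int:
        return int(self.coeff * predicted_packets * bytes_per_packet)

req = BandwidthRequester(delay_target_ms=20.0)
req.update(avg_delay_ms=35.0)       # measured delay exceeds the target
print(req.request_size(10, 1500))   # larger request: 15750 bytes
```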

Proceedings Article
01 Jan 2017
TL;DR: A novel hybrid journaling technique is proposed, called ijournaling, which journals only the corresponding file-level transaction for an fsync call, while recording a normal journal transaction during periodic journaling to reduce the fsync latency, mitigate the interference between fsync-intensive threads, and provide high manycore scalability.
Abstract: For data durability, many applications rely on synchronous operations such as the fsync() system call. However, latency-sensitive synchronous operations can be delayed under the compound transaction scheme of current journaling techniques. Because a compound transaction includes irrelevant data and metadata, as well as the data and metadata of the fsynced file, the latency of an fsync call can be unexpectedly long. In this paper, we first analyze various factors that may delay an fsync operation, and then propose a novel hybrid journaling technique, called ijournaling, which journals only the corresponding file-level transaction for an fsync call while recording a normal journal transaction during periodic journaling. The file-level transaction journal contains only the metadata updates related to the fsynced file. By removing several factors detrimental to fsync latency, the proposed technique reduces fsync latency, mitigates interference between fsync-intensive threads, and provides high manycore scalability. Experiments using a smartphone and a desktop computer showed significant improvements in fsync latency through the use of ijournaling.

35 citations
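The fsync-latency problem the paper targets can be illustrated with a toy model: under compound journaling, committing the running transaction flushes every dirty file, while a file-level transaction flushes only the fsynced file's own updates. The numbers and function names below are invented for illustration; the actual technique operates on ext4 journal transactions, not byte counts.

```python
# Toy comparison of compound vs. file-level journal commits (illustrative
# only): how many bytes must reach the journal before fsync can return.
dirty_files = {"a.log": 4096, "b.db": 1 << 20, "c.tmp": 1 << 22}  # dirty bytes

def fsync_compound(target: str) -> int:
    # Compound scheme: the whole running transaction is committed, so
    # every dirty file is flushed regardless of which file was fsynced.
    return sum(dirty_files.values())

def fsync_ijournal(target: str) -> int:
    # File-level transaction: only the fsynced file's updates are journaled.
    return dirty_files[target]

print("compound flush bytes:", fsync_compound("a.log"))   # 5246976 (~5 MiB)
print("ijournal flush bytes:", fsync_ijournal("a.log"))   # 4096
```

The gap between the two numbers is the "irrelevant data and metadata" the abstract blames for unexpectedly long fsync calls.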

Journal ArticleDOI
01 Mar 2020
TL;DR: A tractable model of NB-IoT connectivity is developed, comprising message exchanges in random-access, control, and data channels, and it is confirmed that channel scheduling and coexistence of coverage classes significantly affect latency and battery lifetime performance of IoT devices.
Abstract: Narrowband Internet-of-Things (NB-IoT) offers a significant link-budget improvement over legacy networks by introducing different coverage classes, allowing repeated transmissions, and tuning the repetition order based on the path loss in communications. However, those repetitions necessarily increase energy consumption and latency in the whole NB-IoT system. The extent to which the whole system is affected depends on the scheduling of the uplink and downlink channels. We address this question, not treated previously, by developing a tractable model of NB-IoT connectivity, comprising message exchanges in random-access, control, and data channels. The model is then used to analyze the impact of channel scheduling and the interaction of coverage classes on the performance of IoT devices through the derivation of the expected latency and battery lifetime. These results are subsequently employed in determining the optimized operation points, i.e., (i) the scheduling of data and control channels for a given set of users and respective coverage classes, or (ii) the optimal set of coverage classes and served users per coverage class for a given scheduling strategy. Simulation results show the validity of the analysis and confirm that channel scheduling and the coexistence of coverage classes significantly affect the latency and battery-lifetime performance of NB-IoT devices.

35 citations
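The latency/lifetime trade-off driven by the repetition order can be sketched with back-of-the-envelope formulas: repetitions multiply both airtime and transmission energy. All constants below (transmission time, transmit power, battery capacity, channel wait time) are assumed values for illustration, not the paper's model parameters.

```python
# Rough sketch of how the repetition order trades latency for battery life
# in an NB-IoT-style uplink (all constants are assumptions).
def uplink_latency_ms(base_tx_ms: float, repetitions: int, wait_ms: float) -> float:
    # Random-access and control-channel wait, then the repeated data transmission.
    return wait_ms + base_tx_ms * repetitions

def battery_lifetime_years(battery_j: float, reports_per_day: int,
                           tx_power_w: float, base_tx_ms: float,
                           repetitions: int, idle_w: float = 1e-5) -> float:
    # Energy per report grows linearly with the repetition order.
    tx_j_per_report = tx_power_w * (base_tx_ms * repetitions / 1000.0)
    joules_per_day = reports_per_day * tx_j_per_report + idle_w * 86400
    return battery_j / joules_per_day / 365.0

# Compare hypothetical coverage classes: normal, robust, and extreme coverage.
for reps in (1, 8, 64):
    print(f"reps={reps:2d}  "
          f"latency={uplink_latency_ms(32.0, reps, wait_ms=250.0):7.1f} ms  "
          f"lifetime={battery_lifetime_years(18720.0, 24, 0.2, 32.0, reps):5.1f} y")
```

Even this crude model reproduces the qualitative conclusion of the abstract: deep-coverage classes with high repetition orders pay for their link budget in both latency and battery lifetime, which is why channel scheduling across coexisting classes matters.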


Network Information
Related Topics (5)

Topic                Papers     Citations   Relatedness
The Internet         213.2K     3.8M        75%
Node (networking)    158.3K     1.7M        75%
Wireless             133.4K     1.9M        74%
Server               79.5K      1.4M        74%
Network packet       159.7K     2.2M        74%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    2
2021    485
2020    529
2019    533
2018    500
2017    405