Topic
Latency (engineering)
About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications have been published within this topic, receiving 115,409 citations. The topic is also known as: lag.
Papers published on a yearly basis
Papers
01 Jul 2019
TL;DR: The authors explain how to hide NVM latency by interleaving the execution of parallel work in index joins and tuple reconstruction using coroutines, accelerating end-to-end query runtimes on NVM and DRAM by up to 1.7X and 2.6X respectively.
Abstract: Non-Volatile Memory (NVM) technologies exhibit 4X the read access latency of conventional DRAM. When the working set does not fit in the processor cache, this latency gap between DRAM and NVM leads to more than 2X runtime increase for queries dominated by latency-bound operations such as index joins and tuple reconstruction. We explain how to easily hide NVM latency by interleaving the execution of parallel work in index joins and tuple reconstruction using coroutines. Our evaluation shows that interleaving applied to the non-trivial implementations of these two operations in a production-grade codebase accelerates end-to-end query runtimes on both NVM and DRAM by up to 1.7X and 2.6X respectively, thereby reducing the performance difference between DRAM and NVM by more than 60%.
31 citations
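The coroutine-based interleaving described in the abstract above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: generators stand in for coroutines, and the `yield` suspension point stands in for the prefetch-then-suspend step that lets another probe run while a memory access would be in flight.

```python
# Illustrative sketch (not the paper's code): hiding memory latency by
# interleaving several index probes, expressed with Python generators.

def probe(index, key):
    """Coroutine: look up `key` in a chained hash index, suspending
    at the point where a real system would issue a prefetch."""
    bucket = hash(key) % len(index)
    # A real implementation would prefetch `index[bucket]` here, then
    # suspend so another probe runs while the access is in flight.
    yield  # suspension point ~ would-be cache miss
    for k, v in index[bucket]:
        if k == key:
            return v
    return None

def interleave(index, keys):
    """Round-robin scheduler: keeps many probes in flight at once."""
    results = {}
    pending = {k: probe(index, k) for k in keys}
    while pending:
        done = []
        for k, coro in pending.items():
            try:
                next(coro)  # resume this probe until its next suspension
            except StopIteration as stop:
                results[k] = stop.value
                done.append(k)
        for k in done:
            del pending[k]
    return results

# Build a toy chained hash index and run interleaved probes over it.
index = [[] for _ in range(8)]
for k, v in [("a", 1), ("b", 2), ("c", 3)]:
    index[hash(k) % 8].append((k, v))

print(interleave(index, ["a", "b", "c", "missing"]))
```

In a production engine the scheduler would cap the number of in-flight probes to roughly the number of outstanding memory requests the hardware supports; here the round-robin loop only demonstrates the control flow.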
25 Jun 2018
TL;DR: This paper develops a joint multi-user preemptive scheduling strategy that simultaneously cross-optimizes system spectral efficiency (SE) and URLLC latency; extensive dynamic system-level simulations show that the proposed scheduler provides significant performance gains in terms of eMBB SE and URLLC latency.
Abstract: 5G New Radio is envisioned to support three major service classes: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications. Emerging URLLC services require communication latency of no more than one millisecond with 99.999% success probability. There is, however, a fundamental trade-off between system spectral efficiency (SE) and achievable latency. This calls for novel scheduling protocols that cross-optimize system performance on a user-centric, rather than network-centric, basis. In this paper, we develop a joint multi-user preemptive scheduling strategy to simultaneously cross-optimize system SE and URLLC latency. At each scheduling opportunity, available URLLC traffic is always given higher priority. When sporadic URLLC traffic appears during a transmission time interval (TTI), the proposed scheduler seeks to fit the URLLC and eMBB traffic into a multi-user transmission. If the available spatial degrees of freedom within a TTI are limited, the URLLC traffic instantly overwrites part of the ongoing eMBB transmissions to satisfy the URLLC latency requirements, at the expense of a minimal eMBB throughput loss. Extensive dynamic system-level simulations show that the proposed scheduler provides significant performance gains in terms of eMBB SE and URLLC latency.
31 citations
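The preemption (puncturing) step described in the abstract above can be sketched as a resource-block allocation rule. All names and numbers below are illustrative assumptions, not the paper's scheduler: within one TTI, URLLC arrivals first consume free resource blocks and only then overwrite part of the eMBB allocation.

```python
# Toy sketch of URLLC-over-eMBB preemption within one TTI
# (assumed model, not the paper's implementation).

def schedule_tti(n_rbs, embb_demand, urllc_arrivals):
    """Return (embb_rbs, urllc_rbs, preempted_rbs) for one TTI."""
    embb_rbs = min(embb_demand, n_rbs)
    free_rbs = n_rbs - embb_rbs
    # URLLC gets priority: use any free blocks first.
    urllc_from_free = min(urllc_arrivals, free_rbs)
    remaining_urllc = urllc_arrivals - urllc_from_free
    # Not enough free capacity: puncture (preempt) ongoing eMBB blocks,
    # trading a small eMBB throughput loss for URLLC latency.
    preempted = min(remaining_urllc, embb_rbs)
    embb_rbs -= preempted
    urllc_rbs = urllc_from_free + preempted
    return embb_rbs, urllc_rbs, preempted

# 50 resource blocks, eMBB wants 48, a burst of 5 URLLC packets arrives:
print(schedule_tti(50, 48, 5))  # → (45, 5, 3)
```

The key design point mirrored here is that URLLC is never queued to the next TTI: it is served immediately, either from spare capacity or by overwriting eMBB.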
29 Jul 2002
TL;DR: In this article, the authors describe a new method and system for delivering data over a network to a large number of clients, which may be suitable for building large-scale Video-on-Demand (VOD) systems.
Abstract: This invention describes a new method and system for delivering data over a network to a large number of clients, which may be suitable for building large-scale Video-on-Demand (VOD) systems. In current VOD systems, the client may suffer a long latency before starting to receive the requested data if the system is to provide sufficient interactive functions, or the reverse, unless the network load is significantly increased. The method utilizes two groups of data streams: one responsible for minimizing latency while the other provides the required interactive functions. In the anti-latency data group, uniform, non-uniform, or hierarchical staggered stream intervals may be used. A system realized based on this invention may have a relatively small startup latency while users may enjoy most of the interactive functions typical of video recorders, including fast-forward, forward-jump, and so on. Furthermore, this invention may also be able to maintain the number of data streams, and therefore the bandwidth, required.
31 citations
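The uniform staggered-interval idea behind the anti-latency stream group can be illustrated with a small sketch. The parameterization below is an assumption for illustration, not the patent's exact scheme: with N streams of a length-L movie started at offsets of L/N, a newly joining client waits at most one stagger interval before playback.

```python
# Sketch of uniform staggered streams for the anti-latency group
# (illustrative assumption, not the patent's exact scheme).

def stagger_offsets(movie_len_s, n_streams):
    """Start offsets (seconds) of each staggered broadcast stream."""
    interval = movie_len_s / n_streams
    return [i * interval for i in range(n_streams)]

def worst_case_latency(movie_len_s, n_streams):
    """A client joining just after a stream started waits one interval."""
    return movie_len_s / n_streams

print(stagger_offsets(7200, 4))      # → [0.0, 1800.0, 3600.0, 5400.0]
print(worst_case_latency(7200, 60))  # → 120.0 (seconds, 2-hour movie)
```

This shows why startup latency shrinks linearly with the number of anti-latency streams, while the interactive functions (jumps, fast-forward) are handled by the second stream group rather than by adding more staggered copies.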
TL;DR: A model wherein episodic Blimp-1-mediated plasma cell differentiation leads to MHV68 reactivation, which serves to both renew the latency reservoirs and stimulate long-lived plasma cells to secrete virus-specific antibody is supported.
Abstract: Recent evidence from the study of Epstein-Barr virus and Kaposi's sarcoma-associated herpesvirus supports a model in which terminal differentiation of B cells to plasma cells leads to virus reactivation. Here we address the role of Blimp-1, the master transcriptional regulator of plasma cell differentiation, in murine gammaherpesvirus 68 (MHV68) latency and reactivation. Blimp-1 expression in infected cells was dispensable for acute virus replication in the lung following intranasal inoculation and in the spleen following intraperitoneal inoculation with MHV68. However, we observed a role for Blimp-1 in both the establishment of latency and reactivation from latency in vivo. Additionally, plasma cell-deficient mice also exhibited a significant defect in the establishment of latency in the spleen, as well as reactivation from latency, similar to mice that lacked Blimp-1 only in MHV68-infected cells. In the absence of plasma cells, MHV68 infection failed to elicit a strong germinal center response and fewer B cells in the germinal center were MHV68 infected. Notably, the absence of a functional Blimp-1 gene only in MHV68-infected cells led to a decrease in both B-cell and CD4+ T-cell responses during the establishment of latency. Finally, Blimp-1 expression in infected cells played a critical role in the maintenance of both MHV68 latency in the spleen and antibody responses to MHV68. Together, these studies support a model wherein episodic Blimp-1-mediated plasma cell differentiation leads to MHV68 reactivation, which serves to both renew the latency reservoirs and stimulate long-lived plasma cells to secrete virus-specific antibody.
31 citations
07 Jul 2019
TL;DR: The authors prove a natural tradeoff between AoI and packet delay, and show that the service time distribution that minimizes average age must necessarily have an unbounded second moment.
Abstract: Information freshness and low-latency communication are important to many emerging applications. While Age of Information (AoI) serves as a metric of information freshness, packet delay is a traditional metric of communication latency. We prove that there is a natural tradeoff between AoI and packet delay. We consider a single-server system in which at most one update packet can be serviced at a time. The system designer controls the order in which the packets get serviced and the service time distribution, for a given service rate. We analyze two tradeoff problems that minimize packet delay and the variance in packet delay, respectively, subject to an average age constraint. We prove a strong age-delay and age-delay-variance tradeoff, wherein, as the average age approaches its minimum, the delay and its variance approach infinity. We show that the service time distribution that minimizes the average age must necessarily have an unbounded second moment.
31 citations
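The AoI quantities underlying the tradeoff in the entry above follow the standard definitions, stated here for context (not taken from the paper itself):

```latex
% Standard Age of Information definitions: instantaneous age and its
% time average (context for the age-delay tradeoff, not from the paper).
\[
  \Delta(t) = t - u(t),
  \qquad
  \bar{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\, dt ,
\]
```

where $u(t)$ is the generation time of the freshest update received by time $t$. The tradeoff states that driving the average age $\bar{\Delta}$ toward its minimum forces the packet delay $D$ and its variance $\mathrm{Var}(D)$ to diverge.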