Topic
Latency (engineering)
About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications have been published within this topic, receiving 115,409 citations. The topic is also known as: lag.
Papers published on a yearly basis
Papers
06 Jun 1994
TL;DR: Current results, obtained from a trace-driven simulation, show that prefetching results in as much as a 280% improvement over LRU, especially for smaller caches, and can reduce cache size by up to 50%.
Abstract: Despite impressive advances in file system through put resulting from technologies such as high-bandwidth networks and disk arrays, file system latency has not improved and in many cases has become worse. Consequently, file system I/O remains one of the major bottlenecks to operating system performance [10].
This paper investigates an automated predictive approach towards reducing file latency. Automatic prefetching uses past file accesses to predict future file system requests. The objective is to provide data in advance of the request for the data, effectively masking access latencies. We have designed and implemented a system to measure the performance benefits of automatic prefetching. Our current results, obtained from a trace-driven simulation, show that prefetching results in as much as a 280% improvement over LRU, especially for smaller caches. Alternatively, prefetching can reduce cache size by up to 50%.
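The core idea above, predicting the next file request from past access patterns, can be illustrated with a simple successor table that remembers which file tends to follow which. This is a minimal toy sketch of the general approach, not the paper's actual predictor; the class and method names are invented for illustration:

```python
from collections import defaultdict, Counter

class SuccessorPredictor:
    """Toy sketch of access-pattern-based prefetch prediction:
    remember which file tends to follow which, and predict the
    most frequently observed successor."""

    def __init__(self):
        # file path -> Counter of files observed immediately after it
        self.successors = defaultdict(Counter)
        self.last = None

    def record(self, path):
        """Feed one observed file access into the model."""
        if self.last is not None:
            self.successors[self.last][path] += 1
        self.last = path

    def predict(self, path):
        """Return the most common successor of `path`, or None."""
        seen = self.successors.get(path)
        return seen.most_common(1)[0][0] if seen else None

p = SuccessorPredictor()
for f in ["a.h", "a.c", "a.h", "a.c", "a.h", "b.c"]:
    p.record(f)
print(p.predict("a.h"))  # "a.c" was observed after "a.h" more often than "b.c"
```

A real file system prefetcher would prefetch the predicted file into the cache asynchronously, before the application issues the request, so the access latency is overlapped with useful work.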
402 citations
08 Dec 1997
TL;DR: It is concluded that, for the workload studied, caching offers moderate assistance in reducing latency, and that prefetching can offer more than twice the improvement of caching but is still limited in its ability to reduce latency.
Abstract: Prefetching and caching are techniques commonly used in I/O systems to reduce latency. Many researchers have advocated the use of caching and prefetching to reduce latency in the Web. We derive several bounds on the performance improvements seen from these techniques, and then use traces of Web proxy activity taken at Digital Equipment Corporation to quantify these bounds.
We found that for these traces, local proxy caching could reduce latency by at best 26%, prefetching could reduce latency by at best 57%, and a combined caching and prefetching proxy could provide at best a 60% latency reduction. Furthermore, we found that how far in advance a prefetching algorithm was able to prefetch an object was a significant factor in its ability to reduce latency. We note that the latency reduction from caching is significantly limited by the rapid changes of objects in the Web. We conclude that, for the workload studied, caching offers moderate assistance in reducing latency. Prefetching can offer more than twice the improvement of caching but is still limited in its ability to reduce latency.
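The best-case caching bound described above can be estimated from a trace by crediting every repeat request for an unchanged object with zero latency. The sketch below is only an illustration of that bounding idea (the trace format of (url, latency, was-modified) tuples is an assumption, not the paper's methodology):

```python
def caching_bound(trace):
    """Upper bound on the fractional latency reduction from an
    infinite, ideal cache: every repeat request for an object that
    has not changed is assumed to cost zero latency.

    trace: list of (url, latency, was_modified) tuples.
    """
    total = saved = 0.0
    seen = set()
    for url, latency, was_modified in trace:
        total += latency
        if url in seen and not was_modified:
            # An ideal cache would serve this hit with no latency.
            saved += latency
        seen.add(url)
    return saved / total

trace = [("a.html", 100, False), ("b.css", 50, False),
         ("a.html", 100, False), ("b.css", 50, True)]
print(caching_bound(trace))  # 100 of 300 ms is avoidable at best
```

This captures why rapidly changing objects limit caching: the repeat request for `b.css` contributes nothing to the bound because the object was modified between accesses.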
401 citations
TL;DR: In highly-pipelined machines, instructions and data are prefetched and buffered in both the processor and the cache to reduce the average memory access latency and to take advantage o...
Abstract: In highly-pipelined machines, instructions and data are prefetched and buffered in both the processor and the cache. This is done to reduce the average memory access latency and to take advantage o...
398 citations
04 Dec 2004
TL;DR: This paper develops L2 cache designs for CMPs that incorporate block migration, on-chip transmission lines, and stride-based prefetching between the L1 and L2 caches, and presents a hybrid design combining all three techniques that improves performance by an additional 2% to 19% over prefetching alone.
Abstract: In response to increasing (relative) wire delay, architects have proposed various technologies to manage the impact of slow wires on large uniprocessor L2 caches. Block migration (e.g., D-NUCA and NuRapid) reduces average hit latency by migrating frequently used blocks towards the lower-latency banks. Transmission Line Caches (TLC) use on-chip transmission lines to provide low latency to all banks. Traditional stride-based hardware prefetching strives to tolerate, rather than reduce, latency. Chip multiprocessors (CMPs) present additional challenges. First, CMPs often share the on-chip L2 cache, requiring multiple ports to provide sufficient bandwidth. Second, multiple threads mean multiple working sets, which compete for limited on-chip storage. Third, sharing code and data interferes with block migration, since one processor's low-latency bank is another processor's high-latency bank. In this paper, we develop L2 cache designs for CMPs that incorporate these three latency management techniques. We use detailed full-system simulation to analyze the performance trade-offs for both commercial and scientific workloads. First, we demonstrate that block migration is less effective for CMPs because 40-60% of L2 cache hits in commercial workloads are satisfied in the central banks, which are equally far from all processors. Second, we observe that although transmission lines provide low latency, contention for their restricted bandwidth limits their performance. Third, we show stride-based prefetching between L1 and L2 caches alone improves performance by at least as much as the other two techniques. Finally, we present a hybrid design, combining all three techniques, that improves performance by an additional 2% to 19% over prefetching alone.
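Stride-based prefetching, as used in the abstract above, detects a constant address stride for each load instruction and fetches ahead of the demand stream. The reference-table sketch below is a simplified illustration of the general technique; the table fields, confidence rule, and prefetch degree are assumptions for the sketch, not this paper's design:

```python
class StridePrefetcher:
    """Simplified stride prefetcher sketch: one table entry per
    load PC records the last address and stride; once the same
    stride is seen twice in a row, prefetch `degree` blocks ahead."""

    def __init__(self, degree=1):
        self.table = {}      # pc -> (last_addr, stride, confident)
        self.degree = degree

    def access(self, pc, addr):
        """Record a memory access; return addresses to prefetch."""
        prefetches = []
        if pc in self.table:
            last, stride, confident = self.table[pc]
            new_stride = addr - last
            if confident and new_stride == stride:
                # Stride confirmed again: issue prefetches ahead.
                prefetches = [addr + stride * i
                              for i in range(1, self.degree + 1)]
            self.table[pc] = (addr, new_stride, new_stride == stride)
        else:
            self.table[pc] = (addr, 0, False)
        return prefetches

sp = StridePrefetcher(degree=2)
for a in (0, 64, 128):
    sp.access(0x400, a)          # training: stride of 64 established
print(sp.access(0x400, 192))     # prefetches two blocks ahead
```

In the L1-to-L2 setting the abstract describes, the returned addresses would be fetched from L2 (or memory) into the L1 ahead of the demand misses, tolerating latency rather than reducing it.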
391 citations
TL;DR: Addresses the accumulating evidence that reactivation of HCMV from latency is linked intrinsically to the differentiation status of the myeloid cell, and how the cellular mechanisms that normally control host gene expression play a critical role in the differential regulation of viral gene expression during latency and reactivation.
Abstract: Human cytomegalovirus (HCMV) persists as a subclinical, lifelong infection in the normal human host, maintained at least in part by its carriage in the absence of detectable infectious virus – the hallmark of latent infection. Reactivation from latency in immunocompromised individuals, in contrast, often results in serious disease. Latency and reactivation are defining characteristics of the herpesviruses and key to understanding their biology. However, the precise cellular sites in which HCMV is carried and the mechanisms regulating its latency and reactivation during natural infection remain poorly understood. This review will detail our current knowledge of where HCMV is carried in healthy individuals, which viral genes are expressed upon carriage of the virus and what effect this has on cellular gene expression. It will also address the accumulating evidence suggesting that reactivation of HCMV from latency appears to be linked intrinsically to the differentiation status of the myeloid cell, and how the cellular mechanisms that normally control host gene expression play a critical role in the differential regulation of viral gene expression during latency and reactivation.
386 citations