Topic
Cache invalidation
About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.
Papers published on a yearly basis
Papers
07 Mar 2005
TL;DR: A buffer cache interposed between a non-volatile memory and a host may be partitioned into segments that operate with different policies, such as write-through, write-back, and read-look-ahead.
Abstract: A buffer cache interposed between a non-volatile memory and a host may be partitioned into segments that may operate with different policies. Cache policies include write-through, write-back, and read-look-ahead. Write-through and write-back policies may improve speed. Read-look-ahead cache allows more efficient use of the bus between the buffer cache and non-volatile memory. A session command allows data to be maintained in volatile memory by guaranteeing against power loss.
256 citations
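The two write policies this patent names can be contrasted with a minimal sketch: write-through propagates every write to backing storage immediately, while write-back defers the slow write until eviction. The class and method names below are illustrative assumptions, not taken from the patent.

```python
class WriteThroughCache:
    """Every write goes to the cache and to backing storage at once."""
    def __init__(self, backing):
        self.backing = backing      # dict standing in for non-volatile memory
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        self.backing[addr] = value  # propagate immediately


class WriteBackCache:
    """Writes stay in the cache, marked dirty, until the line is evicted."""
    def __init__(self, backing):
        self.backing = backing
        self.lines = {}             # addr -> (value, dirty)

    def write(self, addr, value):
        self.lines[addr] = (value, True)   # defer the slow backing write

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:
            self.backing[addr] = value     # write reaches storage only now
```

Write-back reduces traffic to the non-volatile memory but, as the abstract's session-command mechanism suggests, deferred data must somehow be protected against power loss.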
01 Aug 2000
TL;DR: This paper focuses on the features of the M340 cache sub-system and illustrates the effect on power and performance through benchmark analysis and actual silicon measurements.
Abstract: Advances in technology have allowed portable electronic devices to become smaller and more complex, placing stringent power and performance requirements on the device's components. The M·CORE M3 architecture was developed specifically for these embedded applications. To address the growing need for longer battery life and higher performance, an 8-Kbyte, 4-way set-associative, unified (instruction and data) cache with programmable features was added to the M3 core. These features allow the architecture to be optimized based on the application's requirements. In this paper, we focus on the features of the M340 cache sub-system and illustrate the effect on power and performance through benchmark analysis and actual silicon measurements.
253 citations
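How an address maps into a 4-way set-associative cache like the 8-Kbyte one described above can be sketched briefly. The line size (32 bytes) is an assumption, not a figure from the paper; with it, 8 KB / 32 B / 4 ways gives 64 sets.

```python
LINE_SIZE = 32                            # bytes per line (assumed)
WAYS = 4
SETS = 8 * 1024 // (LINE_SIZE * WAYS)     # 64 sets

def split_address(addr):
    """Split a byte address into (tag, set index, line offset)."""
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % SETS
    tag = addr // (LINE_SIZE * SETS)
    return tag, index, offset

def lookup(cache, addr):
    """cache: list of SETS sets, each holding up to WAYS resident tags.
    A hit occurs when any way in the indexed set holds the address's tag."""
    tag, index, _ = split_address(addr)
    return tag in cache[index]
```

Associativity means each address may live in any of the 4 ways of its set, which reduces conflict misses compared to a direct-mapped cache of the same size.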
14 Feb 2004
TL;DR: This paper proposes several power-aware storage cache management algorithms that provide more opportunities for the underlying disk power management schemes to save energy and investigates the effects of four storage cache write policies on disk energy consumption.
Abstract: Reducing energy consumption is an important issue for data centers. Among the various components of a data center, storage is one of the biggest consumers of energy. Previous studies have shown that the average idle period for a server disk in a data center is very small compared to the time taken to spin down and spin up. This significantly limits the effectiveness of disk power management schemes. This paper proposes several power-aware storage cache management algorithms that provide more opportunities for the underlying disk power management schemes to save energy. More specifically, we present an off-line power-aware greedy algorithm that is more energy-efficient than Belady's off-line algorithm (which minimizes cache misses only). We also propose an online power-aware cache replacement algorithm. Our trace-driven simulations show that, compared to LRU, our algorithm saves 16% more disk energy and provides 50% better average response time for OLTP I/O workloads. We have also investigated the effects of four storage cache write policies on disk energy consumption.
252 citations
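Belady's off-line algorithm, the miss-minimizing baseline this paper compares against, can be sketched as follows: on a miss with a full cache, evict the resident block whose next reference lies furthest in the future (or never occurs). This is an illustrative implementation, not the paper's power-aware variant.

```python
def belady_misses(trace, capacity):
    """Count misses for Belady's optimal off-line replacement policy."""
    cache, misses = set(), 0
    for i, block in enumerate(trace):
        if block in cache:
            continue                      # hit: nothing to do
        misses += 1
        if len(cache) >= capacity:
            def next_use(b):
                # Position of b's next reference, or infinity if none.
                try:
                    return trace.index(b, i + 1)
                except ValueError:
                    return float("inf")
            # Evict the block referenced furthest in the future.
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return misses
```

The paper's point is that minimizing misses is not the same as minimizing energy: a power-aware policy may accept a few extra misses if doing so lengthens disk idle periods enough to allow spin-down.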
TL;DR: The cache, a high-speed buffer establishing a storage hierarchy in the Model 85, is discussed in depth in this part, since it represents the basic organizational departure from other SYSTEM/360 computers.
Abstract: The cache, a high-speed buffer establishing a storage hierarchy in the Model 85, is discussed in depth in this part, since it represents the basic organizational departure from other SYSTEM/360 computers.
Discussed are organization and operation of the cache, including the mechanisms used to locate and retrieve data needed by the processor.
The internal performance studies that led to use of the cache are described, and simulated performance of the chosen configuration is compared with that of a theoretical system having an entire 80-nanosecond main storage. Finally, the effects of varying cache parameters are discussed and tabulated.
252 citations
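The comparison described above, a cached system versus a theoretical machine whose entire main storage runs at cache speed, reduces to the standard effective-access-time model. The hit ratio and timing figures below are illustrative assumptions, not numbers from the Model 85 study.

```python
def effective_access_ns(hit_ratio, t_cache_ns, t_main_ns):
    """Average access time = h * t_cache + (1 - h) * t_main."""
    return hit_ratio * t_cache_ns + (1 - hit_ratio) * t_main_ns
```

With a high enough hit ratio, the cached system's average access time approaches that of the all-fast theoretical system, which is what made the cache the Model 85's key organizational departure.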
IBM
TL;DR: In this article, the predicted-to-be-selected page is added to a local cache of predicted pages in the client, and the client can update the appearance of the link to indicate to the user that the page represented by that link is available in the local cache.
Abstract: A computer, e.g., a server or computer operated by a network provider, sends one or more requesting computers (clients) the page of information most likely to be selected next (the predicted page), determined via a preference factor based on one or more pages the client has requested. This page is added to a local cache of predicted-to-be-selected pages in the client. Once the predicted page is in the cache, the client can update the appearance of the link (e.g., by changing its color or otherwise altering the link indicator) to indicate to the user that the page represented by that link is available in the local cache.
250 citations
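The client-side behavior this patent describes, caching server-pushed predicted pages and restyling links whose targets are already local, can be sketched as follows. All names here are hypothetical, not drawn from the patent.

```python
class PrefetchingClient:
    """Client that caches predicted pages and marks links as locally available."""

    def __init__(self):
        self.local_cache = {}            # url -> page content

    def receive_predicted_page(self, url, content):
        """Server pushed a predicted-to-be-selected page; store it locally."""
        self.local_cache[url] = content

    def link_style(self, url):
        """Links to cached pages get a distinct appearance for the user."""
        return "cached" if url in self.local_cache else "normal"

    def follow(self, url, fetch):
        """Serve a followed link from the local cache, else fetch remotely."""
        if url in self.local_cache:
            return self.local_cache[url]
        return fetch(url)
```

The restyled link is the user-visible payoff: the user learns, before clicking, that the page will load instantly from the local cache.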