Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
10 May 1993
TL;DR: A two-level cache memory system for use in a computer system including two primary cache memories, one for storing instructions and the other for storing data, is described in this article.
Abstract: A two-level cache memory system for use in a computer system including two primary cache memories, one for storing instructions and one for storing data. The system also includes a secondary cache memory for storing both instructions and data. The primary and secondary caches each employ their own separate tag directory. The primary caches use a virtual addressing scheme employing both virtual tags and virtual addresses. The secondary cache employs a hybrid addressing scheme which uses virtual tags and partial physical addresses. The primary and secondary caches operate in parallel unless the larger and slower secondary cache is busy performing a previous operation. Only if a "miss" is encountered in both the primary and secondary caches does the system processor access the main memory.

101 citations
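
The lookup order described in the abstract above (probe the primary and secondary caches first, and go to main memory only on a miss in both) can be illustrated with a minimal Python sketch. The class and function names, the FIFO-style eviction, and the sequential probing are all simplifying assumptions; real hardware probes the primary and secondary caches in parallel and uses virtual or hybrid tags rather than raw addresses.

class Cache:
    def __init__(self, size):
        self.size = size
        self.lines = {}                              # tag -> data

    def lookup(self, tag):
        return self.lines.get(tag)

    def fill(self, tag, data):
        if len(self.lines) >= self.size:
            self.lines.pop(next(iter(self.lines)))   # FIFO-style eviction for brevity
        self.lines[tag] = data

def load(addr, l1_data, l2, main_memory):
    """Probe the primary data cache, then the secondary cache; only a
    miss in both sends the processor to main memory."""
    data = l1_data.lookup(addr)                      # primary cache (virtual tags)
    if data is not None:
        return data
    data = l2.lookup(addr)                           # secondary cache (hybrid tags)
    if data is not None:
        l1_data.fill(addr, data)                     # promote into the primary cache
        return data
    data = main_memory[addr]                         # double miss: access main memory
    l2.fill(addr, data)
    l1_data.fill(addr, data)
    return data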

Proceedings ArticleDOI
06 May 2013
TL;DR: A novel cache management algorithm for flash-based disk caches, named Lazy Adaptive Replacement Cache (LARC), filters out seldom-accessed blocks to prevent them from entering the cache, improving performance and extending SSD lifetime at the same time.
Abstract: The increasing popularity of flash memory has changed storage systems. Flash-based solid-state drives (SSDs) are now widely deployed as caches for magnetic hard disk drives (HDDs) to speed up data-intensive applications. However, existing cache algorithms focus exclusively on performance improvements and ignore the write endurance of the SSD. In this paper, we propose a novel cache management algorithm for flash-based disk caches, named Lazy Adaptive Replacement Cache (LARC). LARC filters out seldom-accessed blocks and prevents them from entering the cache. This avoids cache pollution and keeps popular blocks in the cache for a longer period of time, leading to a higher hit rate. Meanwhile, LARC reduces the number of cache replacements and thus incurs less write traffic to the SSD, especially for read-dominant workloads. In this way, LARC improves performance and extends SSD lifetime at the same time. LARC is self-tuning and low-overhead. It has been extensively evaluated by both trace-driven simulations and a prototype implementation in flashcache. Our experiments show that LARC outperforms state-of-the-art algorithms and reduces write traffic to the SSD by up to 94.5% for read-dominant workloads and by 11.2-40.8% for write-dominant workloads.

101 citations
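
The lazy-admission idea at the heart of LARC can be sketched compactly: a block enters the SSD cache only on a repeat miss observed in a "ghost" queue that stores block IDs but no data, so one-time accesses never pollute the cache or cost an SSD write. A minimal Python sketch; the fixed queue sizes and plain LRU ordering below are simplifications (the paper's ghost queue is self-tuning).

from collections import OrderedDict

class LARC:
    def __init__(self, cache_size, ghost_size):
        self.cache = OrderedDict()   # block id -> data, kept in LRU order
        self.ghost = OrderedDict()   # block ids only: admission candidates
        self.cache_size = cache_size
        self.ghost_size = ghost_size

    def access(self, block_id, read_from_hdd):
        if block_id in self.cache:                 # hit: refresh LRU position
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = read_from_hdd(block_id)             # miss: serve from the HDD
        if block_id in self.ghost:                 # repeat miss: admit to cache
            del self.ghost[block_id]
            if len(self.cache) >= self.cache_size:
                self.cache.popitem(last=False)     # evict the LRU block
            self.cache[block_id] = data            # the only write to the SSD
        else:                                      # first miss: remember the id only
            if len(self.ghost) >= self.ghost_size:
                self.ghost.popitem(last=False)
            self.ghost[block_id] = True
        return data

Seldom-accessed blocks come and go in the ghost queue without ever touching the SSD; popular blocks are admitted on their second access and then stay cached, which is where both the higher hit rate and the reduced write traffic come from.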

Patent
30 Dec 1994
TL;DR: In this article, the cache memory space in a computer system is controlled on a dynamic basis by adjusting the low threshold which triggers the release of more cache free space and the high threshold which ceases the release of free space.
Abstract: The cache memory space in a computer system is controlled on a dynamic basis by adjusting the low threshold which triggers the release of more cache free space and by adjusting the high threshold which ceases the release of free space. The low and high thresholds are predicted based on the number of allocations which are accomplished in response to I/O requests, and based on the number of blockages which occur when an allocation cannot be accomplished. The predictions may be based on weighted values of different historical time periods, and the high and low thresholds may be made equal to one another. In this manner the performance degradation resulting from variations in workload caused by prior-art fixed or static high and low thresholds is avoided. Instead, only a predicted amount of cache memory space is freed, and that amount of free space is more likely to accommodate the predicted output requests without releasing so much cache space that an unacceptable number of blockages occur.

101 citations
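
One way to read the prediction step is as a weighted sum over historical periods of allocation and blockage counts. The Python sketch below is a loose illustration under that reading; the weights, the combining rule, and the function name are assumptions for the example, not the patent's formula.

def predict_thresholds(allocs_history, blockages_history, weights):
    """Predict how much cache free space to keep available, from weighted
    counts of past allocations (satisfied I/O requests) and blockages
    (allocations that could not be satisfied)."""
    predicted_allocs = sum(w * a for w, a in zip(weights, allocs_history))
    predicted_blocks = sum(w * b for w, b in zip(weights, blockages_history))
    low = predicted_allocs + predicted_blocks   # start releasing space below this
    high = low                                  # thresholds may be made equal
    return low, high

# Example: three historical periods, most recent weighted heaviest.
low, high = predict_thresholds([120, 100, 90], [5, 2, 0], [0.5, 0.3, 0.2])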

Proceedings ArticleDOI
01 Aug 2011
TL;DR: A low-overhead, fully-hardware technique is utilized to detect write-intensive data blocks of the working set and place them into SRAM lines, while the remaining data blocks are candidates to be remapped onto STT-RAM blocks during system operation.
Abstract: In this paper, we propose a run-time strategy for managing writes onto the last-level cache in chip multiprocessors where STT-RAM memory is used as the baseline technology. To this end, we assume that each cache set is decomposed into a limited number of SRAM lines and a large number of STT-RAM lines. SRAM lines are the target of frequently-written data, while rarely-written or read-only data are pushed into STT-RAM. As a novel contribution, a low-overhead, fully-hardware technique is utilized to detect write-intensive data blocks of the working set and place them into SRAM lines, while the remaining data blocks are candidates to be remapped onto STT-RAM blocks during system operation. Therefore, the achieved cache architecture has large capacity and consumes near-zero leakage energy using the STT-RAM array, while low dynamic write energy, acceptable write latency, and long lifetime are guaranteed via the SRAM array. Results of full-system simulation for a quad-core CMP running the PARSEC-2 benchmark suite confirm an average 49x improvement in cache lifetime and more than 50% reduction in cache power consumption when compared to baseline configurations.

101 citations
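
The placement policy reads naturally as a per-block write counter plus a migration threshold. Below is a minimal Python sketch of one hybrid cache set under that reading; the way counts (2 SRAM, 14 STT-RAM), the threshold value, and the arbitrary victim choice are illustrative stand-ins for the paper's hardware detector.

class HybridSet:
    def __init__(self, sram_ways=2, sttram_ways=14, migrate_threshold=4):
        self.sram_ways = sram_ways
        self.sttram_ways = sttram_ways
        self.migrate_threshold = migrate_threshold
        self.sram = {}     # tag -> data: few lines, cheap to write
        self.sttram = {}   # tag -> data: many lines, costly to write
        self.writes = {}   # tag -> write count observed so far

    def _evict_if_full(self, array, ways):
        if len(array) >= ways:
            array.pop(next(iter(array)))   # arbitrary victim for brevity

    def write(self, tag, data):
        self.writes[tag] = self.writes.get(tag, 0) + 1
        if self.writes[tag] >= self.migrate_threshold:
            # Block looks write-intensive: place (or keep) it in an SRAM line
            # so further writes avoid the costly STT-RAM array.
            self.sttram.pop(tag, None)
            if tag not in self.sram:
                self._evict_if_full(self.sram, self.sram_ways)
            self.sram[tag] = data
        else:
            # Rarely written so far: map it onto the STT-RAM ways.
            if tag not in self.sttram:
                self._evict_if_full(self.sttram, self.sttram_ways)
            self.sttram[tag] = data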

Patent
27 Dec 2007
TL;DR: In this paper, a method and apparatus for providing priority-aware and consumption-guided dynamic probabilistic allocation for a cache memory are described, where allocation probabilities for each priority level are updated based on the measured consumption/utilization.
Abstract: A method and apparatus are described herein for providing priority-aware and consumption-guided dynamic probabilistic allocation for a cache memory. Utilization of a sample of the cache memory is measured for each priority level of a computer system. Allocation probabilities for each priority level are updated based on the measured consumption/utilization, i.e. allocation is reduced for priority levels consuming too much of the cache and increased for priority levels consuming too little of it. Each allocation request is assigned a priority level. The allocation probability associated with that priority level is compared with a randomly generated number. If the number is less than the allocation probability, then a fill to the cache is performed normally. In contrast, a spatially or temporally limited fill is performed if the random number is greater than the allocation probability.

101 citations
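
The decision and feedback loop from the abstract fits in a few lines of Python. This is a hedged sketch: the priority labels, target shares, and step size are invented for illustration and are not taken from the patent.

import random

def on_fill_request(priority, alloc_prob):
    """Compare a random draw with the priority level's allocation
    probability: below it, fill normally; otherwise do a limited fill."""
    if random.random() < alloc_prob[priority]:
        return "normal_fill"
    return "limited_fill"    # spatially or temporally limited

def update_probabilities(alloc_prob, utilization, target_share, step=0.05):
    """Feedback: lower the allocation probability of priority levels that
    consume more than their target share of the sampled cache, and raise
    it for levels that consume less."""
    for level in alloc_prob:
        if utilization[level] > target_share[level]:
            alloc_prob[level] = max(0.0, alloc_prob[level] - step)
        elif utilization[level] < target_share[level]:
            alloc_prob[level] = min(1.0, alloc_prob[level] + step)

# Example: two priority levels sharing the cache.
probs = {"high": 0.9, "low": 0.5}
print(on_fill_request("low", probs))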


Network Information
Related Topics (5)

Cache: 59.1K papers, 976.6K citations, 93% related
Compiler: 26.3K papers, 578.5K citations, 89% related
Scalability: 50.9K papers, 931.6K citations, 87% related
Server: 79.5K papers, 1.4M citations, 86% related
Static routing: 25.7K papers, 576.7K citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30