Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
03 Jun 2004
TL;DR: In this patent, a method and apparatus to avoid incoherency between a cache memory and a flash memory are provided; the method may include invalidating at least one cache line of information stored in the cache memory.
Abstract: Briefly, in accordance with an embodiment of the invention, a method and apparatus to avoid incoherency between a cache memory and a flash memory is provided. The method may include invalidating at least one cache line of information stored in the cache memory to avoid incoherency between the cache memory and the flash memory in response to a flash erase operation, a flash write operation, an operation that makes information inaccessible in the flash memory, or an operation that moves information from one region of the flash memory to another region of the flash memory. Other embodiments are described and claimed.
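
The invalidation scheme lends itself to a short illustration. Below is a minimal sketch (hypothetical class and method names, not taken from the patent) of the core idea: every flash erase, write, or region move first invalidates any cache lines covering the affected flash addresses, so the cache can never serve stale data.

```python
class FlashBackedCache:
    """Toy cache over a flash region (illustrative sketch; names and
    structure are assumptions, not taken from the patent)."""

    def __init__(self, line_size=64):
        self.line_size = line_size
        self.lines = {}  # line-aligned flash address -> cached data

    def _line_addr(self, addr):
        return addr - (addr % self.line_size)

    def invalidate_range(self, start, length):
        """Drop every cache line overlapping [start, start + length)."""
        first = self._line_addr(start)
        last = self._line_addr(start + length - 1)
        for line in range(first, last + self.line_size, self.line_size):
            self.lines.pop(line, None)

    # Flash-side operations invalidate before they touch the flash, so
    # the cache never serves contents the flash no longer holds.
    def flash_erase(self, start, length):
        self.invalidate_range(start, length)
        # ... issue the actual erase command to the flash device ...

    def flash_write(self, start, data):
        self.invalidate_range(start, len(data))
        # ... program the flash part ...

    def flash_move(self, src, dst, length):
        # Both the vacated source and the overwritten destination go stale.
        self.invalidate_range(src, length)
        self.invalidate_range(dst, length)
        # ... perform the region move in flash ...
```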

92 citations

Journal ArticleDOI
TL;DR: This article studies the "filter effects" in multilevel Web proxy caching hierarchies and considers novel cache management techniques that can better exploit the changing workload characteristics across levels; trace-driven simulation results demonstrate that size-based partitioning and heterogeneous cache replacement policies each offer improvements in overall caching performance.
Abstract: This article studies the "filter effects" that occur in Web proxy caching hierarchies due to the presence of multiple levels of caches. That is, the presence of one level of cache changes the structural characteristics of the workload presented to the next level of cache, since only the requests that miss in one cache are forwarded to the next cache. Trace-driven simulations, with empirical and synthetic traces, are used to demonstrate the presence and magnitude of the filter effects in a multilevel Web proxy caching hierarchy. Experiments focus on the effects of cache size, cache replacement policy, Zipf slope, and the depth of the Web proxy caching hierarchy. Finally, the article considers novel cache management techniques that can better exploit the changing workload characteristics across a multilevel Web proxy caching hierarchy. Trace-driven simulations are used to evaluate the performance of these approaches. The simulation results demonstrate that size-based partitioning and heterogeneous cache replacement policies each offer improvements in overall caching performance. The sensitivity of the results to the degree of workload overlap among child-level proxy caches is also studied.
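
The "filter effect" is easy to reproduce in a few lines. The sketch below (a toy simulation with assumed parameters, not the paper's experimental setup) drives a two-level LRU hierarchy with a Zipf-like trace: the child cache absorbs the popular objects, so the parent sees a stream stripped of its temporal locality and achieves a far lower hit ratio despite its larger capacity.

```python
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, key):
        """Return True on a hit; on a miss, insert key, evicting the LRU entry."""
        if key in self.store:
            self.store.move_to_end(key)
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)
        self.store[key] = True
        return False

# Zipf-like workload: a few popular objects dominate, as in Web traces.
objects = list(range(10_000))
weights = [1 / (rank + 1) for rank in objects]  # Zipf slope ~1 (assumption)
trace = random.choices(objects, weights=weights, k=100_000)

child, parent = LRUCache(500), LRUCache(2000)
child_hits = parent_reqs = parent_hits = 0
for obj in trace:
    if child.access(obj):
        child_hits += 1
    else:
        parent_reqs += 1          # only child misses reach the parent
        parent_hits += parent.access(obj)

print(f"child hit ratio : {child_hits / len(trace):.2%}")
# The parent sees a 'filtered' stream stripped of the popular objects,
# so its hit ratio is much lower despite its larger capacity.
print(f"parent hit ratio: {parent_hits / parent_reqs:.2%}")
```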

92 citations

Proceedings ArticleDOI
08 Feb 2003
TL;DR: This paper first explores the simple case of two static miss costs using trace-driven simulations to understand when cost-sensitive replacements are effective, and proposes several extensions of LRU which account for nonuniform miss costs.
Abstract: Cache replacement algorithms originally developed in the context of simple uniprocessor systems aim to reduce the miss count. However, in modern systems, cache misses have different costs. The cost may be latency, penalty, power consumption, bandwidth consumption, or any other ad-hoc numerical property attached to a miss. In many practical situations, it is desirable to inject the cost of a miss into the replacement policy. In this paper, we propose several extensions of LRU which account for nonuniform miss costs. These LRU extensions have simple implementations, yet they are very effective in various situations. We first explore the simple case of two static miss costs using trace-driven simulations to understand when cost-sensitive replacements are effective. We show that very large improvements of the cost function are possible in many practical cases. As an example of their effectiveness, we apply the algorithms to the second-level cache of a multiprocessor with superscalar processors, using the miss latency as the cost function. By applying our simple replacement policies sensitive to the latency of misses we can improve the execution time of some parallel applications by up to 18%.
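
As a concrete illustration, here is one hedged variant of a cost-sensitive LRU extension in the spirit of the paper's two-static-costs case (the victim-selection rule below is a plausible simplification, not necessarily the authors' exact policy): on replacement, evict the least recently used block among those with the lowest miss cost, so blocks that are expensive to re-fetch are retained longer.

```python
from collections import OrderedDict

class CostSensitiveLRU:
    """Hedged sketch: LRU extended with a cost-based victim filter."""

    def __init__(self, capacity, miss_cost):
        self.capacity = capacity
        self.miss_cost = miss_cost   # block -> cost of re-fetching it
        self.store = OrderedDict()   # iteration order: LRU first

    def access(self, block):
        """Return the cost incurred by this access (0 on a hit)."""
        if block in self.store:
            self.store.move_to_end(block)
            return 0
        if len(self.store) >= self.capacity:
            # Evict the least recently used block among the cheapest ones,
            # so expensive-to-refetch blocks stay resident longer.
            cheapest = min(self.miss_cost(b) for b in self.store)
            victim = next(b for b in self.store
                          if self.miss_cost(b) == cheapest)
            del self.store[victim]
        self.store[block] = True
        return self.miss_cost(block)

# Two static miss costs, e.g. local vs. remote latency (assumed values).
CHEAP, EXPENSIVE = 1, 4
cache = CostSensitiveLRU(4, lambda b: EXPENSIVE if b % 3 == 0 else CHEAP)
trace = [0, 1, 2, 3, 4, 0, 5, 3, 6, 0]
print("aggregate miss cost:", sum(cache.access(b) for b in trace))
```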

92 citations

Journal ArticleDOI
01 May 1990
TL;DR: This paper explores the interactions between a cache's block size, fetch size, and fetch policy from the perspective of maximizing system-level performance, and finds that the most effective fetch strategy improved performance by between 1.7% and 4.5%.
Abstract: This paper explores the interactions between a cache's block size, fetch size and fetch policy from the perspective of maximizing system-level performance. It has been previously noted that given a simple fetch strategy the performance-optimal block size is almost always four or eight words [10]. If there is even a small cycle time penalty associated with either longer blocks or fetches, then the performance-optimal size is noticeably reduced. In split cache organizations, where the fetch and block sizes of instruction and data caches are all independent design variables, instruction cache block size and fetch size should be the same. For the workload and write-back write policy used in this trace-driven simulation study, the instruction cache block size should be about a factor of two greater than the data cache fetch size, which in turn should be equal to or double the data cache block size. The simplest fetch strategy of fetching only on a miss and stalling the CPU until the fetch is complete works well. Complicated fetch strategies do not produce the performance improvements indicated by the accompanying reductions in miss ratios because of limited memory resources and a strong temporal clustering of cache misses. For the environments simulated here, the most effective fetch strategy improved performance by between 1.7% and 4.5% over the simplest strategy described above.
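
The "fetch only on a miss, stall until complete" baseline and the block-size trade-off can be sketched in a few lines. The toy simulator below (a direct-mapped cache with an invented instruction-fetch trace; it models miss ratio only, not the cycle-time penalty the paper weighs against it) shows how larger blocks exploit spatial locality.

```python
def miss_ratio(trace, num_sets, block_size):
    """Direct-mapped cache, fetching exactly one block on each miss (the
    simplest strategy the paper recommends); trace is byte addresses."""
    tags = [None] * num_sets
    misses = 0
    for addr in trace:
        block = addr // block_size
        idx = block % num_sets
        if tags[idx] != block:
            tags[idx] = block       # stall-until-fill modeled as one miss
            misses += 1
    return misses / len(trace)

# Sequential instruction fetches reward bigger blocks; hypothetical trace.
trace = [pc for base in range(0, 4096, 512)
         for pc in range(base, base + 256, 4)]
for bs in (8, 16, 32, 64):
    print(f"block size {bs:3d}B -> miss ratio {miss_ratio(trace, 64, bs):.3f}")
```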

92 citations

Proceedings ArticleDOI
01 Dec 2007
TL;DR: This work proposes a novel replacement strategy that mimics the replacement decisions of OPT and can cover 40% of the gap between OPT and LRU for a 2MB cache resulting in 7% overall speedup.
Abstract: The inherent temporal locality in memory accesses is filtered out by the L1 cache. As a consequence, an L2 cache with LRU replacement incurs significantly higher misses than the optimal replacement policy (OPT). We propose to narrow this gap through a novel replacement strategy that mimics the replacement decisions of OPT. The L2 cache is logically divided into two components, a Shepherd Cache (SC) with a simple FIFO replacement and a Main Cache (MC) with an emulation of optimal replacement. The SC plays the dual role of caching lines and guiding the replacement decisions in MC. Our proposed organization can cover 40% of the gap between OPT and LRU for a 2MB cache resulting in 7% overall speedup. Comparison with the dynamic insertion policy, a victim buffer, a V-Way cache and an LRU based fully associative cache demonstrates that our scheme performs better than all these strategies.
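
A greatly simplified sketch of the two-component organization follows (assumptions throughout; this is not the paper's full OPT-emulation machinery). Misses first enter a small FIFO Shepherd Cache; when a line graduates to the Main Cache, the victim is the resident line whose first observed reuse came latest, or never, loosely imitating OPT's "evict the line needed furthest in the future".

```python
import math
from collections import deque

class ShepherdCacheSketch:
    """Simplified sketch of the Shepherd/Main Cache split (assumed
    parameters and a simplified victim rule, not the paper's exact
    mechanism)."""

    def __init__(self, sc_ways=4, mc_ways=12):
        self.sc_ways, self.mc_ways = sc_ways, mc_ways
        self.sc = deque()       # FIFO order of shepherd-resident lines
        self.mc = {}            # line -> time of first observed reuse
        self.time = 0
        self.hits = self.accesses = 0

    def access(self, line):
        self.time += 1
        self.accesses += 1
        if line in self.mc:
            self.hits += 1
            # Keep only the FIRST reuse time: an early first reuse
            # signals the line will be needed again soon ('imminent').
            self.mc[line] = min(self.mc[line], self.time)
        elif line in self.sc:
            self.hits += 1
        else:                   # miss: the new line enters the SC
            self.sc.append(line)
            if len(self.sc) > self.sc_ways:
                self._graduate(self.sc.popleft())

    def _graduate(self, line):
        if len(self.mc) >= self.mc_ways:
            # Evict the line reused latest, or never: the rough analogue
            # of OPT's 'needed furthest in the future'.
            victim = max(self.mc, key=self.mc.get)
            del self.mc[victim]
        self.mc[line] = math.inf    # reuse not yet observed in the MC

cache = ShepherdCacheSketch()
for line in [1, 2, 3, 1, 2, 4, 5, 6, 7, 1, 2, 3] * 50:
    cache.access(line)
print(f"hit ratio: {cache.hits / cache.accesses:.2%}")
```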

92 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Compiler: 26.3K papers, 578.5K citations (89% related)
Scalability: 50.9K papers, 931.6K citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Static routing: 25.7K papers, 576.7K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30