Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published on this topic, receiving 262,139 citations.


Papers
Patent
15 Nov 1994
TL;DR: In this paper, a cache subsystem for a computer system having a processor and a main memory is described, which includes a prefetch buffer coupled to the processor and the main memory.
Abstract: A cache subsystem for a computer system having a processor and a main memory is described. The cache subsystem includes a prefetch buffer coupled to the processor and the main memory. The prefetch buffer stores first data prefetched from the main memory in accordance with a predicted address for the processor's next memory fetch. The predicted address is based upon the address of the processor's last memory fetch. A main cache is coupled to the processor and the main memory. The main cache is not coupled to the prefetch buffer and does not receive data from it. The main cache stores second data fetched from the main memory in accordance with the address of the last memory fetch only if that address is unpredictable. The last fetch address is unpredictable if neither the prefetch buffer nor the main cache contains the address and its associated data.

87 citations
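
To make the dispatch policy concrete, here is a minimal Python sketch of what the abstract describes: prefetched lines live in a separate buffer, and the main cache is filled only on unpredictable accesses that miss in both structures. The next-sequential-line predictor and all class and field names are illustrative assumptions, not the patent's actual design.

```python
# Minimal sketch of the dispatch policy (illustrative names, assumed
# next-sequential-line predictor): prefetched lines live in a separate
# buffer, and the main cache is filled only on "unpredictable" accesses
# that miss in both structures.

LINE_SIZE = 64

class CacheSubsystem:
    def __init__(self):
        self.prefetch_buffer = {}   # predicted line address -> data
        self.main_cache = {}        # demand-fetched line address -> data

    def _predict_next(self, addr):
        # Assumed policy: predict the line sequentially after this fetch,
        # i.e. the prediction is based on the last fetch address.
        return (addr // LINE_SIZE + 1) * LINE_SIZE

    def fetch(self, addr, memory):
        line = (addr // LINE_SIZE) * LINE_SIZE
        if line in self.prefetch_buffer:       # prediction was right
            data = self.prefetch_buffer[line]
        elif line in self.main_cache:          # ordinary cache hit
            data = self.main_cache[line]
        else:
            # Unpredictable address: neither structure holds it, so the
            # demand-fetched line goes into the main cache (never the buffer).
            data = memory[line]
            self.main_cache[line] = data
        # Prefetch the predicted next line into the separate buffer only;
        # the main cache never receives data from the prefetch buffer.
        nxt = self._predict_next(addr)
        self.prefetch_buffer = {nxt: memory[nxt]} if nxt in memory else {}
        return data
```

The property mirrored here is the strict separation the patent emphasizes: the main cache never receives data from the prefetch buffer, so mispredicted prefetches cannot pollute it.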

Proceedings ArticleDOI
17 Aug 1999
TL;DR: This work proposes, implements, and evaluates a series of run-time techniques for dynamic analysis of program instruction access behavior, which are then used to proactively guide accesses to the L0-Cache, an additional mini cache located between the instruction cache (I-Cache) and the CPU core.

Abstract: In this paper, we propose a technique that uses an additional mini cache, the L0-Cache, located between the instruction cache (I-Cache) and the CPU core. This mechanism can provide the instruction stream to the data path and, when managed properly, it can effectively eliminate the need for high utilization of the more expensive I-Cache. In this work, we propose, implement, and evaluate a series of run-time techniques for dynamic analysis of program instruction access behavior, which are then used to proactively guide accesses to the L0-Cache. The basic idea is that only the most frequently executed portions of the code should be stored in the L0-Cache, since this is where the program spends most of its time. We present experimental results evaluating the effectiveness of our scheme in terms of performance and energy dissipation for a series of SPEC95 benchmarks. We also discuss the performance and energy tradeoffs involved in these dynamic schemes.

87 citations
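
A hedged Python sketch of the core idea may help: count how often each basic block executes and admit only hot blocks into the small L0-Cache, letting cold blocks bypass to the I-Cache. The counter table, hotness threshold, and FIFO eviction below are illustrative assumptions; the paper itself evaluates several run-time selection heuristics.

```python
# Sketch, under assumed parameters: a tiny L0-Cache in front of the I-Cache
# that admits only frequently executed basic blocks.
from collections import OrderedDict

L0_CAPACITY = 8          # blocks the L0-Cache can hold (assumed)
HOT_THRESHOLD = 16       # executions before a block counts as hot (assumed)

exec_count = {}          # basic-block address -> dynamic execution count
l0_cache = OrderedDict() # tiny filter cache in front of the I-Cache

def fetch_block(block_addr, icache_fetch):
    """Return the instructions for a basic block, preferring the L0-Cache."""
    exec_count[block_addr] = exec_count.get(block_addr, 0) + 1
    if block_addr in l0_cache:
        return l0_cache[block_addr]            # cheap L0 hit
    instrs = icache_fetch(block_addr)          # fall back to the I-Cache
    if exec_count[block_addr] >= HOT_THRESHOLD:
        if len(l0_cache) >= L0_CAPACITY:
            l0_cache.popitem(last=False)       # evict the oldest block (FIFO)
        l0_cache[block_addr] = instrs          # admit the hot block
    return instrs
```

The admission threshold is what keeps rarely executed code from polluting the small cache: cold blocks are served from the I-Cache without ever displacing hot ones.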

Patent
Igal Megory-Cohen
01 Nov 1991
TL;DR: In this paper, a modified steepest descent method is proposed to handle unpredictable local cache activity prior to cache repartitioning, avoiding readjustments that would result in unacceptably small or negative cache sizes in cases where a local cache is extremely underutilized.

Abstract: Dynamic partitioning of cache storage into a plurality of local caches for respective classes of competing processes is performed by dynamically determining adjustments to the cache partitioning using a steepest descent method. A modified steepest descent method allows unpredictable local cache activity prior to repartitioning to be taken into account, avoiding readjustments that would result in unacceptably small or, even worse, negative cache sizes in cases where a local cache is extremely underutilized. The method presupposes a unimodal distribution of cache misses.

87 citations
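
To make the safeguard concrete, here is an illustrative Python sketch of one gradient step with the clamping the abstract describes; the miss-rate gradient estimates, step size, and size floor are assumed inputs, not the patent's exact formulation.

```python
# One "modified steepest descent" repartitioning step, sketched under
# assumptions: steps are clamped so no local cache can shrink below a floor
# (let alone go negative), then sizes are renormalized to the fixed budget.

def repartition(sizes, miss_gradient, total, step=0.1, floor=1.0):
    """Adjust local cache sizes one steepest-descent step.

    sizes:         current partition sizes, summing to `total`
    miss_gradient: estimated d(misses)/d(size) per partition
    """
    # Move each partition against its miss gradient: partitions whose extra
    # space would reduce misses the most grow; underutilized ones shrink.
    proposed = [s - step * g for s, g in zip(sizes, miss_gradient)]
    # Safeguard: never let a partition become unacceptably small or negative.
    clamped = [max(p, floor) for p in proposed]
    # Renormalize so the partitions still sum to the fixed total budget.
    scale = (total - floor * len(clamped)) / max(
        sum(c - floor for c in clamped), 1e-9)
    return [floor + (c - floor) * scale for c in clamped]

# Example: three process classes sharing 100 units of cache storage.
print(repartition([40.0, 40.0, 20.0], [-2.0, 0.5, 8.0], total=100.0))
```

The clamp-then-renormalize order is the point: a severely underutilized cache is pinned at the floor rather than driven negative, and the freed space is redistributed to the partitions whose gradients say they can use it.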

Patent
03 Nov 1995
TL;DR: In this paper, a doubly linked list is used to track the most recently used channels: as cached channel information is accessed, the corresponding entry is moved to the top of the list, and the bottom pointer points to the channel data to be removed from the cache.

Abstract: An on-chip cache memory is used to provide a high-speed access mechanism to frequently used channel state information for operation of a DMA device that supports multiple virtual channels in a high-speed network interface. When an access to a particular channel state is performed, e.g., by a host processor or the DMA device, the cache is accessed first; if the state information is not currently in the cache, external memory is read and the state information is written to the cache. Because the cache does not store all the states held in external memory, replacement algorithms are utilized to determine which channel state information to remove from the cache in order to make room for a recently accessed channel. A doubly linked list is used to track the most recently used channel. As cached channel information is accessed, the corresponding entry is moved to the top of the list. The doubly linked list provides a rapid apparatus and method for updating pointers to the cache. Top and bottom pointers are maintained, pointing to the most recently used and least recently used channels. When a channel is used, it is moved to the top of the list. When channel data is moved from external memory into the cache, the bottom pointer points to the channel data to be removed.

87 citations
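
The list mechanics here are the classic doubly-linked-list LRU. A minimal Python sketch, with illustrative class and field names, follows.

```python
# Sketch of the described mechanism: channel state entries sit in a doubly
# linked list with top (MRU) and bottom (LRU) pointers; an access moves the
# entry to the top, and the bottom entry is the replacement victim when a
# channel must be brought in from external memory. Names are illustrative.

class Node:
    __slots__ = ("channel", "state", "prev", "next")
    def __init__(self, channel, state):
        self.channel, self.state = channel, state
        self.prev = self.next = None

class ChannelStateCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.nodes = {}                 # channel id -> Node, O(1) lookup
        self.top = self.bottom = None   # MRU and LRU pointers

    def _unlink(self, n):
        if n.prev: n.prev.next = n.next
        else:      self.top = n.next
        if n.next: n.next.prev = n.prev
        else:      self.bottom = n.prev
        n.prev = n.next = None

    def _push_top(self, n):
        n.next, n.prev = self.top, None
        if self.top: self.top.prev = n
        self.top = n
        if self.bottom is None: self.bottom = n

    def access(self, channel, external_memory):
        n = self.nodes.get(channel)
        if n:                      # cache hit: move entry to top of the list
            self._unlink(n)
            self._push_top(n)
            return n.state
        if len(self.nodes) >= self.capacity:
            victim = self.bottom   # bottom pointer names the LRU channel
            self._unlink(victim)
            del self.nodes[victim.channel]
        n = Node(channel, external_memory[channel])  # fill from ext. memory
        self.nodes[channel] = n
        self._push_top(n)
        return n.state
```

Pairing the dictionary with the linked list is what makes both the lookup and the pointer updates constant-time, which is the "rapid apparatus" the patent claims.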

Patent
30 Sep 1994
TL;DR: In this paper, a data cache unit is employed within a microprocessor capable of speculative and out-of-order processing of memory instructions; each microprocessor in the multiprocessor system can snoop the cache lines of the other microprocessors' data cache units.

Abstract: The data cache unit includes a separate fill buffer and a separate write-back buffer. The fill buffer stores one or more cache lines for transfer into the data cache banks of the data cache unit. The write-back buffer stores a single cache line evicted from the data cache banks prior to write-back to main memory. Circuitry is provided for transferring a cache line from the fill buffer into the data cache banks while simultaneously transferring a victim cache line from the data cache banks into the write-back buffer. This allows the overall replace operation to be performed in a single clock cycle. In a particular implementation, the data cache unit is employed within a microprocessor capable of speculative and out-of-order processing of memory instructions. Moreover, the microprocessor is incorporated within a multiprocessor computer system wherein each microprocessor is capable of snooping the cache lines of the other microprocessors' data cache units. The data cache unit is also a non-blocking cache.

87 citations
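
A short Python sketch can illustrate the single-cycle replace: the fill-buffer line is installed into a cache bank in the same step that the victim leaves the bank for the write-back buffer. Since this models hardware in software, the "one clock cycle" is represented by performing both transfers in a single assignment; all names are illustrative assumptions.

```python
# Sketch of the simultaneous fill/write-back transfer described above. The
# tuple assignment mirrors the hardware moving fill -> bank and bank ->
# write-back buffer in one cycle; structure and names are assumed.

class DataCacheUnit:
    def __init__(self, num_sets):
        self.banks = [None] * num_sets   # one line per set, for simplicity
        self.fill_buffer = None          # (set_index, line) awaiting install
        self.write_back_buffer = None    # single evicted line awaiting memory

    def replace(self):
        """Install the fill-buffer line and capture the victim in one step."""
        assert self.fill_buffer is not None
        assert self.write_back_buffer is None, "write-back still draining"
        idx, line = self.fill_buffer
        # Both transfers happen "simultaneously": the right-hand side is
        # evaluated before assignment, so the victim is captured as the new
        # line is installed.
        self.banks[idx], self.write_back_buffer = line, self.banks[idx]
        self.fill_buffer = None

    def drain_write_back(self, memory_write):
        # Later, off the critical path, the victim drains to main memory.
        if self.write_back_buffer is not None:
            memory_write(self.write_back_buffer)
            self.write_back_buffer = None
```

Keeping the write-back buffer separate is what removes the eviction from the replace's critical path: the victim can drain to memory later without blocking the fill.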


Network Information

Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Compiler: 26.3K papers, 578.5K citations (89% related)
Scalability: 50.9K papers, 931.6K citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Static routing: 25.7K papers, 576.7K citations (84% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30