Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published on this topic, receiving 262,139 citations.


Papers
Patent
19 Dec 1979
TL;DR: In this article, the index is a set associative memory and bits provided to an address input of the index are selectively inhibited by an address inhibit circuit when the size of the data blocks in the data buffer is to be varied.
Abstract: A cache memory has a data buffer for storing blocks of data from a main memory and an index for storing main memory addresses associated with the data blocks in the data buffer. The size of the blocks of data stored in the data buffer can be varied in order to increase the "hit ratio" of the cache memory. The index is a set associative memory and bits provided to an address input of the index are selectively inhibited by an address inhibit circuit when the size of the data blocks in the data buffer is to be varied. A block size register stores block size information that is provided to the address inhibit circuit. The block size information is also provided to a fetch generate counter and a fetch return counter that control the number of words transferred as a block from the main memory to the cache memory.

75 citations
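
To make the variable-block-size trick concrete, here is a minimal C sketch of the idea (the field widths, the block_scale parameter, and all function names are illustrative assumptions, not taken from the patent): masking, or "inhibiting", the low-order index bits turns them into extra offset bits, so adjacent base blocks share one index entry, and a single block-size register can drive both the inhibit mask and the fetch word count.

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 4          /* 16-byte base block (illustrative)  */
    #define INDEX_BITS  8          /* 256 index entries (illustrative)   */
    #define BASE_WORDS  4          /* words fetched per base block       */

    /* block_scale: 0 = base block size, 1 = doubled, 2 = quadrupled ... */
    static uint32_t cache_index(uint32_t addr, unsigned block_scale) {
        uint32_t index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1u);
        /* The address-inhibit circuit: force the low index bits to zero
         * so they act as extra offset bits inside a larger block.       */
        return index & ~((1u << block_scale) - 1u);
    }

    /* The fetch counters transfer proportionally more words per block.  */
    static unsigned fetch_words(unsigned block_scale) {
        return BASE_WORDS << block_scale;
    }

    int main(void) {
        uint32_t addr = 0x12375;
        for (unsigned s = 0; s <= 2; s++)
            printf("scale %u: index 0x%02x, fetch %u words\n",
                   s, (unsigned)cache_index(addr, s), fetch_words(s));
        return 0;
    }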

Patent
15 May 2002
TL;DR: In this paper, the authors present a computer system and corresponding method for supporting a compressed main memory, using a compression translation table entry register in signal communication with the processor cache and the memory controller.
Abstract: A computer system and corresponding method for supporting a compressed main memory includes a processor, a processor cache in signal communication with the processor, a memory controller in signal communication with the processor cache, a compression translation table entry register in signal communication with the processor cache and the memory controller, a compression translation table directory in signal communication with the compression translation table entry register, and a compressed main memory in signal communication with the memory controller, wherein the memory controller manages the compressed main memory by storing entries of the compression translation table directory into the processor cache from the compression translation table entry register. The corresponding method includes receiving a real address for a processor cache miss and finding a compression translation table address for the cache miss within the processor cache. If the cache miss is a cache write miss: decompressing the memory line corresponding to the cache line being written, writing the content of the cache line into the appropriate position in the memory line, compressing the data contained in said memory line, and storing the compressed data into the compressed main memory. If the cache miss is a cache read miss: retrieving the compressed data corresponding to the compression translation table address from the compressed main memory and decompressing the retrieved data.

75 citations
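
The method clause above is essentially a pair of miss handlers. The C sketch below walks through both paths under heavy simplifications of our own: an identity memcpy stands in for the compressor, and a trivial direct-mapped table stands in for the compression translation table directory; none of the names come from the patent.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define LINE_BYTES 64
    #define LINES      16

    /* Hypothetical compression translation table (CTT) entry: where the
     * compressed line lives and how big it currently is.                */
    typedef struct { uint32_t slot; uint32_t comp_size; } ctt_entry_t;

    static uint8_t     comp_mem[LINES][LINE_BYTES]; /* compressed memory */
    static ctt_entry_t ctt[LINES];                  /* CTT directory     */

    static ctt_entry_t *ctt_lookup(uint32_t real_addr) {
        uint32_t i = (real_addr / LINE_BYTES) % LINES;
        ctt[i].slot = i;               /* trivial placement policy       */
        return &ctt[i];
    }

    /* Identity "compression" keeps the sketch runnable; real hardware
     * would shrink the line here.                                       */
    static uint32_t compress(const uint8_t *in, uint8_t *out) {
        memcpy(out, in, LINE_BYTES);
        return LINE_BYTES;
    }
    static void decompress(const ctt_entry_t *e, uint8_t *out) {
        memcpy(out, comp_mem[e->slot], LINE_BYTES);
    }

    /* Cache read miss: fetch the compressed line and decompress it.     */
    static void handle_read_miss(uint32_t real_addr,
                                 uint8_t line[LINE_BYTES]) {
        decompress(ctt_lookup(real_addr), line);
    }

    /* Cache write miss: decompress, merge the written cache line into
     * the memory line, recompress, and store back.                      */
    static void handle_write_miss(uint32_t real_addr, const uint8_t *data,
                                  uint32_t off, uint32_t len) {
        uint8_t line[LINE_BYTES], packed[LINE_BYTES];
        ctt_entry_t *e = ctt_lookup(real_addr);
        decompress(e, line);
        memcpy(line + off, data, len);
        e->comp_size = compress(line, packed);
        memcpy(comp_mem[e->slot], packed, e->comp_size);
    }

    int main(void) {
        uint8_t line[LINE_BYTES] = {0};
        handle_write_miss(0x100, (const uint8_t *)"hello", 0, 5);
        handle_read_miss(0x100, line);
        printf("%.5s\n", (const char *)line);   /* prints "hello"        */
        return 0;
    }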

Patent
07 Mar 1997
TL;DR: In this article, a two-way cache memory with multiplexed outputs and alternating ways is described, where the multiplexed outputs allow the cache memory to be more densely packed and implemented with fewer sense amplifiers.
Abstract: A two-way cache memory having multiplexed outputs and alternating ways is disclosed. Multiplexed outputs enable the cache memory to be more densely packed and implemented with fewer sense amplifiers. Alternating ways enable two distinct cache access patterns. According to a first access pattern, two doublewords in the same way may be accessed simultaneously. Such access facilitates the loading of data into main memory. According to a second access pattern, two doublewords in the same location but in different ways may be accessed simultaneously. Such access facilitates the loading of a particular word into a register file.

75 citations
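
The two access patterns are easiest to see as two read functions over the same pair of way arrays. A small illustrative model in C (the array layout and names are ours, not the patent's):

    #include <stdint.h>
    #include <stdio.h>

    #define SETS 8
    static uint64_t way[2][SETS];  /* two ways, one doubleword per entry */

    /* Pattern 1: two doublewords from the SAME way at adjacent
     * locations, useful when streaming a block out to main memory.     */
    static void read_same_way(int w, int loc, uint64_t out[2]) {
        out[0] = way[w][loc];
        out[1] = way[w][loc + 1];
    }

    /* Pattern 2: the SAME location in BOTH ways; the tag comparison
     * then steers one of them through the output multiplexer to the
     * register file.                                                   */
    static uint64_t read_both_ways(int loc, int hit_way) {
        uint64_t d[2] = { way[0][loc], way[1][loc] };
        return d[hit_way];         /* the output multiplexer            */
    }

    int main(void) {
        uint64_t pair[2];
        way[1][3] = 0xCAFE; way[1][4] = 0xBEEF;
        read_same_way(1, 3, pair);
        printf("%llx %llx %llx\n",
               (unsigned long long)pair[0], (unsigned long long)pair[1],
               (unsigned long long)read_both_ways(3, 1));
        return 0;
    }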

Proceedings ArticleDOI
11 Sep 2010
TL;DR: Increasing cache efficiency can improve performance by reducing miss rate, or alternately, improve power and energy by allowing a smaller cache with the same miss rate.
Abstract: Caches mitigate the long memory latency that limits the performance of modern processors. However, caches can be quite inefficient. On average, a cache block in a 2MB L2 cache is dead 59% of the time, i.e., it will not be referenced again before it is evicted. Increasing cache efficiency can improve performance by reducing miss rate, or alternately, improve power and energy by allowing a smaller cache with the same miss rate.

This paper proposes using predicted dead blocks to hold blocks evicted from other sets. When these evicted blocks are referenced again, the access can be satisfied from the other set, avoiding a costly access to main memory. The pool of predicted dead blocks can be thought of as a virtual victim cache. For a set of memory-intensive single-threaded workloads, a virtual victim cache in a 16-way set associative 2MB L2 cache reduces misses by 26%, yields a geometric mean speedup of 12.1% and improves cache efficiency by 27% on average, where cache efficiency is defined as the average time during which cache blocks contain live information. This virtual victim cache yields a lower average miss rate than a fully-associative LRU cache of the same capacity. For a set of multi-core workloads, the virtual victim cache improves throughput performance by 4% over LRU while improving cache efficiency by 62%.

Alternately, a 1.7MB virtual victim cache achieves about the same performance as a larger 2MB L2 cache, reducing the number of SRAM cells required by 16%, thus maintaining performance while reducing power and area.

75 citations
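
Under drastic simplifications of our own (one frame per set, a single partner set chosen by flipping the low index bit, and a stub predictor that declares every resident block dead), the virtual victim cache reduces to two steps: park the victim in a free or predicted-dead frame of a partner set, and probe that partner set on a miss before paying for a main-memory access. A C sketch, not the paper's implementation:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SETS 256

    typedef struct { uint32_t tag; bool valid, dead; } frame_t;
    static frame_t cache[SETS];

    /* Stub predictor: the paper uses a learned dead-block predictor;
     * here every resident block is assumed dead.                       */
    static bool predicted_dead(const frame_t *f) { return f->valid; }

    static uint32_t partner(uint32_t set) { return set ^ 1u; }

    /* Eviction: instead of dropping the victim, park it in a free or
     * predicted-dead frame of the partner set (the "virtual victim
     * cache").                                                         */
    static void evict(uint32_t set, uint32_t victim_tag) {
        frame_t *p = &cache[partner(set)];
        if (!p->valid || predicted_dead(p)) {
            p->tag = victim_tag; p->valid = true; p->dead = true;
        }
    }

    /* Lookup: on a miss in the home set, probe the partner set before
     * going to main memory.                                            */
    static bool lookup(uint32_t set, uint32_t tag) {
        if (cache[set].valid && cache[set].tag == tag) return true;
        frame_t *p = &cache[partner(set)];
        return p->valid && p->dead && p->tag == tag;  /* victim hit     */
    }

    int main(void) {
        evict(4, 0xAB);                   /* victim parked in set 5     */
        printf("victim hit: %d\n", lookup(4, 0xAB));
        return 0;
    }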

Proceedings ArticleDOI
06 Aug 2001
TL;DR: In this paper, a new L1 data cache structure that combines a Specialized Stack Cache (SSC) and a Pseudo Set-Associative Cache (PSAC) is proposed.
Abstract: The L1 data cache is a time-critical module and, at the same time, a major consumer of energy. To reduce its energy-delay product, we apply two principles of low-power design: specialize part of the cache structure and break the cache down into smaller caches. To this end, we propose a new L1 data cache structure that combines a Specialized Stack Cache (SSC) and a Pseudo Set-Associative Cache (PSAC). Individually, our SSC and PSAC designs have a lower energy-delay product than previously-proposed related designs. In addition, their combined operation is very effective. Relative to a conventional 2-way 32 KB data cache, a design containing a 4-way 32 KB PSAC and a 512 B SSC reduces the energy-delay product of several applications by an average of 44%.

75 citations
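
The combination is, at heart, a routing decision plus a serialized probe order. The hedged C sketch below shows one plausible organization (the stack-detection test, the probe order, and all names are guesses for illustration, not the paper's circuits): stack-pointer-relative accesses go to the small SSC, and everything else goes to the PSAC, which touches its second way only when the first probe misses. That serialization is what saves energy relative to reading both ways of a conventional set-associative cache in parallel.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SSC_BYTES  512         /* small specialized stack cache     */
    #define PSAC_SETS  256

    typedef struct { uint32_t tag; bool valid; } line_t;
    static line_t psac[2][PSAC_SETS];     /* 2 ways shown for brevity   */

    /* Route stack-pointer-relative accesses to the SSC. Real stacks
     * grow downward; the upward window here is a simplification.       */
    static bool is_stack_access(uint32_t addr, uint32_t sp) {
        return addr >= sp && addr < sp + SSC_BYTES;
    }

    /* PSAC probe: try the predicted way first; only wake the other way
     * on a first-probe miss, so most hits pay direct-mapped energy.    */
    static bool psac_lookup(uint32_t set, uint32_t tag,
                            int predicted_way) {
        if (psac[predicted_way][set].valid &&
            psac[predicted_way][set].tag == tag)
            return true;                  /* fast first probe           */
        int other = predicted_way ^ 1;
        return psac[other][set].valid && psac[other][set].tag == tag;
    }

    static bool l1_lookup(uint32_t addr, uint32_t sp) {
        if (is_stack_access(addr, sp))
            return true;     /* SSC hit path, elided in this sketch     */
        uint32_t set = (addr >> 6) % PSAC_SETS, tag = addr >> 14;
        return psac_lookup(set, tag, 0);
    }

    int main(void) {
        printf("stack access: %d\n", l1_lookup(0x7F000010, 0x7F000000));
        printf("psac miss:    %d\n", l1_lookup(0x1000, 0x7F000000));
        return 0;
    }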


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Compiler: 26.3K papers, 578.5K citations (89% related)
Scalability: 50.9K papers, 931.6K citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Static routing: 25.7K papers, 576.7K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30