scispace - formally typeset
Topic

Cache pollution

About: Cache pollution is a research topic. Over the lifetime, 11353 publications have been published within this topic receiving 262139 citations.


Papers
Patent
27 Jan 2004
TL;DR: In this paper, a multimedia apparatus comprises a cache buffer configured to be coupled to a storage device, wherein the cache buffer stores multimedia data, including video and audio data, read from the storage device.
Abstract: Systems and methods are provided for caching media data to thereby enhance media data read and/or write functionality and performance. A multimedia apparatus comprises a cache buffer configured to be coupled to a storage device, wherein the cache buffer stores multimedia data, including video and audio data, read from the storage device. A cache manager is coupled to the cache buffer and is configured to cause the storage device to enter a reduced power consumption mode when the amount of data stored in the cache buffer reaches a first level.
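The threshold-driven power-down behavior described in the claim can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; all names (`StorageDevice`, `MediaCacheBuffer`, `fill_threshold`) are invented for the example.

```python
class StorageDevice:
    """Stand-in for a disk that can enter a low-power state."""
    def __init__(self):
        self.low_power = False

    def read_block(self, n):
        self.low_power = False          # spins up to serve the read
        return bytes(n)

    def enter_low_power(self):
        self.low_power = True


class MediaCacheBuffer:
    def __init__(self, device, fill_threshold):
        self.device = device
        self.fill_threshold = fill_threshold  # the "first level" in the claim
        self.buffer = bytearray()

    def prefetch(self, block_size=4096):
        """Read ahead from the device until the threshold is reached."""
        while len(self.buffer) < self.fill_threshold:
            self.buffer += self.device.read_block(block_size)
        # Enough media data is now cached: the device can power down
        # while playback is served from the buffer.
        self.device.enter_low_power()


buf = MediaCacheBuffer(StorageDevice(), fill_threshold=64 * 1024)
buf.prefetch()
```

The key idea is that a full-enough buffer decouples playback from the storage device, so the device can stay in the reduced power mode until the buffer drains.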

64 citations

Proceedings ArticleDOI
02 Oct 2005
TL;DR: This paper proposes counter-based L2 cache replacement with two new replacement algorithms, the access interval predictor (AIP) and the live-time predictor (LvP), which speed up 10 of 21 SPEC2000 benchmarks by up to 40% (11% on average).
Abstract: Recent studies have shown that in highly associative caches, the performance gap between the least recently used (LRU) and the theoretical optimal replacement algorithms is large, suggesting that alternative replacement algorithms can improve the performance of the cache. One main reason for this performance gap is that in the LRU replacement algorithm, a line is only evicted after it becomes the LRU line, long after its last access/touch, while unnecessarily occupying the cache space for a long time. This paper proposes a new approach to deal with the problem: counter-based L2 cache replacement. In this approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest, such as a cache access to the same set, occurs. When the counter exceeds a threshold, the line "expires", and becomes evictable. When expired lines are evicted early from the cache, they make extra space for lines that may be more useful, reducing the number of capacity and conflict misses. Each line's threshold is unique and is dynamically learned and stored in a small 40-Kbyte counter prediction table. We propose two new replacement algorithms: access interval predictor (AIP) and live-time predictor (LvP). AIP and LvP speed up 10 (out of 21) SPEC2000 benchmarks by up to 40% and 11% on average.
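The expiration mechanism above can be sketched for a single cache set. This is a simplified illustration, assuming a fixed per-line threshold rather than the paper's dynamically learned counter prediction table; the class and method names are invented for the example.

```python
from collections import OrderedDict

class CounterSet:
    """One set of a set-associative cache with counter-based expiry.

    Each line carries an event counter incremented on every access to
    the set that touches a different line. A line whose counter exceeds
    the threshold "expires" and becomes the preferred eviction victim;
    otherwise eviction falls back to LRU.
    """
    def __init__(self, ways=4, threshold=8):
        self.ways = ways
        self.threshold = threshold
        self.lines = OrderedDict()          # tag -> event counter, LRU order

    def access(self, tag):
        if tag in self.lines:
            self.lines[tag] = 0             # counter resets on a touch
            self.lines.move_to_end(tag)     # mark most recently used
        else:
            if len(self.lines) >= self.ways:
                self._evict()
            self.lines[tag] = 0
        # Every set access is an "event of interest" for the other lines.
        for other in self.lines:
            if other != tag:
                self.lines[other] += 1

    def _evict(self):
        # Prefer an expired line; this frees space earlier than pure LRU.
        for tag, count in self.lines.items():
            if count > self.threshold:
                del self.lines[tag]
                return
        self.lines.popitem(last=False)      # no expired line: evict LRU
```

Compared with LRU alone, a dead line no longer has to age all the way to the LRU position before it can be replaced.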

64 citations

Patent
Osamu Nishii1, Kunio Uchiyama1, Hirokazu Aoki1, Kanji Oishi1, Jun Kitano1, Susumu Hatano1 
24 Sep 1992
TL;DR: In this article, a multiprocessor system consisting of first and second processors (1001 and 1002), first and second cache memories (100:#1 and #2), an address bus (123), a data bus (126), an invalidating signal line (PURGE:131), and a main memory (1004) is described.
Abstract: Herein disclosed is a multiprocessor system which comprises first and second processors (1001 and 1002), first and second cache memories (100:#1 and #2), an address bus (123), a data bus (126), an invalidating signal line (PURGE:131) and a main memory (1004). The first and second cache memories are operated by the copy-back method. The state of the data of the first cache (100:#1) is one state selected from a group consisting of an invalid first state, a valid and non-updated second state, and a valid and updated third state. The second cache (100:#2) is constructed like the first cache. When a write access of the first processor hits the first cache, the state of the data of the first cache is shifted from the second state to the third state, and the first cache outputs the address of the write hit and the invalidating signal to the address bus and the invalidating signal line, respectively. When a write access from the first processor misses the first cache, a block of data is block-transferred from the main memory to the first cache, and the invalidating signal is output. After this, the first cache writes the data into the transferred block. If either the first or the second cache holds data in the third state for an address fed to the address bus (123) by an access request, that cache writes the data back to the main memory.
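The three line states and the write-hit/write-miss transitions can be modeled as a toy two-cache system. This is an illustrative sketch, not the patent's circuitry: the broadcast of the write address plus the PURGE signal is modeled as a direct call that invalidates the matching line in the peer cache, and all names are invented.

```python
# The patent's three states: invalid (first), valid and non-updated
# (second), and valid and updated (third).
INVALID, CLEAN, DIRTY = "invalid", "clean", "dirty"

class Cache:
    def __init__(self):
        self.state = {}        # address -> line state
        self.peer = None       # the other processor's cache

    def write(self, addr):
        if self.state.get(addr, INVALID) != INVALID:
            # Write hit: second state -> third state, and broadcast
            # the address with the invalidating (PURGE) signal.
            self.state[addr] = DIRTY
            self.peer.purge(addr)
        else:
            # Write miss: block-transfer from main memory, output the
            # invalidating signal, then write into the transferred block.
            self.state[addr] = CLEAN
            self.peer.purge(addr)
            self.state[addr] = DIRTY

    def purge(self, addr):
        """React to the PURGE signal: invalidate the matching line."""
        if addr in self.state:
            self.state[addr] = INVALID


a, b = Cache(), Cache()
a.peer, b.peer = b, a
a.state[0x100] = CLEAN     # both caches start with the line clean
b.state[0x100] = CLEAN
a.write(0x100)             # write hit in the first cache
```

After the write hit, the first cache holds the line dirty while the PURGE signal has invalidated the second cache's copy, which is the coherence invariant the copy-back protocol maintains.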

64 citations

Proceedings Article
26 Jun 2013
TL;DR: It is found that the chief benefit of the flash cache is its size, not its persistence, and that for some workloads a large flash cache allows using minuscule amounts of RAM for file caching, leaving more memory available for application use.
Abstract: Flash memory has recently become popular as a caching medium. Most uses to date are on the storage server side. We investigate a different structure: flash as a cache on the client side of a networked storage environment. We use trace-driven simulation to explore the design space. We consider a wide range of configurations and policies to determine the potential client-side caches might offer and how best to arrange them. Our results show that the flash cache writeback policy does not significantly affect performance. Write-through is sufficient; this greatly simplifies cache consistency handling. We also find that the chief benefit of the flash cache is its size, not its persistence. Cache persistence offers additional performance benefits at system restart at essentially no runtime cost. Finally, for some workloads a large flash cache allows using minuscule amounts of RAM for file caching (e.g., 256 KB), leaving more memory available for application use.
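The write-through policy the study recommends can be sketched as follows. This is an assumed minimal model, not the paper's simulator: the server is a plain dictionary standing in for networked storage, and all names are invented.

```python
class FlashCache:
    """Client-side flash cache with a write-through policy.

    Every write goes to the server immediately, so the flash copy never
    holds the only up-to-date data; losing or discarding the cache can
    never lose writes, which keeps consistency handling simple.
    """
    def __init__(self):
        self.flash = {}     # block id -> data cached on local flash
        self.server = {}    # stand-in for the networked storage server

    def write(self, block, data):
        self.flash[block] = data    # cache the new data locally ...
        self.server[block] = data   # ... and write it through right away

    def read(self, block):
        if block in self.flash:     # flash hit: no network round trip
            return self.flash[block]
        data = self.server[block]   # miss: fetch from the server
        self.flash[block] = data    # and populate the flash cache
        return data


c = FlashCache()
c.write(7, b"media")
```

A write-back variant would defer the `self.server[block] = data` step, which the paper found buys little performance while complicating consistency.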

64 citations

Patent
10 Nov 2005
TL;DR: In this paper, a multi-core processor providing heterogeneous processor cores and a shared cache is presented.
Abstract: A multi-core processor providing heterogeneous processor cores and a shared cache is presented.

64 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
93% related
Compiler
26.3K papers, 578.5K citations
89% related
Scalability
50.9K papers, 931.6K citations
87% related
Server
79.5K papers, 1.4M citations
86% related
Static routing
25.7K papers, 576.7K citations
84% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30