
Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
14 Apr 2003
TL;DR: In this patent, an operational parameter characteristic of a storage device is received and a read cache pre-fetch depth is adapted based in part on that parameter; an exemplary device includes an adaptation module that generates the operational parameter and varies the pre-fetch depth in response.
Abstract: Exemplary systems, methods, and devices employ receiving an operational parameter characteristic of a storage device, and adapting a read cache pre-fetch depth based in part on the operational parameter. An exemplary device includes a read cache memory and a read cache pre-fetch adaptation module operable to generate an operational parameter and vary read cache pre-fetch depth in response to the operational parameter.

74 citations
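
To make the adaptation loop concrete, here is a minimal sketch in C, assuming one hypothetical operational parameter (the fraction of recent reads that were sequential) and a doubling/halving policy; the patent does not prescribe these specifics.

    /* Hypothetical adaptation rule: deepen the pre-fetch when the
     * workload looks sequential, back off when it looks random.
     * The parameter choice and the doubling/halving policy are
     * illustrative assumptions, not the patent's actual mechanism. */
    #include <stdio.h>

    #define MIN_DEPTH 1
    #define MAX_DEPTH 64

    static int adapt_prefetch_depth(int depth, double sequential_ratio)
    {
        if (sequential_ratio > 0.75 && depth < MAX_DEPTH)
            return depth * 2;    /* mostly sequential: fetch deeper */
        if (sequential_ratio < 0.25 && depth > MIN_DEPTH)
            return depth / 2;    /* mostly random: shrink the depth */
        return depth;            /* otherwise leave it unchanged */
    }

    int main(void)
    {
        int depth = 8;
        /* Simulated samples of the operational parameter over time. */
        double samples[] = { 0.9, 0.9, 0.8, 0.4, 0.1, 0.1, 0.6 };
        for (int i = 0; i < (int)(sizeof samples / sizeof samples[0]); i++) {
            depth = adapt_prefetch_depth(depth, samples[i]);
            printf("sequential_ratio=%.2f -> depth=%d\n", samples[i], depth);
        }
        return 0;
    }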

Proceedings Article
10 Sep 2000
TL;DR: This work presents a calibration tool that automatically extracts the relevant parameters of the memory subsystem on any hardware, and demonstrates how a database system equipped with this calibrator can automatically tune memory-conscious database algorithms to their optimal settings.
Abstract: Performance of modern hardware increasingly depends on proper utilization of both the memory cache hierarchy and parallel execution possibilities in today's super-scalar CPUs. Recent database research has demonstrated that database system performance severely suffers from poor utilization of these resources. In previous work, we presented join algorithms that strongly accelerate large equi-joins by tuning the memory access pattern to match the characteristics of the memory cache subsystem in the benchmark hardware. In order to make such algorithms applicable in database systems that run on a wide variety of platforms, we now present a calibration tool that automatically extracts the relevant parameters about the memory subsystem from any hardware. Exhaustive experiments with join queries demonstrate how a database system equipped with this calibrator can automatically tune memory-conscious database algorithms to their optimal settings. Once memory access is optimized, CPU resource usage becomes crucial for database performance. We demonstrate how CPU resource usage can be improved by using appropriate implementation techniques. Join experiments with the Monet database system on various hardware platforms confirm that combining memory and CPU optimization can lead to almost an order of magnitude of performance improvement on modern processors.

74 citations
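
The core of such a calibrator can be illustrated with a pointer-chasing microbenchmark: time dependent loads over arrays of growing size and look for latency jumps at cache-capacity boundaries. The sketch below is a simplification, assuming a 64-byte stride and settling for clock()-based timing; the actual tool also derives parameters such as line size and TLB characteristics.

    /* Minimal cache-capacity calibration sketch: build a strided cyclic
     * pointer chain so every load depends on the previous one, then
     * measure average latency per load as the array grows. Latency
     * steps up roughly where the array stops fitting a cache level. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define STRIDE 64            /* assumed cache-line-sized stride (bytes) */
    #define ITERS  (1 << 22)     /* dependent loads per measurement */

    static double ns_per_load(size_t bytes)
    {
        size_t n = bytes / sizeof(void *);
        size_t step = STRIDE / sizeof(void *);
        void **buf = malloc(n * sizeof(void *));
        if (!buf) exit(1);
        for (size_t i = 0; i < n; i++)       /* cyclic strided chain */
            buf[i] = &buf[(i + step) % n];

        void **p = buf;
        clock_t t0 = clock();
        for (long i = 0; i < ITERS; i++)
            p = (void **)*p;                 /* serialized dependent loads */
        clock_t t1 = clock();

        double ns = (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / ITERS;
        ns += (p == NULL);                   /* keep the chase loop live */
        free(buf);
        return ns;
    }

    int main(void)
    {
        for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)
            printf("%7zu KB: %6.2f ns/load\n", kb, ns_per_load(kb * 1024));
        return 0;
    }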

Patent
Vivek Garg, Jagannath Keshava
20 Sep 2002
TL;DR: Cache sharing for a chip multiprocessor is discussed in this paper: each of multiple processor cores has an associated cache, and a control mechanism is provided to allow sharing between those per-core caches.
Abstract: Cache sharing for a chip multiprocessor. In one embodiment, a disclosed apparatus includes multiple processor cores, each having an associated cache. A control mechanism is provided to allow sharing between caches that are associated with individual processor cores.

74 citations
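
The abstract gives few specifics, so the toy model below is purely illustrative: two cores each own a small direct-mapped cache, and a sharing switch (standing in for the patent's control mechanism) lets a core that misses locally probe its peer's cache before going to memory.

    /* Illustrative model of inter-core cache sharing: on a local miss,
     * a peer core's cache is probed before falling back to memory.
     * Toy sizes and policies throughout; not the patent's design. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NCORES 2
    #define SETS   4
    #define TAG_INVALID (-1)

    typedef struct { long tag[SETS]; } Cache;   /* direct-mapped, toy-sized */

    static Cache caches[NCORES];
    static bool sharing_enabled = true;          /* the "control mechanism" */

    static bool lookup(int core, long addr)
    {
        int set = (int)(addr % SETS);
        long tag = addr / SETS;

        if (caches[core].tag[set] == tag)
            return true;                         /* hit in own cache */

        if (sharing_enabled)                     /* probe peer caches */
            for (int c = 0; c < NCORES; c++)
                if (c != core && caches[c].tag[set] == tag)
                    return true;                 /* remote hit: no memory access */

        caches[core].tag[set] = tag;             /* miss: fill own cache */
        return false;
    }

    int main(void)
    {
        for (int c = 0; c < NCORES; c++)
            for (int s = 0; s < SETS; s++)
                caches[c].tag[s] = TAG_INVALID;

        lookup(0, 42);                           /* core 0 misses, fills line */
        printf("core 1 sees addr 42: %s\n",
               lookup(1, 42) ? "remote hit" : "miss");
        return 0;
    }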

Proceedings ArticleDOI
01 Dec 2012
TL;DR: Amoeba-Cache is proposed, a design that supports a variable number of cache blocks, each of a different granularity, and that adapts to the appropriate granularity both for different data objects in an application and for different phases of access to the same data.
Abstract: The fixed geometries of current cache designs do not adapt to the working set requirements of modern applications, causing significant inefficiency. The short block lifetimes and moderate spatial locality exhibited by many applications result in only a few words in the block being touched prior to eviction. Unused words occupy between 17% and 80% of a 64K L1 cache and between 1% and 79% of a 1MB private LLC. This effectively shrinks the cache size, increases miss rate, and wastes on-chip bandwidth. Scaling limitations of wires mean that unused-word transfers comprise a large fraction (11%) of on-chip cache hierarchy energy consumption. We propose Amoeba-Cache, a design that supports a variable number of cache blocks, each of a different granularity. Amoeba-Cache employs a novel organization that completely eliminates the tag array, treating the storage array as uniform and morphable between tags and data. This enables the cache to harvest space from unused words in blocks for additional tag storage, thereby supporting a variable number of tags (and correspondingly, blocks). Amoeba-Cache adjusts individual cache line granularities according to the spatial locality in the application. It adapts to the appropriate granularity both for different data objects in an application as well as for different phases of access to the same data. Overall, compared to a fixed granularity cache, the Amoeba-Cache reduces miss rate on average (geometric mean) by 18% at the L1 level and by 18% at the L2 level, and reduces L1 -- L2 miss bandwidth by ~46%. Correspondingly, Amoeba-Cache reduces on-chip memory hierarchy energy by as much as 36% (mcf) and improves performance by as much as 50% (art).

74 citations
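
A toy model of the variable-granularity idea: each cached region records a start word and a word count, so a hit is a range check rather than a fixed-block tag match. The plain region list and the fixed two-word fill below are simplifying assumptions; the real Amoeba-Cache instead morphs a uniform storage array between tags and data and predicts the fill granularity from observed spatial locality.

    /* Variable-granularity lookup sketch: a block covers a range of
     * words, so hits are containment checks. Illustrative only. */
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_REGIONS 8

    typedef struct {
        long start;     /* first word address covered */
        int  nwords;    /* granularity: how many words this block holds */
        bool valid;
    } Region;

    static Region cache[MAX_REGIONS];

    static bool lookup(long word_addr)
    {
        for (int i = 0; i < MAX_REGIONS; i++)
            if (cache[i].valid &&
                word_addr >= cache[i].start &&
                word_addr <  cache[i].start + cache[i].nwords)
                return true;                 /* hit anywhere in the range */
        return false;
    }

    /* On a miss, fill only the predicted-useful span around the address
     * (a hypothetical fixed two-word predictor stands in here). */
    static void fill(int slot, long word_addr)
    {
        cache[slot] = (Region){ .start = word_addr, .nwords = 2, .valid = true };
    }

    int main(void)
    {
        fill(0, 100);                        /* caches words 100..101 */
        printf("101: %s\n", lookup(101) ? "hit" : "miss");
        printf("102: %s\n", lookup(102) ? "hit" : "miss");
        return 0;
    }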

Proceedings ArticleDOI
Chi-Hung Chi, Henry G. Dietz
03 Jan 1989
TL;DR: A technique is proposed to prevent the return of infrequently used items to the cache after they are bumped from it; it uses hardware called a bypass-cache, which, under program control, determines whether each reference should go through the cache or bypass it and reference main memory directly.
Abstract: A technique is proposed to prevent the return of infrequently used items to cache after they are bumped from it. Simulations have shown that the return of these items, called cache pollution, typically degrades cache-based system performance (average reference time) by 10% to 30%. The technique proposed involves the use of hardware called a bypass-cache which, under program control, determines whether each reference should be through the cache or should bypass the cache and reference main memory directly. Several inexpensive heuristics for the compiler to determine how to make each reference are given. It is shown that much of the performance loss can be regained.

74 citations
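
The bypass decision reduces to a per-reference flag, which in the paper the compiler's heuristics would set. In the toy simulation below (an illustration, not the paper's simulator), a single-use streaming scan is marked bypass so it never evicts a small hot working set; flipping the flag to false shows the pollution the bypass-cache avoids.

    /* Bypass-cache sketch: references flagged "bypass" skip cache
     * allocation entirely, so single-use data cannot evict reused
     * lines. The flag here is hand-set; the paper derives it from
     * compiler heuristics. */
    #include <stdio.h>
    #include <stdbool.h>

    #define LINES 4

    static long cache_tag[LINES];
    static long hits, misses;

    static void reference(long addr, bool bypass)
    {
        if (bypass)                  /* straight to memory: no allocation */
            return;
        int line = (int)(addr % LINES);
        if (cache_tag[line] == addr)
            hits++;
        else {
            misses++;
            cache_tag[line] = addr;  /* allocate, possibly evicting hot data */
        }
    }

    int main(void)
    {
        for (int i = 0; i < LINES; i++)
            cache_tag[i] = -1;

        /* Hot working set of 4 addresses, interleaved with a one-shot
         * streaming scan. Bypassing the scan preserves the hot set;
         * change the flag to false to watch the hit count drop. */
        for (int round = 0; round < 1000; round++) {
            for (long a = 0; a < LINES; a++)
                reference(a, false);         /* reused data: cache it */
            reference(10000 + round, true);  /* single-use data: bypass */
        }
        printf("hits=%ld misses=%ld\n", hits, misses);
        return 0;
    }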


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 93% related
Compiler: 26.3K papers, 578.5K citations, 89% related
Scalability: 50.9K papers, 931.6K citations, 87% related
Server: 79.5K papers, 1.4M citations, 86% related
Static routing: 25.7K papers, 576.7K citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30