Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
Akio Shigeeda
15 Mar 1995
TL;DR: In this paper, an electronic device for use in a computer system, and having a small second-level write-back cache, is disclosed, where the device may be implemented in a single integrated circuit, as a microprocessor unit, to include a microprocessor core, a memory controller circuit, and first and second level caches.
Abstract: An electronic device for use in a computer system, and having a small second-level write-back cache, is disclosed. The device may be implemented into a single integrated circuit, as a microprocessor unit, to include a microprocessor core, a memory controller circuit, and first and second level caches. In a system implementation, the device is connected to external dynamic random access memory (DRAM). The first level cache is a write-through cache, while the second level cache is a write-back cache that is much smaller than the first level cache. In operation, a write access that is a cache hit in the second level cache writes to the second level cache, rather than to DRAM, thus saving a wait state. A dirty bit is set for each modified entry in the second level cache. Upon the second level cache being full of modified data, a cache flush to DRAM is automatically performed. In addition, each entry of the second level cache is flushed to DRAM upon each of its byte locations being modified. The computer system may also include one or more additional integrated circuit devices, such as a direct memory access (DMA) circuit and a bus bridge interface circuit for bidirectional communication with the microprocessor unit. The microprocessor unit may also include handshaking control to prohibit configuration register updating when a memory access is in progress or is imminent. The disclosed microprocessor unit also includes circuitry for determining memory bank size and memory address type.
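As an illustration of the dirty-bit bookkeeping this patent describes, here is a minimal Python sketch of a small direct-mapped second-level write-back cache: a write hit updates the cache instead of DRAM, a per-entry dirty bit tracks modified lines, an entry is flushed once every one of its byte locations has been modified, and a full flush is triggered once the whole cache holds modified data. All sizes, names, and the mapping policy are illustrative assumptions, not the patent's hardware design.

```python
# Toy model of the patent's second-level write-back cache (illustrative
# sizes and direct mapping; not the actual hardware design).

LINE_BYTES = 4   # bytes per cache line (assumption)
NUM_LINES = 8    # deliberately small, per the "small second-level cache"

class WriteBackL2:
    def __init__(self, dram):
        self.dram = dram                                   # backing store: {addr: byte}
        self.lines = [None] * NUM_LINES                    # base address per entry
        self.data = [bytearray(LINE_BYTES) for _ in range(NUM_LINES)]
        self.dirty = [False] * NUM_LINES                   # one dirty bit per entry
        self.modified = [set() for _ in range(NUM_LINES)]  # modified byte offsets

    def fill(self, base):
        """Install a line from DRAM, writing back the victim first."""
        idx = (base // LINE_BYTES) % NUM_LINES
        self.flush_entry(idx)
        self.lines[idx] = base
        for off in range(LINE_BYTES):
            self.data[idx][off] = self.dram.get(base + off, 0)

    def write(self, addr, value):
        idx = (addr // LINE_BYTES) % NUM_LINES
        base = addr - addr % LINE_BYTES
        if self.lines[idx] == base:
            # Write hit: update the cache instead of DRAM, saving a wait state.
            off = addr % LINE_BYTES
            self.data[idx][off] = value
            self.dirty[idx] = True
            self.modified[idx].add(off)
            if len(self.modified[idx]) == LINE_BYTES:
                # Every byte location of this entry has been modified: flush it.
                self.flush_entry(idx)
            elif all(self.dirty):
                # Cache is full of modified data: flush everything to DRAM.
                for i in range(NUM_LINES):
                    self.flush_entry(i)
        else:
            self.dram[addr] = value                        # miss: write to DRAM

    def flush_entry(self, idx):
        if self.dirty[idx]:
            for off in range(LINE_BYTES):
                self.dram[self.lines[idx] + off] = self.data[idx][off]
            self.dirty[idx] = False
            self.modified[idx].clear()
```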

79 citations

Patent
06 Nov 2001
TL;DR: In this paper, the authors present a hierarchy of level-1 caches and level-2 caches for hierarchically caching objects, based on a set of one or more criteria.
Abstract: A system and method for hierarchically caching objects includes one or more level 1 nodes, each including at least one level 1 cache; one or more level 2 nodes within which the objects are permanently stored or generated upon request, each level 2 node coupled to at least one of the one or more level 1 nodes and including one or more level 2 caches; and means for storing, in a coordinated manner, one or more objects in at least one level 1 cache and/or at least one level 2 cache, based on a set of one or more criteria. Furthermore, in a system adapted to receive requests for objects from one or more clients, the system having a set of one or more level 1 nodes, each containing at least one level 1 cache, a method for managing a level 1 cache includes the steps of applying, for part of the at least one level 1 cache, a cache replacement policy designed to minimize utilization of a set of one or more resources in the system; and using, for other parts of the at least one level 1 cache, one or more other cache replacement policies designed to minimize utilization of one or more other sets of one or more resources in the system.
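The replacement-policy idea in this patent, that different parts of a level 1 cache can run different eviction policies while a level 2 node backs all misses, can be sketched in a few lines of Python. The partitioning criterion (object size), the capacities, and the LRU/LFU policy pair below are assumptions chosen for illustration.

```python
# Sketch of a level 1 cache split into partitions with different replacement
# policies, backed by a level 2 node. Sizes, routing rule, and the LRU/LFU
# pairing are assumptions for illustration.

from collections import Counter, OrderedDict

class L1Partition:
    def __init__(self, capacity, policy):
        self.capacity = capacity
        self.policy = policy            # "lru" or "lfu"
        self.store = OrderedDict()
        self.hits = Counter()

    def get(self, key):
        if key in self.store:
            self.hits[key] += 1
            if self.policy == "lru":
                self.store.move_to_end(key)   # refresh recency
            return self.store[key]
        return None

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            if self.policy == "lru":
                self.store.popitem(last=False)                        # least recent
            else:
                victim = min(self.store, key=lambda k: self.hits[k])  # least used
                del self.store[victim]
        self.store[key] = value

class HierarchicalCache:
    def __init__(self, l2_fetch):
        self.small = L1Partition(capacity=64, policy="lru")   # one policy here...
        self.large = L1Partition(capacity=8, policy="lfu")    # ...another here
        self.l2_fetch = l2_fetch    # level 2 node: stores or generates objects

    def get(self, key, size):
        part = self.small if size < 1024 else self.large      # routing criterion
        value = part.get(key)
        if value is None:
            value = self.l2_fetch(key)    # miss in level 1: ask the level 2 node
            part.put(key, value)
        return value
```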

78 citations

Proceedings ArticleDOI
05 Jun 2011
TL;DR: This paper presents a novel energy optimization technique that employs both dynamic reconfiguration of private caches and partitioning of the shared cache for multicore systems with real-time tasks, achieving 29.29% energy savings on average.
Abstract: Multicore architectures, especially chip multi-processors, have been widely acknowledged as a successful design paradigm. Existing approaches primarily target application-driven partitioning of the shared cache to alleviate inter-core cache interference so that both performance and energy efficiency are improved. Dynamic cache reconfiguration is a promising technique in reducing energy consumption of the cache subsystem for uniprocessor systems. In this paper, we present a novel energy optimization technique which employs both dynamic reconfiguration of private caches and partitioning of the shared cache for multicore systems with real-time tasks. Our static profiling based algorithm is designed to judiciously find beneficial cache configurations (of private caches) for each task as well as partition factors (of the shared cache) for each core so that the energy consumption is minimized while task deadline is satisfied. Experimental results using real benchmarks demonstrate that our approach can achieve 29.29% energy saving on average compared to systems employing only cache partitioning.
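The selection step of such a static-profiling approach reduces to a constrained search: among profiled pairs of private-cache configuration and shared-cache partition factor, choose the one with minimum energy whose execution time still meets the deadline. A hypothetical sketch, with made-up profiling numbers:

```python
# Selection step of a static-profiling approach: among profiled combinations
# of private-cache configuration and shared-cache partition factor, pick the
# minimum-energy one that meets the deadline. All numbers are hypothetical.

# profile[(private_cache_config, partition_factor)] = (energy_mJ, exec_time_ms)
profile = {
    ("4KB_1way", 2): (10.0, 9.0),
    ("4KB_2way", 2): (11.5, 7.5),
    ("8KB_2way", 2): (14.0, 6.0),
    ("4KB_1way", 4): ( 9.0, 8.2),
    ("8KB_2way", 4): (13.0, 5.5),
}

def pick_configuration(profile, deadline_ms):
    feasible = {k: v for k, v in profile.items() if v[1] <= deadline_ms}
    if not feasible:
        raise ValueError("no configuration meets the deadline")
    return min(feasible, key=lambda k: feasible[k][0])   # minimize energy

print(pick_configuration(profile, deadline_ms=8.0))      # ('4KB_2way', 2)
```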

78 citations

Journal ArticleDOI
TL;DR: It is shown how the cache DRAM bridges the speed gap between high-performance microprocessor units and existing DRAMs, and its architecture is presented.
Abstract: A DRAM (dynamic RAM) with an on-chip cache, called the cache DRAM, has been proposed and fabricated. It is a hierarchical RAM containing a 1-Mb DRAM for the main memory and an 8-kb SRAM (static RAM) for cache memory. It uses a 1.2-µm CMOS technology. Suitable for no-wait-state memory access in low-end workstations and personal computers, the chip also serves high-end systems as a secondary cache scheme. It is shown how the cache DRAM bridges the gap in speed between high-performance microprocessor units and existing DRAMs. The cache DRAM concept is explained, and its architecture is presented. The error checking and correction scheme used to improve the cache DRAM's reliability is described. Performance results for an experimental device are reported.
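The speed-bridging claim can be made concrete with a back-of-the-envelope average-access-time model: as the on-chip SRAM hit rate rises, effective latency approaches SRAM speed rather than DRAM speed. The latencies below are illustrative values, not measurements from the paper.

```python
# Why an on-chip SRAM cache bridges the CPU-DRAM speed gap: average access
# time approaches the SRAM latency as the hit rate rises. Latencies are
# illustrative values, not measurements from the paper.

def avg_access_time(sram_ns, dram_ns, hit_rate):
    return hit_rate * sram_ns + (1.0 - hit_rate) * dram_ns

for hit_rate in (0.0, 0.80, 0.95, 0.99):
    print(f"hit rate {hit_rate:4.0%}: {avg_access_time(15, 120, hit_rate):5.1f} ns")
```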

78 citations

Patent
David A. Luick
03 Feb 1998
TL;DR: In this paper, the authors present a predictive instruction cache system for a VLIW processor, which includes a first cache, a real or virtual second cache, and a history look-up table for storing relations between first instructions and second instructions in the second cache.
Abstract: Disclosed is a predictive instruction cache system, and the method it embodies, for a VLIW processor. The system comprises: a first cache; a real or virtual second cache for storing a subset of the instructions in the first cache; and a real or virtual history look-up table for storing relations between first instructions and second instructions in the second cache. If a first instruction is located in a stage of the pipeline, then one of the relations will predict that a second instruction will be needed in the same stage a predetermined time later. The first cache can be physically distinct from the second cache, but preferably is not, i.e., the second cache is a virtual array. The history look-up table can also be physically distinct from the first cache, but preferably is not, i.e., the history look-up table is a virtual look-up table. The first cache is organized as entries. Each entry has a first portion for the first instruction and a second portion for a branch-to address indicator pointing to the second instruction. For a given first instruction, a new branch-to address indicator can independently be stored in the second portion to replace an old branch-to address indicator and so reflect a revised prediction. Alternatively, redundant data fields in the parcels of the VLIWs are used to store the branch-to address guesses so that a physically distinct second portion can be eliminated in the entries of the first cache.
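The prediction mechanism, a history look-up table relating a first instruction to the second instruction expected a fixed time later so that the latter can be fetched early, can be sketched as follows; the dictionaries are illustrative stand-ins for the patent's real or virtual hardware arrays.

```python
# Sketch of the history-table prediction: fetching a "first" instruction
# consults the table and prefetches the predicted "second" instruction so it
# is ready when needed; a wrong guess simply updates the branch-to indicator.
# Dictionaries stand in for the patent's (real or virtual) hardware arrays.

class PredictiveICache:
    def __init__(self, fetch):
        self.fetch = fetch       # backing fetch: addr -> instruction
        self.history = {}        # first-instruction addr -> predicted target addr
        self.prefetched = {}     # addr -> instruction fetched ahead of time

    def next_instruction(self, addr):
        # Serve from the prefetch buffer when a prior prediction was correct.
        inst = self.prefetched.pop(addr, None)
        if inst is None:
            inst = self.fetch(addr)
        # If this address has a branch-to indicator, prefetch its target now.
        target = self.history.get(addr)
        if target is not None:
            self.prefetched[target] = self.fetch(target)
        return inst

    def record_outcome(self, addr, actual_target):
        # Store the new branch-to indicator, replacing the old prediction.
        self.history[addr] = actual_target
```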

78 citations


Network Information
Related Topics (5)

Cache: 59.1K papers, 976.6K citations, 93% related
Compiler: 26.3K papers, 578.5K citations, 89% related
Scalability: 50.9K papers, 931.6K citations, 87% related
Server: 79.5K papers, 1.4M citations, 86% related
Static routing: 25.7K papers, 576.7K citations, 84% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30