
Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
30 Jul 2010
TL;DR: In this patent, an apparatus, system, and method for redundant write caching are described. The authors do not specify a hardware implementation of the redundancy mechanism; instead, they claim a plurality of modules, including a write request module, a first cache write module, a second cache write module, and a trim module.
Abstract: An apparatus, system, and method are disclosed for redundant write caching. The apparatus, system, and method are provided with a plurality of modules including a write request module, a first cache write module, a second cache write module, and a trim module. The write request module detects a write request to store data on a storage device. The first cache write module writes data of the write request to a first cache. The second cache write module writes the data to a second cache. The trim module trims the data from one of the first cache and the second cache in response to an indicator that the storage device stores the data. The data remains available in the other of the first cache and the second cache to service read requests.

128 citations
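The write-mirror-then-trim flow described in the abstract above can be sketched in a few lines. This is an illustrative model only, not the patented implementation; all class and method names here are hypothetical.

```python
class RedundantWriteCache:
    """Toy model of redundant write caching: data is mirrored into two
    caches, and one copy is trimmed once the storage device confirms
    the write, while the other copy keeps serving reads."""

    def __init__(self):
        self.first_cache = {}    # redundant cache copy (trimmed after destage)
        self.second_cache = {}   # surviving cache copy (serves reads)
        self.storage = {}        # long-term storage device

    def write(self, key, data):
        # Write request module: mirror the data into both caches.
        self.first_cache[key] = data
        self.second_cache[key] = data

    def destage(self, key):
        # The storage device now holds the data...
        self.storage[key] = self.second_cache[key]
        # ...so the trim module drops one redundant copy; the other
        # remains available to service read requests.
        del self.first_cache[key]

    def read(self, key):
        # Serve from whichever cache still holds the data, else storage.
        if key in self.first_cache:
            return self.first_cache[key]
        if key in self.second_cache:
            return self.second_cache[key]
        return self.storage[key]


cache = RedundantWriteCache()
cache.write("block-7", b"payload")
cache.destage("block-7")
print(cache.read("block-7"))
```

Note that reads remain hit-able throughout: before destage both caches hold the data, and after destage the second cache still does, which is the point of trimming only one copy.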

Patent
27 Nov 1981
TL;DR: In this patent, a buffered cache memory subsystem is described which features a solid-state cache memory connected to a storage director which interfaces a host channel with a control module controlling operation of a long-term data storage device such as a disk drive.
Abstract: A buffered cache memory subsystem is disclosed which features a solid-state cache memory connected to a storage director which interfaces a host channel with a control module controlling operation of a long-term data storage device such as a disk drive. The solid-state cache memory is connected to plural directors which in turn may be connected to differing types of control modules, whereby the cache is usable with more than one type of long-term data storage means within a given system. The cache memory may be field-installed in a preexisting disk drive storage system and is software transparent to the host computer, while providing improvements in overall operating efficiency. In a preferred embodiment, data is only cached when it is expected to be the subject of a future host request.

128 citations

Patent
14 Oct 2003
TL;DR: In this patent, a power saving cache is described that includes circuitry to dynamically reduce the logical size of the cache in order to save power, using a variety of combinable hardware and software techniques.
Abstract: A power saving cache and a method of operating a power saving cache. The power saving cache includes circuitry to dynamically reduce the logical size of the cache in order to save power. Preferably, a method is used to determine optimal cache size for balancing power and performance, using a variety of combinable hardware and software techniques. Also, in a preferred embodiment, steps are used for maintaining coherency during cache resizing, including the handling of modified ("dirty") data in the cache, and steps are provided for partitioning a cache in one of several ways to provide an appropriate configuration and granularity when resizing.

128 citations
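The coherency concern the abstract raises when shrinking a cache can be illustrated with a small sketch: dirty lines in the ways being powered down must be written back before those ways are disabled. This is a made-up software model of the idea, not the patented circuitry; all names are illustrative.

```python
class ResizableCache:
    """Toy set-associative cache whose logical size (number of active
    ways) can be reduced; shrinking flushes dirty data first."""

    def __init__(self, num_ways):
        # Each way maps key -> (data, dirty_flag).
        self.ways = [dict() for _ in range(num_ways)]
        self.active_ways = num_ways
        self.backing_store = {}

    def resize(self, new_active_ways):
        """Shrink the cache to new_active_ways, maintaining coherency."""
        # Write back modified ("dirty") data from the ways being disabled,
        # then invalidate those ways so no stale copies survive.
        for way in self.ways[new_active_ways:self.active_ways]:
            for key, (data, dirty) in way.items():
                if dirty:
                    self.backing_store[key] = data  # write back dirty line
            way.clear()  # invalidate the disabled way
        self.active_ways = new_active_ways
```

For example, shrinking from four ways to two flushes any dirty lines in ways 2 and 3 to the backing store and clears them, so a later re-enable starts from an empty, coherent state.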

Proceedings ArticleDOI
14 Feb 2012
TL;DR: A dynamic scheme is presented that further divides the cache space into read and write caches and manages the three spaces according to workload characteristics; it improves the performance of hybrid storage solutions up to the off-line optimal performance of a fixed partitioning scheme.
Abstract: Hybrid storage solutions use NAND flash memory based Solid State Drives (SSDs) as non-volatile cache and traditional Hard Disk Drives (HDDs) as lower level storage. Unlike a typical cache, internally, the flash memory cache is divided into cache space and overprovisioned space, used for garbage collection. We show that balancing the two spaces appropriately helps improve the performance of hybrid storage systems. We show that contrary to expectations, the cache need not be filled with data to the fullest, but may be better served by reserving space for garbage collection. For this balancing act, we present a dynamic scheme that further divides the cache space into read and write caches and manages the three spaces according to the workload characteristics for optimal performance. Experimental results show that our dynamic scheme improves performance of hybrid storage solutions up to the off-line optimal performance of a fixed partitioning scheme. Furthermore, as our scheme makes efficient use of the flash memory cache, it reduces the number of erase operations thereby extending the lifetime of SSDs.

128 citations
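The three-way split described above (read cache, write cache, and overprovisioned space for garbage collection) can be sketched as a simple capacity-allocation function. The rebalancing rule below is invented for illustration; the paper derives its own policy from workload characteristics.

```python
def rebalance(total_blocks, read_ratio, write_intensity):
    """Toy split of flash capacity into read cache, write cache, and
    overprovisioned (GC) space.

    read_ratio:      fraction of requests that are reads (0..1)
    write_intensity: fraction of writes that are small/random (0..1);
                     heavier random writes need more GC headroom.
    The 0.1/0.2 constants are arbitrary illustrative weights.
    """
    # Reserve more overprovisioned space when random writes dominate,
    # rather than filling the cache with data to the fullest.
    op_space = int(total_blocks * (0.1 + 0.2 * write_intensity))
    cache_space = total_blocks - op_space
    # Split the remaining cache space by the read/write mix.
    read_cache = int(cache_space * read_ratio)
    write_cache = cache_space - read_cache
    return read_cache, write_cache, op_space
```

The key observation the sketch encodes is the paper's counterintuitive one: deliberately leaving part of the flash unfilled (as GC headroom) can yield better overall performance, and fewer erases, than maximizing cached data.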

Proceedings ArticleDOI
10 Aug 1998
TL;DR: It is shown that, by using buffers, energy consumption of the memory subsystem may be reduced by as much as 13% for certain data cache configurations and by as much as 23% for certain instruction cache configurations without adversely affecting processor performance or on-chip energy consumption.
Abstract: In this paper, we propose several different data and instruction cache configurations and analyze their power as well as performance implications on the processor. Unlike most existing work in low power microprocessor design, we explore a high performance processor with the latest innovations for performance. Using a detailed, architectural-level simulator, we evaluate full system performance using several different power/performance sensitive cache configurations such as increasing cache size or associativity and including buffers alongside L1 caches. We then use the information obtained from the simulator to calculate the energy consumption of the memory hierarchy of the system. As an alternative to simply increasing cache associativity or size to reduce lower-level memory energy consumption (which may have a detrimental effect on on-chip energy consumption), we show that, by using buffers, energy consumption of the memory subsystem may be reduced by as much as 13% for certain data cache configurations and by as much as 23% for certain instruction cache configurations without adversely affecting processor performance or on-chip energy consumption.

128 citations


Network Information
Related Topics (5)
- Cache: 59.1K papers, 976.6K citations (93% related)
- Compiler: 26.3K papers, 578.5K citations (89% related)
- Scalability: 50.9K papers, 931.6K citations (87% related)
- Server: 79.5K papers, 1.4M citations (86% related)
- Static routing: 25.7K papers, 576.7K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30