
Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.


Papers
Journal ArticleDOI
TL;DR: A survey of state-of-the-art techniques for bounding cache-related preemption delays (CRPD), based on, but not limited to, useful cache blocks (UCBs), including an alternative definition of UCBs that substantially improves the CRPD bounds.

61 citations

Patent
Hitoshi Yamahata
20 Jun 1990
TL;DR: In this patent, a cache bypass signal generator detects data to be cache-bypassed without checking bus status signals and issues a bypass request signal that prevents the cache memory from caching that data.
Abstract: A microprocessor capable of being incorporated in an information processing system with a cache memory unit and capable of realizing fine cache bypass control. The microprocessor can detect data to be cache-bypassed without checking bus status signals. The microprocessor is equipped with a cache bypass signal generator. Upon detection of data to be bypassed, the cache bypass signal generator generates a cache bypass request signal, which prevents the cache memory from performing a data caching operation on the data.

61 citations
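The bypass mechanism described above can be sketched roughly as follows. This is a hypothetical simplification, not the patent's actual circuit: the class name, the `bypass_set` used to stand in for the bypass signal generator, and the dict-based memory are all assumptions for illustration.

```python
class BypassingCache:
    """Toy model of a cache honoring a bypass request signal."""

    def __init__(self):
        self.store = {}          # cached address -> data
        self.bypass_set = set()  # addresses flagged as non-cacheable
                                 # (stands in for the bypass signal generator)

    def read(self, addr, memory):
        # Cache hit: serve the data from the cache.
        if addr in self.store:
            return self.store[addr]
        data = memory[addr]
        # Bypass signal asserted: return the data without caching it.
        if addr in self.bypass_set:
            return data
        # Normal miss: fill the cache, then return the data.
        self.store[addr] = data
        return data
```

The key point mirrored from the abstract is that the bypass decision is made from the access itself (here, membership in `bypass_set`) rather than by inspecting bus status signals.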

Proceedings Article
22 Feb 2016
TL;DR: The paper proposes a new cache demand model, Reuse Working Set (RWS), to capture only the data with good temporal locality, and uses the RWS size (RWSS) to model a workload's cache demand.
Abstract: Host-side flash caching has emerged as a promising solution to the scalability problem of virtual machine (VM) storage in cloud computing systems, but it still faces serious limitations in capacity and endurance. This paper presents CloudCache, an on-demand cache management solution to meet VM cache demands and minimize cache wear-out. First, to support on-demand cache allocation, the paper proposes a new cache demand model, Reuse Working Set (RWS), to capture only the data with good temporal locality, and uses the RWS size (RWSS) to model a workload's cache demand. By predicting the RWSS online and admitting only RWS into the cache, CloudCache satisfies the workload's actual cache demand and minimizes the induced wear-out. Second, to handle situations where a cache is insufficient for the VMs' demands, the paper proposes a dynamic cache migration approach to balance cache load across hosts by live migrating cached data along with the VMs. It includes both on-demand migration of dirty data and background migration of RWS to optimize the performance of the migrating VM. It also supports rate limiting on the cache data transfer to limit the impact to the co-hosted VMs. Finally, the paper presents comprehensive experimental evaluations using real-world traces to demonstrate the effectiveness of CloudCache.

61 citations
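The Reuse Working Set idea above, admitting only data with good temporal locality, can be sketched as a trace-based estimate. This is an assumed simplification for illustration, not CloudCache's actual model: the windowing scheme, the `min_reuses` threshold, and the function name are all choices made here.

```python
from collections import Counter

def rwss(trace, window, min_reuses=2):
    """Rough Reuse Working Set size estimate over an access trace.

    Within each window of block accesses, count only blocks referenced
    at least `min_reuses` times, i.e. blocks showing temporal locality;
    report the largest such count as the cache demand.
    """
    sizes = []
    for start in range(0, len(trace), window):
        counts = Counter(trace[start:start + window])
        sizes.append(sum(1 for c in counts.values() if c >= min_reuses))
    return max(sizes) if sizes else 0
```

Sizing the cache to this estimate, rather than to the full working set, is what lets a scheme like this avoid dedicating (and wearing out) flash capacity for blocks that are touched only once.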

Book ChapterDOI
21 Nov 2002
TL;DR: An alternative optimal cache oblivious priority queue based only on binary merging is devised and it is shown that the structure can be made adaptive to different usage profiles.
Abstract: The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory model. Arge et al. recently presented the first optimal cache oblivious priority queue, and demonstrated the importance of this result by providing the first cache oblivious algorithms for graph problems. Their structure uses cache oblivious sorting and selection as subroutines. In this paper, we devise an alternative optimal cache oblivious priority queue based only on binary merging. We also show that our structure can be made adaptive to different usage profiles.

61 citations
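The binary-merging primitive that the structure above is built on can be illustrated with a toy priority queue. This is a drastic simplification under assumed semantics, it keeps sorted runs of doubling size and combines equal-size runs by binary merges, and it makes no attempt at the paper's cache-oblivious memory layout or its complexity guarantees.

```python
def binary_merge(a, b):
    """Merge two sorted runs into one sorted run."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

class MergePQ:
    """Toy priority queue using only binary merging (illustrative)."""

    def __init__(self):
        self.runs = {}  # nominal run size -> sorted run

    def insert(self, x):
        # Carry-propagate: merge equal-size runs, doubling the size,
        # like binary addition on the set of run sizes.
        run, size = [x], 1
        while size in self.runs:
            run = binary_merge(run, self.runs.pop(size))
            size *= 2
        self.runs[size] = run

    def delete_min(self):
        # The global minimum is the head of one of the sorted runs.
        size = min(self.runs, key=lambda s: self.runs[s][0])
        run = self.runs[size]
        x = run.pop(0)
        if not run:
            del self.runs[size]
        return x
```

The point of the sketch is only that merging sorted runs suffices as the single building block; the paper's contribution is doing this with an optimal cache-oblivious cost, which this toy does not attempt.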

Patent
07 Jan 2000
TL;DR: In this paper, the authors propose a control mechanism that enables two logical cache lines to coexist within the same physical cache line, during line fill and replacement, thus minimizing the likelihood of stalling accesses to the cache while the line is being filled or replaced.
Abstract: In a cache memory system, a mechanism enabling two logical cache lines to coexist within the same physical cache line, during line fill and replacement, thus minimizing the likelihood of stalling accesses to the cache while the line is being filled or replaced. A control mechanism governs access to the cache line and tracks which sub-cache line units contain old or new data, or are empty during the fill/replacement procedure. The control mechanism thus maintains a sub-cache line state for the purpose of permitting a processor to gain access to a portion of the cache line before it is completely filled or replaced.

60 citations
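The per-sub-line bookkeeping described above can be sketched as a small state machine. The names, the unit granularity, and the behavior here are assumptions for illustration, not the patent's text; in particular, this sketch does not model the old logical line remaining readable during the fill, only the early access to newly arrived units.

```python
OLD, NEW, EMPTY = range(3)

class LineFillState:
    """Toy per-sub-line state tracker for a cache line fill/replacement."""

    def __init__(self, units=4):
        self.filling = False
        self.state = [OLD] * units  # one state per sub-cache-line unit

    def start_replacement(self):
        # A new logical line begins to move in; nothing has arrived yet.
        self.filling = True
        self.state = [EMPTY] * len(self.state)

    def unit_filled(self, unit):
        # A sub-line unit of new data has arrived from memory.
        self.state[unit] = NEW
        if all(s == NEW for s in self.state):
            self.filling = False  # fill complete; the line is stable again

    def readable(self, unit):
        # Outside a fill the whole line is valid; during a fill only
        # units whose new data has arrived can be read without stalling.
        return not self.filling or self.state[unit] == NEW
```

The property this captures from the abstract is that the processor can gain access to a portion of the line (any unit marked NEW) before the fill or replacement completes, instead of stalling on the whole line.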


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
93% related
Scalability
50.9K papers, 931.6K citations
88% related
Server
79.5K papers, 1.4M citations
88% related
Network packet
159.7K papers, 2.2M citations
83% related
Dynamic Source Routing
32.2K papers, 695.7K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20