Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.


Papers
Patent
17 Jan 1997
TL;DR: In this patent, the authors propose allocation circuitry for a set-associative cache memory that allocates the entries of a common set in a branch prediction table (BPT) to branch prediction information for related branch instructions.
Abstract: Allocation circuitry for allocating entries within a set-associative cache memory is disclosed. The set-associative cache memory comprises N ways, each way having M entries and corresponding entries in each of the N ways constituting a set of entries. The allocation circuitry has a first circuit which identifies related data units by determining the probability that they will be successively read from the cache memory. A second circuit within the allocation circuitry allocates the corresponding entries in each of the ways to the related data units, so that related data units are stored in a common set of entries. Accordingly, the related data units will be simultaneously output from the set-associative cache memory, and are thus concurrently available for processing. The invention may find application in allocating entries of a common set in a branch prediction table (BPT) to branch prediction information for related branch instructions.

122 citations
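
To make the allocation idea concrete, below is a minimal Python sketch of the common-set placement: related (tag, data) units are written into the ways of one set, so a single set read returns all of them at once. The class name, table sizes, and indexing scheme are illustrative assumptions, not the patent's circuit.

```python
N_WAYS = 4    # ways per set (assumed)
N_SETS = 64   # number of sets (assumed)

class SetAssociativeTable:
    def __init__(self):
        # table[set_index][way] holds (tag, data) or None
        self.table = [[None] * N_WAYS for _ in range(N_SETS)]

    def allocate_related(self, group):
        """Allocate a group of related (tag, data) units into one common set,
        one unit per way, so a single set access yields all of them."""
        assert len(group) <= N_WAYS, "a related group must fit in one set"
        set_index = hash(group[0][0]) % N_SETS  # index derived from first tag
        for way, (tag, data) in enumerate(group):
            self.table[set_index][way] = (tag, data)
        return set_index

    def read_set(self, set_index):
        """One access outputs every valid way of the set concurrently."""
        return [entry for entry in self.table[set_index] if entry is not None]

# Example: predictions for two related branches come out of one set access.
bpt = SetAssociativeTable()
s = bpt.allocate_related([("branch_a", "taken"), ("branch_b", "not_taken")])
print(bpt.read_set(s))  # [('branch_a', 'taken'), ('branch_b', 'not_taken')]
```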

Patent
Kevin Frank Smith
18 Nov 1991
TL;DR: In this patent, the signed difference between an assigned and an actual hit ratio, together with the priority of a data class, is used to dynamically allocate cache size to that class; cache space is also dynamically reallocated among partitions in a direction that forces the weighted hit-rate slope of each partition toward equality over the entire cache.
Abstract: Methods for managing a Least Recently Used (LRU) cache in a staged storage system on a prioritized basis permitting management of data using multiple cache priorities assigned at the data set level. One method uses the signed difference between an assigned and actual hit ratio to dynamically allocate cache size to a class of data and its correlation with priority of that class. Advantageously, the hit ratio performance of a class of data can not be degraded beyond a predetermined level by a class of data having lower priority. Another method dynamically reallocates cache space among partitions asymptotically to the general equality of a weighted hit rate slope function attributable to each partition. This function is the product of the slope of a weighting factor and the partition hit rate versus partition space allocation function. Cache space is dynamically reallocated among the partitions in a direction that forces the weighted hit rate slope for each partition into equality over the entire cache. Partition space is left substantially unchanged in those partitions where the weighted hit rate slope is non-positive. Hit rate is defined as hit ratio times I/O rate.

122 citations
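
A minimal Python sketch of the first method described above: each class's partition grows or shrinks with the signed difference between its assigned and measured hit ratio, scaled by class priority. The dictionary layout, step size, and rescaling pass are illustrative assumptions rather than the patent's exact procedure.

```python
def rebalance(partitions, cache_size, step=64):
    """partitions: list of dicts with keys 'size', 'target_ratio', 'hits',
    'accesses', 'priority'. Nudges each class by its signed hit-ratio error,
    then rescales so the sizes still sum to cache_size (assumed non-empty)."""
    for p in partitions:
        actual = p['hits'] / p['accesses'] if p['accesses'] else p['target_ratio']
        error = p['target_ratio'] - actual            # signed difference
        # higher-priority classes react more strongly to a hit-ratio shortfall,
        # so a lower-priority class cannot degrade them past the target
        p['size'] = max(1, p['size'] + int(step * error * p['priority']))
    total = sum(p['size'] for p in partitions)
    for p in partitions:                              # keep the total constant
        p['size'] = max(1, p['size'] * cache_size // total)
```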

Proceedings ArticleDOI
01 Dec 2007
TL;DR: The paper proposes the tagless hit instruction cache (TH-IC), a technique that completely eliminates the performance penalty associated with filter caches and further reduces energy consumption by not accessing the tag array on cache hits.
Abstract: Very small instruction caches have been shown to greatly reduce fetch energy. However, for many applications the use of a small filter cache can lead to an unacceptable increase in execution time. In this paper, we propose the Tagless Hit Instruction Cache (TH-IC), a technique for completely eliminating the performance penalty associated with filter caches, as well as a further reduction in energy consumption due to not having to access the tag array on cache hits. Using a few metadata bits per line, we are able to more efficiently track the cache contents and guarantee when hits will occur in our small TH-IC. When a hit is not guaranteed, we can instead fetch directly from the L1 instruction cache, eliminating any additional cycles due to a TH-IC miss. Experimental results show that the overall processor energy consumption can be significantly reduced due to the faster application running time and the elimination of tag comparisons for most of the accesses.

122 citations
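
The guaranteed-hit mechanism can be modeled in a few lines of Python. Each tiny-cache line keeps one metadata bit recording whether the next sequential line is also resident; when the bit is set (or the fetch stays within the current line), the access is a guaranteed hit with no tag comparison, and otherwise the fetch goes straight to the L1 instruction cache so a TH-IC miss adds no cycles. The sizes, names, and single next-line bit are simplifying assumptions; the paper's design tracks more metadata per line.

```python
LINE_WORDS = 4   # instruction words per line (assumed)
N_LINES = 16     # tiny, direct-mapped cache (assumed size)

class TinyICache:
    def __init__(self, l1_fetch):
        self.l1_fetch = l1_fetch              # fallback: fetch a line from L1
        self.tags = [None] * N_LINES
        self.data = [None] * N_LINES
        self.next_valid = [False] * N_LINES   # metadata bit per line: is the
                                              # next sequential line resident?
        self.last = None                      # index of the last line fetched

    def fetch(self, pc, sequential):
        # 'sequential' is the fetch unit's existing signal that pc follows the
        # previous fetch in straight-line order (no taken branch).
        line = pc // LINE_WORDS
        idx = line % N_LINES
        guaranteed = sequential and self.last is not None and (
            idx == self.last                          # still in the same line
            or (idx == (self.last + 1) % N_LINES      # crossed into next line
                and self.next_valid[self.last]))
        if guaranteed:
            words = self.data[idx]        # hit: no tag array access at all
        else:
            words = self.l1_fetch(line)   # straight to L1: no extra miss cycles
            self.tags[idx], self.data[idx] = line, words
            self.next_valid[idx] = False  # conservative until proven resident
            prev = (idx - 1) % N_LINES
            self.next_valid[prev] = (self.tags[prev] == line - 1)
        self.last = idx
        return words[pc % LINE_WORDS]
```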

Patent
08 Jun 2001
TL;DR: In this patent, a method and system for exclusive two-level caching in a chip-multiprocessor are presented to maximize the effective use of on-chip cache.
Abstract: To maximize the effective use of on-chip cache, a method and system for exclusive two-level caching in a chip-multiprocessor are provided. The exclusive two-level caching in accordance with the present invention involves relaxing the inclusion requirement in a two-level cache system in order to form an exclusive cache hierarchy. Additionally, the exclusive two-level caching involves providing a first-level tag-state structure in a first-level cache of the two-level cache system. The first tag-state structure has state information. The exclusive two-level caching also involves maintaining in a second-level cache of the two-level cache system a duplicate of the first-level tag-state structure and extending the state information in the duplicate of the first tag-state structure, but not in the first-level tag-state structure itself, to include an owner indication. The exclusive two-level caching further involves providing in the second-level cache a second tag-state structure so that a simultaneous lookup at the duplicate of the first tag-state structure and the second tag-state structure is possible. Moreover, the exclusive two-level caching involves associating a single owner with a cache line at any given time of its lifetime in the chip-multiprocessor.

121 citations
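
A minimal Python sketch of the bookkeeping this describes, under illustrative names: the L2 holds a duplicate of each core's L1 tag-state, extended with an owner field, beside its own tags, so one probe answers both lookups simultaneously, and a line is moved (never copied) between levels to preserve exclusion. Miss handling and coherence traffic are omitted.

```python
class ExclusiveL2:
    def __init__(self, n_cores):
        self.l1_tags = [set() for _ in range(n_cores)]  # duplicate L1 tag-state
        self.owner = {}                                 # addr -> owning core
        self.l2_data = {}                               # the L2 proper

    def lookup(self, core, addr):
        # simultaneous lookup of the duplicate L1 tag-state and the L2 tags
        holder = next((c for c, t in enumerate(self.l1_tags) if addr in t), None)
        if holder is not None:
            return ('forward-from-L1', holder)     # some L1 already has it
        if addr in self.l2_data:
            data = self.l2_data.pop(addr)          # exclusion: move, never copy
            self.l1_tags[core].add(addr)
            self.owner[addr] = core                # single owner at any time
            return ('fill-from-L2', data)
        return ('miss', None)                      # memory fetch omitted

    def evict_from_l1(self, core, addr, data):
        # a victim leaving an L1 is reinstalled in the L2 (exclusive hierarchy)
        self.l1_tags[core].discard(addr)
        if self.owner.get(addr) == core:
            del self.owner[addr]
            self.l2_data[addr] = data
```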

Proceedings Article
08 Dec 1997
TL;DR: Trace-driven simulation of this mechanism on two large, independent data sets shows that PCV both provides stronger cache coherency and reduces the request traffic in comparison to the time-to-live (TTL) based techniques currently used.
Abstract: This paper presents work on piggyback cache validation (PCV), which addresses the problem of maintaining cache coherency for proxy caches. The novel aspect of our approach is to capitalize on requests sent from the proxy cache to the server to improve coherency. In the simplest case, whenever a proxy cache has a reason to communicate with a server it piggybacks a list of cached, but potentially stale, resources from that server for validation. Trace-driven simulation of this mechanism on two large, independent data sets shows that PCV both provides stronger cache coherency and reduces the request traffic in comparison to the time-to-live (TTL) based techniques currently used. Specifically, in comparison to the best TTL-based policy, the best PCV-based policy reduces the number of request messages from a proxy cache to a server by 16-17% and the average cost (considering response latency, request messages and bandwidth) by 6-8%. Moreover, the best PCV policy reduces the staleness ratio by 57-65% in comparison to the best TTL-based policy. Additionally, the PCV policies can easily be implemented within the HTTP 1.1 protocol.

121 citations
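
A toy Python sketch of the piggybacking step (real PCV rides on HTTP/1.1 validators; the message format and TTL threshold here are assumptions): whenever the proxy must contact a server anyway, it appends validators for its cached, potentially stale resources from that server, and the reply tells it which copies to invalidate or refresh.

```python
import time

TTL = 300  # seconds before a cached copy counts as potentially stale (assumed)

class ProxyCache:
    def __init__(self):
        self.entries = {}  # url -> {'etag': validator, 'validated': timestamp}

    def build_request(self, server, url):
        # Piggyback validators for every cached, potentially stale resource
        # from this server onto a request we had to send anyway.
        piggyback = [{'url': u, 'etag': e['etag']}
                     for u, e in self.entries.items()
                     if u.startswith(server)
                     and time.time() - e['validated'] > TTL]
        return {'GET': url, 'piggyback-validate': piggyback}

    def apply_response(self, response):
        # The server answers with which piggybacked copies have changed.
        for u in response.get('invalid', []):
            self.entries.pop(u, None)                 # drop stale copies
        for u in response.get('still-valid', []):
            if u in self.entries:
                self.entries[u]['validated'] = time.time()

# e.g. proxy.build_request('http://example.com', 'http://example.com/new.html')
```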


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Server: 79.5K papers, 1.4M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (83% related)
Dynamic Source Routing: 32.2K papers, 695.7K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  44
2022  117
2021  4
2020  8
2019  7
2018  20