Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.


Papers
Patent
13 Jul 1994
TL;DR: In this article, an analytical model for calculating cache hit rate for combinations of data sets and LRU sizes is presented. The model can be directly implemented in software and predicts cache hit rates using statistics accumulated for each element independently.
Abstract: Method and structure for collecting statistics for quantifying locality of data and thus selecting elements to be cached, and then calculating the overall cache hit rate as a function of cached elements. LRU stack distance has a straightforward probabilistic interpretation and is part of statistics to quantify locality of data for each element considered for caching. Request rates for additional slots in the LRU are a function of file request rate and LRU size. Cache hit rate is a function of locality of data and the relative request rates for data sets. Specific locality parameters for each data set and arrival rate of requests for data sets are used to produce an analytical model for calculating cache hit rate for combinations of data sets and LRU sizes. This invention provides algorithms that can be directly implemented in software for constructing a precise model that can be used to predict cache hit rates for a cache, using statistics accumulated for each element independently. The model can rank the elements to find the best candidates for caching. Instead of considering the cache as a whole, the average arrival rates and re-reference statistics for each element are estimated, and then used to consider various combinations of elements and cache sizes in predicting the cache hit rate. Cache hit rate is directly calculated using the to-be-cached files' arrival rates and re-reference statistics and used to rank the elements to find the set that produces the optimal cache hit rate.
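
The stack-distance machinery this abstract builds on is easy to illustrate. Below is a minimal Python sketch of Mattson's LRU stack algorithm (background for the patent, not the patent's analytical model): a reference's stack distance is its depth in an LRU-ordered stack, and an LRU cache with C slots hits a reference exactly when that distance is at most C.

```python
from math import inf

def lru_stack_distances(trace):
    """Mattson's LRU stack algorithm: the stack distance of a reference
    is its current depth in an LRU-ordered stack (1-based), or infinity
    for a first reference (cold miss)."""
    stack = []                      # most recently used element first
    distances = []
    for x in trace:
        if x in stack:
            d = stack.index(x) + 1
            stack.remove(x)
        else:
            d = inf
        stack.insert(0, x)
        distances.append(d)
    return distances

def lru_hit_rate(trace, cache_slots):
    """An LRU cache with `cache_slots` slots hits a reference exactly
    when its stack distance is <= cache_slots."""
    ds = lru_stack_distances(trace)
    return sum(1 for d in ds if d <= cache_slots) / len(ds)

trace = ["a", "b", "a", "c", "b", "a", "a", "d", "b"]
for slots in (1, 2, 3):
    print(slots, round(lru_hit_rate(trace, slots), 3))
```

The patent goes further than this trace replay: it estimates per-element arrival and re-reference statistics and evaluates combinations of elements and cache sizes analytically.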

93 citations

Patent
21 Jan 1992
TL;DR: A simple mixed first level cache memory system as mentioned in this paper includes a level 1 cache (52) connected to a processor (54) by read data and write data lines (56) and (58).
Abstract: A simple mixed first level cache memory system (50) includes a level 1 cache (52) connected to a processor (54) by read data and write data lines (56) and (58). The level 1 cache (52) is connected to level 2 cache (60) by swap tag lines (62) and (64), swap data lines (66) and (68), multiplexer (70) and swap/read line (72). The level 2 cache (60) is connected to the next lower level in the memory hierarchy by write tag and write data lines (74) and (76). The next lower level in the memory hierarchy below the level 2 cache (60) is also connected by a read data line (78) through the multiplexer (70) and the swap/read line (72) to the level 1 cache (52). When processor (54) requires an instruction or data, it puts out an address on lines (80). If the instruction or data is present in the level 1 cache (52), it is supplied to the processor (54) on read data line (56). If the instruction or data is not present in the level 1 cache (52), the processor looks for it in the level 2 cache (60) by putting out the address of the instruction or data on lines (80). If the instruction or data is in the level 2 cache, it is supplied to the processor (54) through the level 1 cache (52) by means of a swap operation on tag swap lines (62) and (64), swap data lines (66) and (68), multiplexer (70) and swap/read data line (72). If the instruction or data is present in neither the level 1 cache (52) nor the level 2 cache (60), the address on lines (80) fetches the instruction or data from successively lower levels in the memory hierarchy as required via read data line (78), multiplexer (70) and swap/read data line (72). The instruction or data is then supplied from the level 1 cache to the processor (54).
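
The swap-based lookup path is straightforward to model in software. The following toy Python sketch (a hypothetical fully-associative LRU simplification, not the patented hardware organization) captures the three cases the abstract walks through: an L1 hit, an L2 hit serviced by swapping the line into L1, and a full miss filled from the next level through L1.

```python
class TwoLevelCache:
    """Toy model of the swap-based lookup described above: on an L1 miss
    that hits in L2, the requested line moves into L1 and the L1 victim
    drops into L2 (the "swap"); on a full miss the line is fetched from
    the next level through L1."""

    def __init__(self, l1_slots, l2_slots):
        self.l1, self.l2 = [], []           # most recently used first
        self.l1_slots, self.l2_slots = l1_slots, l2_slots

    def access(self, addr):
        if addr in self.l1:                 # case 1: L1 hit
            self.l1.remove(addr)
            level = "L1"
        elif addr in self.l2:               # case 2: L2 hit -> swap into L1
            self.l2.remove(addr)
            level = "L2"
        else:                               # case 3: fetch from memory
            level = "MEM"
        self.l1.insert(0, addr)
        if len(self.l1) > self.l1_slots:    # victim swaps down into L2
            self.l2.insert(0, self.l1.pop())
            if len(self.l2) > self.l2_slots:
                self.l2.pop()               # evict to next lower level
        return level

c = TwoLevelCache(l1_slots=1, l2_slots=2)
print([c.access(a) for a in ["a", "b", "a", "c", "a"]])
# ['MEM', 'MEM', 'L2', 'MEM', 'L2']
```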

93 citations

Posted Content
TL;DR: In this article, the authors derive an order-optimal scheme which judiciously shares cache memory among files with different popularities, and establish new information-theoretic lower bounds based on a sliding-window entropy inequality.
Abstract: To address the exponentially rising demand for wireless content, use of caching is emerging as a potential solution. It has been recently established that joint design of content delivery and storage (coded caching) can significantly improve performance over conventional caching. Coded caching is well suited to emerging heterogeneous wireless architectures which consist of a dense deployment of local-coverage wireless access points (APs) with high data rates, along with sparsely-distributed, large-coverage macro-cell base stations (BS). This enables design of coded caching-and-delivery schemes that equip APs with storage, and place content in them in a way that creates coded-multicast opportunities for combining with macro-cell broadcast to satisfy users even with different demands. Such coded-caching schemes have been shown to be order-optimal with respect to the BS transmission rate, for a system with single-level content, i.e., one where all content is uniformly popular. In this work, we consider a system with non-uniform popularity content which is divided into multiple levels, based on varying degrees of popularity. The main contribution of this work is the derivation of an order-optimal scheme which judiciously shares cache memory among files with different popularities. To show order-optimality we derive new information-theoretic lower bounds, which use a sliding-window entropy inequality, effectively creating a non-cutset bound. We also extend the ideas to when users can access multiple caches along with the broadcast. Finally we consider two extreme cases of user distribution across caches for the multi-level popularity model: a single user per cache (single-user setup) versus a large number of users per cache (multi-user setup), and demonstrate a dichotomy in the order-optimal strategies for these two extreme cases.
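
For readers new to coded caching, the coded-multicast opportunity the abstract relies on is easiest to see in the classic two-user, two-file example of Maddah-Ali and Niesen. The Python sketch below is background for that baseline scheme, not this paper's multi-level popularity scheme: one coded broadcast replaces two uncoded unicasts.

```python
# Classic 2-user / 2-file coded caching example (Maddah-Ali--Niesen).
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two equally popular files, each split into two halves.
A = b"AAAA" + b"aaaa"
B = b"BBBB" + b"bbbb"
A1, A2 = A[:4], A[4:]
B1, B2 = B[:4], B[4:]

# Placement phase (caches are filled before demands are known):
cache1 = {"A1": A1, "B1": B1}   # user 1 stores the first half of each file
cache2 = {"A2": A2, "B2": B2}   # user 2 stores the second half of each file

# Delivery phase: user 1 requests A, user 2 requests B. A single coded
# broadcast serves both, instead of unicasting A2 and B1 separately.
broadcast = xor(A2, B1)

# Each user XORs out the part it already cached to recover its file.
user1_file = cache1["A1"] + xor(broadcast, cache1["B1"])   # recovers A
user2_file = xor(broadcast, cache2["A2"]) + cache2["B2"]   # recovers B
assert user1_file == A and user2_file == B
```

This paper's contribution is how to split cache memory across popularity levels so that such multicast opportunities are preserved when files are not uniformly popular.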

93 citations

Proceedings ArticleDOI
31 Oct 2008
TL;DR: This paper analyzes two recently proposed cache designs intended to defeat cache-based side channel attacks by eliminating/obfuscating cache interferences, and identifies significant vulnerabilities and shortcomings in both.
Abstract: Software cache-based side channel attacks present a serious threat to computer systems. Previously proposed countermeasures were either too costly for practical use or only effective against particular attacks. Thus, a recent work identified cache interferences in general as the root cause and proposed two new cache designs, namely partition-locked cache (PLcache) and random permutation cache (RPcache), to defeat cache-based side channel attacks by eliminating/obfuscating cache interferences. In this paper, we analyze these new cache designs and identify significant vulnerabilities and shortcomings. We also propose possible solutions and improvements over the original designs to overcome the identified shortcomings.
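
As background, the index-randomization idea behind RPcache can be sketched in a few lines of Python. This is illustrative only; the real RPcache is a hardware design that also remaps dynamically when interference is detected, and the class and parameter names here are invented for the sketch. The point is that each protected process sees its own random permutation of cache sets, so the set contention an attacker observes no longer reveals which addresses the victim accessed.

```python
import random

class RPCacheIndexer:
    """Sketch of per-process index randomization: memory addresses are
    mapped to cache sets through a private random permutation, hiding
    the conventional address-to-set mapping from other processes."""

    def __init__(self, n_sets, seed=None):
        self.n_sets = n_sets
        self.perm = list(range(n_sets))
        random.Random(seed).shuffle(self.perm)   # per-process permutation

    def set_index(self, addr, line_bytes=64):
        conventional = (addr // line_bytes) % self.n_sets
        return self.perm[conventional]           # what contention reveals

victim = RPCacheIndexer(n_sets=64, seed=1)
other = RPCacheIndexer(n_sets=64, seed=2)
addr = 0x1F40
print(victim.set_index(addr), other.set_index(addr))  # usually differ
```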

93 citations

20 Nov 1995
TL;DR: The technique of static cache simulation is shown to make cache behavior predictable, countering the belief that cache memories introduce unpredictability into real-time systems that cannot be efficiently analyzed.
Abstract: This work takes a fresh look at the simulation of cache memories. It introduces the technique of static cache simulation that statically predicts a large portion of cache references. To efficiently utilize this technique, a method to perform efficient on-the-fly analysis of programs in general is developed and proved correct. This method is combined with static cache simulation for a number of applications. The application of fast instruction cache analysis provides a new framework to evaluate instruction cache memories that outperforms even the fastest techniques published. Static cache simulation is shown to address the issue of predicting cache behavior, contrary to the belief that cache memories introduce unpredictability to real-time systems that cannot be efficiently analyzed. Static cache simulation for instruction caches provides a large degree of predictability for real-time systems. In addition, an architectural modification through bit-encoding is introduced that provides fully predictable caching behavior. Even for regular instruction caches without architectural modifications, tight bounds for the execution time of real-time programs can be derived from the information provided by the static cache simulator. Finally, the debugging of real-time applications can be enhanced by displaying the timing information of the debugged program at breakpoints. The timing information is determined by simulating the instruction cache behavior during program execution and can be used, for example, to detect missed deadlines and locate time-consuming code portions. Overall, the technique of static cache simulation provides a novel approach to analyze cache memories and has been shown to be very efficient for numerous applications.
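
The core claim, that a large portion of cache references can be classified before run time, can be illustrated with a deliberately tiny Python sketch. Real static cache simulation performs a data-flow analysis over the whole control-flow graph; the sketch below handles only a single straight-line instruction path through a direct-mapped instruction cache, where every reference's outcome is fully determined statically (all parameters are illustrative).

```python
def classify_references(instr_addrs, n_lines, line_bytes=16):
    """Classify each instruction fetch of a straight-line sequence as
    always-hit or always-miss for a direct-mapped instruction cache.
    With one fixed path the run-time behavior is known in advance."""
    cache = [None] * n_lines             # memory block held by each line
    outcome = []
    for addr in instr_addrs:
        block = addr // line_bytes       # memory block of the instruction
        line = block % n_lines           # direct-mapped placement
        if cache[line] == block:
            outcome.append((hex(addr), "always-hit"))
        else:
            outcome.append((hex(addr), "always-miss"))
            cache[line] = block
    return outcome

# 0x40 maps to the same line as 0x00, so the repeat of 0x00 misses again.
for addr, cls in classify_references([0x00, 0x04, 0x10, 0x40, 0x00, 0x04],
                                     n_lines=4):
    print(addr, cls)
```

Summing worst-case hit/miss costs over such classifications is what lets the static cache simulator derive tight execution-time bounds for real-time programs.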

93 citations


Network Information
Related Topics (5)
Topic                     Papers    Citations   Related
Cache                     59.1K     976.6K      93%
Scalability               50.9K     931.6K      88%
Server                    79.5K     1.4M        88%
Network packet            159.7K    2.2M        83%
Dynamic Source Routing    32.2K     695.7K      83%
Performance Metrics
No. of papers in the topic in previous years

Year   Papers
2023   44
2022   117
2021   4
2020   8
2019   7
2018   20