
Cache invalidation

About: Cache invalidation is a research topic. Over the lifetime, 10,539 publications have been published within this topic, receiving 245,409 citations.


Papers
Journal ArticleDOI
01 May 2005
TL;DR: This paper proposes simple architectural extensions and adaptive policies for managing the L2 and L3 cache hierarchy in a CMP system and evaluates two mechanisms that improve cache effectiveness, observing a reduction in the overall execution time of up to 13%.
Abstract: With the ability to place large numbers of transistors on a single silicon chip, manufacturers have begun developing chip multiprocessors (CMPs) containing multiple processor cores, varying amounts of level 1 and level 2 caching, and on-chip directory structures for level 3 caches and memory. The level 3 cache may be used as a victim cache for both modified and clean lines evicted from on-chip level 2 caches. Efficient area and performance management of this cache hierarchy is paramount given the projected increase in access latency to off-chip memory. This paper proposes simple architectural extensions and adaptive policies for managing the L2 and L3 cache hierarchy in a CMP system. In particular, we evaluate two mechanisms that improve cache effectiveness. First, we propose the use of a small history table to provide hints to the L2 caches as to which lines are resident in the L3 cache. We employ this table to eliminate some unnecessary clean write backs to the L3 cache, reducing pressure on the L3 cache and utilization of the on-chip bus. Second, we examine the performance benefits of allowing write backs from L2 caches to be placed in neighboring, on-chip L2 caches rather than forcing them to be absorbed by the L3 cache. This not only reduces the capacity pressure on the L3 cache but also makes subsequent accesses faster since L2-to-L2 cache transfers have typically lower latencies than accesses to a large L3 cache array. We evaluate the performance improvement of these two designs, and their combined effect, on four commercial workloads and observe a reduction in the overall execution time of up to 13%.
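As a concrete illustration of the first mechanism, the sketch below models a residency-hint table in Python. The direct-mapped layout, table size, and update points are our assumptions for illustration; the paper specifies only that a small history table provides hints to the L2 caches about which lines are resident in L3.

```python
class L3ResidencyHints:
    """Small history table hinting which L2-evicted lines are already in L3.

    Direct-mapped array of partial tags (sizes are illustrative). Hints only
    ever suppress *clean* writebacks, so a wrong hint can cost a later
    off-chip access but can never lose data.
    """

    def __init__(self, num_entries=1024, line_bytes=64):
        self.num_entries = num_entries
        self.line_shift = line_bytes.bit_length() - 1   # 64-byte lines -> shift 6
        self.table = [None] * num_entries

    def _slot(self, addr):
        line = addr >> self.line_shift
        return line % self.num_entries, line // self.num_entries

    def note_resident(self, addr):
        """Record a line observed in L3 (e.g. on an L3 fill into an L2)."""
        idx, tag = self._slot(addr)
        self.table[idx] = tag

    def note_evicted(self, addr):
        """L3 evicted the line; drop the hint so the next clean eviction writes back."""
        idx, tag = self._slot(addr)
        if self.table[idx] == tag:
            self.table[idx] = None

    def skip_clean_writeback(self, addr):
        """On a clean L2 eviction: True means L3 likely holds the line already,
        so the writeback (and its on-chip bus traffic) can be elided."""
        idx, tag = self._slot(addr)
        return self.table[idx] == tag
```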

80 citations

Proceedings ArticleDOI
10 Jun 2003
TL;DR: An analytical model based on the fundamental components of the search process shows that node sizes much larger than the cache line size can yield better search performance for the CSB+-tree.
Abstract: In main-memory databases, the number of processor cache misses has a critical impact on the performance of the system. Cache-conscious indices are designed to improve performance by reducing the number of processor cache misses that are incurred during a search operation. Conventional wisdom suggests that the index's node size should be equal to the cache line size in order to minimize the number of cache misses and improve performance. As we show in this paper, this design choice ignores additional effects, such as the number of instructions executed and the number of TLB misses, which play a significant role in determining the overall performance. To capture the impact of node size on the performance of a cache-conscious B+ tree (CSB+-tree), we first develop an analytical model based on the fundamental components of the search process. This model is then validated with an actual implementation, demonstrating that the model is accurate. Both the analytical model and experiments confirm that using node sizes much larger than the cache line size can result in better search performance for the CSB+-tree.
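To make the node-size trade-off concrete, here is a toy Python cost model in the spirit of the paper's approach. All constants and functional forms (miss latencies, prefetch discount, instruction counts) are illustrative assumptions, not the paper's validated model.

```python
import math

def lookup_cost_ns(num_keys, node_bytes, key_bytes=4, line_bytes=64,
                   miss_ns=100.0, seq_line_ns=25.0, tlb_miss_ns=50.0,
                   instr_ns=1.0):
    """Toy per-lookup cost for a cache-conscious B+-tree as node size varies.

    Larger nodes shorten the tree (fewer levels, so fewer cache and TLB
    misses down the path) but touch more cache lines per node; lines after
    the first are assumed partially hidden by prefetching (seq_line_ns).
    """
    fanout = max(2, node_bytes // key_bytes)               # keys per node
    height = max(1, math.ceil(math.log(num_keys, fanout)))
    lines = max(1, node_bytes // line_bytes)
    lines_touched = min(lines, int(math.log2(lines)) + 1)  # ~binary search
    instrs = math.log2(fanout) * 4                         # compare/branch steps
    per_level = (miss_ns + (lines_touched - 1) * seq_line_ns
                 + tlb_miss_ns + instrs * instr_ns)
    return height * per_level

# Sweeping node sizes: the minimum often lands well above one cache line.
for nb in (64, 256, 1024, 4096):
    print(f"{nb:5d} B node -> {lookup_cost_ns(10_000_000, nb):7.1f} ns")
```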

80 citations

Patent
02 Nov 2011
TL;DR: In this patent, a multi-level cache comprises a plurality of cache levels, each configured to cache I/O request data pertaining to I/O requests of a different respective type and/or granularity.
Abstract: A multi-level cache comprises a plurality of cache levels, each configured to cache I/O request data pertaining to I/O requests of a different respective type and/or granularity. A cache device manager may allocate cache storage space to each of the cache levels. Each cache level maintains respective cache metadata that associates I/O request data with respective cache addresses. The cache levels monitor I/O requests within a storage stack, apply selection criteria to identify cacheable I/O requests, and service cacheable I/O requests using the cache storage device.
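The structure the claims describe can be sketched as follows. The IORequest shape, the type-based selection predicates, and the capacity numbers are hypothetical stand-ins, since the patent does not pin down an API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class IORequest:
    kind: str            # e.g. "file" vs "block" (hypothetical type tags)
    offset: int
    length: int
    data: bytes = b""

@dataclass
class CacheLevel:
    """One cache level, serving one request type/granularity."""
    name: str
    accepts: Callable[[IORequest], bool]       # selection criteria
    capacity: int                              # space granted by the manager
    used: int = 0
    metadata: Dict[tuple, int] = field(default_factory=dict)  # key -> address
    store: Dict[int, bytes] = field(default_factory=dict)
    next_addr: int = 0

    def service(self, req: IORequest) -> Optional[bytes]:
        key = (req.kind, req.offset, req.length)
        addr = self.metadata.get(key)
        if addr is not None:
            return self.store[addr]            # hit
        if req.data and self.used + req.length <= self.capacity:
            addr, self.next_addr = self.next_addr, self.next_addr + 1
            self.metadata[key] = addr          # maintain per-level metadata
            self.store[addr] = req.data
            self.used += req.length
        return None

class MultiLevelCache:
    """Monitors I/O requests and dispatches each to the first matching level."""
    def __init__(self, levels: List[CacheLevel]):
        self.levels = levels

    def handle(self, req: IORequest) -> Optional[bytes]:
        for level in self.levels:
            if level.accepts(req):             # apply selection criteria
                return level.service(req)
        return None                            # not cacheable; pass down the stack

# Hypothetical configuration: a file-grain level and a block-grain level.
cache = MultiLevelCache([
    CacheLevel("file",  lambda r: r.kind == "file",  capacity=1 << 20),
    CacheLevel("block", lambda r: r.kind == "block", capacity=4 << 20),
])
```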

79 citations

Patent
28 Oct 1997
TL;DR: A method and system for automatically refreshing documents in a cache, so that each document is refreshed no more often and no less often than needed, is presented.
Abstract: The invention provides a method and system for automatically refreshing documents in a cache, so that each particular document is refreshed no more often and no less often than needed. For each document, the cache estimates a probability distribution of times for client requests for that document and a probability distribution of times for server changes to that document. Times for refresh are selected for each particular document in response to both estimated distributions. The invention also provides a method and system for objectively estimating the value the cache provides to the system containing it. The cache estimates for each document a probability distribution of times for client requests, and determines a cumulative probability distribution reflecting the estimated marginal hit rate at the storage limit of the cache and the marginal advantage of adding storage to the cache.
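A compact sketch of the refresh-time selection follows. The patent estimates full probability distributions of request and change times; modeling both as Poisson processes with rates estimated from observed inter-arrival gaps is our simplification, purely for illustration.

```python
import math

class RefreshEstimator:
    """Per-document refresh scheduling from estimated request/change behavior.

    Assumption (ours, not the patent's): requests and changes are Poisson
    processes, so inter-arrival gaps are exponential and a rate estimate
    fully determines each distribution.
    """

    def __init__(self):
        self.request_gaps = []   # seconds between successive client requests
        self.change_gaps = []    # seconds between successive server changes

    def observe_request_gap(self, seconds):
        self.request_gaps.append(seconds)

    def observe_change_gap(self, seconds):
        self.change_gaps.append(seconds)

    @staticmethod
    def _rate(gaps):
        return len(gaps) / sum(gaps) if gaps else None   # events per second

    def next_refresh_delay(self, max_staleness_prob=0.2, min_request_prob=0.5):
        """Refresh no more often than needed: wait until P(changed) approaches
        the threshold, and no less often than needed: only refresh if a client
        request within that window is likely enough to justify the fetch."""
        change_rate = self._rate(self.change_gaps)
        request_rate = self._rate(self.request_gaps)
        if change_rate is None or request_rate is None:
            return None                                  # not enough history yet
        # P(changed within t) = 1 - exp(-change_rate * t); solve for t:
        t = -math.log(1.0 - max_staleness_prob) / change_rate
        p_request = 1.0 - math.exp(-request_rate * t)
        return t if p_request >= min_request_prob else None  # skip cold documents
```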

79 citations

Proceedings ArticleDOI
27 Jan 2003
TL;DR: A generalized cost function for cache replacement algorithms in mobile environments that can be adapted to various performance metrics and significantly improves performance in terms of query delay or bandwidth utilization.
Abstract: Caching frequently accessed data items on the client side is an effective technique for improving system performance in a mobile environment. Due to cache size limitations, cache replacement algorithms are used to find a suitable subset of items for eviction from the cache. In this paper, we propose a generalized cost function for cache replacement algorithms in mobile environments. The distinctive feature of our cost function is its generality: it can be adapted to various performance metrics by making the necessary changes. To demonstrate its practical effectiveness, we derive two specific functions by setting two different targets: minimizing the query delay and minimizing the downlink traffic. Detailed experiments are carried out to evaluate the proposed methodology. Compared to previous schemes, our algorithm significantly improves performance in terms of query delay or bandwidth utilization, depending on the target.
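One way to read the generalized-cost idea in code: plug a target-specific value function into a single replacement routine. The specific cost forms below (access probability times delay or bytes saved, normalized by size) are illustrative assumptions, not the paper's exact functions.

```python
from dataclasses import dataclass

@dataclass
class Item:
    key: str
    size: int            # bytes occupied in the client cache
    access_prob: float   # estimated probability of re-access
    fetch_delay: float   # seconds to re-fetch over the wireless link
    fetch_bytes: int     # downlink bytes to re-fetch

def delay_cost(item: Item) -> float:
    # Target: minimize query delay -> value of keeping = expected delay saved.
    return item.access_prob * item.fetch_delay

def traffic_cost(item: Item) -> float:
    # Target: minimize downlink traffic -> value = expected bytes saved.
    return item.access_prob * item.fetch_bytes

def make_room(cache: list, needed: int, cost) -> list:
    """Generalized replacement: evict items with the least retention value
    per byte until `needed` bytes are free. `cost` plugs in the chosen
    performance target, which is the point of the generalization."""
    victims, freed = [], 0
    for item in sorted(cache, key=lambda it: cost(it) / it.size):
        if freed >= needed:
            break
        victims.append(item)
        freed += item.size
    return victims
```

Swapping delay_cost for traffic_cost retargets the same eviction loop from query delay to downlink traffic, mirroring the two derived functions the paper evaluates.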

79 citations


Network Information

Related Topics (5)

Cache: 59.1K papers, 976.6K citations, 93% related
Scalability: 50.9K papers, 931.6K citations, 88% related
Server: 79.5K papers, 1.4M citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 83% related
Dynamic Source Routing: 32.2K papers, 695.7K citations, 83% related
Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2023   44
2022   117
2021   4
2020   8
2019   7
2018   20