
Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications on this topic have received 245,409 citations.


Papers
Patent
Josef Ezra1
17 Jun 2003
TL;DR: This patent describes various quality of service (QOS) parameters that may be used to characterize device behavior in connection with a cache. The QOS parameters may be included in configuration data.
Abstract: Described are various quality of service (QOS) parameters that may be used in characterizing device behavior in connection with a cache. A Partition parameter indicates which portions of available cache may be used with data of an associated device. A Survival parameter indicates how long data of an associated device should remain in cache after use. A Linearity parameter indicates a likelihood factor that subsequent data tracks may be used, so this parameter may be used in deciding whether to prefetch data. A Flush parameter indicates how long data should remain in cache after a write-pending slot is returned to cache after being written out to the actual device. The QOS parameters may be included in configuration data. The QOS parameter values may be read and/or modified.
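The four per-device QOS parameters described above might be modeled as a small record consulted by the cache controller. A minimal Python sketch; the field names, types, and prefetch threshold are assumptions for illustration, not the patent's actual interface:

```python
from dataclasses import dataclass

@dataclass
class DeviceCacheQOS:
    """Illustrative per-device QOS record; structure is assumed."""
    partition: int    # bitmask of cache partitions this device may use
    survival: float   # seconds data should remain in cache after use
    linearity: float  # likelihood (0.0-1.0) that subsequent data tracks
                      # will be used; drives the prefetch decision
    flush: float      # seconds a write-pending slot stays in cache after
                      # being written out to the actual device

    def should_prefetch(self, threshold: float = 0.5) -> bool:
        # Prefetch the next track when the access pattern is judged
        # sufficiently sequential (hypothetical policy).
        return self.linearity >= threshold

qos = DeviceCacheQOS(partition=0b0011, survival=30.0, linearity=0.8, flush=10.0)
print(qos.should_prefetch())  # → True: linearity 0.8 exceeds the 0.5 threshold
```

Since the parameters live in configuration data, such a record would typically be populated at device-configuration time and read on each cache decision.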

64 citations

01 Jan 1987
TL;DR: This dissertation presents significant extensions to the stack analysis technique (Mattson et al., 1970), which computes the read miss ratio for all cache sizes in a single trace-driven simulation, and uses them to study caching in a network file system.
Abstract: This dissertation describes innovative techniques for efficiently analyzing a wide variety of cache designs, and uses these techniques to study caching in a network file system. The techniques are significant extensions to the stack analysis technique (Mattson et al., 1970) which computes the read miss ratio for all cache sizes in a single trace-driven simulation. Stack analysis is extended to allow the one-pass analysis of: (1) writes in a write-back cache, including periodic write-back and deletions, important factors in file system cache performance. (2) sub-block or sector caches, including load-forward prefetching. (3) multi-processor caches in a shared-memory system, for an entire class of consistency protocols, including all of the well-known protocols. (4) client caches in a network file system, using a new class of consistency protocols. The techniques are completely general and apply to all levels of the memory hierarchy, from processor caches to disk and file system caches. The dissertation also discusses the use of hash tables and binary trees within the simulator to further improve performance for some types of traces. Using these techniques, the performance of all cache sizes can be computed in little more than twice the time required to simulate a single cache size, and often in just 10% more time. In addition to presenting techniques, this dissertation also demonstrates their use by studying client caching in a network file system. It first reports the extent of file sharing in a UNIX environment, showing that a few shared files account for two-thirds of all accesses, and nearly half of these are to files which are both read and written. It then studies different cache consistency protocols, write policies, and fetch policies, reporting the miss ratio and file server utilization for each. 
Four cache consistency protocols are considered: a polling protocol that uses the server for all consistency controls; a protocol designed for single-user files; one designed for read-only files; and one using write-broadcast to maintain consistency. It finds that the choice of consistency protocol has a substantial effect on performance; both the read-only and write-broadcast protocols showed half the misses and server load of the polling protocol. The choice of write or fetch policy made a much smaller difference.
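The core stack-analysis idea that the dissertation extends fits in a few lines: for a fully associative LRU cache, each reference's stack distance (its depth in an LRU-ordered stack) determines a hit for every cache size at least that large, so a single pass over the trace yields miss ratios for all sizes. A minimal sketch of the base technique only, not the dissertation's extensions for writes, sector caches, or multiprocessors:

```python
def stack_distances(trace):
    """One-pass Mattson stack analysis for a fully associative LRU cache."""
    stack = []       # most recently used block at index 0
    distances = []
    for block in trace:
        if block in stack:
            d = stack.index(block) + 1  # 1-based stack distance
            stack.remove(block)
        else:
            d = float('inf')            # cold miss: never seen before
        stack.insert(0, block)          # block becomes most recently used
        distances.append(d)
    return distances

def miss_ratio(distances, cache_size):
    # A cache of size C misses exactly the references with distance > C.
    misses = sum(1 for d in distances if d > cache_size)
    return misses / len(distances)

trace = ['a', 'b', 'c', 'a', 'b', 'd', 'a']
ds = stack_distances(trace)
# The single pass above now answers for every cache size:
for size in (1, 2, 3):
    print(size, miss_ratio(ds, size))
```

This linear-list sketch is quadratic in trace length; the dissertation's discussion of hash tables and balanced trees inside the simulator addresses exactly this cost.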

64 citations

Patent
30 Mar 2001
TL;DR: A data storage method comprises the steps of selecting a data set from a data base (104), assigning the selected data set a cache time attribute defining a cache storage period, and storing the data set in a cache memory (106) for a time period defined by said cache time attribute, as discussed by the authors.
Abstract: A data storage method comprises the steps of selecting a data set from a data base (104), assigning the selected data set a cache time attribute defining a cache storage period, and storing the data set in a cache memory (106) for a time period defined by said cache time attribute. A storage time period in the cache memory can thus be assigned individually to every data set, depending on parameters such as the data volume of the data set, the access frequency, etc., independently of the output medium, e.g. an HTML, XML or WML document.
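The method above amounts to a cache in which every entry carries its own cache-time attribute (a per-entry time-to-live). A minimal sketch assuming lazy invalidation on read; the class and method names are illustrative, not from the patent:

```python
import time

class TTLCache:
    """Cache where each data set has an individual storage period."""
    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, cache_time):
        # cache_time is the per-data-set storage period in seconds,
        # chosen e.g. from the data volume or access frequency.
        self._store[key] = (value, time.monotonic() + cache_time)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # storage period elapsed: invalidate
            return None
        return value

cache = TTLCache()
cache.put("report", "<html>...</html>", cache_time=60.0)
print(cache.get("report") is not None)  # → True within the 60 s period
```

Because the period is attached to the data set rather than to the cache or the output document, the same entry can back an HTML, XML or WML rendering with one consistent lifetime.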

64 citations

Patent
29 Jan 1997
TL;DR: A caching tag manufactured to be located on the same chip as the cache controller allows faster data access than if the caching tag were located on a separate chip, as discussed by the authors.
Abstract: A computer system cache memory has a caching tag which stores a subset of the L2 cache memory tag store. The caching tag is a smaller, faster memory device than the L2 cache memory. The cache memory latency is reduced because the tag access time and tag comparison time are improved by the caching tag. The caching tag may be manufactured to be located on the same chip as the cache controller, which allows faster data access than if the caching tag were located on a separate chip.
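The idea of keeping a small, fast subset of the L2 tag store near the controller can be sketched as a two-level tag lookup: consult the on-chip tag cache first, and fall back to the full tag store on a tag-cache miss. The structure and refill policy below are assumptions for illustration:

```python
class TagCache:
    """Small on-chip subset of the full L2 tag store (sketch)."""
    def __init__(self, size, full_tag_store):
        self.size = size
        self.entries = {}           # set index -> tag (subset of the full store)
        self.full = full_tag_store  # complete, slower L2 tag store

    def lookup(self, set_index, tag):
        # Fast path: the on-chip tag cache resolves the comparison.
        if self.entries.get(set_index) == tag:
            return True, "fast"
        # Slow path: consult the full tag store off the fast path.
        full_tag = self.full.get(set_index)
        if len(self.entries) < self.size or set_index in self.entries:
            self.entries[set_index] = full_tag  # refill (simple policy)
        return full_tag == tag, "slow"

full_store = {0: 0xAB, 1: 0xCD}
tags = TagCache(size=4, full_tag_store=full_store)
print(tags.lookup(0, 0xAB))  # first access goes to the full store
print(tags.lookup(0, 0xAB))  # repeat access is resolved by the tag cache
```

The latency benefit in the patent comes from the fast path: repeated accesses to recently checked sets avoid the off-chip tag access and comparison entirely.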

64 citations

Proceedings ArticleDOI
07 Jun 2004
TL;DR: The results show the technique is accurate to within 20% of miss rate for uniprocessors and was able to reduce the die area of a multiprocessor chip by a projected 14% over a naive design by accurately sizing caches for each processor.
Abstract: As multiprocessor systems-on-chip become a reality, performance modeling becomes a challenge. To quickly evaluate many architectures, some type of high-level simulation is required, including high-level cache simulation. We propose to perform this cache simulation by defining a metric that represents memory behavior independently of cache structure and back-annotating it into the original application. While the annotation phase is complex, requiring time comparable to normal address-trace-based simulation, it need only be performed once per application set, and thus enables simulation to be sped up by a factor of 20 to 50 over trace-based simulation. This is important for embedded systems, as software is often evaluated against many input sets and many architectures. Our results show the technique is accurate to within 20% of miss rate for uniprocessors and was able to reduce the die area of a multiprocessor chip by a projected 14% over a naive design by accurately sizing caches for each processor.

64 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Server: 79.5K papers, 1.4M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (83% related)
Dynamic Source Routing: 32.2K papers, 695.7K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2023  44
2022  117
2021  4
2020  8
2019  7
2018  20