Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have appeared on this topic, receiving 245,409 citations.


Papers
Patent
25 Mar 2010
TL;DR: In this article, a method is described for optimising the distribution of data objects between caches in a cache domain of a resource-limited network: object information, including the request frequency of each requested data object and the locations of the caches at which the requests were received, is collated and stored at a cache manager.
Abstract: There is described a method for optimising the distribution of data objects between caches in a cache domain of a resource limited network. User requests for data objects are received at caches in the cache domain. A notification is sent from each cache at which a request is received to a cache manager. The notification reports the user request and identifies the requested data object. At the cache manager, object information including the request frequency of each requested data object and the locations of the caches at which the requests were received is collated and stored. At the cache manager, objects for distribution within the cache domain are identified on the basis of the object information. Instructions are sent from the cache manager to the caches to distribute data objects stored in those caches between themselves. The objects are classified into classes according to popularity, the classes including a high popularity class comprising objects which should be distributed to all caches in the cache domain, a medium popularity class comprising objects which should be distributed to a subset of the caches in the cache domain, and a low popularity class comprising objects which should not be distributed.
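The popularity-based classification the abstract describes can be sketched as follows. This is a minimal illustration; the threshold values and names are assumptions, since the patent gives no concrete figures.

```python
from collections import Counter

# Hypothetical thresholds -- the patent does not specify concrete values.
HIGH_THRESHOLD = 100   # requests: distribute to all caches in the domain
MEDIUM_THRESHOLD = 10  # requests: distribute to a subset of the caches

def classify_objects(request_log):
    """Collate per-object request counts (as the cache manager would from
    cache notifications) and assign each object a popularity class."""
    freq = Counter(request_log)
    classes = {}
    for obj, count in freq.items():
        if count >= HIGH_THRESHOLD:
            classes[obj] = "high"     # replicate everywhere
        elif count >= MEDIUM_THRESHOLD:
            classes[obj] = "medium"   # replicate to a subset
        else:
            classes[obj] = "low"      # do not distribute
    return classes

log = ["a"] * 150 + ["b"] * 20 + ["c"] * 3
print(classify_objects(log))  # {'a': 'high', 'b': 'medium', 'c': 'low'}
```

The cache manager would then issue distribution instructions per class; that control traffic is omitted here.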

137 citations

Proceedings ArticleDOI
12 Aug 1996
TL;DR: This paper presents a simple but efficient novel hardware design called the non-temporal streaming (NTS) cache that supplements the conventional direct-mapped cache with a parallel fully associative buffer.
Abstract: Direct-mapped caches are often plagued by conflict misses because they lack the associativity to store more than one memory block in each set. However, some blocks that have no temporal locality actually cause program execution degradation by displacing blocks that do manifest temporal behavior. In this paper, we present a simple but efficient novel hardware design called the non-temporal streaming (NTS) cache that supplements the conventional direct-mapped cache with a parallel fully associative buffer. Every cache block loaded into the main cache is monitored for temporal behavior by a hardware detection unit. Cache blocks identified as nontemporal are allocated to the buffer on subsequent requests. Our simulations show that the NTS Cache not only provides a performance improvement over the conventional direct-mapped cache, but can also save on-chip area. For some numerical programs like FFTPDE, APPSP and APPBT from the NAS benchmark suite, an integral NTS Cache of size 9 KB (i.e., 8 KB direct-mapped cache plus 1 KB NT buffer) performs as well as a 16 KB conventional direct-mapped cache.
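The NTS idea can be illustrated with a toy simulator: a direct-mapped main cache plus a small fully associative buffer, where blocks that were evicted without being re-referenced are flagged non-temporal and routed to the buffer on later misses. The detection logic here is deliberately simplified relative to the paper's hardware unit, and all names are illustrative.

```python
from collections import OrderedDict

class NTSCache:
    """Toy sketch of a non-temporal streaming (NTS) cache."""

    def __init__(self, main_sets=8, buffer_size=2):
        self.main_sets = main_sets
        self.main = {}               # set index -> (block, was_reused)
        self.buffer = OrderedDict()  # fully associative, LRU-ordered
        self.buffer_size = buffer_size
        self.non_temporal = set()    # blocks flagged as non-temporal

    def access(self, block):
        idx = block % self.main_sets          # direct-mapped indexing
        if idx in self.main and self.main[idx][0] == block:
            self.main[idx] = (block, True)    # re-referenced: temporal
            return "hit-main"
        if block in self.buffer:
            self.buffer.move_to_end(block)    # refresh LRU position
            return "hit-buffer"
        # Miss: place according to the block's past temporal behaviour.
        if block in self.non_temporal:
            self.buffer[block] = True
            if len(self.buffer) > self.buffer_size:
                self.buffer.popitem(last=False)   # evict LRU buffer entry
            return "miss-to-buffer"
        victim = self.main.get(idx)
        if victim is not None and not victim[1]:
            self.non_temporal.add(victim[0])  # evicted without reuse
        self.main[idx] = (block, False)
        return "miss-to-main"
```

For example, a streaming block that keeps conflicting in the main cache without reuse ends up served from the buffer, leaving the direct-mapped sets for temporal data.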

137 citations

Patent
07 Apr 1998
TL;DR: In this article, the authors propose a centralized cache directory for a computer network system: each station that caches a data object received from a remote network reports it to the directory, and the local network server first checks the central cache directory to see whether a request can be satisfied at one of the local stations.
Abstract: In a computer network system, the caches at individual stations are available to other stations. A central cache directory is maintained at a network server. Each time a station caches a data object received from a remote network, it informs the central cache directory. When a station comes online, it is asked to send a list of the contents of its cache. Whenever a station seeks an object from the remote network, the local network server first checks the central directory cache to see if the request can be satisfied at one of the local stations. Only if it cannot is the requested object retrieved from the remote network.
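The directory protocol in the abstract maps naturally onto a small data structure: stations report cache contents on coming online, notify the directory on each new cached object, and lookups consult the directory before going remote. A minimal sketch, with illustrative method names not taken from the patent:

```python
class CentralCacheDirectory:
    """Sketch of the central cache directory held at the network server."""

    def __init__(self):
        self.directory = {}   # object id -> set of station ids holding it

    def register_station(self, station_id, cached_objects):
        # A station coming online sends the list of its cache contents.
        for obj in cached_objects:
            self.directory.setdefault(obj, set()).add(station_id)

    def notify_cached(self, station_id, obj):
        # A station informs the directory each time it caches a new object.
        self.directory.setdefault(obj, set()).add(station_id)

    def locate(self, obj):
        # Return a local station that can serve the object, or None,
        # in which case the object is retrieved from the remote network.
        holders = self.directory.get(obj)
        return next(iter(holders)) if holders else None
```

Invalidation (a station dropping an object from its cache) would need a corresponding removal message, which the abstract does not detail.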

137 citations

Journal ArticleDOI
TL;DR: An analytical access time model for on-chip cache memories is described that shows the dependence of the cache access time on the cache parameters; it is shown that for given C, B, and A, optimum array configuration parameters can be chosen to minimize the access time.
Abstract: An analytical access time model for on-chip cache memories that shows the dependence of the cache access time on the cache parameters is described. The model includes general cache parameters, such as cache size (C), block size (B), and associativity (A), and array configuration parameters that are responsible for determining the subarray aspect ratio and the number of subarrays. With this model, a large cache design space can be covered, which cannot be done using only SPICE circuit simulation within a limited time. Using the model, it is shown that for given C, B, and A, optimum array configuration parameters can be used to minimize the access time. If the optimum array parameters are used, the optimum access time is roughly proportional to log(cache size); with those parameters, larger block size gives smaller access time, but larger associativity does not, because of the increase in data-bus capacitances.
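The headline scaling result can be written schematically as below; the constants are illustrative fitting parameters, not values from the paper:

```latex
t_{\text{access}} \approx t_0 + k \log_2 C
\quad \text{(optimum array parameters, fixed } B \text{ and } A\text{)}
```

Here $C$ is the cache size, and $t_0$, $k$ absorb technology- and layout-dependent delays; the paper derives these from the subarray decoding, wordline, bitline, and data-bus delay components.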

136 citations

Patent
18 Nov 1998
TL;DR: In this paper, the cache manager attempts to free space needed for caching the next object by deleting files from the cache if no server updates are pending and if such deletion will provide the needed space.
Abstract: A system and method for managing a mobile file system cache to maximize data storage and reduce problems from cache full conditions. Cache management automatically determines when the space available in the cache falls below a user-specified threshold. The cache manager attempts to free space needed for caching the next object. Files are deleted from the cache if no server updates are pending and if such deletion will provide the needed space. If automatic deletion does not provide sufficient space, the user is prompted for action. The system user can control the cache by increasing or reducing its size and drive allocation and can explicitly evict clean files from the cache. Cache expansion can be to logical or physical storage devices different than those on which the original cache is stored. The system enables separate storage of temporary files allowing identification and deletion of such files.
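The cache-full handling the abstract describes (free space below a threshold, delete clean files, fall back to prompting the user) can be sketched as below. The data layout and names are assumptions for illustration; the actual patent covers drive allocation and temporary-file handling not shown here.

```python
def free_cache_space(files, free_bytes, needed_bytes):
    """Try to make room for the next object to be cached.

    files: dict of name -> {"size": int, "pending_update": bool}, where
    pending_update means a server update for the file is still unsynced.
    Returns (new_free_bytes, deleted_names, user_prompt_needed).
    """
    deleted = []
    # Delete largest clean files first (an illustrative policy choice).
    for name, meta in sorted(files.items(),
                             key=lambda kv: kv[1]["size"], reverse=True):
        if free_bytes >= needed_bytes:
            break
        if not meta["pending_update"]:   # never evict files awaiting sync
            free_bytes += meta["size"]
            deleted.append(name)
    for name in deleted:
        del files[name]
    # If automatic deletion could not make room, the user is prompted.
    return free_bytes, deleted, free_bytes < needed_bytes
```

Files with pending server updates are never candidates for automatic deletion, matching the abstract's constraint that only clean files may be evicted without user action.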

136 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
93% related
Scalability
50.9K papers, 931.6K citations
88% related
Server
79.5K papers, 1.4M citations
88% related
Network packet
159.7K papers, 2.2M citations
83% related
Dynamic Source Routing
32.2K papers, 695.7K citations
83% related
Performance Metrics
No. of papers in the topic in previous years

Year	Papers
2023	44
2022	117
2021	4
2020	8
2019	7
2018	20