Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published within this topic, receiving 245,409 citations.


Papers
Patent
Gregory Tad Kishi
05 May 1998
TL;DR: In this article, a cache is managed by a predictive cache management engine that evaluates cache contents and purges entries unlikely to receive sufficient future cache hits, based on a single-output back-propagation neural network trained in response to various event triggers.
Abstract: In a data storage system, a cache is managed by a predictive cache management engine that evaluates cache contents and purges entries unlikely to receive sufficient future cache hits. The engine includes a single-output back-propagation neural network that is trained in response to various event triggers. Accesses to stored datasets are logged in a data access log, and log entries are removed according to predefined expiration criteria. In response to access of a cached dataset or expiration of its log entry, the cache management engine prepares training data. This is achieved by determining characteristics of the dataset at various past times between the time of the access/expiration and the time of last access, and providing these characteristics and the times of access as input to train the neural network. As another part of training, the cache management engine provides the neural network with output representing the expiration or access of the dataset. According to a predefined schedule, the cache management engine operates the trained neural network to generate scores for cached datasets, ranking the datasets relative to each other. According to this or a different schedule, the cache management engine reviews the scores, identifies one or more datasets with the lowest scores, and purges those datasets from the cache.
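The purge step described above reduces to scoring every cached dataset and evicting the lowest-ranked one. The C sketch below illustrates only that step, with the trained neural network replaced by a stub recency/frequency score; the CacheEntry structure, score_dataset, and all values are illustrative assumptions, not taken from the patent.

```c
/* Minimal sketch of the score-and-purge step from the abstract.
 * The trained single-output network is stubbed out as score_dataset();
 * all names and structures here are illustrative, not from the patent. */
#include <stdio.h>
#include <time.h>

#define CACHE_SLOTS 4

typedef struct {
    const char *name;
    time_t      last_access;
    long        hit_count;
    double      score;      /* relative likelihood of future hits */
} CacheEntry;

/* Stand-in for the trained neural network: any function that maps
 * dataset characteristics to a relative score would fit here. */
static double score_dataset(const CacheEntry *e, time_t now) {
    double age = difftime(now, e->last_access) + 1.0;
    return e->hit_count / age;   /* crude recency/frequency proxy */
}

int main(void) {
    time_t now = time(NULL);
    CacheEntry cache[CACHE_SLOTS] = {
        { "dataset-a", now - 10,   50, 0 },
        { "dataset-b", now - 3600,  5, 0 },
        { "dataset-c", now - 60,   20, 0 },
        { "dataset-d", now - 7200,  2, 0 },
    };

    /* Score all cached datasets, then purge the one with the lowest score. */
    int victim = 0;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        cache[i].score = score_dataset(&cache[i], now);
        if (cache[i].score < cache[victim].score)
            victim = i;
    }
    printf("purging %s (score %.4f)\n",
           cache[victim].name, cache[victim].score);
    return 0;
}
```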

65 citations

Book Chapter
26 Mar 2007
TL;DR: An analytical model for time-driven cache attacks is presented that accurately forecasts the strength of a symmetric key cryptosystem based on 3 simple parameters: the number of lookup tables, the size of the lookup tables, and the length of the microprocessor's cache line.
Abstract: Cache attacks exploit side-channel information that is leaked by a microprocessor's cache. There has been a significant amount of research effort on the subject to analyze and identify cache side-channel vulnerabilities since early 2002. Experimental results support the fact that the effectiveness of a cache attack depends on the particular implementation of the cryptosystem under attack and on the cache architecture of the device this implementation is running on. Yet, the precise effect of the mutual impact between the software implementation and the cache architecture has remained unknown. In this manuscript, we explain the effect and present an analytical model for time-driven cache attacks that accurately forecasts the strength of a symmetric key cryptosystem based on 3 simple parameters: (1) the number of lookup tables; (2) the size of the lookup tables; and (3) the length of the microprocessor's cache line. The accuracy of the model has been experimentally verified on 3 different platforms with different implementations of the AES algorithm attacked by adversaries with different capabilities.
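As a rough illustration of how the three named parameters interact, the sketch below derives the cache-line quantities such a model is built on, using assumed example values for a typical T-table AES implementation; the paper's actual formula is not reproduced here.

```c
/* Back-of-the-envelope use of the three model parameters named in the
 * abstract. This only derives the quantities the model is built on;
 * the values are assumed examples for a 32-bit T-table AES. */
#include <stdio.h>

int main(void) {
    int num_tables  = 4;      /* (1) number of lookup tables        */
    int table_bytes = 1024;   /* (2) size of each lookup table      */
    int line_bytes  = 64;     /* (3) microprocessor cache line size */
    int entry_bytes = 4;      /* one 32-bit table entry             */

    int lines_per_table  = table_bytes / line_bytes;   /* 16 */
    int entries_per_line = line_bytes / entry_bytes;   /* 16 */

    /* A time-driven attacker observes hits/misses at cache-line
     * granularity, so entries sharing a line are indistinguishable:
     * fewer, larger lines mean coarser leakage. */
    printf("%d tables, %d lines each, %d entries per line\n",
           num_tables, lines_per_table, entries_per_line);
    return 0;
}
```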

65 citations

Patent
Millind Mittal
17 Dec 1996
TL;DR: In this article, the locality hint is used to identify the lowest level at which management of cache allocation is desired, and cache memory is allocated at that level and any higher level(s).
Abstract: A computer system and method in which allocation of a cache memory is managed by utilizing a locality hint value included within an instruction. When a processor accesses a memory for transfer of data between the processor and the memory, that access can be allocated or not allocated in the cache memory. The locality hint included within the instruction controls whether the cache allocation is to be made. When a plurality of cache memories are present, they are arranged into a cache hierarchy and a locality value is assigned to each level of the cache hierarchy where allocation control is desired. The locality hint may be used to identify the lowest level where management of cache allocation is desired, and cache memory is allocated at that level and any higher level(s). The locality hint value is based on spatial and/or temporal locality for the data associated with the access. Data is recognized at each cache hierarchy level depending on the attributes associated with the data at that level. If the locality hint identifies a particular access for data as temporal or non-temporal with respect to a particular cache level, the particular access may be determined to be temporal or non-temporal with respect to the higher and lower cache levels.
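On x86, a software-visible hint of this temporal/non-temporal kind is exposed through the prefetch intrinsics, which can serve as a concrete illustration of hint-controlled cache allocation; the sketch below is not claimed to be the patented mechanism, only an analogous hint available in practice.

```c
/* Illustration of temporal vs. non-temporal locality hints using the
 * x86 prefetch intrinsics. Not claimed to be the patent's mechanism;
 * compile on an x86 target, e.g. gcc -O2 file.c */
#include <xmmintrin.h>
#include <stdio.h>

#define N 1024

int main(void) {
    static float data[N];

    for (int i = 0; i < N; i++) {
        /* _MM_HINT_T0: temporal hint, allocate into all cache levels. */
        _mm_prefetch((const char *)&data[i], _MM_HINT_T0);
        data[i] = (float)i;
    }

    float sum = 0.0f;
    for (int i = 0; i < N; i++) {
        /* _MM_HINT_NTA: non-temporal hint, minimize cache pollution
         * for data that will not be reused soon. */
        _mm_prefetch((const char *)&data[i], _MM_HINT_NTA);
        sum += data[i];
    }
    printf("sum = %f\n", sum);
    return 0;
}
```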

65 citations

Patent
15 Jul 2002
TL;DR: In this article, a link to a resource is constructed so that it contains all information necessary to make a cache correctness decision without having to query the server hosting the linked resource, with the decision made with respect to the contents of the linked resource rather than metadata about the resource (e.g., name, creation date, location).
Abstract: A document may be received over a network that contains a link to a resource. Various contexts, including Internet browsers, can operate more effectively, and make better use of available bandwidth, by caching network resources so that repeated requests for cached resources can be satisfied from a local cache, avoiding a duplicative transfer from a server. Instead of utilizing typical cache operations, such as those taught in Request For Comments (RFC) 2616, where the server hosting the resource is queried to resolve cache correctness, a link to a resource is constructed so that it contains all information necessary to make a cache correctness decision without having to query the server hosting the linked resource. In addition, the link may be constructed so that the cache determination is made with respect to the contents of the linked resource, rather than with respect to metadata about the resource, e.g., name, creation date, location, etc.
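One minimal way to realize such a self-validating link is to embed a digest of the resource's contents in the URL, so a cache can compare digests locally instead of querying the server. The sketch below assumes a hypothetical URL shape and uses FNV-1a as a stand-in digest; a real deployment would use a cryptographic hash.

```c
/* Sketch of a link that carries a digest of the resource's contents,
 * so a cache can decide correctness without contacting the server.
 * FNV-1a stands in for a real digest; the URL shape is illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a64(const void *buf, size_t len) {
    const unsigned char *p = buf;
    uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;            /* FNV prime */
    }
    return h;
}

int main(void) {
    const char *content = "body of the linked resource";
    uint64_t digest = fnv1a64(content, strlen(content));

    /* The link embeds the content digest: if a cached copy's digest
     * matches the one in the link, the copy is known to be current. */
    char link[128];
    snprintf(link, sizeof link,
             "https://example.com/resource?v=%016llx",
             (unsigned long long)digest);
    printf("%s\n", link);
    return 0;
}
```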

65 citations

Journal Article
TL;DR: A utility-based cache replacement policy, least utility value (LUV), is proposed to improve data availability and reduce the local cache miss ratio; simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
Abstract: Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client mobility make cache management a challenge. In this paper, we propose a utility-based cache replacement policy, least utility value (LUV), to improve data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and the data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as its replacement policy. In ZC, the one-hop neighbors of a client form a cooperation zone, since the cost of communicating with them is low in terms of both energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV-based ZC caching strategy. The results show that the LUV replacement policy substantially outperforms the LRU policy.
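The eviction decision can be sketched as below. The exact LUV utility function is defined in the paper, so the weighting used here (access probability times refetch distance times remaining validity, divided by size) is only an assumed stand-in that combines the four listed factors.

```c
/* Sketch of a least-utility-value (LUV) eviction decision. The
 * utility form here is an illustrative stand-in, not the paper's. */
#include <stdio.h>

#define ITEMS 3

typedef struct {
    const char *id;
    double access_prob;   /* estimated probability of future access   */
    int    distance_hops; /* hops to the data source or nearest cache */
    double validity;      /* remaining coherency, in [0, 1]           */
    int    size_bytes;    /* larger items cost more cache space       */
} Item;

static double utility(const Item *it) {
    /* Keep items that are likely to be accessed, expensive to refetch,
     * still coherent, and small. */
    return it->access_prob * it->distance_hops * it->validity
           / (double)it->size_bytes;
}

int main(void) {
    Item cache[ITEMS] = {
        { "item-a", 0.60, 3, 0.9, 2048 },
        { "item-b", 0.10, 1, 0.5, 4096 },
        { "item-c", 0.30, 5, 0.8, 1024 },
    };

    /* Evict the entry with the least utility value. */
    int victim = 0;
    for (int i = 1; i < ITEMS; i++)
        if (utility(&cache[i]) < utility(&cache[victim]))
            victim = i;

    printf("evict %s (utility %.6f)\n",
           cache[victim].id, utility(&cache[victim]));
    return 0;
}
```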

65 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Server: 79.5K papers, 1.4M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (83% related)
Dynamic Source Routing: 32.2K papers, 695.7K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   44
2022   117
2021   4
2020   8
2019   7
2018   20