
Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published within this topic, receiving 245,409 citations.


Papers
Patent
14 Jul 2014
TL;DR: In this patent, cache optimization techniques are employed to organize resources within caches such that the most requested content (e.g., the most popular content) is more readily available; as resources are requested, they propagate through a cache server hierarchy associated with the service provider.
Abstract: Resource management techniques, such as cache optimization, are employed to organize resources within caches such that the most requested content (e.g., the most popular content) is more readily available. A service provider utilizes content expiration data as indicative of resource popularity. As resources are requested, the resources propagate through a cache server hierarchy associated with the service provider. More frequently requested resources are maintained at edge cache servers based on shorter expiration data that is reset with each repeated request. Less frequently requested resources are maintained at higher levels of a cache server hierarchy based on longer expiration data associated with cache servers higher on the hierarchy.

136 citations
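The expiration-reset scheme the patent describes can be sketched in a few lines: each cache tier holds items with a TTL, a hit at the edge resets the item's expiration (so popular items stay at the edge), and a miss falls through to a longer-lived parent tier. This is a minimal illustration; the class and parameter names are assumptions, not the patent's API.

```python
import time

class TTLCache:
    """One tier of a cache hierarchy where a hit resets the item's expiration."""
    def __init__(self, ttl, parent=None):
        self.ttl = ttl        # expiration window for this tier (seconds)
        self.parent = parent  # next-higher tier in the hierarchy, if any
        self.store = {}       # key -> (value, expires_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is not None and entry[1] > now:
            # Hit: reset the expiration, so frequently requested items
            # remain at this tier (shorter TTL, refreshed on each request).
            self.store[key] = (entry[0], now + self.ttl)
            return entry[0]
        # Miss or expired: fall through to the parent tier, then cache locally.
        if self.parent is not None:
            value = self.parent.get(key, now)
            if value is not None:
                self.store[key] = (value, now + self.ttl)
            return value
        return None

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self.store[key] = (value, now + self.ttl)

# Higher tiers use longer expirations; the edge uses a short, hit-reset TTL.
origin = TTLCache(ttl=3600)
edge = TTLCache(ttl=60, parent=origin)
```

With this structure, a resource requested repeatedly keeps its edge entry alive indefinitely, while a rarely requested one expires at the edge but survives longer at the origin tier.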

Patent
24 Jan 2012
TL;DR: In this patent, an apparatus, system, and method for managing a cache are described; a cache management module manages at least one cache unit based on cache management information exchanged with one or more cache clients.
Abstract: An apparatus, system, and method are disclosed for managing a cache. A cache interface module provides access to a plurality of virtual storage units of a solid-state storage device over a cache interface. At least one of the virtual storage units comprises a cache unit. A cache command module exchanges cache management information for the at least one cache unit with one or more cache clients over the cache interface. A cache management module manages the at least one cache unit based on the cache management information exchanged with the one or more cache clients.

135 citations
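The module split in the abstract (cache interface, cache command, cache management) can be sketched roughly as follows; all names and the eviction behavior are assumptions for illustration, not the patent's actual interface.

```python
class SolidStateCache:
    """Minimal sketch of the patent's decomposition: a set of virtual storage
    units, one designated as a cache unit, managed via info from cache clients."""
    def __init__(self, virtual_units, cache_unit):
        self.virtual_units = virtual_units  # all virtual storage units on the device
        self.cache_unit = cache_unit        # the unit designated as a cache unit
        self.data = {}                      # contents of the cache unit
        self.client_hints = {}              # management info per cache client

    def exchange_management_info(self, client_id, hints):
        """Cache command module: exchange management info with a cache client."""
        self.client_hints[client_id] = hints

    def manage(self):
        """Cache management module: act on the exchanged information,
        e.g. evict any key a client has marked as stale (illustrative policy)."""
        stale = set()
        for hints in self.client_hints.values():
            stale |= set(hints.get("stale", ()))
        for key in stale:
            self.data.pop(key, None)
```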

Journal ArticleDOI
Avraham Leff1, Joel L. Wolf1, Philip S. Yu1
TL;DR: Performance of the distributed algorithms is found to be close to optimal, while that of the greedy algorithms is far from optimal.
Abstract: This paper studies cache performance in a remote caching architecture. The authors develop a set of distributed object replication policies designed to implement different optimization goals. Each site is responsible for local cache decisions and modifies cache contents in response to decisions made by other sites. The authors use the optimal and greedy policies as upper and lower bounds, respectively, for performance in this environment. Critical system parameters are identified, and their effect on system performance is studied. Performance of the distributed algorithms is found to be close to optimal, while that of the greedy algorithms is far from optimal.

135 citations
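The greedy lower bound and a coordination-aware alternative can be illustrated with a toy placement model: greedy sites cache their locally hottest objects in isolation, while a distributed policy discounts objects already replicated at other sites. The benefit discount is a hypothetical heuristic for illustration, not the authors' actual policies.

```python
def greedy_placement(local_freq, capacity):
    """Greedy policy: each site independently caches its locally hottest
    objects, ignoring what other sites hold (the paper's lower bound)."""
    ranked = sorted(local_freq, key=local_freq.get, reverse=True)
    return set(ranked[:capacity])

def distributed_placement(site_freqs, capacity, remote_cost=0.5):
    """Distributed policy sketch: each site modifies its cache contents in
    response to other sites' decisions, discounting the benefit of a local
    copy when a remote copy already exists (hypothetical heuristic)."""
    placements = {site: set() for site in site_freqs}
    for site, freq in site_freqs.items():
        elsewhere = set()
        for other, cached in placements.items():
            if other != site:
                elsewhere |= cached

        def benefit(obj):
            # A remote copy makes a local copy less valuable.
            return freq[obj] * (remote_cost if obj in elsewhere else 1.0)

        ranked = sorted(freq, key=benefit, reverse=True)
        placements[site] = set(ranked[:capacity])
    return placements
```

In this toy model, two greedy sites with identical workloads both cache the same hottest object, while the distributed policy spreads distinct objects across sites, which is the kind of redundancy-avoidance that pushes it closer to optimal.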

Patent
17 Sep 2013
TL;DR: In this patent, a hop-count-based content caching scheme is proposed to reduce network traffic: a routing node first judges whether to cache a content chunk from the chunk's attributes, then caches it with probability 1/hop count.
Abstract: Disclosed is hop-count-based content caching. The present invention implements hop-count-based cache placement strategies that efficiently reduce network traffic: a routing node primarily judges whether to cache a content chunk by examining the attributes of the received chunk; it secondarily judges whether to cache the chunk based on a caching probability of '1/hop count'; and it stores the content chunk and the hop-count information in its cache memory when the second judgment determines that the chunk should be cached.

135 citations
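The two-stage decision in the abstract reduces to a short predicate: an attribute check followed by a coin flip with probability 1/hop count. A minimal sketch, assuming a simple attribute dictionary (the field names are illustrative, not the patent's):

```python
import random

def should_cache(chunk_attrs, hop_count, rng=random):
    """Two-stage caching decision at a routing node."""
    # Primary judgment: inspect the received chunk's attributes
    # (here, a hypothetical 'cacheable' flag).
    if not chunk_attrs.get("cacheable", False):
        return False
    # Secondary judgment: cache with probability 1/hop count, so the
    # expected number of copies along an N-hop path stays bounded.
    return rng.random() < 1.0 / max(hop_count, 1)
```

Note that at hop count 1 the chunk is always cached (probability 1), and the probability decays along the path, which is what spreads copies thinly across the network rather than caching everywhere.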

Journal ArticleDOI
17 May 1988
TL;DR: This paper presents a series of simulations that explore the interactions between various organizational decisions and program execution time, and investigates the tradeoffs between cache size and CPU/cache cycle time, between set associativity and cycle time, and between block size and main memory speed.
Abstract: Cache memories have become common across a wide range of computer implementations. To date, most analyses of cache performance have concentrated on time independent metrics, such as miss rate and traffic ratio. This paper presents a series of simulations that explore the interactions between various organizational decisions and program execution time. We investigate the tradeoffs between cache size and CPU/Cache cycle time, set associativity and cycle time, and between block size and main memory speed. The results indicate that neither cycle time nor cache size dominates the other across the entire design space. For common implementation technologies, performance is maximized when the size is increased to the 32KB to 128KB range with modest penalties to the cycle time. If set associativity impacts the cycle time by more than a few nanoseconds, it increases overall execution time. Since the block size and memory transfer rate combine to affect the cache miss penalty, the optimum block size is substantially smaller than that which minimizes the miss rate. Finally, the interdependence between optimal cache configuration and the main memory speed necessitates multi-level cache hierarchies for high performance uniprocessors.

134 citations
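The block-size tradeoff the paper identifies follows from the standard average-memory-access-time model: larger blocks can lower the miss rate, but they raise the miss penalty through transfer time, so the execution-time-optimal block is smaller than the miss-rate-optimal one. An illustrative model (not the paper's simulator; all parameter values are assumptions):

```python
def avg_access_time(cycle_time_ns, miss_rate, block_bytes,
                    transfer_ns_per_byte, latency_ns):
    """Average memory access time = hit cost + miss_rate * miss penalty.
    The miss penalty grows linearly with block size via transfer time."""
    miss_penalty = latency_ns + block_bytes * transfer_ns_per_byte
    return cycle_time_ns + miss_rate * miss_penalty

# Example: a bigger block that only slightly improves the miss rate can
# still lose overall once its larger transfer cost is charged per miss.
small = avg_access_time(10, 0.050, 32, 1, 100)   # 32B block, 5.0% miss rate
large = avg_access_time(10, 0.048, 128, 1, 100)  # 128B block, 4.8% miss rate
```

Here the 128-byte block has the lower miss rate but the higher average access time, mirroring the paper's conclusion that the optimum block size is substantially smaller than the one that minimizes miss rate.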


Network Information
Related Topics (5)

Topic                   Papers    Citations   Related
Cache                   59.1K     976.6K      93%
Scalability             50.9K     931.6K      88%
Server                  79.5K     1.4M        88%
Network packet          159.7K    2.2M        83%
Dynamic Source Routing  32.2K     695.7K      83%
Performance Metrics
No. of papers in the topic in previous years

Year   Papers
2023   44
2022   117
2021   4
2020   8
2019   7
2018   20