Topic: Cache invalidation
About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have appeared on this topic, receiving 245,409 citations.
[Chart: papers published on a yearly basis]
Papers
AT&T
TL;DR: In this article, a cache for use with a network filter that receives, stores, and ejects local rule bases dynamically is proposed, where the cache stores a rule that was derived from a rule base in the filter.
Abstract: A cache for use with a network filter that receives, stores, and ejects local rule bases dynamically. The cache stores a rule that was derived from a rule base in the filter. The cached rule is associated in the cache with a rule base indicator, identifying the rule base from which it was derived, and a rule base version number, identifying the version of that rule base. When the filter receives a packet, the cache is searched for a rule applicable to the received packet. If no such rule is found, the filter's rule base is searched; an applicable rule is carried out and copied to the cache along with its rule base indicator and version number. If a cache rule is found, it is carried out only if its version number matches the current version number of the rule base from which it was derived; otherwise, the cache rule is deleted. The cache thus provides an efficient way of accurately applying the rules of a dynamic rule base without having to search the entire rule base for each packet.
223 citations
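The lazy, version-matched invalidation scheme in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation; all names (`RuleCache`, `bump_version`, etc.) are hypothetical.

```python
class RuleCache:
    """Hypothetical sketch of a rule cache invalidated by version matching."""

    def __init__(self):
        self.entries = {}        # packet key -> (rule, rule base id, base version)
        self.base_versions = {}  # rule base id -> current version number

    def bump_version(self, base_id):
        # Called when a rule base is replaced or updated; stale cached
        # rules are detected lazily on their next lookup, so nothing is
        # scanned or evicted here.
        self.base_versions[base_id] = self.base_versions.get(base_id, 0) + 1

    def lookup(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None                          # miss: search the full rule base
        rule, base_id, version = entry
        if version != self.base_versions.get(base_id, 0):
            del self.entries[key]                # stale: version mismatch, evict
            return None
        return rule                              # hit: version matches, rule valid

    def insert(self, key, rule, base_id):
        # Copy a rule into the cache, tagged with its origin and version.
        self.entries[key] = (rule, base_id, self.base_versions.get(base_id, 0))
```

The design choice worth noting is that updating a rule base costs O(1): only the version counter changes, and invalidation is paid for lazily, one cache entry at a time, on subsequent lookups.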
25 Jan 2013
TL;DR: In this paper, a de-duplication cache is configured to cache data for access by a plurality of different storage clients, such as virtual machines, and metadata pertaining to the contents of the cache may be persisted and/or transferred with respective storage clients.
Abstract: A de-duplication cache is configured to cache data for access by a plurality of different storage clients, such as virtual machines. A virtual machine may comprise a virtual machine de-duplication module configured to identify data for admission into the de-duplication cache. Data admitted into the de-duplication cache may be accessible by two or more storage clients. Metadata pertaining to the contents of the de-duplication cache may be persisted and/or transferred with respective storage clients such that the storage clients may access the contents of the de-duplication cache after rebooting, being power cycled, and/or being transferred between hosts.
223 citations
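The core of the de-duplication cache above is content-addressed storage: identical blocks written by different clients are stored once and shared, while per-client metadata maps each client's logical addresses onto shared fingerprints. A minimal sketch, with all names hypothetical:

```python
import hashlib

class DedupCache:
    """Illustrative de-duplication cache shared by several storage
    clients (e.g. virtual machines). Not the patented design."""

    def __init__(self):
        self.blocks = {}        # content fingerprint -> data (stored once)
        self.client_index = {}  # client id -> {logical address: fingerprint}

    def admit(self, client, addr, data):
        # Fingerprint the block; identical content from any client
        # maps to the same single stored copy.
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)
        self.client_index.setdefault(client, {})[addr] = fp

    def read(self, client, addr):
        fp = self.client_index.get(client, {}).get(addr)
        return self.blocks.get(fp) if fp else None

    def export_metadata(self, client):
        # Per-client metadata that could be persisted or transferred with
        # the client, so it can reuse cache contents after a reboot or
        # migration, as the abstract describes.
        return dict(self.client_index.get(client, {}))
```

Because the metadata is only a map from addresses to fingerprints, it is small enough to travel with a migrating virtual machine while the shared block store stays on the host.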
AT&T
TL;DR: In this article, a system is proposed that includes a name server, an edge cache server, and a local cache server; the edge cache server is configured to respond to both an anycast IP address and a unicast IP address.
Abstract: A system includes a name server, an edge cache server, and a local cache server. The name server is configured to provide an anycast IP address in response to a request for an IP address of an origin hostname from a client system. The edge cache server is configured to respond to the anycast IP address and a unicast IP address and to retrieve content from an origin. The local cache server includes a storage and is configured to respond to the anycast IP address, to retrieve content from the edge cache server, and to provide the content to a client system.
221 citations
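Stripped of the anycast routing, the fetch path above is a chain of caches: the local cache serves a request if it can, otherwise pulls from the edge cache, which in turn pulls from the origin. A minimal sketch of that tiered miss path, with hypothetical names and the addressing reduced to comments:

```python
class Origin:
    """Stand-in origin server holding the authoritative content."""
    def __init__(self, content):
        self.content = content

    def get(self, name):
        return self.content.get(name)

class CacheServer:
    """Illustrative cache tier: serve locally, fall back to upstream on a miss."""
    def __init__(self, upstream):
        self.store = {}
        self.upstream = upstream

    def get(self, name):
        if name not in self.store:
            data = self.upstream.get(name)   # miss: retrieve from the next tier
            if data is not None:
                self.store[name] = data      # fill so later requests hit locally
        return self.store.get(name)

origin = Origin({"/index.html": b"hello"})
edge = CacheServer(origin)    # would answer on anycast + unicast addresses
local = CacheServer(edge)     # would answer on the anycast address, nearest to clients
```

In the described system, both tiers answer the same anycast address, so a client's request simply lands at the nearest responding cache; the chain above only models what happens after it arrives.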
13 Aug 2001
TL;DR: In this article, a content analysis engine determines which of the caches a data item should be stored in, based on an analysis of data requests or data items served in response to the requests, guidelines set by a system administrator, etc.
Abstract: A multi-tier caching system and method of operating the same. The system comprises a first cache implemented in operating system or kernel space (e.g., in memory managed by or allocated to an operating system) and a second cache implemented in application or user space (e.g., in memory managed by or allocated to an application program). Data requests requiring little processing to identify responsive data may be served from the first cache, while those requiring further processing are served from the second. The first cache may therefore store frequently requested data items or items that can be served in response to requests having different forms, qualifiers or other indicia. A content analysis engine determines which of the caches a data item should be stored in, based on an analysis of data requests or data items served in response to the requests, guidelines set by a system administrator, etc.
220 citations
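The two-tier split above can be illustrated with a toy placement policy: cheap, exact-match responses go to the fast first tier (kernel space in the abstract), and responses that need further processing go to the second tier (user space). The "content analysis" below is a stand-in boolean, not the patented engine, and all names are hypothetical:

```python
class TwoTierCache:
    """Illustrative two-tier cache with a simple placement decision.
    Tier 1 models the kernel-space cache; tier 2 the user-space cache."""

    def __init__(self):
        self.first = {}   # tier 1: exact-match items, served with little processing
        self.second = {}  # tier 2: items whose requests need further processing

    def place(self, key, value, needs_processing):
        # Stand-in for the content analysis engine: route items that
        # require extra request processing to the second tier.
        (self.second if needs_processing else self.first)[key] = value

    def serve(self, key):
        # Try the cheap tier first, then the slower tier.
        if key in self.first:
            return self.first[key], 1
        if key in self.second:
            return self.second[key], 2
        return None, 0
```

The payoff of the split is that the common case never crosses into user space: a tier-1 hit is resolved entirely where the request arrives.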
20 Jun 2005
TL;DR: It is demonstrated that migratory, dynamic NUCA approaches improve performance significantly for a subset of the workloads at the cost of increased power consumption and complexity, especially as per-application cache partitioning strategies are applied.
Abstract: We propose an organization for the on-chip memory system of a chip multiprocessor, in which 16 processors share a 16MB pool of 256 L2 cache banks. The L2 cache is organized as a non-uniform cache architecture (NUCA) array with a switched network embedded in it for high performance. We show that this organization can support the spectrum of degrees of sharing: unshared, in which each processor has a private portion of the cache, thus reducing hit latency; completely shared, in which every processor shares the entire cache, thus minimizing misses; and every point in between. We find the optimal degree of sharing for a number of cache bank mapping policies, and also evaluate a per-application cache partitioning strategy. We conclude that a static NUCA organization with sharing degrees of two or four works best across a suite of commercial and scientific parallel workloads. We also demonstrate that migratory, dynamic NUCA approaches improve performance significantly for a subset of the workloads at the cost of increased power consumption and complexity, especially as per-application cache partitioning strategies are applied.
218 citations
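The "sharing degree" in the abstract above fixes how many processors pool their bank slices. A back-of-the-envelope sketch of a static partitioning, using the 16-processor / 256-bank numbers from the abstract (the actual bank-mapping policies studied in the paper are more involved):

```python
# Illustrative static NUCA partitioning; only the counts come from the abstract.
PROCESSORS, BANKS = 16, 256

def banks_for(processor, sharing_degree):
    """Return the range of L2 bank indices visible to `processor` when
    groups of `sharing_degree` processors pool their banks.
    Degree 1 = fully private slices; degree 16 = one fully shared cache."""
    groups = PROCESSORS // sharing_degree       # number of disjoint partitions
    banks_per_group = BANKS // groups           # banks pooled by each partition
    group = processor // sharing_degree         # which partition this CPU is in
    start = group * banks_per_group
    return range(start, start + banks_per_group)
```

This makes the trade-off in the abstract concrete: raising the sharing degree gives each processor more banks (fewer misses) but a larger, on average more distant, set of banks (higher hit latency); the paper finds degrees of two or four the best static compromise.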