Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published within this topic, receiving 245,409 citations.


Papers
Patent
Alexander D. Peleg, Uri Weiser
30 Mar 1994
TL;DR: In this patent, the authors propose an improved cache organization particularly suited to superscalar architectures, in which the cache is organized around trace segments of running programs rather than around memory addresses.
Abstract: An improved cache and organization particularly suitable for superscalar architectures. The cache is organized around trace segments of running programs rather than an organization based on memory addresses. A single access to the cache memory may cross virtual address line boundaries. Branch prediction is integrally incorporated into the cache array permitting the crossing of branch boundaries with a single access.
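The trace-cache idea above can be illustrated with a minimal sketch: the cache is indexed by a trace's starting address together with the predicted branch outcomes, so a single lookup can fetch instructions past branch boundaries. All names, the capacity, and the eviction policy here are illustrative assumptions, not details from the patent.

```python
class TraceCache:
    """Toy trace cache keyed by (start PC, branch outcomes), not by address alone."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.traces = {}  # (start_pc, branch_outcomes) -> list of instructions

    def put(self, start_pc, branch_outcomes, instructions):
        if len(self.traces) >= self.capacity:
            # Naive FIFO eviction (dicts preserve insertion order in Python).
            self.traces.pop(next(iter(self.traces)))
        self.traces[(start_pc, tuple(branch_outcomes))] = instructions

    def get(self, start_pc, predicted_outcomes):
        # One access can return a trace that crosses branch boundaries,
        # provided the stored outcomes match the current prediction.
        return self.traces.get((start_pc, tuple(predicted_outcomes)))


tc = TraceCache()
tc.put(0x400, [True, False], ["ld", "add", "br", "mul", "br", "st"])
hit = tc.get(0x400, [True, False])   # one access fetches past two branches
miss = tc.get(0x400, [True, True])   # different predicted path: no trace yet
```

On a misprediction the fetched path differs from the stored one, so the lookup misses and a new trace would be built for that outcome combination.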

186 citations

Patent
29 Dec 1997
TL;DR: In this patent, a cache directory structure defines the name of each configured central cache system and provides an index value identifying the particular set of descriptors associated with it.
Abstract: A host system includes a multicache system configured within the host system's memory which has a plurality of local and central cache systems used for storing information being utilized by a plurality of processes running on the system. Persistent shared memory is used to store control structure information entries required for operating central cache systems for substantially long periods of time in conjunction with the local caches established for the processes. Such entries includes a descriptor value for identifying a directory control structure and individual sets of descriptors for identifying a group of control structures defining those components required for operating the configured central cache systems. The cache directory structure is used for defining the name of each configured central cache system and for providing an index value identifying the particular set of descriptors associated therewith. The multicache system also includes a plurality of interfaces for configuring the basic characteristics of both local and central cache systems as a function of the type and performance requirements of application processes being run.
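The directory mechanism can be sketched in a few lines: the directory maps each configured central cache name to an index into a table of descriptor sets, each set identifying the control structures needed to operate that cache. The names, fields, and addresses below are hypothetical, chosen only to show the indirection; they are not taken from the patent.

```python
# Hypothetical directory: cache name -> index into the descriptor table.
cache_directory = {"catalog_cache": 0, "session_cache": 1}

# Hypothetical descriptor sets: one group of control structures per
# configured central cache (addresses are placeholder strings).
descriptor_sets = [
    {"hash_table": "hdr@0x1000", "lock": "lk@0x1040", "heap": "hp@0x1080"},
    {"hash_table": "hdr@0x2000", "lock": "lk@0x2040", "heap": "hp@0x2080"},
]

def lookup_central_cache(name):
    # Resolve a cache name to the control structures required to operate it.
    idx = cache_directory[name]
    return descriptor_sets[idx]
```

Because the descriptors live in persistent shared memory, this name-to-index indirection is what lets long-lived central caches be rediscovered by processes that attach later.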

186 citations

Journal ArticleDOI
TL;DR: This paper investigates the problem of how to cache a set of media files with optimal streaming rates, under HTTP adaptive bit rate streaming over wireless networks, and finds there is a fundamental phase change in the optimal solution as the number of cached files grows.
Abstract: In this paper, we investigate the problem of optimal content cache management for HTTP adaptive bit rate (ABR) streaming over wireless networks. Specifically, in the media cloud, each content is transcoded into a set of media files with diverse playback rates, and appropriate files will be dynamically chosen in response to channel conditions and screen forms. Our design objective is to maximize the quality of experience (QoE) of an individual content for the end users, under a limited storage budget. Deriving a logarithmic QoE model from our experimental results, we formulate the individual content cache management for HTTP ABR streaming over wireless network as a constrained convex optimization problem. We adopt a two-step process to solve the snapshot problem. First, using the Lagrange multiplier method, we obtain the numerical solution of the set of playback rates for a fixed number of cache copies and characterize the optimal solution analytically. Our investigation reveals a fundamental phase change in the optimal solution as the number of cached files increases. Second, we develop three alternative search algorithms to find the optimal number of cached files, and compare their scalability under average and worst complexity metrics. Our numerical results suggest that, under optimal cache schemes, the maximum QoE measurement, i.e., mean-opinion-score (MOS), is a concave function of the allowable storage size. Our cache management can provide high expected QoE with low complexity, shedding light on the design of HTTP ABR streaming services over wireless networks.
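The two-step structure described above can be sketched under simplifying assumptions: a logarithmic QoE model with demand weights p_i for the cached copies, storage measured in the same units as playback rate, and a brute-force outer search standing in for the paper's three search algorithms. All numeric values and names are illustrative.

```python
import math

def optimal_rates(probs, budget):
    # Inner step (Lagrange multiplier): max sum_i p_i*log(r_i)
    # subject to sum_i r_i <= budget. Stationarity gives p_i/r_i = lam,
    # and the tight budget constraint fixes lam = sum(p_i)/budget.
    lam = sum(probs) / budget
    return [p / lam for p in probs]

def qoe(probs, rates):
    # Logarithmic quality-of-experience model (an assumption here).
    return sum(p * math.log(r) for p, r in zip(probs, rates))

def best_copy_count(probs, budget):
    # Outer step: search over the number of cached copies n; the paper
    # develops smarter searches, this sketch just tries every n.
    return max(range(1, len(probs) + 1),
               key=lambda n: qoe(probs[:n], optimal_rates(probs[:n], budget)))
```

With weights [0.6, 0.3, 0.1] and a budget of 10, caching two copies beats both one and three: adding a rarely used third copy dilutes the rates of the popular ones, which mirrors the phase-change behavior the paper reports as the number of cached files grows.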

186 citations

Proceedings ArticleDOI
Neal E. Young
01 Jan 1998
TL;DR: A simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm is given.
Abstract: Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache, pay the retrieval cost, and choose files to remove from the cache so that the total size of files in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm. On any request sequence, the total cost incurred by the algorithm is at most k/(k-h+1) times the minimum possible using a cache of size h <= k. For any algorithm satisfying the latter bound, we show it is also the case that for most choices of k, the retrieval cost is either insignificant or the competitive ratio is constant. This helps explain why competitive ratios of many on-line paging algorithms have been typically observed to be constant in practice.
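A credit-based algorithm in the spirit of the one described above (often known as Landlord) can be sketched as follows: each cached file carries a credit, a miss uniformly taxes credit per unit size until enough room is freed, and files at zero credit are evicted. Tie-breaking and numerical details are simplified, so treat this as an illustration rather than the paper's exact algorithm.

```python
def landlord(requests, sizes, costs, k):
    """Serve `requests`; keep total cached size <= k; return total retrieval cost."""
    credit = {}  # cached file -> remaining credit
    total_cost = 0
    for f in requests:
        if f in credit:
            credit[f] = costs[f]          # hit: refresh the file's credit
            continue
        total_cost += costs[f]            # miss: pay the retrieval cost
        if sizes[f] > k:
            continue                      # too large to cache; serve uncached
        # Make room: tax every cached file's credit in proportion to its
        # size, evicting files whose credit reaches zero.
        while sum(sizes[g] for g in credit) + sizes[f] > k:
            delta = min(credit[g] / sizes[g] for g in credit)
            for g in list(credit):
                credit[g] -= delta * sizes[g]
                if credit[g] <= 1e-12:
                    del credit[g]
        credit[f] = costs[f]
    return total_cost
```

With unit sizes and costs this degenerates to a flush-when-full-like policy; varying the sizes and costs recovers the weighted-caching behavior the abstract refers to.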

183 citations

Proceedings ArticleDOI
10 Jun 1996
TL;DR: The paper describes how to incorporate the effect of the instruction cache into Response Time schedulability Analysis (RTA), an efficient analysis for preemptive fixed-priority schedulers, and compares the results of this approach to both cache partitioning and CRMA.
Abstract: Cache memories are commonly avoided in real-time systems because of their unpredictable behavior. Recently, some research has been done to obtain tighter bounds on the worst-case execution time (WCET) of cached programs. These techniques usually assume a non-preemptive underlying system. However, some techniques can be applied to allow the use of caches in preemptive systems. The paper describes how to incorporate the effect of the instruction cache into Response Time schedulability Analysis (RTA). RTA is an efficient analysis for preemptive fixed-priority schedulers. We also compare through simulations the results of this approach to both cache partitioning (increasing cache predictability by assigning private cache partitions to tasks) and CRMA (Cached RMA: the cache effect is incorporated into the utilization-based rate monotonic schedulability analysis). The results show that the cached version of RTA (CRTA) clearly outperforms CRMA; however, the partitioning scheme may be better depending on the system configuration. The obtained results bound the applicability domain of each method for a variety of hardware and workload configurations, and can be used as design guidelines.
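The core idea, extending the response-time recurrence with a cache-related preemption delay, can be sketched as a fixed-point iteration: each preemption by a higher-priority task j charges its execution time C_j plus a cache refill penalty gamma_j. The task parameters below are illustrative, and this is a simplified form of the analysis rather than the paper's full CRTA formulation.

```python
import math

def response_time(C, T, gamma, i, limit=1000):
    """Fixed-point iteration for task i (tasks sorted by decreasing priority):
    R = C_i + sum over higher-priority j of ceil(R / T_j) * (C_j + gamma_j),
    where gamma_j models the cache refill delay caused by each preemption."""
    R = C[i]
    for _ in range(limit):
        nxt = C[i] + sum(math.ceil(R / T[j]) * (C[j] + gamma[j])
                         for j in range(i))
        if nxt == R:
            return R          # converged: schedulable if R <= the deadline
        R = nxt
    return None               # no fixed point within the iteration limit
```

For example, with C = [1, 3], T = [4, 10], and a refill penalty gamma = [1, 0], the low-priority task's response time converges to 7: two preemptions, each costing 1 unit of execution plus 1 unit of cache refill.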

182 citations


Network Information
Related Topics (5)

Cache: 59.1K papers, 976.6K citations, 93% related
Scalability: 50.9K papers, 931.6K citations, 88% related
Server: 79.5K papers, 1.4M citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 83% related
Dynamic Source Routing: 32.2K papers, 695.7K citations, 83% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20