Topic
Cache invalidation
About: Cache invalidation is a research topic. Over its lifetime, 10539 publications have appeared on this topic, receiving 245409 citations.
Papers published on a yearly basis
Papers
TL;DR: Experimental results indicate that this scheme has a high level of predictability, and that the performance loss is negligible for around 70% of the tasks.
81 citations
07 Mar 2005
TL;DR: This work exploits the power of the cache miss equations (CME) framework but lifts a number of limitations of traditional CME to generalize the analysis to more arbitrary programs, and devised a transformation, coined "forced" loop fusion, which facilitates the analysis across sequential loops.
Abstract: While caches have become invaluable for higher-end architectures due to their ability to hide, in part, the gap between processor speed and memory access times, caches (and particularly data caches) limit the timing predictability for data accesses that may reside in memory or in cache. This is a significant problem for real-time systems. The objective of our work is to provide accurate predictions of data cache behavior of scalar and nonscalar references whose reference patterns are known at compile time. Such knowledge about cache behavior provides the basis for significant improvements in bounding the worst-case execution time (WCET) of real-time programs, particularly for hard-to-analyze data caches. We exploit the power of the cache miss equations (CME) framework but lift a number of limitations of traditional CME to generalize the analysis to more arbitrary programs. We further devised a transformation, coined "forced" loop fusion, which facilitates the analysis across sequential loops. Our contributions result in exact data cache reference patterns, in contrast to the approximate cache miss behavior of prior work. Experimental results indicate improvements in the accuracy of worst-case data cache behavior of up to two orders of magnitude over the original approach. In fact, our results closely bound and sometimes even exactly match those obtained by trace-driven simulation for worst-case inputs. The resulting WCET bounds of timing analysis confirm these findings in terms of providing tight bounds. Overall, our contributions lift analytical approaches to predict data cache behavior to a level suitable for efficient static timing analysis and, subsequently, real-time schedulability of tasks with predictable WCET.
81 citations
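The "forced" loop fusion idea above can be illustrated with a minimal sketch (the variable names and loops here are hypothetical, not taken from the paper): two sequential loops over the same index range are merged into one loop body, so a single reuse analysis covers the accesses of both.

```python
# Sketch of loop fusion: two sequential loops over the same index range
# are fused into one, which lets a per-iteration cache analysis see the
# accesses to both arrays together.
N = 8
a = [0] * N
b = [0] * N

# Before fusion: two sequential loops, analyzed separately.
for i in range(N):
    a[i] = i * 2
for i in range(N):
    b[i] = a[i] + 1

# After fusion: one loop body containing both statements; the data
# dependence (b[i] reads a[i]) is preserved by statement order.
a2 = [0] * N
b2 = [0] * N
for i in range(N):
    a2[i] = i * 2
    b2[i] = a2[i] + 1

assert a2 == a and b2 == b  # fusion does not change the computed values
```

The transformation is legal here because each fused iteration only reads values produced earlier in the same iteration; the paper's "forced" variant extends this kind of merging to loops that ordinary fusion criteria would reject, purely to enable joint analysis.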
11 Aug 2014
TL;DR: Optimal cache content placement is studied in a wireless infostation network (WIN), which models a limited coverage wireless network with a large cache memory, formulated as a multi-armed bandit problem with switching cost, and an algorithm to solve it is presented.
Abstract: Optimal cache content placement is studied in a wireless infostation network (WIN), which models a limited coverage wireless network with a large cache memory. WIN provides content-level selective offloading by delivering high data rate contents stored in its cache memory to the users through a broadband connection. The goal of the WIN central controller (CC) is to store the most popular content in the cache memory of the WIN such that the maximum amount of data can be fetched directly from the cache rather than being downloaded from the core network. If the popularity profile of the available set of contents is known in advance, the optimization of the cache content reduces to a knapsack problem. However, it is assumed in this work that the popularity profile of the files is not known, and only the instantaneous demands for those contents stored in the cache can be observed. Hence, the cache content placement is optimised based on the demand history, and on the cost associated to placing each content in the cache. By refreshing the cache content at regular time intervals, the CC tries to learn the popularity profile, while at the same time exploiting the limited cache capacity in the best way possible. This problem is formulated as a multi-armed bandit problem with switching cost, and an algorithm to solve it is presented. The performance of the algorithm is measured in terms of regret, which is proven to be logarithmic and sub-linear uniformly over time for a specific and a general case, respectively.
81 citations
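The bandit formulation above can be sketched with a standard UCB1-style learner (a simplified stand-in for the paper's algorithm: it ignores switching costs and per-content placement costs, and all names and parameters are illustrative). Each content is an arm; caching it and observing whether it is requested yields a reward, so the controller learns the popularity profile while exploiting the cache.

```python
import math
import random

def ucb1_cache_placement(demand_prob, cache_size, rounds=5000, seed=0):
    """Sketch: learn which contents to cache when popularity is unknown.
    Each content is a bandit arm; a cached content earns reward 1 when it
    is the one requested in a round. Returns the learned cache contents."""
    rng = random.Random(seed)
    n = len(demand_prob)            # demand_prob is hidden from the learner
    counts = [0] * n                # times each content was cached
    rewards = [0.0] * n             # hits observed while cached

    for t in range(1, rounds + 1):
        def ucb_index(i):
            if counts[i] == 0:      # untried contents get priority
                return float("inf")
            mean = rewards[i] / counts[i]
            return mean + math.sqrt(2 * math.log(t) / counts[i])

        # Refresh the cache with the cache_size highest-index contents.
        cache = sorted(range(n), key=ucb_index, reverse=True)[:cache_size]
        request = rng.choices(range(n), weights=demand_prob)[0]
        for i in cache:
            counts[i] += 1
            rewards[i] += 1.0 if i == request else 0.0

    # Final placement: contents with the best empirical hit rate.
    return sorted(range(n), key=lambda i: rewards[i] / max(counts[i], 1),
                  reverse=True)[:cache_size]

# Usage: with demand skewed toward the first two contents, the learner
# should converge on caching them.
chosen = ucb1_cache_placement([0.5, 0.3, 0.1, 0.05, 0.05], cache_size=2)
```

The paper's regret guarantees hinge on the extra switching cost for changing the cache between refresh intervals, which this sketch omits for clarity.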
08 Apr 2002
TL;DR: In this article, a streaming media caching mechanism and cache manager efficiently establish and maintain the contents of streaming media cache for use in serving streaming media requests from cache rather than from an original data source when appropriate.
Abstract: A streaming media caching mechanism and cache manager efficiently establish and maintain the contents of a streaming media cache for use in serving streaming media requests from cache rather than from an original data source when appropriate. The cost of caching is incurred only when the benefits of caching are likely to be experienced. The caching mechanism and cache manager evaluate the request count for each requested URL to determine whether the URL represents a cache candidate, and further analyze the URL request rate to determine whether the content associated with the URL will be cached. In an embodiment, the streaming media cache is maintained with a predetermined amount of reserve capacity rather than being filled to capacity whenever possible.
81 citations
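The admission logic described in the abstract (request count first, then request rate) can be sketched as follows. This is a hypothetical illustration of the two-stage test, not the patent's implementation; the class name, thresholds, and method names are all assumptions.

```python
import time
from collections import defaultdict

class StreamCacheManager:
    """Sketch of two-stage cache admission: a URL becomes a cache
    candidate only after min_count requests (the count test), and is
    actually cached only if its request rate also exceeds min_rate
    requests/second (the rate test), so the cost of caching is paid
    only when the benefit is likely."""

    def __init__(self, min_count=3, min_rate=0.5):
        self.min_count = min_count
        self.min_rate = min_rate
        self.first_seen = {}
        self.counts = defaultdict(int)
        self.cached = set()

    def on_request(self, url, now=None):
        """Record one request; return True if the URL is (now) cached."""
        now = time.monotonic() if now is None else now
        self.first_seen.setdefault(url, now)
        self.counts[url] += 1
        elapsed = now - self.first_seen[url]
        rate = self.counts[url] / max(elapsed, 1e-9)
        if (url not in self.cached
                and self.counts[url] >= self.min_count   # count test
                and rate >= self.min_rate):              # rate test
            self.cached.add(url)
        return url in self.cached
```

A real manager would also evict, honor the reserve-capacity policy the abstract mentions, and age out stale counts; this sketch only shows the admission decision.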
30 Sep 2007
TL;DR: Techniques exploring the lockdown of instruction caches at compile-time to minimize WCETs are presented, which explicitly take the worst-case execution path into account during each step of the optimization procedure.
Abstract: Caches are notorious for their unpredictability. It is difficult or even impossible to predict whether a memory access results in a definite cache hit or miss. This unpredictability is highly undesirable for real-time systems. The Worst-Case Execution Time (WCET) of software running on an embedded processor is one of the most important metrics during real-time system design. The WCET depends to a large extent on the total amount of time spent on memory accesses. In the presence of caches, WCET analysis must always assume a memory access to be a cache miss if it cannot be guaranteed to be a hit. Hence, WCETs for cached systems are imprecise due to the overestimation caused by the caches. Modern caches can be controlled by software. The software can load parts of its code or of its data into the cache and lock the cache afterwards. Cache locking prevents the cache's contents from being flushed by deactivating the replacement policy. A locked cache is highly predictable and leads to very precise WCET estimates, because the uncertainty caused by the replacement strategy is eliminated completely. This paper presents techniques exploring the lockdown of instruction caches at compile time to minimize WCETs. In contrast to the current state of the art in the area of cache locking, our techniques explicitly take the worst-case execution path into account during each step of the optimization procedure. This way, we can ensure that the parts of the code locked in the I-cache are always those that lead to the highest WCET reduction. The results demonstrate that WCET reductions from 54% up to 73% can be achieved with an acceptable amount of CPU seconds required for the optimization and WCET analyses themselves.
81 citations
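The selection problem behind WCET-aware cache locking can be sketched as a greedy knapsack-style heuristic (a simplified stand-in for the paper's path-aware optimization: it assumes each block's WCET gain is fixed and independent, whereas the paper re-evaluates the worst-case path after every locking step; all names and numbers are illustrative).

```python
def select_lock_contents(blocks, cache_lines):
    """Sketch: repeatedly lock the code block with the best WCET reduction
    per cache line it occupies, until the lockable I-cache is full.
    `blocks` maps block name -> (lines_occupied, wcet_gain)."""
    locked = []
    free = cache_lines
    remaining = dict(blocks)
    while remaining:
        # Only blocks that still fit in the free lines are candidates.
        fitting = {k: v for k, v in remaining.items() if v[0] <= free}
        if not fitting:
            break
        # Greedy choice: highest gain density (gain per occupied line).
        best = max(fitting, key=lambda k: fitting[k][1] / fitting[k][0])
        locked.append(best)
        free -= remaining.pop(best)[0]
    return locked

# Usage: with 6 lockable lines, the dense hot loops win over cold init code.
blocks = {"loop_a": (4, 120), "loop_b": (2, 80), "init": (6, 30)}
lock_set = select_lock_contents(blocks, cache_lines=6)
```

The key difference in the paper is that a block's true gain depends on whether it lies on the current worst-case path, so the gains would be recomputed by WCET analysis after each step rather than fixed up front as here.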