Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.


Papers
Patent
06 Jun 2008
TL;DR: In this article, a shared code caching engine receives native code comprising at least a portion of a single module of the application program, and stores runtime data corresponding to the native code in a cache data file in non-volatile memory.
Abstract: Computer code from an application program comprising a plurality of modules, each of which is a separately loadable file, is cached in a shared and persistent caching system. A shared code caching engine receives native code comprising at least a portion of a single module of the application program and stores runtime data corresponding to the native code in a cache data file in non-volatile memory. The engine then converts the cache data file into a code cache file and enables the code cache file to be pre-loaded as a runtime code cache. These steps are repeated to store a plurality of separate code cache files at different locations in non-volatile memory.

66 citations
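
To make the described flow concrete, here is a minimal C sketch of the store-then-convert sequence from the abstract: runtime data and native code are written to a per-module cache data file, which is then finalized into a code cache file ready for pre-loading. The ".cachedata"/".codecache" file names, the header layout, and the rename-as-conversion step are assumptions for illustration, not the patented on-disk format.

```c
/* Minimal sketch of a persistent, per-module code cache, loosely following
 * the steps in the abstract. File names, header layout, and the
 * rename-as-conversion step are illustrative assumptions. */
#include <stddef.h>
#include <stdio.h>

/* Hypothetical runtime data recorded alongside the native code. */
typedef struct {
    char module[64];   /* name of the separately loadable module file */
    size_t code_len;   /* length of the cached native code in bytes */
} cache_header;

/* Step 1: store native code plus runtime data in a cache data file. */
static int store_cache_data(const char *module, const unsigned char *code,
                            size_t len) {
    char path[128];
    snprintf(path, sizeof path, "%s.cachedata", module);
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    cache_header h = {0};
    snprintf(h.module, sizeof h.module, "%s", module);
    h.code_len = len;
    fwrite(&h, sizeof h, 1, f);
    fwrite(code, 1, len, f);
    return fclose(f);
}

/* Step 2: convert the cache data file into a code cache file; here the
 * conversion is just a rename that marks the file ready to be pre-loaded
 * as a runtime code cache at the next start-up. */
static int finalize_code_cache(const char *module) {
    char from[128], to[128];
    snprintf(from, sizeof from, "%s.cachedata", module);
    snprintf(to, sizeof to, "%s.codecache", module);
    return rename(from, to);
}

int main(void) {
    const unsigned char code[] = {0x90, 0x90, 0xc3}; /* placeholder bytes */
    if (store_cache_data("libexample", code, sizeof code) == 0 &&
        finalize_code_cache("libexample") == 0)
        puts("libexample.codecache ready for pre-loading");
    return 0;
}
```

Repeating these two steps per module yields the plurality of separate code cache files the abstract describes.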

Proceedings ArticleDOI
24 Oct 2016
TL;DR: A novel construction of flush-reload side channels on the last-level caches of ARM processors is demonstrated, which in particular exploits return-oriented programming techniques to reload instructions.
Abstract: Cache side-channel attacks have been extensively studied on x86 architectures, but much less so on ARM processors. The technical challenges of conducting side-channel attacks on ARM presumably stem from the poorly documented ARM cache implementations, such as cache coherence protocols and cache flush operations, and also from a lack of understanding of how different cache implementations affect side-channel attacks. This paper presents a systematic exploration of vectors for flush-reload attacks on ARM processors. Flush-reload attacks are among the most well-known cache side-channel attacks on x86, and previous work has shown that they are capable of exfiltrating sensitive information with high fidelity. We demonstrate in this work a novel construction of flush-reload side channels on the last-level caches of ARM processors which, in particular, exploits return-oriented programming techniques to reload instructions. We also demonstrate several attacks on Android OS (e.g., detecting hardware events and tracing software execution paths) to highlight the implications of such attacks for Android devices.

66 citations
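
For background, the flush-reload primitive itself is easiest to show on x86, where an unprivileged cache-line flush is available; the paper's contribution is constructing the same channel on ARM, where flush operations are poorly documented and reloads are performed through return-oriented gadgets. A minimal x86-64 sketch follows; the 150-cycle hit threshold is an assumption that real attacks calibrate per machine.

```c
/* Minimal x86-64 flush+reload probe (compile with gcc on x86-64 Linux).
 * The shared[] array stands in for a page shared with a victim, and the
 * 150-cycle threshold is an assumed, machine-specific calibration value. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t shared[4096];

/* Time one reload of *p in cycles; a short time means the line was cached,
 * i.e., someone accessed it since the last flush. */
static uint64_t time_reload(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    volatile uint8_t *target = &shared[0];

    _mm_clflush((const void *)target); /* flush: evict the line everywhere */
    _mm_mfence();

    (void)*target;                     /* simulate the victim touching it */

    uint64_t dt = time_reload(target); /* reload and time the access */
    printf("reload took %llu cycles -> %s\n", (unsigned long long)dt,
           dt < 150 ? "victim access detected" : "no access observed");
    return 0;
}
```

On ARM, the equivalent flush-and-reload steps cannot be copied verbatim, which is exactly the gap in understanding the paper explores.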

Journal ArticleDOI
TL;DR: Simulation results show that the proposed cache-miss-initiated prefetch (CMIP) scheme can greatly improve system performance, yielding a higher cache hit ratio, fewer uplink requests, and negligible additional traffic.

66 citations
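
Only the TL;DR of this paper is shown above, so the C sketch below illustrates just the general idea of cache-miss-initiated prefetching: a miss triggers one uplink request that also pulls a small prefetch set of associated items, so later accesses hit locally. The FIFO cache and the "next k sequential ids" association rule are stand-ins for illustration, not the paper's actual prefetch rule.

```c
/* Sketch of cache-miss-initiated prefetch: on a miss, the one uplink
 * request also fetches a small prefetch set of associated items so that
 * subsequent accesses hit locally. CACHE_SLOTS, PREFETCH_K, and the
 * sequential-id association rule are assumptions. */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 8
#define PREFETCH_K  2

static int cache[CACHE_SLOTS];
static int next_slot;
static int uplink_requests;

static int cached(int id) {
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i] == id) return 1;
    return 0;
}

static void insert(int id) {
    cache[next_slot] = id;
    next_slot = (next_slot + 1) % CACHE_SLOTS; /* simple FIFO replacement */
}

static void access_item(int id) {
    if (cached(id)) { printf("hit  %d\n", id); return; }
    uplink_requests++;              /* one uplink request serves... */
    insert(id);                     /* ...the missed item... */
    for (int k = 1; k <= PREFETCH_K; k++)
        insert(id + k);             /* ...plus its prefetch set */
    printf("miss %d (prefetched %d..%d)\n", id, id + 1, id + PREFETCH_K);
}

int main(void) {
    memset(cache, -1, sizeof cache); /* mark all slots empty */
    int trace[] = {10, 11, 12, 20, 21};
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        access_item(trace[i]);
    printf("uplink requests: %d (vs 5 without prefetching)\n",
           uplink_requests);
    return 0;
}
```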

Proceedings ArticleDOI
04 Apr 2006
TL;DR: This work bounds the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions, and it is shown that such accurate modeling of data cache behavior in preemptive systems significantly improves the WCET predictions for a task.
Abstract: Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. In this paper, we bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions. For every task, we derive data cache reference patterns for all scalar and non-scalar references. Partial timing of a task is performed up to a preemption point using these patterns. The effects of cache interference are then analyzed using a set-theoretic approach, which identifies the number and location of additional misses due to preemption. A feedback mechanism provides the means to interact with the timing analyzer, which subsequently times another interval of the task bounded by the next preemption. Our experimental results demonstrate that it is sufficient to consider the n most expensive preemption points, where n is the maximum possible number of preemptions. Further, it is shown that such accurate modeling of data cache behavior in preemptive systems significantly improves the WCET predictions for a task. To the best of our knowledge, our work on bounding preemption delay for data caches is unprecedented.

66 citations
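
The set-theoretic core of the analysis can be sketched compactly: the additional misses caused by a preemption are bounded by the intersection of the blocks still useful to the preempted task at the preemption point with the blocks the preempting task may evict. In the C sketch below, block sets are modeled as 64-bit masks; the masks and the 100-cycle miss penalty are illustrative assumptions, not values from the paper.

```c
/* Set-theoretic sketch of bounding preemption delay: the additional data
 * cache misses are at most the blocks that are useful to the preempted
 * task at the preemption point AND evicted by the preempting task. */
#include <stdint.h>
#include <stdio.h>

#define MISS_PENALTY 100 /* cycles per additional miss (assumed) */

/* Number of set bits = number of cache blocks in a block set. */
static int popcount64(uint64_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

int main(void) {
    /* Blocks still useful to the preempted task at the preemption point. */
    uint64_t useful  = 0x00000000ffff00ffULL;
    /* Blocks the preempting task touches and may therefore evict. */
    uint64_t evicted = 0x0000000000ffff00ULL;

    int extra_misses = popcount64(useful & evicted);
    printf("additional misses due to preemption: %d\n", extra_misses);
    printf("preemption delay bound: %d cycles\n", extra_misses * MISS_PENALTY);
    return 0;
}
```

Repeating this intersection at the n most expensive preemption points, as the paper does, yields the overall preemption delay bound folded into the WCET.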

Journal ArticleDOI
TL;DR: A popularity prediction-based cooperative cache replacement mechanism is put forward that predicts and ranks popular content over a period of time, aiming to lower cache replacement overhead and reduce cache redundancy.
Abstract: Information centric networking (ICN) has recently been proposed as a prominent solution for content delivery in vehicular ad hoc networks. By caching data packets in vehicles' unused storage space, vehicles can obtain replicas of content from other vehicles instead of from the original content provider, which reduces the load on the content provider and speeds up responses to content requests. In this paper, we propose a community similarity and population-based cache policy in an ICN vehicle-to-vehicle scenario. First, a dynamic probability caching scheme is designed by evaluating the community similarity and privacy rating of vehicles. Then, a caching vehicle selection method based on hop counts and content popularity is proposed to reduce cache redundancy. Moreover, to lower the cache replacement overhead, we put forward a popularity prediction-based cooperative cache replacement mechanism, which predicts and ranks popular content over a period of time. Simulation results show that our proposed mechanisms substantially reduce the average time delay while increasing the cache hit ratio and the cache hit distance.

66 citations
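
As a rough single-node illustration of popularity-prediction-based replacement, the C sketch below keeps an exponentially weighted request rate per cached content and evicts the entry with the lowest predicted popularity when the cache is full. The EWMA weight and cache size are assumptions; the paper's mechanism ranks popularity over a time window and cooperates across vehicles.

```c
/* Sketch of popularity-prediction-based replacement for a single cache:
 * each cached content carries an exponentially weighted request rate, and
 * inserting into a full cache evicts the entry with the lowest predicted
 * popularity. SLOTS and ALPHA are assumed values for illustration. */
#include <stdio.h>

#define SLOTS 4
#define ALPHA 0.5 /* EWMA smoothing weight (assumed) */

typedef struct { int id; double pop; } entry;
static entry cache[SLOTS];
static int used;

static int find(int id) {
    for (int i = 0; i < used; i++)
        if (cache[i].id == id) return i;
    return -1;
}

static void request(int id) {
    for (int i = 0; i < used; i++)   /* decay: old popularity fades */
        cache[i].pop *= 1.0 - ALPHA;
    int i = find(id);
    if (i >= 0) {                    /* hit: bump predicted popularity */
        cache[i].pop += ALPHA;
        return;
    }
    if (used < SLOTS) {              /* miss with a free slot */
        cache[used].id = id;
        cache[used].pop = ALPHA;
        used++;
        return;
    }
    int victim = 0;                  /* full: evict lowest predicted rank */
    for (int j = 1; j < SLOTS; j++)
        if (cache[j].pop < cache[victim].pop) victim = j;
    cache[victim].id = id;
    cache[victim].pop = ALPHA;
}

int main(void) {
    int trace[] = {1, 1, 2, 3, 1, 4, 5, 1};
    for (size_t k = 0; k < sizeof trace / sizeof *trace; k++)
        request(trace[k]);
    for (int i = 0; i < used; i++)
        printf("content %d, predicted popularity %.3f\n",
               cache[i].id, cache[i].pop);
    return 0;
}
```

Frequently requested content keeps a high predicted rate and survives; one-off requests decay quickly and become the eviction victims, which is what lowers replacement overhead and redundancy.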


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Server: 79.5K papers, 1.4M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (83% related)
Dynamic Source Routing: 32.2K papers, 695.7K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20