Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published within this topic, receiving 245,409 citations.


Papers
Journal ArticleDOI
TL;DR: This paper surveys a number of hardware- and software-based approaches to defending against attacks that use cache-based side-channel analysis, and evaluates them using simulated results.

100 citations
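
The software-based defenses such surveys cover typically aim to make memory access patterns independent of secret data. As a minimal illustration (my own sketch, not taken from the paper), here is an access-pattern-oblivious table lookup in Python that touches every entry regardless of the secret index, so a cache-timing observer learns nothing from which lines were fetched:

    # Sketch of a software side-channel defense: an oblivious table
    # lookup. Every entry is read on every call, so the set of cache
    # lines touched does not depend on the secret index. (Illustrative
    # only; real defenses work at the level of cache lines and
    # constant-time machine code, which Python cannot guarantee.)
    def oblivious_lookup(table, secret_index):
        result = 0
        for i, value in enumerate(table):
            # Branch-free select: mask is -1 (all bits set) only when
            # i == secret_index, so exactly one value survives the AND.
            mask = -(i == secret_index)
            result |= value & mask
        return result

    sbox = [0x63, 0x7C, 0x77, 0x7B]  # toy substitution table
    assert oblivious_lookup(sbox, 2) == 0x77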

Proceedings ArticleDOI
04 Jun 2011
TL;DR: This paper studies the integration of STT-RAM in a 3D multi-core environment and proposes solutions at the on-chip network level to circumvent the write-overhead problem in STT-RAM cache architectures.
Abstract: Emerging memory technologies such as STT-RAM, PCRAM, and resistive RAM are being explored as potential replacements to existing on-chip caches or main memories for future multi-core architectures. This is due to the many attractive features these memory technologies possess: high density, low leakage, and non-volatility. However, the latency and energy overhead associated with the write operations of these emerging memories have become a major obstacle in their adoption. Previous works have proposed various circuit and architectural level solutions to mitigate the write overhead. In this paper, we study the integration of STT-RAM in a 3D multi-core environment and propose solutions at the on-chip network level to circumvent the write overhead problem in the cache architecture with STT-RAM technology. Our scheme is based on the observation that instead of staggering requests to a write-busy STT-RAM bank, the network should schedule requests to other idle cache banks for effectively hiding the latency. Thus, we prioritize cache accesses to the idle banks by delaying accesses to the STT-RAM cache banks that are currently serving long latency write requests. Through a detailed characterization of the cache access patterns of 42 applications, we propose an efficient mechanism to facilitate such delayed writes to cache banks by (a) accurately estimating the busy time of each cache bank through logical partitioning of the cache layer and (b) prioritizing packets in a router requesting accesses to idle banks. Evaluations on a 3D architecture, consisting of 64 cores and 64 STT-RAM cache banks, show that our proposed approach provides 14% average IPC improvement for multi-threaded benchmarks, 19% instruction throughput benefits for multi-programmed workloads, and 6% latency reduction compared to a recently proposed write buffering mechanism.

100 citations
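
As a rough illustration of the scheduling idea in the abstract above (the structure, latency values, and names here are my own assumptions, not the paper's implementation), the sketch below issues requests to idle cache banks first and delays requests whose target bank is still serving a long-latency STT-RAM write:

    # Sketch of write-aware bank scheduling: requests to banks that are
    # busy with a long-latency write are delayed in favor of requests to
    # idle banks, hiding the STT-RAM write overhead. Latencies in cycles
    # are assumed values for illustration.
    WRITE_LATENCY = 10
    READ_LATENCY = 2

    class BankScheduler:
        def __init__(self, num_banks):
            self.busy_until = [0] * num_banks  # estimated per-bank busy time
            self.pending = []                  # queued (bank, op) requests

        def submit(self, bank, op):
            self.pending.append((bank, op))

        def schedule(self, now):
            # Requests to already-idle banks go first (False sorts before True).
            self.pending.sort(key=lambda r: self.busy_until[r[0]] > now)
            issued, remaining = [], []
            for bank, op in self.pending:
                if self.busy_until[bank] <= now:   # bank idle: issue now
                    latency = WRITE_LATENCY if op == "w" else READ_LATENCY
                    self.busy_until[bank] = now + latency
                    issued.append((bank, op))
                else:                              # bank busy with a write: delay
                    remaining.append((bank, op))
            self.pending = remaining
            return issued

    sched = BankScheduler(num_banks=4)
    sched.submit(0, "w"); sched.submit(0, "r"); sched.submit(1, "r")
    print(sched.schedule(now=0))  # [(0, 'w'), (1, 'r')]; the read to busy bank 0 waits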

Patent
Dharmendra S. Modha
21 Oct 2003
TL;DR: In this article, the authors propose a method, system, and program storage medium for adaptively managing pages in a cache memory included within a system having a variable workload, comprising arranging the cache memory into a circular buffer; maintaining a pointer that rotates around the circular buffer; and maintaining a bit for each page in the circular buffer, wherein a bit value 0 indicates that the page was not accessed by the system since the last time the pointer traversed over the page, and a bit value 1 indicates that the page has been accessed since the last time the pointer traversed over it.
Abstract: A method, system, and program storage medium for adaptively managing pages in a cache memory included within a system having a variable workload, comprising arranging a cache memory included within a system into a circular buffer; maintaining a pointer that rotates around the circular buffer; maintaining a bit for each page in the circular buffer, wherein a bit value 0 indicates that the page was not accessed by the system since a last time that the pointer traversed over the page, and a bit value 1 indicates that the page has been accessed since the last time the pointer traversed over the page; and dynamically controlling a distribution of a number of pages in the cache memory that are marked with bit 0 in response to a variable workload in order to increase a hit ratio of the cache memory.

100 citations
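
The mechanism in this abstract is a variant of the classic CLOCK replacement policy. A minimal sketch of plain CLOCK follows (the patent's adaptive control of the bit-0/bit-1 distribution is not modeled; this only shows the circular buffer, the rotating pointer, and the per-page bit):

    class ClockCache:
        """Minimal CLOCK replacement: pages sit in a circular buffer,
        each with one reference bit. On a miss, the rotating pointer
        clears set bits as it sweeps and evicts the first page whose
        bit is already 0 (not referenced since the last traversal)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = []        # circular buffer of page ids
            self.ref_bit = {}      # page id -> 0 or 1
            self.hand = 0          # the rotating pointer

        def access(self, page):
            if page in self.ref_bit:               # hit: mark as accessed
                self.ref_bit[page] = 1
                return "hit"
            if len(self.pages) < self.capacity:    # cold miss, buffer not full
                self.pages.append(page)
                self.ref_bit[page] = 0
                return "miss"
            while True:                            # sweep for a bit-0 victim
                victim = self.pages[self.hand]
                if self.ref_bit[victim] == 0:      # evict and reuse the slot
                    del self.ref_bit[victim]
                    self.pages[self.hand] = page
                    self.ref_bit[page] = 0
                    self.hand = (self.hand + 1) % self.capacity
                    return "miss"
                self.ref_bit[victim] = 0           # second chance: clear the bit
                self.hand = (self.hand + 1) % self.capacity

    cache = ClockCache(capacity=3)
    for p in ["a", "b", "c", "a", "d"]:  # "d" evicts "b", the first bit-0 page
        print(p, cache.access(p))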

Patent
19 Nov 1999
TL;DR: Curious caching, as described in this paper, improves upon cache snooping by allowing a snooping cache to insert data from snooped bus operations that is not currently in the cache, independent of any prior accesses to the associated memory location.
Abstract: Curious caching improves upon cache snooping by allowing a snooping cache to insert data from snooped bus operations that is not currently in the cache and independent of any prior accesses to the associated memory location. In addition, curious caching allows software to specify which data-producing bus operations, e.g., reads and writes, result in data being inserted into the cache. This is implemented by specifying “memory regions of curiosity” and insertion and replacement policy actions for those regions. In column caching, the replacement of data can be restricted to particular regions of the cache. By also making the replacement address-dependent, column caching allows different regions of memory to be mapped to different regions of the cache. In a set-associative cache, a replacement policy specifies the particular column(s) of the set-associative cache in which a page of data can be stored. The column specification is made in page table entries in a TLB that translates between virtual and physical addresses. The TLB includes a bit vector, one bit per column, which indicates the columns of the cache that are available for replacement.

100 citations
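
A rough sketch of the column-caching idea (my own construction with a toy geometry, not the patent's hardware): each page carries a TLB-style bit vector over cache columns, and replacement for that page is restricted to columns whose bit is set:

    import random

    NUM_SETS = 4     # toy geometry, chosen for illustration
    NUM_COLUMNS = 4  # "columns" are the ways of the set-associative cache

    # Hypothetical per-page bit vectors, as a TLB entry would hold them:
    # page 0 may only replace into columns 0-1, page 1 only into 2-3.
    column_mask = {0: 0b0011, 1: 0b1100}

    cache = [[None] * NUM_COLUMNS for _ in range(NUM_SETS)]  # [set][column]

    def insert(page, addr):
        """Place addr into its set, replacing only within the columns
        permitted by the owning page's bit vector (column caching)."""
        s = addr % NUM_SETS
        allowed = [c for c in range(NUM_COLUMNS) if (column_mask[page] >> c) & 1]
        for c in allowed:                  # prefer an empty permitted column
            if cache[s][c] is None:
                cache[s][c] = (page, addr)
                return c
        c = random.choice(allowed)         # else replace within permitted columns
        cache[s][c] = (page, addr)
        return c

    print(insert(0, 8))  # lands in column 0 or 1 only
    print(insert(1, 8))  # same set, but lands in column 2 or 3 only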

Patent
01 Dec 2000
TL;DR: In this article, the data access layer determines whether a data item required by an application program is in the cache; if it is, the data access layer obtains the item from the cache; otherwise, it obtains the item from the data source.
Abstract: A middle-tier Web server (230) with a queryable cache (219) that contains items from one or more data sources (241). Items are included in the cache (223) on the basis of the probability of future hits on the items. When the data source (241) determines that an item that has been included in the cache (223) has changed, it sends an update message to the server (230), which updates the item if it is still included in the cache. In a preferred embodiment, the data source is a database system and triggers in the database system are used to generate update messages. In a preferred embodiment, the data access layer determines whether a data item required by an application program is in the cache. If it is, the data access layer obtains the item from the cache; otherwise, it obtains the item from the data source. The queryable cache includes a miss table that accelerates the determination of whether a data item is in the cache. The miss table is made up of miss table entries that relate the status of a data item to the query used to access the data item. There are three statuses: miss, indicating that the item is not in the cache, hit, indicating that it is, and unknown, indicating that it is not known whether the item is in the cache. When an item is referenced, the query used to access it is presented to the table. If the entry for the query has the status miss, the data access layer obtains the item from the data source instead of attempting to obtain it from the cache. If the entry has the status unknown, the data access layer attempts to obtain it from the cache and the miss table entry for the item is updated in accordance with the result. When a copy of an item is added to the cache, miss table entries with the status miss are set to indicate unknown.

100 citations
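
A sketch of the miss-table logic as this abstract describes it (the data structures and names below are illustrative, not the patent's):

    MISS, HIT, UNKNOWN = "miss", "hit", "unknown"

    class QueryableCache:
        """Per-query miss-table statuses short-circuit cache probes that
        are known to fail, sending them straight to the data source."""
        def __init__(self, data_source):
            self.store = {}        # the cache proper: query -> item
            self.miss_table = {}   # query -> MISS | HIT | UNKNOWN
            self.data_source = data_source

        def get(self, query):
            status = self.miss_table.get(query, UNKNOWN)
            if status == MISS:                 # known miss: skip the cache
                return self.data_source(query)
            item = self.store.get(query)       # HIT or UNKNOWN: probe the cache
            if item is not None:
                self.miss_table[query] = HIT
                return item
            self.miss_table[query] = MISS      # record the miss for next time
            return self.data_source(query)

        def insert(self, query, item):
            self.store[query] = item
            # A newly cached copy may satisfy queries recorded as misses,
            # so entries with status miss are set back to unknown.
            for q, s in self.miss_table.items():
                if s == MISS:
                    self.miss_table[q] = UNKNOWN

        def on_update_message(self, query):
            # The data source reports a change: drop the stale copy.
            self.store.pop(query, None)
            self.miss_table[query] = UNKNOWN

    db = lambda q: f"row-for-{q}"   # stand-in data source
    c = QueryableCache(db)
    print(c.get("k1"))              # unknown -> probe misses -> fetched from source
    c.insert("k1", "row-for-k1")    # copy added; miss entries become unknown
    print(c.get("k1"))              # now probes the cache and hits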


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 93% related
Scalability: 50.9K papers, 931.6K citations, 88% related
Server: 79.5K papers, 1.4M citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 83% related
Dynamic Source Routing: 32.2K papers, 695.7K citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20