Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.


Papers
Proceedings ArticleDOI
29 Nov 2011
TL;DR: This paper introduces a new method of bounding pre-emption costs, called the ECB-Union approach, which complements the existing UCB-Union approach; the two are then combined into a simple composite approach that dominates both.
Abstract: Without the use of cache the increasing gap between processor and memory speeds in modern embedded microprocessors would have resulted in memory access times becoming an unacceptable bottleneck. In such systems, cache related pre-emption delays can be a significant proportion of task execution times. To obtain tight bounds on the response times of tasks in pre-emptively scheduled systems, it is necessary to integrate worst-case execution time analysis and schedulability analysis via the use of an appropriate model of pre-emption costs. In this paper, we introduce a new method of bounding pre-emption costs, called the ECB-Union approach. The ECB-Union approach complements an existing UCB-Union approach. We combine the two into a simple composite approach that dominates both. These approaches are integrated into response time analysis for fixed priority pre-emptively scheduled systems. Further, we extend this analysis to systems where tasks can access resources in mutual exclusion, in the process resolving omissions in existing models of pre-emption delays. A case study and empirical evaluation demonstrate the effectiveness of the ECB-Union and combined approaches for a wide range of different cache configurations including cache utilization, cache set size, reuse, and block reload times.
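To make the two bounds concrete, here is a minimal Python sketch of the UCB-Union and ECB-Union pre-emption cost calculations described in the abstract. The set representation, the toy task parameters, and the block reload time (BRT) are illustrative assumptions, and taking the per-preemption minimum is a simplification of the paper's composite analysis, which combines the two response-time analyses as a whole.

```python
# Hedged sketch of the UCB-Union and ECB-Union CRPD bounds. UCB = useful
# cache blocks of a task, ECB = evicting cache blocks; BRT = block reload
# time. Task IDs, sets, and BRT below are toy values (assumptions).

def ucb_union_cost(ucb, ecb, aff, j, brt):
    """UCB-Union: blocks useful to ANY task affected by a pre-emption by
    task j that task j's ECBs may evict."""
    affected_ucbs = set().union(*(ucb[k] for k in aff)) if aff else set()
    return brt * len(affected_ucbs & ecb[j])

def ecb_union_cost(ucb, ecb, aff, hep_j, brt):
    """ECB-Union: treat a pre-emption by task j as if j and every task that
    can pre-empt j all ran, then take the worst single affected task."""
    evicting = set().union(*(ecb[h] for h in hep_j))
    return brt * max((len(ucb[k] & evicting) for k in aff), default=0)

# Toy task set: cache blocks are small integers, BRT = 8 cycles (assumed).
ucb = {1: {0, 1, 2, 3}, 2: {2, 3}, 3: set()}         # useful cache blocks
ecb = {1: {0, 1, 2, 3}, 2: {2, 3, 4, 5}, 3: {0, 4}}  # evicting cache blocks
aff = {1, 2}   # tasks affected by a pre-emption by task 3
hep_3 = {3}    # task 3 plus any task that can pre-empt it (none here)

g_ucb = ucb_union_cost(ucb, ecb, aff, 3, brt=8)
g_ecb = ecb_union_cost(ucb, ecb, aff, hep_3, brt=8)
print(g_ucb, g_ecb, min(g_ucb, g_ecb))  # a composite keeps the tighter bound
```

Neither bound dominates the other in general, which is why combining them pays off.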

91 citations

Patent
09 Feb 1993
TL;DR: This patent describes a cache locking scheme for a two-way set-associative instruction cache that uses a specially designed Least Recently Used (LRU) unit to effectively lock a first portion of the cache, providing high-speed, predictable execution times for time-critical code sections residing there while leaving the other portion free to operate as a normal instruction cache for non-critical code sections.
Abstract: An instruction locking apparatus and method for a cache memory allowing execution time predictability and high speed performance. The present invention implements a cache locking scheme in a two-way set-associative instruction cache that utilizes a specially designed Least Recently Used (LRU) unit to effectively lock a first portion of the instruction cache to allow high speed and predictable execution time for time critical program code sections residing in the first portion while leaving another portion of the instruction cache free to operate as an instruction cache for other, non-critical, code sections. The present invention provides the above features in a system that is virtually transparent to the program code and does not require a variety of complex or specialized instructions or address coding methods. The present invention is flexible in that the two-way set-associative instruction cache is transformed into what may be thought of as a static RAM in cache and, in addition, a direct-mapped cache unit. Several different time critical code sections may be loaded and locked into the cache at different times.
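A minimal sketch of the locking idea: in a two-way set-associative cache, forcing the replacement decision to always victimize way 1 while a lock flag is set makes way 0 behave like static RAM and way 1 like a direct-mapped cache. The Python class below is illustrative only; the patent's LRU unit is hardware, and all names here are assumptions.

```python
# Hedged sketch of lock-via-LRU in a two-way set-associative cache. The
# single global `locked` flag and all field names are illustrative.

class TwoWayLockableCache:
    def __init__(self, num_sets):
        self.tags = [[None, None] for _ in range(num_sets)]  # [way0, way1]
        self.lru = [0] * num_sets  # way to evict next when unlocked
        self.locked = False        # True: way 0 frozen ("static RAM in cache")
        self.num_sets = num_sets

    def access(self, addr):
        s, tag = addr % self.num_sets, addr // self.num_sets
        ways = self.tags[s]
        if tag in ways:                      # hit: mark the other way as LRU
            if not self.locked:
                self.lru[s] = 1 - ways.index(tag)
            return True
        victim = 1 if self.locked else self.lru[s]  # locked: direct-map way 1
        ways[victim] = tag
        if not self.locked:
            self.lru[s] = 1 - victim
        return False

cache = TwoWayLockableCache(num_sets=4)
for a in range(8):       # preload time-critical code into both ways
    cache.access(a)
cache.locked = True      # way 0 now holds locked code; way 1 stays a
cache.access(100)        # normal (direct-mapped) cache for other code
```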

91 citations

Patent
24 Apr 1992
TL;DR: In this patent, a bus interface coupled to the processor, the backup cache memory, and the bus operates in accordance with a SNOOPY protocol, monitoring transactions on the bus for write transactions that affect data items in the backup cache memory whose VALID indicators are set.
Abstract: A processor apparatus for use in a multiprocessor computer system having a main memory storing a plurality of data items and being coupled to a bus operating according to a SNOOPY protocol. The processor apparatus includes a processor, a primary cache, a backup cache and a bus interface. The backup cache memory includes a first TAG store comprising a plurality of VALID indicators, one VALID indicator for each of the data items currently contained in the backup cache memory. The primary cache memory includes a second TAG store comprising a plurality of address indicators and a plurality of VALID indicators, one address indicator and one VALID indicator for each of the data items currently contained in the primary cache memory. The interface includes a duplicate TAG store coupled to the primary cache memory, the duplicate TAG store consisting of a copy of the address indicators of the second TAG store. The bus interface is coupled to the processor, the backup cache memory and the bus. The bus interface operates in accordance with a SNOOPY protocol to monitor transactions on the bus for write transactions affecting data items in the backup cache memory having set VALID indicators. The bus interface will invalidate or update each VALID data item of the backup cache memory affected by a write transaction, and assert an invalidate signal for an affected data item indicated by the address indicators of the duplicate TAG store. The invalidate signal causes the VALID indicator in the second TAG store for the affected data item to be cleared.
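The invalidation path can be sketched as follows: the bus interface snoops bus writes, clears the backup cache's VALID bit for a matching block, and consults the duplicate TAG store to decide whether to assert an invalidate signal toward the primary cache. This Python model is a hedged illustration; the dictionaries, the callback standing in for the invalidate signal, and all identifiers are assumptions, not the patent's interfaces.

```python
# Hedged sketch of snoop-driven invalidation using a duplicate TAG store.
# Dictionaries and the callback standing in for the invalidate signal are
# modeling assumptions, not the patent's hardware interfaces.

class PrimaryCache:
    def __init__(self):
        self.valid = {}                  # second TAG store: block -> VALID bit

    def invalidate(self, block):         # effect of the invalidate signal
        self.valid[block] = False

class BusInterface:
    def __init__(self, backup_valid, duplicate_tags, primary):
        self.backup_valid = backup_valid      # first TAG store (backup cache)
        self.duplicate_tags = duplicate_tags  # copy of primary-cache tags
        self.primary = primary

    def snoop(self, op, block):
        if op != "write":
            return
        # A bus write: clear the backup cache's VALID bit if it is set ...
        if self.backup_valid.get(block):
            self.backup_valid[block] = False
        # ... and, if the duplicate tags show the primary cache holds the
        # block, assert the invalidate signal toward the second TAG store.
        if block in self.duplicate_tags:
            self.primary.invalidate(block)

primary = PrimaryCache()
primary.valid[0x40] = True
bus_if = BusInterface({0x40: True, 0x80: True}, {0x40}, primary)
bus_if.snoop("write", 0x40)  # another processor wrote block 0x40
assert not primary.valid[0x40] and not bus_if.backup_valid[0x40]
```

Keeping a duplicate of the primary cache's tags in the bus interface lets snoops be filtered without stealing tag-array bandwidth from the processor.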

91 citations

Patent
31 Mar 1995
TL;DR: In this patent, a multiprocessor computer system has a multiplicity of sub-systems and a main memory coupled to a system controller; each data-processor sub-system includes a master interface having master classes for sending memory transaction requests to the system controller.
Abstract: A multiprocessor computer system is provided having a multiplicity of sub-systems and a main memory coupled to a system controller. An interconnect module interconnects the main memory and sub-systems in accordance with interconnect control signals received from the system controller. At least two of the sub-systems are data processors, each having a respective cache memory that stores multiple blocks of data and a respective master cache index. Each master cache index has a set of master cache tags (Etags), including one cache tag for each data block stored by the cache memory. Each data processor includes a master interface having master classes for sending memory transaction requests to the system controller. The system controller includes memory transaction request logic for processing each memory transaction request by a data processor. The system controller maintains a duplicate cache index having a set of duplicate cache tags (Dtags) for each data processor. Each data processor has a writeback buffer for storing the data block previously stored in a victimized cache line until its respective writeback transaction is completed, and an Nth+1 Dtag for storing the cache state of a cache line associated with a read transaction which is executed prior to an associated writeback transaction of a read-writeback transaction pair. Accordingly, upon a cache miss, the interconnect may execute the read and writeback transactions in parallel, relying on the writeback buffer or Nth+1 Dtag to accommodate any ordering of the transactions.
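As a rough illustration of the read/writeback decoupling, the sketch below moves a victimized line into a writeback buffer so the read can fill the cache immediately, while the controller parks the incoming line's state in an extra (Nth+1) Dtag slot until the paired writeback retires. Every structure and name here is an assumption made for illustration; the patent describes hardware interfaces, not this simplified model.

```python
# Hedged sketch of the parallel read/writeback idea. One cache line per
# processor, dictionaries for Dtags; everything here is an assumption made
# to illustrate the ordering problem, not the patent's design.

class SystemController:
    def __init__(self):
        self.dtags = {}       # cpu -> block tracked by the normal Dtags
        self.extra_dtag = {}  # cpu -> "Nth+1" slot for a read that may
                              # complete before its paired writeback

    def read(self, cpu, block, paired):
        if paired:
            self.extra_dtag[cpu] = block  # park state until writeback retires
        else:
            self.dtags[cpu] = block

    def writeback(self, cpu, block):
        # Victim retires; promote the parked read state into the Dtags.
        self.dtags[cpu] = self.extra_dtag.pop(cpu)

class Processor:
    def __init__(self, ctl, cpu_id):
        self.ctl, self.cpu_id = ctl, cpu_id
        self.line = None              # a single cache line, for brevity
        self.writeback_buffer = None  # holds a victimized line

    def miss(self, new_block):
        victim, self.line = self.line, new_block
        if victim is None:
            self.ctl.read(self.cpu_id, new_block, paired=False)
            return
        self.writeback_buffer = victim  # victimize without waiting
        # Read and writeback go out as a pair and may retire in either
        # order; the extra Dtag slot absorbs the "read first" case.
        self.ctl.read(self.cpu_id, new_block, paired=True)
        self.ctl.writeback(self.cpu_id, self.writeback_buffer)
        self.writeback_buffer = None

ctl = SystemController()
cpu = Processor(ctl, cpu_id=0)
cpu.miss(0xA)      # cold miss: no victim, read retires directly
cpu.miss(0xB)      # dirty miss: read 0xB and write back 0xA in parallel
print(ctl.dtags)   # {0: 11}, i.e. block 0xB is now tracked
```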

91 citations

Proceedings ArticleDOI
27 Sep 2003
TL;DR: This work uses the authors' recently published locality analysis to generate a parameterized model of program cache behavior that predicts the miss rate for arbitrary data input set sizes and identifies critical data input sizes where cache behavior exhibits marked changes.
Abstract: Improving cache performance requires understanding cache behavior. However, measuring cache performance for one or two data input sets provides little insight into how cache behavior varies across all data input sets. This paper uses our recently published locality analysis to generate a parameterized model of program cache behavior. Given a cache size and associativity, this model predicts the miss rate for arbitrary data input set sizes. This model also identifies critical data input sizes where cache behavior exhibits marked changes. Experiments show this technique is within 2% of the hit rate for set associative caches on a set of integer and floating-point programs.
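The core of such locality analysis is the reuse (LRU stack) distance: under LRU, a reference misses in a fully associative cache of C blocks exactly when the number of distinct blocks touched since its last use is at least C. The Python sketch below shows only this histogram-to-miss-rate step; the paper's model additionally fits how the histogram scales with data input size, which is not reproduced here.

```python
# Hedged sketch of the reuse-distance step behind such a model: under LRU,
# a reference misses in a fully associative cache of C blocks exactly when
# its reuse distance is >= C. Trace and cache size below are toy values.
from collections import Counter

def reuse_distance_histogram(trace):
    stack, hist = [], Counter()  # LRU stack: most recently used at the end
    for block in trace:
        if block in stack:
            # Distinct blocks touched since the last use of `block`.
            hist[len(stack) - 1 - stack.index(block)] += 1
            stack.remove(block)
        else:
            hist[float("inf")] += 1  # first touch: cold miss
        stack.append(block)
    return hist

def predicted_miss_rate(hist, cache_blocks):
    misses = sum(n for d, n in hist.items() if d >= cache_blocks)
    return misses / sum(hist.values())

trace = [0, 1, 2, 0, 1, 2, 3, 0]
hist = reuse_distance_histogram(trace)
print(predicted_miss_rate(hist, cache_blocks=4))  # 0.5 for this toy trace
```

Because the histogram is computed once per program and then parameterized by input size, the miss rate for a new cache configuration can be predicted without rerunning the program.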

91 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Server: 79.5K papers, 1.4M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (83% related)
Dynamic Source Routing: 32.2K papers, 695.7K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20