Topic

Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published within this topic, receiving 245,409 citations.


Papers
Patent
01 Apr 2013
TL;DR: In this patent, a method and apparatus for dynamically controlling the cache size of a processor is presented: when the processor changes from one operating point to another, power is selectively removed from one or more ways of the cache memory, and subsequent instructions are processed using only the ways that remain powered.
Abstract: A method and apparatus for dynamically controlling a cache size is disclosed. In one embodiment, a method includes changing an operating point of a processor from a first operating point to a second operating point, and selectively removing power from one or more ways of a cache memory responsive to changing the operating point. The method further includes processing one or more instructions in the processor subsequent to removing power from the one or more ways of the cache memory, wherein said processing includes accessing one or more ways of the cache memory from which power was not removed.
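To make the mechanism concrete, here is a minimal sketch of the way-gating idea (invented class and method names, not the patent's implementation): a set-associative cache whose upper ways are power-gated when the processor drops to a lower operating point, with lookups consulting only the ways that kept power.

    class WayGatedCache:
        def __init__(self, num_sets, num_ways):
            self.num_sets = num_sets
            self.num_ways = num_ways
            self.powered = [True] * num_ways                  # per-way power state
            self.tags = [[None] * num_ways for _ in range(num_sets)]

        def set_operating_point(self, active_ways):
            # Dropping to a lower operating point powers down the upper ways;
            # the sketch assumes their contents are clean or were written
            # back beforehand, since gated ways lose their contents.
            for w in range(self.num_ways):
                self.powered[w] = w < active_ways
                if not self.powered[w]:
                    for s in range(self.num_sets):
                        self.tags[s][w] = None                # contents are lost

        def lookup(self, addr):
            s, tag = addr % self.num_sets, addr // self.num_sets
            # Subsequent accesses consult only the ways that kept power.
            return any(self.powered[w] and self.tags[s][w] == tag
                       for w in range(self.num_ways))

A real design would also have to drain or write back dirty lines before removing power; that handling is omitted here.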

65 citations

Patent
09 Dec 1993
TL;DR: In this paper, a microprocessor with an integral, two-level cache memory architecture is presented: when the first level cache misses, the discarded first level entry is stored in an on-die replacement cache, which is checked for the desired entry before main memory is accessed.
Abstract: A microprocessor is provided with an integral, two level cache memory architecture. The microprocessor includes a microprocessor core and a set associative first level cache both located on a common semiconductor die. A replacement cache, which is at least as large as approximately one half the size of the first level cache, is situated on the same semiconductor die and is coupled to the first level cache. In the event of a first level cache miss, a first level entry is discarded and stored in the replacement cache. When such a first level cache miss occurs, the replacement cache is checked to see if the desired entry is stored therein. If a replacement cache hit occurs, then the hit entry is forwarded to the first level cache and stored therein. If a cache miss occurs in both the first level cache and the replacement cache, then a main memory access is commenced to retrieve the desired entry. In that event, the desired entry retrieved from main memory is forwarded to the first level cache and stored therein. When a replacement cache entry is removed from the replacement cache by the replacement algorithm associated therewith, that entry is written back to main memory if that entry was modified. Otherwise the entry is discarded.
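As a rough illustration of the replacement-cache flow described above (a software model with invented names, standing in for what the patent describes in hardware), the sketch below keeps an L1 whose evicted entries drop into a small victim cache that is probed on an L1 miss before main memory is accessed; modified entries leaving the victim cache are written back, clean ones discarded.

    from collections import OrderedDict

    class VictimCachedL1:
        def __init__(self, l1_size, victim_size):
            self.l1 = OrderedDict()            # addr -> dirty flag, in LRU order
            self.victim = OrderedDict()
            self.l1_size, self.victim_size = l1_size, victim_size

        def _evict_to_victim(self):
            addr, dirty = self.l1.popitem(last=False)     # LRU entry leaves L1
            self.victim[addr] = dirty
            if len(self.victim) > self.victim_size:
                vaddr, vdirty = self.victim.popitem(last=False)
                if vdirty:
                    pass                       # modified: write back to memory
                # clean entries are simply discarded

        def access(self, addr, write=False):
            if addr in self.l1:                            # first level hit
                self.l1[addr] |= write
                self.l1.move_to_end(addr)
                return "L1 hit"
            if len(self.l1) >= self.l1_size:
                self._evict_to_victim()
            if addr in self.victim:                        # replacement cache hit
                self.l1[addr] = self.victim.pop(addr) | write
                return "victim hit"
            self.l1[addr] = write                          # miss: fetch from memory
            return "miss"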

65 citations

Patent
Samantha J. Edirisooriya
27 Oct 2005
TL;DR: In this paper, a centralized pushing mechanism is used to push data into a processor cache in a computing system with at least one processor, where each processor may comprise one or more processing units, each of which may be associated with a cache.
Abstract: An arrangement is provided for using a centralized pushing mechanism to actively push data into a processor cache in a computing system with at least one processor. Each processor may comprise one or more processing units, each of which may be associated with a cache. The centralized pushing mechanism may predict data requests of each processing unit in the computing system based on each processing unit’s memory access pattern. Data predicted to be requested by a processing unit may be moved from a memory to the centralized pushing mechanism which then sends the data to the requesting processing unit. A cache coherency protocol in the computing system may help maintain the coherency among all caches in the system when the data is placed into a cache of the requesting processing unit.
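A minimal sketch of the prediction-and-push idea follows, under the assumption (ours, not the patent's) that the centralized mechanism uses a simple per-unit stride predictor; class names are invented and cache-coherency handling is omitted.

    class ProcessingUnit:
        def __init__(self):
            self.cache = {}                    # addr -> data

    class CentralizedPusher:
        def __init__(self, memory):
            self.memory = memory               # addr -> data (main memory model)
            self.history = {}                  # unit -> (last addr, last stride)

        def observe(self, unit, addr):
            # Watch each unit's accesses; when the same nonzero stride
            # repeats, push the predicted next line into that unit's cache
            # ahead of its demand request.
            last = self.history.get(unit)
            if last is not None:
                stride = addr - last[0]
                nxt = addr + stride
                if stride != 0 and stride == last[1] and nxt in self.memory:
                    unit.cache[nxt] = self.memory[nxt]     # the push
                self.history[unit] = (addr, stride)
            else:
                self.history[unit] = (addr, 0)

In the patent's system, a cache coherency protocol keeps the pushed copy coherent with the other caches; that machinery is left out of the sketch.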

65 citations

Patent
02 Apr 2002
TL;DR: In this paper, a microprocessor (10) configured to store victimized instruction and data bytes is disclosed, which includes a predecode unit (12), an instruction cache (16), a data cache (28), and a level two cache (50).
Abstract: A microprocessor (10) configured to store victimized instruction and data bytes is disclosed. In one embodiment, the microprocessor includes a predecode unit (12), an instruction cache (16), a data cache (28), and a level two cache (50). The predecode unit receives instruction bytes and generates corresponding predecode information that is stored in the instruction cache with the instruction bytes. The data cache receives and stores data bytes. The level two cache is configured to receive and store victimized instruction bytes from the instruction cache along with parity information and predecode information, and victimized data bytes from the data cache along with error correction code bits. Indicator bits may be stored on a cache line basis to indicate the type of data stored therein.
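The following sketch (illustrative only; all names invented) models the level two cache's per-line indicator bit, storing instruction victims with a parity bit plus predecode information and data victims with ECC bits.

    INSN, DATA = 0, 1

    def parity(byts):
        return sum(bin(b).count("1") for b in byts) & 1

    class VictimL2:
        def __init__(self):
            self.lines = {}                # addr -> (indicator, bytes, metadata)

        def insert_insn_victim(self, addr, byts, predecode):
            # Instruction victims keep a parity bit plus the predecode
            # information, so it need not be regenerated on a later refill.
            self.lines[addr] = (INSN, byts, (parity(byts), predecode))

        def insert_data_victim(self, addr, byts, ecc_bits):
            # Data victims are protected with error correction code bits.
            self.lines[addr] = (DATA, byts, ecc_bits)

        def lookup(self, addr):
            indicator, byts, meta = self.lines[addr]
            if indicator == INSN:
                pbit, predecode = meta
                assert pbit == parity(byts), "parity error in instruction line"
                return byts, predecode
            return byts, meta              # meta holds the ECC bits here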

65 citations

Proceedings ArticleDOI
23 Jun 1997
TL;DR: It is proposed that Soft Caching, where an image can be cached at one of a set of resolution levels, can benefit the overall performance when combined with cache management strategies that estimate, for each object, both the bandwidth to the server where the object is stored and the resolution level demanded by the user.
Abstract: The vast majority of current Internet traffic is generated by web browsing applications. Proxy caching, which allows some of the most popular web objects to be cached at intermediate nodes within the network, has been shown to provide substantial performance improvements. In this paper we argue that image-specific caching strategies are desirable and will result in improved performance over approaches treating all objects alike. We propose that Soft Caching, where an image can be cached at one of a set of levels of resolutions, can benefit the overall performance when combined with cache management strategies that estimate, for each object, both the bandwidth to the server where the object is stored and the appropriate resolution level demanded by the user. We formalize the cache management problem under these conditions and describe an experimental system to test these techniques.
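As a sketch of how such a cache manager might trade resolution for space (our greedy reading of the idea, with invented function names and parameters), the following drops the finest remaining resolution layer of whichever image's top layer saves the least expected transfer time, instead of evicting whole objects.

    def layer_value(popularity, layer_bytes, bandwidth_bps):
        # Expected transfer seconds saved by keeping this layer cached.
        return popularity * (8 * layer_bytes / bandwidth_bps)

    def make_room(images, bytes_needed):
        # images: dicts with 'layers' (byte sizes, coarsest first),
        # 'popularity' (requests/s), and estimated 'bandwidth' (bits/s)
        # to the origin server. Peel top layers until enough space is free.
        freed = 0
        while freed < bytes_needed:
            candidates = [im for im in images if im["layers"]]
            if not candidates:
                break                          # nothing left to degrade
            victim = min(candidates,
                         key=lambda im: layer_value(im["popularity"],
                                                    im["layers"][-1],
                                                    im["bandwidth"]))
            freed += victim["layers"].pop()    # drop the finest remaining layer
        return freed

The design choice mirrors the paper's premise: a popular image behind a slow link keeps its high-resolution layers longest, while rarely requested images degrade to coarse versions before being evicted outright.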

65 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Server: 79.5K papers, 1.4M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (83% related)
Dynamic Source Routing: 32.2K papers, 695.7K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20