Topic

Cache invalidation

About: Cache invalidation is a research topic. Over the lifetime, 10539 publications have been published within this topic receiving 245409 citations.


Papers
Patent
27 Jan 2003
TL;DR: In this article, a data element having a special write with inject attribute is received from a data producer (160, 640), such as an Ethernet controller, and forwarded to the cache without accessing the lower-level memory system (170, 650).
Abstract: A data processing system (100, 600) has a memory hierarchy including a cache (124, 624) and a lower-level memory system (170, 650). A data element having a special write with inject attribute is received from a data producer (160, 640), such as an Ethernet controller. The data element is forwarded to the cache (124, 624) without accessing the lower-level memory system (170, 650). Subsequently at least one cache line containing the data element is updated in the cache (124, 624).
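The write-with-inject path described above can be illustrated with a minimal sketch. This is a hypothetical software model, not the patented hardware: the class name, method names, and the dirty-bit representation are all assumptions made for illustration.

```python
class InjectCache:
    """Toy model of a write-with-inject path: data tagged with the
    inject attribute is placed directly into the cache, bypassing the
    lower-level memory system; ordinary writes go through to memory."""

    def __init__(self):
        self.cache = {}    # line address -> (data, dirty flag)
        self.memory = {}   # lower-level memory system

    def write(self, addr, data, inject=False):
        if inject:
            # Forward to the cache without accessing lower-level memory.
            # The line is marked dirty so it would be written back on eviction.
            self.cache[addr] = (data, True)
        else:
            # Normal write: update memory and keep the cached copy clean.
            self.memory[addr] = data
            self.cache[addr] = (data, False)
```

The point of the inject attribute is visible in the model: a producer such as a network controller can deposit incoming data where the consumer will read it, without a round trip through main memory.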

59 citations

Patent
25 Jul 1988
TL;DR: In this paper, the authors propose a load/store pipeline in a computer processor, for loading data to registers and storing data from the registers, that has a cache memory within the pipeline for storing data.
Abstract: A load/store pipeline in a computer processor for loading data to registers and storing data from the registers has a cache memory within the pipeline for storing data. The pipeline includes buffers which support multiple outstanding read request misses. Data from out of the pipeline is obtained independently of the operation of the pipeline, this data corresponding to the request misses. The cache memory can then be filled with the data that has been requested. The provision of a cache memory within the pipeline, and the buffers for supporting the cache memory, speed up loading operations for the computer processor.

59 citations

Patent
John D. Curtis1
15 Sep 2003
TL;DR: In this paper, a history of requests for data objects is tracked and maintained in a cache log, and discard and refresh rules are assigned to each data object on a class basis.
Abstract: Under the present invention, a history of requests for data objects is tracked and maintained in a cache log. Based on the history, certain data objects are prefetched into a cache. When a request for a cached data object is later received, the requested data object can be retrieved from the cache and served to the requesting user. Thus, the latency involved with obtaining the data objects from the appropriate sources is eliminated. Further, under the present invention, discard and refresh rules are assigned to each data object on a class basis. Accordingly, data objects in the cache can be refreshed and/or discarded so that the caching operation can be optimized.
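A history-driven prefetch cache with per-class refresh rules, as the abstract describes, can be sketched as follows. Everything here is a hypothetical illustration: the class name, the `key` format used to derive an object's class, and the prefetch threshold are assumptions, not details from the patent.

```python
import time
from collections import Counter

class PrefetchCache:
    """Toy model: requests are logged in a history, frequently requested
    objects are prefetched, and each object's class selects its TTL
    (the refresh/discard rule)."""

    def __init__(self, fetch, ttl_by_class, prefetch_threshold=3):
        self.fetch = fetch                  # callable: key -> value (the source)
        self.ttl_by_class = ttl_by_class    # class name -> TTL in seconds
        self.threshold = prefetch_threshold
        self.log = Counter()                # request history (the cache log)
        self.cache = {}                     # key -> (value, expiry time)

    def _class_of(self, key):
        # Assume keys look like "class:name", e.g. "image:logo".
        return key.split(":", 1)[0]

    def get(self, key):
        self.log[key] += 1
        now = time.monotonic()
        entry = self.cache.get(key)
        if entry and entry[1] > now:
            return entry[0]                 # served from cache: no source latency
        value = self.fetch(key)             # cache miss or stale: go to the source
        ttl = self.ttl_by_class.get(self._class_of(key), 60)
        self.cache[key] = (value, now + ttl)
        return value

    def prefetch_hot(self):
        """Warm the cache with objects requested at least `threshold` times."""
        now = time.monotonic()
        for key, count in self.log.items():
            if count >= self.threshold and key not in self.cache:
                ttl = self.ttl_by_class.get(self._class_of(key), 60)
                self.cache[key] = (self.fetch(key), now + ttl)
```

Assigning TTLs per class rather than per object keeps the policy table small while still letting, say, volatile data expire quickly and static assets linger.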

59 citations

Proceedings ArticleDOI
28 Jun 2013
TL;DR: This paper investigates the current state of side-channel vulnerabilities involving the CPU cache, and identifies the shortcomings of traditional defenses in a Cloud environment, and develops a mitigation technique applicable for Cloud security.
Abstract: As Cloud services become more commonplace, recent work has uncovered vulnerabilities unique to Cloud systems. Specifically, the paradigm promotes a risk of information leakage across virtual machine isolation via side-channels. In this paper, we investigate the current state of side-channel vulnerabilities involving the CPU cache, and identify the shortcomings of traditional defenses in a Cloud environment. We explore why solutions to non-Cloud cache-based side-channels cease to work in Cloud environments, and develop a mitigation technique applicable for Cloud security. Applying this solution to a canonical Cloud environment, we demonstrate the validity of this Cloud-specific, cache-based side-channel mitigation technique. Furthermore, we show that it can be implemented as a server-side approach to improve security without inconveniencing the client. Finally, we conduct a comparison of our solution to the current state-of-the-art.

59 citations

Patent
26 Oct 1990
TL;DR: In this article, an integrated circuit structure for a multiprocessor system includes an execution unit operative on the basis of a virtual storage scheme and a cache memory having entries designated by logical addresses from the execution unit.
Abstract: A processing apparatus of an integrated circuit structure for a multiprocessor system includes an execution unit operative on the basis of a virtual storage scheme and a cache memory having entries designated by logical addresses from the execution unit. For controlling the cache memory, a first address array containing entries designated by the same logical addresses as the cache memory and storing control information for the corresponding entries of the cache memory is provided in association with a second address array having entries designated by physical addresses and storing translation information for translation of physical addresses to logical addresses for the entries. When a physical address at which invalidation is to be performed is inputted in response to a cache memory invalidation request supplied externally, access is made to the second address array by using the physical address to obtain the translation information from the second address array, to thereby generate a logical address to be invalidated. The first address array is accessed by using the generated logical address to perform an invalidation processing on the control information.
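The two-array invalidation scheme above can be sketched in a few lines. This is a software caricature of the hardware, with all names (class, dictionaries, control bits) invented for illustration: a logically indexed cache keeps its control state in a first array, and a second array maps physical addresses back to logical ones so that an externally supplied physical-address invalidation request can find the right entry.

```python
class SnoopInvalidator:
    """Toy model of cache invalidation in a virtually indexed cache:
    external requests carry physical addresses, so a reverse-translation
    array recovers the logical address before the entry is invalidated."""

    def __init__(self):
        self.cache = {}            # logical address -> cached data
        self.control = {}          # first address array: logical addr -> control info
        self.phys_to_logical = {}  # second address array: physical -> logical

    def fill(self, logical, physical, data):
        # On a cache fill, record data, control info, and the reverse mapping.
        self.cache[logical] = data
        self.control[logical] = {"valid": True}
        self.phys_to_logical[physical] = logical

    def invalidate(self, physical):
        """Handle an external invalidation request given by physical address."""
        logical = self.phys_to_logical.get(physical)   # consult second array
        if logical is not None and logical in self.control:
            self.control[logical]["valid"] = False     # update first array
```

The design choice the patent addresses is visible here: because the cache is indexed by logical addresses, a physical-address snoop cannot locate an entry directly, so the second array supplies the missing physical-to-logical translation.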

59 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
93% related
Scalability
50.9K papers, 931.6K citations
88% related
Server
79.5K papers, 1.4M citations
88% related
Network packet
159.7K papers, 2.2M citations
83% related
Dynamic Source Routing
32.2K papers, 695.7K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    44
2022    117
2021    4
2020    8
2019    7
2018    20