Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
27 Jun 1988
TL;DR: A pipelined central processor capable of executing both single-cycle and multicycle instructions is provided; its instruction fetch stage includes an instruction cache memory and a prediction cache memory that are commonly addressed by a program counter register.
Abstract: A pipelined central processor capable of executing both single-cycle instructions and multicycle instructions is provided. An instruction fetch stage of the processor includes an instruction cache memory and a prediction cache memory that are commonly addressed by a program counter register. The instruction cache memory stores instructions of a program being executed and microinstructions of a multicycle instruction interpreter. The prediction cache memory stores interpreter call predictions and interpreter entry addresses at the addresses of the multicycle instructions. When a call prediction occurs, the entry address of the instruction interpreter is loaded into the program counter register on the processing cycle immediately following the call prediction, and a return address is pushed onto a stack. The microinstructions of the interpreter are fetched sequentially from the instruction cache memory. When the interpreter is completed, the prediction cache memory makes a return prediction. The return address is transferred from the stack to the program counter register on the processing cycle immediately following the return prediction, and normal program flow is resumed. The prediction cache memory also stores branch instruction predictions and branch target addresses.

140 citations
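To make the call/return prediction mechanism described above concrete, here is a minimal sketch of a fetch stage in which an instruction cache and a prediction cache are addressed by the same program counter. The instruction names, interpreter addresses, and entry layout are illustrative assumptions, not the patented design.

```python
# Minimal sketch (assumed layout, not the patented design) of a fetch stage in
# which an instruction cache and a prediction cache share one program counter.

instruction_cache = {
    0: "ADD",            # single-cycle instruction
    1: "MULSTEP",        # multicycle instruction handled by the interpreter
    2: "SUB",            # execution resumes here after the interpreter returns
    100: "uop_fetch_operands",   # interpreter microinstructions
    101: "uop_iterate",
    102: "uop_writeback",        # last microinstruction of the interpreter
}

# Prediction cache: at the address of a multicycle instruction it predicts a
# call (with the interpreter entry address); at the interpreter's last
# microinstruction it predicts a return.
prediction_cache = {
    1:   ("CALL", 100),
    102: ("RETURN", None),
}

def run(cycles=10):
    pc, stack = 0, []
    for _ in range(cycles):
        print(f"pc={pc:3d}  fetch={instruction_cache.get(pc, 'nop')}")
        prediction = prediction_cache.get(pc)
        if prediction and prediction[0] == "CALL":
            stack.append(pc + 1)        # push return address
            pc = prediction[1]          # load interpreter entry on the next cycle
        elif prediction and prediction[0] == "RETURN":
            pc = stack.pop()            # resume normal program flow
        else:
            pc += 1                     # sequential fetch

run()
```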

Patent
17 Jul 1990
TL;DR: In this article, a disk drive access control apparatus for connection between a host computer and a plurality of disk drives to provide an asynchronously operating storage system is presented. A plurality of disk drive controller channels are connected to respective ones of the disk drives, and each channel includes a cache/buffer memory and a microprocessor unit.
Abstract: This invention provides disk drive access control apparatus for connection between a host computer and a plurality of disk drives to provide an asynchronously operating storage system. It also provides increases in performance over earlier versions thereof. There are a plurality of disk drive controller channels connected to respective ones of the disk drives and controlling transfers of data to and from the disk drives; each of the disk drive controller channels includes a cache/buffer memory and a micro-processor unit. An interface and driver unit interfaces with the host computer and there is a central cache memory. Cache memory control logic controls transfers of data from the cache/buffer memory of the plurality of disk drive controller channels to the cache memory and from the cache memory to the cache/buffer memory of the plurality of disk drive controller channels and from the cache memory to the host computer through the interface and driver unit. A central processing unit manages the use of the cache memory by requesting data transfers only of data not presently in the cache memory and by sending high level commands to the disk drive controller channels. A first (data) bus interconnects the plurality of disk drive cache/buffer memories, the interface and driver unit, and the cache memory for the transfer of information therebetween, and a second (information and commands) bus interconnects the same elements with the central processing unit for the transfer of control and information therebetween.

140 citations
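The central idea of the abstract above is a central cache in front of per-drive controller channels, with the central processing unit requesting transfers only for data not already cached. The following sketch illustrates that flow; the class names, block numbering, and in-memory "disks" are assumptions made for illustration.

```python
# Minimal sketch, not the patented controller: a central cache in front of
# per-drive controller channels, each with its own cache/buffer memory.

class DriveChannel:
    """One disk drive controller channel with its own cache/buffer memory."""
    def __init__(self, drive_id):
        self.drive_id = drive_id
        self.buffer = {}               # per-channel cache/buffer memory

    def read_block(self, block):
        if block not in self.buffer:   # stand-in for a slow disk access
            self.buffer[block] = f"data(drive={self.drive_id}, block={block})"
        return self.buffer[block]

class StorageController:
    """Central processing unit plus central cache memory."""
    def __init__(self, num_drives):
        self.channels = [DriveChannel(d) for d in range(num_drives)]
        self.central_cache = {}

    def host_read(self, drive, block):
        key = (drive, block)
        if key not in self.central_cache:            # request only missing data
            data = self.channels[drive].read_block(block)
            self.central_cache[key] = data           # stage it in the central cache
        return self.central_cache[key]

controller = StorageController(num_drives=4)
print(controller.host_read(0, 7))   # miss: goes through the controller channel
print(controller.host_read(0, 7))   # hit: served from the central cache
```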

Proceedings ArticleDOI
08 Mar 2010
TL;DR: This paper models the timing, energy, endurance, and area of PCM caches and integrates these models into a PCM cache simulator to evaluate the techniques, showing that the design can achieve an 8% energy saving and a 3.8-year lifetime compared with a baseline PCM cache.
Abstract: Phase change memory (PCM) is one of the most promising of the emerging non-volatile random access memory technologies. Implementing a cache memory using PCM provides many benefits such as high density, non-volatility, low leakage power, and high immunity to soft errors. However, its disadvantages such as high write latency, high write energy, and limited write endurance prevent it from being used as a drop-in replacement for an SRAM cache. In this paper, we study a set of techniques to design an energy- and endurance-aware PCM cache. We also model the timing, energy, endurance, and area of PCM caches and integrate these models into a PCM cache simulator to evaluate the techniques. Experiments show that our PCM cache design can achieve an 8% energy saving and a 3.8-year lifetime compared with a baseline PCM cache having less than an hour of lifetime.

140 citations
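The abstract does not spell out which techniques the paper uses, so the sketch below shows one commonly used energy/endurance optimization for PCM that is in the same spirit: compare the old and new contents of a cache line and rewrite only the words that actually changed. The line size, word granularity, and wear counters are illustrative assumptions, not the paper's design.

```python
# Common PCM write-reduction idea (assumed, not necessarily the paper's technique):
# skip redundant word writes to save write energy and extend endurance.

WORDS_PER_LINE = 8

class PCMCacheLine:
    def __init__(self):
        self.words = [0] * WORDS_PER_LINE
        self.word_writes = [0] * WORDS_PER_LINE   # per-word wear counter

    def write_line(self, new_words):
        """Write a full line, but touch only the words that differ."""
        written = 0
        for i, (old, new) in enumerate(zip(self.words, new_words)):
            if old != new:                         # skip redundant writes
                self.words[i] = new
                self.word_writes[i] += 1
                written += 1
        return written                             # proxy for write energy and wear

line = PCMCacheLine()
print(line.write_line([1, 2, 3, 4, 5, 6, 7, 8]))   # 8 words written
print(line.write_line([1, 2, 3, 4, 5, 6, 7, 9]))   # only 1 word differs
```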

Patent
24 Apr 2006
TL;DR: In this paper, a method and system of managing data access in a shared memory cache of a processor are described; the method includes probing one or more memory addresses that map to a subset of the shared memory cache.
Abstract: A method and system of managing data access in a shared memory cache of a processor are disclosed. The method includes probing one or more memory addresses that map to a subset of the shared memory cache and sensing a plurality of events in the one or more memory addresses. Cache utilization information is then obtained by reading a hardware performance counter of the processor. The hardware performance counter is incremented based on the occurrence of the plurality of events. Based upon the cache utilization information, an occurrence of one of the plurality of events is reduced.

140 citations
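To illustrate the probing idea in the abstract above, the sketch below touches a group of addresses that all map to the same set of a set-associative cache and counts the conflict evictions that result. A real implementation would read a hardware performance counter (for example through the operating system's performance-monitoring interface); here a toy cache model stands in for it, and the cache geometry and addresses are illustrative assumptions.

```python
# Toy set-associative cache model used in place of a real hardware counter.
NUM_SETS, WAYS, LINE = 64, 4, 64

class ToyCache:
    def __init__(self):
        self.sets = [[] for _ in range(NUM_SETS)]   # LRU lists of tags per set
        self.evictions = 0                          # stand-in for a PMU event count

    def access(self, addr):
        index = (addr // LINE) % NUM_SETS
        tag = addr // (LINE * NUM_SETS)
        ways = self.sets[index]
        if tag in ways:
            ways.remove(tag)                        # refresh LRU position
        elif len(ways) >= WAYS:
            ways.pop(0)                             # evict least recently used
            self.evictions += 1
        ways.append(tag)

cache = ToyCache()
probe_set = 5
# Probe more addresses than there are ways in one set to force conflict evictions.
probe_addrs = [(probe_set * LINE) + k * (LINE * NUM_SETS) for k in range(WAYS + 2)]
for _ in range(100):
    for a in probe_addrs:
        cache.access(a)
print("observed evictions:", cache.evictions)
```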

Patent
27 Dec 2002
TL;DR: In this article, a log-structured write cache for a data storage system and a method for improving the performance of the storage system are described; the write cache includes cache lines in which write data is temporarily accumulated in a non-volatile state so that it can be written sequentially to the target storage locations at a later time.
Abstract: A log-structured write cache for a data storage system and method for improving the performance of the storage system are described. The system might be a RAID storage array, a disk drive, an optical disk, or a tape storage system. The write cache is preferably implemented in the main storage medium of the system, but can also be provided in other storage components of the system. The write cache includes cache lines where write data is temporarily accumulated in a non-volatile state so that it can be sequentially written to the target storage locations at a later time, thereby improving the overall performance of the system. Meta-data for each cache line is also maintained in the write cache. The meta-data includes the target sector address for each sector in the line and a sequence number that indicates the order in which data is posted to the cache lines. A buffer table entry is provided for each cache line. A hash table is used to search the buffer table for a sector address that is needed at each data read and write operation.

140 citations
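The following is a minimal sketch of a log-structured write cache along the lines described above: writes are appended to cache lines together with their target sector addresses, a hash table maps sector addresses to their newest logged copy, and the logged data is later flushed to its target locations. The line capacity, in-memory hash table, and flush policy are illustrative assumptions, not the patented design.

```python
# Minimal log-structured write cache sketch (assumed parameters, not the patent).
SECTORS_PER_LINE = 4

class LogWriteCache:
    def __init__(self, backing_store):
        self.backing = backing_store          # dict: sector address -> data
        self.lines = [[]]                     # each cache line holds (sector, data) pairs
        self.sequence = 0                     # order in which cache lines were opened
        self.lookup = {}                      # hash table: sector -> (line, slot)

    def write(self, sector, data):
        line = self.lines[-1]
        if len(line) == SECTORS_PER_LINE:     # current line full: open a new one
            self.lines.append([])
            self.sequence += 1
            line = self.lines[-1]
        self.lookup[sector] = (len(self.lines) - 1, len(line))
        line.append((sector, data))           # accumulate write data in the log

    def read(self, sector):
        if sector in self.lookup:             # newest copy may still be in the log
            line_no, slot = self.lookup[sector]
            return self.lines[line_no][slot][1]
        return self.backing.get(sector)

    def flush(self):
        """Later, sequentially apply logged writes to their target sectors."""
        for line in self.lines:
            for sector, data in line:
                self.backing[sector] = data
        self.lines, self.lookup = [[]], {}

disk = {}
cache = LogWriteCache(disk)
cache.write(12, "A"); cache.write(7, "B")
print(cache.read(12))    # served from the write cache
cache.flush()
print(disk)              # data now at its target sectors
```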


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Compiler: 26.3K papers, 578.5K citations (89% related)
Scalability: 50.9K papers, 931.6K citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Static routing: 25.7K papers, 576.7K citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30