Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published on this topic, receiving 262,139 citations.


Papers
Patent
Lishing Liu
10 Jun 1988
TL;DR: In this paper, a sequential prefetching algorithm based on simple access histories is proposed, in which each memory line in cache memory is associated with a bit in an S-vector, called the S-bit for the line.
Abstract: A computer memory management method for cache memory (10) uses a deconfirmation technique to provide a simple sequential prefetching algorithm. Access sequentiality is predicted based on simple histories. Each memory line in cache memory is associated with a bit in an S-vector (20), which is called the S-bit for the line. When the S-bit is on, sequentiality is predicted, meaning that the sequentially next line is regarded as a good candidate for prefetching if that line is not already in the cache memory. The key to the operation of the memory management method is the manipulation (turning on and off) of the S-bits.
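As a concrete illustration of the deconfirmation idea, here is a minimal C sketch: a tiny direct-mapped cache in which each resident line carries an S-bit, an access to line n prefetches line n+1 when the S-bit is on, and evicting a prefetched-but-never-referenced line deconfirms (turns off) the S-bit of the line that predicted it. The structure names and the exact confirmation rules are illustrative simplifications, not taken from the patent.

```c
/* Minimal sketch of S-bit-guided sequential prefetching with
 * deconfirmation. Assumed simplifications (not from the patent):
 * a tiny direct-mapped cache, S-bits set optimistically on fill,
 * and deconfirmation applied when a prefetched line is evicted
 * without ever having been referenced. */
#include <stdio.h>
#include <stdbool.h>

#define NSETS 16

typedef struct {
    bool valid;
    unsigned line;              /* memory line number (doubles as tag) */
    bool s_bit;                 /* sequentiality predicted after this line */
    bool prefetched_unused;     /* prefetched but not yet referenced */
} Entry;

static Entry cache[NSETS];

static Entry *lookup(unsigned line) {
    Entry *e = &cache[line % NSETS];
    return (e->valid && e->line == line) ? e : NULL;
}

static Entry *fill(unsigned line, bool by_prefetch) {
    Entry *e = &cache[line % NSETS];
    if (e->valid && e->prefetched_unused) {
        /* Deconfirmation: the prefetch was wasted, so turn off the
         * S-bit of the line that predicted it. */
        Entry *pred = lookup(e->line - 1);
        if (pred) pred->s_bit = false;
    }
    e->valid = true;
    e->line = line;
    e->s_bit = true;
    e->prefetched_unused = by_prefetch;
    return e;
}

static void access_line(unsigned line) {
    Entry *e = lookup(line);
    if (e) {
        e->prefetched_unused = false;   /* prefetch (if any) confirmed */
        printf("hit   line %u\n", line);
    } else {
        printf("miss  line %u\n", line);
        e = fill(line, false);
    }
    /* Sequentiality predicted: prefetch the next line if absent. */
    if (e->s_bit && !lookup(line + 1))
        fill(line + 1, true);
}

int main(void) {
    unsigned trace[] = {100, 101, 102, 7, 200};
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        access_line(trace[i]);
    return 0;
}
```

On the sequential run 100, 101, 102 the prefetches are confirmed as hits; the jumps to 7 and 200 evict unused prefetches and deconfirm the predicting lines' S-bits.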

86 citations

Patent
19 Feb 1999
TL;DR: In this article, the authors present a method and apparatus for accessing the cache memory of a computer graphics system, including a frame buffer memory having a graphics memory that stores pixel data for ultimate supply to a video display device.
Abstract: A method and apparatus for accessing a cache memory of a computer graphics system, the apparatus including a frame buffer memory having a graphics memory for storing pixel data for ultimate supply to a video display device, a read cache memory for storing data received from the graphics memory, and a write cache memory for storing data received externally of the frame buffer and data that is to be written into the graphics memory. Also included is a frame buffer controller for controlling access to the graphics memory and read and write cache memories. The frame buffer controller includes a cache first in, first out (FIFO) memory pipeline for temporarily storing pixel data prior to supply thereof to the cache memories.
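A rough C sketch of the cache FIFO idea described above: pixel operations are queued in a small ring buffer before being drained into the write cache, decoupling the producer from cache access. All names, the FIFO depth, and the trivial "cache" model (a printout) are illustrative, not from the patent.

```c
/* Minimal sketch of a cache FIFO pipeline: pixel writes are queued
 * in a ring buffer, then drained into the write cache. */
#include <stdio.h>

#define FIFO_DEPTH 8            /* illustrative depth */

typedef struct { unsigned addr; unsigned rgba; } PixelOp;

typedef struct {
    PixelOp slots[FIFO_DEPTH];
    unsigned head, tail, count;
} CacheFifo;

static int fifo_push(CacheFifo *f, PixelOp op) {
    if (f->count == FIFO_DEPTH) return 0;   /* full: caller must stall */
    f->slots[f->tail] = op;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

static int fifo_pop(CacheFifo *f, PixelOp *out) {
    if (f->count == 0) return 0;            /* empty */
    *out = f->slots[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}

int main(void) {
    CacheFifo fifo = {0};
    /* Producer side: queue pixel writes ahead of the cache. */
    for (unsigned i = 0; i < 5; i++)
        fifo_push(&fifo, (PixelOp){ .addr = 0x1000 + i, .rgba = 0xFF0000FF });
    /* Consumer side: drain into the write cache (modeled as a printout). */
    PixelOp op;
    while (fifo_pop(&fifo, &op))
        printf("write cache <- addr 0x%X rgba 0x%08X\n", op.addr, op.rgba);
    return 0;
}
```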

86 citations

Patent
27 Mar 1986
TL;DR: In this paper, the authors propose placing a transformation unit between the main memory and the cache memory, so that at least a portion of an information unit retrieved from main memory may be transformed during the fetch and prior to storage in the cache.
Abstract: A computer having a cache memory and a main memory is provided with a transformation unit between the main memory and the cache memory so that at least a portion of an information unit retrieved from the main memory may be transformed during retrieval of the information (fetch) from a main memory and prior to storage in the cache memory (cache). In a specific embodiment, an instruction may be predecoded prior to storage in the cache memory. In another embodiment involving a branch instruction, the address of the target of the branch is calculated prior to storing in the instruction cache. The invention has advantages where a particular instruction is repetitively executed since a needed decode operation which has been partially performed previously need not be repeated with each execution of an instruction. Consequently, the latency time of each machine cycle may be reduced, and the overall efficiency of the computing system can be improved. If the architecture defines delayed branch instructions, such branch instructions may be executed in effectively zero machine cycles. This requires a wider bus and an additional register in the processor to allow the fetching of two instructions from the cache memory in the same cycle.
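A minimal C sketch of the transform-on-fill idea: instructions are partially predecoded, and branch targets resolved, on the way from main memory into the instruction cache, so the work is done once per cache fill rather than once per execution. The toy encoding and all field names are invented for illustration; they are not the patent's scheme.

```c
/* Sketch of a transformation unit that predecodes instructions on
 * the path into the instruction cache. Toy ISA (assumed): top byte
 * is the opcode class, low 24 bits are a signed branch offset. */
#include <stdio.h>
#include <stdint.h>

#define OP_BRANCH 0x7u

typedef struct {
    uint32_t raw;        /* original instruction word */
    uint8_t  opclass;    /* predecoded opcode class */
    uint32_t target;     /* resolved branch target, if a branch */
} PredecodedInsn;

/* The "transformation unit": runs once per cache fill, not per execution. */
static PredecodedInsn predecode(uint32_t raw, uint32_t pc) {
    PredecodedInsn d = { .raw = raw, .opclass = (uint8_t)(raw >> 24), .target = 0 };
    if (d.opclass == OP_BRANCH) {
        int32_t off = (int32_t)(raw << 8) >> 8;   /* sign-extend 24 bits */
        d.target = pc + (uint32_t)off;            /* resolved at fill time */
    }
    return d;
}

int main(void) {
    uint32_t pc  = 0x400;
    uint32_t raw = (OP_BRANCH << 24) | 0x00000C;  /* branch to pc+12 */
    PredecodedInsn line = predecode(raw, pc);     /* done once, on fill */
    printf("opclass=%u target=0x%X\n", (unsigned)line.opclass,
           (unsigned)line.target);
    return 0;
}
```

The decoded form stored in the cache line is wider than the raw instruction, which mirrors the abstract's note that the scheme needs a wider bus and an extra register to be exploited fully.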

86 citations

Proceedings ArticleDOI
01 Apr 1992
TL;DR: A new technique for reducing direct-mapped cache misses caused by conflicts for a particular cache line is presented; simulations show an average reduction in miss rate of 33% for a 32KB instruction cache with 16B lines.
Abstract: Most recent cache designs use direct-mapped caches to provide the fast access time required by modern high-speed CPUs. Unfortunately, direct-mapped caches have higher miss rates than set-associative caches, largely because direct-mapped caches are more sensitive to conflicts between items needed frequently in the same phase of program execution.

This paper presents a new technique for reducing direct-mapped cache misses caused by conflicts for a particular cache line. A small finite state machine recognizes the common instruction reference patterns where storing an instruction in the cache actually harms performance. Such instructions are dynamically excluded, that is, they are passed directly through the cache without being stored. This reduces misses to the instructions that would have been replaced.

The effectiveness of dynamic exclusion is dependent on the severity of cache conflicts and thus on the particular program and cache size of interest. However, across the SPEC benchmarks, simulation results show an average reduction in miss rate of 33% for a 32KB instruction cache with 16B lines. In addition, applying dynamic exclusion to one level of a cache hierarchy can improve the performance of the next level, since instructions do not need to be stored on both levels. Finally, dynamic exclusion also improves combined instruction and data cache miss rates.
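A simplified C sketch of dynamic exclusion for a direct-mapped instruction cache: each set keeps a sticky bit that is set when the resident line hits; on a conflict miss against a sticky line, the incoming instruction is passed through rather than stored. This is a compressed stand-in for the paper's per-line state machine, with the trace chosen to show the effect.

```c
/* Simplified dynamic exclusion: on a conflict miss, a "sticky"
 * resident line is protected and the incoming line bypasses the
 * cache; stickiness is earned by hitting and lost by one bypass. */
#include <stdio.h>
#include <stdbool.h>

#define NSETS 4   /* tiny on purpose, to force conflicts */

typedef struct { bool valid; unsigned tag; bool sticky; } Line;
static Line cache[NSETS];
static unsigned hits, misses;

static void access_insn(unsigned addr) {
    Line *l = &cache[addr % NSETS];
    unsigned tag = addr / NSETS;
    if (l->valid && l->tag == tag) {
        hits++;
        l->sticky = true;        /* reuse observed: keep protecting */
    } else {
        misses++;
        if (l->valid && l->sticky) {
            l->sticky = false;   /* exclude: pass through, don't store */
        } else {
            l->valid = true;     /* normal fill, protected initially */
            l->tag = tag;
            l->sticky = true;
        }
    }
}

int main(void) {
    /* Addresses 0 and 4 conflict in set 0; 0 recurs in a loop. */
    unsigned trace[] = {0, 4, 0, 4, 0, 4, 0};
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        access_insn(trace[i]);
    printf("hits=%u misses=%u\n", hits, misses);
    return 0;
}
```

On this conflicting trace, exclusion keeps address 0 resident and yields 3 hits out of 7 references, whereas a plain direct-mapped cache would ping-pong and miss all 7 times.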

86 citations

Patent
18 Dec 1995
TL;DR: In this article, an x86 microprocessor system with a process identification system is described; the system stores a number assigned to each process run by the microprocessor and associates this number with instructions, data, and information fetched and stored in a cache or translation lookaside buffer (TLB) during the execution of the process.
Abstract: An x86 microprocessor system with a process identification system which stores a number assigned to each process run by the microprocessor system and associates this number with instructions, data, and information fetched and stored in a cache or translation lookaside buffer (TLB) during the execution of the process. Upon a process or context switch, the instructions, data, and information are not automatically flushed from the cache and TLB. The instructions, data, and information are replaced only when instructions, data, and information for a new process require the same cache memory locations or the same TLB memory location. The cache and TLB may include a valid bit block and a tag block that includes memory locations for storing the pertinent process identification number for each entry. The cache, which may be a set associative cache, and TLB include logic for comparing a process identification number stored in a process identification register with the process identification number stored in the tag block.
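A minimal C sketch of the process-ID tagging idea: each TLB entry carries the identification number of the process that installed it, and a lookup hits only when both the virtual page and the current process ID match, so entries survive context switches without a flush. Field and function names are illustrative, not the patent's.

```c
/* Sketch of an ASID/process-ID-tagged TLB: no flush on context
 * switch; stale entries are simply not matched and are replaced
 * lazily when their slot is needed. */
#include <stdio.h>
#include <stdbool.h>

#define TLB_SLOTS 8

typedef struct {
    bool valid;
    unsigned pid;     /* process identification number (tag extension) */
    unsigned vpage;   /* virtual page number */
    unsigned pframe;  /* physical frame number */
} TlbEntry;

static TlbEntry tlb[TLB_SLOTS];
static unsigned current_pid;      /* process identification register */

static bool tlb_lookup(unsigned vpage, unsigned *pframe) {
    TlbEntry *e = &tlb[vpage % TLB_SLOTS];
    /* Hit requires both the page and the process ID to match. */
    if (e->valid && e->vpage == vpage && e->pid == current_pid) {
        *pframe = e->pframe;
        return true;
    }
    return false;
}

static void tlb_fill(unsigned vpage, unsigned pframe) {
    TlbEntry *e = &tlb[vpage % TLB_SLOTS];
    *e = (TlbEntry){ true, current_pid, vpage, pframe };
}

int main(void) {
    current_pid = 1;
    tlb_fill(0x10, 0x99);
    unsigned pf;
    printf("pid 1 lookup: %s\n", tlb_lookup(0x10, &pf) ? "hit" : "miss");
    current_pid = 2;              /* context switch: no flush performed */
    printf("pid 2 lookup: %s\n", tlb_lookup(0x10, &pf) ? "hit" : "miss");
    current_pid = 1;              /* switch back: entry still usable */
    printf("pid 1 lookup: %s\n", tlb_lookup(0x10, &pf) ? "hit" : "miss");
    return 0;
}
```

Process 2's lookup misses without disturbing process 1's entry, which hits again after the switch back; this is what spares the cache and TLB from pollution by wholesale flushes.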

86 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Compiler: 26.3K papers, 578.5K citations (89% related)
Scalability: 50.9K papers, 931.6K citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Static routing: 25.7K papers, 576.7K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30