Topic
Cache pollution
About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have appeared on this topic, receiving 262,139 citations.
Papers
10 Oct 2000
TL;DR: In this paper, the authors propose an integrated digital signal processor architecture that includes a cache subsystem, a first bus, and a second bus; the cache subsystem provides caching and data routing for the processors.
Abstract: To achieve high performance at low cost, an integrated digital signal processor uses an architecture which includes both a general purpose processor and a vector processor. The integrated digital signal processor also includes a cache subsystem, a first bus and a second bus. The cache subsystem provides caching and data routing for the processors and buses. Multiple simultaneous communication paths can be used in the cache subsystem for the processors and buses. Furthermore, simultaneous reads and writes are supported to a cache memory in the cache subsystem.
65 citations
27 Oct 2005
TL;DR: In this paper, a centralized pushing mechanism is used to push data into a processor cache in a computing system with at least one processor, where each processor may comprise one or more processing units, each of which may be associated with a cache.
Abstract: An arrangement is provided for using a centralized pushing mechanism to actively push data into a processor cache in a computing system with at least one processor. Each processor may comprise one or more processing units, each of which may be associated with a cache. The centralized pushing mechanism may predict data requests of each processing unit in the computing system based on each processing unit’s memory access pattern. Data predicted to be requested by a processing unit may be moved from a memory to the centralized pushing mechanism which then sends the data to the requesting processing unit. A cache coherency protocol in the computing system may help maintain the coherency among all caches in the system when the data is placed into a cache of the requesting processing unit.
65 citations
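The push-based mechanism above can be sketched in a few lines. This is a minimal illustration, not the patented design: the names (`Cache`, `CentralPusher`) and the simple stride-based predictor are assumptions standing in for whatever prediction the real mechanism uses.

```python
class Cache:
    """A toy per-processing-unit cache: just a set of resident addresses."""
    def __init__(self):
        self.lines = set()

    def insert(self, addr):
        self.lines.add(addr)


class CentralPusher:
    """Watches each unit's accesses, predicts the next address from a
    simple stride pattern, and pushes the predicted line from memory
    into that unit's cache before the unit asks for it."""
    def __init__(self, memory, caches):
        self.memory = memory      # addr -> data (stands in for main memory)
        self.caches = caches      # unit id -> Cache
        self.history = {}         # unit id -> last two addresses observed

    def observe(self, unit, addr):
        prev = self.history.get(unit, [])
        self.history[unit] = (prev + [addr])[-2:]
        if len(self.history[unit]) == 2:
            a, b = self.history[unit]
            stride = b - a
            predicted = b + stride
            if stride != 0 and predicted in self.memory:
                # Push the predicted line into the requesting unit's cache.
                self.caches[unit].insert(predicted)
```

After a unit touches addresses 10 and 14, the pusher infers a stride of 4 and pushes address 18 into that unit's cache.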
05 Oct 2001
TL;DR: In this paper, when the cache memory system of the home node stores an exclusive copy of the particular memory line, the home node responds from its cache and thus does not retrieve the memory line and its directory entry from the main memory.
Abstract: Each node of a multiprocessor computer system includes a main memory, a cache memory system and logic. The main memory stores memory lines of data. A directory entry for each memory line indicates whether a copy of the corresponding memory line is stored in the cache memory system in another node. The cache memory system stores copies of memory lines and cache state information indicating whether the cached copy of each memory line is an exclusive copy. The logic of each respective node is configured to respond to a transaction request for a particular memory line and its corresponding directory entry, where the respective node is the home node of the particular memory. When the cache memory system of the home node stores an exclusive copy of the particular memory line, the logic responds to the request by sending the copy of the particular memory line retrieved from the cache memory system and a predefined null directory entry value, and thus does not retrieve the memory line and its directory entry from the main memory of the home node.
65 citations
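The home-node fast path described above can be sketched as follows; the class and field names are illustrative assumptions, and the directory and cache are modeled as plain dictionaries rather than hardware structures.

```python
NULL_DIRECTORY_ENTRY = 0  # predefined null value sent on the fast path


class HomeNode:
    """When this node's own cache holds an exclusive copy of a requested
    line, it replies from the cache with a null directory entry and skips
    the main-memory lookup; otherwise it reads memory and the directory."""
    def __init__(self, main_memory, directory, cache):
        self.main_memory = main_memory  # line addr -> data
        self.directory = directory      # line addr -> directory entry
        self.cache = cache              # line addr -> (data, state)
        self.memory_reads = 0           # counts main-memory accesses

    def handle_request(self, addr):
        entry = self.cache.get(addr)
        if entry is not None and entry[1] == "exclusive":
            # Fast path: exclusive cached copy, no memory access needed.
            return entry[0], NULL_DIRECTORY_ENTRY
        # Slow path: fetch the line and its directory entry from memory.
        self.memory_reads += 1
        return self.main_memory[addr], self.directory[addr]
```

A request for an exclusively cached line leaves the memory-read counter at zero, while any other request costs one memory access.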
22 Sep 1997
TL;DR: This paper uses log files from four Web servers and proposes and evaluates static caching, a novel cache policy for Web servers that incurs no CPU overhead and does not suffer from memory fragmentation.
Abstract: This paper studies caching in primary Web servers. We use log files from four Web servers to analyze the performance of various proposed cache policies for Web servers: LRU-threshold, LFU, LRU-SIZE, LRU-MIN, LRU-k-threshold and the Pitkow/Recker (1994) policy. Web document access patterns change very slowly. Based on this fact, we propose and evaluate static caching, a novel cache policy for Web servers. In static caching, the set of documents kept in the cache is determined periodically by analyzing the request log file for the previous period. The cache is filled with documents to maximize cache performance provided document access patterns do not change. The set of cached documents remains constant during the period. Surprisingly, this simple policy results in high cache performance, especially for small cache sizes. Unlike other policies, static caching incurs no CPU overhead and does not suffer from memory fragmentation.
65 citations
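The static-caching fill step can be sketched as a greedy selection over the previous period's log. The paper leaves the exact fill rule open; ranking documents by requests per byte is one plausible heuristic for the size-constrained hit-maximization problem, assumed here for illustration.

```python
from collections import Counter


def static_cache_set(log, sizes, capacity):
    """Choose the document set to cache for the next period: count
    requests in the previous period's log, then greedily take documents
    with the highest requests-per-byte ratio while they fit."""
    counts = Counter(log)
    cached, used = set(), 0
    for doc, hits in sorted(counts.items(),
                            key=lambda kv: kv[1] / sizes[kv[0]],
                            reverse=True):
        if used + sizes[doc] <= capacity:
            cached.add(doc)
            used += sizes[doc]
    return cached
```

The chosen set then stays fixed for the whole period, which is why serving requests incurs no eviction work at run time.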
26 Nov 1997
TL;DR: In this article, a method for prefetching data in a microprocessor includes receiving a memory request, which is compared to an access history of previously encountered memory requests, and at least one prefetch candidate is generated based on the memory request and the access history.
Abstract: A microprocessor includes an execution engine, a prediction table cache, and a prefetch controller. The execution engine is adapted to issue a memory request. The memory request includes an identifier corresponding to a row location in an external main memory. The prediction table cache is adapted to store a plurality of entries defining an access history of previously encountered memory requests. The prediction table cache is indexed by the identifier. The prefetch controller is adapted to receive the memory request and generate at least one prefetch candidate based on the memory request and the access history. A method for prefetching data in a microprocessor includes receiving a memory request. The memory request includes an identifier corresponding to a row location in an external main memory. The memory request is compared to an access history of previously encountered memory requests. The access history is indexed by the identifier. At least one prefetch candidate is generated based on the memory request and the access history.
65 citations
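The prediction-table scheme above can be sketched as a small Markov-style table. The patent does not spell out the table's contents, so this sketch makes one assumption: each row identifier maps to the address that followed an access to that row last time, and that address becomes the prefetch candidate.

```python
class PredictionTablePrefetcher:
    """Prediction table indexed by the row identifier of each memory
    request; on a request, the address recorded for that row is emitted
    as a prefetch candidate, and the table is updated so the current
    address becomes the prediction for the previously accessed row."""
    def __init__(self):
        self.table = {}       # row id -> address that followed it before
        self.last_row = None  # row id of the previous request

    def access(self, addr, row):
        candidates = []
        predicted = self.table.get(row)
        if predicted is not None:
            candidates.append(predicted)  # history-based prefetch candidate
        if self.last_row is not None:
            # Record that `addr` followed the previous row's access.
            self.table[self.last_row] = addr
        self.last_row = row
        return candidates
```

Replaying a repeated row sequence shows the effect: the first pass only trains the table, and the second pass yields the address seen after that row before.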