scispace - formally typeset
Topic

Cache pollution

About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have been published within this topic, receiving 262,139 citations.


Papers
Patent
03 Mar 1982
TL;DR: In a hierarchical memory system, replacement of segments in a cache memory is governed by a least recently used algorithm, while trickling of segments from the cache memory to the bulk memory is governed by the age since first write.

Abstract: In a hierarchical memory system, replacement of segments in a cache memory is governed by a least recently used algorithm, while trickling of segments from the cache memory to the bulk memory is governed by the age since first write. The host processor passes an AGEOLD parameter to the memory subsystem, and this parameter regulates the trickling of segments. Unless the memory system is idle (no I/O activity), no trickling takes place until the age of the oldest written-to segment is at least as great as AGEOLD. A command is generated for each segment to be trickled, and the priority of execution assigned to such commands is variable, determined by the relationship of AGEOLD to the oldest age since first write of any of the segments. If the subsystem receives no command from the host processor for a predetermined interval, AGEOLD is ignored and any written-to segment becomes a candidate for trickling.
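The AGEOLD rule described above can be sketched as a small eligibility function. This is a minimal illustration, not the patent's implementation; the names (`Segment`, `trickle_candidates`, `host_silent`) are assumptions introduced here:

```python
# Hedged sketch of the AGEOLD trickling rule: written-to segments become
# trickle candidates once their age since first write reaches AGEOLD,
# unless the subsystem is idle or the host has gone silent, in which
# case AGEOLD is ignored. All names are illustrative.

class Segment:
    def __init__(self, name, first_write_time):
        self.name = name
        self.first_write_time = first_write_time  # time of first write after staging

    def age(self, now):
        return now - self.first_write_time

def trickle_candidates(segments, ageold, now, idle=False, host_silent=False):
    """Return written-to segments eligible for trickling, oldest first."""
    if idle or host_silent:
        eligible = segments          # AGEOLD ignored in these states
    else:
        eligible = [s for s in segments if s.age(now) >= ageold]
    # Oldest segments get the highest trickle priority.
    return sorted(eligible, key=lambda s: s.age(now), reverse=True)
```

Sorting by age mirrors the variable command priority the abstract describes: the further a segment's age exceeds AGEOLD, the sooner it is trickled.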

68 citations

Patent
22 Dec 1997
TL;DR: In this paper, an improved cache memory architecture with way prediction is proposed, which places the address tag array of a cache memory on the central processing unit core while the cache data array remains off the microprocessor chip.
Abstract: The present invention provides an improved cache memory architecture with way prediction. The improved architecture entails placing the address tag array of a cache memory on the central processing unit core (i.e. the microprocessor chip), while the cache data array remains off the microprocessor chip. In addition, a way predictor is provided in conjunction with the improved cache memory architecture to increase the overall performance of the cache memory system.
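The way-prediction idea can be sketched with a per-set predictor that guesses which way to probe first. This is a simplified illustration under assumed details (a last-hit-way policy and the class names `WayPredictor`/`PredictedCache` are inventions of this sketch, not the patent's design):

```python
# Hedged sketch of way prediction in a set-associative cache: the
# predictor guesses one way; if its tag matches, the access is fast,
# otherwise all ways are searched and the predictor is retrained.

class WayPredictor:
    def __init__(self, num_sets):
        # Predict the way that last hit in each set -- a simple,
        # common policy assumed here for illustration.
        self.table = [0] * num_sets

    def predict(self, set_index):
        return self.table[set_index]

    def update(self, set_index, actual_way):
        self.table[set_index] = actual_way

class PredictedCache:
    def __init__(self, num_sets=16, num_ways=4):
        self.num_sets = num_sets
        self.num_ways = num_ways
        self.tags = [[None] * num_ways for _ in range(num_sets)]
        self.predictor = WayPredictor(num_sets)

    def lookup(self, addr):
        """Return ('fast', way), ('slow', way), or ('miss', None)."""
        s, tag = addr % self.num_sets, addr // self.num_sets
        guess = self.predictor.predict(s)
        if self.tags[s][guess] == tag:
            return ("fast", guess)            # predicted way was correct
        for w in range(self.num_ways):        # fall back: search all ways
            if self.tags[s][w] == tag:
                self.predictor.update(s, w)   # retrain on the actual way
                return ("slow", w)
        return ("miss", None)
```

With the tag array on chip, a correct prediction lets the off-chip data array be accessed for a single way, which is the latency and bandwidth advantage the abstract alludes to.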

68 citations

Patent
Michael Kagan1, David Perlmutter1
10 May 1995
TL;DR: In this patent, a multiprocessor computer system which maintains cache coherency includes first and second microprocessors, each having an associated cache memory storing lines of data. Each line of data has associated protocol bits that indicate a protocol state consistent with write-through, write-back, or write-once cache coherency policies, selected via a protocol selection terminal for different system configurations.
Abstract: A multiprocessor computer system which maintains cache coherency includes first and second microprocessors each having an associated cache memory storing lines of data. Each line of data has associated protocol bits that indicate a protocol state consistent with write-through, write-back, or write-once cache coherency policies that are selected via a protocol selection terminal for different system configurations. In one configuration, the output and external address terminals of the first microprocessor are coupled to the external and output address terminals, respectively, of the second microprocessor. This configuration enables each microprocessor to snoop memory cycles to main memory initiated by the other microprocessor so that it can be readily determined if a particular cache has the latest version of data.
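The policy-selection idea can be sketched as per-line state plus a snoop hook. This is a deliberately simplified illustration (two policies, a single "dirty" bit, and the names `CacheLine`/`snoop` are assumptions of this sketch, not the patent's protocol):

```python
# Hedged sketch: a protocol-selection input chooses between
# write-through (every write updates main memory) and write-back
# (writes mark the line dirty; a snoop from the other processor
# forces the dirty line back to memory).

WRITE_THROUGH, WRITE_BACK = "write-through", "write-back"

class CacheLine:
    def __init__(self, policy):
        self.policy = policy       # set via the protocol selection terminal
        self.state = "clean"

    def write(self, bus_log):
        """Apply a CPU write; bus_log records traffic to main memory."""
        if self.policy == WRITE_THROUGH:
            bus_log.append("update main memory")  # every write goes out
            self.state = "clean"
        else:
            self.state = "dirty"   # write-back: defer the memory update

    def snoop(self, bus_log):
        """The other processor accesses this address: flush if dirty."""
        if self.state == "dirty":
            bus_log.append("write back dirty line")
            self.state = "clean"
```

The snoop path is what the cross-coupled address terminals in the abstract enable: each microprocessor observes the other's memory cycles and can determine whether its cache holds the latest version of the data.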

68 citations

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper seeks to construct an analytical framework based on optimal control theory and dynamic programming, to help form an in-depth understanding of optimal strategies to design cache replacement algorithms in peer-assisted VoD systems.
Abstract: Peer-assisted Video-on-Demand (VoD) systems have not only received substantial recent research attention, but have also been implemented and deployed with success in large-scale real-world streaming systems, such as PPLive. Peer-assisted Video-on-Demand systems are designed to take full advantage of peer upload bandwidth contributions with a cache on each peer. Since the size of such a cache on each peer is limited, it is imperative that an appropriate cache replacement algorithm is designed. There exists a tremendous level of flexibility in the design space of such cache replacement algorithms, including the simplest alternatives such as Least Recently Used (LRU). Which algorithm is best to minimize server bandwidth costs, so that when peers need a media segment, it is most likely available from the caches of other peers? Such a question, however, is arguably non-trivial to answer, as both the demand and supply of media segments are stochastic in nature. In this paper, we seek to construct an analytical framework based on optimal control theory and dynamic programming, to help us form an in-depth understanding of optimal strategies to design cache replacement algorithms. With such analytical insights, we have shown through extensive simulations that the performance margin enjoyed by optimal strategies over the simplest algorithms is not substantial when it comes to reducing server bandwidth costs. In most cases, the simplest choices are good enough as cache replacement algorithms in peer-assisted VoD systems.
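The LRU baseline the paper compares against can be sketched in a few lines. This is a generic LRU cache keyed by segment ID, not the paper's simulator; the class and method names are illustrative:

```python
from collections import OrderedDict

# Minimal LRU segment cache of the kind the paper takes as its
# simplest baseline: each peer caches media segments and evicts the
# least recently used one when full.

class LRUSegmentCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.segments = OrderedDict()  # segment id -> data, oldest first

    def get(self, seg_id):
        if seg_id not in self.segments:
            return None                      # miss: fetch from server/peer
        self.segments.move_to_end(seg_id)    # mark most recently used
        return self.segments[seg_id]

    def put(self, seg_id, data):
        if seg_id in self.segments:
            self.segments.move_to_end(seg_id)
        elif len(self.segments) >= self.capacity:
            self.segments.popitem(last=False)  # evict least recently used
        self.segments[seg_id] = data
```

The paper's conclusion is that, measured by server bandwidth savings, strategies derived from optimal control do not beat this kind of simple policy by much.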

68 citations

Patent
03 Mar 1982
TL;DR: In this patent, a timestamp is generated with each write command; the timestamp accompanying the first write to a segment after that segment is moved from bulk memory to the cache is entered into a linked list at the most recently used position.
Abstract: In a data processing system including a processor, a bulk memory, a cache, and a storage control unit for controlling the transfer of data between the bulk memory and the cache, a timestamp is generated with each write command. A linked list is maintained, having an entry therein corresponding to each segment in the cache which has been written to since it was moved from the bulk memory to the cache. The timestamp accompanying a write command which is the first command to write to a segment after that segment is moved from bulk memory to the cache is entered into the list at the most recently used position. An entry in the linked list is removed from the list when the segment corresponding thereto is transferred from the cache to the bulk memory. The linked list is utilized to update a value TOLDEST, which represents the age of the oldest written-to segment in the cache that has not been returned to bulk memory since it was first written to. The processor periodically issues a command which transfers TOLDEST from the subsystem to the processor. In case of a cache failure, such as might result from a power loss, the processor may sense the latest value of TOLDEST together with other file recovery synchronization information and determine what part of the data which it sent to the cache was lost because it did not get recorded in the bulk memory.
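The TOLDEST bookkeeping described above can be sketched with an ordered map of first-write timestamps. This is an illustrative simplification (a Python `OrderedDict` standing in for the patent's linked list; the names `WrittenSegmentList`, `record_write`, `destage` are assumptions of this sketch):

```python
from collections import OrderedDict

# Hedged sketch of TOLDEST tracking: remember only the FIRST write
# timestamp per cached segment, drop the entry when the segment is
# destaged to bulk memory, and report the age of the oldest survivor.

class WrittenSegmentList:
    def __init__(self):
        self.first_writes = OrderedDict()  # segment -> first-write timestamp

    def record_write(self, segment, timestamp):
        # Only the first write after staging from bulk memory counts.
        if segment not in self.first_writes:
            self.first_writes[segment] = timestamp

    def destage(self, segment):
        # Segment copied back to bulk memory: remove its entry.
        self.first_writes.pop(segment, None)

    def toldest(self, now):
        """Age of the oldest written-to segment not yet destaged."""
        if not self.first_writes:
            return 0.0
        return now - min(self.first_writes.values())
```

After a cache failure, a host that periodically sampled this value knows that any write older than the last reported TOLDEST had already reached bulk memory, which is the recovery property the abstract describes.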

68 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
93% related
Compiler
26.3K papers, 578.5K citations
89% related
Scalability
50.9K papers, 931.6K citations
87% related
Server
79.5K papers, 1.4M citations
86% related
Static routing
25.7K papers, 576.7K citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30