Topic

Cache pollution

About: Cache pollution is a research topic. Over the lifetime of the topic, 11,353 publications have been published, receiving 262,139 citations.


Papers
Proceedings ArticleDOI
07 Nov 2002
TL;DR: The problem of efficiently streaming a set of heterogeneous videos from a remote server through a proxy to multiple asynchronous clients so that they can experience playback with low startup delays is addressed.
Abstract: In this paper, we address the problem of efficiently streaming a set of heterogeneous videos from a remote server through a proxy to multiple asynchronous clients so that they can experience playback with low startup delays. We develop a technique to analytically determine the optimal proxy prefix cache allocation to the videos that minimizes the aggregate network bandwidth cost. We integrate proxy caching with traditional server-based reactive transmission schemes such as batching, patching and stream merging to develop a set of proxy-assisted delivery schemes. We quantitatively explore the impact of the choice of transmission scheme, cache allocation policy, proxy cache size, and availability of unicast versus multicast capability on the resultant transmission cost. Our evaluations show that even a relatively small prefix cache (10%-20% of the video repository) is sufficient to realize substantial savings in transmission cost. We find that carefully designed proxy-assisted reactive transmission schemes can produce significant cost savings even in predominantly unicast environments such as the Internet.
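The core idea of prefix cache allocation lends itself to a short illustration. Below is a minimal sketch, in Python, of a greedy variant: spend a fixed cache budget one prefix unit at a time on whichever video currently offers the greatest marginal bandwidth saving. The cost model (saving proportional to request rate, with an assumed geometric decay for deeper prefix units) is hypothetical; the paper itself derives the optimal allocation analytically.

```python
# Hypothetical sketch of greedy proxy prefix-cache allocation.
# Assumes a simple cost model: caching one more prefix unit of video v
# saves bandwidth proportional to its request rate, with diminishing
# returns for deeper prefix units. The paper derives the optimal
# allocation analytically; this greedy loop only illustrates spending
# a fixed budget where the marginal saving is highest.
import heapq

def allocate_prefix_cache(videos, cache_size):
    """videos: list of (name, length, request_rate); returns {name: cached_prefix}."""
    alloc = {name: 0 for name, _, _ in videos}
    # Max-heap (via negation) keyed on marginal saving of the next unit.
    heap = [(-rate, name, length) for name, length, rate in videos]
    heapq.heapify(heap)
    budget = cache_size
    while budget > 0 and heap:
        neg_saving, name, length = heapq.heappop(heap)
        if alloc[name] >= length:
            continue                      # whole video already cached
        alloc[name] += 1                  # cache one more prefix unit
        budget -= 1
        # Assumed diminishing returns: later prefix units save less,
        # since fewer transmissions still need those frames.
        heapq.heappush(heap, (neg_saving * 0.9, name, length))
    return alloc

print(allocate_prefix_cache([("a", 100, 5.0), ("b", 100, 1.0)], 30))
```

Under this toy model most of the budget goes to the frequently requested video until its decayed marginal saving falls below the other's, which mirrors the paper's observation that a small prefix cache captures most of the benefit.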

156 citations

Patent
David Brian Kirk
18 Mar 1996
Abstract: The traditional computer system is modified by providing, in addition to a processor unit, a main memory and a cache memory buffer, remapping logic for remapping the cache memory buffer, and a plurality of registers for containing remapping information. In this environment the cache memory buffer is divided into segments, and the segments are one or more cache lines allocated to a task to form a partition, so as to make available (if a size is set above zero) a shared partition and a group of private partitions. The registers include count registers which contain count information for the number of cache segments in a specific partition, a flag register, and two registers which act as cache identification number registers. The flag register has bits acting as flags, including a non-real-time flag which allows operation without the partition system, a private-partition-permitted flag, and a private-partition-selected flag. With this system a traditional computer system can be changed to operate without interrupts and other prior impediments to the performance of a real-time task. By providing cache partition areas, causing an active task to always have a pointer to a private partition, and using a size register to specify how many segments can be used by the task, real-time systems can take advantage of a cache. Thus each task can make use of a shared partition and know how many segments it can use. The system cache provides a high-speed access path to memory data, so that during execution of a task the logic means and registers provide any necessary cache partitioning to assure a preempted task that its cache contents will not be destroyed by a preempting task. This permits use of a software-controlled partitioning system which allows segments of a cache to be statically allocated on a priority/benefit basis without hardware modification to the system. The cache allocation provided by the logic takes into consideration the scheduling requirements of the system's tasks in deciding the size of each cache partition. Accordingly, the cache can make use of a dynamic-programming implementation of an allocation algorithm which can determine an optimal cache allocation in polynomial time.
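The closing claim about a polynomial-time allocation invites a concrete illustration. The sketch below shows the standard dynamic-programming pattern for splitting a fixed number of cache segments among tasks to maximize total benefit; the benefit table and function names are hypothetical, not taken from the patent, and real benefit values would come from the scheduling requirements the patent mentions.

```python
# Minimal sketch of the kind of dynamic-programming allocation the
# patent alludes to. benefit[t][s] (a hypothetical input) is the
# benefit task t derives from s private cache segments; the DP splits
# n_segments among the tasks to maximize summed benefit, in
# polynomial time O(tasks * segments^2).
def optimal_partition(benefit, n_segments):
    n_tasks = len(benefit)
    # best[t][s] = max benefit using the first t tasks and s segments
    best = [[0] * (n_segments + 1) for _ in range(n_tasks + 1)]
    choice = [[0] * (n_segments + 1) for _ in range(n_tasks + 1)]
    for t in range(1, n_tasks + 1):
        for s in range(n_segments + 1):
            for give in range(s + 1):     # segments given to task t-1
                val = best[t - 1][s - give] + benefit[t - 1][give]
                if val > best[t][s]:
                    best[t][s] = val
                    choice[t][s] = give
    # Walk back through the choices to recover per-task segment counts.
    alloc, s = [], n_segments
    for t in range(n_tasks, 0, -1):
        alloc.append(choice[t][s])
        s -= choice[t][s]
    return best[n_tasks][n_segments], list(reversed(alloc))

# Example: two tasks, 4 segments, diminishing per-segment benefit.
benefit = [[0, 10, 14, 16, 17], [0, 6, 11, 15, 18]]
print(optimal_partition(benefit, 4))      # -> (25, [2, 2])
```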

155 citations

Proceedings ArticleDOI
10 Jun 2003
TL;DR: This paper combines compile-time cache analysis with data cache locking to estimate the worst-case memory performance (WCMP) in a safe, tight and fast way, and shows that this scheme is fully predictable, without compromising the performance of the transformed program.
Abstract: Caches have become increasingly important with the widening gap between main memory and processor speeds. However, they are a source of unpredictability due to their characteristics, resulting in programs behaving in a different way than expected. Cache locking mechanisms adapt caches to the needs of real-time systems. Locking the cache is a solution that trades performance for predictability: at a cost of generally lower performance, the time of accessing the memory becomes predictable. This paper combines compile-time cache analysis with data cache locking to estimate the worst-case memory performance (WCMP) in a safe, tight and fast way. In order to get predictable cache behavior, we first lock the cache for those parts of the code where the static analysis fails. To minimize the performance degradation, our method loads the cache, if necessary, with data likely to be accessed. Experimental results show that this scheme is fully predictable, without compromising the performance of the transformed program. When compared to an algorithm that assumes compulsory misses when the state of the cache is unknown, our approach eliminates all overestimation for the set of benchmarks, giving an exact WCMP of the transformed program without any significant decrease in performance.
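To make the predictability argument concrete, here is a hedged sketch of why locking yields an exact WCMP figure: once the locked contents are fixed, every access in a worst-case trace is statically a hit or a miss, so WCMP is an exact sum rather than a pessimistic bound. The line size and latency constants are illustrative assumptions, not values from the paper.

```python
# Sketch of WCMP accounting over a locked cache. With a known locked
# set, every access is statically classifiable, so the total is exact.
# LINE_SIZE and the cycle counts are made-up parameters for illustration.
LINE_SIZE = 32
HIT_CYCLES, MISS_CYCLES = 1, 100

def wcmp(accesses, locked_lines):
    """accesses: worst-case address trace; locked_lines: set of line
    numbers preloaded and locked before the region executes."""
    total = 0
    for addr in accesses:
        line = addr // LINE_SIZE
        total += HIT_CYCLES if line in locked_lines else MISS_CYCLES
    return total

# Lock the line the analysis predicts is most frequently accessed.
trace = [0, 4, 8, 64, 0, 4, 128, 0]
locked = {0}                      # line 0 holds addresses 0..31
print(wcmp(trace, locked))        # -> 206, exact rather than over-estimated
```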

155 citations

Patent
01 Mar 2001
TL;DR: In this patent, a garbage collector that uses an LRU algorithm to free memory from an XML DOM tree active in an application cache is described; time stamps in a node table determine which nodes to remove from the DOM tree.
Abstract: The present invention relates to a garbage collector that uses an LRU algorithm to free memory from an XML DOM tree active in an application cache. According to one or more embodiments of the present invention, a threshold for the amount of memory permitted to reside in an application cache is set. Then, a garbage collector removes entries from the cache until it falls below the threshold. In one or more embodiments, a node table is used. When nodes are added to the XML DOM tree in the application cache, the node table is updated. When the threshold for the amount of memory permitted to reside in the application cache is exceeded, the garbage collector applies an LRU algorithm that uses the node table to determine which nodes to remove from the application cache. In one embodiment, the LRU algorithm scans the node table to determine the least recently used node in the table by examining time stamp entries in the table. Then, the algorithm removes that node and repeats the process until the XML DOM tree uses less memory in the cache than the threshold.
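The eviction loop the patent describes is simple enough to sketch. The following is a minimal illustration, assuming a node table mapping node id to a (time stamp, size) pair; the names and the byte-based memory model are assumptions for illustration, not the patent's actual data structures.

```python
# Minimal sketch of the LRU eviction loop described in the abstract.
# node_table maps node id -> (last_access_time, size_in_bytes); both
# the mapping and the memory model are illustrative assumptions.
def collect(node_table, used_bytes, threshold_bytes):
    """Evict least-recently-used nodes until usage falls below threshold."""
    while used_bytes > threshold_bytes and node_table:
        # Scan the table for the oldest time stamp, as in the patent.
        lru_id = min(node_table, key=lambda n: node_table[n][0])
        _, size = node_table.pop(lru_id)   # remove the node from the cache
        used_bytes -= size
    return used_bytes

table = {"a": (10, 400), "b": (5, 300), "c": (20, 500)}
print(collect(table, used_bytes=1200, threshold_bytes=600))  # evicts b, then a -> 500
```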

154 citations

Patent
19 Apr 1993
TL;DR: In this patent, a cache memory replacement scheme with a locking feature is provided: locking bits associated with each line in the cache are supplied in the tag table, set by the executing application program/process, and used in conjunction with cache replacement bits by the cache controller to determine which lines in the cache to replace.
Abstract: In a memory system having a main memory and a faster cache memory, a cache memory replacement scheme with a locking feature is provided. Locking bits associated with each line in the cache are supplied in the tag table. These locking bits are preferably set and reset by the executing application program/process and are utilized in conjunction with cache replacement bits by the cache controller to determine the lines in the cache to replace. The lock bits and replacement bits for a cache line are "ORed" to create a composite bit for the cache line. If the composite bit is set, the cache line is not removed from the cache. When all composite bits are set and deadlock would otherwise result, all replacement bits are cleared. One cache line is always maintained as non-lockable. The locking bits "lock" the line of data in the cache until the process resets the lock bit. Because the process controls the state of the lock bits, the intelligence and knowledge the process contains regarding the frequency of use of certain memory locations can be utilized to provide a more efficient cache.
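The composite-bit rule is easy to state in code. Below is a hedged sketch of the victim-selection logic as the abstract describes it: OR each line's lock bit with its replacement bit, never evict a line whose composite bit is set, clear all replacement bits if deadlock would otherwise result, and keep one designated line non-lockable so a victim always exists. Function and variable names are illustrative, not from the patent.

```python
# Sketch of the composite-bit test from the abstract. A line whose
# lock bit OR replacement bit is set is not evicted; if every
# composite bit is set (deadlock), the replacement bits are cleared,
# and one designated line stays non-lockable so a victim always exists.
def pick_victim(lock_bits, repl_bits, non_lockable=0):
    lock_bits[non_lockable] = 0           # enforce the non-lockable line
    composite = [l | r for l, r in zip(lock_bits, repl_bits)]
    if all(composite):                    # deadlock: clear replacement bits
        repl_bits = [0] * len(repl_bits)
        composite = lock_bits[:]          # composite is now just the lock bits
    # Victim is the first line whose composite bit is clear.
    return composite.index(0)

print(pick_victim(lock_bits=[0, 1, 1, 0], repl_bits=[1, 1, 0, 0]))  # -> 3
```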

153 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (93% related)
Compiler: 26.3K papers, 578.5K citations (89% related)
Scalability: 50.9K papers, 931.6K citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Static routing: 25.7K papers, 576.7K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    42
2022    110
2021    12
2020    20
2019    15
2018    30