
Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Patent
26 Sep 2012
TL;DR: In this paper, a bit array stores recency information in a memory element configured to hold metadata for data objects kept in a separate cache memory element; the metadata includes, for each key, a bit offset denoting that key's slot in the bit array.
Abstract: Caching systems and methods for managing a cache are disclosed. One method includes determining whether a cache eviction condition is satisfied. In response to determining that the cache eviction condition is satisfied, at least one Bloom filter registering keys denoting objects in the cache is referenced to identify a particular object in the cache to evict. Further, the identified object is evicted from the cache. In accordance with an alternative scheme, a bit array is employed to store recency information in a memory element that is configured to store metadata for data objects stored in a separate cache memory element. This separate cache memory element stores keys denoting the data objects in the cache and further includes bit offset information for each of the keys denoting different slots in the bit array to enable access to the recency information.
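
The patent's alternative scheme is easy to picture in code. Below is a minimal, hypothetical sketch (the class and method names are ours, not the patent's): values live in one structure, recency bits live in a separate bit array, and each key's metadata records the bit offset of its slot. Eviction makes a CLOCK-like second-chance pass, clearing set bits until it finds a key whose bit is already clear.

```python
# Hypothetical sketch of the patent's bit-array scheme; names are illustrative.

class RecencyCache:
    def __init__(self, capacity):
        self.bits = bytearray(capacity)    # recency bits, one slot per entry
        self.meta = {}                     # key -> bit offset into self.bits
        self.data = {}                     # separate cache element: key -> value
        self.free = list(range(capacity))  # unused bit-array slots

    def get(self, key):
        if key in self.data:
            self.bits[self.meta[key]] = 1  # record recency on access
            return self.data[key]
        return None

    def put(self, key, value):
        if key not in self.data and not self.free:
            self._evict()                  # eviction condition: cache is full
        if key not in self.meta:
            self.meta[key] = self.free.pop()
        self.data[key] = value             # new entries start with a clear bit

    def _evict(self):
        # CLOCK-like pass: clear set bits until a clear-bit key is found.
        victim = None
        for key, slot in self.meta.items():
            if self.bits[slot] == 0:
                victim = key
                break
            self.bits[slot] = 0
        if victim is None:                 # every entry was recently used
            victim = next(iter(self.meta))
        slot = self.meta.pop(victim)
        del self.data[victim]
        self.bits[slot] = 0
        self.free.append(slot)

cache = RecencyCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # sets "a"'s recency bit
cache.put("c", 3)   # evicts "b", whose bit is still clear
assert cache.get("b") is None and cache.get("a") == 1
```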

112 citations

Journal ArticleDOI
TL;DR: The paper employs stretch as its major performance metric, since stretch accounts for data service time and is therefore fair when items have different sizes; it also proves that Min-SAUD achieves optimal stretch under some standard assumptions.
Abstract: Data caching at mobile clients is an important technique for improving the performance of wireless data dissemination systems. However, variable data sizes, data updates, limited client resources, and frequent client disconnections make cache management a challenge. We propose a gain-based cache replacement policy, Min-SAUD, for wireless data dissemination when cache consistency must be enforced before a cached item is used. Min-SAUD considers several factors that affect cache performance, namely, access probability, update frequency, data size, retrieval delay, and cache validation cost. The paper employs stretch as the major performance metric since it accounts for the data service time and, thus, is fair when items have different sizes. We prove that Min-SAUD achieves optimal stretch under some standard assumptions. Moreover, a series of simulation experiments have been conducted to thoroughly evaluate the performance of Min-SAUD under various system configurations. The simulation results show that, in most cases, the Min-SAUD replacement policy substantially outperforms two existing policies, namely, LRU and SAIU.
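
The abstract names the five factors but not how they combine; the exact gain function is defined in the paper. Purely as a hedged illustration, a score in the spirit of a gain-based policy might weigh the expected benefit of a hit against the item's size, with stretch defined as response time divided by service time:

```python
# Illustrative only: this is NOT the Min-SAUD formula from the paper, just a
# sketch of how the five named factors might combine in a gain-based score.

def stretch(response_time, service_time):
    # The paper's metric: response time normalized by service time, so large
    # and small items are compared fairly.
    return response_time / service_time

def gain(access_prob, update_freq, size, retrieval_delay, validation_cost):
    # Keeping an item saves the retrieval delay on each hit but still pays the
    # validation cost; frequent updates make the cached copy stale more often,
    # so discount by the access/update ratio. Dividing by size makes the score
    # a benefit per unit of cache space.
    freshness = access_prob / (access_prob + update_freq)
    return access_prob * freshness * (retrieval_delay - validation_cost) / size

def pick_victim(items):
    """items: key -> (access_prob, update_freq, size, delay, validation_cost).
    Evict the entry with the smallest gain."""
    return min(items, key=lambda k: gain(*items[k]))

victim = pick_victim({
    "x": (0.9, 0.1, 4, 20.0, 1.0),  # hot, rarely updated: high gain
    "y": (0.2, 0.5, 8, 20.0, 1.0),  # cool, often updated: low gain
})
assert victim == "y"
```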

112 citations

Proceedings ArticleDOI
01 May 1986
TL;DR: An analytical model for a cache-reload transient is developed, and the reload transient is shown to be related to the area in the tail of a normal distribution whose mean is a function of the footprints of the programs that compete for the cache.
Abstract: This paper develops an analytical model for a cache-reload transient. When an interrupt program or system program runs periodically in a cache-based computer, a short cache-reload transient occurs each time the interrupt program is invoked. That transient depends on the size of the cache, the fraction of the cache used by the interrupt program, and the fraction of the cache used by background programs that run between interrupts. We call the portion of a cache used by a program its footprint in the cache, and we show that the reload transient is related to the area in the tail of a normal distribution whose mean is a function of the footprints of the programs that compete for the cache. We believe that the model may be useful as well for predicting paging behavior in virtual-memory systems with round-robin scheduling.
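
The flavor of the model can be reproduced with a few lines of arithmetic. The sketch below is not the paper's derivation; it only illustrates the "tail of a normal distribution" idea with invented numbers, approximating the count of interrupt-program lines displaced between invocations as a binomial, hence approximately normal, random variable:

```python
import math

def normal_tail(x, mu, sigma):
    # P(X > x) for X ~ N(mu, sigma**2), via the complementary error function.
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

# Invented, purely illustrative numbers: a 4096-line cache, a 512-line
# interrupt-program footprint, a 3072-line combined background footprint.
C, f_int, f_bg = 4096, 512, 3072

# If background lines land uniformly at random, each interrupt-program line is
# displaced with probability p = f_bg / C, so the displaced count is roughly
# Binomial(f_int, p), which a normal with matching mean and variance
# approximates. Note the mean depends on both footprints, as in the paper.
p = f_bg / C
mu = f_int * p
sigma = math.sqrt(f_int * p * (1 - p))

# Probability that more than 400 of the 512 lines must be reloaded:
print(round(normal_tail(400, mu, sigma), 3))  # ~0.05
```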

112 citations

Journal ArticleDOI
TL;DR: It is claimed that a sufficient number of good Web cache replacement policies already exists; the article's goal is to identify appropriate policies for proxies with different characteristics, such as a small cache, limited bandwidth, or limited processing power, and to suggest policies for different types of proxies, such as ISP-level and root-level proxies.
Abstract: Research involving Web cache replacement policies has been active for at least a decade. In this article we claim that there is a sufficient number of good policies and that further proposals would produce only minute improvements. We argue that the focus should be fitness for purpose rather than proposing new policies. Up to now, almost all policies were purported to perform better than others, creating confusion as to which policy should be used; in fact, a policy only performs well in certain environments. Therefore, the goal of this article is to identify the appropriate policies for proxies with different characteristics, such as proxies with a small cache, limited bandwidth, and limited processing power, as well as to suggest policies for different types of proxies, such as ISP-level and root-level proxies.

112 citations

Patent
17 Aug 2001
TL;DR: A shared L2 cache architecture with 4-way associativity, four segments per entry, and four valid and dirty bits is presented; a shared translation look-aside buffer (TLB) serves L2 accesses, while a private TLB is associated with each processor.
Abstract: A digital system is provided with several processors, a private level one (L1) cache associated with each processor, a shared level two (L2) cache having several segments per entry, and a level three (L3) physical memory. The shared L2 cache architecture is embodied with 4-way associativity, four segments per entry, and four valid and dirty bits. When the L2 cache misses, the penalty to access data within the L3 memory is high. The system supports miss under miss, letting a second miss interrupt a segment prefetch being done in response to a first miss; thus, an interruptible SDRAM-to-L2-cache prefetch system with miss-under-miss support is provided. A shared translation look-aside buffer (TLB) is provided for L2 accesses, while a private TLB is associated with each processor. A micro TLB (μTLB) is associated with each resource that can initiate a memory transfer. The L2 cache, along with all of the TLBs and μTLBs, has resource ID and task ID fields associated with each entry to allow flushing and cleaning based on resource or task. Configuration circuitry allows the digital system to be configured on a task-by-task basis in order to reduce power consumption.
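
The patent describes hardware, but the lookup and task-based flushing logic can be sketched behaviorally. Everything below (class names, address decoding) is an assumption made for illustration, not the patent's design: a 4-way set-associative array with four segments per line, per-segment valid and dirty bits, and resource/task ID fields that enable selective flushing.

```python
# Behavioral sketch only; names and address layout are illustrative
# assumptions, not the patent's hardware design.

from dataclasses import dataclass, field

WAYS, SEGMENTS = 4, 4  # 4-way associative, four segments per entry

@dataclass
class Line:
    tag: int = -1
    valid: list = field(default_factory=lambda: [False] * SEGMENTS)
    dirty: list = field(default_factory=lambda: [False] * SEGMENTS)
    resource_id: int = -1  # which resource brought the line in
    task_id: int = -1      # which task owns the line

class L2Cache:
    def __init__(self, nsets):
        self.nsets = nsets
        self.sets = [[Line() for _ in range(WAYS)] for _ in range(nsets)]

    def _decode(self, addr):
        # Assumed layout: low bits select the segment, middle bits the set.
        segment = addr % SEGMENTS
        index = (addr // SEGMENTS) % self.nsets
        tag = addr // (SEGMENTS * self.nsets)
        return tag, index, segment

    def hit(self, addr):
        tag, index, segment = self._decode(addr)
        # A hit needs a matching tag AND a valid bit for the specific segment.
        return any(l.tag == tag and l.valid[segment] for l in self.sets[index])

    def flush_task(self, task_id):
        # "Flushing and cleaning based on resource or task": invalidate only
        # the lines tagged with the given task ID.
        for ways in self.sets:
            for line in ways:
                if line.task_id == task_id:
                    line.valid = [False] * SEGMENTS
                    line.dirty = [False] * SEGMENTS
```

One plausible reading of the abstract is that the per-segment valid bits are what make the interruptible prefetch workable: whatever segments arrived before a prefetch was interrupted remain individually usable.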

111 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18