Topic

Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Patent
16 Aug 1999
TL;DR: In this patent, the cache management strategy is customized for each semantic type, using different caching policies for different semantic types; the relationship between a semantic content type and its caching policy can be determined in advance, set directly by the user, or based, at least in part, on user history and profiling of user interaction with the resources.
Abstract: Resources are cached based on the semantic type of the resource. The cache management strategy is customized for each semantic type, using different caching policies for different semantic types. Semantic types that can be expected to contain dynamic information, such as news and weather, employ an active caching policy wherein the resource in the cache memory is chosen for replacement based on the duration of time that the resource has been in cache memory. Conversely, semantic types that can be expected to contain static resources, such as encyclopedic information, employ a more conservative caching strategy, such as LRU (Least Recently Used) or LFU (Least Frequently Used), that is substantially independent of the time the resource remains in cache memory. Additionally, some semantic types, such as communicated e-mail messages, newsgroup messages, and so on, may employ a caching policy that combines multiple strategies, wherein the resource progresses from an active cache with a dynamic caching policy to more static caches with increasingly less dynamic caching policies. The relationship between a semantic content type and the caching policy associated with it can be determined in advance, may be set directly by the user, or could be based, at least in part, on user history and profiling of user interaction with the resources.

67 citations
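
The patent's type-keyed policy dispatch can be illustrated with a minimal C++ sketch. The SemanticType categories, CachePolicy fields, and TTL values below are hypothetical placeholders chosen for illustration, not taken from the patent.

    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Hypothetical semantic categories and a per-type policy descriptor.
    enum class SemanticType { News, Weather, Encyclopedia, Email };

    struct CachePolicy {
        std::string eviction;   // "TTL" (age-based), "LRU", "LFU", or "Tiered"
        int ttl_seconds;        // only meaningful for age-based eviction
    };

    // Default type-to-policy table; per the abstract, this mapping could also
    // be set by the user or derived from profiling of user interaction.
    const std::unordered_map<SemanticType, CachePolicy> kPolicyTable = {
        {SemanticType::News,         {"TTL",    300}},  // dynamic: evict by age
        {SemanticType::Weather,      {"TTL",    600}},
        {SemanticType::Encyclopedia, {"LRU",      0}},  // static: age-independent
        {SemanticType::Email,        {"Tiered",   0}},  // active -> static cascade
    };

    int main() {
        const CachePolicy& p = kPolicyTable.at(SemanticType::News);
        std::cout << "news policy: " << p.eviction
                  << " (ttl " << p.ttl_seconds << "s)\n";
    }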

Journal ArticleDOI
TL;DR: A new cache replacement method based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented to mitigate cache pollution attacks in Named Data Networking (NDN); it mitigates these attacks efficiently and at little additional computational cost compared with the most common policies.

67 citations

Proceedings ArticleDOI
24 Jan 2006
TL;DR: A method is given to rapidly find the L1 cache miss rate of an application, and an energy model and an execution-time model are developed to find the best cache configuration for the given embedded application.
Abstract: Modern embedded systems execute a single application or a class of applications repeatedly. An emerging methodology for designing embedded systems utilizes configurable processors, where the cache size, associativity, and line size can be chosen by the designer. In this paper, a method is given to rapidly find the L1 cache miss rate of an application. An energy model and an execution-time model are developed to find the best cache configuration for the given embedded application. Using benchmarks from Mediabench, we find that our method explores the design space on average 45 times faster than Dinero IV while still achieving 100% accuracy.

67 citations
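
The exploration the abstract describes can be sketched as a sweep over (cache size, associativity, line size) triples that scores each configuration with energy and time models. A minimal C++ sketch follows; the miss-rate, energy, and time formulas and the energy-delay objective are illustrative stand-ins, not the authors' models.

    #include <cstdio>
    #include <initializer_list>

    struct Config { int size_kb, assoc, line_b; };

    // Stand-in miss-rate estimate (ignores line size); the paper instead
    // computes miss rates rapidly from the application itself.
    double miss_rate(const Config& c) {
        return 0.2 / (c.size_kb * c.assoc);
    }

    // Illustrative per-access energy (nJ): bigger caches cost more per hit,
    // and each miss pays a fixed off-chip penalty.
    double energy(const Config& c, double mr) {
        return 0.05 * c.size_kb + 20.0 * mr;
    }

    // Illustrative per-access time (cycles).
    double exec_time(double mr) {
        return 1.0 + 40.0 * mr;
    }

    int main() {
        Config best{};
        double best_cost = 1e30;
        for (int size_kb : {1, 2, 4, 8, 16, 32})
            for (int assoc : {1, 2, 4})
                for (int line_b : {16, 32, 64}) {
                    Config c{size_kb, assoc, line_b};
                    double mr = miss_rate(c);
                    double cost = energy(c, mr) * exec_time(mr);  // energy-delay
                    if (cost < best_cost) { best_cost = cost; best = c; }
                }
        std::printf("best config: %d KB, %d-way, %d B lines\n",
                    best.size_kb, best.assoc, best.line_b);
    }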

Journal ArticleDOI
TL;DR: This paper proposes a cost-effective solution to enhance data reliability significantly with minimum impact on performance by adding a small fully associative cache to store the replica of every write to the L1 data cache.
Abstract: Soft error conscious cache design has become increasingly crucial for reliable computing. The widely used ECC or parity-based integrity checking techniques have only limited capability in error detection and correction, while incurring nontrivial penalty in area or performance. The N modular redundancy (NMR) scheme is too costly for processors with stringent cost constraints. This paper proposes a cost-effective solution to enhance data reliability significantly with minimum impact on performance. The idea is to add a small fully associative cache to store the replica of every write to the L1 data cache. Due to data locality and its full associativity, the replication cache can be kept small while providing replicas for a significant fraction of read hits in L1, which can be used to enhance data integrity against soft errors. Our experiments show that a replication cache with eight blocks can provide replicas for 97.3 percent of read hits in L1 on average. Moreover, compared with the recently proposed in-cache replication schemes, the replication cache is more energy efficient, while improving the data integrity against soft errors significantly.

67 citations
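
The mechanism lends itself to a short sketch: every store into the L1 data cache also deposits a copy into a tiny fully associative buffer, and an L1 read hit that also hits the buffer can be cross-checked against its replica. The eight-entry capacity follows the paper's evaluation; the FIFO replacement and word-granularity entries below are illustrative simplifications.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <optional>

    struct ReplEntry { uint64_t addr = 0; uint64_t data = 0; bool valid = false; };

    class ReplicationCache {
        std::array<ReplEntry, 8> entries_{};  // 8 blocks, as evaluated in the paper
        std::size_t next_ = 0;                // FIFO victim pointer (illustrative)
    public:
        // Invoked alongside every write to the L1 data cache.
        void on_l1_write(uint64_t addr, uint64_t data) {
            entries_[next_] = {addr, data, true};
            next_ = (next_ + 1) % entries_.size();
        }
        // Fully associative lookup: a hit supplies a replica for checking.
        std::optional<uint64_t> replica(uint64_t addr) const {
            for (const auto& e : entries_)
                if (e.valid && e.addr == addr) return e.data;
            return std::nullopt;
        }
    };

    // On an L1 read hit, a replica mismatch flags a soft error; reads with
    // no replica simply go unchecked.
    bool read_ok(const ReplicationCache& rc, uint64_t addr, uint64_t l1_value) {
        auto r = rc.replica(addr);
        return !r || *r == l1_value;
    }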

Proceedings ArticleDOI
30 Mar 2004
TL;DR: This work shows that the standard hash join algorithm for disk-oriented databases (i.e., GRACE) spends over 73% of its user time stalled on CPU cache misses, and explores the use of prefetching to improve its cache performance.
Abstract: Hash join algorithms suffer from extensive CPU cache stalls. We show that the standard hash join algorithm for disk-oriented databases (i.e., GRACE) spends over 73% of its user time stalled on CPU cache misses, and we explore the use of prefetching to improve its cache performance. Applying prefetching to hash joins is complicated by the data dependencies, multiple code paths, and inherent randomness of hashing. We present two techniques, group prefetching and software-pipelined prefetching, that overcome these complications. These schemes achieve 2.0-2.9X speedups for the join phase and 1.4-2.6X speedups for the partition phase over GRACE and simple prefetching approaches. Compared with previous cache-aware approaches (i.e., cache partitioning), the schemes are at least 50% faster on large relations and do not require exclusive use of the CPU cache to be effective.

67 citations
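
Group prefetching, one of the paper's two techniques, can be sketched as splitting each probe into stages and running the whole group through the address-computation stage, issuing a prefetch per bucket, before any probe walks its chain, so the group's cache misses overlap. The bucket layout, hash function, and group size below are illustrative assumptions, and __builtin_prefetch is the GCC/Clang intrinsic, not the paper's code.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Bucket { int64_t key; int64_t payload; Bucket* next; };

    // Illustrative multiplicative hash; the technique is independent of it.
    inline std::size_t hash_key(int64_t k, std::size_t n) {
        return static_cast<uint64_t>(k) * 0x9E3779B97F4A7C15ull % n;
    }

    // Probe one group of keys (group <= 16) against the table in two stages.
    void probe_group(const std::vector<int64_t>& keys, std::size_t begin,
                     std::size_t group, const std::vector<Bucket*>& table,
                     std::vector<int64_t>& out) {
        Bucket* target[16];  // the group size caps the prefetches in flight
        // Stage 1: compute every bucket address and issue its prefetch.
        for (std::size_t i = 0; i < group; ++i) {
            target[i] = table[hash_key(keys[begin + i], table.size())];
            __builtin_prefetch(target[i]);  // GCC/Clang intrinsic
        }
        // Stage 2: the buckets should now be in cache; walk the chains.
        for (std::size_t i = 0; i < group; ++i)
            for (Bucket* b = target[i]; b != nullptr; b = b->next)
                if (b->key == keys[begin + i]) out.push_back(b->payload);
    }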


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 92% related
Server: 79.5K papers, 1.4M citations, 88% related
Scalability: 50.9K papers, 931.6K citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 85% related
Quality of service: 77.1K papers, 996.6K citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18