
Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Proceedings ArticleDOI
11 Jul 1997
TL;DR: It is shown that for an 8 Kbyte data cache, XOR-mapping schemes approximately halve the miss ratio for two-way associative and column-associative organizations, and XOR-mapping schemes also provide a very significant reduction in the miss ratio for the other cache organizations, including the direct-mapped cache.
Abstract: This paper makes the case for the use of XOR-based placement functions for cache memories. It shows that these XOR-mapping schemes can eliminate many conflict misses for direct-mapped and victim caches and practically all of them for (pseudo) two-way associative organizations. The paper evaluates the performance of XOR-mapping schemes for a number of different cache organizations: direct-mapped, set-associative, victim, hash-rehash, column-associative and skewed-associative. It also proposes novel replacement policies for some of these cache organizations. In particular, it presents a low-cost implementation of a pure LRU replacement policy which demonstrates a significant improvement over the pseudo-LRU replacement previously proposed. The paper shows that for an 8 Kbyte data cache, XOR-mapping schemes approximately halve the miss ratio for two-way associative and column-associative organizations. Skewed-associative caches, which already make use of XOR-mapping functions, can benefit from the LRU replacement and also from the use of more sophisticated mapping functions. For two-way associative, column-associative and two-way skewed-associative organizations, XOR-mapping schemes achieve a miss ratio that is not higher than 1.10 times that of a fully-associative cache. XOR-mapping schemes also provide a very significant reduction in the miss ratio for the other cache organizations, including the direct-mapped cache. Ultimately, the conclusion of this study is that XOR-based placement functions unequivocally provide highly significant performance benefits to most cache organizations.
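
The sketch below illustrates the basic idea behind XOR-based placement: instead of selecting a set with the low-order block-address bits alone, a higher slice of the address is XORed into the index, which spreads addresses that would otherwise conflict across different sets. The cache geometry (8 KB, 32-byte lines, two-way associative) matches the paper's evaluation point, but the specific bits XORed here are an illustrative assumption, not the exact mapping functions studied in the paper.

```python
# Illustrative sketch of XOR-based cache set indexing (not the paper's exact mapping).
# Assumed geometry: 8 KB data cache, 32-byte lines, two-way set associative
# -> 128 sets, so the set index is 7 bits wide.

LINE_BITS = 5                      # 32-byte cache lines
SET_BITS = 7                       # 128 sets
SET_MASK = (1 << SET_BITS) - 1

def conventional_index(addr: int) -> int:
    """Classic bit-selection placement: the low-order block-address bits."""
    return (addr >> LINE_BITS) & SET_MASK

def xor_index(addr: int) -> int:
    """XOR-mapping placement: fold a higher address slice into the index bits,
    which disperses addresses that collide under conventional indexing."""
    block = addr >> LINE_BITS
    low = block & SET_MASK                   # traditional index bits
    high = (block >> SET_BITS) & SET_MASK    # next 7 bits of the tag
    return low ^ high

if __name__ == "__main__":
    # Two addresses 4 KB apart map to the same set under conventional indexing
    # in this geometry, but XOR mapping can separate them.
    a, b = 0x0000, 0x1000
    print(conventional_index(a), conventional_index(b))  # same set -> potential conflict
    print(xor_index(a), xor_index(b))                    # typically different sets
```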

152 citations

Proceedings ArticleDOI
25 Mar 2012
TL;DR: CacheShield effectively improves cache performance under normal circumstances and, more importantly, shields CCN routers from cache pollution attacks; it is effective for both CCN and today's cache servers.
Abstract: With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.
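
The abstract does not spell out CacheShield's shielding rule, so the sketch below shows a generic admission-based defense in the same spirit: content is cached only after it has been requested a minimum number of times, which limits how much one-off attack traffic can displace genuinely popular content. The class name, the `admit_after` threshold, and the LRU bookkeeping are assumptions for illustration, not CacheShield's published algorithm.

```python
from collections import OrderedDict, defaultdict

class ShieldedLRUCache:
    """Illustrative admission-controlled LRU cache (not CacheShield's exact rule):
    an object is cached only after it has been requested `admit_after` times,
    so one-off requests generated by a pollution attack rarely evict popular content."""

    def __init__(self, capacity: int, admit_after: int = 2):
        self.capacity = capacity
        self.admit_after = admit_after
        self.cache = OrderedDict()            # name -> content, kept in LRU order
        self.request_counts = defaultdict(int)

    def get(self, name: str, fetch):
        # Cache hit: refresh the LRU position and return the content.
        if name in self.cache:
            self.cache.move_to_end(name)
            return self.cache[name]

        # Cache miss: fetch from upstream (e.g., the content producer).
        content = fetch(name)
        self.request_counts[name] += 1

        # Admit only content that has shown some popularity.
        if self.request_counts[name] >= self.admit_after:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict the LRU entry
            self.cache[name] = content
        return content
```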

152 citations

Book ChapterDOI
09 Sep 2003
TL;DR: This work introduces a new database object called Cache Table that enables persistent caching of the full or partial content of a remote database table. It supports transparent caching both at the edge of content-delivery networks and in the middle tier of an enterprise application infrastructure, improving the response time, throughput and scalability of transactional web applications.
Abstract: We introduce a new database object called Cache Table that enables persistent caching of the full or partial content of a remote database table. The content of a cache table is either defined declaratively and populated in advance at setup time, or determined dynamically and populated on demand at query execution time. Dynamic cache tables exploit the characteristics of typical transactional web applications with a high volume of short transactions, simple equality predicates, and 3-4 way joins. Based on federated query processing capabilities, we developed a set of new technologies for database caching: cache tables, "Janus" (two-headed) query execution plans, cache constraints, and asynchronous cache population methods. Our solution supports transparent caching both at the edge of content-delivery networks and in the middle-tier of an enterprise application infrastructure, improving the response time, throughput and scalability of transactional web applications.
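
As a rough illustration of the dynamic cache-table idea, the sketch below answers an equality-predicate query from a local cache table when possible and otherwise fetches the row from the remote table, populating the cache on demand. The table names, schema, and the use of sqlite3 as a stand-in for both databases are assumptions for illustration; the paper's actual solution is built on federated query processing and "Janus" query execution plans inside the DBMS.

```python
import sqlite3

# Stand-ins for the middle-tier cache database and the remote backend database.
local = sqlite3.connect(":memory:")
remote = sqlite3.connect(":memory:")

remote.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
remote.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                   [(1, "Ada", "Zurich"), (2, "Lin", "Almaden")])
local.execute("CREATE TABLE customer_cache (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

def get_customer(cust_id):
    """Answer an equality-predicate query from the cache table if possible,
    otherwise fetch the row from the remote table and populate the cache on demand."""
    row = local.execute(
        "SELECT id, name, city FROM customer_cache WHERE id = ?", (cust_id,)).fetchone()
    if row is not None:
        return row                                   # served from the local cache table
    row = remote.execute(
        "SELECT id, name, city FROM customer WHERE id = ?", (cust_id,)).fetchone()
    if row is not None:
        local.execute("INSERT OR REPLACE INTO customer_cache VALUES (?, ?, ?)", row)
    return row

print(get_customer(1))   # miss: populated from the remote table
print(get_customer(1))   # hit: answered locally
```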

151 citations

Journal ArticleDOI
TL;DR: This work investigates multiple approaches to effectively manage second-level buffer caches. It reports a new local algorithm called multi-queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, and a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over the corresponding local algorithms.
Abstract: Buffer caches are commonly used in servers to reduce the number of slow disk accesses or network messages. These buffer caches form a multilevel buffer cache hierarchy. In such a hierarchy, second-level buffer caches have different access patterns from first-level buffer caches because accesses to a second-level cache are actually misses from a first-level cache. Therefore, commonly used cache management algorithms such as the least recently used (LRU) replacement algorithm that work well for single-level buffer caches may not work well for second-level buffer caches. We investigate multiple approaches to effectively manage second-level buffer caches. In particular, we report our research results in 1) second-level buffer cache access pattern characterization, 2) a new local algorithm called multi-queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, 3) a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over corresponding local algorithms, and 4) implementation and evaluation of these algorithms in a real storage system connected with commercial database servers (Microsoft SQL Server and Oracle) running industrial-strength online transaction processing benchmarks.
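
A simplified sketch of the multi-queue (MQ) idea follows: cached blocks live in several LRU queues, and a block's queue is chosen from the logarithm of its access frequency, so blocks that are re-referenced many times at the second level are protected from the stream of cold misses arriving from the first level. The number of queues is an assumption, and MQ's expiration times and history queue of evicted blocks are omitted here for brevity.

```python
import math
from collections import OrderedDict

class SimplifiedMQ:
    """Simplified sketch of a multi-queue (MQ) style cache: blocks are kept in
    several LRU queues, and a block with access frequency f lives in queue
    min(floor(log2(f)), m - 1), so frequently re-referenced blocks are not
    evicted by bursts of once-referenced blocks."""

    def __init__(self, capacity: int, num_queues: int = 8):
        self.capacity = capacity
        self.queues = [OrderedDict() for _ in range(num_queues)]  # LRU order within each queue
        self.freq = {}                                            # access counts of cached blocks

    def _queue_index(self, f: int) -> int:
        return min(int(math.log2(f)), len(self.queues) - 1)

    def access(self, block):
        f = self.freq.get(block, 0) + 1
        self.freq[block] = f
        # If the block is already cached, remove it from its old queue;
        # otherwise make room for it before inserting.
        for q in self.queues:
            if block in q:
                del q[block]
                break
        else:
            if sum(len(q) for q in self.queues) >= self.capacity:
                self._evict()
        # Reinsert at the MRU end of the queue matching the new frequency.
        self.queues[self._queue_index(f)][block] = True

    def _evict(self):
        # Evict the LRU block of the lowest-priority non-empty queue.
        for q in self.queues:
            if q:
                victim, _ = q.popitem(last=False)
                del self.freq[victim]
                return
```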

150 citations

Patent
31 Jul 1997
TL;DR: In this patent, write requests are logged in a log structured cache whose cache memory region is partitioned into write cache segments and redundancy-data (parity) cache segments, and a cache-extension disk region is used to expand the size of the log structured cache.
Abstract: Method and apparatus for accelerating write operations by logging write requests in a log structured cache and by expanding the log structured cache using a cache-extension disk region. The log structured cache includes a cache memory region partitioned into one or more write cache segments and one or more redundancy-data (parity) cache segments. The cache-extension disk region is a portion of a disk array separate from a main disk region. The cache-extension disk region is also partitioned into segments and is used to extend the size of the log structured cache. The main disk region is instead managed in accordance with storage management techniques (e.g., RAID storage management). The write cache is partitioned into multiple write cache segments so that when one is full another can be used to handle new write requests. When one of these multiple write cache segments is filled, it is moved to the cache-extension disk region, thereby freeing the write cache segment for reuse. The redundancy-data (parity) cache segment holds redundancy data for recent write requests, thereby assuring the integrity of the logged write request data in the log structured cache.
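
The sketch below mirrors the segment rotation described in the patent: write requests are appended to the active in-memory write cache segment, a full segment is destaged to the cache-extension disk region so the memory segment can be reused, and a redundancy (parity) segment tracks recently logged data. The segment size, the number of in-memory segments, and the XOR parity are illustrative assumptions, not the patent's exact layout.

```python
class LogStructuredWriteCache:
    """Illustrative sketch of a log structured write cache with a cache-extension
    region: writes are logged into in-memory write cache segments, full segments
    are moved to the extension region so the memory segment can be reused, and a
    parity value protects recently logged data. Sizes and XOR parity are assumptions."""

    def __init__(self, segment_slots: int = 4, num_segments: int = 2):
        self.segment_slots = segment_slots
        self.segments = [[] for _ in range(num_segments)]  # in-memory write cache segments
        self.active = 0                                    # index of the segment receiving writes
        self.extension_region = []                         # stands in for the cache-extension disk region
        self.parity = 0                                    # running XOR over logged write data

    def log_write(self, data: bytes):
        seg = self.segments[self.active]
        seg.append(data)
        # Maintain redundancy data for recent writes (simple XOR parity here).
        self.parity ^= int.from_bytes(data, "little")
        if len(seg) >= self.segment_slots:
            self._destage(self.active)
            # Switch to another write cache segment so new writes keep flowing.
            self.active = (self.active + 1) % len(self.segments)

    def _destage(self, idx: int):
        # Move the full segment to the cache-extension disk region,
        # freeing the in-memory segment for reuse.
        self.extension_region.append(list(self.segments[idx]))
        self.segments[idx].clear()
```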

150 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 92% related
Server: 79.5K papers, 1.4M citations, 88% related
Scalability: 50.9K papers, 931.6K citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 85% related
Quality of service: 77.1K papers, 996.6K citations, 84% related
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18