scispace - formally typeset

Smart Cache

About: Smart Cache is a research topic. Over the lifetime, 7680 publications have been published within this topic receiving 180618 citations.


Papers
Proceedings ArticleDOI
01 Oct 2013
TL;DR: In-memory object caches such as memcached are critical to the success of popular web sites, reducing database load and improving scalability, but cache configuration remains poorly understood.
Abstract: Large-scale in-memory object caches such as memcached are widely used to accelerate popular web sites and to reduce burden on backend databases. Yet current cache systems give cache operators limited information on what resources are required to optimally accommodate the present workload. This paper focuses on a key question for cache operators: how much total memory should be allocated to the in-memory cache tier to achieve desired performance? We present our Mimir system: a lightweight online profiler that hooks into the replacement policy of each cache server and produces graphs of the overall cache hit rate as a function of memory size. The profiler enables cache operators to dynamically project the cost and performance impact from adding or removing memory resources within a distributed in-memory cache, allowing "what-if" questions about cache performance to be answered without laborious offline tuning. Internally, Mimir uses a novel lock-free algorithm and lookup filters for quickly and dynamically estimating hit rate of LRU caches. Running Mimir as a profiler requires minimal changes to the cache server, thanks to a lean API. Our experiments show that Mimir produces dynamic hit rate curves with over 98% accuracy and 2--5% overhead on request latency and throughput when Mimir is run in tandem with memcached, suggesting online cache profiling can be a practical tool for improving provisioning of large caches.
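The hit-rate-curve idea in this abstract can be sketched with the classic Mattson stack-distance algorithm; Mimir's actual lock-free algorithm and lookup filters are an optimized approximation of it, and the function names below are illustrative, not taken from the paper:

```python
from collections import Counter

def hit_rate_curve(trace):
    """Mattson stack-distance profiling for an LRU cache.

    For each access, the stack distance is the number of distinct
    keys touched since the previous access to the same key.  An LRU
    cache of size C hits exactly when the stack distance is < C, so
    one pass over the trace yields the hit rate at every cache size.
    """
    stack = []                 # LRU stack, most recently used key first
    distances = Counter()      # stack distance -> occurrence count
    for key in trace:
        if key in stack:
            d = stack.index(key)
            distances[d] += 1
            stack.pop(d)
        stack.insert(0, key)   # first-time keys are compulsory misses

    total = len(trace)
    curve, hits = {}, 0
    for size in range(1, len(stack) + 1):
        hits += distances.get(size - 1, 0)
        curve[size] = hits / total
    return curve

curve = hit_rate_curve(["a", "b", "a", "c", "b", "a", "c"])
# curve[size] is the LRU hit rate for a cache holding `size` items
```

This naive version costs O(N) per access for the `stack.index` scan; the point of a production profiler like Mimir is to get the same curve cheaply and online.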

62 citations

Proceedings ArticleDOI
01 Apr 2001
TL;DR: A cache management scheme is proposed that relocates full caches to the most probable cells and fractions of caches to less likely neighboring cells; simulation demonstrates substantial benefits for the end user.
Abstract: Mobile computing is considered of major importance to the computing industry for the forthcoming years due to the progress in the wireless communications area. A proxy-based architecture for accelerating Web browsing in cellular customer premises networks (CPN) is presented. Proxy caches, maintained in base stations, are constantly relocated to accompany the roaming user. A cache management scheme is proposed, which involves the relocation of full caches to the most probable cells but also percentages of the caches to less likely neighbors. Relocation is performed according to a movement prediction algorithm based on a learning automaton. The simulation of the scheme demonstrates substantial benefits for the end user.
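The movement-prediction idea can be sketched with a linear reward-inaction (L_RI) learning automaton, one standard automaton scheme: each neighboring cell is an action, and probabilities shift toward cells the user actually enters. The cell names, step size, and class interface here are illustrative assumptions, not details from the paper:

```python
class CellPredictor:
    """Linear reward-inaction (L_RI) automaton over neighboring cells."""

    def __init__(self, cells, step=0.2):
        self.cells = list(cells)
        # Start with a uniform probability over neighboring cells.
        self.p = {c: 1.0 / len(cells) for c in self.cells}
        self.step = step

    def predict(self):
        # Rank neighbors by probability: the full cache would go to
        # the top cell, fractions to the less likely ones.
        return sorted(self.cells, key=self.p.get, reverse=True)

    def observe(self, actual):
        # Reward the cell the user entered; shrink the others.
        # The update keeps the probabilities summing to 1.
        for c in self.cells:
            if c == actual:
                self.p[c] += self.step * (1.0 - self.p[c])
            else:
                self.p[c] -= self.step * self.p[c]

pred = CellPredictor(["north", "east", "south"])
for _ in range(10):
    pred.observe("east")      # user keeps moving east
# pred.predict() now ranks "east" first
```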

62 citations

Patent
17 Oct 1983
TL;DR: In this paper, the cache operating cycle is divided into two subcycles dedicated to mutually exclusive operations: the first subcycle is dedicated to receiving a central processor memory read request, with its address.
Abstract: A data processing machine in which the cache operating cycle is divided into two subcycles dedicated to mutually exclusive operations. The first subcycle is dedicated to receiving a central processor memory read request, with its address. The second subcycle is dedicated to every other kind of cache operation, in particular either (a) receiving an address from a peripheral processor for checking the cache contents after a peripheral processor write to main memory, or (b) writing anything to the cache, including an invalid bit after a cache check match condition, or data after either a cache miss or a central processor write to main memory. The central processor can continue uninterruptedly to read the cache on successive central processor microinstruction cycles, regardless of the fact that the cache contents are being "simultaneously" checked, invalidated or updated after central processor writes. After a cache miss, although the central processor must be stopped to permit updating, it can resume operations a cycle earlier than is possible without the divided cache cycle.
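The scheduling effect of the divided cycle can be sketched in a few lines: reads always occupy the first subcycle, while queued maintenance operations (peripheral-write checks, invalidations, updates) drain through the second, so reads never stall behind housekeeping. This is a behavioral sketch of the timing idea only, not the patented hardware:

```python
from collections import deque

def run_cycles(reads, maintenance):
    """Interleave CPU reads and cache maintenance across subcycles.

    Subcycle 1 of every cycle serves the CPU read; subcycle 2 serves
    at most one queued maintenance operation.
    """
    maint = deque(maintenance)
    log = []
    for addr in reads:
        log.append(("read", addr))                  # subcycle 1
        if maint:
            log.append(("maint", maint.popleft()))  # subcycle 2
    return log

schedule = run_cycles([0x100, 0x104, 0x108], ["invalidate 0x200"])
# every cycle starts with a CPU read; the invalidate slots into the
# second subcycle of the first cycle without delaying any read
```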

62 citations

Patent
20 Jun 2005
TL;DR: In this paper, the authors describe a cache and I/O management in a multi-layer storage virtualization environment, where the storage entities may be at the same layer (horizontal cache coordination) or at different layers (vertical cache coordination).
Abstract: Systems and methods are disclosed for performing cache and I/O management in a multi-layer storage virtualization environment. Block virtualization may be implemented at various layers, each with one or more storage entities. One storage entity may coordinate and leverage the cache and the caching mechanism available with another storage entity. The storage entities may be at the same layer (horizontal cache coordination) or at different layers (vertical cache coordination). Cache coordination may include application-assisted read-ahead.

62 citations

Patent
Peichun Peter Liu1
29 Apr 1996
TL;DR: In this article, content-addressable tag-compare arrays (CAMs) are used to select a cache line, and arbitration logic in each subarray selects a word line (cache line).
Abstract: A cache memory for a computer uses content-addressable tag-compare arrays (CAM) to determine if a match occurs. The cache memory is partitioned in four subarrays, i.e., interleaved, providing a wide cache line (word lines) but shallow depth (bit lines). The cache can be accessed by multiple addresses, producing multiple data outputs in a given cycle. Two effective addresses and one real address are applied at one time, and if addresses are matched in different subarrays, or two on the same line in a single subarray, then multiple access is permitted. The two content-addressable memories, or CAMs, are used to select a cache line, and in parallel with this, arbitration logic in each subarray selects a word line (cache line).
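The multi-access condition, that several addresses can be served in one cycle when they land in different subarrays, can be sketched as a simple arbitration filter. The low-order-bit interleaving and the helper names are assumptions for illustration; the patent's actual partitioning and CAM arbitration are more involved:

```python
def subarray(addr, n=4):
    """Interleaved subarray for an address (low-order index bits;
    the patent's actual mapping may differ)."""
    return addr % n

def concurrent(addrs, n=4):
    """Return the addresses servable in one cycle: at most one
    access per subarray; conflicting extras must wait."""
    served, busy = [], set()
    for a in addrs:
        s = subarray(a, n)
        if s not in busy:
            busy.add(s)
            served.append(a)
    return served

# 0x10, 0x21, 0x32 map to subarrays 0, 1, 2 -> all served together;
# 0x10 and 0x20 both map to subarray 0 -> one must wait
```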

62 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
92% related
Server
79.5K papers, 1.4M citations
88% related
Scalability
50.9K papers, 931.6K citations
88% related
Network packet
159.7K papers, 2.2M citations
85% related
Quality of service
77.1K papers, 996.6K citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  50
2022  114
2021  6
2020  1
2019  8
2018  18