Smart Cache

About: Smart Cache is a research topic. Over the lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Patent
08 Oct 1999
TL;DR: In this article, a request can be forwarded to a cooperating cache server if the requested object cannot be found locally, and the load is balanced by shifting some or all of the forwarded requests from an overloaded cache server to a less loaded one.
Abstract: In a system including a collection of cooperating cache servers, such as proxy cache servers, a request can be forwarded to a cooperating cache server if the requested object cannot be found locally. An overload condition is detected if, for example, due to reference skew, some objects are in high demand by all the clients and the cache servers that contain those hot objects become overloaded due to forwarded requests. In response, the load is balanced by shifting some or all of the forwarded requests from an overloaded cache server to a less loaded one. Both centralized and distributed load balancing environments are described.

44 citations
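
The load-shifting scheme in the abstract above can be illustrated with a short Python sketch. This is not the patented mechanism itself: the CacheServer class, the forwarded-request counter, and the overload threshold are assumptions chosen for illustration.

class CacheServer:
    def __init__(self, name):
        self.name = name
        self.store = {}          # objects cached locally
        self.forwarded_load = 0  # requests forwarded to this server so far

    def has(self, key):
        return key in self.store


def route_request(key, local, peers, max_forwarded=100):
    """Serve locally if possible; otherwise forward to a cooperating peer,
    shifting the request to a less loaded peer when the holder is overloaded."""
    if local.has(key):
        return local
    holders = [p for p in peers if p.has(key)]
    if not holders:
        return None  # miss everywhere; fetching from the origin is not modeled
    target = min(holders, key=lambda p: p.forwarded_load)
    if target.forwarded_load >= max_forwarded:
        # Overload detected (e.g. a hot object due to reference skew):
        # shift the forwarded request to the least loaded peer overall,
        # which can then fetch and cache the hot object itself.
        target = min(peers, key=lambda p: p.forwarded_load)
    target.forwarded_load += 1
    return target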

Patent
23 Mar 1998
TL;DR: In this article, the same address field is employed as an index to the cache directory and cache memory regardless of the cache memory size, to avoid multiplexing within critical address paths.
Abstract: To avoid multiplexing within the critical address paths, the same address field is employed as an index to the cache directory and cache memory regardless of the cache memory size. An increase in cache memory size is supported by increasing associativity within the cache directory and memory, for example by increasing congruence classes from two members to four members. For the smaller cache size, an additional address “index” bit is employed to select one of multiple groups of address tags/data items within a cache directory or cache memory row by comparison to a bit forced to a logic 1.

44 citations
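
A rough sketch of the fixed-index lookup described above, under assumed parameters (64-byte lines, 128 congruence classes, rows of four tag entries); the way the extra address bit partitions a directory row in the small configuration is also an illustrative simplification, not the exact comparison circuit in the patent.

LINE_BITS = 6    # 64-byte cache lines (assumed)
INDEX_BITS = 7   # 128 congruence classes; the same index field for both cache sizes

def set_index(addr):
    """Index into the cache directory/memory; identical for both sizes."""
    return (addr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)

def tag(addr):
    return addr >> (LINE_BITS + INDEX_BITS)

def lookup(addr, directory, large_cache):
    """directory[s] holds four tag entries per congruence class.  The large
    cache searches all four members; the small cache uses one extra address
    bit to restrict the search to half of the row, standing in for the extra
    'index' bit the patent compares against a bit forced to logic 1."""
    row = directory[set_index(addr)]
    if large_cache:
        candidates = range(4)
    else:
        group = (addr >> (LINE_BITS + INDEX_BITS)) & 1
        candidates = range(2 * group, 2 * group + 2)
    return any(row[i] == tag(addr) for i in candidates)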

Proceedings Article
27 Oct 2003
TL;DR: This paper describes the methodology behind CASPER, its detailed design, and its currently supported set of functionalities, and argues that CASPER is a useful addition to the performance analysis community for evaluating cache structures and hierarchies of various kinds.
Abstract: The efficient use of cache hierarchies is crucial to the performance of uni-processor (desktop) and multiprocessor (enterprise) platforms. A plethora of research exists on the various structures and protocols that are of interest when considering caches. To enable the performance analysis of various cache hierarchies and associated allocation/coherence protocols, we developed a trace-driven simulation framework called CASPER - cache architecture simulation & performance exploration using refstreams. The CASPER simulation framework provides a rich set of features to model various cache organization alternatives, coherence protocols & optimizations, allocation/replacement policies, prefetching and partitioning techniques. In this paper, we describe the methodology behind CASPER, its detailed design and currently supported set of functionalities. CASPER has been used extensively for various research studies; a brief overview of some of these CASPER-based evaluation studies and their salient results will also be discussed. Based on its wide-ranging applicability, we believe CASPER is a useful addition to the performance analysis community for evaluating cache structures and hierarchies of various kinds.

44 citations
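
As a hedged illustration of what a trace-driven cache simulator does (this is not CASPER, whose internals are not given here), the sketch below replays an address trace against a configurable set-associative cache with LRU replacement and reports the hit rate; the line size, set count, and associativity are placeholders.

from collections import OrderedDict

def simulate(trace, line_bytes=64, num_sets=64, ways=4):
    """Replay a list of addresses and return the hit rate."""
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in trace:
        block = addr // line_bytes
        s = sets[block % num_sets]
        if block in s:
            hits += 1
            s.move_to_end(block)        # refresh LRU position
        else:
            if len(s) >= ways:
                s.popitem(last=False)   # evict the least recently used block
            s[block] = True
    return hits / len(trace) if trace else 0.0

# Tiny synthetic trace with some reuse: expect 2 hits out of 5 accesses.
print(simulate([0x1000, 0x1040, 0x1000, 0x2000, 0x1000]))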

Patent
03 Feb 1998
TL;DR: In this article, a backup copy of only a portion of the cache directory is maintained in non-volatile storage for error recovery purposes, which saves processing time by limiting the number of times that data is copied to non-volatile storage.
Abstract: For error recovery purposes, a backup copy of only a portion of the cache directory is maintained in non-volatile storage. Because only a portion of the cache directory is backed up, a savings in storage space is realized. The partial copy includes an indication of the storage locations on a storage device for which data is in the cache, and an indication of the state of the data, i.e., whether the data is in process of being read from the cache by the processor, is in process of being written to the cache by the processor, is in the process of being destaged from the cache to the storage device, or none of the above. Only certain changes to the state of the cache cause a backup copy of a portion of the cache directory to be saved; other changes to the state of the cache do not cause the portion of the cache directory to be saved in non-volatile storage. This saves processing time by limiting the number of times that data is copied to non-volatile storage.

44 citations
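
A minimal sketch of the partial-directory backup described above. The state names follow the abstract, but which state transitions actually trigger a copy to non-volatile storage is an assumption made here for illustration.

from enum import Enum

class State(Enum):
    NONE = 0        # data cached, no operation in progress
    READING = 1     # processor read from the cache in progress
    WRITING = 2     # processor write to the cache in progress
    DESTAGING = 3   # destage from the cache to the storage device in progress

# Assumed policy: only these transitions are mirrored to non-volatile storage.
BACKUP_TRIGGERS = {State.WRITING, State.DESTAGING}

class CacheDirectory:
    def __init__(self):
        self.entries = {}   # storage-device location -> State (full directory)
        self.nvs_copy = {}  # partial backup held in non-volatile storage

    def set_state(self, location, state):
        self.entries[location] = state
        # Only certain state changes cause the directory entry to be backed
        # up, which limits how often data is copied to non-volatile storage.
        if state in BACKUP_TRIGGERS:
            self.nvs_copy[location] = state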

Patent
Shisei Fujiwara, Masabumi Shibata, Atsushi Nakajima, Naoki Hamanaka, Naohiko Irie
28 Nov 1997
TL;DR: In this paper, the memory module is provided with a unit for returning a write completion acknowledgement (WRITE_ACK) to a write-requesting processor module in a bus- or switch-coupled system having a plurality of processor modules and a memory module.
Abstract: In a bus- or switch-coupled system having a plurality of processor modules and a memory module, the memory module is provided with a unit for returning a write completion acknowledgement (WRITE_ACK) to a write-requesting processor module. If a processor module PM1 is executing a write-back of a cache line when a cache coherence check (CCC) issued by a processor module that missed on that cache line arrives, an “INVALID” signal is returned to the CCC-issuing processor module PM0 after the write completion acknowledgement from the memory module is confirmed and the cache line is invalidated. After confirming the “INVALID” signals from the other processor modules, the CCC-issuing processor module issues a READ transaction to the memory module to obtain the correct latest data, reflecting the write-back data of the processor module.

44 citations
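
The WRITE_ACK / INVALID handshake described above can be sketched as follows. This is a heavily simplified, single-threaded illustration: timing, queuing, and the bus/switch fabric are not modeled, and the class and method names are inventions for this sketch rather than terms from the patent.

class Memory:
    def __init__(self):
        self.lines = {}

    def write_back(self, addr, data):
        self.lines[addr] = data
        return "WRITE_ACK"          # completion acknowledgement to the writer

    def read(self, addr):
        return self.lines.get(addr)


class ProcessorModule:
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}             # addr -> data held locally

    def handle_ccc(self, addr):
        """Respond to a cache coherence check for addr."""
        if addr in self.cache:
            data = self.cache.pop(addr)
            # Treat any cached copy as one being written back: wait for the
            # memory module's completion acknowledgement before answering,
            # so the requester's later READ sees the written-back data.
            assert self.memory.write_back(addr, data) == "WRITE_ACK"
        return "INVALID"            # line invalidated (or was never present)

    def read_miss(self, addr, others):
        # Confirm INVALID from every other processor module, then issue the
        # READ transaction to obtain the latest data.
        assert all(pm.handle_ccc(addr) == "INVALID" for pm in others)
        self.cache[addr] = self.memory.read(addr)
        return self.cache[addr]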


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18