Topic

Smart Cache

About: Smart Cache is a research topic. Over the lifetime of the topic, 7,680 publications have been published, receiving 180,618 citations.


Papers
Patent
27 Dec 2004
TL;DR: In this patent, a system and method for the design and operation of a distributed shared cache in a multi-core processor are described; a cache line brought in from memory may initially be placed into a cache molecule that is not closest to the requesting processor core.
Abstract: A system and method for the design and operation of a distributed shared cache in a multi-core processor is disclosed. In one embodiment, the shared cache may be distributed among multiple cache molecules. Each of the cache molecules may be closest, in terms of access latency, to one of the processor cores. In one embodiment, a cache line brought in from memory may initially be placed into a cache molecule that is not closest to the requesting processor core. When the requesting processor core makes repeated accesses to that cache line, the line may be moved either between cache molecules or within a cache molecule. Because cache lines can move within the cache, various embodiments may use special search methods to locate a particular cache line.
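As a rough sketch of the migration idea described above (not the patent's actual design), the following Python example promotes a cache line one molecule closer to the core after repeated accesses; the class name, threshold, and dictionary-based lookup are all illustrative assumptions:

```python
PROMOTE_AFTER = 3  # assumed: accesses before a line moves one molecule closer

class DistributedCache:
    def __init__(self, num_molecules):
        # molecules[0] is closest to the core; each maps address -> data
        self.molecules = [dict() for _ in range(num_molecules)]
        self.hits = {}  # address -> access count since last move

    def insert(self, addr, data):
        # New lines start in the farthest molecule, as in the abstract.
        self.molecules[-1][addr] = data
        self.hits[addr] = 0

    def access(self, addr):
        # Search every molecule, since lines can move within the cache.
        for i, mol in enumerate(self.molecules):
            if addr in mol:
                self.hits[addr] += 1
                data = mol[addr]
                if i > 0 and self.hits[addr] >= PROMOTE_AFTER:
                    # Promote the line one molecule closer to the core.
                    del mol[addr]
                    self.molecules[i - 1][addr] = data
                    self.hits[addr] = 0
                return data
        return None  # miss: the line would be fetched from memory

cache = DistributedCache(num_molecules=3)
cache.insert(0x1000, b"line")
for _ in range(4):
    cache.access(0x1000)  # repeated accesses migrate the line closer
```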

53 citations

Journal ArticleDOI
TL;DR: This article revisits the performance and overhead of semantic caching using a modern database server and modern hardware, and demonstrates that semantic caching works well in a range of applications, especially in network-constrained environments.
Abstract: The emergence of query-based online data services and e-commerce applications has prompted much recent research on data caching. This article studies semantic caching, a caching architecture for such applications that caches the results of selection queries. The primary contribution of this article is to revisit the performance and overhead of semantic caching using a modern database server and modern hardware. Initially, the performance study focuses on simple workloads and demonstrates several benefits of semantic caching, including low overhead, insensitivity to the physical layout of the database, reduced network traffic, and the ability to answer some queries without contacting the server. With moderately complex workloads, careful coding of remainder queries is required to maintain efficient query processing at the server. Using very complex workloads, we demonstrate that semantic caching works well in a range of applications, especially in network-constrained environments.
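To make the probe/remainder decomposition concrete, here is a minimal Python sketch for one-dimensional range predicates (e.g., WHERE x BETWEEN lo AND hi); the interval arithmetic and names are assumptions standing in for the general selection-query case the article studies:

```python
class SemanticCache:
    def __init__(self):
        self.regions = []  # (lo, hi) intervals whose results are cached

    def answer(self, lo, hi):
        """Split query [lo, hi] into cached parts and remainder parts."""
        cached, remainder, cursor = [], [], lo
        for rlo, rhi in sorted(self.regions):
            if rhi < cursor or rlo > hi:
                continue  # no overlap with the outstanding gap
            if rlo > cursor:
                remainder.append((cursor, rlo))  # gap must go to the server
            cached.append((max(rlo, cursor), min(rhi, hi)))
            cursor = max(cursor, rhi)
        if cursor < hi:
            remainder.append((cursor, hi))  # trailing gap
        return cached, remainder

    def add(self, lo, hi):
        self.regions.append((lo, hi))

cache = SemanticCache()
cache.add(10, 50)
# Query [0, 60]: [10, 50] is answered locally; the remainder query
# fetches [0, 10] and [50, 60] from the server.
print(cache.answer(0, 60))
```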

53 citations

Patent
25 Apr 2003
TL;DR: In this patent, the L2 cache can be forced to victimize cache lines by setting the tag bits for those lines to a value that misses in the L2 cache (e.g., cache-inhibited space).
Abstract: A method of reducing errors in a cache memory of a computer system (e.g., an L2 cache) by periodically issuing a series of purge commands to the L2 cache, sequentially flushing cache lines from the L2 cache to an L3 cache in response to the purge commands, and correcting single-bit errors in the cache lines as they are flushed to the L3 cache. Purge commands are issued only when the processor cores associated with the L2 cache have an idle cycle available in a store pipe to the cache. The flush rate of the purge commands can be set programmably, and the purge mechanism can be implemented either in software running on the computer system or in hardware integrated with the L2 cache. In the software case, the purge mechanism can be incorporated into the operating system. In the hardware case, a purge engine can be provided that advantageously utilizes the store pipe between the L1 and L2 caches. The L2 cache can be forced to victimize cache lines by setting the tag bits for those lines to a value that misses in the L2 cache (e.g., cache-inhibited space). With the eviction mechanism of the cache placed in direct-mapped mode, the address misses result in eviction of the cache lines, thereby flushing them to the L3 cache.
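A simplified Python sketch of the purge/scrub loop may help: one line is flushed per idle store-pipe cycle, with errors corrected on the way to L3. The idle check and ECC routine are placeholders, not the patented hardware:

```python
import random

L2_LINES = 8  # assumed toy cache size

def store_pipe_idle():
    # Placeholder: a real purge engine issues commands only when the
    # processor cores leave an idle cycle in the store pipe.
    return random.random() < 0.5

def correct_single_bit(line):
    # Placeholder for ECC correction; real hardware would use the
    # line's check bits to locate and flip the erroneous bit.
    return line

def purge_cycle(l2, l3, cursor):
    """Flush the line at `cursor` from L2 to L3 if the store pipe is idle."""
    if not store_pipe_idle():
        return cursor  # retry the same line on the next cycle
    line = l2[cursor]
    if line is not None:
        l3[cursor] = correct_single_bit(line)  # scrub while flushing
        l2[cursor] = None                      # the line is victimized
    return (cursor + 1) % L2_LINES  # sequential sweep wraps around

l2 = [0xA0 + i for i in range(L2_LINES)]
l3 = [None] * L2_LINES
cursor = 0
for _ in range(32):  # drive several purge cycles
    cursor = purge_cycle(l2, l3, cursor)
```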

53 citations

Patent
Steven Shultz, Xenia Tkatschow
17 Dec 2004
TL;DR: In this patent, a system, computer program product, and method for managing the cache of a virtual machine are described; the identity of the cache is recorded in storage, such that if the virtual machine subsequently resumes operating, it can access the cache and its contents.
Abstract: A system, computer program product and method for managing a cache of a virtual machine. A cache is defined in memory, and a virtual machine is assigned to the cache. An identity of the cache is recorded in storage. The virtual machine terminates, and the cache and contents of the cache are preserved despite the termination of the virtual machine, such that if the virtual machine subsequently resumes operating, the virtual machine can access the cache and its contents. There is also a system, method and computer program product for managing a cache of an LPAR. A cache is defined in memory, and assigned to an LPAR. A record is made of an identity of the cache in storage. The LPAR terminates, and the cache and contents of the cache are preserved despite the termination of the LPAR, such that if the LPAR subsequently resumes operating, the LPAR can access the cache and its contents.
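A minimal sketch of the preservation mechanism, assuming a persistent registry that maps a virtual machine's name to its cache identity (all names and interfaces here are hypothetical, not the patent's):

```python
registry = {}   # persistent storage: vm_name -> cache_id
caches = {}     # memory: cache_id -> cache contents

def assign_cache(vm_name, cache_id):
    caches.setdefault(cache_id, {})
    registry[vm_name] = cache_id  # record the cache identity in storage

def terminate_vm(vm_name):
    # The VM goes away, but the cache and the registry entry are
    # deliberately preserved rather than torn down.
    pass

def resume_vm(vm_name):
    # The resuming VM looks up its recorded cache identity and
    # reattaches to the preserved contents.
    return caches[registry[vm_name]]

assign_cache("vm1", cache_id=42)
caches[42]["page0"] = b"data"
terminate_vm("vm1")
assert resume_vm("vm1")["page0"] == b"data"  # contents survived
```

The same pattern applies unchanged to an LPAR in place of a virtual machine, as the abstract's second half describes.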

53 citations

Patent
25 Jun 2002
TL;DR: This patent discloses a new architecture for synchronizing data between different clients using a central synchronization server linked to a Back End data store; the server additionally provides a cache that permanently buffers incoming updates into a permanent store by assigning a unique cache identifier (ID).
Abstract: The present invention discloses a new synchronization architecture for synchronizing data between different clients using a central synchronization server linked to a Back End data store, which additionally provides a cache for permanently buffering incoming updates into a permanent store by assigning a unique cache identifier (ID). Write conflicts between the synchronization server writing new entries to the cache and updates replicated from the backend to the cache are resolved using a blocking mechanism based on the cache IDs: backend updates are blocked as long as incoming updates from clients with the same cache ID have not been completely written into the cache during a synchronization session. The present invention is particularly suited to a synchronization architecture with a large number of clients connected to the central synchronization server, since blocking of the Back End data store, as well as the connection and transport to it, is minimized.
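A small Python sketch of the cache-ID blocking idea, assuming one lock per cache ID; the locking granularity and function names are illustrative, not the patent's design:

```python
import threading
from collections import defaultdict

cache = {}                               # (cache_id, key) -> value
id_locks = defaultdict(threading.Lock)   # one lock per cache ID

def sync_session_write(cache_id, entries):
    # Client updates hold the lock for the whole session, blocking
    # backend replication for that cache ID in the meantime.
    with id_locks[cache_id]:
        for key, value in entries:
            cache[(cache_id, key)] = value

def backend_replicate(cache_id, key, value):
    # Blocks until no sync session is writing under the same cache ID;
    # updates with other cache IDs proceed unhindered.
    with id_locks[cache_id]:
        cache[(cache_id, key)] = value

sync_session_write(7, [("a", 1), ("b", 2)])
backend_replicate(7, "c", 3)
```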

52 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18