Topic
Smart Cache
About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.
Papers published on a yearly basis
Papers
18 Dec 2000
TL;DR: In this paper, an access request associated with a cache miss to a single cache line having a pending cache fill can be handled in a non-blocking manner by storing the cache miss in a retry queue while the cache fill is pending.
Abstract: In accordance with one aspect of the present invention, an access request associated with a cache miss to a single cache line having a pending cache fill can be handled in a non-blocking manner by storing the cache miss in a retry queue while the cache fill is pending. The retry queue then detects the return of the cache fill and inserts the access request associated with the cache miss onto the cache pipeline for processing.
44 citations
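The retry-queue mechanism described above can be sketched in a few lines of Python. This is a toy model under assumed semantics (one outstanding fill per line, addresses as plain dictionary keys), not the patent's actual hardware design; all class and method names are illustrative.

```python
from collections import deque

class NonBlockingCache:
    def __init__(self):
        self.lines = {}             # address -> data for filled lines
        self.pending_fills = set()  # addresses with an outstanding fill
        self.retry_queue = deque()  # parked requests awaiting their fill

    def access(self, addr):
        if addr in self.lines:
            return ("hit", self.lines[addr])
        if addr in self.pending_fills:
            # Miss to a line already being filled: park the request
            # in the retry queue instead of blocking the pipeline.
            self.retry_queue.append(addr)
            return ("queued", None)
        # Primary miss: record that a fill is now pending for this line.
        self.pending_fills.add(addr)
        return ("miss", None)

    def fill_complete(self, addr, data):
        # The fill returns: install the line, then replay any parked
        # requests for it through the normal access path.
        self.pending_fills.discard(addr)
        self.lines[addr] = data
        replayed = [a for a in self.retry_queue if a == addr]
        self.retry_queue = deque(a for a in self.retry_queue if a != addr)
        return [self.access(a) for a in replayed]
```

The key property the sketch preserves is that a secondary miss to a line with a pending fill never stalls; it is reinserted into the access path only after the fill returns.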
20 Jun 2009
TL;DR: This paper proposes the Mergeable cache architecture that detects data similarities and merges cache blocks, resulting in substantial savings in cache storage requirements, which leads to reductions in off-chip memory accesses and overall power usage, and increases in application performance.
Abstract: While microprocessor designers turn to multicore architectures to sustain performance expectations, the dramatic increase in parallelism of such architectures will put substantial demands on off-chip bandwidth and make the memory wall more significant than ever. This paper demonstrates that one profitable application of multicore processors is the execution of many similar instantiations of the same program. We identify that this model of execution is used in several practical scenarios and term it "multi-execution." Often, each such instance utilizes very similar data. In conventional cache hierarchies, each instance would cache its own data independently. We propose the Mergeable cache architecture that detects data similarities and merges cache blocks, resulting in substantial savings in cache storage requirements. This leads to reductions in off-chip memory accesses and overall power usage, and increases in application performance. We present cycle-accurate simulation results of 8 benchmarks (6 from SPEC2000) to demonstrate that our technique provides a scalable solution and leads to significant speedups due to reductions in main memory accesses. For 8 cores running 8 similar executions of the same application and sharing an exclusive 4-MB, 8-way L2 cache, the Mergeable cache shows a speedup in execution of 2.5x on average (ranging from 0.93x to 6.92x), while incurring overheads of only 4.28% in cache area and 5.21% in power.
44 citations
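The core merging idea, deduplicating cache blocks whose contents are identical across instances, can be illustrated with a small content-indexed model. This is a sketch under assumed simplifications (exact byte-for-byte matching, no eviction), and the names are illustrative rather than the paper's actual structures.

```python
class MergeableCache:
    def __init__(self):
        self.store = {}   # block contents -> shared block id
        self.tags = {}    # (core, addr) -> shared block id
        self.next_id = 0

    def insert(self, core, addr, data):
        """Insert a block; return True if it merged with an existing copy."""
        if data in self.store:
            # Identical content already cached by some instance:
            # point this core's tag at the shared copy instead of
            # allocating new storage.
            self.tags[(core, addr)] = self.store[data]
            return True
        self.store[data] = self.next_id
        self.tags[(core, addr)] = self.next_id
        self.next_id += 1
        return False

    def unique_blocks(self):
        # Storage actually consumed: one slot per distinct content.
        return len(self.store)
```

With 8 cores caching the same data, the model stores one physical copy instead of eight, which is the source of the capacity and bandwidth savings the abstract reports.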
31 Aug 2006
TL;DR: In this article, a method was described for incrementing a counter value associated with a cache line when the cache line is inserted into a first-level cache and, after eviction from the first-level cache, storing the line into either a second-level cache coupled to the first-level cache or a third-level cache coupled to the second-level cache, based on the counter value.
Abstract: In one embodiment, the present invention includes a method for incrementing a counter value associated with a cache line if the cache line is inserted into a first level cache, and storing the cache line into a second level cache coupled to the first level cache or a third level cache coupled to the second level cache based on the counter value, after eviction from the first level cache. Other embodiments are described and claimed.
44 citations
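The counter-guided placement can be modeled compactly: the counter is bumped on each L1 insertion, and its value at L1 eviction selects the destination tier. This is a minimal sketch; the threshold and the set-based tiers are assumptions for illustration, not the patent's design.

```python
class CounterGuidedHierarchy:
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.counters = {}    # per-line insertion counter
        self.l1, self.l2, self.l3 = set(), set(), set()

    def insert_l1(self, addr):
        # The counter is incremented each time the line enters L1.
        self.counters[addr] = self.counters.get(addr, 0) + 1
        self.l1.add(addr)

    def evict_l1(self, addr):
        self.l1.discard(addr)
        if self.counters.get(addr, 0) >= self.threshold:
            self.l2.add(addr)   # reinserted often: keep it close, in L2
        else:
            self.l3.add(addr)   # cold line: demote straight to L3
```

The effect is that lines with little reuse bypass L2 entirely, leaving its capacity for lines that keep returning to L1.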
IBM
TL;DR: In this paper, a technique for creating and using a structured cache to increase the efficiency of reading persistent objects from a database is presented; the structured cache comprises an object cache, an associations cache, and a data cache.
Abstract: A technique for creating and using a structured cache to increase the efficiency of reading persistent objects from a database. The structured cache comprises an object cache, an associations cache, and a data cache. Data read-ahead is used to retrieve rows from a relational database in advance of an application's need for the data. Entries are created in the data cache and association cache as the rows are processed. The data cache stores data in unstructured binary format, delaying the expense of instantiation until an object is requested by the application. At that time, data is retrieved from the data cache, an object is instantiated from the data, and an entry is created in the object cache. This approach also saves storage space that would be wasted if objects were instantiated upon retrieval but never used. The association cache stores members of an association, organized by member key within owner key for each association. According to the preferred embodiment, maintaining cache consistency is not required, further increasing the efficiency gains that can be realized using this technique.
44 citations
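The three-part structure and the lazy-instantiation trade-off described above can be sketched as follows. The row format and class names here are illustrative assumptions; the point of the sketch is that instantiation cost is paid only for objects the application actually requests.

```python
class StructuredCache:
    def __init__(self):
        self.data_cache = {}    # key -> raw row (uninstantiated)
        self.object_cache = {}  # key -> instantiated object
        self.assoc_cache = {}   # owner key -> [member keys]
        self.instantiations = 0

    def prefetch(self, key, raw_row, owner=None):
        # Read-ahead path: store the raw row cheaply, and record the
        # association membership, without building any object yet.
        self.data_cache[key] = raw_row
        if owner is not None:
            self.assoc_cache.setdefault(owner, []).append(key)

    def get(self, key):
        if key in self.object_cache:     # already instantiated
            return self.object_cache[key]
        raw = self.data_cache[key]
        # Instantiate on demand (here: parse "field=value;..." rows
        # into a dict as a stand-in for real object construction).
        obj = dict(field.split("=") for field in raw.split(";"))
        self.instantiations += 1
        self.object_cache[key] = obj
        return obj
```

Prefetching ten rows but requesting one object incurs one instantiation, which is the storage and CPU saving the abstract describes for objects retrieved but never used.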
13 May 2009
TL;DR: In this article, data storage systems with multiple caches are discussed, with the focus on managing cache activity across those caches rather than on the storage system itself.
Abstract: The disclosure is related to data storage systems having multiple caches and to management of cache activity in such systems. In a particular embodiment, a data storage device includes a volatile memory having a first read cache and a first write cache, a non-volatile memory having a second read cache and a second write cache, and a controller coupled to the volatile memory and the non-volatile memory. The controller can be configured to selectively transfer read data from the first read cache to the second read cache based on a least-recently-used indicator of the read data, and to selectively transfer write data from the first write cache to the second write cache based on a least-recently-written indicator of the write data.
43 citations
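The read-side transfer policy can be sketched as a two-tier cache in which the volatile tier demotes its least-recently-used entry into the non-volatile tier rather than discarding it (the write side would mirror this with a least-recently-written policy). Capacities and names are illustrative assumptions, not the patent's design.

```python
from collections import OrderedDict

class TieredReadCache:
    def __init__(self, l1_capacity=2):
        self.cap = l1_capacity
        self.l1 = OrderedDict()   # volatile tier, ordered oldest -> newest
        self.l2 = {}              # non-volatile tier

    def read(self, addr, loader):
        if addr in self.l1:
            self.l1.move_to_end(addr)   # refresh recency on a hit
            return self.l1[addr]
        # Promote from the non-volatile tier, or load from backing store.
        data = self.l2.pop(addr) if addr in self.l2 else loader(addr)
        self.l1[addr] = data
        if len(self.l1) > self.cap:
            # Demote the least-recently-used entry instead of discarding
            # it, so a later read can still hit in the non-volatile tier.
            victim, vdata = self.l1.popitem(last=False)
            self.l2[victim] = vdata
        return data
```

A demoted entry that is read again is promoted back to the volatile tier without touching the backing store, which is what makes the second tier worthwhile.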