scispace - formally typeset

Smart Cache

About: Smart Cache is a research topic. Over the lifetime, 7680 publications have been published within this topic receiving 180618 citations.


Papers
Patent
Bernd Lamberts
14 Sep 2000
TL;DR: A cooperative disk cache management and rotational positioning optimization (RPO) method for a data storage device such as a disk drive makes cache decisions that decrease the total access times for all data.
Abstract: A cooperative disk cache management and rotational positioning optimization (RPO) method for a data storage device, such as a disk drive, makes cache decisions that decrease the total access times for all data. The cache memory provides temporary storage for data either to be written to disk or that has been read from disk. Data access times from cache are significantly lower than data access times from the storage device, and it is advantageous to store in cache data that is likely to be referenced again. For each data block that is a candidate to store in cache, a cost function is calculated and compared with analogous cost functions for data already in cache. The data having the lowest cost function is removed from cache and replaced with data having a higher cost function. The cost function C measures the expected additional cost, in time, of not storing the data in cache, and is given by C = (T_d − T_c)P, where T_d is the disk access time, T_c is the cache access time, and P is an access probability for the data. Access times are calculated according to an RPO algorithm that includes both seek times and rotational latencies.
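The cost-based replacement policy described above can be sketched in a few lines. This is a minimal illustration of the C = (T_d − T_c)P rule, not the patent's actual implementation; the dictionary fields and function names are assumptions made for the sketch.

```python
def cost(disk_time, cache_time, access_prob):
    """Expected extra time of NOT caching: C = (T_d - T_c) * P."""
    return (disk_time - cache_time) * access_prob

def maybe_admit(cache, candidate, capacity):
    """Admit `candidate` if its cost exceeds that of the cheapest resident
    block; evict and return that block, else return None.
    Blocks are dicts with illustrative keys t_disk, t_cache, p."""
    if len(cache) < capacity:
        cache.append(candidate)
        return None
    # Find the resident block whose absence would cost the least.
    victim = min(cache, key=lambda b: cost(b["t_disk"], b["t_cache"], b["p"]))
    if (cost(candidate["t_disk"], candidate["t_cache"], candidate["p"])
            > cost(victim["t_disk"], victim["t_cache"], victim["p"])):
        cache.remove(victim)
        cache.append(candidate)
        return victim
    return None
```

In the patent, T_d itself comes from an RPO model (seek time plus rotational latency), so two blocks with equal access probability can still have different costs depending on where they sit on the platter.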

63 citations

Patent
15 Sep 1987
TL;DR: In this paper, a mechanism for determining when the contents of a block in a cache memory have been rendered stale by DMA activity external to a processor and for marking the block stale in response to a positive determination is proposed.
Abstract: A mechanism for determining when the contents of a block in a cache memory have been rendered stale by DMA activity external to a processor and for marking the block stale in response to a positive determination. The commanding unit in the DMA transfer, prior to transmitting an address, asserts a cache control signal which conditions the processor to receive the address and determine whether there is a correspondence to the contents of the cache. If there is a correspondence, the processor marks the contents of that cache location for which there is a correspondence stale.
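The invalidation protocol can be modeled as a small state machine. The class below is a toy sketch of the idea only (mark a matching cached block stale when an external DMA write hits its address); the class and method names are illustrative, not from the patent.

```python
class SnoopingCache:
    """Toy model: on a DMA transfer, the commanding unit's address is
    checked against the cache, and a matching block is marked stale."""

    def __init__(self):
        self.blocks = {}  # address -> {"data": ..., "stale": bool}

    def fill(self, addr, data):
        """Processor fills a cache block from memory."""
        self.blocks[addr] = {"data": data, "stale": False}

    def dma_write(self, addr, data):
        """External DMA updates memory behind the cache's back; if the
        address corresponds to a cached block, mark that block stale."""
        if addr in self.blocks:
            self.blocks[addr]["stale"] = True

    def read(self, addr):
        """Return cached data, or None on a miss or a stale block
        (forcing a refetch from memory)."""
        entry = self.blocks.get(addr)
        if entry is None or entry["stale"]:
            return None
        return entry["data"]
```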

63 citations

Proceedings ArticleDOI
06 Apr 2008
TL;DR: This work proposes a method to prefetch irregular references accessed through a software cache that is built upon hardware such as Cell, and finds that when applicable, this prefetching can improve the performance of some benchmarks by 2 times on average, and by close to 4 times in the best case.
Abstract: The IBM Single Source Research Compiler for the Cell processor (the SSC Research Compiler) was developed to manage the complexity of programming the heterogeneous multicore Cell processor. The compiler accepts conventional source programs as input, and automatically generates binaries that execute on both the PPU and SPU cores available on a Cell chip. The compiler uses a software cache and direct buffers to manage data in the small local memory of SPUs. However, irregular references, such as a[ind[i]], often become performance bottlenecks. These references are accessed through the software cache, usually with high miss rates. To solve this problem, we propose a method to prefetch irregular references accessed through a software cache that is built upon hardware such as Cell. This method includes code transformation in the compiler and a runtime library component for the software cache. Our design simplifies the synchronization required when prefetching into software cache, overlaps DMA operations for misses, and avoids frequent context switching to the miss handler. It also minimizes the cache pollution caused by prefetching, by looking both forwards and backwards through the sequence of addresses to be prefetched. We evaluated our prefetching method using the NAS benchmarks. We found that when applicable, our prefetching can improve the performance of some benchmarks by 2 times on average, and by close to 4 times in the best case. We also present data to show the impact of different configurations and optimizations when prefetching in a software cache.
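The core transformation can be illustrated abstractly: because the index array ind is known ahead of time, the runtime can issue prefetches for ind[i + d] while servicing ind[i]. The sketch below captures only that idea in plain Python; the SoftwareCache API and the miss-counting are assumptions for illustration, not the compiler's actual runtime interface.

```python
class SoftwareCache:
    """Minimal stand-in for a software-managed cache, counting blocking misses."""

    def __init__(self, line_size=4):
        self.line_size = line_size
        self.lines = set()   # resident cache lines
        self.misses = 0

    def _line(self, idx):
        return idx // self.line_size

    def prefetch(self, idx):
        # In a real runtime this would start a non-blocking DMA;
        # here we simply install the line early.
        self.lines.add(self._line(idx))

    def lookup(self, a, idx):
        if self._line(idx) not in self.lines:
            self.misses += 1          # blocking miss: fetch the line now
            self.lines.add(self._line(idx))
        return a[idx]

def gather_with_prefetch(a, ind, swcache, prefetch_distance=8):
    """Gather a[ind[i]] through the software cache, prefetching
    `prefetch_distance` indices ahead so future misses overlap current work."""
    out = []
    for i in range(len(ind)):
        j = i + prefetch_distance
        if j < len(ind):
            swcache.prefetch(ind[j])
        out.append(swcache.lookup(a, ind[i]))
    return out
```

With a lookahead of d, only roughly the first d irregular accesses take blocking misses; the rest find their lines already installed, which is the overlap the paper exploits via DMA on the SPUs.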

63 citations

Patent
Robert Yung
13 Mar 1996
TL;DR: In this article, a cache structure for a microprocessor which provides set-prediction information for a separate, second-level cache, and a method for improving cache accessing, are provided.
Abstract: A cache structure for a microprocessor which provides set-prediction information for a separate, second-level cache, and a method for improving cache accessing, are provided. In the event of a first-level cache miss, the second-level set-prediction information is used to select the set in an N-way off-chip set-associative cache. This allows a set-associative structure to be used in a second-level cache (on or off chip) without requiring a large number of traces and/or pins. Since set-prediction is used, the subsequent access time for a comparison to determine that the correct set was predicted is not in the critical timing path unless there is a mis-prediction or a miss in the second-level cache. Also, a cache memory can be partitioned into M sets, with M being chosen so that the set size is less than or equal to the page size, allowing a cache access before a TLB translation is done, further speeding the access.
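The lookup flow above can be sketched as a fast path that trusts the prediction and a slow path that searches all sets on a mis-prediction. This is a simplified software model of the idea, assuming a per-address prediction table; the function names and data layout are illustrative, not the patent's hardware design.

```python
def l2_lookup(l2_sets, set_pred, tag_of, addr):
    """Look up `addr` in an N-way set-associative L2 modeled as a list of
    tag sets. Returns (set_index, predicted_correctly); (None, False) on
    a miss. `set_pred` maps addresses to a predicted set index."""
    tag = tag_of(addr)
    predicted = set_pred.get(addr)
    if predicted is not None and tag in l2_sets[predicted]:
        # Fast path: verification of the predicted set succeeds, so the
        # full comparison is off the critical timing path.
        return predicted, True
    # Slow path: mis-prediction or no prediction -- search every set.
    for s, ways in enumerate(l2_sets):
        if tag in ways:
            set_pred[addr] = s   # remember for next time
            return s, False
    return None, False
```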

63 citations

Journal ArticleDOI
TL;DR: An investigation of the various cache schemes that are practical for a minicomputer provides considerable insight into cache organization.
Abstract: An investigation of the various cache schemes that are practical for a minicomputer has been found to provide considerable insight into cache organization. Simulations are used to obtain data on the performance and sensitivity of organizational parameters of various writeback and lookahead schemes. Hardware considerations in the construction of the actual cache-minicomputer are also noted and a simple cost/performance analysis is presented.

63 citations


Network Information
Related Topics (5)
- Cache: 59.1K papers, 976.6K citations (92% related)
- Server: 79.5K papers, 1.4M citations (88% related)
- Scalability: 50.9K papers, 931.6K citations (88% related)
- Network packet: 159.7K papers, 2.2M citations (85% related)
- Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18