Open Access Proceedings Article
A Design Frame for Hybrid Access Caches
K. B. Theobald, H. H. J. Hum, Guang R. Gao +2 more
- pp 144-153
About:
This article is published in High-Performance Computer Architecture. The article was published on 1995-01-22 and is currently open access. It has received 8 citations to date. The article focuses on the topic: Frame (networking).
Citations
Proceedings ArticleDOI
A data cache with multiple caching strategies tuned to different types of locality
Proceedings ArticleDOI
The Difference-Bit Cache
TL;DR: The difference-bit cache is a two-way set-associative cache with an access time that is smaller than that of a conventional one and close or equal to that of a direct-mapped cache.
Journal ArticleDOI
Partitioned instruction cache architecture for energy efficiency
TL;DR: The proposed subcache architecture employs a page-based placement strategy, a dynamic page remapping policy, and a subcache prediction policy in order to improve the memory system energy behavior, especially on-chip cache energy.
Journal Article
A power efficient cache structure for embedded processors based on the dual cache structure
TL;DR: The cooperative cache system is adopted as the cache structure for the CalmRISC-32 embedded processor, to be manufactured by Samsung Electronics Co. in 0.25 µm technology.
Book ChapterDOI
A Power Efficient Cache Structure for Embedded Processors Based on the Dual Cache Structure
TL;DR: In this paper, a cooperative cache system consisting of two caches, i.e., a direct-mapped temporal oriented cache and a four-way set-associative spatial oriented cache, is proposed.
References
Book
Computer Architecture: A Quantitative Approach
TL;DR: This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today.
Proceedings ArticleDOI
Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers
TL;DR: In this article, a hardware technique to improve the performance of caches is presented: a small fully-associative cache is placed between a cache and its refill path, and prefetched data is placed in this buffer rather than in the cache itself.
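The victim-buffer idea summarized above can be sketched in simulation. The sketch below is illustrative only: the cache geometry (64 direct-mapped lines, a 4-entry fully associative buffer), the address split, and all names are assumptions, not the paper's actual design.

```python
from collections import OrderedDict

class VictimCache:
    """Direct-mapped cache backed by a small fully associative victim buffer.

    Illustrative sketch: sizes and the index/tag split are arbitrary.
    """

    def __init__(self, num_lines=64, victim_entries=4):
        self.num_lines = num_lines
        self.lines = [None] * num_lines   # tag held by each direct-mapped line
        self.victims = OrderedDict()      # (tag, index) keys in LRU order
        self.victim_entries = victim_entries

    def access(self, addr):
        index = addr % self.num_lines
        tag = addr // self.num_lines
        if self.lines[index] == tag:
            return "hit"
        if (tag, index) in self.victims:
            # Hit in the victim buffer: swap it with the conflicting line.
            self.victims.pop((tag, index))
            if self.lines[index] is not None:
                self._insert_victim((self.lines[index], index))
            self.lines[index] = tag
            return "victim-hit"
        # Miss: refill, pushing the displaced line into the victim buffer.
        if self.lines[index] is not None:
            self._insert_victim((self.lines[index], index))
        self.lines[index] = tag
        return "miss"

    def _insert_victim(self, key):
        self.victims[key] = True
        if len(self.victims) > self.victim_entries:
            self.victims.popitem(last=False)  # evict least-recently-used victim
```

Two addresses that collide in the direct-mapped array (e.g. 3 and 67 with 64 lines) ping-pong between the array and the victim buffer instead of missing on every alternate access.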
Proceedings ArticleDOI
Column-associative caches: a technique for reducing the miss rate of direct-mapped caches
Anant Agarwal,Stephen D. Pudar +1 more
TL;DR: This paper describes the design of column-associative caches, which minimize the conflicts that arise in direct-mapped accesses by allowing conflicting addresses to dynamically choose alternate hashing functions, so that most of the conflicting data can reside in the cache.
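The alternate-hash lookup described in this summary can be sketched as follows. Assumptions to note: the rehash function (flipping the high-order index bit) matches the common description of this scheme, but the sizes are arbitrary, and the sketch stores full block addresses per line instead of the paper's tag-plus-rehash-bit bookkeeping.

```python
class ColumnAssociativeCache:
    """Sketch of a column-associative lookup over a direct-mapped array.

    Illustrative only: each slot holds a full block address rather than a
    tag with a rehash bit, which keeps the matching logic trivially correct.
    """

    def __init__(self, num_lines=8):
        assert num_lines & (num_lines - 1) == 0, "power-of-two lines assumed"
        self.num_lines = num_lines
        self.slots = [None] * num_lines  # full block address per line

    def _rehash(self, index):
        # Alternate hash: flip the high-order bit of the index.
        return index ^ (self.num_lines >> 1)

    def access(self, addr):
        index = addr % self.num_lines
        if self.slots[index] == addr:
            return "first-hit"
        alt = self._rehash(index)
        if self.slots[alt] == addr:
            # Second-probe hit: swap so the line is found first next time.
            self.slots[index], self.slots[alt] = self.slots[alt], self.slots[index]
            return "rehash-hit"
        # Miss: install at the primary slot, demoting its occupant to the
        # alternate slot (a simplified replacement policy).
        self.slots[index], self.slots[alt] = addr, self.slots[index]
        return "miss"
```

Two addresses that map to the same primary index (e.g. 1 and 9 with 8 lines) can co-reside, with the second probe and swap standing in for the second way of a set-associative cache.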
Journal ArticleDOI
A case for direct-mapped caches
TL;DR: Direct-mapped caches are defined, and it is shown that trends toward larger cache sizes and faster hit times favor their use.
Journal ArticleDOI
Cache performance of operating system and multiprogramming workloads
TL;DR: A program tracing technique called ATUM (Address Tracing Using Microcode) is developed that captures realistic traces of multitasking workloads including the operating system that shows that both the operating System and multiprogramming activity significantly degrade cache performance, with an even greater proportional impact on large caches.