Proceedings ArticleDOI

Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers

Norman P. Jouppi
Vol. 18, pp. 364–373
TLDR
In this article, hardware techniques to improve cache performance are presented: a small fully-associative cache is placed between a cache and its refill path, and prefetched data is placed in a separate buffer rather than in the cache itself.
Abstract
Projections of computer technology forecast processors with peak performance of 1,000 MIPS in the relatively near future. These processors could easily lose half or more of their performance in the memory hierarchy if the hierarchy design is based on conventional caching techniques. This paper presents hardware techniques to improve the performance of caches.

Miss caching places a small fully-associative cache between a cache and its refill path. Misses in the cache that hit in the miss cache have only a one-cycle miss penalty, as opposed to a many-cycle miss penalty without the miss cache. Small miss caches of 2 to 5 entries are shown to be very effective in removing mapping conflict misses in first-level direct-mapped caches.

Victim caching is an improvement to miss caching that loads the small fully-associative cache with the victim of a miss rather than the requested line. Small victim caches of 1 to 5 entries are even more effective at removing conflict misses than miss caching.

Stream buffers prefetch cache lines starting at a cache miss address. The prefetched data is placed in the buffer and not in the cache. Stream buffers are useful in removing capacity and compulsory cache misses, as well as some instruction cache conflict misses. Stream buffers are more effective than previously investigated prefetch techniques at using the next slower level in the memory hierarchy when it is pipelined. An extension to the basic stream buffer, called multi-way stream buffers, is introduced. Multi-way stream buffers are useful for prefetching along multiple intertwined data reference streams.

Together, victim caches and stream buffers reduce the miss rate of the first level in the cache hierarchy by a factor of two to three on a set of six large benchmarks.
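The abstract describes victim caching and stream buffers concretely enough to model in software. The sketch below is a minimal illustrative simulation under stated assumptions, not the paper's hardware: the 16-byte line, 64-line direct-mapped L1, four-entry LRU victim cache, single four-deep FIFO stream buffer, and the VictimStreamL1 class with its methods are all hypothetical choices made for this demo.

from collections import OrderedDict, deque

LINE_BYTES = 16   # bytes per cache line (assumed for the demo)
L1_LINES = 64     # direct-mapped L1 capacity in lines (assumed)

class VictimStreamL1:
    """Direct-mapped L1 with a small LRU victim cache and one FIFO stream buffer."""

    def __init__(self, victim_entries=4, stream_depth=4):
        self.l1 = {}                   # set index -> tag
        self.victim = OrderedDict()    # victim line address -> True, in LRU order
        self.victim_entries = victim_entries
        self.stream = deque()          # prefetched line addresses, head first
        self.stream_depth = stream_depth

    def access(self, addr):
        line = addr // LINE_BYTES
        idx, tag = line % L1_LINES, line // L1_LINES
        if self.l1.get(idx) == tag:
            return "hit"
        # Victim caching: lines evicted from L1 wait here, so a conflict miss
        # that hits in the victim cache costs only ~1 extra cycle.
        if line in self.victim:
            del self.victim[line]
            self._fill(idx, tag)
            return "victim hit"
        # Stream buffer: a miss matching the head entry shifts that line into
        # L1 and prefetches one more sequential line at the tail.
        if self.stream and self.stream[0] == line:
            self.stream.popleft()
            self.stream.append(self.stream[-1] + 1 if self.stream else line + 1)
            self._fill(idx, tag)
            return "stream hit"
        # True miss: fetch the line and restart the stream buffer at the
        # successor lines of the miss address.
        self.stream = deque(range(line + 1, line + 1 + self.stream_depth))
        self._fill(idx, tag)
        return "miss"

    def _fill(self, idx, tag):
        old_tag = self.l1.get(idx)
        if old_tag is not None:        # the displaced L1 line becomes the victim
            self.victim[old_tag * L1_LINES + idx] = True
            if len(self.victim) > self.victim_entries:
                self.victim.popitem(last=False)   # drop least-recently-used victim
        self.l1[idx] = tag

cache = VictimStreamL1()
# 0 and 1024 map to the same set of the 64-line L1 and would ping-pong
# without the victim cache; 16 and 32 fall on sequential lines.
for addr in [0, 1024, 0, 1024, 16, 32]:
    print(addr, "->", cache.access(addr))

The driver prints miss, miss, victim hit, victim hit, miss, stream hit: addresses 0 and 1024 conflict in the same L1 set, so the victim cache turns their ping-ponging into near-hits, while the access to line 32 is caught by the stream buffer started on the miss at line 16.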



Citations
Proceedings ArticleDOI

Enabling partial cache line prefetching through data compression

TL;DR: A new prefetching scheme is proposed that improves performance without increasing memory traffic or requiring prefetch buffers; it is observed that a significant percentage of dynamically appearing values exhibit characteristics that enable their compression with a very simple compression scheme.
Journal ArticleDOI

Routing table partitioning for speedy packet lookups in scalable routers

TL;DR: This work introduces and evaluates SPAL, a technique for speedy packet lookups in high-performance routers, and shows that SPAL improves mean lookup performance by a factor of at least 2.5 for a router with three (or 16) LCs when the LR-cache contains 4K blocks.
Journal ArticleDOI

Acceleration of XML Parsing through Prefetching

TL;DR: A study of XML parsing is presented; it determines that memory-side data loading in the parsing stage incurs a performance overhead as significant as the computation itself, so a memory-side acceleration incorporating data prefetching techniques is proposed and applied on top of computation-side acceleration to speed up XML data parsing.

Software and Hardware Techniques for Efficient Polymorphic Calls

Urs Hölzle et al.
TL;DR: In this paper, the authors study techniques to minimize the run-time cost of polymorphic calls in object-oriented code, and show that the use of polymorphism can lead to a more rigid program structure and code duplication, increasing the short-term effort required to build a functional prototype and the long-term effort of maintaining and adapting a program to changing needs.
Journal ArticleDOI

Dynamic class-based queue management for scalable media servers

TL;DR: A dynamic class-based queue management scheme that effectively captures the trade-off between scalability and QoS granularity in a media server is proposed, and its adaptiveness and integration with existing schedulers are examined.
References
Journal ArticleDOI

Cache Memories

TL;DR: Specific aspects of cache memories investigated include: the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, line size, store-through versus copy-back updating of main memory, cold-start versus warm-start miss ratios, multicache consistency, the effect of input/output through the cache, the behavior of split data/instruction caches, and cache size.

Why Aren't Operating Systems Getting Faster As Fast as Hardware?

TL;DR: This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems, and concludes that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware.
Journal ArticleDOI

Available instruction-level parallelism for superscalar and superpipelined machines

TL;DR: A parameterizable code reorganization and simulation system was developed and used to measure instruction-level parallelism; the average degree of superpipelining metric is introduced and is shown to be already high for many machines.
Journal ArticleDOI

Sequential Program Prefetching in Memory Hierarchies

TL;DR: It is shown that prefetching all memory references in very fast computers can increase the effective CPU speed by 10 to 25 percent.
Proceedings ArticleDOI

On the inclusion properties for multi-level cache hierarchies

TL;DR: The inclusion property is essential in reducing the cache coherence complexity for multiprocessors with multilevel cache hierarchies; a new inclusion-coherence mechanism for two-level bus-based architectures is proposed.