Open Access Proceedings ArticleDOI

Energy efficient Frequent Value data Cache design

TLDR
This paper proposes the design of the Frequent Value Cache (FVC), a cache in which storing a frequent value requires only a few bits because frequent values are stored in encoded form, while all other values are stored in unencoded form using 32 bits.
Abstract
Recent work has shown that a small number of distinct, frequently occurring values often account for a large portion of memory accesses. In this paper we demonstrate how this frequent value phenomenon can be exploited in designing a cache that trades off performance for energy efficiency. We propose the design of the Frequent Value Cache (FVC), in which storing a frequent value requires only a few bits because frequent values are stored in encoded form, while all other values are stored in unencoded form using 32 bits. The data array is partitioned into two arrays such that if a frequent value is accessed, only the first data array is accessed; otherwise an additional cycle is needed to access the second data array. Experiments with some of the SPEC95 benchmarks show that, on average, a 64 KB/64-value FVC provides a 28.8% reduction in L1 cache energy and a 3.38% increase in execution time over a conventional 64 KB cache.
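The encoding scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frequent-value table contents, function names, and tag representation are all assumptions; with a 64-entry table, a frequent value needs only a 6-bit index (plus a flag) instead of 32 bits.

```python
import math

# Hypothetical 64-entry frequent-value table; a real FVC would populate
# this from profiling the workload. Placeholder values 0..63 are used here.
FREQ_TABLE = list(range(64))

INDEX_BITS = int(math.log2(len(FREQ_TABLE)))  # 6 bits suffice for 64 values

def fvc_store(value):
    """Encode a 32-bit word for storage in the FVC.

    Frequent values are stored as a short index into FREQ_TABLE (held in
    the first data array, accessible in one cycle); all other values keep
    their full 32-bit form (second data array, one extra access cycle).
    """
    if value in FREQ_TABLE:
        return ('encoded', FREQ_TABLE.index(value))
    return ('raw', value & 0xFFFFFFFF)

def fvc_load(tag, payload):
    """Decode a word previously stored with fvc_store."""
    return FREQ_TABLE[payload] if tag == 'encoded' else payload
```

A load of an encoded word touches only the narrow first array, which is where the energy savings come from; a raw word costs the additional cycle the abstract mentions.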


Citations
Journal ArticleDOI

BugNet: Continuously Recording Program Execution for Deterministic Replay Debugging

TL;DR: The proposed BugNet architecture can replay an application's execution across context switches and interrupts, obviating the need to track program I/O, interrupts, and DMA transfers, which would otherwise require more complex hardware support.
Proceedings ArticleDOI

A highly configurable cache architecture for embedded systems

TL;DR: This work introduces a novel cache architecture for embedded microprocessor platforms that can be configured by software to be direct-mapped, two-way, or four-way set associative using a technique the authors call way concatenation, with very little size or performance overhead.

Frequent Pattern Compression: A Significance-Based Compression Scheme for L2 Caches

TL;DR: This work proposes and evaluates a simple significance-based compression scheme that has a low compression and decompression overhead and provides comparable compression ratios to more complex schemes that have higher cache hit latencies.
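The significance-based idea behind schemes like the one summarized above can be sketched as classifying each 32-bit word by how many of its bits are actually significant. The pattern set and prefix widths below are illustrative assumptions, not the cited paper's exact encoding.

```python
def significance_encode(word):
    """Classify a 32-bit word into a significance pattern.

    Words whose upper bits are just sign extension can be stored with a
    short prefix plus only their significant low-order bits.
    """
    w = word & 0xFFFFFFFF
    signed = w - (1 << 32) if w & 0x80000000 else w  # interpret as two's complement
    if w == 0:
        return ('zero', 0)                    # prefix only, no data bits
    if -128 <= signed <= 127:
        return ('se_byte', signed & 0xFF)     # prefix + 8 data bits
    if -32768 <= signed <= 32767:
        return ('se_half', signed & 0xFFFF)   # prefix + 16 data bits
    return ('uncompressed', w)                # prefix + full 32 bits
```

Decompression reverses the classification with low overhead, which is the property the TL;DR highlights: simple patterns keep hit latency close to an uncompressed cache.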
Book

Computer Architecture Techniques for Power-Efficiency

TL;DR: This book aims to document some of the most important architectural techniques that were invented, proposed, and applied to reduce both dynamic power and static power dissipation in processors and memory hierarchies by focusing on their common characteristics.
Journal ArticleDOI

A Survey of Architectural Techniques For Improving Cache Power Efficiency

TL;DR: The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches.
References
Proceedings ArticleDOI

Wattch: a framework for architectural-level power analysis and optimizations

TL;DR: Wattch is presented, a framework for analyzing and optimizing microprocessor power dissipation at the architecture-level and opens up the field of power-efficient computing to a wider range of researchers by providing a power evaluation methodology within the portable and familiar SimpleScalar framework.
Proceedings ArticleDOI

Selective cache ways: on-demand cache resource allocation

TL;DR: In this paper, a small performance degradation is traded for energy savings by allocating cache ways on demand, and this tradeoff can produce a significant reduction in cache energy dissipation.
Proceedings ArticleDOI

Gated-V/sub dd/: a circuit technique to reduce leakage in deep-submicron cache memories

TL;DR: Results indicate that gated-Vdd together with a novel resizable cache architecture reduces energy-delay by 62% with minimal impact on performance.
Proceedings ArticleDOI

The filter cache: an energy efficient memory structure

TL;DR: This work proposes to trade performance for power consumption by filtering cache references through an unusually small L1 cache; experimental results across a wide range of embedded applications show that the filter cache improves memory system energy efficiency.
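The filter-cache idea summarized above can be modeled as a tiny direct-mapped cache placed in front of L1: hits are served by the small, low-energy array, while misses pay an extra cycle to fill from L1. This toy model is an assumption-laden sketch (sizes and class name are illustrative, not from the cited paper).

```python
class FilterCache:
    """Toy direct-mapped filter cache sitting in front of L1."""

    def __init__(self, lines=16, line_bytes=16):
        self.line_bytes = line_bytes
        self.tags = [None] * lines   # one tag per direct-mapped line
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        """Return True on a filter-cache hit (low-energy access),
        False on a miss (fill from L1, extra cycle and energy)."""
        line = addr // self.line_bytes
        idx = line % len(self.tags)
        if self.tags[idx] == line:
            self.hits += 1
            return True
        self.tags[idx] = line        # fill the line from L1
        self.misses += 1
        return False
```

Because embedded workloads have strong spatial and temporal locality, most references hit in the tiny array, which is why the filter cache saves energy despite its small size.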
Proceedings ArticleDOI

Reconfigurable caches and their application to media processing

TL;DR: A new reconfigurable cache design is proposed that enables the cache SRAM arrays to be dynamically divided into multiple partitions that can be used for different processor activities.