Proceedings ArticleDOI

A self-tuning configurable cache

TL;DR
A self-tuning cache is introduced that performs transparent runtime cache tuning, thus relieving the application designer and/or compiler from predetermining an application's cache configuration.
Abstract
The memory hierarchy of a system can consume up to 50% of microprocessor system power. Previous work has shown that tuning a configurable cache to a particular application can reduce memory subsystem energy by 62% on average. We introduce a self-tuning cache that performs transparent runtime cache tuning, thus relieving the application designer and/or compiler from predetermining an application's cache configuration. The self-tuning cache applies tuning at a determined tuning interval. A good interval balances the energy overhead of the tuning process against the energy overhead of running in a sub-optimal cache configuration, which we show wastes much energy. We present a self-tuning cache that dynamically varies the tuning interval, resulting in an average energy reduction of as much as 29%, falling within 13% of an oracle-based optimal method.
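To make the tuning-interval idea concrete, here is a minimal C sketch of one plausible interval-adaptation heuristic: lengthen the interval while the chosen configuration stays stable, shorten it when the configuration changes. The doubling/halving policy, the interval bounds, and the `tune_cache` placeholder are illustrative assumptions, not the paper's exact algorithm.

```c
#include <stdio.h>

/* Hypothetical cache configuration produced by a tuning pass. */
typedef struct {
    int size_kb;        /* total cache size in KB */
    int associativity;  /* number of ways         */
    int line_bytes;     /* line size in bytes     */
} cache_cfg_t;

/* Placeholder for the tuning heuristic: explore candidate configurations
 * and return the lowest-energy one. Here it simply keeps the current one. */
static cache_cfg_t tune_cache(cache_cfg_t current) {
    return current;
}

static int cfg_equal(cache_cfg_t a, cache_cfg_t b) {
    return a.size_kb == b.size_kb &&
           a.associativity == b.associativity &&
           a.line_bytes == b.line_bytes;
}

int main(void) {
    cache_cfg_t cfg = { 8, 4, 32 };                     /* assumed starting point          */
    long interval = 100000;                             /* instructions between tuning passes */
    const long MIN_INTERVAL = 50000, MAX_INTERVAL = 3200000;

    for (int pass = 0; pass < 10; pass++) {
        cache_cfg_t next = tune_cache(cfg);

        /* Stable phase: lengthen the interval to cut tuning overhead.
         * Changed configuration: shorten it to track the new phase sooner. */
        if (cfg_equal(next, cfg)) {
            if (interval * 2 <= MAX_INTERVAL) interval *= 2;
        } else {
            if (interval / 2 >= MIN_INTERVAL) interval /= 2;
        }
        cfg = next;
        printf("pass %d: %d KB, %d-way, interval=%ld\n",
               pass, cfg.size_kb, cfg.associativity, interval);
    }
    return 0;
}
```

The design point this illustrates is the balance named in the abstract: a short interval pays tuning overhead too often, while a long interval risks running in a sub-optimal configuration after a phase change.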



Citations
Proceedings ArticleDOI

Dynamic cache reconfiguration and partitioning for energy optimization in real-time multi-core systems

TL;DR: This paper presents a novel energy optimization technique that employs both dynamic reconfiguration of private caches and partitioning of the shared cache for multicore systems with real-time tasks, achieving 29.29% energy savings on average.
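As a rough illustration of the joint search such a technique implies, the following C sketch pairs candidate per-core private-cache sizes with shared-cache way partitions and keeps the lowest-energy combination in which both tasks meet their deadline. The per-core energy and timing models and all constants are made-up placeholders, not the paper's models.

```c
#include <stdio.h>
#include <float.h>

/* Placeholder per-core models: energy grows with allocated cache, execution
 * time shrinks with it. A real technique would use profiled numbers. */
static double core_energy(int priv_kb, int ways) { return 0.10 * priv_kb + 0.05 * ways; }
static double core_time(int priv_kb, int ways)   { return 10.0 - 0.30 * priv_kb - 0.40 * ways; }

int main(void) {
    const int priv_sizes[] = { 2, 4, 8 };   /* candidate private-cache sizes (KB), assumed */
    const int total_ways   = 8;             /* ways of the shared cache, assumed           */
    const double deadline  = 8.0;           /* per-task real-time deadline, assumed        */

    double best = DBL_MAX;
    int best_p0 = 0, best_p1 = 0, best_w0 = 0;

    /* Jointly choose each core's private configuration and its share of the
     * shared cache; keep the lowest-energy choice that meets both deadlines. */
    for (int i = 0; i < 3; i++)
      for (int j = 0; j < 3; j++)
        for (int w0 = 1; w0 < total_ways; w0++) {
            int w1 = total_ways - w0;
            if (core_time(priv_sizes[i], w0) > deadline) continue;
            if (core_time(priv_sizes[j], w1) > deadline) continue;
            double e = core_energy(priv_sizes[i], w0) + core_energy(priv_sizes[j], w1);
            if (e < best) { best = e; best_p0 = priv_sizes[i]; best_p1 = priv_sizes[j]; best_w0 = w0; }
        }

    printf("core0: %d KB + %d ways, core1: %d KB + %d ways, energy=%.2f\n",
           best_p0, best_w0, best_p1, total_ways - best_w0, best);
    return 0;
}
```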
Journal ArticleDOI

System-Wide Leakage-Aware Energy Minimization Using Dynamic Voltage Scaling and Cache Reconfiguration in Multitasking Systems

TL;DR: This paper efficiently integrates DVS and DCR techniques to make decisions judiciously so that total energy consumption is minimized, and shows that the approach outperforms existing leakage-aware DVS techniques by 47.6% and leakage-oblivious DVS + DCR techniques by up to 23.5%.
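A tiny worked example of why leakage awareness matters when scaling voltage: with an assumed dynamic-energy term proportional to CV² per cycle and a fixed leakage power, lowering voltage cuts switching energy but stretches execution time, so leakage energy grows and the total is minimized at an intermediate operating point. The voltage/frequency pairs and constants below are made up for illustration, not taken from the paper.

```c
#include <stdio.h>

int main(void) {
    const double cycles = 1e9;    /* work to do (assumed)                 */
    const double cap    = 1e-9;   /* switched capacitance per cycle (assumed) */
    const double p_leak = 0.5;    /* leakage power in watts (assumed)     */

    /* Candidate (voltage, frequency) pairs for DVS. */
    const double volt[] = { 1.2, 1.0, 0.8, 0.6 };
    const double freq[] = { 1.0e9, 0.8e9, 0.6e9, 0.4e9 };

    for (int i = 0; i < 4; i++) {
        double t       = cycles / freq[i];                 /* execution time      */
        double dynamic = cap * volt[i] * volt[i] * cycles; /* ~ C * V^2 per cycle */
        double leakage = p_leak * t;                       /* ~ P_leak * t        */
        printf("V=%.1f  t=%.2fs  dyn=%.3fJ  leak=%.3fJ  total=%.3fJ\n",
               volt[i], t, dynamic, leakage, dynamic + leakage);
    }
    return 0;
}
```

With these numbers the total energy bottoms out at the middle voltage rather than the lowest one, which is the effect a leakage-oblivious DVS policy misses.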
Proceedings ArticleDOI

Leakage-Aware Energy Minimization Using Dynamic Voltage Scaling and Cache Reconfiguration in Real-Time Systems

TL;DR: This paper efficiently integrates leakage-aware processor voltage scaling and cache reconfiguration to minimize overall system energy consumption, outperforming existing techniques by 12-23% on average.
Journal ArticleDOI

Cache partitioning for energy-efficient and interference-free embedded multitasking

TL;DR: This work proposes a technique that leverages configurable data caches to address energy inefficiency and inter-task interference in multitasking embedded systems, and introduces a profile-based, off-line algorithm that identifies a beneficial cache partitioning.
Journal ArticleDOI

A survey on cache tuning from a power/energy perspective

TL;DR: This survey focuses on state-of-the-art offline static and online dynamic cache tuning techniques and summarizes the techniques' attributes, major challenges, and potential research trends to inspire novel ideas and future research avenues.
References
Proceedings ArticleDOI

MediaBench: a tool for evaluating and synthesizing multimedia and communications systems

TL;DR: The MediaBench benchmark suite, as discussed by the authors, was designed to fill the gap between the compiler community and embedded application developers, and was constructed through a three-step process: intuition- and market-driven initial selection, experimental measurement, and integration with system synthesis algorithms to establish usefulness.
Proceedings ArticleDOI

Selective cache ways: on-demand cache resource allocation

TL;DR: In this paper, a small performance degradation is traded for energy savings, and this tradeoff can produce a significant reduction in cache energy dissipation.
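The way-allocation idea can be sketched as follows: disable cache ways as long as the resulting miss-rate penalty stays within a small tolerance. The miss-rate profile and the threshold below are made-up illustrations, not data from the paper.

```c
#include <stdio.h>

int main(void) {
    const int    max_ways    = 4;
    const double miss_rate[] = { 0.0, 0.120, 0.055, 0.042, 0.040 }; /* index = enabled ways (assumed profile) */
    const double max_penalty = 0.005;   /* tolerated miss-rate increase (assumed) */

    int ways = max_ways;
    /* Walk down from the full configuration, keeping the smallest number of
     * enabled ways whose miss rate stays within the tolerated penalty. */
    for (int w = max_ways - 1; w >= 1; w--) {
        if (miss_rate[w] - miss_rate[max_ways] <= max_penalty)
            ways = w;
        else
            break;
    }
    printf("enable %d of %d ways (miss rate %.3f vs %.3f)\n",
           ways, max_ways, miss_rate[ways], miss_rate[max_ways]);
    return 0;
}
```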
Proceedings ArticleDOI

Cache decay: exploiting generational behavior to reduce cache leakage power

TL;DR: This paper discusses policies and implementations for reducing cache leakage by invalidating and "turning off" cache lines when they hold data not likely to be reused, and proposes adaptive policies that effectively reduce L1 cache leakage energy by 5x for the SPEC2000 benchmarks with only negligible degradation in performance.
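The decay mechanism can be pictured with a small simulation: each line keeps a counter of idle ticks, and once the counter exceeds a decay interval the line is invalidated and its supply gated off to stop leaking. The line count, tick granularity, and decay interval here are illustrative assumptions, not the paper's values.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES 8

typedef struct {
    bool     powered;
    unsigned idle_ticks;
} line_state_t;

static line_state_t lines[NUM_LINES];
static const unsigned DECAY_INTERVAL = 4;   /* ticks of inactivity before decay (assumed) */

static void on_access(int line) {
    lines[line].powered    = true;   /* (re)activate on use; a real cache would also refill */
    lines[line].idle_ticks = 0;
}

static void on_tick(void) {
    for (int i = 0; i < NUM_LINES; i++) {
        if (!lines[i].powered) continue;
        if (++lines[i].idle_ticks > DECAY_INTERVAL)
            lines[i].powered = false;   /* invalidate and gate off the dead line */
    }
}

int main(void) {
    for (int i = 0; i < NUM_LINES; i++) lines[i].powered = true;

    /* Line 0 is reused every tick; the others decay after the interval. */
    for (int t = 0; t < 10; t++) {
        on_access(0);
        on_tick();
    }
    for (int i = 0; i < NUM_LINES; i++)
        printf("line %d: %s\n", i, lines[i].powered ? "on" : "decayed");
    return 0;
}
```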
Proceedings ArticleDOI

Phase tracking and prediction

TL;DR: This paper presents a unified profiling architecture that can efficiently capture, classify, and predict phase-based program behavior on the largest of time scales, and can capture phases that account for over 80% of execution using less than 500 bytes of on-chip memory.
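One way to picture interval-based phase classification is to reduce each execution interval to a small footprint vector and match it against previously seen phase signatures. The bucket count, Manhattan distance metric, threshold, and example vectors below are assumptions for illustration rather than the paper's exact design.

```c
#include <stdio.h>
#include <stdlib.h>

#define SIG_BUCKETS 16
#define MAX_PHASES  8

typedef struct { unsigned counts[SIG_BUCKETS]; } signature_t;

static signature_t phases[MAX_PHASES];
static int num_phases = 0;

/* Manhattan distance between two interval footprints. */
static unsigned distance(const signature_t *a, const signature_t *b) {
    unsigned d = 0;
    for (int i = 0; i < SIG_BUCKETS; i++)
        d += (unsigned)abs((int)a->counts[i] - (int)b->counts[i]);
    return d;
}

/* Return the id of the matching phase, or register a new one. */
static int classify(const signature_t *sig, unsigned threshold) {
    for (int p = 0; p < num_phases; p++)
        if (distance(sig, &phases[p]) < threshold)
            return p;
    if (num_phases < MAX_PHASES) {
        phases[num_phases] = *sig;
        return num_phases++;
    }
    return -1;   /* phase table full */
}

int main(void) {
    signature_t a = { { 40, 10, 0, 0 } };   /* interval dominated by one code region */
    signature_t b = { { 0, 0, 35, 15 } };   /* a different code region               */
    signature_t c = { { 38, 12, 0, 0 } };   /* similar to the first interval         */

    printf("interval a -> phase %d\n", classify(&a, 20));
    printf("interval b -> phase %d\n", classify(&b, 20));
    printf("interval c -> phase %d\n", classify(&c, 20));
    return 0;
}
```

A self-tuning cache could use such a classifier to reuse a previously tuned configuration when an already-seen phase recurs, rather than re-tuning from scratch.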
Proceedings ArticleDOI

Memory hierarchy reconfiguration for energy and performance in general-purpose processor architectures

TL;DR: This paper proposes a cache and TLB layout and design that leverages repeater insertion to provide dynamic, low-cost configurability, trading off size and speed on a per-application-phase basis, and demonstrates that a configurable L2/L3 cache hierarchy coupled with a conventional L1 yields an average 43% reduction in memory hierarchy energy in addition to improved performance.