SLOOP: QoS-Supervised Loop Execution to Reduce Energy on Heterogeneous Architectures
TLDR
A lightweight progress-tracking methodology based on the outer loops of application kernels that builds on online history to estimate the total execution time, reducing energy consumption by more than 20% without missing any computational deadlines.

Abstract
Most systems allocate computational resources to each executing task without any actual knowledge of the application's Quality-of-Service (QoS) requirements. Such best-effort policies lead to overprovisioning of resources and increased energy loss. This work assumes applications with soft QoS requirements and exploits the inherent timing slack to minimize the allocated computational resources and reduce energy consumption.

We propose a lightweight progress-tracking methodology based on the outer loops of application kernels. It builds an online history and uses it to estimate the total execution time. The predicted execution time and the QoS requirements are then used to schedule the application on a heterogeneous architecture with big out-of-order cores and small (LITTLE) in-order cores, and to select the minimum operating frequency, using DVFS, that meets the deadline. Our scheme is effective in exploiting the timing slack of each application. We show that it can reduce energy consumption by more than 20% without missing any computational deadlines.
Citations
Proceedings Article (DOI)
SMQoS: Improving Utilization and Energy Efficiency with QoS Awareness on GPUs
TL;DR: A new runtime mechanism SMQoS is proposed that can dynamically adjust the resource allocation during runtime to satisfy the QoS of latency-sensitive tasks and determine the optimal resource allocation for batch tasks to improve GPU utilization and power efficiency.
Journal Article (DOI)
DV-DVFS: merging data variety and DVFS technique to manage the energy consumption of big data processing
TL;DR: In this article, the authors use DVFS to reduce variation in the consumption of processing resources, such as CPU usage, in big data processing, with two types of deadlines as constraints: the processing time and the frequency needed to meet the deadline.
Journal Article (DOI)
Task-RM: A Resource Manager for Energy Reduction in Task-Parallel Applications under Quality of Service Constraints
TL;DR: Task-RM, a resource manager for task-parallel applications under quality-of-service constraints on completion time, exploits the variance in task execution times and the imbalance between tasks to allocate just enough resources so that the application completes before the deadline.
Posted Content
Coordinated Management of Processor Configuration and Cache Partitioning to Optimize Energy under QoS Constraints
TL;DR: Overall, it is shown that up to 18% of energy, and on average 10%, can be saved using the proposed scheme, with a mechanism that estimates the effect of MLP over different processor configurations and LLC allocations.
Proceedings Article (DOI)
Coordinated Management of Processor Configuration and Cache Partitioning to Optimize Energy under QoS Constraints
TL;DR: In this paper, the authors propose a resource management framework combining LLC partitioning, processor adaptation, and per-core VF scaling for a multicore system, and show that up to 18% of energy, and 10% on average, can be saved.
References
Journal Article (DOI)
SPEC CPU2006 benchmark descriptions
TL;DR: On August 24, 2006, the Standard Performance Evaluation Corporation (SPEC) announced CPU2006, which replaces CPU2000, and the SPEC CPU benchmarks are widely used in both industry and academia.
Proceedings Article (DOI)
Single-ISA heterogeneous multi-core architectures: the potential for processor power reduction
TL;DR: This paper proposes and evaluates single-ISA heterogeneous multi-core architectures as a mechanism to reduce processor power dissipation; results indicate a 39% average energy reduction while sacrificing only 3% in performance.
Journal Article (DOI)
Scheduling heterogeneous multi-cores through Performance Impact Estimation (PIE)
TL;DR: This paper proposes Performance Impact Estimation (PIE) as a mechanism to predict which workload-to-core mapping is likely to provide the best performance and shows that it requires limited hardware support and can improve system performance by an average of 5.5% over recent state-of-the-art scheduling proposals and by 8.7% over a sampling-based scheduling policy.
Book
Computer Architecture Techniques for Power-Efficiency
TL;DR: This book aims to document some of the most important architectural techniques that were invented, proposed, and applied to reduce both dynamic power and static power dissipation in processors and memory hierarchies by focusing on their common characteristics.
Proceedings Article (DOI)
The ALPBench benchmark suite for complex multimedia applications
TL;DR: The paper provides a performance characterization of the ALPBench benchmarks, with a focus on parallelism, and modified the original applications to expose thread-level and data-level parallelism using POSIX threads and sub-word SIMD instructions respectively.
Related Papers (5)
A hybrid static/dynamic DVS scheduling for real-time systems with (m, k)-guarantee
Linwei Niu, Gang Quan, +1 more