
Workshop on Power-Aware Computing and Systems 

About: Workshop on Power-Aware Computing and Systems is an academic conference. The conference publishes majorly in the area(s): Energy consumption & Data center. Over the lifetime, 31 publications have been published by the conference receiving 775 citations.

Papers
Proceedings ArticleDOI
23 Oct 2011
TL;DR: This paper designs an adaptive data center job scheduler that uses short-term prediction of solar and wind energy production. Scaling the number of jobs to the expected energy availability reduces the number of cancelled jobs by 4x and improves green energy usage efficiency by 3x over using only the immediately available green energy.
Abstract: As brown energy costs grow, renewable energy becomes more widely used. Previous work focused on using immediately available green energy to supplement the non-renewable, or brown energy at the cost of canceling and rescheduling jobs whenever the green energy availability is too low [16]. In this paper we design an adaptive data center job scheduler which utilizes short term prediction of solar and wind energy production. This enables us to scale the number of jobs to the expected energy availability, thus reducing the number of cancelled jobs by 4x and improving green energy usage efficiency by 3x over just utilizing the immediately available green energy.
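The scheduler's central idea, admitting only as many jobs as the forecast energy budget can sustain instead of reacting to instantaneous supply, can be sketched as follows (function name and power figures are illustrative, not taken from the paper):

```python
def jobs_to_admit(predicted_green_w, brown_cap_w, job_power_w):
    """Scale the number of admitted jobs to the expected energy availability.

    predicted_green_w: short-term forecast of solar/wind power (watts)
    brown_cap_w: brown power budget we are willing to spend (watts)
    job_power_w: average power draw of one job (watts)
    """
    budget_w = predicted_green_w + brown_cap_w
    # Admit fewer jobs when the forecast drops, rather than cancelling
    # and rescheduling them mid-run when green supply falls short.
    return int(budget_w // job_power_w)

print(jobs_to_admit(predicted_green_w=900, brown_cap_w=100, job_power_w=200))
```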

151 citations

Proceedings ArticleDOI
03 Nov 2013
TL;DR: A thorough measurement study that aims to explore how GPU DVFS affects the system energy consumption and shows that GPU voltage/frequency scaling is an effective approach to conserving energy.
Abstract: Nowadays, GPUs are widely used to accelerate many high performance computing applications. Energy conservation of such computing systems has become an important research topic. Dynamic voltage/frequency scaling (DVFS) has proved to be an appealing method for saving energy in traditional computing centers. However, there is still a lack of firsthand study on the effectiveness of GPU DVFS. This paper presents a thorough measurement study that aims to explore how GPU DVFS affects the system energy consumption. We conduct experiments on a real GPU platform with 37 benchmark applications. Our results show that GPU voltage/frequency scaling is an effective approach to conserving energy. For example, by scaling down the GPU core voltage and frequency, we have achieved an average of 19.28% energy reduction compared with the default setting, while giving up no more than 4% of performance. For all tested GPU applications, core voltage scaling is significantly effective at reducing system energy consumption. Meanwhile, the effects of scaling core frequency and memory frequency depend on the characteristics of GPU applications.
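The paper's energy-versus-performance trade-off rests on the basic relation energy = average power × runtime: scaling down voltage/frequency lowers power but stretches runtime, and the scaled setting wins only if the power drop outweighs the slowdown. A minimal sketch, with illustrative power and runtime numbers (not the paper's measurements):

```python
def energy_j(avg_power_w, runtime_s):
    """Energy in joules consumed by a run at a given average power."""
    return avg_power_w * runtime_s

def savings_pct(e_default_j, e_scaled_j):
    """Percentage energy reduction of a scaled setting vs. the default."""
    return 100.0 * (e_default_j - e_scaled_j) / e_default_j

# Default clocks: 200 W for 10 s; scaled-down clocks: 150 W for 10.4 s
# (a 4% slowdown). Lower power more than pays for the longer runtime.
e_default = energy_j(200, 10.0)
e_scaled = energy_j(150, 10.4)
print(f"energy savings: {savings_pct(e_default, e_scaled):.1f}%")
```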

80 citations

Proceedings ArticleDOI
23 Oct 2011
TL;DR: The goal of this research is to encourage data center administrators to consider dynamic power management and to spur chip designers to develop useful sleep states for servers.
Abstract: While sleep states have existed for mobile devices and workstations for some time, these sleep states have largely not been incorporated into the servers in today's data centers. Chip designers have been unmotivated to design sleep states because data center administrators haven't expressed any desire to have them. High setup times make administrators fearful of any form of dynamic power management, whereby servers are suspended or shut down when load drops. This general reluctance has stalled research into whether there might be some feasible sleep state (with sufficiently low setup overhead and/or sufficiently low power) that would actually be beneficial in data centers. This paper uses both experimentation and theory to investigate the regime of sleep states that should be advantageous in data centers. Implementation experiments involve a 24-server multi-tier testbed, serving a web site of the type seen in Facebook or Amazon with key-value workload and a range of hypothetical sleep states. Analytical modeling is used to understand the effect of scaling up to larger data centers. The goal of this research is to encourage data center administrators to consider dynamic power management and to spur chip designers to develop useful sleep states for servers.
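The setup-overhead concern described above has a simple break-even form: sleeping beats staying idle only when the idle period is long enough that the sleep-state savings cover the energy spent waking back up. A rough sketch with hypothetical power numbers (the paper explores a range of such sleep states experimentally and analytically):

```python
def break_even_idle_s(p_idle_w, p_sleep_w, p_setup_w, setup_s):
    """Minimum idle-period length for which sleeping saves energy.

    Staying idle for T seconds costs       p_idle * T.
    Sleeping instead costs roughly         p_sleep * (T - setup_s) + p_setup * setup_s.
    Equating the two and solving for T gives the break-even point.
    """
    return setup_s * (p_setup_w - p_sleep_w) / (p_idle_w - p_sleep_w)

# Hypothetical server: 150 W idle, 10 W asleep, 200 W during a 30 s wakeup.
# Sleeping only pays off for idle gaps longer than the returned threshold.
print(f"{break_even_idle_s(150, 10, 200, 30):.1f} s")
```

Lowering either the setup time or the sleep-state power shrinks this threshold, which is exactly the regime of "feasible" sleep states the paper argues for.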

46 citations

Proceedings ArticleDOI
03 Nov 2013
TL;DR: This work develops a new CPU power model with high accuracy, 95.6% on average, that helps to better understand the performance of multicore smartphones and paves the way towards better CPU power management on multicore smartphones.
Abstract: Although multicore smartphones have become increasingly mainstream, it is unclear whether and how smartphone applications can utilize multicore CPUs to improve performance. In this paper we study the performance of mobile applications using multicore CPUs, in terms of power and computation cost. Using Web browsing as an example, our preliminary measurement results show that even large applications like Web browsers with multi-threading acceleration cannot fully utilize the multicore CPUs. Furthermore, we find that the existing CPU power models on smartphones are ill-suited for modern multicore CPUs. We develop a new CPU power model with a high accuracy, 95.6% on average. Our work helps to better understand the performance of multicore smartphones and paves the way towards better CPU power management on multicore smartphones.

41 citations

Proceedings ArticleDOI
03 Nov 2013
TL;DR: This work shows that core offlining yields very modest savings in the best circumstances and a heavy penalty in others, traces the cause to low per-core idle power, and develops a Linux policy that exploits this fact, improving on existing implementations by up to 25%.
Abstract: Energy management is a primary consideration in the design of modern smartphones, made more interesting by the recent proliferation of multi-core processors in this space. We investigate how core offlining and DVFS can be used together on these systems to reduce energy consumption. We show that core offlining leads to very modest savings in the best circumstances, with a heavy penalty in others, and show the cause of this to be low per-core idle power. We develop a policy in Linux that exploits this fact, and show that it improves on existing implementations by up to 25%.
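The "low per-core idle power" finding has a direct break-even interpretation: offlining a core only saves energy if the core would otherwise idle long enough for its idle power to exceed the one-time cost of offlining and re-onlining it (task migration, hotplug overhead). A sketch with hypothetical numbers, not the paper's measurements:

```python
def offline_break_even_s(core_idle_mw, offline_cost_mj):
    """Idle duration beyond which offlining a core saves energy.

    core_idle_mw:    power the core burns while idle (milliwatts)
    offline_cost_mj: one-time energy cost of offlining + re-onlining (millijoules)
    Since mJ / mW = s, the ratio is the break-even idle time. A low
    per-core idle power pushes this break-even point far out, which is
    why offlining rarely pays off on these systems.
    """
    return offline_cost_mj / core_idle_mw

# A core idling at 20 mW with a 400 mJ offline/online round trip must
# stay idle for quite a while before offlining it is worthwhile.
print(f"{offline_break_even_s(20.0, 400.0):.0f} s")
```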

39 citations

Performance Metrics

No. of papers from the Conference in previous years:

Year    Papers
2015    7
2013    13
2012    1
2011    10