Polynomial Time Algorithms for Minimum Energy Scheduling
Philippe Baptiste¹, Marek Chrobak², and Christoph Dürr¹

¹ CNRS, LIX UMR 7161, Ecole Polytechnique, 91128 Palaiseau, France. Supported by CNRS/NSF grant 17171 and ANR Alpage.
² Department of Computer Science, University of California, Riverside, CA 92521, USA. Supported by NSF grants OISE-0340752 and CCR-0208856.
Abstract. The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is to simply suspend the system during idle times. No energy is consumed in the suspend mode. However, the process of waking up the system itself requires a certain fixed amount of energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this additional energy expenditure. In the specific problem studied in the paper, we have a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses energy at a rate of R units per unit of time. It has been an open problem whether a schedule minimizing the overall energy consumption can be computed in polynomial time. We solve this problem in the positive, by providing an O(n^5)-time algorithm. In addition, we provide an O(n^4)-time algorithm for computing the minimum energy schedule when all jobs have unit length.
1 Introduction
Power management strategies. The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is a power-down mechanism, which is to simply suspend the system during idle times. The amount of energy used in the suspend mode is negligible. However, during the wake-up process the system requires a certain fixed amount of start-up energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this extra energy expenditure. The intuition is that we can reduce energy consumption if we schedule the work to be performed so that we reduce the weighted sum of two quantities: the total number of busy periods and the total length of “short” idle periods, when the system is left on.
Scheduling to minimize energy consumption. The scheduling problem we study in this paper is quite fundamental. We are given a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses energy at a rate of R units per unit of time. The objective is to compute a feasible schedule that minimizes the overall energy consumption. Denoting by E the energy consumption function, this problem can be classified using Graham's notation as 1|r_j; pmtn|E.
The question whether this problem can be solved in polynomial time was posed by Irani and Pruhs [8],
who write that “. . . Many seemingly more complicated problems in this area can be essentially reduced to
this problem, so a polynomial time algorithm for this problem would have wide application.” Some progress
towards resolving this question has already been reported. Chretienne [3] proved that it is possible to decide
in polynomial time whether there is a schedule with no idle time. More recently, Baptiste [2] showed that the problem can be solved in time O(n^7) for unit-length jobs.
Our results. We solve the open problem posed by Irani and Pruhs [8] by providing a polynomial-time algorithm for 1|r_j; pmtn|E. Our algorithm is based on dynamic programming and runs in time O(n^5). Thus not only does our algorithm solve a more general version of the problem, but it is also faster than the algorithm for unit jobs in [2]. For the case of unit jobs (that is, 1|r_j; p_j = 1|E), we improve the running time to O(n^4).
The paper is organized as follows. First, in Section 2, we introduce the necessary terminology and establish some basic properties. Our algorithms are developed gradually in the sections that follow. We start with the special case of minimizing the number of gaps for unit jobs, that is 1|r_j; p_j = 1; L = 1|E, for which we describe an O(n^4)-time algorithm in Section 3. Next, in Section 4, we extend this algorithm to jobs of arbitrary length (1|r_j; pmtn; L = 1|E), increasing the running time to O(n^5). Finally, in Section 5, we show how to extend these algorithms to arbitrary L without increasing their running times.
We remark that our algorithms are sensitive to the structure of the input instance and on typical instances
they are likely to run significantly faster than their worst-case bounds.
Other relevant work. The non-preemptive version of our problem, that is 1|r_j|E, can easily be shown to be NP-hard in the strong sense, even for L = 1 (when the objective is to only minimize the number of gaps), by reduction from 3-Partition [4, problem SS1].
More sophisticated power management systems may involve several sleep states with decreasing rates of
energy consumption and increasing wake-up overheads. In addition, they may also employ a method called
speed scaling that relies on the fact that the speed (or frequency) of processors can be changed on-line. As the
energy required to perform the job increases quickly with the speed of the processor, speed scaling policies tend
to slow down the processor while ensuring that all jobs meet their deadlines (see [8], for example). This problem is a generalization of 1|r_j|E and its status remains open. A polynomial-time 2-approximation algorithm for this problem (with two power states) appeared in [6].
As jobs to be executed are often not known in advance, the on-line version of energy minimization is of
significant interest. Online algorithms for power-down strategies with multiple power states were considered in
[5, 7, 1]. In these works, however, jobs are critical, that is, they must be executed as soon as they are released,
and the online algorithm only needs to determine the appropriate power-down state when the machine is idle.
The work of Gupta, Irani and Shukla [6] on power-down with speed scaling is more relevant to ours, as it involves aspects of job scheduling. For the specific problem studied in our paper, 1|r_j|E, it is easy to show that no online algorithm can have a constant competitive ratio (independent of L), even for unit jobs. We refer the reader to [8] for a detailed survey on algorithmic problems in power management.
2 Preliminaries
Minimum-energy scheduling. Formally, an instance of the scheduling problem 1|r_j; pmtn|E consists of n jobs, where each job j is specified by its processing time p_j, release time r_j and deadline d_j. We have one processor that, at each step, can be on or off. When it is on, it consumes energy at the rate of R units per time step. When it is off, it does not consume any energy. Changing the state from off to on (waking up) requires additional L units of energy. Without loss of generality, we assume that R = 1.
Time is discrete and divided into unit-length intervals [t, t+1), where t is an integer, called time slots or steps. For brevity, we often refer to time step [t, t+1) as time step t. A preemptive schedule S specifies, for each time slot, whether some job is executed at this time slot and, if so, which one. Each job j must be executed for p_j time slots, and all its time slots must be within the time interval [r_j, d_j).
A block of a schedule S is a maximal interval where S is busy, that is, executes a job. The union of all blocks of S is called its support. A gap of S is a maximal interval where S is idle (does not execute a job). By C_j(S) (or simply C_j, if S is understood from context) we denote the completion time of a job j in a schedule S. By C_max(S) = max_j C_j(S) we denote the maximum completion time of any job in S. We refer to C_max(S) as the completion time of schedule S.
Since the energy used on the support of all schedules is the same, it can be subtracted from the energy function for the purpose of minimization. The resulting function E(S) is the “wasted energy” (when the processor is on but idle) plus L times the number of wake-ups. Formally, this can be calculated as follows. Let [u_1, t_1], ..., [u_q, t_q] be the set of all blocks of S, where u_1 < t_1 < u_2 < ... < t_q. Then

E(S) = \sum_{i=2}^{q} \min\{u_i - t_{i-1}, L\}.
(We do not charge for the first wake-up at time u_1, since this term is independent of the schedule.) Intuitively, this formula reflects the fact that once the support of a schedule is given, the optimal suspension and wake-up times are easy to determine: we suspend the machine during a gap if and only if its length is more than L, for otherwise it would be cheaper to keep the processor on during the gap.
Our objective is to find a schedule S that meets all job deadlines and minimizes E(S). (If there is no feasible schedule, we assume that the energy value is +∞.) Note that the special case L = 1 corresponds to simply minimizing the number of gaps.
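To make the cost model concrete, here is a small Python sketch (ours, not from the paper) that evaluates E(S) from the block list of a schedule, charging each gap the cheaper of staying on or suspending and waking up:

```python
def wasted_energy(blocks, L):
    """Evaluate E(S) for a schedule given as its blocks [(u_1, t_1), ..., (u_q, t_q)]
    with u_1 < t_1 < u_2 < ... < t_q. Each gap between consecutive blocks costs
    min{gap length, L}; the first wake-up is not charged."""
    return sum(min(u - t_prev, L) for (_, t_prev), (u, _) in zip(blocks, blocks[1:]))

# With blocks [0,3] and [5,7]: the gap has length 2. For L = 4 it is cheaper
# to stay on (cost 2); for L = 1 we suspend, and the cost is the gap count (1).
assert wasted_energy([(0, 3), (5, 7)], L=4) == 2
assert wasted_energy([(0, 3), (5, 7)], L=1) == 1
```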
Simplifying assumptions. Throughout the paper we assume that jobs are ordered according to deadlines, that is, d_1 ≤ ... ≤ d_n. Without loss of generality, we also assume that all release times are distinct and that all deadlines are distinct. Indeed, if r_i = r_j for some jobs i < j, then, since the jobs cannot both start at the same time r_i, we might as well increase the release time of j by 1. A similar argument applies to deadlines.
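As a concrete illustration, the following Python sketch applies this preprocessing to release times (deadlines are handled by a symmetric pass); the tuple layout and function name are our own, not the paper's:

```python
def separate_release_times(jobs):
    """jobs: list of (p, r, d) triples. Whenever two jobs share a release time,
    push the release time of the one with the later deadline up by 1, repeating
    until all release times are distinct. Deadlines would be handled by a
    symmetric pass that decreases the earlier job's deadline by 1."""
    jobs = list(jobs)
    # Process in order of (release, deadline): on ties, the later-deadline
    # job is the one that gets bumped, as in the argument above.
    order = sorted(range(len(jobs)), key=lambda i: (jobs[i][1], jobs[i][2]))
    last_r = None
    for i in order:
        p, r, d = jobs[i]
        if last_r is not None and r <= last_r:
            r = last_r + 1
        jobs[i] = (p, r, d)
        last_r = r
    return jobs
```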
To simplify the presentation, we assume that the job indexed 1 is a special job with p_1 = 1 and d_1 = r_1 + 1, that is, job 1 has unit length and must be scheduled at its release time. (Otherwise we can always add such an extra job, released L + 1 time slots before r_1. This increases each schedule's energy by exactly L and does not affect the asymptotic running time of our algorithms.)
Without loss of generality, we can also assume that the input instance is feasible. A feasible schedule corresponds to a matching between units of jobs and time slots, so Hall's theorem gives us the following necessary and sufficient condition for feasibility: for all times u < v,

\sum_{u \le r_j,\, d_j \le v} p_j \le v - u.    (1)
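Condition (1) only needs to be checked for u ranging over release times and v over deadlines, which gives the direct O(n^3) feasibility test sketched below in Python (our code, with hypothetical names):

```python
def is_feasible(jobs):
    """jobs: list of (p, r, d) triples. Checks condition (1): for all u < v, the
    total processing time of jobs with u <= r_j and d_j <= v is at most v - u.
    Only u among the release times and v among the deadlines can be binding."""
    releases = sorted({r for _, r, _ in jobs})
    deadlines = sorted({d for _, _, d in jobs})
    for u in releases:
        for v in deadlines:
            if v > u and sum(p for p, r, d in jobs if u <= r and d <= v) > v - u:
                return False
    return True
```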
We can also restrict our attention to schedules S that satisfy the following earliest-deadline property: at any time t, either S is idle at t or it schedules a pending job with the earliest deadline. In other words, once the support of S is fixed, the jobs in the support are scheduled according to the earliest-deadline policy. Using the standard exchange argument, any schedule can be converted into one that satisfies the earliest-deadline property and has the same support.
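A sketch of this normalization in Python: given a fixed support (the set of busy slots), fill it greedily by the earliest-deadline rule. This is our illustration of the exchange argument's conclusion, not the authors' code:

```python
import heapq

def edf_on_support(jobs, support):
    """jobs: list of (p, r, d) triples; support: sorted list of busy time slots.
    At each slot of the support, run the released, unfinished job with the
    smallest deadline. Returns {slot: job index}, or None if some job misses
    its deadline or is left unfinished."""
    by_release = sorted(range(len(jobs)), key=lambda i: jobs[i][1])
    remaining = [p for p, _, _ in jobs]
    heap, nxt, schedule = [], 0, {}
    for t in support:
        while nxt < len(by_release) and jobs[by_release[nxt]][1] <= t:
            i = by_release[nxt]
            heapq.heappush(heap, (jobs[i][2], i))  # keyed by deadline
            nxt += 1
        if not heap:
            continue  # no pending job; the slot stays idle
        d, i = heapq.heappop(heap)
        if d <= t:
            return None  # job i cannot finish within [r_i, d_i)
        schedule[t] = i
        remaining[i] -= 1
        if remaining[i] > 0:
            heapq.heappush(heap, (d, i))
    return schedule if all(x == 0 for x in remaining) else None
```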
(k, s)-Schedules. We will consider certain partial schedules, that is, schedules that execute only some jobs from the instance. For jobs k and s, a partial schedule S is called a (k, s)-schedule if it schedules all jobs j ≤ k with r_s ≤ r_j < C_max(S) (recall that C_max(S) denotes the completion time of schedule S). From now on, unless ambiguity arises, we will omit the term “partial” and refer to partial schedules simply as schedules. When we say that a (k, s)-schedule S has g gaps, in addition to the gaps between the blocks we also count the gap (if any) between r_s and the first block of S. For any k, s, the empty schedule is also considered to be a (k, s)-schedule. The completion time of an empty (k, s)-schedule is artificially set to r_s. (Note that, in this convention, empty (k, s)-schedules, for different choices of k, s, are considered to be different schedules.)
The following “compression lemma” will be useful in some proofs.
Lemma 1. Let Q be a (k, s)-schedule with C_max(Q) = u, and let R be a (k, s)-schedule with C_max(R) = v > u and at most g gaps. Suppose that there is a time t, u < t ≤ v, such that there are no jobs i ≤ k with u ≤ r_i < t, and that R executes some job m < k with r_m ≤ u at or after time t. Then there is a (k, s)-schedule R' with completion time t and at most g gaps.
Proof. We can assume that R has the earliest-deadline property. We convert R into R' by gradually reducing the completion time, without increasing the number of gaps.
Call a time slot z of R fixed if R executes some job j at time z and either z = r_j or all slots r_j, r_j + 1, ..., z - 1 are fixed as well. Let [w, v] be the last block of R and let j be the job executed at time v - 1. If v = t, we are done. For v > t we show that we can reduce C_max(R) while preserving the assumptions of the lemma.
Suppose first that the slot v - 1 is not fixed. In this case, execute the following operation Shift: for each non-fixed slot in [w, v] move the job unit in this slot to the previous non-fixed slot in R. Shift reduces C_max(R) by 1 without increasing the number of gaps. We still need to justify that R remains a feasible (k, s)-schedule. To this end, it is sufficient to show that no job is scheduled before its release time. Indeed, if a job i is executed at a non-fixed time z, where w ≤ z < v, then, by definition, z > r_i and there is a non-fixed slot in [r_i, z - 1], and therefore after Shift this unit is scheduled at or after r_i.
The other case is when the slot v - 1 is fixed. In this case, we claim that there is a job l such that w ≤ r_l < v and each job i executed in [r_l, v] satisfies r_i ≥ r_l. This l can be found as follows. If v - 1 = r_j, let l = j. Otherwise, from all jobs executed in [r_j, v - 1] pick the job j' with minimum r_{j'}. Suppose that j' executes at v', r_j ≤ v' ≤ v - 1. Since, by definition, the slot v' is fixed, we can apply this argument recursively, eventually obtaining the desired job l. We then perform the following operation Truncate: replace R by the segment of R in [r_s, r_l]. This decreases C_max(R) to r_l, and the new R is a feasible (k, s)-schedule, by the choice of l.
We repeat the process described above as long as v > t. Since the schedule at each step is a (k, s)-schedule, we end up with a (k, s)-schedule R'. Let C_max(R') = t' ≤ t. It is thus sufficient to prove that t' = t. Indeed, consider the last step, when C_max(R) decreases to t'. Operation Truncate reduces C_max(R) to a completion time of a job released after t, so it cannot reduce it to t'. Therefore the last operation applied must have been Shift, which reduces C_max(R) by 1. Consequently, t' = t, as claimed.
The U_{k,s,g} function. For any k = 0, ..., n, s = 1, ..., n, and g = 0, ..., n, define U_{k,s,g} as the maximum completion time of a (k, s)-schedule with at most g gaps. Our algorithms will compute the function U_{k,s,g} and use it to determine a minimum energy schedule.
Clearly, U_{k,s,g} ≤ d_k and, for any fixed s and g, the function k ↦ U_{k,s,g} is increasing (not necessarily strictly). For all k and s, the function g ↦ U_{k,s,g} increases as well. We claim that in fact it increases strictly as long as U_{k,s,g} < d_k. Indeed, suppose that U_{k,s,g} = u < d_k and that U_{k,s,g} is realized by a (k, s)-schedule S with at most g gaps. We show that we can extend S to a schedule S' with g + 1 gaps and C_max(S') > C_max(S).
If there is a job j ≤ k with r_j ≥ u, take j to be such a job with minimum r_j. We must have r_j > u, since otherwise we could add j to S, scheduling it at u without increasing the number of gaps and thus contradicting the maximality of C_max(S). We thus obtain S' by scheduling j at r_j. The second case is when r_j < u for all jobs j ≤ k. In particular, r_k < u. We obtain S' by rescheduling k at u. (This creates an additional gap at the time slot where k was scheduled, for otherwise we would get a contradiction with the maximality of C_max(S).)
An outline of the algorithms. Our algorithms are based on dynamic programming, and they can be thought of as consisting of two stages. First, we compute the table U_{k,s,g}, using dynamic programming. From this table we can determine the minimum number of gaps in the (complete) schedule (it is equal to the smallest g for which U_{n,1,g} > max_j r_j). The algorithm computing U_{k,s,g} for unit jobs is called AlgA and the one for arbitrary length jobs is called AlgB.
In the second stage, described in Section 5 and called AlgC, we use the table U_{k,s,g} to compute the minimum energy schedule. In other words, we show that the problem of computing the minimum energy reduces to computing the minimum number of gaps. This reduction itself again involves dynamic programming.
When presenting our algorithms, we will only show how to compute the minimum energy value. The algorithms can be modified in a straightforward way to compute the actual optimum schedule, without increasing the running time. (In fact, we explain how to construct such schedules in the correctness proofs.)
3 Minimizing the Number of Gaps for Unit Jobs
In this section we give an O(n^4)-time algorithm for minimizing the number of gaps for unit jobs, that is, for 1|r_j; p_j = 1; L = 1|E. Recall that we assumed all release times to be different and all deadlines to be different, which implies that there is always a feasible schedule (providing that d_j > r_j for all j).
As explained in the previous section, the algorithm computes the table U_{k,s,g}. The crucial idea here is this: Let S be a (k, s)-schedule that realizes U_{k,s,g}, that is, S has g gaps and C_max(S) = u is maximized. Suppose that in S job k is scheduled at some time t < u - 1. We show that then, without loss of generality, there is a job l released and scheduled at time t + 1. Further, the segment of S in [r_s, t] is a (k - 1, s)-schedule with completion time t, the segment of S in [t + 1, u] is a (k - 1, l)-schedule with completion time u, and the total number of gaps in these two schedules equals g. This naturally leads to a recurrence relation for U_{k,s,g}.
Algorithm AlgA. The algorithm computes all values U_{k,s,g}, for k = 0, ..., n, s = 1, ..., n and g = 0, ..., n, using dynamic programming. The minimum number of gaps for the input instance is equal to the smallest g for which U_{n,1,g} > max_j r_j.
To explain how to compute all values U_{k,s,g}, we give the recurrence relation. For the base case k = 0 we let U_{0,s,g} ← r_s for all s and g. For k ≥ 1, U_{k,s,g} is defined recursively as follows:
U_{k,s,g} \leftarrow \max_{l<k,\ h\le g}
\begin{cases}
U_{k-1,s,g} & \\
U_{k-1,s,g} + 1 & \text{if } r_s \le r_k \le U_{k-1,s,g} \text{ and } \forall j<k:\ r_j \ne U_{k-1,s,g} \\
d_k & \text{if } g > 0 \text{ and } \forall j<k:\ r_j < U_{k-1,s,g-1} \\
U_{k-1,l,g-h} & \text{if } r_k < r_l = U_{k-1,s,h} + 1
\end{cases}
\qquad (2)
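To make recurrence (2) concrete, here is a direct Python evaluation of the table U_{k,s,g} (our sketch: the fourth case is maximized by brute force over all pairs (l, h), so this runs slower than the O(n^4) bound that AlgA achieves with a more careful implementation):

```python
def min_gaps_unit_jobs(r, d):
    """r, d: 1-indexed release times and deadlines (index 0 unused) of unit jobs,
    sorted by deadline, with all r distinct and all d distinct. Returns the
    smallest g with U_{n,1,g} > max_j r_j, i.e., the minimum number of gaps."""
    n = len(r) - 1
    # U[k][s][g] = maximum completion time of a (k, s)-schedule with <= g gaps.
    U = [[[0] * (n + 1) for _ in range(n + 1)] for _ in range(n + 1)]
    for s in range(1, n + 1):
        for g in range(n + 1):
            U[0][s][g] = r[s]  # base case: the empty (0, s)-schedule ends at r_s
    for k in range(1, n + 1):
        for s in range(1, n + 1):
            for g in range(n + 1):
                u = U[k - 1][s][g]
                best = u  # case 1: leave U_{k-1,s,g} as is
                # case 2: schedule k in the slot right after u
                if r[s] <= r[k] <= u and all(r[j] != u for j in range(1, k)):
                    best = max(best, u + 1)
                # case 3: schedule k last, preceded by a new gap
                if g > 0 and all(r[j] < U[k - 1][s][g - 1] for j in range(1, k)):
                    best = max(best, d[k])
                # case 4: split at a job l released right after U_{k-1,s,h}
                for l in range(1, k):
                    for h in range(g + 1):
                        if r[k] < r[l] == U[k - 1][s][h] + 1:
                            best = max(best, U[k - 1][l][g - h])
                U[k][s][g] = best
    rmax = max(r[1:])
    return next((g for g in range(n + 1) if U[n][1][g] > rmax), None)
```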
References
P. Brucker. Scheduling Algorithms. Springer.
[4] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., San Francisco, 1979.
[8] S. Irani and K. Pruhs. Algorithmic problems in power management. ACM SIGACT News, 2005.