Journal ArticleDOI

Polynomial-time algorithms for minimum energy scheduling

TL;DR: The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. It had been an open problem whether a schedule minimizing the overall energy consumption can be computed in polynomial time; this article resolves it with an O(n^5)-time algorithm.
Abstract: The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is to simply suspend the system during idle times. No energy is consumed in the suspend mode. However, the process of waking up the system itself requires a certain fixed amount of energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this additional energy expenditure. In the specific problem studied in the article, we have a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses one unit of energy per one unit of time. It has been an open problem whether a schedule minimizing the overall energy consumption can be computed in polynomial time. We solve this problem in the positive, by providing an O(n^5)-time algorithm. In addition we provide an O(n^4)-time algorithm for computing the minimum energy schedule when all jobs have unit length.

Summary (2 min read)

1 Introduction

  • The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance.
  • The intuition is that the authors can reduce energy consumption if they schedule the work to be performed so that they reduce the weighted sum of two quantities: the total number of busy periods and the total length of “short” idle periods, when the system is left on.
  • The authors' algorithms are developed gradually in the sections that follow.
  • As the energy required to perform the job increases quickly with the speed of the processor, speed scaling policies tend to slow down the processor while ensuring that all jobs meet their deadlines (see [8], for example).

2 Preliminaries

  • Changing the state from off to on (waking up) requires additional L units of energy.
  • Using the standard exchange argument, any schedule can be converted into one that satisfies the earliest-deadline property and has the same support.
  • The authors' algorithms will compute the function U_{k,s,g} and use it to determine a minimum energy schedule.
  • The algorithm computing Uk,s,g for unit jobs is called AlgA and the one for arbitrary length jobs is called AlgB.
  • This reduction, itself, involves again dynamic programming.

3 Minimizing the Number of Gaps for Unit Jobs

  • As explained in the previous section, the algorithm computes the table U_{k,s,g}.
  • The construction of S_{k,s,g} depends on which expression realizes the maximum in (2).
  • (This property will be useful in the proof below.)
  • Thus each value U_{k,s,g} can be computed in time O(n), and the overall running time is O(n^4).

4 Minimizing the Number of Gaps for Arbitrary Jobs

  • That is, for the scheduling problem 1|r_j; pmtn; L = 1|E.
  • The proof of the lemma will appear in the final version.
  • Then (a) if U_{k,s,g}(p) < d_k, then in the schedule realizing U_{k,s,g}(p) the last block has at least one job other than k.
  • The cases considered in the algorithm are illustrated in Figure 2.

5 Minimizing the Energy

  • The authors now show how to minimize the energy for an arbitrary L. This new algorithm consists of computing the table U_{k,s,g} (using either Algorithm AlgA or AlgB) and an O(n^2)-time post-processing.
  • For those sub-instances, the cost is simply the number of gaps times L. To compute the overall cost, the authors add to this quantity the total size of short gaps.
  • This relation defines a total order on all schedules.
  • The authors now prove the correctness of Algorithm AlgC and analyze its running time.
  • For any job s, the authors prove that any s-schedule Q has cost at least E_s.

6 Final Comments

  • The authors presented an O(n^5)-time algorithm for the minimum energy scheduling problem 1|r_j; pmtn|E, and an O(n^4)-time algorithm for 1|r_j; p_j = 1|E.
  • To their knowledge, no work has been done on the multiprocessor case.
  • Another generalization is to allow multiple power-down states [8, 7].



Polynomial Time Algorithms for Minimum Energy Scheduling

Philippe Baptiste¹, Marek Chrobak², and Christoph Dürr¹

¹ CNRS, LIX UMR 7161, Ecole Polytechnique, 91128 Palaiseau, France. Supported by CNRS/NSF grant 17171 and ANR Alpage.
² Department of Computer Science, University of California, Riverside, CA 92521, USA. Supported by NSF grants OISE-0340752 and CCR-0208856.
Abstract. The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is to simply suspend the system during the idle times. No energy is consumed in the suspend mode. However, the process of waking up the system itself requires a certain fixed amount of energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this additional energy expenditure. In the specific problem studied in the paper, we have a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses the energy at a rate of R units per unit of time. It has been an open problem whether a schedule minimizing the overall energy consumption can be computed in polynomial time. We solve this problem in the positive, by providing an O(n^5)-time algorithm. In addition we provide an O(n^4)-time algorithm for computing the minimum energy schedule when all jobs have unit length.
1 Introduction
Power management strategies. The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is a power-down mechanism, which is to simply suspend the system during the idle times. The amount of energy used in the suspend mode is negligible. However, during the wake-up process the system requires a certain fixed amount of start-up energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this extra energy expenditure. The intuition is that we can reduce energy consumption if we schedule the work to be performed so that we reduce the weighted sum of two quantities: the total number of busy periods and the total length of “short” idle periods, when the system is left on.
Scheduling to minimize energy consumption. The scheduling problem we study in this paper is quite fundamental. We are given a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses the energy at a rate of R units per unit of time. The objective is to compute a feasible schedule that minimizes the overall energy consumption. Denoting by E the energy consumption function, this problem can be classified using Graham's notation as 1|r_j; pmtn|E.
The question whether this problem can be solved in polynomial time was posed by Irani and Pruhs [8],
who write that “. . . Many seemingly more complicated problems in this area can be essentially reduced to
this problem, so a polynomial time algorithm for this problem would have wide application.” Some progress
towards resolving this question has already been reported. Chretienne [3] proved that it is possible to decide
Dagstuhl Seminar Proceedings 10071, Scheduling. http://drops.dagstuhl.de/opus/volltexte/2010/2535

in polynomial time whether there is a schedule with no idle time. More recently, Baptiste [2] showed that the
problem can be solved in time O(n^7) for unit-length jobs.
Our results. We solve the open problem posed by Irani and Pruhs [8], by providing a polynomial-time algorithm for 1|r_j; pmtn|E. Our algorithm is based on dynamic programming and it runs in time O(n^5). Thus not only does our algorithm solve a more general version of the problem, but it is also faster than the algorithm for unit jobs in [2]. For the case of unit jobs (that is, 1|r_j; p_j = 1|E), we improve the running time to O(n^4).
The paper is organized as follows. First, in Section 2, we introduce the necessary terminology and establish some basic properties. Our algorithms are developed gradually in the sections that follow. We start with the special case of minimizing the number of gaps for unit jobs, that is 1|r_j; p_j = 1; L = 1|E, for which we describe an O(n^4)-time algorithm in Section 3. Next, in Section 4, we extend this algorithm to jobs of arbitrary length (1|r_j; pmtn; L = 1|E), increasing the running time to O(n^5). Finally, in Section 5, we show how to extend these algorithms to arbitrary L without increasing their running times.
We remark that our algorithms are sensitive to the structure of the input instance and on typical instances
they are likely to run significantly faster than their worst-case bounds.
Other relevant work. The non-preemptive version of our problem, that is 1|r_j|E, can be easily shown to be NP-hard in the strong sense, even for L = 1 (when the objective is to only minimize the number of gaps), by reduction from 3-Partition [4, problem SS1].
More sophisticated power management systems may involve several sleep states with decreasing rates of
energy consumption and increasing wake-up overheads. In addition, they may also employ a method called
speed scaling that relies on the fact that the speed (or frequency) of processors can be changed on-line. As the
energy required to perform the job increases quickly with the speed of the processor, speed scaling policies tend
to slow down the processor while ensuring that all jobs meet their deadlines (see [8], for example). This problem
is a generalization of 1|r_j|E and its status remains open. A polynomial-time 2-approximation algorithm for
this problem (with two power states) appeared in [6].
As jobs to be executed are often not known in advance, the on-line version of energy minimization is of
significant interest. Online algorithms for power-down strategies with multiple power states were considered in
[5, 7, 1]. In these works, however, jobs are critical, that is, they must be executed as soon as they are released,
and the online algorithm only needs to determine the appropriate power-down state when the machine is idle.
The work of Gupta, Irani and Shukla [6] on power-down with speed scaling is more relevant to ours, as it
involves aspects of job scheduling. For the specific problem studied in our paper, 1|r_j|E, it is easy to show that
no online algorithm can have a constant competitive ratio (independent of L), even for unit jobs. We refer the
reader to [8] for a detailed survey on algorithmic problems in power management.
2 Preliminaries
Minimum-energy scheduling. Formally, an instance of the scheduling problem 1|r_j; pmtn|E consists of n jobs, where each job j is specified by its processing time p_j, release time r_j and deadline d_j. We have one processor that, at each step, can be on or off. When it is on, it consumes energy at the rate of R units per time step. When it is off, it does not consume any energy. Changing the state from off to on (waking up) requires additional L units of energy. Without loss of generality, we assume that R = 1.

The time is discrete, and is divided into unit-length intervals [t, t + 1), where t is an integer, called time slots or steps. For brevity, we often refer to time step [t, t + 1) as time step t. A preemptive schedule S specifies, for each time slot, whether some job is executed at this time slot and if so, which one. Each job j must be executed for p_j time slots, and all its time slots must be within the time interval [r_j, d_j).
A block of a schedule S is a maximal interval where S is busy, that is, executes a job. The union of all blocks of S is called its support. A gap of S is a maximal interval where S is idle (does not execute a job). By C_j(S) (or simply C_j, if S is understood from context) we denote the completion time of a job j in a schedule S. By C_max(S) = max_j C_j(S) we denote the maximum completion time of any job in S. We refer to C_max(S) as the completion time of schedule S.
Since the energy used on the support of all schedules is the same, it can be subtracted from the energy function for the purpose of minimization. The resulting function E(S) is the “wasted energy” (when the processor is on but idle) plus L times the number of wake-ups. Formally, this can be calculated as follows. Let [u_1, t_1], ..., [u_q, t_q] be the set of all blocks of S, where u_1 < t_1 < u_2 < ... < t_q. Then

E(S) = Σ_{i=2}^{q} min {u_i − t_{i−1}, L}.
(We do not charge for the first wake-up at time u_1, since this term is independent of the schedule.) Intuitively,
this formula reflects the fact that once the support of a schedule is given, the optimal suspension and wake-up
times are easy to determine: we suspend the machine during a gap if and only if its length is more than L, for
otherwise it would be cheaper to keep the processor on during the gap.
Our objective is to find a schedule S that meets all job deadlines and minimizes E(S). (If there is no
feasible schedule, we assume that the energy value is +.) Note that the special case L = 1 corresponds to
simply minimizing the number of gaps.
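As a quick illustration of this energy function, the formula can be evaluated directly from the block list. This is a sketch with our own representation (`blocks` as a sorted list of `(u_i, t_i)` pairs), not code from the paper:

```python
def wasted_energy(blocks, L):
    """Wasted energy of a schedule given its blocks [(u_1, t_1), ..., (u_q, t_q)],
    sorted by time: each gap between consecutive blocks costs min(gap length, L).
    The first wake-up is not charged, matching the paper's convention."""
    total = 0
    for (u_prev, t_prev), (u, t) in zip(blocks, blocks[1:]):
        total += min(u - t_prev, L)
    return total
```

With L = 4, blocks [(0, 3), (5, 7), (20, 22)] yield a short gap of length 2 (kept on, cost 2) and a long gap of length 13 (suspend and wake up, cost L = 4).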
Simplifying assumptions. Throughout the paper we assume that jobs are ordered according to deadlines, that is d_1 ≤ ... ≤ d_n. Without loss of generality, we also assume that all release times are distinct and that all deadlines are distinct. Indeed, if r_i = r_j for some jobs i < j, since the jobs cannot both start at the same time r_i, we might as well increase by 1 the release time of j. A similar argument applies to deadlines.

To simplify the presentation, we assume that the job indexed 1 is a special job with p_1 = 1 and d_1 = r_1 + 1, that is, job 1 has unit length and must be scheduled at its release time. (Otherwise we can always add such an extra job, released L + 1 time slots before r_1. This increases each schedule's energy by exactly L and does not affect the asymptotic running time of our algorithms.)
Without loss of generality, we can also assume that the input instance is feasible. A feasible schedule corresponds to a matching between units of jobs and time slots, so Hall's theorem gives us the following necessary and sufficient condition for feasibility: for all times u < v,

Σ_{j : u ≤ r_j, d_j ≤ v} p_j ≤ v − u. (1)
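Condition (1) can be checked by brute force. The following sketch uses our own job representation (tuples `(p_j, r_j, d_j)`); restricting u to release times and v to deadlines suffices, since the left-hand side of (1) only changes at those points:

```python
def is_feasible(jobs):
    """Check condition (1): for all u < v, the total processing time of jobs
    with u <= r_j and d_j <= v must be at most v - u.
    jobs: list of (p_j, r_j, d_j) triples."""
    releases = sorted({r for _, r, _ in jobs})
    deadlines = sorted({d for _, _, d in jobs})
    for u in releases:
        for v in deadlines:
            if u < v:
                # total demand that must fit entirely inside [u, v)
                load = sum(p for p, r, d in jobs if u <= r and d <= v)
                if load > v - u:
                    return False
    return True
```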
We can also restrict our attention to schedules S that satisfy the following earliest-deadline property: at any time t, either S is idle at t or it schedules a pending job with the earliest deadline. In other words, once the support of S is fixed, the jobs in the support are scheduled according to the earliest deadline policy. Using the standard exchange argument, any schedule can be converted into one that satisfies the earliest-deadline property and has the same support.
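The earliest-deadline policy on a fixed support can be sketched as follows; the representation (a dict of jobs, a list of busy slots) and the helper name are our own, not the paper's:

```python
import heapq

def edf_on_support(jobs, support):
    """Schedule jobs on a fixed set of busy time slots by the
    earliest-deadline-first rule.
    jobs: dict j -> (p_j, r_j, d_j); support: sorted list of time slots.
    Returns a dict slot -> job, or None if some job misses its deadline."""
    remaining = {j: p for j, (p, r, d) in jobs.items()}
    schedule = {}
    pending = []          # min-heap of (d_j, j) over released, unfinished jobs
    released = set()
    for t in support:
        for j, (p, r, d) in jobs.items():
            if r <= t and j not in released:
                heapq.heappush(pending, (d, j))
                released.add(j)
        while pending and remaining[pending[0][1]] == 0:
            heapq.heappop(pending)    # discard finished jobs
        if pending:
            d, j = pending[0]
            if d <= t:                # slot [t, t+1) ends after the deadline
                return None
            schedule[t] = j
            remaining[j] -= 1
    if any(v > 0 for v in remaining.values()):
        return None                   # support too small for the workload
    return schedule
```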

(k, s)-Schedules. We will consider certain partial schedules, that is, schedules that execute only some jobs from the instance. For jobs k and s, a partial schedule S is called a (k, s)-schedule if it schedules all jobs j ≤ k with r_s ≤ r_j < C_max(S) (recall that C_max(S) denotes the completion time of schedule S). From now on, unless ambiguity arises, we will omit the term “partial” and refer to partial schedules simply as schedules. When we say that a (k, s)-schedule S has g gaps, in addition to the gaps between the blocks we also count the gap (if any) between r_s and the first block of S. For any k, s, the empty schedule is also considered to be a (k, s)-schedule. The completion time of an empty (k, s)-schedule is artificially set to r_s. (Note that, in this convention, empty (k, s)-schedules, for different choices of k, s, are considered to be different schedules.)
The following “compression lemma” will be useful in some proofs.
Lemma 1. Let Q be a (k, s)-schedule with C_max(Q) = u, and let R be a (k, s)-schedule with C_max(R) = v > u and at most g gaps. Suppose that there is a time t, u < t ≤ v, such that there are no jobs i ≤ k with u ≤ r_i < t, and that R executes some job m < k with r_m ≤ u at or after time t. Then there is a (k, s)-schedule R′ with completion time t and at most g gaps.
Proof. We can assume that R has the earliest-deadline property. We convert R into R′ by gradually reducing the completion time, without increasing the number of gaps.

Call a time slot z of R fixed if R executes some job j at time z and either z = r_j or all times r_j, r_j + 1, ..., z − 1 are fixed as well. Let [w, v] be the last block of R and let j be the job executed at time v − 1. If v = t, we are done. For v > t we show that we can reduce C_max(R) while preserving the assumptions of the lemma.

Suppose first that the slot v − 1 is not fixed. In this case, execute the following operation Shift: for each non-fixed slot in [w, v] move the job unit in this slot to the previous non-fixed slot in R. Shift reduces C_max(R) by 1 without increasing the number of gaps. We still need to justify that R is a feasible (k, s)-schedule. To this end, it is sufficient only to show that no job will be scheduled before its release time. Indeed, if a job i is executed at a non-fixed time z, where w ≤ z < v, then, by definition, z > r_i and there is a non-fixed slot in [r_i, z − 1], and therefore after Shift, i will be scheduled at or after r_i.
The other case is when the slot v − 1 is fixed. In this case, we claim that there is a job l such that w ≤ r_l < v and each job i executed in [r_l, v] satisfies r_i ≥ r_l. This l can be found as follows. If v − 1 = r_j, let l = j. Otherwise, from all jobs executed in [r_j, v − 1] pick the job j′ with minimum r_{j′}. Suppose that j′ executes at v′, r_j ≤ v′ ≤ v − 1. Since, by definition, the slot v′ is fixed, we can apply this argument recursively, eventually obtaining the desired job l. We then perform the following operation Truncate: replace R by the segment of R in [r_s, r_l]. This decreases C_max(R) to r_l, and the new R is a feasible (k, s)-schedule, by the choice of l.
We repeat the process described above as long as v > t. Since the schedule at each step is a (k, s)-schedule, we end up with a (k, s)-schedule R′. Let C_max(R′) = t′ ≤ t. It is thus sufficient to prove that t′ = t. Indeed, consider the last step, when C_max(R) decreases to t′. Operation Truncate reduces C_max(R) to a completion time of a job released after t, so it cannot reduce it to t′. Therefore the last operation applied must have been Shift, which reduces C_max(R) by 1. Consequently, t′ = t, as claimed.
The U_{k,s,g} function. For any k = 0, ..., n, s = 1, ..., n, and g = 0, ..., n, define U_{k,s,g} as the maximum completion time of a (k, s)-schedule with at most g gaps. Our algorithms will compute the function U_{k,s,g} and use it to determine a minimum energy schedule.

Clearly, U_{k,s,g} ≤ d_k and, for any fixed s and g, the function k ↦ U_{k,s,g} is increasing (not necessarily strictly). For all k and s, the function g ↦ U_{k,s,g} increases as well. We claim that in fact it increases strictly

as long as U_{k,s,g} < d_k. Indeed, suppose that U_{k,s,g} = u < d_k and that U_{k,s,g} is realized by a (k, s)-schedule S with at most g gaps. We show that we can extend S to a schedule S′ with g + 1 gaps and C_max(S′) > C_max(S). If there is a job j ≤ k with r_j ≥ u, take j to be such a job with minimum r_j. We must have r_j > u, since otherwise we could add j to S, scheduling it at u without increasing the number of gaps, and thus contradicting the maximality of C_max(S). We thus obtain S′ by scheduling j at r_j. The second case is when r_j < u for all jobs j ≤ k. In particular, r_k < u. We obtain S′ by rescheduling k at u. (This creates an additional gap at the time slot where k was scheduled, for otherwise we would get a contradiction with the maximality of C_max(S).)
An outline of the algorithms. Our algorithms are based on dynamic programming, and they can be thought of as consisting of two stages. First, we compute the table U_{k,s,g}, using dynamic programming. From this table we can determine the minimum number of gaps in the (complete) schedule (it is equal to the smallest g for which U_{n,1,g} > max_j r_j). The algorithm computing U_{k,s,g} for unit jobs is called AlgA and the one for arbitrary length jobs is called AlgB.

In the second stage, described in Section 5 and called AlgC, we use the table U_{k,s,g} to compute the minimum energy schedule. In other words, we show that the problem of computing the minimum energy reduces to computing the minimum number of gaps. This reduction, itself, involves again dynamic programming.

When presenting our algorithms, we will only show how to compute the minimum energy value. The algorithms can be modified in a straightforward way to compute the actual optimum schedule, without increasing the running time. (In fact, we explain how to construct such schedules in the correctness proofs.)
3 Minimizing the Number of Gaps for Unit Jobs

In this section we give an O(n^4)-time algorithm for minimizing the number of gaps for unit jobs, that is, for 1|r_j; p_j = 1; L = 1|E. Recall that we assumed all release times to be different and all deadlines to be different, which implies that there is always a feasible schedule (providing that d_j > r_j for all j).
As explained in the previous section, the algorithm computes the table U_{k,s,g}. The crucial idea here is this: let S be a (k, s)-schedule that realizes U_{k,s,g}, that is, S has g gaps and C_max(S) = u is maximized. Suppose that in S job k is scheduled at some time t < u − 1. We show that then, without loss of generality, there is a job l released and scheduled at time t + 1. Further, the segment of S in [r_s, t] is a (k − 1, s)-schedule with completion time t, the segment of S in [t + 1, u] is a (k − 1, l)-schedule with completion time u, and the total number of gaps in these two schedules equals g. This naturally leads to a recurrence relation for U_{k,s,g}.
Algorithm AlgA. The algorithm computes all values U_{k,s,g}, for k = 0, ..., n, s = 1, ..., n and g = 0, ..., n, using dynamic programming. The minimum number of gaps for the input instance is equal to the smallest g for which U_{n,1,g} > max_j r_j.
To explain how to compute all values U_{k,s,g}, we give the recurrence relation. For the base case k = 0 we let U_{0,s,g} ← r_s for all s and g. For k ≥ 1, U_{k,s,g} is defined recursively as follows:

U_{k,s,g} ← max over l < k, h ≤ g of:
  • U_{k−1,s,g};
  • U_{k−1,s,g} + 1, if r_s ≤ r_k ≤ U_{k−1,s,g} and r_j ≠ U_{k−1,s,g} for all j < k;
  • d_k, if g > 0 and r_j < U_{k−1,s,g−1} for all j < k;
  • U_{k−1,l,g−h}, if r_k < r_l = U_{k−1,s,h} + 1. (2)

Citations
Journal ArticleDOI
TL;DR: Algorithmic solutions can help reduce energy consumption in computing environs.
Abstract: Algorithmic solutions can help reduce energy consumption in computing environs.

436 citations

Proceedings ArticleDOI
17 Jan 2012
TL;DR: In this article, the authors considered the energy conservation problem with a variable-speed processor equipped with a sleep state and derived an approximation factor of 4/3 for general convex power functions, and showed that no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2.
Abstract: We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, Irani et al. [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed s_crit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = s^α + γ, where s is the processor speed, Han et al. [11] gave an (α^α + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds for general convex power functions: no algorithm that constructs s_crit-schedules, which execute jobs at speeds of at least s_crit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions P(s) = βs^α + γ, we obtain an approximation of 137/117.

87 citations

Proceedings ArticleDOI
10 Mar 2011
TL;DR: This paper surveys algorithmic results on dynamic speed scaling in settings where (1) jobs have strict deadlines and (2) job flow times are to be minimized.
Abstract: Many modern microprocessors allow the speed/frequency to be set dynamically. The general goal is to execute a sequence of jobs on a variable-speed processor so as to minimize energy consumption. This paper surveys algorithmic results on dynamic speed scaling. We address settings where (1) jobs have strict deadlines and (2) job flow times are to be minimized.

64 citations

Journal ArticleDOI
TL;DR: This work addresses scheduling independent and precedence constrained parallel tasks on multiple homogeneous processors in a data center with dynamically variable voltage and speed as combinatorial optimization problems by adopting a two-level energy/time/power allocation scheme.

56 citations

Journal ArticleDOI
20 Jan 2013
TL;DR: It is shown that it is not sufficient, as some authors argue, to consider only individual invocations of a task; a schedule is defined that also takes interactions between invocations into account and is proven, in a theoretical fashion, to be optimal.
Abstract: Dynamic Power Management (DPM) and Dynamic Voltage and Frequency Scaling (DVFS) are popular techniques for reducing energy consumption. Algorithms for optimal DVFS exist, but optimal DPM and the optimal combination of DVFS and DPM are not yet solved.In this article we use well-established models of DPM and DVFS for frame-based systems. We show that it is not sufficient—as some authors argue—to consider only individual invocations of a task. We define a schedule that also takes interactions between invocations into account and prove—in a theoretical fashion—that this schedule is optimal.

52 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Sep 1995
TL;DR: Besides scheduling problems for single and parallel machines and shop scheduling problems, this book covers advanced models involving due-dates, sequence dependent changeover times and batching.
Abstract: Besides scheduling problems for single and parallel machines and shop scheduling problems, this book covers advanced models involving due-dates, sequence dependent changeover times and batching. Discussion also extends to multiprocessor task scheduling and problems with multi-purpose machines. Among the methods used to solve these problems are linear programming, dynamic programming, branch-and-bound algorithms, and local search heuristics. The text goes on to summarize complexity results for different classes of deterministic scheduling problems.

1,828 citations

Journal ArticleDOI
TL;DR: A survey of algorithmic research on power management, concentrating on the authors' own lines of work: speed scaling and power-down, currently the dominant techniques in practice.
Abstract: We survey recent research that has appeared in the theoretical computer science literature on algorithmic problems related to power management. We will try to highlight some open problems that we feel are interesting. This survey concentrates on the authors' own lines of research: managing power using the techniques of speed scaling and power-down, which are also currently the dominant techniques in practice.

286 citations
