
Showing papers in "Journal of Scheduling in 2000"


Journal ArticleDOI
TL;DR: Empirical results based on 52 weeks of live data show how features of the structure of the constraints are able to improve an unsuccessful canonical GA to the point where it is able to provide a practical solution to the problem.
Abstract: There is considerable interest in the use of genetic algorithms to solve problems arising in the areas of scheduling and timetabling. However, the classical genetic algorithm paradigm is not well equipped to handle the conflict between objectives and constraints that typically occurs in such problems. In order to overcome this, successful implementations frequently make use of problem specific knowledge. This paper is concerned with the development of a GA for a nurse rostering problem at a major UK hospital. The structure of the constraints is used as the basis for a co-evolutionary strategy using co-operating sub-populations. Problem specific knowledge is also used to define a system of incentives and disincentives, and a complementary mutation operator. Empirical results based on 52 weeks of live data show how these features are able to improve an unsuccessful canonical GA to the point where it is able to provide a practical solution to the problem.

252 citations


Journal ArticleDOI
TL;DR: A new on-line algorithm, MR, is presented for non-preemptive scheduling of jobs with known processing times on m identical machines which beats the best previous algorithm for m⩾64.
Abstract: We present a new on-line algorithm, MR, for non-preemptive scheduling of jobs with known processing times on m identical machines which beats the best previous algorithm for m⩾64. As m→∞, its competitive ratio approaches 1+\sqrt{(1+\ln 2)/2}<1.9201. Copyright © 2000 John Wiley & Sons, Ltd.

149 citations


Journal ArticleDOI
TL;DR: A model and a solution approach for integration of steel continuous casters and hot strip mills to provide more responsive steel production at lower unit cost is presented and has demonstrated significant savings.
Abstract: In this paper we present a model and a solution approach for integration of steel continuous casters and hot strip mills to provide more responsive steel production at lower unit cost. We describe the production environment and survey existing methods for planning continuous casters and hot strip mills. Since these processes lie at the solid/liquid interface we use ‘virtual’ slabs, corresponding to possible (but not yet cast) solid forms of liquid steel, as a means of communication between hot strip mill and continuous caster. We model the planning problem as a hybrid network. Our model is solved using a combination of mathematical programming and heuristic techniques and we show that the solutions provided are very nearly optimal. The approach which we describe has been implemented at several steel mills worldwide and has demonstrated significant savings. Copyright © 2000 John Wiley & Sons, Ltd.

93 citations


Journal ArticleDOI
TL;DR: In this article, a large step random walk is proposed to minimize the total weighted tardiness of the n jobs in a job shop with m machines, where each job has a specified sequence to be processed by the machines.
Abstract: We consider a job shop with m machines. There are n jobs and each job has a specified sequence to be processed by the machines. Job j has release date rj, due date dj, weight wj and processing time pij on machine i (i=1,…,m). The objective is to minimize the total weighted tardiness of the n jobs. We describe and analyse a large step random walk which uses different neighbourhood sizes depending on whether the algorithm performs a small step or a large step. The small step consists of iterative improvement, while the large step consists of a Metropolis algorithm. Computational testing of the large step random walk on 66 instances with 10 jobs and 10 machines shows that the large step random walk achieves better results for the given problem structure compared to an existing shifting bottleneck algorithm. We further show results for large instances with up to 50 jobs and 15 machines. Copyright © 2000 John Wiley & Sons, Ltd.
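The large step described above is a Metropolis step: an improving move is always accepted, while a worsening move is accepted with a probability that decays exponentially in the cost increase. A minimal sketch of that acceptance rule for a generic minimization objective (not the paper's exact neighbourhood or parameters; all names are illustrative):

```python
import math
import random

def metropolis_step(cost, neighbour_cost, temperature, rng=random.random):
    """Metropolis acceptance rule: always accept an improvement; accept a
    worsening move with probability exp(-delta / temperature)."""
    delta = neighbour_cost - cost
    if delta <= 0:
        return True          # improving (or equal) moves are always taken
    return rng() < math.exp(-delta / temperature)
```

In a large step random walk of the kind analysed above, such a rule would drive the large steps, while the small steps accept only strict improvements (plain iterative improvement).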

86 citations


Journal ArticleDOI
TL;DR: This work considers the problem of scheduling a reentrant flow shop with sequence-dependent setup times to minimize maximum lateness and proposes an enhanced DM which gives promising results for these difficult scheduling problems.
Abstract: We consider the problem of scheduling a reentrant flow shop with sequence-dependent setup times to minimize maximum lateness. We develop a series of decomposition methods (DMs) exploring the importance of components such as subproblem solution method, bottleneck identification and reoptimization method on the solution time/quality tradeoff. Based on these results, we propose an enhanced DM which gives promising results for these difficult scheduling problems. Our results also yield interesting insights into the strengths and limitations of DMs. Copyright © 2000 John Wiley & Sons, Ltd.

81 citations


Journal ArticleDOI
TL;DR: A new preemptive algorithm that is well suited for fair on-line scheduling of parallel jobs and achieves a constant competitive ratio for both the makespan and the weighted completion time for the given weight selection is introduced.
Abstract: This paper introduces a new preemptive algorithm that is well suited for fair on-line scheduling of parallel jobs. Fairness is achieved by selecting job weights to be equal to the resource consumption of the job and by limiting the time span a job can be delayed by other jobs submitted after it. Further, the processing time of a job is not known when the job is released. It is proven that the algorithm achieves a constant competitive ratio for both the makespan and the weighted completion time for the given weight selection. Finally, the algorithm is also experimentally evaluated with the help of workload traces. Copyright © 2000 John Wiley & Sons, Ltd.

79 citations


Journal ArticleDOI
TL;DR: A decomposition algorithm that solves the problem by decomposing the flexible flow shop problem into a series of single-stage scheduling subproblems and an algorithm based on local search that combines the first two.
Abstract: Consider a flexible flow shop with s stages in series and at each stage a number of identical machines in parallel. There are n jobs to be processed and each job has to go through the stages following the same route. Job j has release date rj, due date dj, weight wj and a processing time pjl at stage l, l=1,…,s. The objective is to minimize the total weighted tardiness of the n jobs. In this paper we describe and analyse three heuristics. The first one is a decomposition algorithm that solves the problem by decomposing the flexible flow shop problem into a series of single-stage scheduling subproblems. The second one is an algorithm based on local search. The third heuristic is a hybrid algorithm that combines the first two. We conclude with a comparative study of the three heuristics. Copyright © 2000 John Wiley & Sons, Ltd.
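The objective shared by all three heuristics, total weighted tardiness, is straightforward to evaluate once a schedule's completion times are known. A small sketch (the function name and argument layout are illustrative, not from the paper):

```python
def total_weighted_tardiness(completions, due_dates, weights):
    """Sum over jobs j of w_j * max(0, C_j - d_j): each job is penalised
    by its weight times the amount by which it finishes past its due date."""
    return sum(w * max(0, c - d)
               for c, d, w in zip(completions, due_dates, weights))
```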

37 citations


Journal ArticleDOI
TL;DR: An on-line algorithm is proposed whose performance bound is equal to 1.5, which matches a known lower bound due to Vestjens. This is the first example of a situation in which the possibility of applying restarts reduces the worst-case performance bound, even though the processing times are known.
Abstract: We consider a single-machine on-line scheduling problem where jobs arrive over time. A set of independent jobs has to be scheduled on a single machine. Each job becomes available at its release date, which is not known in advance, and its characteristics, i.e. processing requirement and delivery time, become known at its arrival. The objective is to minimize the time by which all jobs have been delivered. In our model preemption is not allowed, but we are allowed to restart a job, that is, the processing of a job can be broken off to have the machine available to process an urgent job, but the time already spent on processing this interrupted job is considered to be lost. We propose an on-line algorithm and show that its performance bound is equal to 1.5, which matches a known lower bound due to Vestjens. For the same problem without restarts the optimal worst-case bound is known to be equal to (√5+1)/2 ≈ 1.61803; this is the first example of a situation in which the possibility of applying restarts reduces the worst-case performance bound, even though the processing times are known. Copyright © 2000 John Wiley & Sons, Ltd.
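For a fixed processing order on the single machine, the objective above (the time by which all jobs have been delivered) can be evaluated directly: each job waits for its release date, is processed, and then needs its delivery time on top of its completion time. A minimal sketch assuming each job is given as a (release, processing, delivery) triple; names are illustrative:

```python
def delivery_makespan(jobs):
    """Time by which all jobs are delivered when processed in the given
    order on one machine; each job is a triple (release r, processing p,
    delivery q)."""
    t = 0          # time at which the machine becomes free
    latest = 0     # latest delivery completion seen so far
    for r, p, q in jobs:
        t = max(t, r) + p        # wait for the release date, then process
        latest = max(latest, t + q)
    return latest
```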

36 citations


Journal ArticleDOI
TL;DR: This paper examines the single-machine model with family (or group) set-up times and a criterion of minimizing total weighted job completion time (weighted flowtime) and proposes new lower bounds for this problem, and then turns to refinement of a previously proposed branch-and-bound algorithm.
Abstract: A recent trend in the analysis of scheduling models integrates batching decisions with sequencing decisions. The interplay between batching and sequencing reflects the realities of the small-volume, high-variety manufacturing environment and adds a new feature to traditional scheduling problems. Practical interest in this topic has given rise to new research efforts, and there has been a series of articles in the research literature surveying the rapidly developing state of knowledge. Examples include Ghosh (1), Liaee and Emmons (2), Potts and Van Wassenhove (3), and Webster and Baker (4). This paper deals with an important theoretical and practical problem in this area. We examine the single-machine model with family (or group) set-up times and a criterion of minimizing total weighted job completion time (weighted flowtime). We propose new lower bounds for this problem, and then turn our attention to refinement of a previously proposed branch-and-bound algorithm. The benefits of our refinements are illustrated by computational experiments.

27 citations


Journal ArticleDOI
TL;DR: It is concluded that it is possible to reasonably extend traditional heuristics to include dynamic phenomena from the real world, and modelling the secondary impact of events is a significant factor in schedule generation.
Abstract: Real schedulers are observed to avoid scheduling rare and expensive jobs immediately after preventative maintenance or a machine repair. The repairs or similar events produce ‘aversion’ in these jobs. The authors create a class of heuristics called aversion dynamics (AD) which focus on similar ideas of attraction and aversion in production planning. A preliminary heuristic (Averse-1) introducing the topic and related issues is formulated and the computational analysis is presented. AD is specially designed to piggy-back on a traditional heuristic and functions for a limited time after which the traditional heuristic regains control. The study shows that when impact results from an event such as a repair, Averse-1 significantly out-performs other dispatching heuristics for a wide range of scheduling problems. Since there is uncertainty about the expected impact, the heuristic is also analysed for the situations when the schedule is adjusted but the impact does not occur. In these cases, the heuristic takes a conservative posture (in hindsight) and sub-optimizes for a limited time. The study shows that while there is an added cost, the cost is relatively small. We conclude that (i) it is possible to reasonably extend traditional heuristics to include dynamic phenomena from the real world, and (ii) modelling the secondary impact of events is a significant factor in schedule generation. Copyright © 2000 John Wiley & Sons, Ltd.

26 citations


Journal ArticleDOI
TL;DR: This work derives a qualitative divergence between off-line and on-line algorithms for the load-balancing problem, the problem of assigning a list of jobs on m identical machines to minimize the makespan, the maximum load on any machine.
Abstract: Previously, extra-resource analysis has been used to argue that certain on-line algorithms are good choices for solving specific problems because these algorithms perform well with respect to the optimal off-line algorithm when given extra resources. We now introduce a new application for extra-resource analysis: deriving a qualitative divergence between off-line and on-line algorithms. We do this for the load-balancing problem, the problem of assigning a list of jobs on m identical machines to minimize the makespan, the maximum load on any machine. We analyze the worst-case performance of on-line and off-line approximation algorithms relative to the performance of the optimal off-line algorithm when the approximation algorithms have k extra machines. Our main results are the following: the Longest-Processing-Time (LPT) algorithm will produce a schedule with makespan no larger than that of the optimal off-line algorithm if LPT has at least (4m−1)/3 machines while the optimal off-line algorithm has m machines. In contrast, no on-line algorithm can guarantee the same with any number of extra machines. Copyright © 2000 John Wiley & Sons, Ltd.
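The Longest-Processing-Time rule analysed above is classical list scheduling: sort the jobs by decreasing processing time and assign each one to the currently least-loaded machine. A minimal sketch (this is the standard LPT rule, not the paper's extra-machine analysis; names are illustrative):

```python
import heapq

def lpt_makespan(processing_times, m):
    """Longest-Processing-Time list scheduling on m identical machines:
    sort jobs by decreasing processing time, then give each job to the
    currently least-loaded machine. Returns the resulting makespan."""
    loads = [0] * m                  # min-heap of current machine loads
    heapq.heapify(loads)
    for p in sorted(processing_times, reverse=True):
        least = heapq.heappop(loads)  # least-loaded machine
        heapq.heappush(loads, least + p)
    return max(loads)
```

On the instance [3, 3, 2, 2, 2] with m = 2, LPT yields makespan 7 while the optimum is 6 ({3, 3} versus {2, 2, 2}), illustrating why, as studied above, LPT can need extra machines to match the off-line optimum.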

Journal ArticleDOI
TL;DR: This work gives a pseudopolynomial-time algorithm that, given an n-job instance whose optimal schedule has optimality criterion of value OPT, schedules a constant fraction of the n jobs within a constant factor times OPT.
Abstract: We consider a class of scheduling problems which includes a variety of problems that are exceedingly difficult to approximate (unless P=NP). In the face of very strong hardness results, we consider a relaxed notion of approximability and show that under this notion the problems yield constant-factor approximation algorithms (of a kind). Specifically, we give a pseudopolynomial-time algorithm that, given an n-job instance whose optimal schedule has optimality criterion of value OPT, schedules a constant fraction of the n jobs within a constant factor times OPT. In many cases this can be converted to a fully polynomial-time algorithm. We then study the experimental performance of this algorithm and some additional heuristics. Specifically, we consider a set of instances of a one-machine scheduling problem that we have studied previously in the context of traditional approximation algorithms, where the goal is to optimize average weighted flow time. We show that for the instances that were hardest empirically for previous traditional approximation algorithms, a large fraction of the set of jobs can be scheduled using these techniques, with good performance. Our results are based on the existence of approximation algorithms for the nonpreemptive scheduling of jobs with release dates and due dates on one machine so as to maximize the (weighted) number of on-time jobs. As an additional contribution, we generalize the state of the art for such problems, giving the first constant-factor approximation algorithms for the problem of scheduling jobs with resource requirements and release and due dates so as to optimize the weighted number of on-time jobs. In turn, this result further broadens the class of problems to which we can apply our relaxed-approximation result.

Journal ArticleDOI
TL;DR: Two new algorithms are presented: a (6 + 2√5 ≈ 10.47)-competitive deterministic algorithm and a 9.572-competitive randomized algorithm, both of which solve the on-line problem of assigning temporary jobs to related machines.
Abstract: We consider the on-line problem of assigning temporary jobs to related machines. In this model machines have speeds and the jobs are weighted and may be temporary (i.e. may expire after some unknown finite amount of time). The cost of an assignment is the maximum load on a machine at any time. We present two new algorithms: a (6 + 2√5 ≈ 10.47)-competitive deterministic algorithm and a 9.572-competitive randomized algorithm. The previously known best upper bound is achieved by a 20-competitive deterministic algorithm whose randomized version is (5e ≈ 13.59)-competitive.

Journal ArticleDOI
TL;DR: This work is in the process of building a proof-of-concept automated system for scheduling all the transportation for the United States military down to a low level of detail, using a multiagent society, with each agent performing a particular role for a particular organization.
Abstract: We are in the process of building a proof-of-concept automated system for scheduling all the transportation for the United States military down to a low level of detail. This is a huge problem currently handled by many hundreds of people across a large number and variety of organizations. Our approach is to use a multiagent society, with each agent performing a particular role for a particular organization. Use of a common multiagent infrastructure allows easy communication between agents, both within the transportation society and with external agents generating transportation requirements. We have demonstrated the feasibility of this approach on several large-scale deployment scenarios. Copyright © 2000 John Wiley & Sons, Ltd.


Journal ArticleDOI
TL;DR: It is proved that for n=m+1 the greedy algorithm is optimal, and an on-line algorithm with a competitive ratio of 1+e^{−(n/m)(1+o(1))} is presented.
Abstract: We consider load balancing in the following setting. The on-line algorithm is allowed to use n machines, whereas the optimal off-line algorithm is limited to m machines, for some fixed m

Journal ArticleDOI
TL;DR: In this paper, the problem of scheduling n independent weighted jobs on a constant number of unrelated parallel machines so as to minimize the weighted sum of job completion times Rm∣∣ ∑wjCj was studied.
Abstract: We study the problem of scheduling n independent weighted jobs on a constant number of unrelated parallel machines so as to minimize the weighted sum of job completion times Rm∣∣∑wjCj. We present an O(n log n) time approximation scheme for the non-preemptive case of this problem. Notice that when the number of machines is not a constant (i.e. R∣∣∑wjCj) then no PTAS is possible, unless P=NP. Copyright © 2000 John Wiley & Sons, Ltd.


Journal ArticleDOI
TL;DR: The paper deals with the classical problem of minimizing the makespan in a three-machine flow shop and presents a new sufficient condition for identifying the intermediate non-bottleneck machine which is weaker than all conditions proposed so far.
Abstract: The paper deals with the classical problem of minimizing the makespan in a three-machine flow shop. When any one of the three machines is a non-bottleneck machine, the problem is efficiently solvable by one of three algorithms from the literature. We show that even if one chooses the best solution, the worst-case performance ratio of these algorithms is 2, and the bound of 2 is tight. We also present a new sufficient condition for identifying the intermediate non-bottleneck machine which is weaker than all conditions proposed so far. Copyright © 2000 John Wiley & Sons, Ltd.
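For a fixed job order, the makespan of a three-machine (permutation) flow shop follows from the standard recurrence: each operation starts only when both its machine and the job's previous operation are finished. A minimal sketch of that evaluation (illustrative; the paper's three algorithms and its non-bottleneck condition are not reproduced here):

```python
def flow_shop_makespan(jobs):
    """Makespan of a permutation schedule in a three-machine flow shop.
    jobs is a sequence of (a, b, c) processing times on machines 1..3."""
    c1 = c2 = c3 = 0   # completion time of the last operation on each machine
    for a, b, c in jobs:
        c1 = c1 + a              # machine 1 processes jobs back to back
        c2 = max(c1, c2) + b     # machine 2 waits for machine 1 and itself
        c3 = max(c2, c3) + c     # machine 3 waits for machine 2 and itself
    return c3
```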


Journal ArticleDOI
TL;DR: For the classic dynamic storage/spectrum allocation problem, it is shown that knowledge of the durations of the requests is of no great use to an on-line algorithm in the worst case and the competitive ratio of every randomized algorithm against an oblivious adversary is Ω(log x/log log x).
Abstract: For the classic dynamic storage/spectrum allocation problem, we show that knowledge of the durations of the requests is of no great use to an on-line algorithm in the worst case. This answers an open question posed by Naor et al. [9]. More precisely, we show that the competitive ratio of every randomized algorithm against an oblivious adversary is Ω(log x/log log x), where x may be any of several different parameters used in the literature. It is known that First Fit, which does not require knowledge of the durations of the tasks, is logarithmically competitive in these parameters. Copyright © 2000 John Wiley & Sons, Ltd.
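First Fit, mentioned above, places each arriving request at the lowest address where it fits among the currently allocated blocks. A minimal sketch of that placement rule, assuming non-overlapping blocks represented as (start, length) intervals (the representation is illustrative, not from the paper):

```python
def first_fit(allocated, size):
    """First Fit for dynamic storage allocation: return the lowest start
    address at which a block of `size` fits, given the currently allocated
    non-overlapping (start, length) intervals."""
    addr = 0
    for start, length in sorted(allocated):
        if start - addr >= size:      # the gap before this block is big enough
            return addr
        addr = max(addr, start + length)
    return addr                        # otherwise place after the last block
```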