
Showing papers in "Journal of Scheduling in 2003"


Journal ArticleDOI
TL;DR: This paper describes a framework for understanding rescheduling strategies, policies, and methods, based on a wide variety of experimental and practical approaches described in the rescheduling literature.
Abstract: Many manufacturing facilities generate and update production schedules, which are plans that state when certain controllable activities (e.g., processing of jobs by resources) should take place. Production schedules help managers and supervisors coordinate activities to increase productivity and reduce operating costs. Because a manufacturing system is dynamic and unexpected events occur, rescheduling is necessary to update a production schedule when the state of the manufacturing system makes it infeasible. Rescheduling updates an existing production schedule in response to disruptions or other changes. Though many studies discuss rescheduling, there are no standard definitions or classification of the strategies, policies, and methods presented in the rescheduling literature. This paper presents definitions appropriate for most applications of rescheduling manufacturing systems and describes a framework for understanding rescheduling strategies, policies, and methods. This framework is based on a wide variety of experimental and practical approaches that have been described in the rescheduling literature. The paper also discusses studies that show how rescheduling affects the performance of a manufacturing system, and it concludes with a discussion of how understanding rescheduling can bring closer some aspects of scheduling theory and practice.

818 citations


Journal ArticleDOI
TL;DR: This work examines the implications of minimizing an aggregate scheduling objective function in which jobs belonging to different customers are evaluated based on their individual criteria, and examines three basic scheduling criteria: minimizing makespan, minimizing maximum lateness, and minimizing total weighted completion time.
Abstract: We consider a scheduling problem involving a single processor being utilized by two or more customers. Traditionally, such scenarios are modeled by assuming that each customer has the same criterion. In practice, this assumption may not hold. Instead of using a single criterion, we examine the implications of minimizing an aggregate scheduling objective function in which jobs belonging to different customers are evaluated based on their individual criteria. We examine three basic scheduling criteria: minimizing makespan, minimizing maximum lateness, and minimizing total weighted completion time. Although determining a minimum-cost schedule according to any one of these criteria is polynomially solvable, we demonstrate that when minimizing a mix of these criteria, the problem becomes NP-hard.
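To make the aggregate objective concrete, here is a minimal sketch (hypothetical code, not from the paper) that evaluates one job sequence on a single machine, charging each customer's jobs under that customer's own criterion; the job tuple layout and criterion names are assumptions made for illustration.

```python
# Hypothetical sketch: evaluate a mixed per-customer objective for one job
# sequence on a single machine. Job = (customer, processing_time, due_date, weight).

def aggregate_cost(sequence, criterion_by_customer):
    """Sum each customer's cost under that customer's own criterion."""
    t = 0
    completion = {}                      # job index -> completion time
    for idx, (cust, p, d, w) in enumerate(sequence):
        t += p
        completion[idx] = t

    cost = 0.0
    customers = {cust for cust, *_ in sequence}
    for cust in customers:
        jobs = [(idx, j) for idx, j in enumerate(sequence) if j[0] == cust]
        crit = criterion_by_customer[cust]
        if crit == "makespan":
            cost += max(completion[idx] for idx, _ in jobs)
        elif crit == "max_lateness":
            cost += max(completion[idx] - j[2] for idx, j in jobs)
        elif crit == "total_weighted_completion":
            cost += sum(j[3] * completion[idx] for idx, j in jobs)
    return cost

# Customer "A" cares about maximum lateness, customer "B" about total
# weighted completion time.
seq = [("A", 3, 5, 1), ("B", 2, 10, 2), ("A", 4, 9, 1)]
print(aggregate_cost(seq, {"A": "max_lateness", "B": "total_weighted_completion"}))
```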

320 citations


Journal ArticleDOI
TL;DR: The network generator addresses the shortcomings of former network generators by employing a wide range of parameters which have been shown to serve as possible predictors of the hardness of different project scheduling problems.
Abstract: In this paper, we describe RanGen, a random network generator for generating activity-on-the-node networks and accompanying data for different classes of project scheduling problems. The objective is to construct random networks which satisfy preset values of the parameters used to control the hardness of a problem instance. Both parameters which are related to the network topology and resource-related parameters are implemented. The network generator addresses the shortcomings of former network generators since it employs a wide range of different parameters which have been shown to serve as possible predictors of the hardness of different project scheduling problems. Some of them have been implemented in former network generators while others have not.
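As a rough illustration of what such a generator produces, the following minimal sketch (hypothetical code, not RanGen itself) builds a random activity-on-node network as an acyclic precedence graph with random durations; the single density parameter is only a crude stand-in for the topology and resource parameters that RanGen controls precisely.

```python
import random

def random_activity_on_node_network(n_activities, arc_prob=0.2, seed=0):
    """Generate a random acyclic activity-on-node network.

    Activities are numbered 1..n; an arc (i, j) with i < j is included with
    probability arc_prob, so the precedence graph is acyclic by construction.
    arc_prob loosely controls network density.
    """
    rng = random.Random(seed)
    arcs = [(i, j)
            for i in range(1, n_activities + 1)
            for j in range(i + 1, n_activities + 1)
            if rng.random() < arc_prob]
    durations = {i: rng.randint(1, 10) for i in range(1, n_activities + 1)}
    return arcs, durations

arcs, durations = random_activity_on_node_network(8)
print(arcs)
print(durations)
```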

261 citations



Journal ArticleDOI
TL;DR: This paper deals with models, relaxations, and algorithms for an integrated approach to vehicle and crew scheduling for an urban mass transit system with a single depot, and proposes new mathematical formulations for integrated vehicle and crew scheduling problems along with corresponding Lagrangian relaxations and Lagrangian heuristics.
Abstract: This paper deals with models, relaxations, and algorithms for an integrated approach to vehicle and crew scheduling for an urban mass transit system with a single depot. We discuss potential benefits of integration and provide an overview of the literature which considers mainly partial integration. Our approach is new in the sense that we can tackle integrated vehicle and crew scheduling problems of practical size. We propose new mathematical formulations for integrated vehicle and crew scheduling problems and we discuss corresponding Lagrangian relaxations and Lagrangian heuristics. To solve the Lagrangian relaxations, we use column generation applied to set partitioning type of models. The paper is concluded with a computational study using real life data, which shows the applicability of the proposed techniques to practical problems. Furthermore, we also address the effectiveness of integration in different situations.

158 citations


Journal ArticleDOI
TL;DR: This work addresses the one-machine problem in which the jobs have distinct due dates, earliness costs, and tardiness costs and proposes a new lower bound based on the decomposition of each job into unary operations that are then assigned to time slots, which gives a preemptive schedule.
Abstract: We address the one-machine problem in which the jobs have distinct due dates, earliness costs, and tardiness costs. In order to determine the minimal cost of such a problem, a new lower bound is proposed. It is based on the decomposition of each job into unary operations that are then assigned to the time slots, which gives a preemptive schedule. Assignment costs are defined so that the minimum assignment cost is a valid lower bound. A branch-and-bound algorithm based on this lower bound and on some new dominance rules is experimentally tested.
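The construction can be sketched as an assignment problem: split each job into unit-time operations and assign them to time slots at minimum total cost. The per-unit cost used below is a simplified guess, so the value returned only illustrates the idea of the construction and is not the paper's validated lower bound.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_relaxation(jobs):
    """Illustrative assignment relaxation: split every job into unit-time
    operations and assign them to time slots 0..H-1 at minimum total cost.

    jobs: list of (processing_time, due_date, earliness_cost, tardiness_cost).
    The per-unit cost below (the job's early/tardy cost at the slot's end,
    divided by its processing time) is a simplified stand-in; the paper defines
    the assignment costs more carefully so that the optimum is a valid bound."""
    horizon = sum(p for p, *_ in jobs)
    rows = []
    for p, d, alpha, beta in jobs:
        for _ in range(p):
            rows.append([(alpha * max(d - (t + 1), 0) +
                          beta * max((t + 1) - d, 0)) / p
                         for t in range(horizon)])
    cost = np.array(rows)
    r, c = linear_sum_assignment(cost)
    return cost[r, c].sum()

print(assignment_relaxation([(2, 3, 1.0, 2.0), (3, 4, 1.0, 1.0)]))
```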

96 citations


Journal ArticleDOI
TL;DR: This work extends a result of Motwani, Phillips, and Torng, who proved that Equi-partition schedules fully parallelizable multiprocessor batch jobs with mean response time within a factor of two of optimal, to jobs with multiple phases of arbitrary nondecreasing and sublinear speedup functions.
Abstract: This work theoretically proves that Equi-partition efficiently schedules multiprocessor batch jobs with different execution characteristics. Motwani, Phillips, and Torng (Proc. 4th Annu. ACM-SIAM Symp. on Discrete Algorithms, pp. 422-431, Austin, 1993) show that the mean response time of jobs is within a factor of two of optimal for fully parallelizable jobs. We extend this result by considering jobs with multiple phases of arbitrary nondecreasing and sublinear speedup functions. Because Equi-partition has no knowledge of the jobs being scheduled (it is non-clairvoyant), one would not expect it to perform well. However, our main result shows that the mean response time obtained with Equi-partition is no more than 2 + √3 ≈ 3.73 times the optimal. The paper also considers schedulers with different numbers of preemptions and jobs with more general classes of speedup functions. Matching lower bounds are also proved.
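A minimal simulation of the Equi-partition rule itself (hypothetical code, assuming fully parallelizable jobs and a fixed scheduling quantum, whereas the paper allows general sublinear speedup functions):

```python
def equi_partition(job_work, m, quantum=1.0):
    """Simulate Equi-partition: at every quantum the m processors are split
    evenly among the unfinished jobs. Jobs are assumed fully parallelizable
    (remaining work decreases at a rate equal to the processor share), and
    jobs only finish at quantum boundaries -- both simplifications.
    Returns the completion time of each job."""
    remaining = dict(enumerate(job_work))
    finish, now = {}, 0.0
    while remaining:
        share = m / len(remaining)          # equal processor share per job
        for j in list(remaining):
            remaining[j] -= share * quantum
            if remaining[j] <= 1e-9:
                finish[j] = now + quantum
                del remaining[j]
        now += quantum
    return finish

print(equi_partition([4.0, 8.0, 2.0], m=4))
```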

57 citations


Journal ArticleDOI
TL;DR: The MINSPACE problem minimizes the maximum fullness among all slots in a feasible schedule where the fullness of a slot is the sum of the sizes of ads assigned to the slot.
Abstract: Consider a set of n advertisements (hereafter called "ads") A = {A1,...,An} competing to be placed in a planning horizon which is divided into N time intervals called slots. An ad Ai is specified by its size si and frequency wi. The size si represents the amount of space the ad occupies in a slot. Ad Ai is said to be scheduled if exactly wi copies of Ai are placed in the slots subject to the restriction that a slot contains at most one copy of an ad. In this paper, we consider two problems. The MINSPACE problem minimizes the maximum fullness among all slots in a feasible schedule, where the fullness of a slot is the sum of the sizes of ads assigned to the slot. For the MAXSPACE problem, in addition, we are given a common maximum fullness S for all slots. The total size of the ads placed in a slot cannot exceed S. The objective is to find a feasible schedule A' ⊆ A of ads such that the total occupied slot space Σ_{Ai ∈ A'} wi si is maximized. We examine the complexity status of both problems and provide heuristics with performance guarantees.
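For MINSPACE, a natural heuristic is to place larger ads first, always into the currently least-full slots. The sketch below is a hypothetical illustration of that idea, not necessarily the heuristic analyzed in the paper.

```python
import heapq

def minspace_greedy(ads, n_slots):
    """Greedy heuristic for MINSPACE (illustrative only): process ads in
    non-increasing size order and place the w_i copies of each ad into the
    w_i currently least-full slots.

    ads: list of (size, frequency); requires frequency <= n_slots.
    Returns (maximum fullness, slot contents)."""
    slots = [(0, k, []) for k in range(n_slots)]   # (fullness, slot id, ad ids)
    heapq.heapify(slots)
    for i, (size, freq) in sorted(enumerate(ads), key=lambda x: -x[1][0]):
        chosen = [heapq.heappop(slots) for _ in range(freq)]
        for fullness, k, content in chosen:
            heapq.heappush(slots, (fullness + size, k, content + [i]))
    return max(f for f, _, _ in slots), sorted((k, c) for _, k, c in slots)

print(minspace_greedy([(4, 2), (3, 3), (2, 1)], n_slots=3))
```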

55 citations


Journal ArticleDOI
TL;DR: The problem of finding the shortest route the robot should take when moving parts between machines in a flow-shop is proved to be NP-hard in the strong sense when the travel times between the machines of the cell are symmetric and satisfy the triangle inequality.
Abstract: We study the computational complexity of finding the shortest route the robot should take when moving parts between machines in a flow-shop. Though this complexity has already been addressed in the literature, the existing attempts made crucial assumptions which were not part of the original problem. Therefore, they cannot be deemed satisfactory. We drop these assumptions in this paper and prove that the problem is NP-hard in the strong sense when the travel times between the machines of the cell are symmetric and satisfy the triangle inequality. We also impose no restrictions on the times of robot arrival at and departure from machines, unlike the case in the related, but different, hoist scheduling problem. Our results hold for processing times equal on all machines in the cell. However, the equidistant case with equal processing times can be solved in O(1) time.

53 citations


Journal ArticleDOI
TL;DR: This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice.
Abstract: A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use is therefore important. The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.

51 citations


Journal ArticleDOI
TL;DR: A polynomial time approximation scheme whose running time depends only linearly on n is developed, which gives a substantial improvement of the best previously known polynomial bound.
Abstract: We consider the problem of scheduling n independent jobs on m identical machines that operate in parallel. Each job must be processed without interruption for a given amount of time on any one of the m machines. In addition, each job has a release date, when it becomes available for processing, and, after completing its processing, requires an additional delivery time. The objective is to minimize the time by which all jobs are delivered. In the notation of Graham et al. (1979), this problem is denoted P|rj|Lmax. We develop a polynomial time approximation scheme whose running time depends only linearly on n. This linear complexity bound gives a substantial improvement of the best previously known polynomial bound (Hall and Shmoys, 1989). Finally, we discuss the special case of this problem in which there is a single machine and present an improved approximation scheme.
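A simple greedy baseline for this problem (not the paper's approximation scheme) is to start, whenever a machine becomes free, the released job with the largest delivery time. The sketch below assumes jobs are given as (release, processing, delivery) triples.

```python
import heapq

def largest_delivery_first(jobs, m):
    """Greedy baseline: whenever a machine becomes free, start the
    released-but-unscheduled job with the largest delivery time; if none has
    been released yet, let the machine idle until the next release.

    jobs: list of (release_date, processing_time, delivery_time).
    Returns the time by which all jobs are delivered."""
    pending = sorted(jobs, key=lambda j: j[0])      # by release date
    machines = [0.0] * m                             # next free time per machine
    available = []                                   # max-heap on delivery time
    i, objective = 0, 0.0
    while i < len(pending) or available:
        t = min(machines)
        k = machines.index(t)
        while i < len(pending) and pending[i][0] <= t:
            r, p, q = pending[i]
            heapq.heappush(available, (-q, r, p))
            i += 1
        if not available:                            # idle until next release
            machines[k] = pending[i][0]
            continue
        neg_q, r, p = heapq.heappop(available)
        finish = t + p
        machines[k] = finish
        objective = max(objective, finish - neg_q)   # finish + delivery time
    return objective

print(largest_delivery_first([(0, 3, 7), (1, 2, 9), (2, 4, 1)], m=2))
```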

Journal ArticleDOI
TL;DR: This work develops a dynamic programming algorithm to generate all the nondominated schedule profiles for each product that are required to formulate the flowshop problem as a generalized traveling salesman problem, and develops and computationally tests an efficient heuristic for this problem.
Abstract: Lot streaming involves splitting a production lot into a number of sublots, in order to allow the overlapping of successive operations, in multi-machine manufacturing systems. In no-wait flowshop scheduling, sublots are necessarily consistent, that is, they remain the same over all machines. The benefits of lot streaming include reductions in lead times and work-in-process, and increases in machine utilization rates. We study the problem of minimizing the makespan in no-wait flowshops producing multiple products with attached setup times, using lot streaming. Our study of the single product problem resolves an open question from the lot streaming literature. The intractable multiple product problem requires finding the optimal number of sublots, sublot sizes, and a product sequence for each machine. We develop a dynamic programming algorithm to generate all the nondominated schedule profiles for each product that are required to formulate the flowshop problem as a generalized traveling salesman problem. This problem is equivalent to a classical traveling salesman problem with a pseudopolynomial number of cities. We develop and computationally test an efficient heuristic for this problem. Our results indicate that solutions can quickly be found for flowshops with up to 10 machines and 50 products. Moreover, the solutions found by our heuristic provide a substantial improvement over previously published results.

Journal ArticleDOI
TL;DR: This work considers a relaxed version of the open shop scheduling problem–the “concurrent open shop” scheduling problem, in which any two operations of the same job on distinct machines are allowed to be processed concurrently, and gives a (1 + d)-approximation algorithm for the problem.
Abstract: We consider a relaxed version of the open shop scheduling problem, the "concurrent open shop" scheduling problem, in which any two operations of the same job on distinct machines are allowed to be processed concurrently. The completion time of a job is the maximum completion time of its operations. The objective is to schedule the jobs so as to minimize the weighted number of tardy jobs, with 0-1 operation processing times and a common due date d. We show that, even when the weights are identical, the problem has no (1 − ε) ln m-approximation algorithm for any ε > 0 if NP is not a subset of DTIME(n^(log log n)), and has no c · ln m-approximation algorithm for some constant c > 0 if P ≠ NP, where m is the number of machines. This also implies that the problem is strongly NP-hard. We also give a (1 + d)-approximation algorithm for the problem.

Journal ArticleDOI
TL;DR: The effect of the notification model on the non-preemptive scheduling of a single resource in order to maximize utilization is studied and alternate algorithms which provide immediate notification are presented, while matching most of the performance guarantees which are possible by schedulers which provide no such notification.
Abstract: When admission control is used, an on-line scheduler chooses whether or not to complete each individual job successfully by its deadline. An important consideration is at what point in time the scheduler determines if a job request will be satisfied, and thus at what point the scheduler is able to provide notification to the job owner as to the fate of the request. In the loosest model, often seen in real-time systems, such a decision can be deferred up until the job's deadline passes. In the strictest model, more suitable for customer-based applications, a scheduler would be required to give notification at the instant that a job request arrives. Unfortunately there seems to be little existing research which explicitly studies the effect of the notification model on the performance guarantees of a scheduler. We undertake such a study by reexamining a problem from the literature. Specifically, we study the effect of the notification model on the non-preemptive scheduling of a single resource in order to maximize utilization. At first glance, it appears severely more restrictive to compare a scheduler required to give immediate notification to one which need not give any notification. Yet we are able to present alternate algorithms which provide immediate notification, while matching most of the performance guarantees which are possible by schedulers which provide no such notification. In only one case are we able to give evidence that providing immediate notification may be more difficult.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the case where all tickets have the same price and requests are treated fairly, that is, a request which can be fulfilled must be granted, and they provided an asymptotically matching upper bound to the existing lower bound which states that all fair algorithms for this problem are ½-competitive on accommodating sequences, when there are at least three seats.
Abstract: The unit price seat reservation problem is investigated. The seat reservation problem is the problem of assigning seat numbers on-line to requests for reservations in a train traveling through k stations. We are considering the version where all tickets have the same price and where requests are treated fairly, that is, a request which can be fulfilled must be granted. For fair deterministic algorithms, we provide an asymptotically matching upper bound to the existing lower bound which states that all fair algorithms for this problem are ½-competitive on accommodating sequences, when there are at least three seats. Additionally, we give an asymptotic upper bound of 7/9 for fair randomized algorithms against oblivious adversaries. We also examine concrete on-line algorithms, First-Fit and Random, for the special case of two seats. Tight analyses of their performance are given.
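First-Fit for this problem can be sketched as follows (hypothetical code): each request is an interval of travel segments, and it is assigned to the lowest-numbered seat that is free over the whole interval, being rejected only when no seat fits, as fairness requires.

```python
def first_fit_seat_reservation(requests, n_seats, n_segments):
    """First-Fit for the (unit-price) seat reservation problem: each request
    is a half-open interval [start, end) of travel segments; assign it to the
    lowest-numbered seat free over the whole interval, and reject it only if
    every seat is occupied somewhere on the interval (the fairness rule).
    Returns the assigned seat per request (None = rejected)."""
    occupied = [[False] * n_segments for _ in range(n_seats)]
    assignment = []
    for start, end in requests:
        seat = None
        for s in range(n_seats):
            if not any(occupied[s][start:end]):
                seat = s
                for segment in range(start, end):
                    occupied[s][segment] = True
                break
        assignment.append(seat)
    return assignment

# Three seats; a train with six stations has five travel segments 0..4.
print(first_fit_seat_reservation([(0, 3), (2, 5), (1, 4), (3, 5)], 3, 5))
```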

Journal ArticleDOI
TL;DR: This paper considers the on-line scheduling of jobs that may be competing for mutually exclusive resources, and devise algorithms which maintain a set of invariants which bound the accumulation of jobs on cliques (in the case of bipartite graphs, edges) in the graph.
Abstract: In this paper, we consider the on-line scheduling of jobs that may be competing for mutually exclusive resources. We model the conflicts between jobs with a conflict graph, so that the set of all concurrently running jobs must form an independent set in the graph. This model is natural and general enough to have applications in a variety of settings; however, we are motivated by the following two specific applications: traffic intersection control and session scheduling in high speed local area networks with spatial reuse. Our results focus on two special classes of graphs motivated by our applications: bipartite graphs and interval graphs. The cost function we use is maximum response time. In all of the upper bounds, we devise algorithms which maintain a set of invariants which bound the accumulation of jobs on cliques (in the case of bipartite graphs, edges) in the graph. The lower bounds show that the invariants maintained by the algorithms are tight to within a constant factor. For a specific graph which arises in the traffic intersection control problem, we show a simple algorithm which achieves the optimal competitive ratio.
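The model can be illustrated with a toy scheduler (hypothetical code, assuming unit-length jobs) that, in each time step, runs a maximal independent set of waiting jobs, oldest first; the paper's algorithms maintain more refined invariants on cliques in order to bound maximum response time.

```python
def greedy_conflict_scheduler(arrivals, conflicts):
    """Toy conflict-graph scheduler: each time step, greedily run a maximal
    independent set of waiting (unit-length) jobs, oldest first.

    arrivals: dict time -> list of job ids.
    conflicts: set of frozenset({u, v}) pairs that cannot run concurrently.
    Returns each job's response time (finish - arrival)."""
    waiting, response, t = [], {}, 0
    horizon = max(arrivals) if arrivals else 0
    while t <= horizon or waiting:
        for job in arrivals.get(t, []):
            waiting.append((t, job))
        running = []
        for arr, job in list(waiting):               # oldest first
            if all(frozenset({job, other}) not in conflicts
                   for _, other in running):
                running.append((arr, job))
        for arr, job in running:
            response[job] = t + 1 - arr
            waiting.remove((arr, job))
        t += 1
    return response

conflicts = {frozenset({"a", "b"}), frozenset({"b", "c"})}
print(greedy_conflict_scheduler({0: ["a", "b"], 1: ["c"]}, conflicts))
```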

Journal ArticleDOI
TL;DR: This paper shows that the single machine batching problem with family setup times to minimize maximum lateness is strongly NP-hard.
Abstract: In this paper, we consider the single machine batching problem with family setup times to minimize maximum lateness. While the problem was proved to be binary NP-hard in 1978, whether the problem is strongly NP-hard is a long-standing open question. We show that this problem is strongly NP-hard.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a slotted queueing system with C servers (processors) that can handle tasks and examine the impact of various task allocation strategies on the mean number of tasks in the system and the mean response time of tasks.
Abstract: We consider a slotted queueing system with C servers (processors) that can handle tasks (jobs). Tasks arrive in batches of random size at the start of every slot. Any task can be executed by any server in one slot with success probability α. If a task execution fails, then the task must be handled in some later time slot until it has been completed successfully. Tasks may be processed by several servers simultaneously. In that case, the task is completed successfully if the task execution is successful on at least one of the servers. We examine the impact of various allocation strategies on the mean number of tasks in the system and the mean response time of tasks. It is proven that both these performance measures are minimized by the strategy which always distributes the tasks over the servers as evenly as possible. Subsequently, we determine the distribution of the number of tasks in the system for a broad class of task allocation strategies, which includes the above optimal strategy as a special case. Some numerical experiments are performed to illustrate the performance characteristics of the various strategies.
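A plausible concretization of the "as evenly as possible" strategy (hypothetical code, not from the paper): spread the C servers over the tasks of the current slot so that replication counts differ by at most one, and let a task succeed if at least one of its servers succeeds.

```python
import random

def balanced_allocation(task_ids, c_servers):
    """Assign servers to tasks for one slot, spreading servers as evenly as
    possible over the tasks. Each server runs at most one task per slot; if
    there are more tasks than servers, only the first c tasks are attempted
    and the rest wait for a later slot."""
    if not task_ids:
        return {}
    attempted = task_ids[:c_servers]
    base, extra = divmod(c_servers, len(attempted))
    allocation, s = {}, 0
    for i, task in enumerate(attempted):
        count = base + (1 if i < extra else 0)
        allocation[task] = list(range(s, s + count))
        s += count
    return allocation

def simulate_slot(allocation, alpha, rng=random.Random(1)):
    """A task completes in the slot if at least one assigned server succeeds
    (each server succeeds independently with probability alpha)."""
    return {task: any(rng.random() < alpha for _ in servers)
            for task, servers in allocation.items()}

alloc = balanced_allocation(["t1", "t2", "t3"], c_servers=8)
print(alloc)            # {'t1': [0, 1, 2], 't2': [3, 4, 5], 't3': [6, 7]}
print(simulate_slot(alloc, alpha=0.6))
```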

Journal ArticleDOI
TL;DR: This work reconsiders the version of the traveling salesman problem (TSP) first studied in a well-known paper by Gilmore and Gomory (1964), and solves this TSP variant by an O(n log n) algorithm considerably simpler than previously known algorithms.
Abstract: We reconsider the version of the traveling salesman problem (TSP) first studied in a well-known paper by Gilmore and Gomory (1964). In this, the distance between two cities A and B is an integrable function of the x-coordinate of A and the y-coordinate of B. This problem finds important applications in machine scheduling, workforce planning, and combinatorial optimization. We solve this TSP variant by an O(n log n) algorithm considerably simpler than previously known algorithms. The new algorithm demonstrates and exploits the structure of an optimal solution, and recreates it using minimal storage space without the use of edge interchanges.


Journal ArticleDOI
TL;DR: An abstract on-line scheduling problem where the size of each requested service can be scaled down by the scheduler is studied, which embodies a notion of “Level of Service” that is increasingly important in multimedia applications.
Abstract: Motivated by an application in thinwire visualization, we study an abstract on-line scheduling problem where the size of each requested service can be scaled down by the scheduler. Thus, our problem embodies a notion of "Level of Service" that is increasingly important in multimedia applications. We give two schedulers FirstFit and EndFit based on two simple heuristics, and generalize them into a class of greedy schedulers. We show that both FirstFit and EndFit are 2-competitive, and any greedy scheduler is 3-competitive. These bounds are shown to be tight.

Journal ArticleDOI
TL;DR: The scheduling situation where n tasks, subjected to release dates and due dates, have to be scheduled on m parallel processors is studied and it is shown that the minimum maximal tardiness can be computed in polynomial time.
Abstract: We study the scheduling situation where n tasks, subjected to release dates and due dates, have to be scheduled on m parallel processors. We show that, when tasks have unit processing times and either require 1 or m processors simultaneously, the minimum maximal tardiness can be computed in polynomial time. Two algorithms are described. The first one is based on a linear programming formulation of the problem while the second one is a combinatorial algorithm. The complexity status of this "tall/small" task scheduling problem P|ri, pi = 1, sizei ∈ {1,m}|Tmax was unknown before, even for two processors.

Journal ArticleDOI
TL;DR: In this article, the on-line caching problem in a restricted cache where each memory item can be placed in only a restricted subset of cache locations has been studied, and the results show that restricted caches are significantly more complex than identical caches.
Abstract: We study the on-line caching problem in a restricted cache where each memory item can be placed in only a restricted subset of cache locations. Examples of restricted caches in practice include victim caches, assist caches, and skew caches. To the best of our knowledge, all previous on-line caching studies have considered on-line caching in identical or fully-associative caches where every memory item can be placed in any cache location. In this paper, we focus on companion caches, a simple restricted cache that includes victim caches and assist caches as special cases. Our results show that restricted caches are significantly more complex than identical caches. For example, we show that the commonly studied Least Recently Used algorithm is not competitive unless cache reorganization is allowed, while the performance of the First In First Out algorithm is competitive but not optimal. We also present two near optimal algorithms for this problem as well as lower bound arguments.

Journal ArticleDOI
TL;DR: In this article, the problem of batching parts and scheduling their operations in flexible manufacturing cells is considered, where the objective is to minimize the total number of setups, given that each part requires a sequence of operations, and each operation requires a given tool.
Abstract: In this paper we consider the problem of batching parts and scheduling their operations in flexible manufacturing cells. We consider the case in which there is only one processor and no more than k parts may be present in the system at the same time. The objective is to minimize the total number of setups, given that each part requires a sequence of operations, and each operation requires a given tool. We prove that even for k ≥ 3 the problem is NP-hard and we develop a branch-and-price scheme for its solution. Moreover, we report extensive computational experience. Finally, we analyze some special cases and related problems.

Journal ArticleDOI
TL;DR: The layered preemptive priority scheduling policy is defined, which generalizes fixed preemptive priorities by combination with other policies in a layered structure, and the concept of majorizing work arrival function is introduced to synthesize essential ideas used in existing analysis of the fixed preemptive priority policy.
Abstract: The analysis of fixed priority preemptive scheduling has been extended in various ways to improve its usefulness for the design of real-time systems. In this paper, we define the layered preemptive priority scheduling policy which generalizes fixed preemptive priorities by combination with other policies in a layered structure. In particular, the combination with the Round Robin scheduling policy is studied. Its compliance with Posix 1003.1b requirements is shown and its timing analysis is provided. For this purpose and as a basis for the analysis of other policies, the concept of majorizing work arrival function is introduced to synthesize essential ideas used in existing analysis of the fixed preemptive priority policy. If critical resources are protected by semaphores, the Priority Ceiling Protocol (PCP) can be used under fixed preemptive priorities to control resulting priority inversions. An extension of the PCP is proposed for Round Robin, to allow a global control of priority inversions under the layered priority policy and to prevent deadlocks. The initial timing analysis is extended to account for the effects of the protocol. The results are illustrated by a small test case.

Journal ArticleDOI
TL;DR: The first barely random on-line scheduling algorithms for identical machines are presented; unlike the algorithms of Bartal et al. and Seiden, which potentially make a random choice for each job scheduled, they are distributions over a constant number of deterministic strategies and use less space and time, asymptotically.
Abstract: We consider randomized algorithms for on-line scheduling on identical machines. For two machines, a randomized algorithm achieving a competitive ratio of 4/3 was found by Bartal et al. (1995). Seiden has presented a randomized algorithm which achieves competitive ratios of 1.55665, 1.65888, 1.73376, 1.78295, and 1.81681, for m = 3, 4, 5, 6, 7, respectively (Seiden, 2000). A barely random algorithm is one which is a distribution over a constant number of deterministic strategies. The algorithms of Bartal et al. and Seiden are not barely random; in fact, these algorithms potentially make a random choice for each job scheduled. We present the first barely random on-line scheduling algorithms. In addition, our algorithms use less space and time than the previous algorithms, asymptotically.

Journal ArticleDOI
TL;DR: It is shown that the single machine multi-operation jobs scheduling problem remains strongly NP-hard even when the due-dates are common and all jobs have the same processing time.
Abstract: We consider the single machine multi-operation jobs scheduling problem to minimize the number of tardy jobs. Each job consists of several operations that belong to different families. In a schedule, each family of job operations may be processed in batches with each batch incurring a setup time. A job completes when all of its operations have been processed. The objective is to minimize the number of tardy jobs. In the literature, this problem has been proved to be strongly NP-hard for arbitrary due-dates. We show in this paper that the problem remains strongly NP-hard even when the due-dates are common and all jobs have the same processing time.


Journal ArticleDOI
TL;DR: The classical objective functions of minimizing makespan and minimizing average completion time of the jobs are studied, and one of the highlights is that List Scheduling is a best possible algorithm for the makespan problem under the real-time model if the number of machines does not exceed the number of threads by more than 1.
Abstract: On-line scheduling problems are studied with jobs organized in a number of sequences called threads. Each job becomes available as soon as a scheduling decision is made on all preceding jobs in the same thread. We consider two different on-line paradigms. The first one models a sort of batch process: a schedule is constructed, in an on-line way, which is to be executed later. The other one models a real-time planning situation: jobs are immediately executed at the moment they are assigned to a machine. The classical objective functions of minimizing makespan and minimizing average completion time of the jobs are studied. We establish a fairly complete set of results for these problems. One of the highlights is that List Scheduling is a best possible algorithm for the makespan problem under the real-time model if the number of machines does not exceed the number of threads by more than 1. Another one is a polynomial time best possible algorithm for minimizing the average completion time on a single machine under both on-line paradigms.
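List Scheduling itself is easy to state: take jobs in the order in which decisions must be made and assign each to the least loaded machine. The sketch below is the generic rule for a single stream of jobs; the paper's thread model only changes the order in which decisions become available.

```python
import heapq

def list_scheduling(processing_times, m):
    """Classic List Scheduling: take jobs in the given order and assign each
    to the machine with the smallest current load.
    Returns the makespan and the assignment (job index -> machine)."""
    loads = [(0.0, k) for k in range(m)]       # (current load, machine id)
    heapq.heapify(loads)
    assignment = {}
    for j, p in enumerate(processing_times):
        load, k = heapq.heappop(loads)
        assignment[j] = k
        heapq.heappush(loads, (load + p, k))
    return max(load for load, _ in loads), assignment

print(list_scheduling([3, 1, 4, 1, 5, 9, 2], m=3))
```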

Journal ArticleDOI
TL;DR: It is shown that using additional unit-speed processors instead of a faster processor is a possible but not cost effective way to achieve an O(1) competitive ratio.
Abstract: This paper studies on-line scheduling in a single-processor system that allows preemption. The aim is to maximize the total value of jobs completed by their deadlines. It is known that if the on-line scheduler is given a processor faster (say, two times faster) than the off-line scheduler, then there exists an on-line algorithm called SLACKER that can achieve an O(1) competitive ratio. In this paper, we show that using additional unit-speed processors instead of a faster processor is a possible but not cost effective way to achieve an O(1) competitive ratio. Specifically, we find that Θ(log k) unit-speed processors are required, where k is the importance ratio. Another contribution of this paper is an improved analysis of the competitiveness of SLACKER; this new analysis enables us to show that SLACKER, when extended to multi-processor systems, can still guarantee an O(1) competitive ratio.