
Showing papers in "Annals of Operations Research in 1997"


Journal ArticleDOI
TL;DR: The paper argues that the incorporation of value judgements in DEA was motivated by applications of the method in real life organisations, and concentrates on the implications of weights restrictions on the efficiency, targets and peer comparators of inefficient Decision Making Units.
Abstract: This paper provides a review of the evolution, development and future research directions on the use of weights restrictions and value judgements in Data Envelopment Analysis. The paper argues that the incorporation of value judgements in DEA was motivated by applications of the method in real life organisations. The application driven development of the methods has led to a number of different approaches in the literature which have inevitably different uses and interpretations. The paper concentrates on the implications of weights restrictions on the efficiency, targets and peer comparators of inefficient Decision Making Units. The paper concludes with future research directions in the area of value judgements and weights restrictions.

699 citations
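
To make the preceding abstract's terminology concrete, here is a minimal sketch of the input-oriented CCR multiplier model with one assurance-region weight restriction added; the symbols, the particular ratio bound, and the choice of output weights u_1, u_2 are illustrative assumptions, not taken from the paper.

```latex
\[
\begin{aligned}
\max_{u,v}\quad & \sum_{r} u_r\, y_{r0} \\
\text{s.t.}\quad & \sum_{i} v_i\, x_{i0} = 1, \\
& \sum_{r} u_r\, y_{rj} - \sum_{i} v_i\, x_{ij} \le 0 \quad \text{for every DMU } j, \\
& \alpha \le u_1 / u_2 \le \beta \quad \text{(assurance-region restriction)}, \\
& u_r \ge 0, \quad v_i \ge 0 .
\end{aligned}
\]
```

Without the ratio bound this is the ordinary multiplier model; adding it restricts the value judgements the evaluated unit can express through its weights, which is how restrictions of this kind change efficiency scores, targets and peers.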


Journal ArticleDOI
TL;DR: This paper summarizes several studies of such systems, and derives a set of general principles that artificial multi-agent systems can use to support overall system behavior significantly more complex than the behavior of the individual agents.
Abstract: Agent architectures need to organize themselves and adapt dynamically to changing circumstances without top-down control from a system operator. Some researchers provide this capability with complex agents that emulate human intelligence and reason explicitly about their coordination, reintroducing many of the problems of complex system design and implementation that motivated increasing software localization in the first place. Naturally occurring systems of simple agents (such as populations of insects or other animals) suggest that this retreat is not necessary. This paper summarizes several studies of such systems, and derives from them a set of general principles that artificial multi-agent systems can use to support overall system behavior significantly more complex than the behavior of the individual agents.

504 citations


Journal ArticleDOI
TL;DR: An extended version of the disjunctive graph model is introduced that takes into account the fact that operations have to be assigned to machines, which allows an integrated approach to a generalization of the classical job-shop scheduling problem in which an operation can be performed on more than one machine.
Abstract: The problem considered in this paper is an important extension of the classical job-shop scheduling problem, where the same operation can be performed on more than one machine. The problem is to assign each operation to a machine and to sequence the operations on the machines, such that the makespan of a set of jobs is minimized. We introduce an extended version of the disjunctive graph model, that is able to take into account the fact that operations have to be assigned to machines. This allows us to present an integrated approach, by defining a neighborhood structure for the problem where there is no distinction between reassigning or resequencing an operation. This neighborhood is proved to be connected. A tabu search procedure is proposed and computational results are provided.

398 citations
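
For illustration only, a small Python sketch of a unified move of the kind described above, in which removing an operation from its machine and reinserting it at any position on any eligible machine covers both reassignment and resequencing in one move type; the data structures, names and the omission of job-precedence checks are assumptions of this sketch, not the authors' neighbourhood.

```python
def insertion_moves(assignment, eligible):
    """Enumerate candidate moves: take one operation out of its machine
    sequence and reinsert it at any position on any machine able to
    process it.  Same-machine reinsertion resequences the operation;
    cross-machine reinsertion reassigns it.
    assignment: machine -> ordered list of operations
    eligible:   operation -> set of machines that can process it"""
    for m_from, seq in assignment.items():
        for pos_from, op in enumerate(seq):
            for m_to in eligible[op]:
                # number of slots left after the operation has been removed
                slots = len(assignment[m_to]) - (1 if m_to == m_from else 0)
                for pos_to in range(slots + 1):
                    if m_to == m_from and pos_to == pos_from:
                        continue  # would recreate the current solution
                    yield (op, m_from, pos_from, m_to, pos_to)

# Illustrative instance: three operations, two machines.
assignment = {"M1": ["o1", "o2"], "M2": ["o3"]}
eligible = {"o1": {"M1", "M2"}, "o2": {"M1"}, "o3": {"M1", "M2"}}
for move in insertion_moves(assignment, eligible):
    print(move)
```

A tabu search in this spirit would evaluate the makespan of each candidate through the extended disjunctive graph and forbid recently reversed moves; those parts are deliberately left out here.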


Journal ArticleDOI
TL;DR: A methodology is developed within the DEA framework to identify the efficiency of IT utilization and the importance of IT-related activities and their effect on firm performance, and to evaluate the marginal benefits of IT.
Abstract: The purpose of this paper is to consider the effect of Information Technology on the performance of a firm. We use Data Envelopment Analysis (DEA) to study this problem. In the paper, we outline DEA and address its advantages over parametric approaches. We then develop a methodology to identify the efficiency of IT utilization and the importance of IT-related activities and their effect on firm performance, within the DEA framework. Our methodology also evaluates the marginal benefits of IT. We provide an application of our methodology through an illustration.

253 citations
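
As a concrete companion to the abstract, a minimal sketch of the input-oriented CRS (CCR) envelopment LP, solved once per firm with scipy.optimize.linprog; the tiny input/output arrays are invented for illustration and are not the paper's data or its exact model.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, k):
    """Input-oriented CRS (CCR) efficiency of unit k.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j x_ij - theta x_ik <= 0
    A_in = np.hstack([-X[[k]].T, X.T])
    # Outputs: -sum_j lambda_j y_rj <= -y_rk  (produce at least unit k's output)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Illustrative data: 4 firms, inputs = (IT spend, labour), output = revenue.
X = np.array([[3.0, 5.0], [2.0, 4.0], [4.0, 3.0], [5.0, 6.0]])
Y = np.array([[10.0], [8.0], [9.0], [11.0]])
for k in range(len(X)):
    print(f"firm {k}: efficiency = {ccr_input_efficiency(X, Y, k):.3f}")
```

The paper's method for isolating the marginal benefit of IT builds further analysis on top of scores like these; that layer is not reproduced here.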


Journal ArticleDOI
TL;DR: The paper concludes that the dangers of misspecification are most serious when simple models are used and sample sizes are small, and it will usually be to the modeller's advantage to err on the side of including possibly irrelevant variables rather than run the risk of excluding a potentially important variable from the model.
Abstract: The use of Data Envelopment Analysis for estimating comparative efficiency has become widespread, and there has been considerable academic attention paid to the development of variants of the basic DEA model. However, one of the principal weaknesses of DEA is that - unlike statistically based methods - it yields no diagnostics to help the user determine whether or not the chosen model is appropriate. In particular, the choice of inputs and outputs depends solely on the judgement of the user. The purpose of this paper is to examine the implications for efficiency scores of using a misspecified model. A simple production process is set up. Simulation models are then used to explore the effects of applying misspecified DEA models to this process. The phenomena investigated are: the omission of significant variables; the inclusion of irrelevant variables; and the adoption of an inappropriate variable returns to scale assumption. The robustness of the results is investigated in relation to sample size; variations in the number of inputs; correlation between inputs; and variations in the importance of inputs. The paper concludes that the dangers of misspecification are most serious when simple models are used and sample sizes are small. In such circumstances, it is concluded that it will usually be to the modeller's advantage to err on the side of including possibly irrelevant variables rather than run the risk of excluding a potentially important variable from the model.

232 citations
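
To fix ideas about the simulation design discussed above, a typical data-generating process for this kind of Monte Carlo study (stated generically; the authors' exact specification may differ) is a known production function with a one-sided inefficiency term, for example

```latex
\[
y_j = A\, x_{1j}^{\beta_1}\, x_{2j}^{\beta_2}\, e^{-u_j},
\qquad u_j \ge 0 ,
\]
```

Efficiency scores from a DEA model using both inputs can then be compared with scores from a misspecified model that omits x_2 (or adds an irrelevant x_3), across different sample sizes and input correlations, which is the kind of comparison the paper reports.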


Journal ArticleDOI
TL;DR: This paper surveys two recent extensions of scheduling theory, scheduling with a 1-job-on-r-machine pattern and machine scheduling with availability constraints, and reviews several local search techniques, including simulated annealing, tabu search, genetic algorithms and constraint-guided heuristic search.
Abstract: Scheduling is concerned with allocating limited resources to tasks to optimize certain objective functions. Due to the popularity of the Total Quality Management concept, on-time delivery of jobs has become one of the crucial factors for customer satisfaction. Scheduling plays an important role in achieving this goal. Recent developments in scheduling theory have focused on extending the models to include more practical constraints. Furthermore, due to the complexity studies conducted during the last two decades, it is now widely understood that most practical problems are NP-hard. This is one of the reasons why local search methods have been studied so extensively during the last decade. In this paper, we review briefly some of the recent extensions of scheduling theory, the recent developments in local search techniques and the new developments of scheduling in practice. Particularly, we survey two recent extensions of theory: scheduling with a 1-job-on-r-machine pattern and machine scheduling with availability constraints. We also review several local search techniques, including simulated annealing, tabu search, genetic algorithms and constraint-guided heuristic search. Finally, we study the robotic cell scheduling problem, the automated guided vehicles scheduling problem, and the hoist scheduling problem.

222 citations


Journal ArticleDOI
TL;DR: The aircraft rotation problem (ARP) is solved by Lagrangian relaxation and subgradient optimization, and computational results on real data from a major airline are presented.
Abstract: Given a set of flights to be flown for a specific aircraft type, with specified maintenance locations, durations, and needed frequency, the aircraft rotation problem is to determine the specific route flown by each aircraft. The objective is to maximize the benefit derived from making specific connections. In this paper, we present a mathematical formulation for the aircraft rotation problem (ARP) and discuss its similarity with the asymmetric traveling salesman problem. We solve the ARP by Lagrangian relaxation and subgradient optimization and present computational results on real data from a major airline.

211 citations
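
For readers unfamiliar with the solution technique named above, the generic subgradient update used with Lagrangian relaxation is sketched below, written for a minimization problem with relaxed constraints Ax <= b (the ARP itself is stated as a maximization, so the roles of the bounds and the signs flip accordingly); this is the textbook scheme, not the paper's specific relaxation.

```latex
\[
\lambda^{k+1} = \max\!\left(0,\; \lambda^{k} + t_k \left(A x^{k} - b\right)\right),
\qquad
t_k = \frac{\alpha_k \left(\mathrm{UB} - L(\lambda^{k})\right)}
           {\left\lVert A x^{k} - b \right\rVert^{2}},
\]
```

Here x^k solves the Lagrangian subproblem at multipliers lambda^k, L(lambda) is the dual value, UB is the cost of a known feasible solution, and 0 < alpha_k <= 2 is a step-size parameter that is typically reduced when the dual bound stalls.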


Journal ArticleDOI
TL;DR: This version compiles over 800 published articles and dissertations related to Data Envelopment Analysis (DEA) over the years 1978-1996 from various sources to facilitate comprehensive growth in the field.
Abstract: Since the original DEA study by Charnes, Cooper and Rhodes (1978), there has been a rapid growth in the field. Due to the interdisciplinary nature of much of the research, there is a need for a single source referencing the wide range of articles appearing in the literature. The author's intention in maintaining a bibliography of DEA-related articles is to provide such a single source and thus facilitate comprehensive growth in the field. This version compiles over 800 published articles and dissertations related to Data Envelopment Analysis (DEA). The bibliography covers the years 1978-1996. Due to the archival nature of the publication, working papers and technical reports were excluded. While the bibliography was compiled from various sources, it represents the effort of a single person and consequently no claim can be made as to its completeness. Any corrections, additions, and/or suggestions will be welcomed in order that the bibliography may be revised and redistributed in a more complete and correct form. In particular, the author, with the assistance of Joe Zhu, is in the process of creating a WWW version of the DEA bibliography. When completed, it will be accessible from the author's homepage at the URL http://www.ecs.umass.edu/mie/faculty/seiford.html The author wishes to thank all colleagues who continue to send copies of their papers and would appreciate receiving additional DEA-related articles for incorporation into future versions of the bibliography.

210 citations


Journal ArticleDOI
TL;DR: The notion of "best practice" or "minimal extrapolation" regulation is introduced, and it is shown that cost reimbursement based on best practice norms may be (second best) optimal when the regulated firms have superior technological information and make non-verifiable cost reductions.
Abstract: In this paper, we introduce the notion of "best practice" or "minimal extrapolation" regulation, and we show that cost reimbursement based on best practice norms may be (second best) optimal when the regulated firms have superior technological information and make non-verifiable cost reductions. In particular, we investigate the use of Data Envelopment Analysis (DEA) in regulatory environments with considerable technological uncertainty. A series of DEA models, including the crs, drs, vrs, fdh and frh models, are considered, and it is shown that schemes which reimburse actual costs plus a fraction of DEA estimated cost reductions will (1) induce the firms to minimize costs and (2) minimize the informational rents of the firms.

132 citations


Journal ArticleDOI
TL;DR: Extensive computational tests indicate that some of the heuristics consistently generate optimal or near-optimal solutions in a non-preemptive two-stage hybrid flow shop problem.
Abstract: This paper considers a non-preemptive two-stage hybrid flow shop problem in which the first stage contains several identical machines, and the second stage contains a single machine. Each job is to be processed on one of the first-stage machines, and then on the second-stage machine. The objective is to find a schedule which minimizes the maximum completion time or makespan. The problem is NP-hard in the strong sense, even when there are two machines at the first stage. Several lower bounds are derived and are tested in a branch and bound algorithm. Also, constructive heuristics are presented, and a descent algorithm is proposed. Extensive computational tests with up to 250 jobs, and up to 10 machines in the first stage, indicate that some of the heuristics consistently generate optimal or near-optimal solutions.

132 citations
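
As an illustration of a constructive heuristic of the kind the abstract evaluates (not the authors' specific rules or bounds), the sketch below sorts jobs by a priority rule, assigns each to the earliest-available first-stage machine, and then runs the single second-stage machine in order of first-stage completions; the instance data are made up.

```python
import heapq

def hybrid_flowshop_makespan(jobs, n_stage1_machines):
    """jobs: list of (p1, p2) processing times for stage 1 and stage 2.
    Returns the makespan of a simple list-scheduling heuristic."""
    # Priority rule (illustrative): longest stage-1 processing time first.
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][0])
    # Min-heap of times at which each stage-1 machine becomes free.
    stage1_free = [0.0] * n_stage1_machines
    heapq.heapify(stage1_free)
    completions1 = []
    for j in order:
        start = heapq.heappop(stage1_free)
        finish = start + jobs[j][0]
        heapq.heappush(stage1_free, finish)
        completions1.append((finish, j))
    # Stage 2: single machine, jobs taken in order of stage-1 completion.
    completions1.sort()
    t2 = 0.0
    for finish1, j in completions1:
        t2 = max(t2, finish1) + jobs[j][1]
    return t2

# Illustrative instance: 5 jobs, 2 identical machines at stage 1.
jobs = [(4, 3), (2, 6), (5, 2), (3, 4), (6, 1)]
print("heuristic makespan:", hybrid_flowshop_makespan(jobs, 2))
```

Swapping in a different priority rule gives other members of this list-scheduling family.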


Journal ArticleDOI
TL;DR: The problem of rescheduling a facility modeled as a single machine in the face of newly arrived jobs with part-type dependent setup times is considered; a polynomial-time algorithm is provided for the maximum completion time problem, and it is proved that the total weighted completion time problem is NP-hard in the strong sense.
Abstract: We consider the problem of rescheduling a facility modeled as a single machine in the face of newly arrived jobs with part-type dependent setup times. The facility contains a number of jobs that have been assigned due dates and scheduled so as to meet them. We wish to insert the new jobs into the existing schedule in a manner that will minimize the disruption of the jobs in the system and minimize the total weighted completion time or the maximum completion time of the new jobs. We provide a polynomial-time algorithm for the maximum completion time problem, prove that the total weighted completion time problem is NP-hard in the strong sense and study several of its special cases. In particular, we show that the case with reverse-agreeable weights (of which the unit weight problem is a special case) can be solved in polynomial time when the number of part types is fixed. We also present two heuristics for the problem with arbitrary weights and develop data-dependent worst-case error bounds. Extensive computational experiments show that the heuristics consistently obtain near-optimal solutions in very reasonable CPU times.

Journal ArticleDOI
TL;DR: This paper studies the computational complexity of multi-purpose machine scheduling problems with identical and uniform machines, as well as problems with shop characteristics.
Abstract: In a multi-purpose machine scheduling problem, jobs or operations can be processed by any machine of prespecified subsets of the machine set. In this paper, we study the computational complexity of such scheduling problems. We study scheduling problems with identical and uniform machines as well as problems with shop characteristics.

Journal ArticleDOI
TL;DR: This work presents and tests an extension to DEA for mitigating the effect of noise in evaluating a batter's true "skill"; this effect was investigated by creating noisy data sets based on actual data sets and comparing the results, which revealed a negative bias in the majority of cases.
Abstract: Data Envelopment Analysis (DEA) is used to create an alternative to traditional batting statistics called the Composite Batter Index (CBI). Advantages of CBI over traditional statistics include the fact that players are judged on the basis of what they accomplish relative to other players and that it automatically accounts for changing conditions of the game that raise or lower batting statistics. Historical results are examined to show how the industry of baseball batting has matured and potential uses of CBI are discussed. The application of baseball suggests that random variation may have an effect on CBI. We investigated this effect by creating noisy data sets based on actual data sets and then compared the results, which revealed a negative bias in the majority of cases. We then present and test an extension to DEA for mitigating this effect of noise in evaluating a batter's true "skill".

Journal ArticleDOI
TL;DR: A graph-theoretic model is proposed for both the medium- and the short-term satellite shot sequencing, and algorithmic solutions are presented by using properties of the model.
Abstract: The satellite shot sequencing problem consists in choosing the pictures to be completed by defining sequences of shots which must respect technical constraints and limits. We propose a graph-theoretic model for both the medium- and the short-term sequencing and present algorithmic solutions by using properties of the model.

Journal ArticleDOI
TL;DR: An attempt is made to integrate a number of the most important Lp-norm methods proposed to date within a unified framework, emphasizing their conceptual differences and similarities rather than focusing on mathematical detail.
Abstract: The body of literature on classification methods which estimate boundaries between the groups (classes) by optimizing a function of the Lp-norm distances of observations in each group from these boundaries, is maturing fast. The number of published research articles on this topic, especially on mathematical programming (MP) formulations and techniques for Lp-norm classification, is now sizable. This paper highlights historical developments that have defined the field, and looks ahead at challenges that may shape new research directions in the next decade. In the first part, the paper summarizes basic concepts and ideas, and briefly reviews past research. Throughout, an attempt is made to integrate a number of the most important Lp-norm methods proposed to date within a unified framework, emphasizing their conceptual differences and similarities, rather than focusing on mathematical detail. In the second part, the paper discusses several potential directions for future research in this area. The long-term prospects of Lp-norm classification (and discriminant) research may well hinge upon whether or not the channels of communication between, on the one hand, researchers active in Lp-norm classification, who tend to have their roots primarily in the decision sciences, the management sciences, computer science and engineering, and, on the other hand, practitioners and researchers in the statistical classification community, will be improved. This paper offers potential reasons for the lack of communication between these groups, and suggests ways in which Lp-norm research may be strengthened from a statistical viewpoint. The results obtained in Lp-norm classification studies are clearly relevant and of importance to all researchers and practitioners active in classification and discriminant analysis. The paper also briefly discusses artificial neural networks, a promising non-traditional method for classification which has recently emerged, and suggests that it may be useful to explore hybrid classification methods that take advantage of the complementary strengths of different methods, e.g., neural network and Lp-norm methods.
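
For orientation, one of the simplest members of the family surveyed above is the L1-norm "minimize the sum of deviations" formulation for two groups, sketched here with generic notation (not tied to any particular paper in the survey):

```latex
\[
\begin{aligned}
\min_{w,\,c,\,d}\quad & \sum_{i} d_i \\
\text{s.t.}\quad & w^{\top} x_i \ge c - d_i  && i \in \text{group } 1, \\
                 & w^{\top} x_i \le c + d_i  && i \in \text{group } 2, \\
                 & d_i \ge 0 .
\end{aligned}
\]
```

Here w and c define the separating hyperplane w^T x = c, and d_i measures how far a misclassified observation lies on the wrong side; a normalization constraint (omitted above) is needed to exclude the trivial solution w = 0, and replacing the sum of deviations by the sum of their p-th powers, or by the maximum deviation, gives other Lp-norm variants.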

Journal ArticleDOI
TL;DR: Modifications in the concept of efficiency that occur in Data Envelopment Analysis when best practice selection is subjected to additional constraints reflecting institutional circumstances, externalities, equity considerations or other extraneous information are discussed.
Abstract: We discuss modifications in the concept of efficiency that occur in Data Envelopment Analysis when best practice selection is subjected to additional constraints reflecting institutional circumstances, externalities, equity considerations or other extraneous information. Such additional constraints restrict the feasible production possibility set on the envelopment side problem. We provide an overview of constraints that may be present on the envelopment side; some of them mimic the well-known cone-ratio and assurance region models on the multiplier side problem. The discussion is mainly in terms of policy-based constraints that are external to the physical input-output relationships and instead reflect the institutional setting of the efficiency rankings, including considerations of the economic and social policy. A numerical example which rates the socio-economic performance of both developed and developing nations is provided to illustrate our model developments.

Journal ArticleDOI
TL;DR: The human resource planning and scheduling system described in this paper assists in the rostering of approximately 500 staff for the airport operations of a major international airline at one of the busiest international airports.
Abstract: The human resource planning and scheduling system described in this paper assists in the rostering of approximately 500 staff for the airport operations of a major international airline at one of the busiest international airports. The system rosters airline ground staff over a monthly planning horizon so that the work load is evenly distributed among the staff and idle time, the main productivity measure, is minimised. The rosters are subject to a large number of rules designed to ensure reasonable working conditions and service standards. The system then allocates individual tasks to the staff for any particular day, and effectively manages, in real-time, disruptions that occur due to aircraft delays and unplanned staff absences on the day of operations. The system is also designed to reduce the number of staff needed to run the present rostering system. In this paper, we provide a description of the overall system and an algorithm for solving the rostering problem associated with the system.

Journal ArticleDOI
TL;DR: This paper introduces DEA as a tool to profile and evaluate practice patterns of primary care physicians and demonstrates how a cone ratio DEA model can incorporate strategic thinking and executive accountability when establishing clinical benchmarks.
Abstract: Evaluating the practice patterns of the newly dominant force in managed care, the primary care gatekeeper, will be one of the toughest challenges facing health reformers in the United States. This paper introduces DEA as a tool to profile and evaluate practice patterns of primary care physicians. To illustrate these ideas, the practice behavior of 326 primary care physicians in a large Health Maintenance Organization was studied for one year. When two DEA models were compared, a cone ratio DEA model projected the excess utilization of more hospital days and fewer office visits than a DEA model without defined preferred practice regions. The application demonstrates how a cone ratio DEA model can incorporate strategic thinking and executive accountability when establishing clinical benchmarks.

Journal ArticleDOI
TL;DR: A new problem decomposition procedure is described which dramatically expedites the solution of these computationally intense problems and fully exploits parallel processing environments.
Abstract: Accompanying the increasing popularity of DEA are computationally challenging applications: large-scale problems involving the solution of thousands of linear programs. This paper describes a new problem decomposition procedure which dramatically expedites the solution of these computationally intense problems and fully exploits parallel processing environments. Testing of a new DEA code based on this approach is reported for a wide range of problems, including the largest reported to date: an 8,700-LP banking-industry application.

Journal ArticleDOI
TL;DR: It is shown that the multi-level, min-max problem is NP-hard in the strong sense and a dynamic programming algorithm is developed which permits optimal schedules to be determined for large, multi-level problems.
Abstract: Solving the level (or balanced) schedule problem is the most important scheduling goal for just-in-time production assembly systems. No previous methods have been presented for determining optimal balanced schedules in multi-level facilities. In this paper, it is shown that the multi-level, min-max problem is NP-hard in the strong sense. A dynamic programming algorithm (DP) is developed for both the min-max and min-sum problems which, for the first time, permits optimal schedules to be determined for large, multi-level problems. The time and space requirements of the DP are analyzed and several techniques for reducing the DP's computational requirements are described. A filtering scheme is proposed to eliminate dominated solutions from a problem's potentially vast state space. Extensive computational testing of the min-max algorithm is reported and the conclusions from this testing are presented.
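
As a point of reference, the single-level min-max objective that the multi-level problem generalizes is usually written as follows (notation from the standard JIT level-scheduling literature, not necessarily the paper's):

```latex
\[
\min_{\text{schedules}} \; \max_{i,\,k} \left| x_{i,k} - k\, r_i \right|,
\qquad
r_i = \frac{d_i}{\sum_{j} d_j},
\]
```

where d_i is the demand for product i, x_{i,k} is the number of units of product i placed in the first k positions of the sequence, and k r_i is the ideal cumulative production. The multi-level version applies deviations of this kind at every level of the production structure, which is what makes the state space large enough to need the filtering scheme described above.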

Journal ArticleDOI
TL;DR: A new model of software production is introduced that considers more outputs than those previously cited in the literature, and the paper shows some of the misleading results that can be obtained when the simple, traditional ratio definition of productivity is used to assess efficiency.
Abstract: This paper presents two empirical studies of software production conducted at two large Canadian banks. For this purpose, we introduce a new model of software production that considers more outputs than those previously cited in the literature. The first study analyses a group of software development projects and compares the ratio approach to performance measurement to the results of DEA. It is shown that the main deficiencies of the performance ratio method can be avoided with the latter. Two different approaches are employed to constrain the DEA multipliers with respect to subjective managerial goals. As is further shown, incorporating subjective values into efficiency measures must be done in a careful and rigorous manner, within a framework familiar to management. The second study investigates the effect of quality on software maintenance (enhancement) projects. Quality appears to have a significant impact on the efficiency and cost of software projects in the data set. We further show the problems that may result when quality is excluded from the production models for efficiency assessment. In particular, we show some of the misleading results that can be obtained when the simple, traditional, ratio definition of productivity is used for this purpose.

Journal ArticleDOI
TL;DR: A family of rolling horizon procedures for the problem of minimizing total completion time on a single machine with release times is presented, which provides an excellent compromise between the excessive computation time required by exact solution methods and the poor solution quality that myopic dispatching rules may yield.
Abstract: We present a family of rolling horizon procedures for the problem of minimizing total completion time on a single machine with release times. These procedures develop solutions by using information about future job arrivals at each decision point. This approach provides an excellent compromise between the excessive computation time required by exact solution methods and the poor solution quality that myopic dispatching rules may yield. Extensive computational experiments show that these procedures consistently obtain high-quality solutions in very reasonable CPU times.
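
To make the contrast with myopic dispatching concrete, below is a small Python sketch in which lookahead = 0 gives the myopic SPT-available rule and a positive lookahead lets the machine wait briefly for an imminent shorter job; this is a crude stand-in for the rolling-horizon idea, with made-up data, and is not the authors' procedure.

```python
import heapq

def spt_available(jobs, lookahead=0.0):
    """Single machine, release times, minimise total completion time.
    jobs: list of (release, processing).  With lookahead = 0 this is the
    myopic SPT-available dispatching rule; with lookahead > 0 the machine
    may idle briefly to wait for an arriving job that is shorter than
    everything already waiting."""
    jobs = sorted(jobs)                     # by release time
    t, total, i, n = 0.0, 0.0, 0, len(jobs)
    heap = []                               # (processing time, release)
    while i < n or heap:
        # release everything that has arrived by time t
        while i < n and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))
            i += 1
        if not heap:                        # machine idle until next arrival
            t = jobs[i][0]
            continue
        # optionally wait for an imminent, shorter job
        if (lookahead > 0 and i < n and jobs[i][0] <= t + lookahead
                and jobs[i][1] < heap[0][0]):
            t = jobs[i][0]
            continue
        p, _ = heapq.heappop(heap)          # process the shortest waiting job
        t += p
        total += t
    return total

jobs = [(0, 8), (1, 2), (3, 6), (4, 1), (10, 3)]   # (release, processing)
print("myopic SPT :", spt_available(jobs))
print("lookahead 2:", spt_available(jobs, lookahead=2.0))
```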

Journal ArticleDOI
TL;DR: This paper considers a problem of scheduling N jobs on a single machine to minimize the maximum lateness, and proposes a single-batch heuristic in which all jobs of a family form a batch, and a double-batch heuristic in which each family is partitioned into at most two batches according to the due dates of its jobs.
Abstract: This paper considers a problem of scheduling N jobs on a single machine to minimize the maximum lateness. A partitioning of the jobs into F families is given. A set-up time is required at the start of each batch, where a batch is a largest set of contiguously scheduled jobs from the same family. We propose a single-batch heuristic in which all jobs of a family form a batch, and a double-batch heuristic in which each family is partitioned into at most two batches according to the due dates of its jobs. Both heuristics require O(N log N) time. It is shown that the single-batch heuristic has a worst-case performance ratio of 2 - 1/F, whereas a composite heuristic which selects the better of the schedules generated by the single- and double-batch heuristics has a worst-case performance ratio of 5/3 for arbitrary F. Lower bounds are derived and are incorporated in a branch and bound algorithm. This algorithm uses a procedure to reduce the size of the problem, and employs a branching rule which forces pairs of jobs to lie in the same batch or in different batches. Computational tests show that the algorithm is effective in solving problems with up to 50 jobs.

Journal ArticleDOI
TL;DR: In this article, the authors consider the control of a single batch processing machine with random processing times and incompatible job families (jobs from different families cannot be processed together in the same batch).
Abstract: We consider the control of a single batch processing machine with random processing times and incompatible job families (jobs from different families cannot be processed together in the same batch). Holding costs are incurred for each unit of time that a job waits in the system before being served, and the objective is to minimize the long-run average cost per unit time. We first determine optimal policies for the static problem where all jobs are available simultaneously. We next characterize the optimal policies for certain problems with dynamic arrivals of jobs under the restriction that the machine is not allowed to idle. Finally, we develop a simple heuristic scheduling policy to control the machine. Simulation results are provided to demonstrate the effectiveness of our heuristic over a wide range of problem instances and to compare its performance with existing heuristics.

Journal ArticleDOI
TL;DR: The minimal number of robots needed to meet a given cyclic schedule, for all possible cycle lengths, is found, the complexity of the suggested algorithm being O(m^5), independently of the range within which the cycle length value may vary.
Abstract: We study a problem of cyclic no-wait scheduling of identical parts on m sequential machines. A number of robots are used to transport the parts from one machine to another. We consider the problem that has two performance measures: one is the number of robots to be used, the other is the period of a cyclic schedule. We find the minimal number of robots needed to meet a given cyclic schedule, for all possible cycle lengths, the complexity of the suggested algorithm being O(m^5), independently of the range within which the cycle length value may vary.

Journal ArticleDOI
TL;DR: These heuristics are tested through intensive computational experiments on a 480-instance RCPS data set recently generated by Kolisch et al. and show comparable performance to the branch-and-bound algorithm.
Abstract: The Resource-Constrained Project Scheduling (RCPS) problem is a well known and challenging combinatorial optimization problem. It is a generalization of the Job Shop Scheduling problem and thus is NP-hard in the strong sense. Problem Space Search is a local search "metaheuristic" which has been shown to be effective for a variety of combinatorial optimization problems including Job Shop Scheduling. In this paper, we propose two problem space search heuristics for the RCPS problem. These heuristics are tested through intensive computational experiments on a 480-instance RCPS data set recently generated by Kolisch et al. [12]. Using this data set we compare our heuristics with a branch-and-bound algorithm developed by Demeulemeester and Herroelen [9]. The results produced by the heuristics are extremely encouraging, showing comparable performance to the branch-and-bound algorithm.

Journal ArticleDOI
TL;DR: An evaluation of managerial performance is proposed, not on the usual basis of static efficiency, but on the intertemporal basis of change in efficiency and adaptation to the bias of technical change.
Abstract: DEA is typically applied to cross-section data to analyze productive efficiency. DEA is infrequently applied to panel data to analyze the variation of productive efficiency over time, but when it is, the technique of choice has been window analysis. Here, we adopt a different approach to the use of DEA with panel data. Following the lead of Fare and others, we use DEA to construct a Malmquist index of productivity change, and we provide a new decomposition of the Malmquist productivity index. Our new decomposition allocates productivity change to change in productive efficiency, the magnitude of technical change, and the bias of technical change. We then propose an evaluation of managerial performance, not on the usual basis of static efficiency, but on the intertemporal basis of change in efficiency and adaptation to the bias of technical change. We illustrate our approach with an examination of the recent productivity change experience in Spanish savings banks.
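
For context, the standard Malmquist productivity index between periods t and t+1 and its usual split into efficiency change and technical change are shown below; the paper's contribution is a further decomposition of the technical-change factor into its magnitude and bias components, which is not reproduced here.

```latex
\[
M\left(x^{t},y^{t},x^{t+1},y^{t+1}\right)
= \underbrace{\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}}_{\text{efficiency change}}
\;\times\;
\underbrace{\left[
\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}\cdot
\frac{D^{t}\!\left(x^{t},y^{t}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}
\right]^{1/2}}_{\text{technical change}},
\]
```

where D^s(x, y) denotes the distance function measured against the period-s frontier, which can be computed with DEA for each bank and year in the panel.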

Journal ArticleDOI
TL;DR: It is proved that TREE-VSP(C) with a depth-first routing constraint can be exactly solved in O(n log n) time, and it is shown that, if this exact algorithm is used as an approximate algorithm for the original TREE-VSP(C), its worst-case performance ratio is at most two.
Abstract: In this paper, we consider a single-vehicle scheduling problem on a tree-shaped road network. Let T = (V, E) be a tree, where V is a set of n vertices and E is a set of edges. A task is located at each vertex v, which is also denoted as v. Each task v has release time r(v) and handling time h(v). The travel times c(u, v) and c(v, u) are associated with each edge (u, v) of E. The vehicle starts from an initial vertex v_0 of V, visits all tasks v in V for their processing, and returns to v_0. The objective is to find a routing schedule of the vehicle that minimizes the completion time, denoted as C (i.e., the time to return to v_0 after processing all tasks). We call this problem TREE-VSP(C). We first prove that TREE-VSP(C) is NP-hard. However, we then show that TREE-VSP(C) with a depth-first routing constraint can be exactly solved in O(n log n) time. Moreover, we show that, if this exact algorithm is used as an approximate algorithm for the original TREE-VSP(C), its worst-case performance ratio is at most two.

Journal ArticleDOI
TL;DR: Four alternative neighbourhood search methods are developed: multi-start descent, simulated annealing, threshold accepting and tabu search, which generate high quality schedules at relatively modest computational expense.
Abstract: Local search heuristics are developed for a problem of scheduling a single machine to minimize the total weighted completion time. The jobs are partitioned into families, and a set-up time is necessary when there is a switch in processing jobs from one family to jobs of another family. Four alternative neighbourhood search methods are developed: multi-start descent, simulated annealing, threshold accepting and tabu search. The performance of these heuristics is evaluated on a large set of test problems, and the results are also compared with those obtained by a genetic algorithm. The best results are obtained with the tabu search method for smaller numbers of families and with the genetic algorithm for larger numbers of families. In combination, these methods generate high quality schedules at relatively modest computational expense.
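
A minimal sketch of the simplest of the four neighbourhood searches mentioned above (multi-start descent with a swap neighbourhood) applied to the problem of the abstract, i.e., single-machine total weighted completion time with family set-up times; the instance data and the choice of a swap neighbourhood are illustrative assumptions, not the authors' implementation.

```python
import random

def total_weighted_completion(seq, p, w, fam, setup):
    """Total weighted completion time of a job sequence, with a set-up
    incurred whenever the family changes between consecutive jobs."""
    t, cost, prev_fam = 0.0, 0.0, None
    for j in seq:
        if fam[j] != prev_fam:
            t += setup[fam[j]]
        t += p[j]
        cost += w[j] * t
        prev_fam = fam[j]
    return cost

def multi_start_descent(p, w, fam, setup, starts=20, seed=0):
    rng = random.Random(seed)
    n = len(p)
    best_seq, best_cost = None, float("inf")
    for _ in range(starts):
        seq = list(range(n))
        rng.shuffle(seq)
        cost = total_weighted_completion(seq, p, w, fam, setup)
        improved = True
        while improved:                      # descent over the swap neighbourhood
            improved = False
            for i in range(n - 1):
                for k in range(i + 1, n):
                    seq[i], seq[k] = seq[k], seq[i]
                    c = total_weighted_completion(seq, p, w, fam, setup)
                    if c < cost:
                        cost, improved = c, True
                    else:
                        seq[i], seq[k] = seq[k], seq[i]   # undo the swap
        if cost < best_cost:
            best_seq, best_cost = seq[:], cost
    return best_seq, best_cost

# Illustrative instance: 6 jobs in 2 families.
p     = [3, 2, 4, 1, 5, 2]
w     = [1, 3, 2, 2, 1, 4]
fam   = [0, 0, 1, 1, 0, 1]
setup = {0: 2, 1: 3}
print(multi_start_descent(p, w, fam, setup))
```

Simulated annealing, threshold accepting and tabu search differ from this sketch mainly in the acceptance and memory rules wrapped around the same neighbourhood.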

Journal ArticleDOI
TL;DR: This paper considers a job shop scheduling problem in which the setup times of jobs are sequence dependent and separable from their processes, and presents a simple, polynomial time heuristic procedure for solving it.
Abstract: In this paper, we consider a job shop scheduling problem in which the setup times of jobs are sequence dependent and separable from their processes. The objective of the problem is to minimize the time required to complete all jobs in the system. We formulate this problem as a mixed integer program and present a simple, polynomial time heuristic procedure for solving it. The procedure is based upon sequentially identifying a pair of operations that provide a minimum lower bound on the makespan of the associated two-job/m-machine problem with release times. A computational study demonstrates the superior performance of the new heuristic over the one developed by Zhou and Egbelu.