
Showing papers in "Annals of Operations Research in 1988"


Journal ArticleDOI
TL;DR: A massively parallelizable algorithm for the classical assignment problem was proposed in this article, where unassigned persons bid simultaneously for objects thereby raising their prices. Once all bids are in, objects are awarded to the highest bidder.
Abstract: We propose a massively parallelizable algorithm for the classical assignment problem. The algorithm operates like an auction whereby unassigned persons bid simultaneously for objects, thereby raising their prices. Once all bids are in, objects are awarded to the highest bidder. The algorithm can also be interpreted as a Jacobi-like relaxation method for solving a dual problem. Its (sequential) worst-case complexity, for a particular implementation that uses scaling, is O(NA log(NC)), where N is the number of persons, A is the number of pairs of persons and objects that can be assigned to each other, and C is the maximum absolute object value. Computational results show that, for large problems, the algorithm is competitive with existing methods even without the benefit of parallelism. When executed on a parallel machine, the algorithm exhibits substantial speedup.

649 citations
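
The bidding mechanism in the entry above is compact enough to sketch. Below is a minimal sequential Python version for dense n-by-n problems; the function name, the fixed bid increment eps, and the one-bid-at-a-time (Gauss-Seidel) sweep are illustrative choices of mine, not the paper's implementation, and the scaled variant behind the O(NA log(NC)) bound is omitted.

```python
def auction_assignment(value, eps=0.01):
    """Sequential auction algorithm for the n x n assignment problem.

    value[i][j] is the benefit of awarding object j to person i;
    eps is the minimum bid increment. With integer benefits and
    eps < 1/n the final assignment is optimal."""
    n = len(value)
    prices = [0.0] * n
    owner = [None] * n        # owner[j]: person currently holding object j
    assigned = [None] * n     # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Person i bids for the object with the best net value.
        net = [value[i][j] - prices[j] for j in range(n)]
        best_j = max(range(n), key=net.__getitem__)
        ranked = sorted(net, reverse=True)
        second = ranked[1] if n > 1 else ranked[0]
        # Raise the price by the margin over the second-best net value.
        prices[best_j] += ranked[0] - second + eps
        if owner[best_j] is not None:     # displace the previous owner
            assigned[owner[best_j]] = None
            unassigned.append(owner[best_j])
        owner[best_j] = i
        assigned[i] = best_j
    return assigned
```

In the massively parallel version described in the abstract, all unassigned persons bid in the same round (a Jacobi sweep); processing bids one at a time as above still terminates with an eps-optimal assignment.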


Journal ArticleDOI
TL;DR: This work considers the problem of scheduling project networks subject to arbitrary resource constraints in order to minimize an arbitrary regular performance measure (i.e. a non-decreasing function of the vector of completion times).
Abstract: Project networks with time windows are generalizations of the well-known CPM and MPM networks that allow for the introduction of arbitrary minimal and maximal time lags between the starting and completion times of any pair of activities. We consider the problem of scheduling such networks subject to arbitrary (even time-dependent) resource constraints in order to minimize an arbitrary regular performance measure (i.e. a non-decreasing function of the vector of completion times). This problem arises in many standard industrial construction or production processes and is therefore particularly suited as a background model in general purpose decision support systems. The problem is treated by a structural approach that involves a generalization of both the disjunctive graph method in job shop scheduling [1] and the order-theoretic methods for precedence constrained scheduling [18,23,24]. Besides theoretical insights into the problem structure, this approach also leads to rather powerful branch-and-bound algorithms. Computational experience with these algorithms is reported.

403 citations


Journal ArticleDOI
TL;DR: Eight algorithms which solve the shortest path tree problem on directed graphs are presented, together with the results of wide-ranging experimentation designed to compare their relative performances on different graph topologies.
Abstract: The shortest path problem is considered from a computational point of view. Eight algorithms which solve the shortest path tree problem on directed graphs are presented, together with the results of wide-ranging experimentation designed to compare their relative performances on different graph topologies. The focus of this paper is on the implementation of the different data structures used in the algorithms. A "Pidgin Pascal" description of the algorithms is given, containing enough details to allow for almost direct implementation in any programming language. In addition, Fortran codes of the algorithms and of the graph generators used in the experimentation are provided on the diskette.

364 citations
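
For reference, one of the standard designs such experimental studies cover is the label-setting method driven by a heap-based priority queue. The following Python sketch (the adjacency representation and names are mine, not the paper's Pidgin Pascal) builds the shortest path tree as a predecessor map.

```python
import heapq

def shortest_path_tree(adj, source):
    """Label-setting shortest path tree on a directed graph.

    adj maps each node to a list of (successor, weight) pairs with
    non-negative weights. Returns (dist, pred): shortest distances
    from source and the predecessor of each reached node in the tree."""
    dist = {source: 0}
    pred = {source: None}
    heap = [(0, source)]        # candidate labels, keyed by distance
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:        # stale label: node is already permanent
            continue
        settled.add(u)
        for v, w in adj.get(u, []):
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w
                pred[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, pred
```

The data-structure choice (binary heap here, versus buckets, d-heaps, or deques) is exactly the kind of implementation detail whose effect on different graph topologies the paper's experiments measure.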


Journal ArticleDOI
TL;DR: In this article, the use of Boolean techniques in a systematic study of cause-effect relationships is investigated, and procedures are provided to extrapolate, from limited observations, concise and meaningful theories to explain the effect under study and to prevent (or provoke) its occurrence.
Abstract: This paper investigates the use of Boolean techniques in a systematic study of cause-effect relationships. The model uses partially defined Boolean functions. Procedures are provided to extrapolate, from limited observations, concise and meaningful theories to explain the effect under study and to prevent (or provoke) its occurrence.

240 citations


Journal ArticleDOI
TL;DR: This paper analyzes the most efficient algorithms for the Linear Min-Sum Assignment Problem and shows that they derive from a common basic procedure, and evaluates the computational complexity and the average performance on randomly-generated test problems.
Abstract: This paper analyzes the most efficient algorithms for the Linear Min-Sum Assignment Problem and shows that they derive from a common basic procedure. For each algorithm, we evaluate the computational complexity and the average performance on randomly-generated test problems. Efficient FORTRAN implementations for the case of complete and sparse matrices are given.

163 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm to assemble large jigsaw puzzles using curve matching and combinatorial optimization techniques is presented, where pieces are photographed one by one and then the assembly algorithm, which uses only the puzzle piece shape information, is applied.
Abstract: An algorithm to assemble large jigsaw puzzles using curve matching and combinatorial optimization techniques is presented. The pieces are photographed one by one and then the assembly algorithm, which uses only the puzzle piece shape information, is applied. The algorithm was tested successfully on the assembly of 104-piece puzzles with many almost similar pieces. It was also extended to solve an intermixed puzzle assembly problem and has successfully solved a 208-piece puzzle consisting of two intermixed 104-piece puzzles. Previously reported results solved puzzles of about 10 pieces whose shapes were substantially different from one another.

129 citations


Journal ArticleDOI
TL;DR: This work developed and performed limited testing of a scheduling system, called OPIS 0, that exhibits opportunistic behavior and it is believed that such opportunistic views of scheduling would lead to systems that allow more flexibility in terms of designing scheduling procedures and supporting the scheduling function.
Abstract: In a search for more efficient yet effective ways of solving combinatorially complex problems such as jobshop scheduling, we move towards opportunistic approaches that attempt to exploit the structure of a given problem. Rather than adhere to a single problem-solving plan, such approaches are characterized by almost continual surveillance of the current problem-solving state, so that plans can be revised and activity consistently directed toward whatever actions currently seem most promising. Opportunistic behavior may occur at levels ranging from problem decomposition down to the selective application of scheduling heuristics. We developed and performed limited testing of a scheduling system, called OPIS 0, that exhibits such behavior to some extent. The results are encouraging when compared to ISIS and a dispatching system. We believe that such opportunistic views of scheduling will lead to systems that allow more flexibility in designing scheduling procedures and supporting the scheduling function.

122 citations



Journal ArticleDOI
TL;DR: This paper illustrates how the application of integer programming to logic can reveal parallels between logic and mathematics and lead to new algorithms for inference in knowledge-based systems.
Abstract: This paper illustrates how the application of integer programming to logic can reveal parallels between logic and mathematics and lead to new algorithms for inference in knowledge-based systems. If logical clauses (stating that at least one of a set of literals is true) are written as inequalities, then the resolvent of two clauses corresponds to a certain cutting plane in integer programming. By properly enlarging the class of cutting planes to cover clauses that state that at least a specified number of literals are true, we obtain a generalization of resolution that involves both cancellation-type and circulant-type sums. We show its completeness by proving that it generates all prime implications, generalizing an early result by Quine. This leads to a cutting-plane algorithm as well as a generalized resolution algorithm for checking whether a set of propositions, perhaps representing a knowledge base, logically implies a given proposition. The paper is intended to be readable by persons with either an operations research or an artificial intelligence background.

97 citations
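
The clause-to-inequality mapping the entry above refers to is easy to make concrete: a clause asserting that at least one of its literals is true becomes a 0-1 inequality, and the resolvent of two clauses is then a valid inequality (a cutting plane) for the set of 0-1 points satisfying the parents. A small Python sketch; the encoding of literals as signed integers is my convention, not the paper's notation.

```python
from itertools import product

def clause_to_ineq(clause, nvars):
    """Map a clause (set of signed literals: +k means x_k, -k means
    not x_k) to (coeffs, rhs), read as sum(coeffs[i]*x[i]) >= rhs."""
    coeffs = [0] * nvars
    rhs = 1
    for lit in clause:
        v = abs(lit) - 1
        if lit > 0:
            coeffs[v] += 1
        else:
            coeffs[v] -= 1   # a negated literal contributes (1 - x)
            rhs -= 1
    return coeffs, rhs

def satisfies(point, ineq):
    coeffs, rhs = ineq
    return sum(c * x for c, x in zip(coeffs, point)) >= rhs

def is_valid_cut(parents, cut, nvars):
    """True if every 0-1 point satisfying all parent inequalities
    also satisfies the candidate cut (brute-force check)."""
    return all(satisfies(p, cut)
               for p in product((0, 1), repeat=nvars)
               if all(satisfies(p, q) for q in parents))
```

For example, resolving (x1 or x2) with (not x1 or x3) on x1 yields (x2 or x3); is_valid_cut confirms that x2 + x3 >= 1 holds at every 0-1 point satisfying x1 + x2 >= 1 and (1 - x1) + x3 >= 1.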


Journal ArticleDOI
TL;DR: The implementations of two fundamentally different algorithms for solving the maximum flow problem, Dinic's method and the network simplex method, are studied, and a "steepest-edge" pivot selection criterion that is easy to include in an existing network simplex implementation is developed.
Abstract: We study the implementation of two fundamentally different algorithms for solving the maximum flow problem: Dinic's method and the network simplex method. For the former, we present the design of a storage-efficient implementation. For the latter, we develop a "steepest-edge" pivot selection criterion that is easy to include in an existing network simplex implementation. We compare the computational efficiency of these two methods on a personal computer with a set of generated problems of up to 4,600 nodes and 27,000 arcs.

95 citations



Journal ArticleDOI
TL;DR: The implementation is compared with mature state-of-the-art primal simplex and primal-dual codes and is found to be several times faster on all types of randomly generated network flow problems and the speed-up factor increases with problem dimension.
Abstract: We describe a relaxation algorithm [1,2] for solving the classical minimum cost network flow problem. Our implementation is compared with mature state-of-the-art primal simplex and primal-dual codes and is found to be several times faster on all types of randomly generated network flow problems. Furthermore, the speed-up factor increases with problem dimension. The codes, called RELAX-II and RELAXT-II, have a facility for efficient reoptimization and sensitivity analysis, and are in the public domain.

Journal ArticleDOI
T. Kampke
TL;DR: In this article, simulated annealing (statistical cooling) is applied to bin packing problems and different cooling strategies are compared empirically and for a particular 100 item problem a solution is given which is most likely the best known so far.
Abstract: Simulated annealing (statistical cooling) is applied to bin packing problems. Different cooling strategies are compared empirically and for a particular 100 item problem a solution is given which is most likely the best known so far.
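
For readers who want to experiment, a bare-bones version of statistical cooling on bin packing looks like the following. The move set (relocating one item into an occupied bin), the overflow penalty weight, and the geometric cooling schedule are illustrative choices of mine; the paper compares more careful cooling strategies.

```python
import math
import random

def sa_bin_packing(sizes, capacity, steps=20000, t0=1.0, alpha=0.999, seed=0):
    """Simulated annealing for bin packing: items move between bins,
    and moves that worsen the objective are accepted with a Boltzmann
    probability that shrinks as the temperature is cooled."""
    rng = random.Random(seed)
    n = len(sizes)
    assign = list(range(n))   # start with every item in its own bin (feasible)

    def cost(a):
        loads = {}
        for item, b in enumerate(a):
            loads[b] = loads.get(b, 0) + sizes[item]
        # Fewer bins is better; overfull bins are heavily penalized.
        overflow = sum(max(0, load - capacity) for load in loads.values())
        return len(loads) + 10 * overflow

    cur = cost(assign)
    best, best_assign = cur, assign[:]
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        old = assign[i]
        assign[i] = rng.choice(assign)   # move item i into an occupied bin
        new = cost(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best:
                best, best_assign = cur, assign[:]
        else:
            assign[i] = old              # reject the move
        t *= alpha                       # geometric cooling schedule
    return best, best_assign
```

Because the search starts from a feasible packing and infeasible states cost at least 10 more than any feasible one, the best solution returned is always feasible; the cooling schedule controls how freely the search crosses cost plateaus on its way to fewer bins.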

Journal ArticleDOI
TL;DR: In this article, multi-attribute utility theory is extended to accommodate adversarial problem solving situations involving multiple interacting agents, where partial goal satisfaction and persuasion are used to resolve conflict among multiple agents.
Abstract: In this paper, multi-attribute utility theory is extended to accommodate adversarial problem solving situations involving multiple interacting agents. Such situations are resolved by partial goal satisfaction and persuasion, and have only scantily been described in the AI literature. Utility theory is shown to provide a computational framework to (a) generate a compromise solution that partially satisfies the conflicting goals of the agents, (b) evaluate whether a solution is an improvement on a previously rejected one, and (c) determine the effectiveness of persuasive arguments. Our examples are taken from the domain of labor mediation and are implemented in a computer program, called the PERSUADER.

Journal ArticleDOI
TL;DR: An efficient method to determine the optimal configuration of a flexible manufacturing system, modelled as a closed queueing network, is presented and shows that it performs very well.
Abstract: A frequently encountered design issue for a flexible manufacturing system (FMS) is to find the lowest cost configuration, i.e. the number of resources of each type (machines, pallets, ...), which achieves a given production rate. In this paper, an efficient method to determine this optimal configuration is presented. The FMS is modelled as a closed queueing network. The proposed procedure first derives a heuristic solution and then the optimal solution. The computational complexity for finding the optimal solution is very reasonable even for large systems, except in some extreme cases. Moreover, the heuristic solution can always be determined and is very close (and often equal) to the optimal solution. A comparison with the previous method of Vinod and Solberg shows that our method performs very well.

Journal ArticleDOI
TL;DR: This paper describes the implementation of a distributed relaxation algorithm for strictly convex network problems on a massively parallel computer and reports computational results with a series of stick percolation and quadratic transportation problems.
Abstract: Optimization problems with network constraints arise in several instances in engineering, management, statistical and economic applications. The (usually) large size of such problems motivated research in designing efficient algorithms and software for this problem class. The introduction of parallelism in the design of computer systems adds a new element of complexity to the field. This paper describes the implementation of a distributed relaxation algorithm for strictly convex network problems on a massively parallel computer. A Connection Machine CM-1 configured with 16,384 processing elements serves as the testbed of the implementation. We report computational results with a series of stick percolation and quadratic transportation problems. The algorithm is compared with an implementation of the primal truncated Newton on an IBM 3081-D mainframe, an Alliant FX/8 shared memory vector multiprocessor and the IBM 3090-600 vector supercomputer. One of the larger test problems with approximately 2500 nodes and 8000 arcs requires 1.5 minutes of CPU time on the vector supercomputer. The same problem is solved using relaxation on the CM-1 in less than a second.

Journal ArticleDOI
TL;DR: A review of the literature on parallel computers and algorithms that is relevant for combinatorial optimization can be found in this paper, where the authors describe theoretical as well as realistic machine models for parallel computations.
Abstract: This is a review of the literature on parallel computers and algorithms that is relevant for combinatorial optimization. We start by describing theoretical as well as realistic machine models for parallel computations. Next, we deal with the complexity theory for parallel computations and illustrate the resulting concepts by presenting a number of polylog parallel algorithms and P-completeness results. Finally, we discuss the use of parallelism in enumerative methods.

Journal ArticleDOI
TL;DR: It is described in this paper how large-scale system methods for solving multi-staged systems, such as Benders decomposition, high-speed sampling or Monte Carlo simulation, and parallel processors can be combined to solve some important planning problems involving uncertainty.
Abstract: Industry and government routinely solve deterministic mathematical programs for planning and scheduling purposes, some involving thousands of variables with a linear or non-linear objective and inequality constraints. The solutions obtained are often ignored because they do not properly hedge against future contingencies. It is relatively easy to reformulate models to include uncertainty. The bottleneck has been (and is) our capability to solve them. The time is now ripe for finding a way to do so. To this end, we describe in this paper how large-scale system methods for solving multi-staged systems, such as Benders decomposition, high-speed sampling or Monte Carlo simulation, and parallel processors can be combined to solve some important planning problems involving uncertainty. For example, parallel processors may make it possible to come to better grips with the fundamental problems of planning, scheduling, design, and control of complex systems such as the economy, an industrial enterprise, an energy system, a water-resource system, military models for planning and control, decisions about investment, innovation, employment, and health-delivery systems.

Journal ArticleDOI
TL;DR: The procedure UFAP is presented which allows a decision maker to interactively assess his von Neumann/Morgenstern single attribute utility function and puts special emphasis on potential biases in the assessment process.
Abstract: The procedure UFAP, which allows a decision maker to interactively assess his von Neumann/Morgenstern single attribute utility function, is presented. UFAP puts special emphasis on potential biases in the assessment process. In the first part of the procedure, three different assessment methods are used to derive ranges for the utility function. Using different methods enables us to point out possible biases in the elicitation process. In the second part, a consistent class of utility functions is derived based on the ranges assessed in the first part. In case inconsistencies between methods arise, the decision maker has to reconsider selected preference statements previously given.

Journal ArticleDOI
TL;DR: In this paper, an interactive and iterative fuzzy programming method for solving a quasi-optimization problem in complex decisions under constraints involving a multiple objective function is proposed.
Abstract: Multicriteria analysis is one of the analytical functions in the problem processing system of decision support systems (DSS). In this paper, an interactive and iterative fuzzy programming method for solving a quasi-optimization problem in complex decisions under constraints involving a multiple objective function is proposed. Compared with an adapted gradient search method, a surrogate worth tradeoff method, and a Zionts-Wallenius method, the proposed method emphasizes an approximate preference structure.

Journal ArticleDOI
TL;DR: It is found that achieved machine utilizations are a strong function of some of the factors ignored in the MP methodology, ranging from 9.1% to 22.9% less than those theoretically attainable under the mathematical programming assumptions.
Abstract: Stecke [21] has developed mathematical programming approaches for determining, from a set of part type requirements, the production ratios (part types to be produced next, and their proportions) which maximize overall machine utilizations by balancing machine workloads in a flexible manufacturing system (FMS). These mathematical programming (MP) approaches are aggregate in the sense that they do not take into account such things as contention for transportation resources, travel time for work-in-process, contention for machines, finite buffer space, and dispatching rules. In the current study, the sensitivity of machine utilizations to these aggregations is investigated through simulation modeling. For the situation examined, it is found that achieved machine utilizations are a strong function of some of the factors ignored in the MP methodology, ranging from 9.1% to 22.9% less than those theoretically attainable under the mathematical programming assumptions. The 9.1% degradation results from modeling with nonzero work-in-process travel times (i.e. 2 minutes per transfer) and using only central work-in-process buffers. Resource levels (e.g. the number of automated guided vehicles; the amount of work-in-process; the number of slack buffers) needed to limit the degradation to 9.1% correspond to FMS operating conditions which are feasible in practice.

Journal ArticleDOI
TL;DR: A gradient projection successive overrelaxation (GP-SOR) algorithm is proposed for the solution of symmetric linear complementarity problems and linear programs; it solves a general linear program by finding its least 2-norm solution.
Abstract: A gradient projection successive overrelaxation (GP-SOR) algorithm is proposed for the solution of symmetric linear complementarity problems and linear programs. A key distinguishing feature of this algorithm is that when appropriately parallelized, the relaxation factor interval (0, 2) is not reduced. In a previously proposed parallel SOR scheme, the substantially reduced relaxation interval mandated by the coupling terms of the problem often led to slow convergence. The proposed parallel algorithm solves a general linear program by finding its least 2-norm solution. Efficiency of the algorithm is in the 50 to 100 percent range, as demonstrated by computational results on the CRYSTAL token-ring multicomputer and the Sequent Balance 21000 multiprocessor.
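
The serial building block behind SOR schemes of this kind is the projected SOR sweep for a symmetric linear complementarity problem: find z >= 0 with Mz + q >= 0 and z'(Mz + q) = 0 by taking a coordinate-wise SOR step and projecting back onto the nonnegative orthant. The sketch below is this classical serial iteration, not the paper's parallel GP-SOR algorithm, and the names are mine.

```python
def projected_sor(M, q, omega=1.2, iters=200):
    """Projected SOR sweeps for the symmetric LCP: find z >= 0 with
    Mz + q >= 0 and z'(Mz + q) = 0. Convergence is guaranteed for
    M symmetric positive definite and 0 < omega < 2."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Residual of row i at the current (partially updated) z.
            r = q[i] + sum(M[i][j] * z[j] for j in range(n))
            # SOR step, then projection onto z_i >= 0.
            z[i] = max(0.0, z[i] - omega * r / M[i][i])
    return z
```

The parallel question the paper addresses is precisely how to run such sweeps on several processors at once without shrinking the admissible relaxation interval (0, 2) for omega.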

Journal ArticleDOI
TL;DR: A more realistic paradigm for real-time FMS control is presented, based on explicit engineering of human and automated control functions and system interfaces; to illustrate design principles within the conceptual model, examples of algorithmic and operator function models are developed.
Abstract: Most of the current academic flexible manufacturing system (FMS) scheduling research has focused on the derivation of algorithms or knowledge-based techniques for efficient FMS real-time control. Here, the limitations of this view are outlined with respect to effective control of actual real-time FMS operation. A more realistic paradigm for real-time FMS control is presented, based on explicit engineering of human and automated control functions and system interfaces. To illustrate design principles within the conceptual model, examples of algorithmic and operator function models for a specific real-time FMS control problem are developed.

Journal ArticleDOI
TL;DR: The pattern-directed approach is presented which incorporates a nonlinear planning method developed in the artificial intelligence field and thus can handle scheduling requirements unique to the FMS environment, such as dynamic scheduling, failure-recovery scheduling, or prioritized scheduling for meeting deadlines.
Abstract: Scheduling in flexible manufacturing systems (FMS) must take into account the shorter lead time, the multiprocessing environment, and the dynamically changing states. In this paper, a pattern-directed approach is presented which incorporates a nonlinear planning method developed in the artificial intelligence field. The scheduling system described here is knowledge-based and utilizes both forward- and backward-chaining for generating schedules (treated as state-space plans). The pattern-directed approach is dynamically adjustable and thus can handle scheduling requirements unique to the FMS environment, such as dynamic scheduling, failure-recovery scheduling, or prioritized scheduling for meeting deadlines.

Journal ArticleDOI
TL;DR: While a number of both theoretical and practical problems remain to be resolved, a heuristic version of the SQG method appears to be a reasonable technique for analyzing optimization problems for certain complex manufacturing systems.
Abstract: This paper presents the application of the stochastic quasigradient method (SQG) of Ermoliev and Gaivoronski to the performance optimization of asynchronous flexible assembly systems (AFAS). These systems are subject to blocking and starvation effects that make complete analytic performance modeling difficult. A hybrid algorithm is presented in this paper which uses a queueing network model to set the number of pallets in the system and then an SQG algorithm is used to set the buffer spacings to obtain optimal system throughput. Different forms of the SQG algorithm are examined, and the specification of optimal buffer sizes and pallet numbers for a variety of assembly systems with ten stations is discussed. The combined Network-SQG method appears to perform well in obtaining a near optimal solution in this discrete optimization example, even though the SQG method was primarily designed for application to differentiable performance functionals. While a number of both theoretical and practical problems remain to be resolved, a heuristic version of the SQG method appears to be a reasonable technique for analyzing optimization problems for certain complex manufacturing systems.

Journal ArticleDOI
Andrew Kusiak
TL;DR: In this paper, a heuristic two-level scheduling algorithm for a system consisting of a machining and an assembly subsystem is developed and it is shown that the upper level problem is equivalent to the two machine flow shop problem.
Abstract: Most scheduling papers consider flexible machining and assembly systems as being independent. In this paper, a heuristic two-level scheduling algorithm for a system consisting of a machining and an assembly subsystem is developed. It is shown that the upper level problem is equivalent to the two machine flow shop problem. The algorithm at the lower level schedules jobs according to the established product and part priorities. Related issues, such as batching, due dates, process planning and alternative routes, are discussed. The algorithm and associated concepts are illustrated on a number of numerical examples.

Journal ArticleDOI
TL;DR: A hybrid algorithm for finding the kth smallest of n elements is presented which is O(n) in the worst case and efficient in the average case.
Abstract: The problem of finding the kth smallest of n elements can be solved either with O(n) algorithms or with O(n^2) algorithms. Although they require a higher number of operations in the worst case, O(n^2) algorithms are generally preferred to O(n) algorithms because of their better average performance. We present a hybrid algorithm which is O(n) in the worst case and efficient in the average case.
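
To illustrate the hybrid idea in Python: run randomized quickselect, which is fast on average, and switch to a guaranteed method when the pivot sequence is unlucky. This sketch falls back to sorting, giving an O(n log n) worst case; the paper's hybrid instead achieves O(n) in the worst case. The names and the budget rule are mine.

```python
import random

def select_kth(a, k, rng=random.Random(0)):
    """Return the k-th smallest element of a (1 <= k <= len(a)).

    Randomized quickselect with a budget of partitioning rounds;
    if the budget is exhausted (unlucky pivots), fall back to a
    guaranteed O(n log n) sort."""
    a = list(a)
    budget = 4 * len(a).bit_length() + 8   # allowed partitioning rounds
    while len(a) > 1:
        if budget == 0:                    # unlucky pivots: bail out
            return sorted(a)[k - 1]
        budget -= 1
        pivot = rng.choice(a)
        lower = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        if k <= len(lower):
            a = lower                      # answer lies strictly below pivot
        elif k <= len(lower) + len(equal):
            return pivot                   # answer equals the pivot
        else:
            k -= len(lower) + len(equal)   # answer lies strictly above pivot
            a = [x for x in a if x > pivot]
    return a[0]
```

Either branch returns the correct element; the budget only bounds how long the average-case-fast path is allowed to run before the guaranteed path takes over.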

Journal ArticleDOI
TL;DR: The construction of an expert-like system for machine scheduling called SCHEDULE, which consists of the components data base, knowledge base, inference engine, explanation facility, dialog component, and knowledge acquisition component, is presented.
Abstract: The construction of an expert-like system for machine scheduling called SCHEDULE is presented. Essential parts of SCHEDULE were developed by students in a laboratory course “Operations Research on Microcomputers” at the University of Karlsruhe, Germany. SCHEDULE consists of the components data base, knowledge base, inference engine, explanation facility, dialog component, and knowledge acquisition component. The knowledge base contains an algorithm base for solving different types of scheduling problems. To establish the rules of the knowledge base the well-known three-field classification of deterministic machine scheduling problems and the concept of the reduction digraph are exploited. Experiences gained during building and demonstrating SCHEDULE are reported.

Journal ArticleDOI
TL;DR: It is found that the product mix, setup and changeover times, and scheduling rules affect the performance significantly, in particular at high levels of machine utilisation.
Abstract: Although the problem of scheduling dynamic job shops is well studied, setup and changeover times are often assumed to be negligibly small and therefore ignored. In cases where the product mix changes occur frequently, setup and changeover times are of critical importance. This paper applies some known results from the study of multi-class single-server queues with setup and changeover times to develop an approximation for evaluating the performance of job shops. It is found that the product mix, setup and changeover times, and scheduling rules affect the performance significantly, in particular at high levels of machine utilisation. This approach could be used to determine the required level of flexibility of machines and to choose an appropriate scheduling policy such that production rates remain within acceptable limits for foreseeable changes in the product mix.

Journal ArticleDOI
TL;DR: In this paper, the authors present a new class of methods for solving unconstrained optimization problems on parallel computers, which utilize multiple processors to evaluate the function, (finite difference) gradient, and a portion of the finite difference Hessian simultaneously at each iteration.
Abstract: This paper presents a new class of methods for solving unconstrained optimization problems on parallel computers. The methods are intended to solve small to moderate dimensional problems where function and derivative evaluation is the dominant cost. They utilize multiple processors to evaluate the function, (finite difference) gradient, and a portion of the finite difference Hessian simultaneously at each iterate. We introduce three types of new methods, which all utilize the new finite difference Hessian information in forming the new Hessian approximation at each iteration; they differ in whether and how they utilize the standard secant information from the current step as well. We present theoretical analyses of the rate of convergence of several of these methods. We also present computational results which illustrate their performance on parallel computers when function evaluation is expensive.