
Showing papers in "Annals of Operations Research in 1990"


Journal ArticleDOI
TL;DR: Results indicate that tabu search consistently outperforms simulated annealing with respect to computation time while giving comparable solutions to traveling salesman problems.
Abstract: This paper describes serial and parallel implementations of two different search techniques applied to the traveling salesman problem. A novel approach has been taken to parallelize simulated annealing and the results are compared with the traditional annealing algorithm. This approach uses an abbreviated cooling schedule and achieves a superlinear speedup. Also, a new search technique, called tabu search, has been adapted to execute in a parallel computing environment. Comparisons between simulated annealing and tabu search indicate that tabu search consistently outperforms simulated annealing with respect to computation time while giving comparable solutions. Examples include 25-, 33-, 42-, 50-, 57-, 75- and 100-city problems.
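To make the tabu search half concrete, here is a minimal serial 2-opt tabu search sketch for the symmetric TSP; the neighborhood, tenure and iteration counts are illustrative and this is not the paper's parallel implementation.

    import itertools, random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def tabu_search_tsp(dist, iters=1000, tenure=10):
        n = len(dist)
        tour = list(range(n))
        random.shuffle(tour)
        cur_len = tour_length(tour, dist)
        best, best_len = tour[:], cur_len
        tabu = {}  # 2-opt move (i, j) -> iteration until which it stays tabu
        for it in range(iters):
            best_move, best_delta = None, float("inf")
            for i, j in itertools.combinations(range(1, n - 1), 2):
                # reversing tour[i..j] swaps edges (i-1,i),(j,j+1) for (i-1,j),(i,j+1)
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                # tabu moves are allowed only if they beat the incumbent (aspiration)
                if tabu.get((i, j), -1) > it and cur_len + delta >= best_len:
                    continue
                if delta < best_delta:
                    best_move, best_delta = (i, j), delta
            if best_move is None:
                break
            i, j = best_move
            tour[i:j + 1] = reversed(tour[i:j + 1])
            cur_len += best_delta
            tabu[(i, j)] = it + tenure
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        return best, best_len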

192 citations


Journal ArticleDOI
TL;DR: This work demonstrates for an important class of multistage stochastic models that three techniques — namely nested decomposition, Monte Carlo importance sampling, and parallel computing — can be effectively combined to solve this fundamental problem of large-scale linear programming.
Abstract: Our goal is to demonstrate for an important class of multistage stochastic models that three techniques — namely nested decomposition, Monte Carlo importance sampling, and parallel computing — can be effectively combined to solve this fundamental problem of large-scale linear programming.

181 citations


Journal ArticleDOI
TL;DR: Two new models derived from the probabilistic location set covering problem allow the examination of the relationships between the number of facilities being located, the reliability that a vehicle will be available, and a coverage standard.
Abstract: In the last several years, the modeling of emergency vehicle location has focussed on the temporal availability of the vehicles. Vehicles are not available for service when they are engaged in earlier calls. To incorporate this dynamic aspect into facility location decisions, models have been developed which provide additional levels of coverage. In this paper, two new models are derived from the probabilistic location set covering problem. These models allow the examination of the relationships between the number of facilities being located, the reliability that a vehicle will be available, and a coverage standard. In addition, these models incorporate sectoral specific estimates of the availability of the vehicles. Solution of these models reveals that the use of sectoral estimates leads to facility locations which are distributed to a greater spatial extent over the region to be serviced.
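A schematic of the underlying formulation, assuming for illustration a single system-wide busy fraction q and reliability target \alpha (the sectoral variant replaces q with sector-specific estimates): with N_i the candidate sites within the coverage standard of demand point i,

\[
\min \sum_j x_j \quad \text{s.t.} \quad \sum_{j \in N_i} x_j \ge b_i \quad \forall i, \qquad x_j \in \{0, 1, 2, \dots\},
\]

where b_i = \lceil \log(1-\alpha) / \log q \rceil is the smallest number of vehicles for which 1 - q^{b_i} \ge \alpha, i.e. for which at least one vehicle is available with the required reliability.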

159 citations


Journal ArticleDOI
TL;DR: Five successful applications and three classes of problems for which genetic algorithms are ill suited are illustrated: ordering problems, smooth optimization problems, and “totally indecomposable” problems.
Abstract: Genetic algorithms are defined. Attention is directed to why they work: schemas and building blocks, implicit parallelism, and exponentially biased sampling of the better schema. Why they fail and how undesirable behavior can be overcome is discussed. Current genetic algorithm practice is summarized. Five successful applications are illustrated: image registration, AEGIS surveillance, network configuration, prisoner's dilemma, and gas pipeline control. Three classes of problems for which genetic algorithms are ill suited are illustrated: ordering problems, smooth optimization problems, and “totally indecomposable” problems.
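As a minimal illustration of the mechanics the abstract refers to (selection, crossover and mutation over bit strings), a sketch of a generational genetic algorithm; all parameter values are illustrative.

    import random

    def genetic_algorithm(fitness, n_bits, pop_size=50, gens=100,
                          p_cross=0.7, p_mut=0.01):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(gens):
            scored = sorted(pop, key=fitness, reverse=True)
            new_pop = scored[:2]  # elitism: carry over the two best strings
            while len(new_pop) < pop_size:
                # binary tournament selection
                p1 = max(random.sample(pop, 2), key=fitness)
                p2 = max(random.sample(pop, 2), key=fitness)
                if random.random() < p_cross:  # one-point crossover
                    cut = random.randint(1, n_bits - 1)
                    child = p1[:cut] + p2[cut:]
                else:
                    child = p1[:]
                # bitwise mutation
                child = [1 - b if random.random() < p_mut else b for b in child]
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)

    # e.g. onemax, where fitness is just the number of 1-bits:
    # best = genetic_algorithm(sum, n_bits=30)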

152 citations


Journal ArticleDOI
TL;DR: In this paper, a 3-stage method was developed for solving to optimality a general 0-1 formulation for uncapacitated location problems, which solved large problems in reasonable computing times.
Abstract: In this paper we develop a method for solving to optimality a general 0–1 formulation for uncapacitated location problems. This is a 3-stage method that solves large problems in reasonable computing times.

112 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problems of locating the minimum number of viewpoints needed to see the entire surface and of locating a fixed number of viewpoints to maximize the area visible, along with possible extensions.
Abstract: The viewshed of a point on an irregular topographic surface is defined as the area visible from the point. The area visible from a set of points is the union of their viewsheds. We consider the problems of locating the minimum number of viewpoints to see the entire surface, and of locating a fixed number of viewpoints to maximize the area visible, and possible extensions. We discuss alternative methods of representing the surface in digital form, and adopt a TIN or triangulated irregular network as the most suitable data structure. The space is tessellated into a network of irregular triangles whose vertices have known elevations and whose edges join vertices which are Thiessen neighbours, and the surface is represented in each one by a plane. Visibility is approximated as a property of each triangle: a triangle is defined as visible from a point if all of its edges are fully visible. We present algorithms for determination of visibility, and thus reduce the problems to variants of the location set covering and maximal set covering problems. We examine the performance of a variety of heuristics.
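Once triangle visibility has been computed, the minimum-viewpoint problem is a location set covering problem, and a natural baseline is the greedy cover below. This is a sketch assuming the visibility mapping is precomputed, not necessarily one of the paper's specific heuristics.

    def greedy_viewpoints(visible, all_triangles):
        """Pick viewpoints until every triangle is covered (greedy set cover).
        visible: dict mapping candidate viewpoint -> set of triangle ids it sees."""
        uncovered = set(all_triangles)
        chosen = []
        while uncovered:
            # choose the viewpoint seeing the most still-uncovered triangles
            best = max(visible, key=lambda v: len(visible[v] & uncovered))
            gain = visible[best] & uncovered
            if not gain:
                raise ValueError("remaining triangles are invisible from all candidates")
            chosen.append(best)
            uncovered -= gain
        return chosen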

92 citations


Journal ArticleDOI
TL;DR: A new controlled search simulated annealing method is developed for addressing the single machine weighted tardiness problem and is experimentally shown to solve optimally 99% of fifteen-job problems in less than 0.2 CPU seconds, and to solve one-hundred-job problems as accurately as any existing method, but with far less computational effort.

Abstract: In this paper, a new controlled search simulated annealing method is developed for addressing the single machine weighted tardiness problem. The proposed method is experimentally shown to solve optimally 99% of fifteen-job problems in less than 0.2 CPU seconds, and to solve one-hundred-job problems as accurately as any existing method, but with far less computational effort. This superior performance is achieved by using controlled search strategies that employ a good initial solution, a small neighborhood for local search, and acceptance probabilities of inferior solutions that are independent of the change in the objective function value.
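A sketch of the controlled-search idea for the single machine weighted tardiness problem: a good initial sequence, a small (adjacent-interchange) neighborhood, and a fixed acceptance probability for inferior moves that is independent of the change in objective value. The run length, starting rule, and probability are illustrative, not the paper's settings.

    import random

    def weighted_tardiness(seq, p, d, w):
        t, total = 0, 0
        for j in seq:
            t += p[j]
            total += w[j] * max(0, t - d[j])
        return total

    def controlled_sa(p, d, w, iters=20000, accept_prob=0.05):
        n = len(p)
        # initial solution: earliest-due-date order (a common good start)
        seq = sorted(range(n), key=lambda j: d[j])
        cost = weighted_tardiness(seq, p, d, w)
        best, best_cost = seq[:], cost
        for _ in range(iters):
            i = random.randrange(n - 1)        # small neighborhood:
            cand = seq[:]                      # swap two adjacent jobs
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            c = weighted_tardiness(cand, p, d, w)
            # accept improvements always; accept inferior moves with a fixed
            # probability that does not depend on the cost change
            if c <= cost or random.random() < accept_prob:
                seq, cost = cand, c
                if cost < best_cost:
                    best, best_cost = seq[:], cost
        return best, best_cost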

88 citations


Journal ArticleDOI
TL;DR: Fuzzy-trace theory, a theory designed to account for such findings, assumes that reasoners rely on global patterns, or gist; its implications for different theoretical conceptions of logical competence are explored, concluding that young children possess transitivity competence.

Abstract: We review the literature on the development of transitive reasoning, and note three historical stages. Stage 1 was dominated by the Piagetian idea that transitive inference is logical reasoning in which relationships between adjacent terms figure as premises. Stage 2 was dominated by the information-processing view that memory for relationships between adjacent terms is determinative in transitivity performance. Stage 3 has produced data that are inconsistent with both the logic and memory positions, leading to a new theory that is designed to account for such findings, fuzzy-trace theory. The basic assumption of fuzzy-trace theory is that reasoners rely on global patterns, or gist. We describe the tenets of fuzzy-trace theory, and explore its implications for different theoretical conceptions of logical competence, concluding that young children possess transitivity competence. We discuss the connection between transitivity competence (cognition) and intransitive preferences (metacognition).

87 citations


Journal ArticleDOI
TL;DR: This work proposes a class of models which has the form of a two-echelon multimode multicommodity location formulation with interdepot balancing requirements and analyses the models and their properties to determine their underlying structure and characteristics to be used in subsequent algorithmic developments.
Abstract: We present the problem of locating vehicle depots in an intercity freight transportation system with the objective of satisfying client demand for empty vehicles, while minimizing depot opening and operating costs, bidirectional client-depot transportation costs, as well as the costs of the interdepot movements necessary to counter the unbalancing of demand. We propose a class of models which has the form of a two-echelon multimode multicommodity location formulation with interdepot balancing requirements. We analyse the models and their properties to determine their underlying structure and characteristics to be used in subsequent algorithmic developments.

84 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the effects of three alternative nodal aggregation schemes on the model's solution times, the locational decisions indicated by the maximum covering model, and the coverage provided by the aggregated solutions compared with the optimal solutions.
Abstract: The level of aggregation is critical in discrete location analyses as it affects the level of data collection required, computation times and the usefulness of the analyses. We examine the effects of three alternative nodal aggregation schemes on (i) the model's solution times, (ii) the locational decisions indicated by the maximum covering model, (iii) the coverage provided by the “aggregate” solutions compared with the optimal solutions, and (iv) the coverage predicted by the aggregate model compared with the coverage that results from using the aggregate model's facility sites and the disaggregate demands. The results suggest that considerable aggregation can be tolerated without incurring large errors in total coverage, but that location errors are introduced at moderate levels of aggregation. The magnitude of these errors is significantly affected by the aggregation scheme employed.

80 citations


Journal ArticleDOI
TL;DR: Choice behavior in an interactive multiple-criteria decision making environment is examined experimentally, and an unexpectedly rapid degree of convergence of the reference direction approach on a preferred solution is revealed.
Abstract: Choice behavior in an interactive multiple-criteria decision making environment is examined experimentally. A “free search” discrete visual interactive reference direction approach was used on a microcomputer by management students to solve two realistic and relevant multiple-criteria decision problems. The results revealed persistent patterns of intransitive choice behavior, and an unexpectedly rapid degree of convergence of the reference direction approach on a preferred solution. The results can be explained using Tversky's [20] additive utility difference model and Kahneman and Tversky's [5] prospect theory. The implications of the results for the design of interactive multiple-criteria decision procedures are discussed.

Journal ArticleDOI
TL;DR: This paper introduces several efficient heuristic sequential inspection procedures for solving the problem of minimizing the expected cost of computing the correct value of a discrete-valued function when it is costly to determine (“inspect”) the values of its arguments.
Abstract: We consider the problem of minimizing the expected cost of computing the correct value of a discrete-valued function when it is costly to determine (“inspect”) the values of its arguments. This type of problem arises in distributed computing, in the design of interactive expert systems, in reliability analysis of coherent systems, in classification of pattern vectors, and in many other applications. In this paper, we first show that the general problem is NP-hard and then introduce several efficient heuristic sequential inspection procedures for solving it approximately. We obtain theoretical results showing that the heuristics are optimal in important special cases; moreover, their computational structures make them well suited for parallel implementation. Next, for the special case of linear threshold (or “discrete linear discriminant”) functions, which are widely used in statistical classification procedures, we use Monte Carlo simulation to analyze the performances of the heuristics and to compare the heuristic solutions with the exact (true minimum expected cost) solutions over a wide range of randomly generated test problems. All of the heuristics give average relative errors, compared to the exact optimal solutions, of less than 2%. The best heuristic for this class of functions gives an average relative error of less than 0.05% and runs two to four orders of magnitude faster than the exact solution procedure, for functions with ten arguments.
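For the linear threshold case f(x) = 1 iff \sum_i w_i x_i \ge t, one plausible sequential heuristic (illustrative, not necessarily one of the paper's) inspects arguments in decreasing w_i / c_i order and stops as soon as the inspected values determine the function:

    def sequential_inspection(w, c, t, x):
        """Inspect arguments of f(x) = [sum w_i * x_i >= t] until the value is fixed.
        w: nonnegative weights, c: inspection costs, x: the true 0/1 arguments."""
        order = sorted(range(len(w)), key=lambda i: w[i] / c[i], reverse=True)
        known, cost = 0.0, 0.0
        remaining = sum(w)  # total weight of still-uninspected arguments
        for i in order:
            # stop if the outcome no longer depends on uninspected arguments
            if known >= t:
                return 1, cost
            if known + remaining < t:
                return 0, cost
            cost += c[i]
            remaining -= w[i]
            known += w[i] * x[i]
        return (1 if known >= t else 0), cost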

Journal ArticleDOI
TL;DR: Various characteristics of the delivery man problem for tree networks are explored and a pseudo-polynomial time solution algorithm is proposed.
Abstract: This note considers a variant of the traveling salesman problem in which we seek a route that minimizes the total of the vertex arrival times. This problem is called the delivery man problem. The traveling salesman problem is NP-complete on a general network and trivial on a tree network. The delivery man problem is also NP-complete on a general network but far from trivial on a tree network. Various characteristics of the delivery man problem for tree networks are explored and a pseudo-polynomial time solution algorithm is proposed.
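The only difference from the TSP is the objective. Writing a_v for the time at which the route first reaches vertex v, the delivery man problem asks for

\[
\min \sum_{v \in V} a_v
\]

rather than the minimum total route length. On a tree the route must repeatedly choose between serving a nearby subtree early and detouring to it later, which is what makes the problem nontrivial there even though the optimal TSP tour of a tree is immediate.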

Journal ArticleDOI
TL;DR: This paper develops a model which captures the need for a backup response facility for each demand point and a reasonable limit on each facility's workload, and presents an efficient solution procedure using Lagrangian relaxation.

Abstract: The maximal covering location problem has been shown to be a useful tool in siting emergency services. In this paper we expand the model along two dimensions — workload capacities on facilities and the allocation of multiple levels of backup or prioritized service for all demand points. In emergency service facility location decisions such as ambulance siting, when all of a facility's resources are needed to meet each call for service and the demand cannot be queued, a backup unit may be required. This need is especially significant in areas of high demand. These areas will also often impose excessive workload on some facilities. Effective siting decisions, therefore, must address both the need for a backup response facility for each demand point and a reasonable limit on each facility's workload. In this paper, we develop a model which captures these concerns, and present an efficient solution procedure using Lagrangian relaxation. Results of extensive computational experiments are presented to demonstrate the viability of the approach.
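A schematic of the kind of model involved (the notation is hypothetical and the paper's exact formulation may differ): with x_j = 1 if a facility opens at site j, u_i and v_i indicating primary and backup coverage of demand i, d_i the demand, Q a workload cap, and y_{ij} the allocation of demand i to facility j,

\[
\max \sum_i d_i (u_i + \theta v_i) \quad \text{s.t.} \quad \sum_{j \in N_i} x_j \ge u_i + v_i, \quad v_i \le u_i, \quad \sum_i d_i y_{ij} \le Q\, x_j \ \ \forall j, \quad \sum_j x_j = p .
\]

A Lagrangian relaxation would dualize the linking constraints so that the remaining problem separates into easy pieces, giving the bounds driving the solution procedure.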

Journal ArticleDOI
M. Yue
TL;DR: It is proved first that the bound cannot exceed 13/11 and then that it is exactly 13/11, improving on the previously claimed bound of 1.2.

Abstract: We consider the well-known problem of scheduling n independent tasks non-preemptively on m identical processors with the aim of minimizing the makespan. Coffman, Garey and Johnson [1] described an algorithm, MULTIFIT, based on techniques from bin packing, with better worst-case performance than the LPT algorithm, and proved that it satisfies a bound of 1.22. It has been claimed by Friesen [2] that this bound can be improved to 1.2. However, we found his proof, in particular his lemma 4.9, difficult to understand. Yue, Kellerer and Yu [3] have presented the bound 1.2 in a simpler way. In this paper, we prove first that the bound cannot exceed 13/11 and then prove that it is exactly 13/11.
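MULTIFIT itself is compact: binary-search the common bin capacity, testing each candidate capacity with first-fit-decreasing. A sketch, where k is the number of search iterations:

    def ffd_bins(tasks, capacity):
        """Number of bins first-fit-decreasing uses at the given capacity."""
        bins = []
        for t in sorted(tasks, reverse=True):
            for i, load in enumerate(bins):
                if load + t <= capacity:
                    bins[i] += t
                    break
            else:
                bins.append(t)
        return len(bins)

    def multifit(tasks, m, k=10):
        lo = max(max(tasks), sum(tasks) / m)      # capacity lower bound
        hi = max(max(tasks), 2 * sum(tasks) / m)  # capacity upper bound
        for _ in range(k):
            mid = (lo + hi) / 2
            if ffd_bins(tasks, mid) <= m:
                hi = mid   # feasible on m processors: try a smaller makespan
            else:
                lo = mid
        return hi  # this paper shows the result is within 13/11 of optimal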

Journal ArticleDOI
TL;DR: In this article, a General Optimal Market Area (GOMA) model is proposed that captures the tradeoff between economies-of-scale from larger facilities and the higher costs of transport to more distant markets.
Abstract: Market area models determine the optimal size of market for a facility. These models are grounded in classical location theory, and express the fundamental tradeoff between economies-of-scale from larger facilities and the higher costs of transport to more distant markets. The simpler market area models have been discovered and rediscovered, and applied and reapplied, in a number of different settings. We review the development and use of market area models, and formulate a General Optimal Market Area model that accommodates both economies-of-scale in facilities costs and economies-of-distance in transport costs as well as different market shapes and distance norms. Simple expressions are derived for both optimal market size and optimal average cost, and also the sensitivity of average cost to a non-optimal choice of size. The market area model is used to explore the implications of some recently proposed distance measures and to approximate a large discrete location model, and an extension to price-sensitive demands is provided.
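To see the tradeoff in its simplest form (a stylized special case for illustration, not the paper's general model): with fixed facility cost F, uniform demand density \rho, and transport cost growing like k\sqrt{A} over a market of area A, the average cost per unit demand is

\[
g(A) = \frac{F}{\rho A} + k\sqrt{A}, \qquad g'(A^*) = 0 \;\Rightarrow\; A^* = \left(\frac{2F}{\rho k}\right)^{2/3},
\]

so optimal markets grow with fixed costs and shrink as transport becomes expensive; the general model replaces these particular exponents with arbitrary scale and distance economies, market shapes, and distance norms.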

Journal ArticleDOI
TL;DR: It is shown that the weak consistency property fails to hold for large classes of decision procedures; the geometric reasons for this failure are described, and a general theorem is given to characterize the procedures that do satisfy the property.

Abstract: If a statistical or a voting decision procedure is used by several subpopulations and if each reaches an identical conclusion, then one might expect this conclusion to be the outcome for the full group. It is shown that this property fails to hold for large classes of decision procedures. The geometric reasons why consistency fails to hold are described. A general theorem is given to characterize the procedures that satisfy this property of “weak consistency”.

Journal ArticleDOI
TL;DR: A parallel branch and bound algorithm for unconstrained quadratic zero-one programs on the hypercube architecture that parallelize well without the need of a shared data structure to store expanded nodes of the search tree.
Abstract: We present a parallel branch and bound algorithm for unconstrained quadratic zero-one programs on the hypercube architecture. Subproblems parallelize well without the need of a shared data structure to store expanded nodes of the search tree. Load balancing is achieved by demand splitting of neighboring subproblems. Computational results on a variety of large-scale problems are reported on an iPSC/1 32-node hypercube and an iPSC/2 16-node hypercube.

Journal ArticleDOI
TL;DR: In this article, a model for locating a firm's production facilities while simultaneously determining production levels at these facilities and shipping patterns so as to maximize the firm's profits is presented, and a proof of existence of a solution to the combined location-equilibrium problem is provided.
Abstract: Models for locating a firm's production facilities while simultaneously determining production levels at these facilities and shipping patterns so as to maximize the firm's profits are presented. In these models, existing firms are assumed to act in accordance with an appropriate model of spatial equilibrium. A proof of existence of a solution to the combined location-equilibrium problem is provided.

Journal ArticleDOI
TL;DR: A modelling language for Integer Programming (IP) based on the Predicate Calculus is described, which is particularly suitable for building models with logical conditions.
Abstract: A modelling language for Integer Programming (IP) based on the Predicate Calculus is described. This is particularly suitable for building models with logical conditions. Using this language a model is specified in terms of predicates. This is then converted automatically by a series of transformation rules into a normal form from which an IP model can be created. There is also some discussion of alternative IP formulations which can be incorporated into the system as options. Further practical considerations are discussed briefly concerning implementation language and incorporation into practical Mathematical Programming Systems.
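Typical transformations such a system automates (these are standard textbook rewritings, not necessarily the paper's exact normal-form rules): with 0-1 indicator variables \delta_i and a continuous variable x bounded above by M,

\[
\delta_1 \vee \delta_2 \;\leadsto\; \delta_1 + \delta_2 \ge 1, \qquad
\delta_1 \rightarrow \delta_2 \;\leadsto\; \delta_1 \le \delta_2, \qquad
(x > 0 \rightarrow \delta = 1) \;\leadsto\; x \le M\delta .
\]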

Journal ArticleDOI
TL;DR: Criteria are presented which can be used to guide decisions that influence the quality of the application, and research that would better inform these decisions is described.

Abstract: The degree to which locational complexity and geographical complexity are represented in a location model is a critical decision that influences the quality of the application. Criteria which can be used to guide these decisions are presented and research that would better inform these decisions is described.

Journal ArticleDOI
John J. Forrest, John A. Tomlin
TL;DR: In this article, the authors discuss the application of vector processing to various phases of simplex and interior point methods for linear programming and present preliminary computational results of experiments on the IBM 3090 vector facility.
Abstract: We discuss the application of vector processing to various phases of simplex and interior point methods for linear programming. Preliminary computational results of experiments on the IBM 3090 vector facility will be presented.

Journal ArticleDOI
TL;DR: This paper presents parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems, and presents computational experience with block-angular linear programming problems.
Abstract: In this paper, we present parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems. An example in this class of problems is the block-angular linear programming problem. By dualizing, we transform the original problem to an unconstrained nonsmooth concave optimization problem which is in turn solved by using a modified bundle method. Further, this dual problem consists of a collection of smaller independent subproblems which give rise to the parallel algorithms. We discuss the implementation on the CRYSTAL multi-computer. Finally, we present computational experience with block-angular linear programming problems and observe that more than 70% efficiency can be obtained using up to eleven processors for one group of test problems, and more than 60% efficiency can be obtained for relatively smaller problems using up to five processors for another group of problems.
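The structure being exploited, schematically: a block-angular primal whose coupling constraints are dualized,

\[
\min \Big\{ \sum_k c_k^{\top} x_k : \sum_k A_k x_k = b, \; x_k \in X_k \Big\}
\;\longrightarrow\;
\max_{\pi} \; \theta(\pi) = \pi^{\top} b + \sum_k \min_{x_k \in X_k} (c_k - A_k^{\top} \pi)^{\top} x_k .
\]

The dual function \theta is concave and nonsmooth; each inner minimization is an independent subproblem that can run on its own processor, and the bundle method drives the outer maximization using the subgradients the subproblems return.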

Journal ArticleDOI
TL;DR: In this article, it was shown that Karmarkar's algorithm is equivalent to a classical logarithmic barrier method applied to a problem in standard form, i.e., the method of centers.
Abstract: The paper shows how various interior point methods for linear programming may all be derived from logarithmic barrier methods. These methods include primal and dual projective methods, affine methods, and methods based on the method of centers. In particular, the paper demonstrates that Karmarkar's algorithm is equivalent to a classical logarithmic barrier method applied to a problem in standard form.
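The unifying object is the logarithmic barrier subproblem for the standard-form LP \min\{c^{\top}x : Ax = b,\ x \ge 0\}:

\[
\min_x \; c^{\top} x - \mu \sum_j \ln x_j \quad \text{s.t.} \quad Ax = b, \qquad \mu > 0,
\]

whose minimizers x(\mu) trace the central path; letting \mu \to 0 recovers an optimal solution, and the various interior point methods amount to different strategies for following this path.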

Journal ArticleDOI
TL;DR: A methodology which uses a collection of workstations connected by an Ethernet network as a parallel processor for solving large-scale linear programming problems and achieves linear and super-linear speedups on the largest problems.
Abstract: We present a methodology which uses a collection of workstations connected by an Ethernet network as a parallel processor for solving large-scale linear programming problems. On the largest problems we tested, linear and super-linear speedups have been achieved. Using the “branch-and-cut” approach of Hoffman, Padberg and Rinaldi, eight workstations connected in parallel solve problems from the test set documented in the Crowder, Johnson and Padberg 1983 Operations Research article. Very inexpensive, networked workstations are now solving in minutes problems which were once considered not solvable in economically feasible times. In this peer-to-peer (as opposed to master-worker) implementation, interprocess communication was accomplished by using shared files and resource locks. Effective communication between processes was accomplished with a minimum of overhead (never more than 8% of total processing time). The implementation procedures and computational results will be presented.

Journal ArticleDOI
TL;DR: The Boolean variables are randomized and probabilities assigned to their possible values; a deterministic procedure is then presented to obtain a solution to the maximum satisfiability problem, generalizing the heuristic procedure given by Johnson (1974).

Abstract: In this paper we present a new method to analyze and solve the maximum satisfiability problem. We randomize the Boolean variables, assign probabilities to their possible values and, by using the authors' recently developed probabilistic bounds, present a deterministic procedure to obtain a solution to the maximum satisfiability problem. Our algorithm generalizes the heuristic procedure given by Johnson (1974).
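Johnson's heuristic, in its derandomized form (the method of conditional expectations), fixes variables one at a time, each time choosing the value with the larger conditional expected number of satisfied clauses when the unset variables are uniform random. The clause encoding below is an assumption of this sketch:

    def expected_satisfied(clauses, assignment):
        """Expected number of satisfied clauses if unset variables are uniform 0/1.
        A clause is a list of nonzero ints: +v means variable v, -v its negation."""
        total = 0.0
        for clause in clauses:
            p_unsat = 1.0
            satisfied = False
            for lit in clause:
                v, want = abs(lit), lit > 0
                if v in assignment:
                    if assignment[v] == want:
                        satisfied = True
                        break
                    # a literal fixed to false leaves p_unsat unchanged
                else:
                    p_unsat *= 0.5  # an unset literal is false with probability 1/2
            total += 1.0 if satisfied else 1.0 - p_unsat
        return total

    def max_sat_greedy(clauses, n_vars):
        assignment = {}
        for v in range(1, n_vars + 1):
            e_true = expected_satisfied(clauses, {**assignment, v: True})
            e_false = expected_satisfied(clauses, {**assignment, v: False})
            assignment[v] = e_true >= e_false
        return assignment

Because each choice never decreases the conditional expectation, the final assignment satisfies at least the initial expected number of clauses.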

Journal ArticleDOI
TL;DR: A survey is given of research on transitivity.
Abstract: A survey is given of research on transitivity.

Journal ArticleDOI
TL;DR: The application of the finite domain part of CHIP to the solving of discrete combinatorial problems occurring in Operations Research is discussed.
Abstract: CHIP (Constraint Handling In Prolog) is a new logic programming language combining the declarative aspect of logic programming for stating search problems with the efficiency of constraint handling techniques for solving them. CHIP has been applied to many real-life problems in Operations Research and hardware design with an efficiency comparable to specific programs written in procedural languages. The main advantage of CHIP is the short development time of the programs and their great modifiability and extensibility. In this paper, we discuss the application of the finite domain part of CHIP to the solving of discrete combinatorial problems occurring in Operations Research. The basic mechanisms underlying CHIP are explained through simple examples. Solutions in CHIP of several real-life problems (e.g., cutting stock, warehouse location problems) are presented and compared with usual approaches, showing the versatility and the interest of the approach.

Journal ArticleDOI
TL;DR: This paper describes a logic based approach to mechanically construct Linear Programming models from qualitative problem specifications and illustrates it in the context of production, distribution and inventory planning problems.
Abstract: Attempts to integrate Artificial Intelligence (AI) techniques into Decision Support Systems (DSS) have received much attention in recent years. Significant among these has been the application of knowledge-based techniques to support various phases of the modeling process. This paper describes a logic based approach to mechanically construct Linear Programming (LP) models from qualitative problem specifications and illustrates it in the context of production, distribution and inventory planning problems. Specifically, we describe the features of a first-order logic based formal language called PM which is at the heart of an implemented knowledge-based tool for model construction. Problems specified in PM define a logic model which is then used to generate problem-specific inferences, and as input to a set of logic programming procedures that perform model construction.

Journal ArticleDOI
TL;DR: An algebraic representation of timed marked graphs in terms of recurrence equations is provided, linear in a nonconventional algebra, that is useful for analyzing asynchronous and repetitive production processes.

Abstract: A model to analyze certain classes of discrete event dynamic systems is presented. Previous research on timed marked graphs is reviewed and extended. This model is useful for analyzing asynchronous and repetitive production processes. In particular, applications to certain classes of flexible manufacturing systems are provided in a companion paper. Here, an algebraic representation of timed marked graphs in terms of recurrence equations is provided. These equations are linear in a nonconventional algebra, which is described. Also, an algorithm to properly characterize the periodic behavior of repetitive production processes is described. This model extends the concepts from PERT/CPM analysis to repetitive production processes.
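The nonconventional algebra in question is the (max, +) algebra. Writing x_i(k) for the epoch of the k-th firing of transition i, the recurrence equations of a timed marked graph take the form

\[
x_i(k) = \max_j \big( a_{ij} + x_j(k - m_{ij}) \big),
\]

which, in the simplest case of one token per place (m_{ij} = 1), is the linear system x(k) = A \otimes x(k-1) with (A \otimes x)_i = \max_j (a_{ij} + x_j). The periodic regime is then governed by the (max, +) eigenvalue of A, the maximum cycle mean of the graph, which gives the cycle time of the repetitive process.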