
Showing papers on "Metaheuristic published in 1990"


Book ChapterDOI
David S. Johnson1
01 Jul 1990
TL;DR: This paper surveys the state of the art with respect to the TSP, with emphasis on the performance of traditional local optimization algorithms and their new competitors, and on what insights complexity theory does, or does not, provide.
Abstract: The Traveling Salesman Problem (TSP) is often cited as the prototypical “hard” combinatorial optimization problem. As such, it would seem to be an ideal candidate for nonstandard algorithmic approaches, such as simulated annealing, and, more recently, genetic algorithms. Both of these approaches can be viewed as variants on the traditional technique called local optimization. This paper surveys the state of the art with respect to the TSP, with emphasis on the performance of traditional local optimization algorithms and their new competitors, and on what insights complexity theory does, or does not, provide.
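The traditional local optimization the survey centers on is typified by the 2-opt move for the TSP. The sketch below is a minimal Python rendering of that generic technique, not code from the paper; the `two_opt` and `tour_length` names and the demo instance are illustrative.

```python
import math

def tour_length(tour, dist):
    """Total length of the closed tour under the distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Local optimization: reverse a segment whenever that shortens the tour,
    stopping at a 2-opt local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Tiny demo: four cities at the corners of a unit square.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = two_opt([0, 2, 1, 3], dist)  # start from a self-crossing tour
```

Simulated annealing and genetic algorithms, as the abstract notes, can both be seen as relaxations of this basic accept-only-improvements loop.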

399 citations


Proceedings ArticleDOI
01 May 1990
TL;DR: A new Two Phase Optimization algorithm, a combination of Simulated Annealing and Iterative Improvement, is introduced; it outperforms the original algorithms in terms of both output quality and running time.
Abstract: Query optimization for relational database systems is a combinatorial optimization problem, which makes exhaustive search unacceptable as the query size grows. Randomized algorithms, such as Simulated Annealing (SA) and Iterative Improvement (II), are viable alternatives to exhaustive search. We have adapted these algorithms to the optimization of project-select-join queries. We have tested them on large queries of various types with different databases, concluding that in most cases SA identifies a lower cost access plan than II. To explain this result, we have studied the shape of the cost function over the solution space associated with such queries and we have conjectured that it resembles a 'cup' with relatively small variations at the bottom. This has inspired a new Two Phase Optimization algorithm, which is a combination of Simulated Annealing and Iterative Improvement. Experimental results show that Two Phase Optimization outperforms the original algorithms in terms of both output quality and running time.
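The two-phase idea (greedy descent into the 'cup', then a low-temperature anneal around its bottom) can be sketched generically. This is an illustrative Python sketch over an abstract cost/neighbor pair, not the authors' query optimizer; all names and parameter values are assumptions.

```python
import math
import random

def iterative_improvement(x, cost, neighbor, tries=200):
    """Phase 1: greedy descent -- accept only strictly better neighbors."""
    for _ in range(tries):
        y = neighbor(x)
        if cost(y) < cost(x):
            x = y
    return x

def simulated_annealing(x, cost, neighbor, temp=1.0, cooling=0.95, steps=2000):
    """Phase 2: SA started from the II result, with a modest initial temperature
    so it explores only the bottom of the 'cup'."""
    best = x
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            x = y
            if cost(x) < cost(best):
                best = x
        temp *= cooling
    return best

def two_phase(starts, cost, neighbor):
    """Two Phase Optimization: best of several II runs, then a short SA polish."""
    best = min((iterative_improvement(s, cost, neighbor) for s in starts), key=cost)
    return simulated_annealing(best, cost, neighbor)

# Demo on a convex 'cup': f(x) = x^2 over the integers, neighbors x +/- 1.
random.seed(0)
result = two_phase([50, -30, 10], lambda x: x * x,
                   lambda x: x + random.choice([-1, 1]))
```

The design mirrors the abstract's conjecture: II handles the steep walls cheaply, and SA is left to resolve the small variations at the bottom.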

387 citations


Book ChapterDOI
01 Oct 1990
TL;DR: Data collected concerning execution times show that the GENITOR genetic algorithm using multiple subpopulations may execute much faster than the single population version when the cost of the evaluation function is low; thus, total number of evaluations is not always a good metric for making performance comparisons.
Abstract: A distributed genetic algorithm is tested on several difficult optimization problems using a variety of different subpopulation sizes. Contrary to our previous results, the more comprehensive tests presented in this paper show the distributed genetic algorithm is often, but not always, superior to genetic algorithms using a single large population when the total number of evaluations is held constant. Data collected concerning execution times show that the GENITOR genetic algorithm using multiple subpopulations may execute much faster than the single population version when the cost of the evaluation function is low; thus, the total number of evaluations is not always a good metric for making performance comparisons. Finally, our results suggest that "adaptive mutation" may be an important factor in obtaining superior results using a distributed version of GENITOR.
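A multiple-subpopulation ("island") arrangement of the kind tested here can be sketched as follows. This is a generic island-model GA with a GENITOR-style replace-the-worst step, not the GENITOR code itself; the OneMax demo problem and all parameter values are illustrative.

```python
import random

def evolve(pop, fitness, generations=30, mut_rate=0.02):
    """Steady-state step: crossover of two random parents, point mutation;
    the offspring replaces the worst individual if it is fitter."""
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        cut = random.randrange(len(a))
        child = [g ^ 1 if random.random() < mut_rate else g for g in a[:cut] + b[cut:]]
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return pop

def island_ga(n_islands, pop_size, genome_len, fitness, epochs=20):
    """Island model: isolated subpopulations, with periodic migration of a copy
    of each island's best individual to its neighbor."""
    islands = [[[random.randint(0, 1) for _ in range(genome_len)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve(pop, fitness) for pop in islands]
        bests = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
            pop[worst] = list(bests[(i - 1) % n_islands])
    return max((max(pop, key=fitness) for pop in islands), key=fitness)

# Demo: OneMax -- maximize the number of ones in a bit string.
random.seed(1)
champion = island_ga(n_islands=4, pop_size=10, genome_len=20, fitness=sum)
```

Because each island evolves independently between migrations, the epochs can run in parallel, which is the execution-time advantage the abstract measures.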

201 citations


Journal ArticleDOI
TL;DR: Results indicate that tabu search consistently outperforms simulated annealing with respect to computation time while giving comparable solutions on traveling salesman problems.
Abstract: This paper describes serial and parallel implementations of two different search techniques applied to the traveling salesman problem. A novel approach has been taken to parallelize simulated annealing and the results are compared with the traditional annealing algorithm. This approach uses an abbreviated cooling schedule and achieves a superlinear speedup. Also a new search technique, called tabu search, has been adapted to execute in a parallel computing environment. A comparison between simulated annealing and tabu search indicates that tabu search consistently outperforms simulated annealing with respect to computation time while giving comparable solutions. Examples include 25, 33, 42, 50, 57, 75 and 100 city problems.
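The tabu search half of the comparison, in its basic serial form (best admissible 2-opt move each step, a short-term tabu list, and an aspiration criterion), can be sketched as below. This is a minimal illustration of the generic technique, not the paper's parallel implementation; names and the tenure/iteration settings are assumptions.

```python
import math
from collections import deque

def tour_length(tour, dist):
    """Total length of the closed tour under the distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search(tour, dist, iters=200, tenure=8):
    """Each step takes the best 2-opt neighbor whose move is not tabu,
    unless a tabu move beats the best tour found so far (aspiration)."""
    best = list(tour)
    tabu = deque(maxlen=tenure)  # recently used (i, j) reversal moves
    for _ in range(iters):
        candidates = []
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                candidates.append((tour_length(cand, dist), (i, j), cand))
        candidates.sort(key=lambda c: c[0])
        for length, move, cand in candidates:
            if move not in tabu or length < tour_length(best, dist):
                tour = cand
                tabu.append(move)
                if length < tour_length(best, dist):
                    best = cand
                break
    return best

# Demo: four cities at the corners of a unit square, crossing start tour.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = tabu_search([0, 2, 1, 3], dist)
```

Unlike annealing, the walk is deterministic: the tabu list, not randomness, forces it out of local optima, which is one source of the running-time advantage reported above.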

192 citations


Journal ArticleDOI
TL;DR: A general description of tabu search is given and various applications to optimization problems are presented and some guidelines for applying the tabu metaheuristic are exhibited.
Abstract: A general description of tabu search is given and various applications to optimization problems are presented. Some guidelines for applying the tabu metaheuristic are exhibited.

164 citations



Journal ArticleDOI
TL;DR: A mathematical model for design optimization of engineering systems is defined in this article, and several computational aspects, such as robust implementation of algorithms, use of knowledge base, interactive use of optimization, and use of a database and database management system, are discussed.

56 citations


Journal ArticleDOI
TL;DR: The application of multiobjective optimization techniques to the selection of system parameters and large scale structural design optimization problems is the main purpose of this paper.
Abstract: The use of multiobjective optimization techniques, which may be regarded as a systematic sensitivity analysis, for the selection and modification of system parameters is presented. A minimax multiobjective optimization model for structural optimization is proposed. Three typical multiobjective optimization techniques (goal programming, compromise programming and the surrogate worth trade-off method) are used to solve such a problem. The application of multiobjective optimization techniques to the selection of system parameters and large scale structural design optimization problems is the main purpose of this paper.

54 citations


Journal ArticleDOI
TL;DR: The efficiency of optimization procedures for the resulting constrained and unconstrained vector-optimization problems is examined and it is shown that for an efficient optimization procedure it is necessary to combine suitable methods for the transformation of vector optimization problems to scalar ones.
Abstract: Numerical field calculation and mathematical optimization methods are used to find the most appropriate design of synchronous machines with rare-earth permanent-magnet excitation. The efficiency of optimization procedures for the resulting constrained and unconstrained vector-optimization problems is examined. It is shown that for an efficient optimization procedure it is necessary to combine suitable methods for the transformation of vector optimization problems to scalar ones and methods for the transformation of constrained to unconstrained problems, and algorithms for unconstrained optimization. As a first attempt, it is recommended to use the preference-function method with a search algorithm such as Rosenbrock's algorithm. When the Marglin method is used, the combination of SUMT with a search algorithm, and the combination of the augmented Lagrange multiplier method with the conjugate gradient algorithm, are appropriate. The combination of this optimization procedure with numerical field calculations yields a powerful tool for the design of permanent-magnet machines.

40 citations



01 Jan 1990
TL;DR: A simple analysis of SA is presented that will provide a time bound for convergence with overwhelming probability and a simpler and more general proof of convergence for Nested Annealing, a heuristic algorithm developed in [12].
Abstract: Simulated Annealing is a family of randomized algorithms used to solve many combinatorial optimization problems. In practice they have been applied to solve some presumably hard (e.g., NP-complete) problems. The level of performance obtained has been promising [5, 2, 6, 14]. The success of this heuristic technique has motivated analysis of the algorithm from a theoretical point of view. In particular, people have looked at its convergence. It has been shown (see e.g., [10]) that this algorithm converges in the limit to a globally optimal solution with probability 1. However, few of these convergence results specify a time limit within which the algorithm is guaranteed to converge (with some high probability, say). We present, for the first time, a simple analysis of SA that provides a time bound for convergence with overwhelming probability. The analysis holds no matter what annealing schedule is used. Convergence of Simulated Annealing in the limit will follow as a corollary to our time-convergence proof. In this paper we also look at optimization problems for which the cost function has some special properties. We prove that for these problems the convergence is much faster. In particular, we give a simpler and more general proof of convergence for Nested Annealing, a heuristic algorithm developed in [12]. Nested Annealing is based on defining a graph corresponding to the given optimization problem. If this graph is 'small separable', they [12] show that Nested Annealing will converge 'faster'. For an arbitrary optimization problem, we may not have any knowledge about the 'separability' of its graph. In this paper we give tight bounds for the 'separability' of a random graph. We then use these bounds to analyze the expected behavior of Nested Annealing on an arbitrary optimization problem. The 'separability' bounds we derive in this paper are of independent interest and have the potential of finding other applications.

Book ChapterDOI
01 Sep 1990
TL;DR: This paper presents a new method for a load balanced and communication optimized process distribution onto an arbitrary processor (network) topology that is fully distributed and based on a purely local method.
Abstract: Generating an efficient program for a parallel computer requires that the distribution of the processes on the processors comprising the parallel computer be optimal. This paper presents a new method for a load-balanced and communication-optimized process distribution onto an arbitrary processor (network) topology. As opposed to many other approaches to this problem, the presented algorithm is fully distributed and based on a purely local method. It has been shown to be much faster than classical methods such as simulated annealing, heuristic search, etc.

Journal ArticleDOI
TL;DR: An “intelligent front-end” or “logic assistant” is an interactive program devised to assist the users of an information retrieval system in the formulation of their queries, and a problem of query optimization with an average efficiency criterion is studied.
Abstract: An "intelligent front-end" or "logic assistant" is an interactive program devised to assist the users of an information retrieval system in the formulation of their queries. In order to provide knowledge usable in such a program, we study a problem of query optimization with an average efficiency criterion. We formulate it as a new combinatorial optimization problem, which we call 0-1 hyperbolic sum, and provide an exact branch-and-bound algorithm and two heuristics (of simulated annealing and tabu search type) to solve it. Computational experience illustrating the effectiveness of the tabu search technique is reported.

Journal ArticleDOI
TL;DR: An approach for solving optimization problems of chemical processes in which some search variables must take only some standard discrete values is described, based on joint application of branch and bound procedure and nonlinear programming algorithms.

Journal ArticleDOI
TL;DR: A new solution method, which is called modular approach (MA), is presented to solve the optimization problem and extends the Morin-Marsten hybrid idea to solve troublesome problems of dynamic programming.
Abstract: A generalized optimization system with a discrete decision space is described, and an optimization problem associated with the system is defined. A new solution method, called the modular approach (MA), is presented to solve the optimization problem. This method extends the Morin-Marsten hybrid idea to solve troublesome problems of dynamic programming. The present method is also an extension of the branch-and-bound method using breadth-first search.


Proceedings ArticleDOI
01 Dec 1990
TL;DR: Investigates the finite-time behavior of two specific simulation optimization algorithms: a Robbins-Monro procedure applied in a conventional way and a more recently proposed single-run optimization algorithm and provides some basic insight into the behavior of such algorithms.
Abstract: Investigates the finite-time behavior of two specific simulation optimization algorithms: a Robbins-Monro procedure applied in a conventional way and a more recently proposed single-run optimization algorithm. By applying these algorithms to simple systems it is shown that, in practice, convergence of the former algorithm can be slow while that of the latter is very fast. The authors also provide evidence that the choice of projection operator (to deal with constraints in the optimization problem) has a significant effect on the finite-time performance of the latter algorithm. These results provide some basic insight into the behavior of such algorithms.



01 Apr 1990
TL;DR: The dynamic programming method due to Bellman was augmented with an optimum sensitivity analysis that provides a mathematical basis for the above decomposition, and overcomes the curse of dimensionality that limited the original formulation of DP.
Abstract: Decomposition of large problems into a hierarchic pyramid of subproblems was proposed in the literature as a means for optimization of engineering systems too large for all-in-one optimization. This decomposition was established heuristically. The dynamic programming (DP) method due to Bellman was augmented with an optimum sensitivity analysis that provides a mathematical basis for the above decomposition, and overcomes the curse of dimensionality that limited the original formulation of DP. Numerical examples are cited.

Proceedings ArticleDOI
01 Nov 1990
TL;DR: A method of distributed optimization using the ALOPEX algorithm and a paradigm of temperature spreading and applications to the problem of pattern recognition are presented.
Abstract: A method of distributed optimization using the ALOPEX algorithm and a paradigm of temperature spreading is described. It shows several desirable properties: the number of iterations is significantly smaller than with global optimization and independent of the data set size; a stopping condition is introduced; and the biological plausibility for neural network implementations is maintained. In particular, applications to the problem of pattern recognition are presented.

Proceedings ArticleDOI
Maurice Karnaugh1
01 Jan 1990
TL;DR: The primary requirements for using a search effectively to solve any set of optimization problems are given: adequately define the problem set, have good representations of the problem states and the search tree nodes, and find as good a heuristic as circumstances permit.
Abstract: A collection of tools for finding good solutions to cost-minimization problems by means of nonadmissible heuristic search is discussed. The origin of these tools lies in the assumption that many biased heuristics are orderly. Their usefulness will depend upon how well the orderliness assumptions are realized. Therefore, domain-dependent knowledge must be applied in selecting a good search strategy. A simple quadratic sort is used to illustrate, discuss, and test some of the node-selection and tree-pruning methods. Most of the results presented are not demonstrably optimal. The primary requirements for using a search effectively to solve any set of optimization problems are therefore given as follows: (1) adequately define the problem set, (2) have good representations of the problem states and the search tree nodes, (3) adequately define the state transformation operators, and (4) find as good a heuristic as circumstances permit. When these requirements have been met, it may be found that optimization is possible and worthwhile. If this is not the case, some optimization methods should be considered.


Book ChapterDOI
01 Jan 1990
TL;DR: It is argued that the large computing time needed in applying global optimization techniques and the suitability of some of these algorithms to parallelization make them ideal candidates for execution on parallel computers.
Abstract: The problems encountered in solving two real-life optimal design problems suggest that explicit global optimization methods, rather than some ad hoc combination of local optimization techniques, should be used. It is argued that the large computing time needed in applying global optimization techniques and the suitability of some of these algorithms to parallelization make them ideal candidates for execution on parallel computers. Results obtained with parallel Fortran on a 100-processor parallel computer using the processor-farm configuration show that good speedup is achievable.

Proceedings ArticleDOI
01 May 1990
TL;DR: The effectiveness and efficiency of the new algorithm have been demonstrated through numerical examples which are commonly used as benchmarks by existing optimization methods, and by several electronic circuit examples, and in all cases encouraging results have been obtained.
Abstract: A constrained optimization algorithm suited to integrated-circuit (IC) design is presented. In contrast to existing optimization methods in which the constraint functions are only linearized, the algorithm uses a recently developed method of using optimization history data to obtain, without extra simulation, a second-order approximation to both objective and constraint functions. In the new algorithm, the search direction created at each optimization iteration is based on this second-order approximation. As a result, the computational efficiency has been greatly improved compared with other constrained optimization methods in terms of the number of function evaluations required (a measure which is crucial in the context of IC design). The effectiveness and efficiency of the new algorithm have been demonstrated through numerical examples which are commonly used as benchmarks by existing optimization methods, and by several electronic circuit examples. In all cases encouraging results have been obtained.

Book ChapterDOI
01 Jan 1990
TL;DR: The choice facing a problem-solver is not among different solutions, but rather among the different algorithms which find these solutions, and each available algorithm should be evaluated based on both its output and its processing costs.
Abstract: A real-world problem-solver must balance conflicting objectives, and therefore measures the quality of a solution to his problem along multiple axes of value. However, algorithms for finding optimal multi-objective solutions usually require exponential computation time. Commonly, problem-solvers prefer computationally inexpensive methods which find non-optimal solutions [18]. Thus, there exists a clear trade-off between the desirability of an algorithm’s output, and the costs of computing that output. In fact, the choice facing a problem-solver is not among different solutions, but rather among the different algorithms which find these solutions. To make a wise choice, each available algorithm should be evaluated based on both its output and its processing costs[6].



01 Jan 1990
TL;DR: Simple and asymptotically optimal heuristic algorithms are presented that solve the bottleneck assignment problem, the bottleneck spanning tree problem and the directed bottleneck traveling salesman problem in O(n^2) time.
Abstract: The purpose of this paper is twofold: to present a probabilistic analysis of a class of bottleneck (capacity) optimization problems, and to design simple and efficient heuristic algorithms guaranteed to be asymptotically optimal. Our unified approach is applied to a wide variety of bottleneck problems including vehicle routing problems, location problems, and communication network problems. In particular, we present simple and asymptotically optimal heuristic algorithms that solve the bottleneck assignment problem, the bottleneck spanning tree problem and the directed bottleneck traveling salesman problem in O(n^2) steps (our algorithm runs in O(n^(3+t)) for the undirected version of the bottleneck traveling salesman problem). We also discuss polynomial heuristic algorithms for the bottleneck k-clique problem and the bottleneck k-location problem. We prove using our probabilistic analysis that these algorithms with high probability (whp) produce the optimal solution. Furthermore, we extend our results to the d-th best solution for some bottleneck optimization problems. This research was supported by AFOSR Grant 90-0107, and in part by NSF Grant CCR-8900305, and by Grant R01 LM05118 from the National Library of Medicine.
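For intuition about the bottleneck objective itself: the bottleneck spanning tree value (the minimum, over spanning trees, of the maximum edge weight) can be computed by a standard Kruskal-style sweep. The sketch below shows that textbook method, not the paper's probabilistic heuristics; the function name and demo graph are illustrative.

```python
def bottleneck_spanning_tree_value(n, edges):
    """Smallest w such that the edges of weight <= w connect all n vertices,
    i.e. the bottleneck value min over spanning trees of the max edge weight.
    edges: list of (weight, u, v) with vertices numbered 0..n-1."""
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for w, u, v in sorted(edges):          # sweep edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
            if components == 1:
                return w                   # last weight needed to connect all
    return None                            # graph is not connected

# Demo: 4 vertices; the heavy edges (9 and 10) are never needed.
value = bottleneck_spanning_tree_value(
    4, [(1, 0, 1), (2, 1, 2), (10, 0, 2), (3, 2, 3), (9, 1, 3)])
```

The heuristics in the paper aim for the same bottleneck objectives on harder problems (assignment, TSP), where no such exact polynomial sweep is available.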

01 Sep 1990
TL;DR: Inverse nonlinear programming problems for a new class of optimization problems relevant for game theory, system optimization, multicriteria optimization, etc. are considered by the author.
Abstract: Inverse nonlinear programming problems for a new class of optimization problems relevant for game theory, system optimization, multicriteria optimization, etc. are considered by the author. This paper deals with problem definitions, numerical methods and applications of the inverse nonlinear programming problem in multicriteria optimization. Some associated properties of related parametric optimization problems and software implementations are also considered.