
Showing papers on "Metaheuristic published in 1991"



01 Jan 1991
TL;DR: An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions.
Abstract: A combination of distributed computation, positive feedback and constructive greedy heuristic is proposed as a new approach to stochastic optimization and problem solving. Positive feedback accounts for rapid discovery of very good solutions, distributed computation avoids premature convergence, and greedy heuristic helps the procedure to find acceptable solutions in the early stages of the search process. An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions. We report on many simulation results and discuss the working of the algorithm. Some hints about how this approach can be applied to a variety of optimization problems are also given.

376 citations
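
A minimal sketch of the general mechanism described above, combining trail-based positive feedback across many agents with a greedy distance heuristic on a random TSP instance, may help make the idea concrete. The function names, parameter values, and update rule below are illustrative assumptions, not the authors' exact formulation.

import random
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_system_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # trail intensity (positive feedback)
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):                    # distributed computation: many agents
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]     # greedy heuristic: prefer short edges
                total = sum(w for _, w in weights)
                r, acc = random.uniform(0, total), 0.0
                chosen = weights[-1][0]            # fallback for floating-point edge cases
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        chosen = j
                        break
                tour.append(chosen)
                unvisited.remove(chosen)
            tours.append(tour)
        for i in range(n):                         # evaporation limits premature convergence
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:                         # reinforcement: shorter tours deposit more trail
            length = tour_length(tour, dist)
            for k in range(len(tour)):
                a, b = tour[k], tour[(k + 1) % len(tour)]
                tau[a][b] += q / length
                tau[b][a] += q / length
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(15)]
    dist = [[math.hypot(px - qx, py - qy) or 1e-9 for (qx, qy) in pts] for (px, py) in pts]
    print(ant_system_tsp(dist))

The reinforcement step is where the positive feedback enters: edges belonging to shorter tours accumulate more trail and are therefore sampled more often in later iterations.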


Book
01 Jan 1991
TL;DR: Applying the 'genetic' approach to robot trajectory generation shows how the adaptive mechanisms of evolution can solve real-world problems, and introduces a new philosophy to optimization in general and to engineering in particular.
Abstract: Classical optimization methodologies fall short in very large and complex domains. This book suggests a different approach to optimization, one based on the 'blind' and heuristic mechanisms of evolution and population genetics. The genetic approach introduces a new philosophy to optimization in general, and to engineering in particular. By applying the 'genetic' approach to robot trajectory generation, much can be learned about the adaptive mechanisms of evolution and how these mechanisms can solve real-world problems. It is further suggested that optimization at large may benefit greatly from the adaptive optimization exhibited by natural systems when attempting to solve complex optimization problems, and that the determinism of classical optimization models may sometimes be an obstacle in nonlinear systems. This book is unique in that it reports in detail on an application of genetic algorithms to a real-world problem and explains the considerations taken into account during the development work. Furthermore, it addresses robotics in two new respects: the optimization of the trajectory specification, which has so far been carried out by human operators and has received little attention with regard to either automation or optimization, and the introduction of a heuristic strategy to a field dominated by deterministic strategies.

202 citations


Journal ArticleDOI
Fabio Schoen1
TL;DR: Stochastic algorithms for global optimization are reviewed with the aim of presenting recent papers on the subject, which have received only scarce attention in the most recent published surveys.
Abstract: In this paper stochastic algorithms for global optimization are reviewed. After a brief introduction on random-search techniques, a more detailed analysis is carried out on the application of simulated annealing to continuous global optimization. The aim of such an analysis is mainly that of presenting recent papers on the subject, which have received only scarce attention in the most recent published surveys. Finally a very brief presentation of clustering techniques is given.

141 citations
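
Since the survey's central topic is simulated annealing applied to continuous global optimization, a compact generic sketch may help. The Gaussian proposal, geometric cooling schedule, and parameter values below are assumptions for illustration and are not taken from any of the surveyed algorithms.

import math
import random

def simulated_annealing(f, lo, hi, n_steps=20000, t0=1.0, cooling=0.999, step=0.1):
    dim = len(lo)
    x = [random.uniform(lo[i], hi[i]) for i in range(dim)]
    fx, t = f(x), t0
    best_x, best_f = list(x), fx
    for _ in range(n_steps):
        # Gaussian move, clipped back into the box domain.
        y = [min(hi[i], max(lo[i], x[i] + random.gauss(0.0, step * (hi[i] - lo[i]))))
             for i in range(dim)]
        fy = f(y)
        # Metropolis rule: always accept improvements, sometimes accept worse points.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling                      # geometric cooling
    return best_x, best_f

if __name__ == "__main__":
    # Rastrigin function: many local minima, global minimum 0 at the origin.
    rastrigin = lambda v: 10 * len(v) + sum(x * x - 10 * math.cos(2 * math.pi * x) for x in v)
    print(simulated_annealing(rastrigin, [-5.12] * 4, [5.12] * 4))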



Journal ArticleDOI
TL;DR: This feasibility study is merely the first step in a project which aims at comparing the performance of the genetic algorithm with optimization methods that have already been applied to the wavelength selection problem.
Abstract: The genetic algorithm is proposed as a powerful search strategy for chemometricians engaged in large-scale optimization problems. The search space is explored while past information is exploited using memory gleaned from a natural evolution process. The algorithm is robust and highly efficient at the same time. These favorable properties fit into a mathematically well-founded framework, known as the schema theorem, and render the genetic algorithm a reasonable choice for tackling complex, large-scale optimization problems. For one problem of this kind — the optimal selection of wavelengths in multi-component analysis — it is shown that the genetic algorithm is able to find acceptable solutions in a reasonable time. This feasibility study is merely the first step in a project which aims at comparing the performance of the genetic algorithm with optimization methods that have already been applied to the wavelength selection problem.

123 citations
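
A rough sketch of the bit-string encoding this kind of wavelength selection typically uses: each bit marks whether a wavelength is included. In the paper the fitness would come from the calibration error of the multi-component model; the toy fitness, operators, and parameter values below are stand-in assumptions so the sketch runs on its own.

import random

N_WAVELENGTHS = 40
INFORMATIVE = set(random.sample(range(N_WAVELENGTHS), 6))   # toy "useful" wavelengths

def fitness(bits):
    selected = {i for i, b in enumerate(bits) if b}
    return len(selected & INFORMATIVE) - 0.05 * len(selected)   # reward coverage, penalize size

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))              # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [b ^ (random.random() < rate) for b in bits]   # occasional bit flips

def select_wavelengths(pop_size=60, generations=80):
    pop = [[random.randint(0, 1) for _ in range(N_WAVELENGTHS)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)              # keep the best chromosome
        pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                         for _ in range(pop_size - 1)]
    best = max(pop, key=fitness)
    return sorted(i for i, b in enumerate(best) if b), round(fitness(best), 3)

if __name__ == "__main__":
    print("selected wavelengths:", select_wavelengths())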


Journal ArticleDOI
TL;DR: The paper explains the tabu-search mechanism, discusses the general procedure with the use of simple examples, and details the important computational considerations that affect the performance of the routine.
Abstract: Tabu search is a powerful optimization procedure that has been successfully applied to a number of combinatorial optimization problems, including integer-programming and quadratic-assignment problems. The procedure is simple to implement, sufficiently versatile to incorporate problem-specific constraints, and may also act as a control mechanism to monitor and direct the progress of other optimization routines. The paper explains the tabu-search mechanism, discusses the general procedure with the use of simple examples, and details the important computational considerations that affect the performance of the routine. An electronic-circuit design problem is used as an example of the application of tabu search. Finally, possible modifications to the basic search procedure are indicated, together with lines of further investigation.

118 citations
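
A minimal sketch of the tabu-search mechanism the paper explains: evaluate the neighborhood, forbid recently reversed moves through a short-term tabu list, and override the prohibition only when a move would beat the best solution found so far (aspiration). The toy binary quadratic objective and the tenure value are illustrative assumptions, not the paper's examples.

import random

def tabu_search(n=20, iters=200, tenure=7, seed=0):
    rng = random.Random(seed)
    q = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]   # toy objective

    def value(x):
        return sum(q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_val = list(x), value(x)
    tabu_until = [0] * n                     # iteration until which flipping bit i is tabu

    for it in range(1, iters + 1):
        best_move, best_move_val = None, float("-inf")
        for i in range(n):                   # neighborhood: flip one bit
            x[i] ^= 1
            v = value(x)
            x[i] ^= 1
            allowed = it >= tabu_until[i] or v > best_val    # aspiration criterion
            if allowed and v > best_move_val:
                best_move, best_move_val = i, v
        if best_move is None:
            continue
        x[best_move] ^= 1                    # make the (possibly non-improving) move
        tabu_until[best_move] = it + tenure  # short-term memory: forbid reversing it
        if best_move_val > best_val:
            best, best_val = list(x), best_move_val
    return best, best_val

if __name__ == "__main__":
    print(tabu_search())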


Journal ArticleDOI
TL;DR: This paper describes applications of GAs to numerical optimization, presents three novel ways to handle such problems, and gives some experimental results.
Abstract: Genetic algorithms (GAs) are stochastic adaptive algorithms whose search method is based on simulation of natural genetic inheritance and Darwinian striving for survival. They can be used to find approximate solutions to numerical optimization problems in cases where finding the exact optimum is prohibitively expensive, or where no algorithm is known. However, such applications can encounter problems that sometimes delay, if not prevent, finding the optimal solutions with desired precision. In this paper we describe applications of GAs to numerical optimization, present three novel ways to handle such problems, and give some experimental results.

112 citations
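
The paper's three specific techniques are not reproduced here, but a generic real-coded genetic algorithm for numerical optimization, with arithmetic crossover and Gaussian mutation on a simple continuous test function, illustrates the kind of search being discussed. All parameter values are assumptions.

import random

def real_coded_ga(f, lo, hi, pop_size=50, generations=200, mut_rate=0.1, sigma=0.1):
    dim = len(lo)
    rand_ind = lambda: [random.uniform(lo[i], hi[i]) for i in range(dim)]
    pop = [rand_ind() for _ in range(pop_size)]

    def tournament():
        return min(random.sample(pop, 3), key=f)           # minimization

    for _ in range(generations):
        new_pop = [min(pop, key=f)]                         # elitism: keep the best
        while len(new_pop) < pop_size:
            a, b = tournament(), tournament()
            w = random.random()
            child = [w * a[i] + (1 - w) * b[i] for i in range(dim)]   # arithmetic crossover
            child = [min(hi[i], max(lo[i], c + random.gauss(0, sigma * (hi[i] - lo[i]))))
                     if random.random() < mut_rate else c
                     for i, c in enumerate(child)]          # Gaussian mutation, clipped
            new_pop.append(child)
        pop = new_pop
    best = min(pop, key=f)
    return best, f(best)

if __name__ == "__main__":
    sphere = lambda v: sum(x * x for x in v)
    print(real_coded_ga(sphere, [-5.0] * 5, [5.0] * 5))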


Journal ArticleDOI
TL;DR: In this article, a branch-and-bound framework is proposed for solving global optimization problems with a few variables and constraints, and the first complete solution of two difficult test problems is presented.
Abstract: Global optimization problems with a few variables and constraints arise in numerous applications but are seldom solved exactly. Most often only a local optimum is found, or if a global optimum is detected no proof is provided that it is one. We study here the extent to which such global optimization problems can be solved exactly using analytical methods. To this effect, we propose a series of tests, similar to those of combinatorial optimization, organized in a branch-and-bound framework. The first complete solution of two difficult test problems illustrates the efficiency of the resulting algorithm. Computational experience with the program BAGOP, which uses the computer algebra system MACSYMA, is reported. Many test problems from the compendiums of Hock and Schittkowski and other sources have been solved.

62 citations


Book
01 Jan 1991
TL;DR: Part 1, Optimization as a circuit design tool: a generalized strategy for engineering design; optimization and function minimization; function space and the optimization problem of computer-aided design; scope of the book.
Abstract: Part 1, Optimization as a circuit design tool: a generalized strategy for engineering design; optimization and function minimization; function space and the optimization problem of computer-aided design; scope of the book. Part 2, Preliminary concepts: stationary points of functions; unidirectional search; classification of optimization methods. Part 3, Direct search optimization methods: tabulation methods; sequential methods; linear methods; quadratically terminating direct search methods. Part 4, Gradient optimization methods: steepest descent; Newton's method; quasi-Newton methods; least squares (Gauss-Newton) methods. Part 5, Unconstrained optimization in practice: local minima; selection of an algorithm; gradient evaluation. Part 6, Constrained optimization methods: classes of constrained optimization method; linear programming; quadratic and nonlinear programming; commercial availability of constrained optimization algorithms. Part 7, Applications in electronic circuit design: optimization of linear frequency-selective networks; optimization of nonlinear networks; multiple-criterion optimization and statistical design of integrated circuits; simulated annealing - a global optimization method?; the future of optimization in electronic systems design.

52 citations
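
Part 4 of the book covers gradient methods such as steepest descent and Newton's method. The toy comparison below, on a two-dimensional quadratic f(x) = 0.5 x^T A x - b^T x, is an illustrative assumption of ours and not an example taken from the book.

def grad(a, b, x):
    return [a[0][0] * x[0] + a[0][1] * x[1] - b[0],
            a[1][0] * x[0] + a[1][1] * x[1] - b[1]]

def steepest_descent(a, b, x, lr=0.1, steps=200):
    for _ in range(steps):
        g = grad(a, b, x)
        x = [x[0] - lr * g[0], x[1] - lr * g[1]]
    return x

def newton_step(a, b, x):
    # For a quadratic the Hessian is A, so one Newton step solves A x = b exactly.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    g = grad(a, b, x)
    dx = [( a[1][1] * g[0] - a[0][1] * g[1]) / det,
          (-a[1][0] * g[0] + a[0][0] * g[1]) / det]
    return [x[0] - dx[0], x[1] - dx[1]]

if __name__ == "__main__":
    a = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
    b = [1.0, 2.0]
    x0 = [0.0, 0.0]
    print("steepest descent:", steepest_descent(a, b, x0))
    print("one Newton step: ", newton_step(a, b, x0))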


Journal ArticleDOI
TL;DR: In this paper, the authors describe an implementation of the tabu search metaheuristic that effectively finds a low-cost topology for a communications network to provide a centralized new service.
Abstract: We describe an implementation of the tabu search metaheuristic that effectively finds a low-cost topology for a communications network to provide a centralized new service. Our results are compared to those of a greedy algorithm which applies corresponding decision rules, but without the guidance of the tabu search framework. These problems are difficult computationally, representing integer programs that can involve as many as 10,000 integer variables and 2,000 constraints in practical applications. The tabu search approach succeeded in obtaining significant improvements over the greedy approach, yielding optimal solutions to problems small enough to allow independent verification of optimality status and, more generally, yielding both absolute and percentage cost improvements that did not deteriorate with increasing problem size.

Journal ArticleDOI
TL;DR: An overview of interval arithmetical tools and basic techniques that can be used to construct deterministic global optimization algorithms and are applicable to unconstrained and constrained optimization as well as to nonsmooth optimization and to problems over unbounded domains is presented.
Abstract: An overview of interval arithmetical tools and basic techniques is presented that can be used to construct deterministic global optimization algorithms. These tools are applicable to unconstrained and constrained optimization as well as to nonsmooth optimization and to problems over unbounded domains. Since almost all interval based global optimization algorithms use branch-and-bound methods with iterated bisection of the problem domain we also embed our overview in such a setting.
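
A one-dimensional sketch of the interval branch-and-bound scheme the overview describes: evaluate a natural interval extension of the objective over a box, discard boxes whose lower bound exceeds the best known upper bound, and bisect the survivors. The hand-rolled interval arithmetic, the objective, and the tolerance are illustrative assumptions.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def i_scale(c, a):
    return (c * a[0], c * a[1]) if c >= 0 else (c * a[1], c * a[0])

def f_interval(x):
    # Natural interval extension of f(x) = x**4 - 3*x**2 + x.
    x2 = i_mul(x, x)
    return i_add(i_add(i_mul(x2, x2), i_scale(-3.0, x2)), x)

def f_point(x):
    return x ** 4 - 3 * x ** 2 + x

def interval_branch_and_bound(lo=-3.0, hi=3.0, tol=1e-4):
    work = [(lo, hi)]
    best_x = (lo + hi) / 2.0
    best_upper = f_point(best_x)                 # incumbent upper bound from a sample point
    while work:
        a, b = work.pop()
        lower_bound, _ = f_interval((a, b))
        if lower_bound > best_upper:             # the box provably cannot contain the minimum
            continue
        mid = (a + b) / 2.0
        if f_point(mid) < best_upper:
            best_x, best_upper = mid, f_point(mid)
        if b - a > tol:
            work.extend([(a, mid), (mid, b)])    # iterated bisection
    return best_x, best_upper

if __name__ == "__main__":
    print(interval_branch_and_bound())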


01 Jan 1991
TL;DR: In this article, a simple adaptation of Tabu Search which uses only a short-term memory strategy to overcome local optimality is developed for the vehicle routing problem with time window constraints.
Abstract: This paper addresses the application of the Tabu Search technique to the vehicle routing problem with time window constraints. A simple adaptation of Tabu Search which uses only a short-term memory strategy to overcome local optimality is developed. At first, an initial solution is obtained by a sequential route construction algorithm. After that, an arc interchange improving procedure is applied, using as move attributes the deleted edges and the added edges. A sensitivity analysis is carried out in order to establish good choices for the number of edges considered tabu in a move and for the lengths of the tabu lists. The method is tested on a large set of randomly generated routing problems and on a set of classical test problems.
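
A sketch of the edge-attribute tabu mechanism described above, reduced to a single route without time-window constraints so it stays short: the edges deleted by an arc-exchange move become tabu attributes and may not be re-inserted for a number of iterations, unless doing so improves on the best solution found. The parameters are illustrative assumptions, not values from the paper's sensitivity analysis.

import math
import random

def route_cost(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def tabu_arc_exchange(dist, iters=300, tenure=10, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    route = [0] + rng.sample(range(1, n), n - 1) + [0]    # depot (node 0) at both ends
    best, best_cost = list(route), route_cost(route, dist)
    tabu = {}                                 # edge -> iteration until which it stays tabu

    for it in range(1, iters + 1):
        best_move, best_move_cost = None, float("inf")
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]   # 2-opt arc exchange
                added = {frozenset((cand[i - 1], cand[i])), frozenset((cand[j], cand[j + 1]))}
                cost = route_cost(cand, dist)
                is_tabu = any(tabu.get(e, 0) >= it for e in added)
                if (not is_tabu or cost < best_cost) and cost < best_move_cost:
                    best_move, best_move_cost = (i, j, cand), cost
        if best_move is None:
            continue
        i, j, cand = best_move
        # The deleted edges become tabu attributes: re-adding them is forbidden
        # for `tenure` iterations unless it improves on the best solution found.
        for e in (frozenset((route[i - 1], route[i])), frozenset((route[j], route[j + 1]))):
            tabu[e] = it + tenure
        route = cand
        if best_move_cost < best_cost:
            best, best_cost = list(route), best_move_cost
    return best, best_cost

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(12)]
    dist = [[math.hypot(ax - bx, ay - by) for (bx, by) in pts] for (ax, ay) in pts]
    print(tabu_arc_exchange(dist))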

01 May 1991
TL;DR: This work presents several analytical and experimental results that shed some light on the shape of the cost function of the access plan spaces with which the randomized query optimization algorithms must deal, and concludes that the space of both deep and bushy trees is to be preferred over thespace of left-deep trees for query optimization.
Abstract: Query optimization for relational database systems is a combinatorial optimization problem, which makes exhaustive search unacceptable as the query size grows. Randomized algorithms, such as Iterative Improvement and Simulated Annealing, are viable alternatives to exhaustive search. We present several analytical and experimental results that shed some light on the shape of the cost function of the access plan spaces with which the randomized query optimization algorithms must deal. These are the space that includes only left-deep trees, and the space that includes both deep and bushy trees. We conclude that the shape of both spaces essentially forms a 'well', but of a distinctly different quality. This has inspired a new algorithm, Two Phase Optimization, which is a combination of Simulated Annealing and Iterative Improvement. We show how Iterative Improvement, Simulated Annealing, and Two Phase Optimization perform on the two spaces of interest and explain the results based on the above analysis on the shape of their cost function. In particular, the results show that Two Phase Optimization outperforms the original algorithms in terms of both output quality and running time. Additional experimentation shows that Two Phase Optimization is also very effective on small queries, having the traditional algorithm of System R as the basis for comparison. Thus, it emerges as a strong candidate for query optimization in future database systems. Finally, a comparison between the two spaces of interest in their form used in this work leads to the rather surprising conclusion that the space of both deep and bushy trees is to be preferred over the space of left-deep trees for query optimization. The former has a more definite shape of a 'well' than the latter and also includes more efficient alternatives in most cases. Hence, for the specific choices of connections of alternative access plans in the two spaces, the space of deep and bushy trees is superior with respect to both optimization time and output quality of the algorithms that use it as their search space.
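
A sketch of the Two Phase Optimization idea on a toy discrete space: Iterative Improvement first descends to a local minimum, then Simulated Annealing restarts from that point with a low initial temperature. The cost function, single-start first phase, bit-flip neighborhood, and parameters are illustrative assumptions; the paper works in query plan spaces and uses several improvement starts.

import math
import random

def make_cost(n, seed=0):
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return lambda x: sum(w[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def neighbors(x):
    for i in range(len(x)):
        y = list(x)
        y[i] ^= 1
        yield y

def iterative_improvement(cost, x):
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if cost(y) < cost(x):
                x, improved = y, True
                break
    return x                                    # a local minimum

def annealing_phase(cost, x, t0=0.5, cooling=0.98, steps=2000):
    cx = cost(x)
    best, best_c, t = list(x), cx, t0
    for _ in range(steps):
        i = random.randrange(len(x))
        y = list(x)
        y[i] ^= 1                               # random single-bit move
        cy = cost(y)
        if cy <= cx or random.random() < math.exp((cx - cy) / t):
            x, cx = y, cy
            if cx < best_c:
                best, best_c = list(x), cx
        t *= cooling
    return best, best_c

def two_phase_optimization(n=20):
    cost = make_cost(n)
    x0 = [random.randint(0, 1) for _ in range(n)]
    local_min = iterative_improvement(cost, x0)    # phase 1: fast descent
    return annealing_phase(cost, local_min)        # phase 2: low initial temperature

if __name__ == "__main__":
    print(two_phase_optimization())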


Proceedings ArticleDOI
15 Aug 1991
TL;DR: The topics discussed are evolutionary programming; genetic algorithms; evolutionary function optimization experiments; background to classification problems and experimental results with evolutionary training.
Abstract: Training neural networks by the implementation of a gradient-based optimization algorithm (e.g., back-propagation) often leads to locally optimal solutions which may be far removed from the global optimum. Evolutionary optimization methods offer a procedure to stochastically search for suitable weights and bias terms given a specific network topology. The topics discussed are evolutionary programming; genetic algorithms; evolutionary function optimization experiments; background to classification problems and experimental results with evolutionary training.
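
A small sketch of evolutionary weight training for a fixed network topology, as an alternative to gradient-based back-propagation: a population of weight vectors is mutated with Gaussian noise and the fittest survive. The XOR task, the 2-3-1 network, and the parameters are illustrative assumptions, not the experiments reported in the paper.

import math
import random

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]    # XOR
N_HIDDEN = 3
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1             # 2-3-1 net with biases

def sigmoid(z):
    z = max(-60.0, min(60.0, z))          # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    hidden = []
    for h in range(N_HIDDEN):
        z = w[2 * h] * x[0] + w[2 * h + 1] * x[1] + w[2 * N_HIDDEN + h]   # weights + bias
        hidden.append(sigmoid(z))
    out = sum(w[3 * N_HIDDEN + h] * hidden[h] for h in range(N_HIDDEN)) + w[-1]
    return sigmoid(out)

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=50, generations=400, sigma=0.3):
    pop = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        children = [[wi + random.gauss(0, sigma) for wi in w] for w in pop]
        pop = sorted(pop + children, key=error)[:pop_size]      # (mu + mu) selection
    return pop[0], error(pop[0])

if __name__ == "__main__":
    best, err = evolve()
    print("squared error on XOR:", round(err, 4))
    for x, y in DATA:
        print(x, "->", round(forward(best, x), 3), "target", y)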

Journal ArticleDOI
TL;DR: In this paper, the authors give an example to illustrate the gap between multiobjective and single-objective optimization, which solves a problem proposed in Ref. 1, and demonstrate a gap between the two types of optimization.
Abstract: We give an example to illustrate a gap between multiobjective optimization and single-objective optimization, which solves a problem proposed in Ref. 1.

Proceedings ArticleDOI
02 Sep 1991
TL;DR: This paper looks into an implementation of tabu search on dedicated hardware and shows a potential for improvements of two orders of magnitude in the time taken to perform a fixed number of iterations for the traveling salesman problem (TSP).
Abstract: Tabu search is a promising new optimization heuristic for obtaining near-optimum solutions to combinatorial optimization problems. This paper looks into an implementation of tabu search on dedicated hardware and shows a potential for improvements of two orders of magnitude in the time taken to perform a fixed number of iterations for the traveling salesman problem (TSP).

Proceedings ArticleDOI
01 Aug 1991
TL;DR: In the implementation of the system, a graph representation of a solution of the problem was used, as opposed to the representations based on bit strings (as is done in most work on genetic algorithms), which is part of a larger project to create a new programming environment to support all kinds of optimization problems.
Abstract: Genetic algorithms are adaptive algorithms which find solutions to problems by an evolutionary process based on natural selection. They can be used to find approximate solutions to optimization problems in cases where finding the precise optimum is prohibitively expensive, or where no algorithm is known. This paper discusses the use of (non-standard) genetic algorithms for solving an optimization problem for a communication network. In the implementation of the system we have used a graph representation of a solution of the problem, as opposed to the representations based on bit strings used in most work on genetic algorithms. This work is also part of a larger project to create a new programming environment to support all kinds of optimization problems. There is a large class of interesting problems for which no reasonably fast algorithms have been developed. Many of these problems are optimization or approximation problems that arise frequently in applications. For that reason the lack of efficient algorithms is of real importance. Given such a hard optimization problem it is often possible to find an efficient algorithm whose solution is approximately optimal. In fact, sometimes there is an entire family of approximation algorithms for a given problem, in which the better approximations require more time. For some hard optimization problems we can also use probabilistic algorithms; these algorithms do not guarantee the optimum value, but by randomly choosing sufficiently many "witnesses" the probability of error may be made as small as we like.
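
A sketch of a genetic algorithm working directly on a graph representation, in the spirit described above: each individual is a spanning tree (a set of edges) of the network rather than a bit string, crossover builds a child tree from the union of the parents' edges, and mutation exchanges a single edge. The random cost matrix, fitness, and parameters are illustrative assumptions, not the paper's network problem.

import random

N = 12
rng = random.Random(1)
COST = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        COST[i][j] = COST[j][i] = rng.uniform(1, 10)
ALL_EDGES = [(i, j) for i in range(N) for j in range(i + 1, N)]

def spanning_tree_from(preferred, fallback=ALL_EDGES):
    """Kruskal-style construction: try the preferred edges first (in random order),
    then fall back to arbitrary graph edges until a spanning tree is complete."""
    parent = list(range(N))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = set()
    pref, rest = list(preferred), list(fallback)
    rng.shuffle(pref)
    rng.shuffle(rest)
    for (i, j) in pref + rest:
        ri, rj = find(i), find(j)
        if ri != rj:                      # adding this edge does not create a cycle
            parent[ri] = rj
            tree.add((i, j))
            if len(tree) == N - 1:
                break
    return tree

def fitness(tree):
    return sum(COST[i][j] for (i, j) in tree)

def crossover(a, b):
    return spanning_tree_from(a | b)      # the union of two spanning trees is connected

def mutate(tree):
    removed = rng.choice(sorted(tree))    # drop one edge, reconnect with any graph edge
    return spanning_tree_from(tree - {removed})

def ga_network(pop_size=30, generations=100):
    pop = [spanning_tree_from([]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = ga_network()
    print("tree cost:", round(fitness(best), 2), "edges:", sorted(best))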


01 May 1991
TL;DR: A 2-phase optimization network is proposed which can obtain both the exact solution (in contrast to the approximate solution given by Kennedy and Chua's networks) and the corresponding Lagrange multipliers associated with each constraint.
Abstract: Artificial neural networks (ANNs) for optimization are analyzed from the viewpoint of optimization theory. A unifying optimization network theory for linear programming, quadratic programming, convex programming, and nonlinear programming is derived. A 2-phase optimization network is proposed which can obtain both the exact solution (in contrast to the approximate solution given by Kennedy and Chua's networks) and the corresponding Lagrange multipliers associated with each constraint. The quality of the solutions obtained by the optimization ANNs is quantified through simulation. The applicability of the optimization ANNs for solving real-world problems is demonstrated with examples of the economic power dispatching problem and the optimal power flow problem. It is shown that the mapping technique of the optimization ANNs is simple and that they are able to handle various kinds of constraint sets. Furthermore, it is demonstrated that the optimization ANNs attain a better objective function value. Overall, this work lays a solid groundwork for optimization ANNs in both theoretical and practical aspects.
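
As a rough, generic illustration of the optimization-network idea, and not the authors' two-phase network, the sketch below discretizes gradient descent-ascent on the Lagrangian of a small equality-constrained quadratic program; the dynamics settle at both the exact solution and its Lagrange multiplier. The example problem and step size are assumptions.

def lagrangian_dynamics(eta=0.05, steps=4000):
    # minimize (x1 - 1)^2 + (x2 - 2)^2  subject to  x1 + x2 = 1
    x1, x2, lam = 0.0, 0.0, 0.0
    for _ in range(steps):
        g1 = 2 * (x1 - 1) + lam           # dL/dx1
        g2 = 2 * (x2 - 2) + lam           # dL/dx2
        c = x1 + x2 - 1                   # constraint residual = dL/dlambda
        x1, x2 = x1 - eta * g1, x2 - eta * g2    # descend in the primal variables
        lam = lam + eta * c                      # ascend in the multiplier
    return (x1, x2), lam

if __name__ == "__main__":
    x, lam = lagrangian_dynamics()
    # Expected equilibrium: x = (0, 1), lambda = 2.
    print("x:", tuple(round(v, 4) for v in x), "lambda:", round(lam, 4))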


Proceedings ArticleDOI
10 Nov 1991
TL;DR: It is shown that MULT* is an admissible algorithm and it has many of the important properties of A*.
Abstract: The problem of optimization in a multiple-cost search space by combining admissible heuristic estimates for the different cost parameters is investigated. The authors propose an algorithm, MULT*, for solving trade-off optimization problems for which good admissible heuristics are available for each of the associated cost parameters. It is shown that MULT* is an admissible algorithm and that it has many of the important properties of A*. Conditions are also given under which it is possible to prune paths in the search graph. A method is also given to relax admissibility of the heuristics to obtain a more efficient version of MULT* with a bounded decrease in solution quality.
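
MULT* itself is not reproduced here, but the following generic sketch illustrates one well-known fact behind combining admissible estimates: for a fixed linear trade-off of the cost parameters, the same linear combination of per-parameter admissible heuristics is admissible for the combined cost, so ordinary A* applies. The small two-cost graph and the weights are illustrative assumptions.

import heapq

# Directed edges: node -> list of (neighbor, time, toll).
GRAPH = {
    "A": [("B", 2, 0), ("C", 1, 3)],
    "B": [("D", 2, 0), ("C", 1, 1)],
    "C": [("D", 3, 0)],
    "D": [],
}

def exact_cost_to_goal(graph, goal, which):
    """Dijkstra on reversed edges for one criterion; the exact values are admissible."""
    rev = {n: [] for n in graph}
    for u, edges in graph.items():
        for v, time, toll in edges:
            rev[v].append((u, time if which == "time" else toll))
    dist = {n: float("inf") for n in graph}
    dist[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, c in rev[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return dist

def astar_combined(graph, start, goal, w_time, w_toll):
    h_time = exact_cost_to_goal(graph, goal, "time")
    h_toll = exact_cost_to_goal(graph, goal, "toll")
    h = lambda n: w_time * h_time[n] + w_toll * h_toll[n]    # combined admissible heuristic
    heap = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while heap:
        f, g, node, path = heapq.heappop(heap)
        if node == goal:
            return path, g
        for v, time, toll in graph[node]:
            g2 = g + w_time * time + w_toll * toll
            if g2 < best_g.get(v, float("inf")):
                best_g[v] = g2
                heapq.heappush(heap, (g2 + h(v), g2, v, path + [v]))
    return None, float("inf")

if __name__ == "__main__":
    print(astar_combined(GRAPH, "A", "D", w_time=1.0, w_toll=0.5))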

Proceedings ArticleDOI
28 Apr 1991
TL;DR: The proposed algorithm, PBDA*, is a massively parallel search algorithm based on the idea of staged search; its execution time is directly proportional to the depth of search, and solution quality is scalable with the number of processors.
Abstract: Most admissible search algorithms fail to solve real-life problems because of their exponential time and storage requirements. Therefore, to quickly obtain near-optimal solutions, the use of approximate algorithms and inadmissible heuristics is of practical interest. The use of parallel and distributed algorithms [1, 6, 8, 11] further reduces search complexity. In this paper we present empirical results on a massively parallel search algorithm using a Connection Machine CM-2. Our algorithm, PBDA*, is based on the idea of staged search [9, 10]. Its execution time is directly proportional to the depth of search, and solution quality is scalable with the number of processors. We tested it on the 15-puzzle problem using both admissible and inadmissible heuristics. The best results gave an average relative error of 1.66% and 66% optimal solutions.

Proceedings ArticleDOI
13 Oct 1991
TL;DR: In this paper, a deterministic approach to simulated annealing for constrained optimization of continuous variables is proposed, where the constrained region is dynamically divided into simplices, based on previous search points, using multidimensional Delaunay triangulations.
Abstract: The authors propose a deterministic approach to simulated annealing for constrained optimization of continuous variables. The constrained region is dynamically divided into simplices, based on previous search points, using multidimensional Delaunay triangulations. For each simplex, the quotient of the exponential of the local-mean-function value and its hypervolume is calculated. The simplex with the minimum quantity described is selected to contain the next search point. This locates a region in the search space whose observed density of search points is the most inconsistent with the ideal density. The next search point is selected within this simplex such that the new simplices formed by the addition of the new search point each have an equal value of the quotient quantity described. The search is forced in a deterministic manner to generate the desired density of simulated annealing. The new method was applied to a standard set of test functions and the results are given.

DissertationDOI
01 Jan 1991
TL;DR: Using Genetic Algorithms to Solve Combinatorial Optimization Problems and how they can be used to improve the quality of human-computer interaction.
Abstract of the thesis: Using Genetic Algorithms to Solve Combinatorial Optimization Problems.
