
Showing papers on "Metaheuristic published in 1989"


Journal ArticleDOI
TL;DR: Tabu Search is a general heuristic procedure for global optimization; based on simple ideas, it has been extremely efficient at obtaining near-optimal solutions to many types of difficult combinatorial optimization problems.
Abstract: Tabu Search is a general heuristic procedure for global optimization. Based on simple ideas, it has been extremely efficient at obtaining near-optimal solutions to many types of difficult combinatorial optimization problems. The principles of Tabu Search are described and illustrations are given. An example of a problem type where the use of Tabu Search has drastically cut down the computational effort is presented; it consists of the learning process of an associative memory represented by a neural network.
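The simple ideas behind the method can be sketched generically. The toy below is my own illustration, not the paper's neural-network application, and all names and parameter values are assumptions: single-bit-flip moves over a bit string, a short-term tabu tenure, and an aspiration criterion that admits a tabu move when it improves on the best solution found so far.

```python
import random

def tabu_search(f, n_bits=8, tenure=5, iters=200, seed=0):
    """Minimize f over bit strings with single-bit-flip moves and a tabu list."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], f(x)
    tabu = {}  # bit index -> last iteration at which flipping it is still forbidden
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            v = f(y)
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get(i, -1) < it or v < best_val:
                candidates.append((v, i, y))
        v, i, x = min(candidates)  # best admissible neighbour, even if worse
        tabu[i] = it + tenure      # forbid undoing this flip for `tenure` steps
        if v < best_val:
            best, best_val = x[:], v
    return best, best_val

# Toy objective: number of ones (global optimum: the all-zero string)
sol, val = tabu_search(lambda b: sum(b))
print(val)  # prints 0
```

Accepting the best admissible neighbour even when it worsens the objective is what lets the search climb out of local optima, while the tabu list stops it from immediately undoing the move.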

195 citations


Journal ArticleDOI
Richard S. Bassein

155 citations


Journal ArticleDOI
TL;DR: In this paper, a unified approach to various problems of structural optimization is presented, based on a combination of mathematical models of different complexity, which describe the behaviour of a designed structure and are connected with the sequential approximation of design problem constraints and/or an objective function.
Abstract: A unified approach to various problems of structural optimization is presented. It is based on a combination of mathematical models of different complexity. The models describe the behaviour of a designed structure. From the computational point of view, it is connected with the sequential approximation of design problem constraints and/or an objective function. In each step, a subregion of the initial search region in the space of design variables is chosen. In this subregion, various points (designs) are selected, for which response analyses are carried out using a numerical method (mostly FEM). Using the least-squares method, analytical expressions are formulated, which then replace the initial problem functions. They are used as functions of a particular mathematical programming problem. The size and location of sequential subregions may be changed according to the result of the search. The choice of one particular form of the analytical expressions is described. The application of the approach is shown by means of test examples and comparison with other optimization techniques is presented.

122 citations


Journal ArticleDOI
TL;DR: This introductory paper gives a mathematical description of the simulated annealing algorithm and discusses its behaviour from both a theoretical and a practical point of view.
Abstract: Simulated annealing is a general approach for approximately solving large combinatorial optimization problems. The algorithm is based on an intriguing combination of ideas from at first sight completely unrelated fields of science, viz. combinatorial optimization and statistical physics. On the one hand the algorithm can be viewed as an analogue of an algorithm used in statistical physics for computer simulation of the annealing of a solid to its minimum-energy state, on the other hand it can be considered as a generalization of the well-known iterative improvement approach to combinatorial optimization problems. In this introductory paper we give a mathematical description of the simulated annealing algorithm and discuss its behaviour from both a theoretical and a practical point of view. The latter is illustrated by applying the algorithm to the travelling salesman problem. This paper was written to familiarize the readers of Statistica Neerlandica with simulated annealing. It is a summary of papers written earlier by the authors and does not contain any new material.
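A minimal sketch of the procedure on the travelling salesman example follows. This is my own toy implementation with arbitrary temperature and cooling-rate choices, not taken from the paper: 2-opt segment reversals are proposed, and a worse tour is accepted with probability exp(-Δ/T) under geometric cooling.

```python
import math, random

def anneal_tsp(points, t0=10.0, cooling=0.995, iters=5000, seed=1):
    """Simulated annealing for the TSP with 2-opt (segment reversal) moves."""
    rng = random.Random(seed)
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    length = lambda t: sum(dist(t[i], t[(i + 1) % n]) for i in range(n))
    tour = list(range(n))
    cur = length(tour)
    best, best_len = tour[:], cur
    T = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        new = length(cand)
        # always accept improvements; accept a worse tour with prob exp(-delta/T)
        if new < cur or rng.random() < math.exp((cur - new) / T):
            tour, cur = cand, new
            if cur < best_len:
                best, best_len = tour[:], cur
        T *= cooling  # geometric cooling schedule
    return best, best_len

# Four corners of the unit square, listed so the initial tour crosses itself
tour, best_len = anneal_tsp([(0, 0), (1, 1), (0, 1), (1, 0)])
print(best_len)
```

On this toy instance the uncrossed perimeter tour has length 4, which a single improving 2-opt reversal reaches from the crossing start.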

116 citations


DOI
01 Jan 1989
TL;DR: It will be shown how proper problem representations, effective modelling schemes and solution strategies can play a crucial role in the successful application of these techniques in process synthesis.
Abstract: This paper will attempt to show that one of the trends in the area of process synthesis is its gradual evolution towards the mathematical programming approach. This is in part due to the fundamental understanding that has been gained on the nature of the synthesis problems over the last twenty years. However, another major part has to do with the development of new and more powerful mathematical programming algorithms. In particular, the development of new MINLP algorithms, coupled with advances in computers and software, is opening promising possibilities to rigorously model, optimize and automate synthesis problems. A general overview of the MINLP approach and algorithms will be presented in this paper with the aim of gaining a basic understanding of these techniques. Strengths and weaknesses will be discussed, as well as difficulties and challenges that still need to be overcome. In particular, it will be shown how proper problem representations, effective modelling schemes and solution strategies can play a crucial role in the successful application of these techniques. The application of MINLP algorithms in process synthesis will be illustrated with several examples.

76 citations


Journal ArticleDOI
TL;DR: All the practical difficulties usually encountered in the discrete or mixed‐discrete optimization are believed to be overcome by the solution code presented herein.
Abstract: This paper presents a solution code for mixed‐discrete variable optimization. An optimization algorithm comprising two different techniques, discrete steepest descent and rotating coordinate directions, originally proposed for all discrete optimization, is extended. The extensions have been done in such a way that continuous variables are treated as discrete variables while keeping their continuous nature in the analysis. Several example problems have been solved, three of which are reported. Each example problem is different in nature and in degree of nonlinearity from others. All three example problems reached or almost reached the global optimum. All the practical difficulties (resolution valley, overshooting the optimum, etc.) usually encountered in the discrete or mixed‐discrete optimization are believed to be overcome by the solution code presented herein. The present solution code can effectively be applied to a variety of problems. Details of each technique with the necessary extension and modific...

60 citations


Journal ArticleDOI
TL;DR: This paper deals with a combinatorial problem arising from studies in distributed and parallel architectures, motivated by the vulnerability of the simple loop topology, which stops working after the failure of one node or arc (directed case) or two nodes or edges (undirected case).
Abstract: This paper deals with a combinatorial problem arising from studies in distributed and parallel architectures. The interconnection network is the heart of parallel computers, and several topologies have been proposed (see [1], [2], [16]). One of the most used is the loop topology, due to its simplicity [13]; in this case, the network consists of a loop (directed or not), which is modeled by a cycle (directed or not). But the simple loop topology has some disadvantages, in particular, its high vulnerability and low performance. Indeed, the network does not work after the failure of one node or arc (in the directed case) or two nodes or edges (in the undirected case). Furthermore, the diameter of the loop network, which corresponds to

53 citations


01 Jan 1989
TL;DR: Tabu search is a heuristic search strategy which is used to guide the repeated application of other simple heuristics in the search for optimal or near optimal solutions to combinatorial optimization problems.
Abstract: Tabu search is a heuristic search strategy used to guide the repeated application of other simple heuristics in the search for optimal or near-optimal solutions to combinatorial optimization problems. A key element of tabu search is the tabu list, which is used to prevent cycling. The most recent actions of the search are recorded on the tabu list, which is used to prevent the reversal of those actions unless they produce better results. This strategy of flexible restrictions, combined with memory functions of various lengths, allows tabu search to efficiently identify good solutions. This paper describes the application of tabu search to the symmetric traveling salesman problem (TSP). The values of desirable parameter settings are presented along with the methods used to identify them. A comparison based on solution quality and computational efficiency is made between tabu search and other general heuristic search strategies, such as simulated annealing and genetic algorithms.

33 citations


Journal ArticleDOI
01 Mar 1989
TL;DR: This paper shows that an implementation of Karmarkar's interior-point LP algorithm with a newly developed stopping criterion solves optimization problems of large multi-reservoir operations more efficiently than the simplex method.
Abstract: Optimization of multi-reservoir system operations is typically a very large scale optimization problem. Three types of optimization problems are solved using linear programming (LP): (i) deterministic optimization for multiple periods involving fine stage intervals, for example, from an hour to a week; (ii) implicit stochastic optimization using multiple years of inflow data; and (iii) explicit stochastic optimization using probability distributions of inflow data. Until recently, the revised simplex method has been the most efficient solution method available for solving large scale LP problems. In this paper, we show that an implementation of Karmarkar's interior-point LP algorithm with a newly developed stopping criterion solves optimization problems of large multi-reservoir operations more efficiently than the simplex method. For example, using a MicroVAX II minicomputer, a 40-year, monthly-stage, two-reservoir system optimization problem is solved 7.8 times faster than the advanced simplex code in MINOS 5.0. The advantage of this method is expected to grow as the size of the problem increases from two reservoirs to many. This paper presents the details of the implementation and testing; in addition, some other features of Karmarkar's algorithm which make it a valuable optimization tool are illuminated.

29 citations


Journal ArticleDOI
TL;DR: An analytical approach to the optimization of the topological configuration of a set of functional blocks is presented, expressing the overlapping of blocks as an inequality constraint in a formulation based on nonlinear optimization, so that the blocks can move freely without any overlap during the optimization process.
Abstract: The paper presents an analytical approach to the optimization of the topological configuration of a set of functional blocks. By expressing the overlapping of blocks as an inequality constraint in a formulation based on nonlinear optimization, the blocks can move freely without any overlap during the optimization process. Some special characteristics of this problem, which prohibit the direct application of standard search methods for optimization, have been investigated. On the basis of this analysis, a search algorithm is developed. The proposed method allows fully automated design and exhibits computationally efficient convergence to local optima from random initial designs.

28 citations


Journal ArticleDOI
TL;DR: A nonlinear neural framework, called the generalized Hopfield network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations and offers a straightforward model for the parallelization of the optimization computations, significantly extending the practical limits of problems that can be formulated as an optimization problem.
Abstract: A nonlinear neural framework, called the generalized Hopfield network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the augmented Lagrangian, generalized reduced gradient, and successive quadratic programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as an optimization problem and that can gain from the introduction of nonlinearities in their structure (e.g., pattern recognition, supervised learning, and design of content-addressable memories).

Proceedings ArticleDOI
13 Dec 1989
TL;DR: A nonparametric multidimensional search method that is based on ranking the functional evaluations is proposed for constrained global optimization and yields favorable results on three standard 2D and 3D test functions.
Abstract: A nonparametric multidimensional search method that is based on ranking the functional evaluations is proposed for constrained global optimization. The method yields favorable results on three standard 2D and 3D test functions. A fundamental property of the search method is that it gives equivalent results for any monotonic transformation of the objective function.
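That invariance property follows because a rank-based rule consults only the ordering of objective values, never their magnitudes. A tiny self-contained check (names are illustrative, not from the paper):

```python
import math

def rank_minimizer(f, candidates):
    """Select the candidate whose objective value has the lowest rank."""
    return min(candidates, key=f)

pts = [(-1.0, 2.0), (0.5, 0.5), (2.0, -1.0)]
f = lambda p: p[0] ** 2 + p[1] ** 2   # original objective
g = lambda p: math.exp(f(p))          # strictly increasing transform of f

# Ranks are preserved under any monotonic transformation, so the choices agree
print(rank_minimizer(f, pts) == rank_minimizer(g, pts))  # prints True
```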

Proceedings ArticleDOI
22 Nov 1989
TL;DR: The problem of efficient optimization of the K-d (K-dimensional) tree for fast nearest-neighbor search under a bucket-Voronoi intersection framework is addressed and a new optimization criterion is proposed which is more efficient than the maximum product criterion used recently.
Abstract: The problem of efficient optimization of the K-d (K-dimensional) tree for fast nearest-neighbor search under a bucket-Voronoi intersection framework is addressed. A new optimization criterion is proposed which is based on a geometric interpretation of the optimization problem using a direct characterization of the number of Voronoi intersections in the lower and upper regions of a partitioned node as a function of the partition location. The proposed optimization criterion is more efficient than the maximum product criterion (MPC) used recently. The authors give a clear geometric interpretation of the MPC and explain the reasons for its inefficiency. The proposed optimization is used for fast vector quantization encoding of speech and is empirically observed to achieve constant search complexity for O(log N) tree depths.
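For orientation, the data structure being optimized is easy to sketch. Below is a plain median-split K-d tree with branch-and-bound nearest-neighbour search; the paper's contribution, an optimized choice of partition location under the bucket-Voronoi intersection framework, is not implemented here, and all names are my own.

```python
import math

def build_kdtree(points, depth=0):
    """Textbook K-d tree: split at the median along axes in rotation.
    (The paper optimizes the split location; the median is the plain default.)"""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Branch-and-bound descent: search the near side first, then the far
    side only if the splitting plane is closer than the best found so far."""
    if node is None:
        return best
    d = math.dist(node["point"], query)
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    delta = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if delta < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(delta) < best[0]:  # far side could still hold something closer
        best = nearest(far, query, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
print(nearest(tree, (9, 2))[1])  # prints (8, 1)
```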

Journal ArticleDOI
TL;DR: Comparisons have been made to the widely used non-linear optimization methods, which shows that the algorithm used here is very efficient in computation and is especially suitable for the high order vehicle model and microcomputer environment.

Proceedings Article
01 Jan 1989
TL;DR: A nonlinear neural framework, called the Generalized Hopfield network, is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations and offers a straightforward model for the parallelization of the optimization computations, significantly extending the practical limits of problems that can be formulated as an optimization problem.
Abstract: A nonlinear neural framework, called the Generalized Hopfield network, is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as an optimization problem and which can gain from the introduction of nonlinearities in their structure (e.g., pattern recognition, supervised learning, design of content-addressable memories).

01 Jan 1989
TL;DR: Results are obtained that confirm the viability and efficacy of the reduced SQP implementation for efficient solution of large, difficult nonlinear programs.
Abstract: The development and implementation of the Range and Null space Decomposition (RND) strategy for large-scale problems is described, with emphasis on the optimization of engineering systems. The RND technique, as detailed in Vasantharajan and Biegler (1988), uses nonorthonormal, gradient-based projections for the Jacobian. However, this implementation is dense and does not take advantage of system sparsity. Here we extend this algorithm to incorporate general purpose sparse matrix techniques. Also, problems like inconsistent linearizations and infeasible Quadratic Programs (QPs), which are generally associated with QP-based methods, compromise the robustness of this method and need to be considered. Finally, systematic ways of generating a nonsingular basis for general nonlinear programs must be developed if this strategy is to be adapted to solve large, sparse problems efficiently. To deal with these problems, a two-phase LP-based procedure is coupled to the RND algorithm. This strategy also serves to partition the variables into decisions and dependents, thereby generating a nonsingular basis. Any redundancies/degeneracies in the constraints are also detected and processed separately. The entire reduced SQP implementation is then interfaced with GAMS (Brooke et al. (1988)), a front end for representing and solving process models. Finally, a thorough comparison of the RND-based reduced SQP strategy with MINOS (Murtagh and Saunders (1978)) is carried out on a set of NLPs and process design problems. The process problems include the optimization of the operation of distillation columns. These problems warrant special mention, as they have been uniquely conceived and implemented in a novel equation-oriented manner, thus exploiting the full potential of the GAMS architecture.
Detailed discussion of the formulation and results is included; the results confirm the viability and efficacy of the reduced SQP implementation for efficient solution of large, difficult nonlinear programs.


Proceedings ArticleDOI
P. Julien-Laferriere, F. H. Lee, G.S. Stiles, A. Raghuram, T.W. Morgan
22 Mar 1989
TL;DR: An evaluation is made of three stochastic algorithms for the optimization of distributed database networks: a simple random search, an iterative random search, and simulated annealing.
Abstract: An evaluation is made of three stochastic algorithms for the optimization of distributed database networks: a simple random search, an iterative random search, and simulated annealing. The primary goal of the optimization is to minimize the operating cost of the network while maintaining acceptable response times. The inputs to the problem include the number and locations of the network nodes, the number of distinct databases, the query and update traffic from each node to each database, and the various costs associated with installing and maintaining the network. The optimization routines must select the location and speed of communication links, the allocation of databases to nodes, and the paths for queries and updates. A maximum allowable time for a response to a query or update is also specified and serves as a constraint in the optimization process.
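The first two of those strategies are simple enough to sketch generically. The toy below substitutes a smooth test objective for the paper's network-cost model, which is not reproduced here; function names, bounds and step sizes are illustrative assumptions.

```python
import random

def simple_random_search(f, dim, iters, rng):
    """Memoryless: sample uniformly over the box, keep the best point seen."""
    best = min((tuple(rng.uniform(-5, 5) for _ in range(dim))
                for _ in range(iters)), key=f)
    return best, f(best)

def iterative_random_search(f, dim, iters, rng, step=0.5):
    """Local: perturb the incumbent, move only when the perturbation improves it."""
    x = tuple(rng.uniform(-5, 5) for _ in range(dim))
    fx = f(x)
    for _ in range(iters):
        y = tuple(xi + rng.gauss(0, step) for xi in x)
        if (fy := f(y)) < fx:
            x, fx = y, fy
    return x, fx

sphere = lambda p: sum(v * v for v in p)   # toy stand-in for the network cost
rng = random.Random(0)
_, f_simple = simple_random_search(sphere, 3, 2000, rng)
_, f_iter = iterative_random_search(sphere, 3, 2000, rng)
print(f"simple: {f_simple:.3f}  iterative: {f_iter:.3f}")
```

On smooth objectives the iterative variant exploits locality that memoryless sampling ignores, which is why it and simulated annealing are the natural competitors in the paper's comparison.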




Book ChapterDOI
01 Jan 1989
TL;DR: Some details of a user interface of the mechanical structural optimization system MBB-LAGRANGE are presented, a system that was implemented to solve optimization problems described by finite element structures.
Abstract: Knowledge-based integrated software systems in structural optimization can be used for modelling a problem, for selecting a suitable optimization algorithm, for providing problem data and for interpreting results and failures for a decision maker. Some details of a user interface of the mechanical structural optimization system MBB-LAGRANGE are presented, a system that was implemented to solve optimization problems described by finite element structures. The interface is self-learning and uses rules, e.g. to select a suitable numerical method or to interpret failures. A data base is used for processing model information, algorithms, problem data and results. More details are found in Schittkowski and Zotemantel (1988).

Proceedings ArticleDOI
17 May 1989
TL;DR: Recent results are presented on the performance of the algorithm of simulated annealing for optimization in reaching the global minimum of combinatorial optimization problems.
Abstract: Several important problems in diverse application areas, such as image restoration, code design, and VLSI design, contain at their core an optimization problem whose solution crucially determines the performance of the resulting engineering system. Standard descent algorithms for such optimization problems, however, typically get trapped in local minima and fail to reach solutions at or near the global minimum. Motivated by the problem of determining the global minima of optimization problems, the algorithm of simulated annealing has been proposed. Here we present recent results on the performance of this algorithm in reaching the global minimum of combinatorial optimization problems.


Journal ArticleDOI
TL;DR: In this article, direct search methods were developed for dynamic optimization problems: finding and following the minimum of a unimodal objective function whose drift parameters are unknown. Methods for identification of the unknown parameters and corresponding optimization algorithms were developed.