
Showing papers on "Metaheuristic published in 1994"


Journal ArticleDOI
TL;DR: Draws the analogy between genetic algorithms and search processes in nature, describes the genetic algorithm that Holland introduced in 1975 and the workings of GAs, and surveys advances in GA theory and practice.
Abstract: Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study; instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.
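As a concrete reference point for the GA workings surveyed here, a minimal generational GA can be sketched as follows. This is an illustrative toy implementation on a bit-string problem, not code from the survey; all operators and parameter values are assumptions.

```python
import random

# Minimal generational GA on a toy bit-string problem (maximize the
# number of 1s). Operators and parameters are illustrative choices.

def fitness(bits):
    return sum(bits)

def tournament(pop, rng, k=2):
    # Binary tournament: return the fitter of k random individuals.
    return max(rng.sample(pop, k), key=fitness)

def crossover(a, b, rng):
    # One-point crossover.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rng, rate=0.01):
    # Flip each bit independently with a small probability.
    return [1 - b if rng.random() < rate else b for b in bits]

def run_ga(n_bits=32, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Full generational replacement: select, recombine, mutate.
        pop = [mutate(crossover(tournament(pop, rng), tournament(pop, rng), rng), rng)
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = run_ga()
```

On this easy landscape the population converges close to the all-ones optimum; the interesting behavior surveyed in the paper arises on harder, deceptive landscapes.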

2,095 citations


Journal ArticleDOI
01 Apr 1994
TL;DR: The proposed search algorithm is realized by GAs which utilize a penalty function in the objective function to account for violation, based on systematic multi-stage assignments of weights in the penalty method as opposed to single-stage assignments in sequential unconstrained minimization.
Abstract: This paper presents an application of genetic algorithms (GAs) to nonlinear constrained optimization. GAs are general purpose optimization algorithms which apply the rules of natural genetics to explore a given search space. When GAs are applied to nonlinear constrained problems, constraint handling becomes an important issue. The proposed search algorithm is realized by GAs which utilize a penalty function in the objective function to account for violation. This extension is based on systematic multi-stage assignments of weights in the penalty method as opposed to single-stage assignments in sequential unconstrained minimization. The experimental results are satisfactory and agree well with those of the gradient type methods.
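The multi-stage penalty idea can be illustrated with a small sketch. This is my own toy illustration, not the authors' code: the problem, the weight schedule, and the crude grid search standing in for the GA inner loop are all assumptions.

```python
# Sketch of a multi-stage penalty method: the penalty weight is raised
# stage by stage instead of fixed in a single assignment.

def penalized(f, constraints, weight):
    # constraints: list of functions g with g(x) <= 0 meaning feasible.
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + weight * violation
    return fp

# Toy problem: minimize (x - 2)^2 subject to x <= 1.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0          # g(x) <= 0  <=>  x <= 1

best = 0.0
for weight in (1.0, 10.0, 100.0):   # multi-stage weight assignments
    fp = penalized(f, [g], weight)
    # Stand-in for the GA inner search: a crude grid search per stage.
    best = min((i / 1000.0 for i in range(-2000, 3001)), key=fp)
```

As the weight grows, the penalized minimizer moves toward the constrained optimum at x = 1; in the paper a GA, not a grid search, performs each stage's minimization.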

758 citations


Book ChapterDOI
09 Oct 1994
TL;DR: The experiments show that 2-parent recombination is inferior on the classical De Jong functions; for the other problems the results are not conclusive: in some cases 2 parents are optimal, while in others more parents are better.
Abstract: We investigate genetic algorithms where more than two parents are involved in the recombination operation. We introduce two multi-parent recombination mechanisms: gene scanning and diagonal crossover, which generalize uniform and n-point crossover, respectively. In this paper we concentrate on the gene scanning mechanism and we perform extensive tests to observe the effect of different numbers of parents on the performance of the GA. We consider different problem types, such as numerical optimization, constrained optimization (TSP) and constraint satisfaction (graph coloring). The experiments show that 2-parent recombination is inferior on the classical De Jong functions. For the other problems the results are not conclusive; in some cases 2 parents are optimal, while in some others more parents are better.
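Uniform gene scanning with an arbitrary number of parents can be sketched as follows. This is an illustrative implementation under the assumption that each gene of the child is drawn from a uniformly chosen parent; other scanning variants would change only the rule for choosing the donor parent.

```python
import random

# Uniform gene scanning: each gene of the child is copied from a
# randomly chosen parent, for any number of parents. With exactly two
# parents this reduces to ordinary uniform crossover.

def gene_scan(parents, rng):
    length = len(parents[0])
    return [rng.choice(parents)[i] for i in range(length)]

rng = random.Random(1)
parents = [[p] * 8 for p in range(4)]   # four parents, easy to trace
child = gene_scan(parents, rng)
```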

397 citations


Journal ArticleDOI
TL;DR: It is shown how tabu search can be coupled with directional search and scatter search approaches to solve nonlinear optimization problems from both continuous and discrete settings, and suggests ways to exploit potential links between scatter search and genetic algorithms.

211 citations


Journal ArticleDOI
TL;DR: An efficient algorithm for getting almost optimal solutions of large traveling salesman problems is proposed, which uses the intermediate- and long-term memory concepts of tabu search as well as a new kind of move.

211 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the opportunity exists to develop more advanced procedures that make fuller use of scatter search strategies and their recent extensions.
Abstract: We provide a tutorial survey of connections between genetic algorithms and scatter search that have useful implications for developing new methods for optimization problems. The links between these approaches are rooted in principles underlying mathematical relaxations, which were inherited and extended by scatter search. Hybrid methods incorporating elements of genetic algorithms and scatter search are beginning to be explored in the literature, and we demonstrate that the opportunity exists to develop more advanced procedures that make fuller use of scatter search strategies and their recent extensions.

144 citations



Proceedings ArticleDOI
08 Mar 1994
TL;DR: The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems, and the subset sum, maximum cut, and minimum tardy task problems are considered.
Abstract: The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems. In particular, the subset sum, maximum cut, and minimum tardy task problems are considered. Except for the fitness function, no problem-specific changes of the genetic algorithm are required in order to achieve results of high quality even for the problem instances of size 100 used in the paper. For constrained problems, such as the subset sum and the minimum tardy task, the constraints are taken into account by incorporating a graded penalty term into the fitness function. Even for large instances of these highly multimodal optimization problems, an iterated application of the genetic algorithm is observed to find the global optimum within a number of runs. As the genetic algorithm samples only a tiny fraction of the search space, these results are quite encouraging.
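The graded penalty term for constrained problems such as subset sum can be sketched as follows. This is a hedged illustration: the exact penalty scaling used in the paper may differ, and the example instance is invented.

```python
# Graded penalty for the subset-sum problem: infeasible selections
# (sum exceeding the target) are not discarded but penalized in
# proportion to the amount of violation, so the GA can still climb
# back toward feasibility. The fitness is to be maximized.

def subset_sum_fitness(selection, weights, target):
    total = sum(w for w, s in zip(weights, selection) if s)
    if total <= target:
        return total                  # feasible: maximize the packed sum
    return target - (total - target)  # infeasible: graded penalty

weights = [5, 8, 3, 11]
target = 16
feasible = subset_sum_fitness([1, 1, 1, 0], weights, target)    # sum 16
infeasible = subset_sum_fitness([1, 1, 0, 1], weights, target)  # sum 24
```

The key property is that the penalty grades smoothly with the size of the violation instead of assigning all infeasible solutions the same fitness.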

121 citations


Proceedings ArticleDOI
20 Jun 1994
TL;DR: This paper presents a radically different and relatively new functional optimization methodology known as genetic algorithm (GA) optimization that overcomes the problems of the traditional techniques and discusses how GA optimization is applied to 1D and 2D antenna design.
Abstract: Synthesis of antenna patterns employing iterative optimization techniques has been studied by many authors. However, successful application of these approaches to pattern synthesis has usually been limited to relatively simple arrays or has required careful, intelligent selection of the optimization starting points dictated by the nature of the optimization techniques used and the functions being optimized. This is because conventional functional optimization techniques are either based on greedy, local optimization methods such as gradient methods or consist of random walk solution space searches. In either case, these conventional techniques are often poorly suited to the task of arbitrary pattern synthesis in 1D and 2D antenna arrays due to the high dimensional, multimodal functional domains involved. In addition, traditional optimization techniques usually require the object function to be, at the very least, continuous and, in many cases to be differentiable, placing severe limitations on the form and content of the object function. This paper presents a radically different and relatively new functional optimization methodology known as genetic algorithm (GA) optimization that overcomes the above-mentioned problems of the traditional techniques and discusses how GA optimization is applied to 1D and 2D antenna design. >

104 citations


Journal ArticleDOI
TL;DR: In this article, the authors present linear network optimization: Algorithms and Codes, which is a generalization of Linear Network Optimization (LNO) and Linear Network Programming (LNP).
Abstract: (1994). Linear Network Optimization: Algorithms and Codes. Journal of the Operational Research Society: Vol. 45, No. 4, pp. 483-483.

104 citations


Book
01 Jan 1994
TL;DR: General Optimality Conditions via a Separation Scheme F. Giannessi, G. Di Pillo, V.G. Evtushenko, M. Potapov, and M.W. Dixon.
Abstract: General Optimality Conditions via a Separation Scheme F. Giannessi. Linear Equations in Optimization C.G. Broyden. Generalized and Sparse Least Squares Problems A. Bjoerck. Algorithms for Solving Nonlinear Systems of Equations J.M. Martinez. An Overview of Unconstrained Optimization R. Fletcher. Nonquadratic Model Methods in Unconstrained Optimization Naiyang Deng, Zhengfeng Li. Algorithms for General Constrained Nonlinear Optimization M.C. Bartholomew-Biggs. Exact Penalty Methods G. Di Pillo. Stable Barrier-projection and Barrier-Newton Methods for Linear and Nonlinear Programming Y.G. Evtushenko, V.G. Zhadan. Large-Scale Nonlinear Constrained Optimization - a Current Survey A.R. Conn, N. Gould, P.L. Toint. ABS Methods for Nonlinear Optimization E. Spedicato, Zunquan Xia. A Condensed Introduction to Bundle Methods in Nonsmooth Optimization C. Lemarechal, J. Zowe. Computational Methods for Linear Programming D.F. Shanno. Infeasible Interior Point Methods for Solving Linear Programs J. Stoer. Algorithms for Linear Complementarity Problems J.J. Judice. A Homework Exercise - the "Big M" Problem R.W.H. Sargent. Deterministic Global Optimization Y.G. Evtushenko, M.A. Potapov. On Automatic Differentiation and Continuous Optimization L.C.W. Dixon. Neural Networks and Unconstrained Optimization L.C.W. Dixon. Parallel Nonlinear Optimization - Limitations, Challenges and Opportunities R.B. Schnabel.

Journal Article
TL;DR: The paper discusses three classes of problems associated with multicriteria optimization in engineering design and focuses on the search when the ideal solution, or solutions, are known, but they are infeasible and the solutions sought should come as close as possible.
Abstract: An engineering design optimization problem can be formulated as a search of the solution space for which the relationships among various groups of attributes are given. The search can be also conducted for a multicriteria optimization. The paper discusses three classes of problems associated with multicriteria optimization in engineering design. First, multicriteria optimization methods with a single domination relationship are reviewed. Next, such methods with many domination functions are discussed. The last part of the paper focuses on the search when the ideal solution, or solutions, are known, but they are infeasible and the solutions sought should come as close as possible.

Book ChapterDOI
11 Apr 1994
TL;DR: This paper presents a new approach to hybridization of genetic algorithms that involves incorporating other methods such as simulated annealing or local optimization as an ‘add-on’ extra to the basic GA strategy of selection and reproduction.
Abstract: Genetic algorithms (GAs) have proved to be a versatile and effective approach for solving combinatorial optimization problems. Nevertheless, there are many situations in which the simple GA does not perform particularly well, and various methods of hybridization have been proposed. These often involve incorporating other methods such as simulated annealing or local optimization as an ‘add-on’ extra to the basic GA strategy of selection and reproduction.

Book ChapterDOI
09 Oct 1994
TL;DR: A systematic approach to identifying hybrid methods (which combine features of both SA and GA) that exhibit performance superior to either method alone is introduced by defining a space of methods as a nondeterministic generating grammar.
Abstract: Simulated annealing and genetic algorithms represent powerful optimization methods with complementary strengths and weaknesses. Hence, there is an interest in identifying hybrid methods (which combine features of both SA and GA) that exhibit performance superior to either method alone. This paper introduces a systematic approach to identifying these hybrids by defining a space of methods as a nondeterministic generating grammar. This space includes SA, GA, previously introduced hybrids and many new methods. An empirical evaluation has been completed for 14 methods from this space applied to 9 diverse optimization problems. Results demonstrate that the space contains promising new methods. In particular, a new method that combines the recombinative power of GAs with the annealing schedule of SA is shown to be one of the best methods for all 9 optimization problems explored.
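A hybrid of the kind the paper finds promising, GA recombination paired with an SA annealing schedule, might look roughly like this. This is an illustrative sketch with made-up operators, parameters, and test function, not the paper's method.

```python
import math
import random

# Hybrid sketch: GA-style uniform recombination generates candidates,
# and an SA-style Metropolis rule with geometric cooling decides whether
# each child replaces its parent in the population.

def hybrid_minimize(f, dim=8, pop_size=20, steps=1000, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    temp = 1.0
    for _ in range(steps):
        i, j = rng.randrange(pop_size), rng.randrange(pop_size)
        # Uniform recombination of two parents plus a small mutation.
        child = [rng.choice((a, b)) + rng.gauss(0, 0.1)
                 for a, b in zip(pop[i], pop[j])]
        delta = f(child) - f(pop[i])
        # Metropolis acceptance with a cooling temperature.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            pop[i] = child
        temp *= 0.995
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = hybrid_minimize(sphere)
```

Early on, the high temperature lets recombination explore freely; as the schedule cools, acceptance becomes nearly greedy and the population settles into good regions.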

Proceedings ArticleDOI
06 Nov 1994
TL;DR: The results show the genetic algorithm can provide very good solutions, often surpassing other complex and specialized techniques.
Abstract: The paper presents a method for optimizing the design of plane and space trusses subject to a specified set of constraints. The method is based upon a search technique using genetic algorithms. Traditional structural optimization techniques consider a continuous search space, and consequently lead to unrealistic solutions because structural members are not available in continuously varying sizes. A practical method should consider only the discrete values associated with commonly available materials. On the other hand, most modern structural optimization techniques, even when they consider a discrete search space, suffer from a lack of generality, and tend to be limited to a certain kind of structure. Genetic algorithms remedy these two problems since they can deal with discrete search spaces and they are general enough to be easily extended to any kind of structure without substantial modifications. Our results show the genetic algorithm can provide very good solutions, often surpassing other complex and specialized techniques.

Journal ArticleDOI
TL;DR: In this article, a multiobjective optimization procedure is developed to address the combined problems of the synthesis of structures/controls and the actuator-location problem for the design of intelligent structures.
Abstract: A multiobjective optimization procedure is developed to address the combined problems of the synthesis of structures/controls and the actuator-location problem for the design of intelligent structures. Continuous and discrete variables are treated equally in the formulation. Multiple and conflicting design objectives such as vibration reduction, dissipated energy, power and a performance index are included by utilizing an efficient multiobjective optimization formulation. Piezoelectric materials are used as actuators in the control system. A simulated annealing algorithm is used for optimization and an approximation technique is used to reduce computational effort. A numerical example using a cantilever box beam demonstrates the utility of the optimization procedure when compared with a previous nonlinear programming technique.
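For reference, the simulated annealing engine underlying such a procedure can be sketched generically. This is a toy one-dimensional illustration with an invented objective; the paper's structural/control objective and move set are far richer.

```python
import math
import random

# Generic simulated annealing: a random-walk neighbour is accepted if it
# improves, or with Boltzmann probability exp(-delta/T) otherwise, while
# the temperature is lowered geometrically.

def simulated_annealing(f, x0, steps=2000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    temp = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # random neighbour
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest

# Multimodal toy objective with its global minimum near x = -0.3.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x)
best, fbest = simulated_annealing(f, x0=4.0)
```

The occasional uphill acceptances are what let the walk escape the many shallow local minima on the way from x = 4 to the deep basin near the origin.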



Book
03 Feb 1994
TL;DR: This work includes the development of new algorithmic features that are motivated by the molecular configuration problem but are applicable to a wider class of large scale, partially separable global optimization problems.
Abstract: Global optimization problems are computationally extensive problems that arise in many important applications. The solution of very large practical global optimization problems, which may have thousands of variables and huge numbers of local minimizers, is not yet possible. It will require efficient numerical algorithms that take advantage of the properties of the particular application, as well as efficient utilization of the fastest available computers, which will almost certainly be highly parallel machines. This paper summarizes our research efforts in this direction. First, we describe general purpose adaptive, asynchronous parallel stochastic global optimization methods that we have developed, and our computational experience with them. Second, we describe several alternative dynamic scheduling algorithms that are required to control such dynamic parallel algorithms on distributed memory multiprocessors, and compare their performance in the context of our parallel global optimization methods. Third, we discuss the application and refinement of these methods to global optimization problems arising from the structural optimization of chemical molecules, and present preliminary computational results on some problems with between 15 and 100 variables. This work includes the development of new algorithmic features that are motivated by the molecular configuration problem but are applicable to a wider class of large scale, partially separable global optimization problems.

Book ChapterDOI
09 Oct 1994
TL;DR: ENZO-M combines two successful search techniques using two different timescales: learning (gradient descent) for fine-tuning of each offspring and evolution for coarse optimization steps of the network topology.
Abstract: ENZO-M combines two successful search techniques using two different timescales: learning (gradient descent) for fine-tuning of each offspring and evolution for coarse optimization steps of the network topology. Our evolutionary algorithm is therefore a metaheuristic based on the best available local heuristic. Because each offspring is trained by fast gradient methods, the search space of our evolutionary algorithm is considerably reduced to the set of local optima.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: A new method is developed which determines the next point to evaluate by analysing the usefulness of evaluating the function at a certain position, and it is shown that this method permits one to precisely define the heuristic of the search.
Abstract: Most of the algorithms for global optimization making use of the concept of population exploit very little of the information provided by agents in the population in order to choose the next point to evaluate. We develop a new method called S.T.E.P. (Select The Easiest Point) which determines the next point to evaluate by analysing the usefulness of evaluating the function at a certain position. Moreover, we will see that this method permits one to precisely define the heuristic of the search. We also prove its convergence under certain conditions.

Journal ArticleDOI
TL;DR: This work presents an approximate algorithm for the problem of partitioning integrated combinational circuits, based on the tabu search metaheuristic, and presents several original features, such as the use of a reduced neighborhood obtained from moves involving only a subset of boundary nodes.
Abstract: The logical test of integrated VLSI circuits is one of the main phases of their design and fabrication. The pseudo-exhaustive approach for the logical test of integrated circuits consists in partitioning the original circuits to be tested into non-overlapping subcircuits with a small, bounded number of inputs, which are then exhaustively tested in parallel. In this work, we present an approximate algorithm for the problem of partitioning integrated combinational circuits, based on the tabu search metaheuristic. The proposed algorithm presents several original features, such as: the use of a reduced neighborhood, obtained from moves involving only a subset of boundary nodes; complex moves which entail several resulting moves, although the variations in the cost function are easily computable; a bi-criteria cost function combining the number of subcircuits and the number of cuts, which simultaneously adds a diversification strategy to the search; and the use of a bin-packing heuristic as a post-optimization step. The behavior of the proposed algorithm was evaluated through its application to a set of benchmark circuits. The computational results have been compared with those obtained by other algorithms in the literature, showing significant improvements. The average reduction rates have been of the order of 30% in the number of subcircuits in the partition, and of the order of 40% in the number of cuts.
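The short-term memory mechanism at the heart of tabu search can be sketched on a toy 0/1 problem. This is an illustrative implementation only: the paper's reduced neighborhood, compound moves, and bi-criteria cost function are not reproduced, and the toy cost is invented.

```python
import random
from collections import deque

# Short-term tabu search on bit vectors: single-bit "moves" become tabu
# for a fixed tenure after use, forcing the search away from recently
# visited solutions; an aspiration criterion overrides the tabu status
# of a move that would improve on the best solution found so far.

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, cbest = x[:], cost(x)
    tabu = deque(maxlen=tenure)              # recently flipped bit indices
    for _ in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] = 1 - y[i]
            cy = cost(y)
            if i not in tabu or cy < cbest:  # aspiration criterion
                candidates.append((cy, i, y))
        cy, i, y = min(candidates)           # best admissible move
        x = y
        tabu.append(i)
        if cy < cbest:
            best, cbest = x[:], cy
    return best, cbest

# Toy cost: Hamming distance to a hidden target pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
cost = lambda x: sum(a != b for a, b in zip(x, target))
best, cbest = tabu_search(cost, len(target))
```

Note that the move is always taken even when every admissible neighbour is worse; the tabu list then prevents the search from immediately undoing it, which is what distinguishes tabu search from plain local descent.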

Proceedings ArticleDOI
30 May 1994
TL;DR: A systematic method to find several local optimal solutions for a general nonlinear optimization problem and can find the global optimal solution by properly switching between quasi-gradient systems and reflected gradient systems is proposed.
Abstract: We propose a systematic method to find several local optimal solutions for a general nonlinear optimization problem. Analytical results for quasi-gradient systems and reflected gradient systems are developed and applied to explore the topological and geometric aspects of the critical points of the objective function. A mechanism is devised to escape from a local optimal solution and proceed into another local optimal solution by locating the decomposition point. By properly switching between quasi-gradient systems and reflected gradient systems, our proposed method can obtain a set of local optimal solutions and decomposition points. The algorithm can also find the global optimal solution, provided it is able to find all the decomposition points. The main algorithm contains two levels: the lower level is continuous while the upper level is discrete in nature. Further improvements in the algorithm to locate all decomposition points are desirable. The proposed method is applied to one test example with encouraging results.

Proceedings Article
01 Aug 1994
TL;DR: Experimental results in job shop scheduling problems support the hypotheses that this approach is capable of capturing diverse user optimization preferences and re-using them to guide solution quality improvement, and is robust in the sense that it improves solution quality independent of the method of initial solution generation.
Abstract: We have developed an approach to acquire complicated user optimization criteria and use them to guide iterative solution improvement. The effectiveness of the approach was tested on job shop scheduling problems. The ill-structuredness of the domain and the desired optimization objectives in real-life problems, such as factory scheduling, makes the problems difficult to formalize and costly to solve. Current optimization technology requires explicit global optimization criteria in order to control its search for the optimal solution. But often, a user's optimization preferences are state-dependent and cannot be expressed in terms of a single global optimization criterion. In our approach, the optimization preferences are represented implicitly and extensionally in a case base. Experimental results in job shop scheduling problems support the hypotheses that our approach (1) is capable of capturing diverse user optimization preferences and re-using them to guide solution quality improvement, (2) is robust in the sense that it improves solution quality independent of the method of initial solution generation, and (3) produces high quality solutions, which are comparable with solutions generated by traditional iterative optimization techniques, such as simulated annealing, at much lower computational cost.

Journal ArticleDOI
TL;DR: The effectiveness, robustness, and fast convergence of modified genetic algorithms are demonstrated through the results of several examples, and genetic algorithms are more capable of locating the global optimum.
Abstract: This paper presents the applications of genetic algorithms to nonlinear constrained mixed-discrete optimization problems that occur in engineering design. Genetic algorithms are heuristic combinatorial optimization strategies. Several strategies are adopted to enhance the search efficiency and reduce the computational cost. The effectiveness, robustness, and fast convergence of modified genetic algorithms are demonstrated through the results of several examples. Moreover, genetic algorithms are more capable of locating the global optimum.

Book ChapterDOI
Jun Gu1
25 Aug 1994
TL;DR: A new optimization approach, multispace search, is given for general search and optimization problem solving that interplays structural operations related to problem structure with traditional value search.
Abstract: A traditional search algorithm optimizes by changing values in the value space, and it is difficult for such a value search algorithm to handle the pathological phenomena that occur in many combinatorial optimization problems. We give a new optimization approach, multispace search, for general search and optimization problem solving. A multispace search algorithm interplays structural operations related to problem structure with traditional value search. This disturbs the environment in which local minima form and makes multispace search a very natural approach to handling difficult optimization problems.

Book ChapterDOI
02 May 1994
TL;DR: The technique is tested on queries of different sizes and different types, showing that Tabu Search almost always obtains better query execution plans than other combinatorial optimization techniques.
Abstract: Query optimization is a hard combinatorial optimization problem, which makes enumerative optimization strategies unacceptable as the query size grows. In order to cope with complex large join queries, combinatorial optimization algorithms, such as Simulated Annealing and Iterative Improvement, were proposed as alternatives to traditional enumerative algorithms. In this paper, we propose to apply to the optimization of complex large join queries the relatively new combinatorial optimization technique called Tabu Search. We have tested this technique on queries of different sizes and different types and have shown that Tabu Search almost always obtains better query execution plans than other combinatorial optimization techniques.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: This paper proposes a method of solving combinatorial optimization problems by uniting genetic algorithms (GAs) with Hopfield's model (Hp model), and applies it to the traveling salesman problem (TSP).
Abstract: Combinatorial optimization problems are important to solve because of their wide utility. In this paper, the authors propose a method of solving combinatorial optimization problems by uniting genetic algorithms (GAs) with Hopfield's model (Hp model), and apply it to the traveling salesman problem (TSP). GAs are global search algorithms. In the Hp model, on the other hand, the range of a search is the neighborhood of the initial point, so the Hp model is a local search algorithm. By exploiting these complementary natures, which make up for each other's defects, the authors unite GAs with the Hp model and thereby overcome some difficulties, such as coding and crossover in GAs, and setting up the initial point and parameters in the Hp model. The availability of the proposed approach is verified by simulations.

Journal ArticleDOI
TL;DR: Some methods of determining the expected time to encounter an optimal solution are described, which provide rationalization of decisions made by many algorithm designers and some insights useful for future designs.

Journal ArticleDOI
TL;DR: All four algorithms outperform a local search heuristic previously proposed in the literature; on the class of instances dealt with, a remarkably stable ranking of the four algorithms emerges.
Abstract: Collections of cans containing nuclear fuel have to be grouped in batches that are as homogeneous as possible with respect to several criteria. This highly combinatorial problem, which can be described as grouping or clustering, is tackled using simulated annealing and tabu search. Both approaches are submitted to extensive experimentation on a real data set and several artificial ones. Two variants of the basic approaches, called "Locally optimized simulated annealing" and "Tabu search with variable offset", are also tested. Sensitivity to parameter choice and to problem size are investigated. All four algorithms outperform a local search heuristic previously proposed in the literature; on the class of instances dealt with, a remarkably stable ranking of the four algorithms emerges.