Topic

Extremal optimization

About: Extremal optimization is a research topic. Over its lifetime, 1,168 publications have been published on this topic, receiving 104,943 citations.
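For orientation: extremal optimization (EO), introduced by Boettcher and Percus and inspired by the Bak-Sneppen model of self-organized criticality, improves a single solution by repeatedly replacing its worst-adapted components rather than evolving a population. Below is a minimal Python sketch of the common tau-EO variant applied to MAX-CUT; the problem instance, the per-node fitness definition, and all parameter values are illustrative choices, not taken from any paper listed on this page.

```python
import random

def tau_eo_maxcut(edges, n, tau=1.4, steps=20000, seed=0):
    """tau-EO sketch for MAX-CUT (illustrative instance and parameters).

    Each node gets a local fitness: the fraction of its incident edges
    that are currently cut.  Nodes are ranked worst-to-best, the node of
    rank k is picked with probability proportional to k**(-tau), and its
    side of the partition is flipped unconditionally.
    """
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def cut_size():
        return sum(1 for u, v in edges if side[u] != side[v])

    # Rank-selection weights k**(-tau) for ranks 1..n (worst rank first).
    weights = [k ** -tau for k in range(1, n + 1)]
    best_side, best_cut = side[:], cut_size()

    for _ in range(steps):
        # Local fitness of each node: fraction of its edges that are cut.
        ranked = sorted(
            (sum(side[u] != side[v] for v in adj[u]) / max(len(adj[u]), 1), u)
            for u in range(n)
        )  # worst-adapted nodes (fewest cut edges) come first
        k = rng.choices(range(n), weights=weights)[0]
        u = ranked[k][1]
        side[u] ^= 1  # always accepted: EO has no temperature or accept test
        c = cut_size()
        if c > best_cut:
            best_cut, best_side = c, side[:]
    return best_side, best_cut

if __name__ == "__main__":
    rng = random.Random(1)
    n = 30
    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.2]
    _, cut = tau_eo_maxcut(edges, n)
    print("%d edges, best cut found: %d" % (len(edges), cut))
```

The key design point is that the move is always accepted: tau-EO has no temperature or acceptance test, and the power-law rank selection alone balances greediness against fluctuation.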


Papers
Journal ArticleDOI
TL;DR: A modified ant colony optimization (MACO) algorithm is presented, implementing a new definition of pheromone and a new cooperation mechanism between ants; results indicate that MACO is more efficient and robust than standard ACO in solving dynamic topology optimization problems.
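The summary above does not spell out MACO's new pheromone definition or cooperation mechanism, so as a point of reference here is a hedged sketch of the standard ant-system pheromone update (global evaporation followed by quality-proportional deposits) that such modifications typically start from; the function name and parameters are illustrative, not from the paper.

```python
def update_pheromone(tau, ants, rho=0.5, Q=1.0):
    """Standard ACO pheromone update, shown only as the baseline that a
    modified definition would replace (illustrative, not MACO itself).

    tau:  dict mapping a directed edge (i, j) to its pheromone level
    ants: list of (tour, tour_length) pairs from the current iteration
    """
    # Evaporation: every edge loses a fraction rho of its pheromone.
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    # Deposit: each ant reinforces its tour, more for shorter tours.
    for tour, length in ants:
        for i in range(len(tour)):
            edge = (tour[i], tour[(i + 1) % len(tour)])
            tau[edge] = tau.get(edge, 0.0) + Q / length
    return tau
```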

29 citations

Proceedings ArticleDOI
29 Nov 1995
TL;DR: The aim is to integrate the approach introduced by Dorigo et al., known as the ant system, with GA, exploiting the cooperative effect of the former and the evolutionary effect of the latter, to optimize another algorithm for optimization.
Abstract: The authors propose the use of genetic algorithms (GA) to optimize another algorithm for optimization. The aim is to integrate the approach introduced by Dorigo et al., known as the ant system, with GA, exploiting the cooperative effect of the former and the evolutionary effect of the latter. An ant algorithm aims to solve combinatorial optimization problems by means of a population of agents/processors that work in parallel, without a supervisor, in a cooperative manner. A genetic algorithm aims to optimize the performance of the ant population by selecting optimal values for its parameters through evolution of the genetic material associated with each agent. The approach has been applied to the traveling salesman problem; results and comparisons with the original method are presented.
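A hedged sketch of the hybrid idea under stated assumptions: each GA genome encodes three ant-system parameters (alpha, beta, rho, chosen here for illustration), and a genome's fitness is the best tour length the colony achieves with those parameters. The function names, parameter ranges, and GA operators below are illustrative, not the authors' exact scheme.

```python
import random

rng = random.Random(0)
CITIES = [(rng.random(), rng.random()) for _ in range(15)]  # toy TSP instance

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def run_ant_system(alpha, beta, rho, n_ants=10, iters=30):
    """Plain ant system on the toy instance; returns best tour length found."""
    n = len(CITIES)
    d = [[dist(CITIES[i], CITIES[j]) or 1e-9 for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]
    best = float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                ws = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                nxt = rng.choices(cand, weights=ws)[0]  # random-proportional rule
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            best = min(best, length)
        for i in range(n):                       # evaporation ...
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:               # ... then deposit
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best

def ga_tune(pop_size=8, gens=5):
    """Tiny GA over (alpha, beta, rho); lower tour length = fitter genome."""
    pop = [[rng.uniform(0.5, 3.0), rng.uniform(0.5, 5.0), rng.uniform(0.1, 0.9)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted((run_ant_system(*g), g) for g in pop)
        parents = [g for _, g in scored[: pop_size // 2]]   # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # blend crossover
            k = rng.randrange(3)
            child[k] *= rng.uniform(0.8, 1.25)              # multiplicative mutation
            child[2] = min(max(child[2], 0.05), 0.95)       # keep rho a valid rate
            children.append(child)
        pop = parents + children
    return sorted((run_ant_system(*g), g) for g in pop)[0]

if __name__ == "__main__":
    score, genome = ga_tune()
    print("best tour length %.3f with (alpha, beta, rho) = %s" % (score, genome))
```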

29 citations

Journal ArticleDOI
01 Jan 2011
TL;DR: This paper first introduces evolutionary algorithms, with emphasis on genetic algorithms and evolution strategies; other evolutionary algorithms, such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithms, and ant colony optimization, are also described.
Abstract: Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem that is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may easily become trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, which combines the classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolution strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithms, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks, as well as to the structural and parametric adaptation of fuzzy systems, is detailed.
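As a concrete instance of the evolution-strategy branch mentioned above, here is a minimal (1+1)-ES with the classic 1/5th success rule for step-size adaptation; the test function and all constants are illustrative choices.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=0):
    """(1+1)-ES with the classic 1/5th success rule for step-size control.
    Minimizes f over real vectors; all constants are illustrative."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for t in range(1, iters + 1):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]  # Gaussian mutation
        fc = f(child)
        if fc <= fx:            # plus-selection: keep the better of parent/child
            x, fx = child, fc
            successes += 1
        if t % 20 == 0:         # 1/5th rule: widen step if >20% succeed, else shrink
            sigma *= 1.22 if successes / 20 > 0.2 else 0.82
            successes = 0
    return x, fx

if __name__ == "__main__":
    sphere = lambda v: sum(xi * xi for xi in v)
    x, fx = one_plus_one_es(sphere, [3.0, -2.0, 1.5])
    print("f(x*) = %.6f" % fx)
```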

29 citations

Journal ArticleDOI
TL;DR: It is shown that the conventional transition rule used in ant algorithms is responsible for the stagnation phenomenon; a new transition rule is developed as a remedy for the premature convergence problem and is shown to overcome stagnation, leading to high-quality solutions.
Abstract: Ant algorithms are now being used more and more to solve optimization problems other than those for which they were originally developed. The method has been shown to outperform other general-purpose optimization algorithms, including genetic algorithms, when applied to some benchmark combinatorial optimization problems. Application of these methods to real-world engineering problems should, however, await further improvements regarding the practicality of their application. The sensitivity analysis required to determine the controlling parameters of the ant method is one of the main shortcomings of ant algorithms for practical use. Premature convergence of the method, often encountered with an elitist strategy of pheromone updating, is another problem to be addressed before any industrial use of the method can be expected. It is shown in this article that the conventional transition rule used in ant algorithms is responsible for the stagnation phenomenon. A new transition rule is, therefore, developed as a remedy and is shown to overcome the stagnation problem, leading to high-quality solutions.
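For reference, the conventional random-proportional transition rule that the article analyzes can be written in a few lines; once pheromone concentrates on a single edge, its selection probability approaches one and the colony stagnates, which is the failure mode described above. Parameter names follow common ACO usage; the paper's remedy rule itself is not reproduced here.

```python
def transition_probabilities(i, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Conventional random-proportional rule:
    p(i -> j) = tau[i][j]**alpha * eta[i][j]**beta / (sum over unvisited).
    When one edge holds nearly all the pheromone, its probability tends
    to 1 and every ant repeats the same tour (stagnation)."""
    weights = {j: tau[i][j] ** alpha * eta[i][j] ** beta for j in unvisited}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```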

28 citations

01 Jan 1990
TL;DR: A simple analysis of SA is presented that provides a time bound for convergence with overwhelming probability, along with a simpler and more general proof of convergence for Nested Annealing, a heuristic algorithm developed in [12].
Abstract: Simulated Annealing is a family of randomized algorithms used to solve many combinatorial optimization problems. In practice they have been applied to solve some presumably hard (e.g., NP-complete) problems. The level of performance obtained has been promising [5, 2, 6, 14]. The success of this heuristic technique has motivated analysis of the algorithm from a theoretical point of view. In particular, researchers have looked at the convergence of this algorithm. It has been shown (see, e.g., [10]) that the algorithm converges in the limit to a globally optimal solution with probability 1. However, few of these convergence results specify a time limit within which the algorithm is guaranteed to converge (with some high probability, say). We present, for the first time, a simple analysis of SA that provides a time bound for convergence with overwhelming probability. The analysis holds no matter what annealing schedule is used. Convergence of Simulated Annealing in the limit follows as a corollary to our time-convergence proof. In this paper we also look at optimization problems for which the cost function has some special properties. We prove that for these problems convergence is much faster. In particular, we give a simpler and more general proof of convergence for Nested Annealing, a heuristic algorithm developed in [12]. Nested Annealing is based on defining a graph corresponding to the given optimization problem. If this graph is 'small separable', [12] shows that Nested Annealing converges 'faster'. For an arbitrary optimization problem, we may not have any knowledge about the 'separability' of its graph. In this paper we give tight bounds for the 'separability' of a random graph. We then use these bounds to analyze the expected behavior of Nested Annealing on an arbitrary optimization problem. The 'separability' bounds we derive in this paper are of independent interest and have the potential of finding other applications.
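For concreteness, a generic simulated-annealing sketch with Metropolis acceptance and a geometric cooling schedule (one possible schedule; the paper's time bound is stated to hold for any annealing schedule). All names and constants here are illustrative.

```python
import math
import random

def simulated_annealing(f, neighbor, x0, t0=1.0, cooling=0.999, steps=50000, seed=0):
    """Generic SA: Metropolis acceptance with a geometric cooling schedule.
    Minimizes f; `neighbor` proposes a random neighboring state."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # Always accept downhill moves; accept uphill with prob exp(-delta/t).
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric schedule; the paper's bound is schedule-free
    return best, fbest

if __name__ == "__main__":
    # Toy usage: minimize a 1-D multimodal function over the reals.
    f = lambda x: (x - 1.0) ** 2 + math.sin(8.0 * x)
    step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
    print(simulated_annealing(f, step, x0=5.0))
```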

28 citations


Network Information
Related Topics (5)
Genetic algorithm: 67.5K papers, 1.2M citations (85% related)
Optimization problem: 96.4K papers, 2.1M citations (81% related)
Artificial neural network: 207K papers, 4.5M citations (80% related)
Cluster analysis: 146.5K papers, 2.9M citations (80% related)
Fuzzy logic: 151.2K papers, 2.3M citations (78% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    2
2022    13
2021    7
2020    9
2019    22
2018    15