
Showing papers on "Extremal optimization published in 2000"


Journal ArticleDOI
TL;DR: Computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MMAS (MAX-MIN Ant System) is currently among the best performing algorithms for these problems.

2,739 citations


Journal ArticleDOI
TL;DR: A new local optimizer called SOP-3-exchange is presented for the sequential ordering problem that extends a local search for the traveling salesman problem to handle multiple constraints directly without increasing computational complexity.
Abstract: We present a new local optimizer called SOP-3-exchange for the sequential ordering problem that extends a local search for the traveling salesman problem to handle multiple constraints directly without increasing computational complexity. An algorithm that combines the SOP-3-exchange with an Ant Colony Optimization algorithm is described, and we present experimental evidence that the resulting algorithm is more effective than existing methods for the problem. The best-known results for many problems in a standard 22-problem test set are improved using the SOP-3-exchange with our Ant Colony Optimization algorithm, or in combination with the MPO/AI algorithm (Chen and Smith 1996).

386 citations


Journal ArticleDOI
TL;DR: It is shown that under certain conditions, the solutions generated in each iteration of this Graph-based Ant System converge with a probability that can be made arbitrarily close to 1 to the optimal solution of the given problem instance.

376 citations


Book
10 Feb 2000
TL;DR: For each algorithm, the authors present the procedures of the algorithm, parameter selection criteria, convergence property analysis, and parallelization, and several real-world examples that illustrate various aspects of the algorithms.
Abstract: For each algorithm, the authors present the procedures of the algorithm, parameter selection criteria, convergence property analysis, and parallelization. There are also several real-world examples that illustrate various aspects of the algorithms. The book includes an introduction to fuzzy logic and its application in the formulation of multi-objective optimization problems, a discussion on hybrid techniques that combine features of heuristics, a survey of recent research work, and examples that illustrate required mathematical concepts.

348 citations


Journal ArticleDOI
TL;DR: In this article, Extremal Optimization, a heuristic inspired by self-organizing processes often found in nature, is proposed to find high-quality solutions to hard optimization problems by successively eliminating extremely undesirable components of sub-optimal solutions.

265 citations


Journal ArticleDOI
TL;DR: An ant colony optimization framework has been compared and shown to be a viable alternative approach to other stochastic search algorithms and can be successfully used for large-scale process optimization.
Abstract: An ant colony optimization framework has been compared and shown to be a viable alternative approach to other stochastic search algorithms. The algorithm has been tested on a variety of benchmark test functions involving constrained and unconstrained NLP, MILP, and MINLP optimization problems. This novel algorithm handles different types of continuous functions very well and can be successfully used for large-scale process optimization.

147 citations



Journal ArticleDOI
01 Nov 2000
TL;DR: The extremal dynamics of the Bak-Sneppen model can be converted into an optimization algorithm called extremal optimization, as discussed by the authors.
Abstract: The extremal dynamics of the Bak-Sneppen model can be converted into an optimization algorithm called extremal optimization. Attractive features of the model include the following: it is straightforward to relate the sum of all fitnesses to the cost function of the system; in the self-organized critical state to which the system inevitably evolves, almost all species have a much better than random fitness; most species preserve a good fitness for long times unless they are connected to poorly adapted species, providing the system with a long memory; the system retains a potential for large, hill-climbing fluctuations at any stage; and the model accomplishes these features without any control parameters.

63 citations
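The Bak-Sneppen dynamics described above can be sketched in a few lines. This is a generic illustration on a ring of species, with arbitrary parameters, not code from the paper:

```python
import random

def bak_sneppen(n_species=64, n_steps=10_000, seed=0):
    """Bak-Sneppen model on a ring: at each step, the species with the
    lowest fitness and its two neighbours get fresh random fitnesses."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_species)]
    for _ in range(n_steps):
        worst = min(range(n_species), key=fitness.__getitem__)
        for j in (worst - 1, worst, (worst + 1) % n_species):
            fitness[j] = rng.random()   # a negative index wraps the ring
    return fitness

final = bak_sneppen()
# In the self-organized critical state, almost all fitnesses end up above
# a threshold (about 2/3 on a large ring), far better than random.
print(min(final), sum(f > 0.5 for f in final) / len(final))
```

Running this shows the features listed in the abstract: most species hold a much better than random fitness, and activity concentrates on the few poorly adapted ones.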



Journal ArticleDOI
TL;DR: This paper presents a tabu search approach to a combinatorial optimization problem, in which the objective is to maximize the production throughput of a high-speed automated placement machine.
Abstract: Combinatorial optimization represents a wide range of real-life manufacturing optimization problems. Due to the high computational complexity, and the usually high number of variables, the solution of these problems imposes considerable challenges. This paper presents a tabu search approach to a combinatorial optimization problem, in which the objective is to maximize the production throughput of a high-speed automated placement machine. Tabu search is a modern heuristic technique widely employed to cope with large search spaces, for which classical search methods would not provide satisfactory solutions in a reasonable amount of time. The developed TS strategies are tailored to address the different issues caused by the modular structure of the machine.

29 citations
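The tabu-search mechanics described above (tabu list, aspiration) can be illustrated on a toy problem. The bit-string encoding and the max-cut instance here are illustrative choices, not the paper's placement-machine formulation:

```python
import random
from collections import deque

def tabu_search(objective, n_bits, n_iters=200, tenure=3, seed=0):
    """Single-flip tabu search over bit strings: always move to the best
    admissible neighbour, keep recently flipped positions tabu for
    `tenure` moves, and let an aspiration rule override the tabu status
    of any move that beats the best solution found so far."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best_x, best_val = x[:], objective(x)
    tabu = deque(maxlen=tenure)              # recently flipped positions
    for _ in range(n_iters):
        moves = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            v = objective(y)
            if i not in tabu or v > best_val:    # aspiration criterion
                moves.append((v, i, y))
        if not moves:
            break
        v, i, x = max(moves)        # best admissible move, even if worsening
        tabu.append(i)
        if v > best_val:
            best_x, best_val = x[:], v
    return best_x, best_val

# Toy instance: max-cut on a 5-cycle; the optimal cut value is 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cut = lambda x: sum(x[u] != x[v] for u, v in edges)
print(tabu_search(cut, n_bits=5))
```

The key design point is that the search always moves, even to a worse neighbour, so the tabu list rather than the objective is what prevents cycling.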


Posted Content
TL;DR: An introduction to Extremal Optimization written for the Computer Simulation Column in "Computing in Science and Engineering" (CISE) is given in this paper, along with a discussion of the main challenges of extremal optimization.
Abstract: An introduction to Extremal Optimization written for the Computer Simulation Column in "Computing in Science and Engineering" (CISE).

Proceedings Article
01 May 2000
TL;DR: It is demonstrated how extremal optimization can be implemented for a variety of hard optimization problems, and believed that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
Abstract: The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. The authors demonstrate how extremal optimization can be implemented for a variety of hard optimization problems, and believe that it will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
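The basic move of extremal optimization, replacing the extremely undesirable element of a single sub-optimal solution, can be sketched as follows. The max-cut instance and the per-vertex fitness definition are illustrative assumptions, not taken from the paper:

```python
import random

def extremal_optimization(edges, n, n_steps=500, seed=0):
    """Basic extremal optimization for max-cut: each vertex gets a local
    fitness (the fraction of its incident edges that are cut), and at
    every step the least-fit vertex is flipped, ties broken at random.
    The best configuration seen along the way is recorded."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    x = [rng.randint(0, 1) for _ in range(n)]
    cut_value = lambda: sum(x[u] != x[v] for u, v in edges)
    best_x, best_val = x[:], cut_value()
    for _ in range(n_steps):
        fitness = [sum(x[u] != x[v] for v in adj[u]) / max(len(adj[u]), 1)
                   for u in range(n)]
        worst = min(range(n), key=lambda u: (fitness[u], rng.random()))
        x[worst] ^= 1            # unconditionally replace the worst component
        val = cut_value()
        if val > best_val:
            best_x, best_val = x[:], val
    return best_x, best_val

# Toy instance: max-cut on a 5-cycle; the optimal cut value is 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(extremal_optimization(edges, n=5))
```

Note the absence of an acceptance test: the worst component is always replaced, which is what produces the large hill-climbing fluctuations the abstract describes.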

Book ChapterDOI
18 Sep 2000
TL;DR: It is believed that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
Abstract: We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire "genepool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.


Posted Content
23 Oct 2000

Proceedings Article
01 Jan 2000
TL;DR: Two parallel evolutionary multi-criteria optimization algorithms, DGA and DRMOGA, are applied to block layout problems, and it is confirmed that deriving the solutions is difficult with any model, even when the problem has only one objective.
Abstract: In this paper, two parallel evolutionary multi-criteria optimization algorithms, DGA and DRMOGA, are applied to block layout problems. The results are compared to those of SGA and discussed. Because block layout problems are NP-hard and can have several types of objectives, they are well suited to evolutionary multi-criterion optimization algorithms. DRMOGA is a DGA model that can derive good Pareto solutions in continuous optimization problems; however, it had not been applied to discrete problems. In the numerical example, the Pareto solutions of a block layout problem with 13 blocks were derived by DGA, DRMOGA, and SGA. It was confirmed that it is difficult to derive the solutions with any model, even if the problem has only one objective. It was also found that good parallel efficiency can be obtained with both DGA and DRMOGA. The Pareto solutions of DGA and DRMOGA are almost the same; however, DRMOGA searched a wider area than DGA.
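The Pareto-dominance filter underlying multi-objective algorithms such as DGA and DRMOGA can be sketched generically; the migration and region-division mechanics of those algorithms are not reproduced here, and the scores below are hypothetical:

```python
def dominates(q, p):
    """q dominates p when q is at least as good in every objective and
    strictly better in at least one (all objectives maximized)."""
    return (all(qi >= pi for qi, pi in zip(q, p))
            and any(qi > pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Four hypothetical layouts scored on two objectives (both maximized);
# (1, 1) is dominated by (2, 2) and drops out.
scores = [(1, 3), (2, 2), (3, 1), (1, 1)]
print(pareto_front(scores))   # → [(1, 3), (2, 2), (3, 1)]
```

This quadratic-time filter is the conceptual core; practical multi-objective GAs use faster non-dominated sorting, but compute the same set.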

Book
01 Jan 2000
TL;DR: Based on the characteristic cost distribution of a problem, the model helps to predict what components of an evolutionary algorithm are most relevant (e.g., initialization, mutation), and what is the expected overall performance.
Abstract: According to the No-Free-Lunch theorems of Wolpert and Macready, we cannot expect one generic optimization technique to outperform others on average [WM97]. For every optimization technique there exist "easy" and "hard" problems. However, little is known as to what criteria determine the success of an optimization technique. In this paper, we consider this question from the evolutionary computing point of view. We use cost distributions, i.e., the frequencies of the objective function's values occurring in the search spaces, to devise a classification of optimization problems. Unlike fitness landscapes, the cost distribution is truly problem-intrinsic rather than part of an algorithmic solution. Based on the characteristic cost distribution of a problem, our model helps to predict what components of an evolutionary algorithm are most relevant (e.g., initialization, mutation), and what is the expected overall performance. We validate the model through experiments on three problems: Set Partitioning, Knapsack, and Traveling Salesman.
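The cost distribution used above, the frequency of each objective value over the whole search space, can be computed exhaustively for a tiny instance. The 4-item knapsack data below are hypothetical, chosen only to illustrate the quantity:

```python
from collections import Counter
from itertools import product

def cost_distribution(objective, n_bits):
    """Frequency of each objective value over the entire search space of a
    bit-string problem -- only feasible for tiny instances, but enough to
    show the problem-intrinsic quantity the classification is built on."""
    return Counter(objective(x) for x in product((0, 1), repeat=n_bits))

# Hypothetical 4-item knapsack; infeasible selections score 0.
values, weights, capacity = [4, 2, 6, 3], [3, 1, 4, 2], 5

def knapsack_value(x):
    if sum(w for w, xi in zip(weights, x) if xi) > capacity:
        return 0
    return sum(v for v, xi in zip(values, x) if xi)

dist = cost_distribution(knapsack_value, n_bits=4)
print(sorted(dist.items()))
```

Here the heavy spike at cost 0 (the empty selection plus every infeasible one) is exactly the kind of distributional feature the paper's classification keys on.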

Journal Article
TL;DR: This paper surveys the development of algorithmic research on the travelling salesman problem; the main ideas of these algorithms are presented together with their time complexities.
Abstract: This paper surveys the development of algorithmic research on the travelling salesman problem. The main ideas of these algorithms are presented together with their time complexities.