
Showing papers on "Extremal optimization published in 1999"


Journal ArticleDOI
TL;DR: In this article, a simulated annealing-based heuristic was developed to obtain the least-cost design of a looped water distribution network, where a Newton search method was used to solve the hydraulic network equations.
Abstract: A simulated annealing-based heuristic has been developed to obtain the least-cost design of a looped water distribution network. A Newton search method was used to solve the hydraulic network equations. Simulated annealing is a stochastic optimization method that can work well for large-scale optimization problems that are cast in discrete or combinatorial form, as with the problem proposed. The results obtained with this approach for networks currently appearing in the literature as case studies in this field (whose solution by other optimization methods was known) have proved the ability of the heuristic to handle this kind of problem.
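The annealing loop itself is generic. Below is a minimal sketch of such a heuristic on a toy discrete pipe-sizing cost; the hydraulic Newton solver and the real network model from the paper are not reproduced, and the cost function here is purely illustrative:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=2000, seed=0):
    """Generic SA loop: accept worse moves with Boltzmann probability."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= alpha  # geometric cooling schedule
    return best, best_c

# Toy discrete "pipe sizing": one diameter index per pipe; the cost
# penalizes material (d^2) plus an artificial head-loss surrogate (100/d).
diams = [1, 2, 3, 4]

def cost(x):
    return sum(d * d for d in x) + sum(100.0 / d for d in x)

def neighbor(x, rng):
    y = list(x)
    y[rng.randrange(len(y))] = rng.choice(diams)
    return y

best, best_c = simulated_annealing(cost, neighbor, [1] * 5)
```

Because the state space is discrete (a diameter choice per pipe), the same loop applies unchanged to combinatorial design variables, which is the property the paper exploits.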

439 citations


Proceedings Article
13 Jul 1999
TL;DR: The Extremal Optimization method as mentioned in this paper is a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model.
Abstract: We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms which operate on an entire "gene-pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
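As an illustration, the eliminate-the-worst-component dynamics can be sketched for balanced graph bipartitioning. This simplified version always updates the single least-fit vertex; the paper's one-parameter (tau) variant instead picks the rank-k component with probability proportional to k^(-tau):

```python
import random

def extremal_optimization(adj, steps=3000, seed=0):
    """Basic EO for balanced graph bipartitioning.

    Each vertex is treated as a "species" whose fitness is the fraction
    of its edges that stay inside its own half. At every step the
    least-fit vertex is forced to change: it swaps sides with a random
    vertex from the other half, keeping the partition balanced.
    """
    rng = random.Random(seed)
    n = len(adj)
    side = [i % 2 for i in range(n)]  # balanced initial bipartition

    def cutsize(s):
        return sum(1 for u in range(n) for v in adj[u] if v > u and s[u] != s[v])

    def fitness(u):
        deg = len(adj[u]) or 1
        return sum(1 for v in adj[u] if side[v] == side[u]) / deg

    best, best_cut = list(side), cutsize(side)
    for _ in range(steps):
        worst = min(range(n), key=fitness)  # least-fit "species"
        partner = rng.choice([v for v in range(n) if side[v] != side[worst]])
        side[worst], side[partner] = side[partner], side[worst]
        cut = cutsize(side)
        if cut < best_cut:
            best, best_cut = list(side), cut
    return best, best_cut

# Two 4-cliques joined by a single bridge edge (0-4): the optimal
# balanced cut separates the cliques and cuts only the bridge.
adj = [
    [1, 2, 3, 4], [0, 2, 3], [0, 1, 3], [0, 1, 2],
    [0, 5, 6, 7], [4, 6, 7], [4, 5, 7], [4, 5, 6],
]
best, best_cut = extremal_optimization(adj)
```

Note there is no temperature and no acceptance test: the dynamics never settle into equilibrium, which is exactly the "non-equilibrium approach" the abstract contrasts with Simulated Annealing.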

178 citations



Journal ArticleDOI
TL;DR: In this article, extremal optimization (EO) is compared with simulated annealing (SA) in extensive numerical simulations, and the relative error of SA for large graphs is found to diverge relative to EO at equalized runtime.
Abstract: The benefits of a recently proposed method to approximate hard optimization problems are demonstrated on the graph partitioning problem. The performance of this new method, called extremal optimization (EO), is compared with simulated annealing (SA) in extensive numerical simulations. While generally a complex (NP-hard) problem, the optimization of the graph partitions is particularly difficult for sparse graphs with average connectivities near the percolation threshold. At this threshold, the relative error of SA for large graphs is found to diverge relative to EO at equalized runtime. On the other hand, EO, based on the extremal dynamics of self-organized critical systems, reproduces known results about optimal partitions at this critical point quite well.

61 citations


Journal ArticleDOI
01 Feb 1999
TL;DR: The paper proposes a formal method for comparing and selecting heuristic algorithms (or equivalently, different settings of the same algorithm) given a desired confidence level and a particular set of problem instances, and demonstrates that the method can determine the relative performance of heuristic algorithms with high confidence while using a small fraction of the computer time that conventional methods require.
Abstract: The performance of heuristic algorithms for combinatorial optimization is often sensitive to problem instances. In extreme cases, a specialized heuristic algorithm may perform exceptionally well on a particular set of instances while failing to produce acceptable solutions on others. Such a problem-sensitive nature is most evident in algorithms for combinatorial optimization problems such as job shop scheduling, vehicle routing, and cluster analysis. The paper proposes a formal method for comparing and selecting heuristic algorithms (or equivalently, different settings of the same algorithm) given a desired confidence level and a particular set of problem instances. We formulate this algorithm comparison problem as a stochastic optimization problem. Two approaches for stochastic optimization, ordinal optimization and optimal computing budget allocation, are applied to solve this algorithm selection problem. Computational testing on a set of statistical clustering algorithms in the IMSL library is conducted. The results demonstrate that our method can determine the relative performance of heuristic algorithms with high confidence while using a small fraction of the computer time that conventional methods require.
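The core idea of selecting among noisy algorithms under a replication budget can be sketched as follows. This is not the paper's OCBA allocation rule, which distributes replications using variance-based formulas; the greedy refinement below is only a crude stand-in, and the three "algorithms" are hypothetical noise sources:

```python
import random
import statistics

def select_algorithm(algos, init_reps=5, budget=60, seed=0):
    """Toy ordinal selection under a replication budget.

    After a few initial replications per algorithm, the remaining runs
    go to the apparent best and its closest competitor (a crude
    stand-in for Optimal Computing Budget Allocation).
    """
    rng = random.Random(seed)
    results = {name: [run(rng) for _ in range(init_reps)]
               for name, run in algos.items()}
    spent = init_reps * len(algos)
    while spent < budget:
        means = {name: statistics.mean(v) for name, v in results.items()}
        for name in sorted(means, key=means.get)[:2]:  # best and runner-up
            results[name].append(algos[name](rng))
            spent += 1
    means = {name: statistics.mean(v) for name, v in results.items()}
    return min(means, key=means.get)  # lowest mean cost wins

# Hypothetical algorithms returning noisy solution costs (lower is better).
algos = {
    "A": lambda rng: rng.gauss(0.0, 1.0),
    "B": lambda rng: rng.gauss(5.0, 1.0),
    "C": lambda rng: rng.gauss(10.0, 1.0),
}
winner = select_algorithm(algos)
```

The point the paper makes is that ranking algorithms correctly (ordinal comparison) needs far fewer replications than estimating their mean performance accurately, which is why the budget can be small.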

58 citations


Journal ArticleDOI
TL;DR: A procedure is presented which considerably improves the performance of local search based heuristic algorithms for combinatorial optimization problems by merging pairs of solutions: certain parts of either solution are transcribed by the related parts of the respective other solution.
Abstract: A procedure is presented which considerably improves the performance of local search based heuristic algorithms for combinatorial optimization problems. It increases the average 'gain' of the individual local searches by merging pairs of solutions: certain parts of either solution are transcribed by the related parts of the respective other solution, corresponding to flipping clusters of a spin glass. This iterative partial transcription acts as a local search in the subspace spanned by the differing components of both solutions. Embedding it in the simple multi-start-local-search algorithm and in the thermal-cycling method, we demonstrate its effectiveness for several instances of the traveling salesman problem. The obtained results indicate that, for this task, such approaches are far superior to simulated annealing.
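The transcription step can be illustrated on binary vectors. The paper applies it to TSP tours and spin-glass clusters; this bit-vector rendition, with a hypothetical mismatch cost, only shows the search restricted to the components where the two solutions differ:

```python
def partial_transcription(a, b, cost):
    """Merge two solutions: copy each contiguous block where b differs
    from a into a, keeping the copy only if it lowers the cost. This is
    a local search confined to the subspace where a and b disagree."""
    a = list(a)
    best_c = cost(a)
    i, n = 0, len(a)
    while i < n:
        if a[i] != b[i]:
            j = i
            while j < n and a[j] != b[j]:
                j += 1  # extend the differing block
            trial = a[:i] + list(b[i:j]) + a[j:]
            c = cost(trial)
            if c < best_c:
                a, best_c = trial, c
            i = j
        else:
            i += 1
    return a, best_c

# Hypothetical cost: mismatches against an unknown target vector.
target = [1, 1, 0, 0, 1, 0]

def mismatches(x):
    return sum(1 for xi, ti in zip(x, target) if xi != ti)

a = [0, 1, 0, 1, 1, 1]
b = [1, 0, 0, 0, 0, 0]
merged, c = partial_transcription(a, b, mismatches)  # c == 2
```

Here the second differing block of b improves a while the first does not, so only that block is transcribed; the two parent solutions are otherwise left alone.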

41 citations


Posted Content
TL;DR: A general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model, which successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components.
Abstract: We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than ``breeding'' better components. In contrast to Genetic Algorithms which operate on an entire ``gene-pool'' of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.

36 citations


Proceedings Article
13 Jul 1999
TL;DR: This paper presents an approach that uses reinforcement learning (RL) algorithms to solve combinatorial optimization problems and shows that the presented RL-agent approach can be used as a basis for global optimization techniques.
Abstract: This paper presents an approach that uses reinforcement learning (RL) algorithms to solve combinatorial optimization problems. In particular, the approach combines both local and global search characteristics: local information as encoded by typical RL schemes and global information as contained in a population of search agents. The effectiveness of the approach is demonstrated on both the Asymmetric Traveling Salesman (ATSP) and the Quadratic Assignment Problem (QAP). These results are competitive with other well-known search techniques and suggest that the presented RL-agent approach can be used as a basis for global optimization techniques.

21 citations


Patent
04 Nov 1999
TL;DR: In this article, a system enables an interactively guided heuristic search for solving a combinatorial optimization problem, where the parameters are altered based on the visualization of the optimization problem and the current solution.
Abstract: A system enables an interactively guided heuristic search for solving a combinatorial optimization problem. The system initially performs a hill-climbing search on the combinatorial optimization problem to obtain a solution using initial default parameters. The current solution and the combinatorial optimization problem are visualized on an optimization table, a table-top display device. The parameters are altered based on the visualization of the combinatorial optimization problem and the current solution. Then, the searching, visualizing, and setting are repeated until the solution is selected as an acceptable solution of the combinatorial optimization problem. During the repeating, the parameters can be a set of probabilities, and in which case the search is a random perturbation-based search. Alternatively, the parameters can be a set of priorities, in which case the search is an exhaustive local search.

21 citations


01 Jan 1999
TL;DR: This paper describes a general method that allows the best optimization techniques used in vector spaces to be applied to all order based problems whose domain is a permutation space, and it will be shown how this method can be applied to a real world problem, the optimal placement of interconnected cells on a chip.
Abstract: Many optimization problems find a natural mapping in permutation spaces where dedicated algorithms can be used during the optimization process. Unfortunately, some of the best and most effective techniques currently used can only be applied to vector (Cartesian) spaces, where a concept of distance between different objects can be easily defined. Examples of such techniques range from the simplest steepest-descent hill climbers and the more sophisticated conjugate gradient methods used in continuous spaces, to dynamic hill climbers or genetic algorithms (GAs) used in many large combinatorial problems. This paper describes a general method that allows the best optimization techniques used in vector spaces to be applied to all order based problems whose domain is a permutation space. It will also be shown how this method can be applied to a real world problem, the optimal placement of interconnected cells (modules) on a chip, in order to minimize the total length of their connections. For this problem a dynamic hill climber has been used as the optimization engine, but other techniques that work in a multidimensional vector space can be applied as well.
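The paper's exact mapping is not reproduced here; one standard way to make a permutation space look like a vector space is the random-keys encoding, where a real vector is decoded into a permutation by sorting. The sketch below pairs it with a plain hill climber on a hypothetical one-dimensional placement cost:

```python
import random

def keys_to_perm(keys):
    """Random-keys decoding: the permutation is the argsort of the keys."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def vector_hill_climb(cost_on_perm, n, steps=2000, sigma=0.3, seed=0):
    """Hill climbing in R^n; each point is decoded to a permutation,
    so any vector-space optimizer can drive the permutation search."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    c = cost_on_perm(keys_to_perm(x))
    for _ in range(steps):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        cy = cost_on_perm(keys_to_perm(y))
        if cy <= c:
            x, c = y, cy
    return keys_to_perm(x), c

# Hypothetical placement cost: each cell has a preferred slot and pays
# its distance from it (a 1-D caricature of wirelength minimization).
pref = [3, 0, 4, 1, 2]

def wirelength(perm):
    pos = [0] * len(perm)
    for slot, cell in enumerate(perm):
        pos[cell] = slot
    return sum(abs(pos[cell] - pref[cell]) for cell in range(len(pref)))

perm, c = vector_hill_climb(wirelength, 5)
```

Any perturbation of the key vector decodes to a valid permutation, so no repair step is needed; this is what lets distance-based vector-space techniques operate on order-based problems.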

15 citations


01 Jan 1999
TL;DR: In this article, an Ant Colony Optimization approach is proposed to solve the problem of determining a job-sequence that minimizes the overall tardiness for a given set of jobs to be processed on a single, continuously available machine, the Single Machine Total Tardiness Problem.
Abstract: Ant Colony Optimization is a relatively new meta-heuristic that has proven its quality and versatility on various combinatorial optimization problems such as the traveling salesman problem, the vehicle routing problem and the job shop scheduling problem. The paper introduces an Ant Colony Optimization approach to solve the problem of determining a job-sequence that minimizes the overall tardiness for a given set of jobs to be processed on a single, continuously available machine, the Single Machine Total Tardiness Problem (SMTTP). We experiment with various types of heuristic information as well as with variants of local search. Experiments on 250 benchmark problems with 50 and 100 jobs illustrate that Ant Colony Optimization is an adequate method to tackle the SMTTP.
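A pheromone-only sketch of the construction loop on a tiny SMTTP instance is given below. This is not the authors' variant, which additionally uses heuristic information (e.g. due-date rules) and local search; here ants pick jobs purely from pheromone weights, and the instance is made up for illustration:

```python
import random

def aco_smttp(proc, due, ants=10, iters=40, rho=0.1, seed=0):
    """Minimal Ant Colony sketch for the Single Machine Total Tardiness
    Problem. tau[i][j] rates placing job j at sequence position i; ants
    sample jobs proportionally to pheromone, and the best-so-far
    sequence reinforces its choices after each iteration."""
    rng = random.Random(seed)
    n = len(proc)
    tau = [[1.0] * n for _ in range(n)]

    def tardiness(seq):
        t = total = 0
        for j in seq:
            t += proc[j]
            total += max(0, t - due[j])
        return total

    best, best_t = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            remaining = list(range(n))
            seq = []
            for i in range(n):
                j = rng.choices(remaining,
                                weights=[tau[i][k] for k in remaining])[0]
                seq.append(j)
                remaining.remove(j)
            t = tardiness(seq)
            if t < best_t:
                best, best_t = list(seq), t
        for row in tau:                    # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for i, j in enumerate(best):       # reinforce best-so-far
            tau[i][j] += 1.0 / (1.0 + best_t)
    return best, best_t

# Made-up 3-job instance: processing times and due dates.
proc = [3, 1, 2]
due = [2, 6, 4]
seq, tard = aco_smttp(proc, due)
```

Evaporation keeps old, unreinforced choices from dominating, while the reinforcement term concentrates the colony on sequences with low total tardiness.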

Journal ArticleDOI
TL;DR: In this paper, Extremal Optimization is proposed to find high-quality solutions to hard optimization problems, inspired by self-organizing processes often found in nature, successively eliminating extremely undesirable components of sub-optimal solutions.
Abstract: We propose a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organizing processes often found in nature. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions. Drawing upon models used to simulate far-from-equilibrium dynamics, it complements approximation methods inspired by equilibrium statistical physics, such as Simulated Annealing. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.

Proceedings ArticleDOI
11 Nov 1999
TL;DR: The article mainly presents the genetic algorithm, AG, developed by Vieira and Lopes, as a general, correct genetic algorithm for construction, from which other algorithms can be correctly derived, depending in each case on the adopted representation.
Abstract: The article presents the specification of a system for the travelling salesman problem through a genetic algorithm developed from abstract data types (R.V. Vieira and M.A. Lopes, 1999). The intent is to demonstrate the efficiency of genetic algorithms in solving evolutionary problems. It mainly shows the genetic algorithm, AG, developed by Vieira and Lopes, as a general, correct genetic algorithm for construction, from which other algorithms can be correctly derived, depending in each case on the adopted representation.

Journal ArticleDOI
TL;DR: It is shown that the optimization of the graph partitions is particularly difficult for sparse graphs with average connectivities near the percolation threshold, and a new general purpose method based on self-organized criticality produces near-optimal partitions with bounded error at any low connectivity at a comparable computational cost.
Abstract: The partitioning of random graphs is investigated numerically using “simulated annealing” and “extremal optimization”. While generally an NP-hard problem, the optimization of the graph partitions is shown to be particularly difficult for sparse graphs with average connectivities near the percolation threshold. At this threshold, the relative error of “simulated annealing” is found to diverge in the thermodynamic limit. On the other hand, “extremal optimization”, a new general-purpose method based on self-organized criticality, produces near-optimal partitions with bounded error at any low connectivity at a comparable computational cost.

Journal ArticleDOI
T. Fukao1, J. Wu1, K. Ikeda1
TL;DR: It is shown that the theoretical critical temperatures determined by the estimation agree with those derived by computational experiments in graph partitioning problems and in the travelling salesman problem.
Abstract: This paper discusses the critical temperature (control parameter) of an annealed neural network, a typical collective computation for combinatorial optimization problems. It is shown that the theoretical critical temperatures determined by our estimation agree with those derived by computational experiments in graph partitioning problems and in the travelling salesman problem.