
Showing papers on "Extremal optimization published in 2004"


Book
01 Jan 2004
TL;DR: Ant colony optimization (ACO) belongs to swarm intelligence, a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and of other animals, as discussed by the authors. In particular, ants have inspired a number of methods and techniques, among which the most studied and the most successful is the general-purpose optimization technique known as ant colony optimization.
Abstract: Swarm intelligence is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and of other animals. In particular, ants have inspired a number of methods and techniques, among which the most studied and the most successful is the general purpose optimization technique known as ant colony optimization. Ant colony optimization (ACO) takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark some favorable path that should be followed by other members of the colony. Ant colony optimization exploits a similar mechanism for solving optimization problems. From the early nineties, when the first ant colony optimization algorithm was proposed, ACO attracted the attention of increasing numbers of researchers, and many successful applications are now available. Moreover, a substantial corpus of theoretical results is becoming available that provides useful guidelines to researchers and practitioners in further applications of ACO. The goal of this article is to introduce ant colony optimization and to survey its most notable applications.

6,861 citations


Journal ArticleDOI
01 Apr 2004
TL;DR: This paper proposes a new framework for implementing ant colony optimization algorithms, called the hyper-cube framework, which limits the pheromone values to the interval [0,1]; it also proves that in the ant system, the ancestor of all ant colony optimization algorithms, the average quality of the solutions produced increases in expectation over time when applied to unconstrained problems.
Abstract: Ant colony optimization is a metaheuristic approach belonging to the class of model-based search algorithms. In this paper, we propose a new framework for implementing ant colony optimization algorithms called the hyper-cube framework for ant colony optimization. In contrast to the usual way of implementing ant colony optimization algorithms, this framework limits the pheromone values to the interval [0,1]. This is obtained by introducing changes in the pheromone value update rule. These changes can in general be applied to any pheromone value update rule used in ant colony optimization. We discuss the benefits coming with this new framework. The benefits are twofold. On the theoretical side, the new framework allows us to prove that in the ant system, the ancestor of all ant colony optimization algorithms, the average quality of the solutions produced increases in expectation over time when applied to unconstrained problems. On the practical side, the new framework automatically handles the scaling of the objective function values. We experimentally show that this leads on average to a more robust behavior of ant colony optimization algorithms.
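A sketch of the key idea, assuming a simple set-of-components solution representation of our own choosing: the update moves each pheromone value by a convex combination toward the normalized, quality-weighted frequency of each component among the current solutions, so the value can never leave [0,1]:

```python
def hypercube_update(tau, solutions, costs, rho=0.1):
    """Hyper-cube-style pheromone update (sketch): tau is a dict mapping
    a solution component (e.g. an edge) to its pheromone value in [0,1];
    `solutions` are sets of components and lower cost means better.
    Each tau value is a convex combination of its old value and the
    quality-weighted fraction of solutions using that component."""
    weights = [1.0 / c for c in costs]           # solution quality weights
    total = sum(weights)
    new_tau = {}
    for comp, t in tau.items():
        # d is in [0,1]: normalized weighted usage of this component
        d = sum(w for s, w in zip(solutions, weights) if comp in s) / total
        new_tau[comp] = (1.0 - rho) * t + rho * d
    return new_tau
```

Because both the old value and the target d lie in [0,1], the convex combination stays there regardless of how the objective function scales.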

428 citations


Journal ArticleDOI
TL;DR: This paper introduces model-based search as a unifying framework accommodating some recently proposed metaheuristics for combinatorial optimization such as ant colony optimization, stochastic gradient ascent, cross-entropy and estimation of distribution methods.
Abstract: In this paper we introduce model-based search as a unifying framework accommodating some recently proposed metaheuristics for combinatorial optimization such as ant colony optimization, stochastic gradient ascent, cross-entropy and estimation of distribution methods. We discuss similarities as well as distinctive features of each method and we propose some extensions.

256 citations


Journal ArticleDOI
TL;DR: In this article, a new heuristic approach for minimizing the operating path of automated or computer numerically controlled drilling operations is described, which is first defined as a travelling salesman problem.
Abstract: A new heuristic approach for minimizing the operating path of automated or computer numerically controlled drilling operations is described. The operating path is first defined as a travelling salesman problem. The new heuristic, particle swarm optimization, is then applied to the travelling salesman problem. A model for the approximate prediction of drilling time based on the heuristic solution is presented. The new method requires few control variables: it is versatile, robust and easy to use. In a batch production of a large number of items to be drilled such as in printed circuit boards, the travel time of the drilling device is a significant portion of the overall manufacturing process, hence the new particle swarm optimization–travelling salesman problem heuristic can play a role in reducing production costs.
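For reference, the core (continuous) particle swarm update that such work adapts to tours looks roughly like this; the permutation mapping needed for the TSP is not shown, and the parameter values are common defaults rather than the paper's:

```python
import random

def pso(f, dim, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Standard continuous PSO sketch: each particle's velocity is pulled
    toward its own best position (pbest) and the swarm's best (gbest),
    minimizing f over dim dimensions."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:               # update personal best
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:              # update global best
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f
```

The few control variables (inertia w, cognitive c1, social c2) are the "few control variables" the abstract refers to.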

174 citations


Book
01 Jul 2004
TL;DR: The Potts Model is used for Solving Hard Max-cut Problems and Finding Low-energy Configurations and for Computing the Potts Free Energy and Submodular Functions, both of which have applications in physics and engineering.
Abstract: List of Contributors. 1 Introduction (A.K. Hartmann and H. Rieger). Part I: Applications in Physics. 2 Cluster Monte Carlo Algorithms (W. Krauth). 2.1 Detailed Balance and a priori Probabilities. 2.2 The Wolff Cluster Algorithm for the Ising Model. 2.3 Cluster Algorithm for Hard Spheres and Related Systems. 2.4 Applications. 2.4.1 Phase Separation in Binary Mixtures. 2.4.2 Polydisperse Mixtures. 2.4.3 Monomer-Dimer Problem. 2.5 Limitations and Extensions. References. 3 Probing Spin Glasses with Heuristic Optimization Algorithms (O.C. Martin). 3.1 Spin Glasses. 3.1.1 Motivations. 3.1.2 The Ising Model. 3.1.3 Models of Spin Glasses. 3.1.4 Some Challenges. 3.2 Some Heuristic Algorithms. 3.2.1 General Issues. 3.2.2 Variable Depth Search. 3.2.3 Genetic Renormalization Algorithm. 3.3 A survey of Physics Results. 3.3.1 Convergence of the Ground-state Energy Density. 3.3.2 Domain Walls. 3.3.3 Clustering of Ground States. 3.3.4 Low-energy Excitations. 3.3.5 Phase Diagram. 3.4 Outlook. References. 4 Computing Exact Ground States of Hard Ising Spin Glass Problems by Branch-and-cut (F. Liers, M. Junger, G. Reinelt, and G. Rinaldi). 4.1 Introduction. 4.2 Ground States and Maximum Cuts. 4.3 A General Scheme for Solving Hard Max-cut Problems. 4.4 Linear Programming Relaxations of Max-cut. 4.5 Branch-and-cut. 4.6 Results of Exact Ground-state Computations. 4.7 Advantages of Branch-and-cut. 4.8 Challenges for the Years to Come. References. 5 Counting States and Counting Operations (A. Alan Middleton). 5.1 Introduction. 5.2 Physical Questions about Ground States. 5.2.1 Homogeneous Models. 5.2.2 Magnets with Frozen Disorder. 5.3 Finding Low-energy Configurations. 5.3.1 Physically Motivated Approaches. 5.3.2 Combinatorial Optimization. 5.3.3 Ground-state Algorithm for the RFIM. 5.4 The Energy Landscape: Degeneracy and Barriers. 5.5 Counting States. 5.5.1 Ground-state Configuration Degeneracy. 5.5.2 Thermodynamic State. 5.5.3 Numerical Studies of Zero-temperature States. 
5.6 Running Times for Optimization Algorithms. 5.6.1 Running Times and Evolution of the Heights. 5.6.2 Heuristic Derivation of Running Times. 5.7 Further Directions. References. 6 Computing the Potts Free Energy and Submodular Functions (J.-C. Angles d'Auriac). 6.1 Introduction. 6.2 The Potts Model. 6.2.1 Definition of the Potts Model. 6.2.2 Some Results for Non-random Models. 6.2.3 The Ferromagnetic Random Bond Potts Model. 6.2.4 High Temperature Development. 6.2.5 Limit of an Infinite Number of States. 6.3 Basics on the Minimization of Submodular Functions. 6.3.1 Definition of Submodular Functions. 6.3.2 A simple Characterization. 6.3.3 Examples. 6.3.4 Minimization of Submodular Function. 6.4 Free Energy of the Potts Model in the Infinite q-Limit. 6.4.1 The Method. 6.4.2 The Auxiliary Problem. 6.4.3 The Max-flow Problem: the Goldberg and Tarjan Algorithm. 6.4.4 About the Structure of the Optimal Sets. 6.5 Implementation and Evaluation. 6.5.1 Implementation. 6.5.2 Example of Application. 6.5.3 Evaluation of the CPU Time. 6.5.4 Memory Requirement. 6.5.5 Various Possible Improvements. 6.6 Conclusion. References. Part II Phase Transitions in Combinatorial Optimization Problems. 7 The Random 3-satisfiability Problem: From the Phase Transition to the Efficient Generation of Hard, but Satisfiable Problem Instances (M. Weigt). 7.1 Introduction. 7.2 Random 3-SAT and the SAT/UNSAT Transition. 7.2.1 Numerical Results. 7.2.2 Using Statistical Mechanics. 7.3 Satisfiable Random 3-SAT Instances. 7.3.1 The Naive Generator. 7.3.2 Unbiased Generators. 7.4 Conclusion. References. 8 Analysis of Backtracking Procedures for Random Decision Problems (S. Cocco, L. Ein-Dor, and R. Monasson). 8.1 Introduction. 8.2 Phase Diagram, Search Trajectories and the Easy SAT Phase. 8.2.1 Overview of Concepts Useful to DPLL Analysis. 8.2.2 Clause Populations: Flows, Averages and Fluctuations. 8.2.3 Average-case Analysis in the Absence of Backtracking. 
8.2.4 Occurrence of Contradictions and Polynomial SAT Phase. 8.3 Analysis of the Search Tree Growth in the UNSAT Phase. 8.3.1 Numerical Experiments. 8.3.2 Parallel Growth Process and Markovian Evolution Matrix. 8.3.3 Generating Function and Large-size Scaling. 8.3.4 Interpretation in Terms of Growth Process. 8.4 Hard SAT Phase: Average Case and Fluctuations. 8.4.1 Mixed Branch and Tree Trajectories. 8.4.2 Distribution of Running Times. 8.4.3 Large Deviation Analysis of the First Branch in the Tree. 8.5 The Random Graph Coloring Problem. 8.5.1 Description of DPLL Algorithm for Coloring. 8.5.2 Coloring in the Absence of Backtracking. 8.5.3 Coloring in the Presence of Massive Backtracking. 8.6 Conclusions. References. 9 New Iterative Algorithms for Hard Combinatorial Problems (R. Zecchina). 9.1 Introduction. 9.2 Combinatorial Decision Problems, K-SAT and the Factor Graph Representation. 9.2.1 Random K-SAT. 9.3 Growth Process Algorithm: Probabilities, Messages and Their Statistics. 9.4 Traditional Message-passing Algorithm: Belief Propagation as Simple Cavity Equations. 9.5 Survey Propagation Equations. 9.6 Decimating Variables According to Their Statistical Bias. 9.7 Conclusions and Perspectives. References. Part III New Heuristics and Interdisciplinary Applications. 10 Hysteretic Optimization (K.F. Pal). 10.1 Hysteretic Optimization for Ising Spin Glasses. 10.2 Generalization to Other Optimization Problems. 10.3 Application to the Traveling Salesman Problem. 10.4 Outlook. References. 11 Extremal Optimization (S. Boettcher). 11.1 Emerging Optimality. 11.2 Extremal Optimization. 11.2.1 Basic Notions. 11.2.2 EO Algorithm. 11.2.3 Extremal Selection. 11.2.4 Rank Ordering. 11.2.5 Defining Fitness. 11.2.6 Distinguishing EO from other Heuristics. 11.2.7 Implementing EO. 11.3 Numerical Results for EO. 11.3.1 Early Results. 11.3.2 Applications of EO by Others. 11.3.3 Large-scale Simulations of Spin Glasses. 11.4 Theoretical Investigations. References. 
12 Sequence Alignments (A.K. Hartmann). 12.1 Molecular Biology. 12.2 Alignments and Alignment Algorithms. 12.3 Low-probability Tail of Alignment Scores. References. 13 Protein Folding in Silico - the Quest for Better Algorithms (U.H.E. Hansmann). 13.1 Introduction. 13.2 Energy Landscape Paving. 13.3 Beyond Global Optimization. 13.3.1 Parallel Tempering. 13.3.2 Multicanonical Sampling and Other Generalized-ensemble Techniques. 13.4 Results. 13.4.1 Helix Formation and Folding. 13.4.2 Structure Predictions of Small Proteins. 13.5 Conclusion. References. Index.

147 citations


Proceedings ArticleDOI
14 Sep 2004
TL;DR: In this paper, a modified particle swarm optimization (PSO) algorithm was proposed to solve a typical combinatorial optimization problem: traveling salesman problem (TSP), which is a well-known NP-hard problem.
Abstract: Particle swarm optimization, as an evolutionary computing technique, has succeeded in many continuous problems, but, according to Kennedy and Eberhart (1997) and Mohan and Al-kazemi (2001), little research has been done on discrete problems, especially combinatorial optimization problems. In this paper, a modified particle swarm optimization (PSO) algorithm is proposed to solve a typical combinatorial optimization problem: the traveling salesman problem (TSP), a well-known NP-hard problem. Fuzzy matrices are used to represent the position and velocity of the particles in PSO, and the operators in the original PSO formulas are redefined. The algorithm was then tested on concrete instances from TSPLIB; experiments show that it can achieve good results.

147 citations


Journal ArticleDOI
TL;DR: This paper shows how a particular unified modeling framework, coupled with the latest advances in heuristic search methods, makes it possible to solve problems from a wide range of important model classes.
Abstract: Combinatorial optimization problems are often too complex to be solved within reasonable time limits by exact methods, in spite of the theoretical guarantee that such methods will ultimately obtain an optimal solution. Instead, heuristic methods, which do not offer a convergence guarantee but have greater flexibility to take advantage of special properties of the search space, are commonly a preferred alternative. The standard procedure is to craft a heuristic method to suit the particular characteristics of the problem at hand, exploiting to the extent possible the structure available. Such tailored methods, however, typically have limited usefulness in other problem domains.

103 citations



Journal ArticleDOI
TL;DR: The exploration of the degenerate ground states indicates that the backbone order parameter, measuring the constrainedness of the problem, exhibits a first-order phase transition in vertex coloring on random graphs.
Abstract: We investigate the phase transition in vertex coloring on random graphs, using the extremal optimization heuristic. Three-coloring is among the hardest combinatorial optimization problems and is equivalent to a 3-state anti-ferromagnetic Potts model. Like many other such optimization problems, it has been shown to exhibit a phase transition in its ground state behavior under variation of a system parameter: the graph’s mean vertex degree. This phase transition is often associated with the instances of highest complexity. We use extremal optimization to measure the ground state cost and the “backbone,” an order parameter related to ground state overlap, averaged over a large number of instances near the transition for random graphs of size n up to 512. For these graphs, benchmarks show that extremal optimization reaches ground states and explores a sufficient number of them to give the correct backbone value after about O(n^3.5) update steps. Finite size scaling yields a critical mean degree value αc = 4.703(28). Furthermore, the exploration of the degenerate ground states indicates that the backbone order parameter, measuring the constrainedness of the problem, exhibits a first-order phase transition.
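A minimal tau-EO sketch for 3-coloring along these lines (our own simplification; the value tau = 1.4 is an illustrative choice, not taken from the paper): each vertex's fitness is the negative of its conflict count, vertices are ranked worst-first, and the k-th ranked vertex is selected with probability proportional to k^(-tau) and recolored at random, unconditionally.

```python
import random

def eo_three_coloring(edges, n, tau=1.4, steps=20000):
    """tau-EO sketch for 3-coloring of a graph with n vertices.
    Returns the best coloring found and its number of conflicting edges."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [random.randrange(3) for _ in range(n)]

    def conflicts(v):
        return sum(color[v] == color[u] for u in adj[v])

    best = color[:]
    best_cost = sum(conflicts(v) for v in range(n)) // 2
    # power-law distribution over ranks (rank 1 = worst vertex)
    weights = [(k + 1) ** (-tau) for k in range(n)]
    for _ in range(steps):
        ranked = sorted(range(n), key=conflicts, reverse=True)  # worst first
        v = random.choices(ranked, weights=weights)[0]
        color[v] = random.randrange(3)        # unconditional move, no rejection
        cost = sum(conflicts(u) for u in range(n)) // 2
        if cost < best_cost:
            best, best_cost = color[:], cost
            if best_cost == 0:
                break
    return best, best_cost
```

The unconditional, fluctuation-driven move is what lets EO explore many degenerate ground states instead of freezing into one, which is exactly what the backbone measurement requires.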

86 citations


Journal ArticleDOI
TL;DR: The generalized extremal optimization (GEO) algorithm as discussed by the authors is a meta-heuristic based on the genetic algorithm and simulated annealing, but with the a priori advantage of having only one free parameter to adjust.

57 citations


Journal ArticleDOI
TL;DR: A new concept, named restricted evolution, shows better characteristics than the conventional approaches that have been used for multimodal function optimization; its efficiency and usefulness are verified by application to various cases, including practical optimization problems.
Abstract: In this paper, a novel algorithm for multimodal function optimization is proposed, based on the concept of evolution strategy. A new concept, named restricted evolution, shows better characteristics than the conventional approaches that have been used for multimodal function optimization. The efficiency and usefulness of the proposed method are verified by application to various cases, including practical optimization problems.

Book ChapterDOI
05 Sep 2004
TL;DR: The existing algorithms based on the Ant Colony Optimization metaheuristic are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objectives genetic algorithms.
Abstract: The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.

Proceedings ArticleDOI
01 Nov 2004
TL;DR: The GSO algorithm is essentially a population-based heuristic search technique which can be used to solve combinatorial optimization problems, modeled on the concept of natural selection but also based on cultural and social evolution.
Abstract: A new hybrid evolutionary algorithm called GSO (genetical swarm optimization) is presented here. GSO combines the well-known particle swarm optimization and genetic algorithms. The GSO algorithm is essentially a population-based heuristic search technique which can be used to solve combinatorial optimization problems, modeled on the concept of natural selection but also based on cultural and social evolution. A detailed description of the algorithm and a numerical comparison of the different techniques are presented for a typical electromagnetic optimization problem.

Journal ArticleDOI
TL;DR: A version of the extremal optimization (EO) algorithm introduced by Boettcher and Percus is tested on two- and three-dimensional spin glasses with Gaussian disorder, finding exact ground states with a speedup of order 10^4 (10^2) for 16^2- (8^3-) spin samples.

Abstract: A version of the extremal optimization (EO) algorithm introduced by Boettcher and Percus is tested on two- and three-dimensional spin glasses with Gaussian disorder. EO preferentially flips spins that are locally “unfit”; the variant introduced here reduces the probability of flipping previously selected spins. Relative to EO, this adaptive algorithm finds exact ground states with a speedup of order 10^4 (10^2) for 16^2- (8^3-) spin samples. This speedup increases rapidly with system size, making this heuristic a useful tool in the study of materials with quenched disorder.

Proceedings ArticleDOI
26 Aug 2004
TL;DR: A new pheromone updating strategy is presented, which is used to optimize ACO (ant colony optimization) in solving the traveling salesman problem and the efficiency of the algorithm is demonstrated by means of experimental study.
Abstract: This work presents a new pheromone updating strategy, which is used to optimize ACO (ant colony optimization) in solving the traveling salesman problem. First, the paper introduces the principle, the characteristics, the construction and the realization method of ACO. Then, an improved ant colony optimization algorithm using a new pheromone updating strategy is proposed. The pheromone trail of each edge is given a lower limit during the initial iterations of the algorithm, and the worst ant, judged by its tour length, is allowed to perform global trail updating in the same way as the best ant in ACO. Finally, we demonstrate the efficiency of the algorithm by means of an experimental study.
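The updating strategy can be sketched roughly as follows; the function shape, parameter names, and the sign of the worst-ant contribution are our assumptions, not the authors' code:

```python
def update_pheromone(tau, best_tour, best_len, worst_tour, worst_len,
                     rho=0.1, q=1.0, tau_min=0.01):
    """Sketch of a best/worst-ant update with a pheromone floor: after
    evaporation, the best ant's edges are reinforced, the worst ant's
    edges are penalized, and every trail is clamped to tau_min so no
    edge is ever completely ruled out."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1.0 - rho                  # evaporation

    def edges(tour):
        return [(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour))]

    for i, j in edges(best_tour):                    # reinforce best tour
        tau[i][j] += q / best_len
        tau[j][i] += q / best_len
    for i, j in edges(worst_tour):                   # penalize worst tour
        tau[i][j] -= q / worst_len
        tau[j][i] -= q / worst_len
    for i in range(n):                               # enforce the lower limit
        for j in range(n):
            tau[i][j] = max(tau[i][j], tau_min)
    return tau
```

The floor tau_min plays the role of the "lower limit" in the abstract: it preserves a nonzero selection probability for every edge early on, which counteracts premature stagnation.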

Proceedings ArticleDOI
19 Jun 2004
TL;DR: This work proposes benchmarks for the dynamic travelling salesman problem, adapted from the CHN-144 benchmark of 144 Chinese cities for the static travelling salesmanProblem, and provides an example of the use of the benchmark, and illustrates the information that can be gleaned from analysis of the algorithm performance on the benchmarks.
Abstract: Dynamic optimisation problems are becoming increasingly important; meanwhile, progress in optimisation techniques and in computational resources is permitting the development of effective systems for dynamic optimisation, resulting in a need for objective methods to evaluate and compare different techniques. The search for effective techniques may be seen as a multi-objective problem, trading off time complexity against effectiveness; hence benchmarks must be able to compare techniques across the Pareto front, not merely at a single point. We propose benchmarks for the dynamic travelling salesman problem, adapted from the CHN-144 benchmark of 144 Chinese cities for the static travelling salesman problem. We provide an example of the use of the benchmark, and illustrate the information that can be gleaned from analysis of algorithm performance on the benchmarks.

Book ChapterDOI
01 Jan 2004
TL;DR: This chapter shows how the CE method can be easily transformed into an efficient and versatile randomized algorithm for solving optimization problems, in particular combinatorial optimization problems.
Abstract: In this chapter we show how the CE method can be easily transformed into an efficient and versatile randomized algorithm for solving optimization problems, in particular combinatorial optimization problems.

Journal ArticleDOI
TL;DR: The present design application highlights the GEO features of being easy to implement and efficient at tackling optimization problems in which the objective function presents design variables with strong nonlinear interactions and is subject to multiple constraints.
Abstract: In this paper, an application of the Generalized Extremal Optimization (GEO) algorithm to the optimization of a heat pipe (HP) for a space application is presented. The GEO algorithm is a generalization of the Extremal Optimization (EO) algorithm, devised to be readily applicable to a broad class of design optimization problems regardless of the complexity of the design space it faces. It is easy to implement, does not make use of derivatives, and can be applied to either unconstrained or constrained problems with continuous, discrete, or integer variables. The GEO algorithm has been tested on a series of test functions and shown to be competitive with other stochastic algorithms, such as the Genetic Algorithm. In this work, it is applied to the problem of minimizing the mass of an HP as a function of a desirable heat transport capability and a given temperature on the condenser. The optimal solutions were obtained for different heat loads, heat sink temperatures, and three working fluids: ammonia, methanol, and...

Proceedings ArticleDOI
26 Aug 2004
TL;DR: Experimental results of the global optimization of two continuous multi-extreme functions indicate the effectiveness and the applicability of the proposed algorithm.
Abstract: A hybrid optimization technique is proposed for global optimization of continuous multi-extreme functions. The scheme incorporates a deterministic searching algorithm (the Powell method) into the ant colony algorithm. This hybrid method can improve the optimization performance and enhance the fast convergence during the local search of the ant colony algorithm. Experimental results of the global optimization of two continuous multi-extreme functions indicate the effectiveness and the applicability of the proposed algorithm.

Book ChapterDOI
29 Jun 2004
TL;DR: This paper investigates the influence of model bias in model-based search such as ACO and presents the effect of two different pheromone models for an ACO algorithm tackling the Multiple Knapsack Problem (MKP).
Abstract: The Ant Colony Optimization (ACO) algorithms are being applied successfully to a wide range of problems. ACO algorithms could be good alternatives to existing algorithms for hard combinatorial optimization problems (COPs). In this paper we investigate the influence of model bias in model-based search such as ACO. We present the effect of two different pheromone models for an ACO algorithm tackling the Multiple Knapsack Problem (MKP). The MKP is a subset problem and can be seen as a general model for any kind of binary problem with positive coefficients. The results show the importance of the pheromone model to the quality of the solutions.

Journal Article
TL;DR: Experimental results show that this new algorithm can find better solutions for function optimization problems than genetic algorithms and other ant colony systems for continuous optimization.
Abstract: Based on the Ant Colony System, a new algorithm for continuous function optimization is proposed. In this algorithm, each ant selects one of ten decimal digits at every step, and in this way a solution to the function optimization problem is built. As in the general Ant Colony System, the ants modify the information left on their paths, so that the probability that an ant chooses a particular number at a given step changes over time, leading the ants to better paths. Experimental results show that this new algorithm can find better solutions for function optimization problems than genetic algorithms and other ant colony systems for continuous optimization. This new algorithm presents a new way to solve continuous optimization problems.
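The digit-wise construction can be sketched as follows (our reading of the description; the per-position pheromone table layout is an assumption): at each position the ant picks one of ten decimal digits with probability proportional to that digit's pheromone, and the digit string is decoded into a real value.

```python
import random

def construct_value(pheromone, lo, hi):
    """Build one candidate solution for a single continuous variable.
    `pheromone` is a list of rows, one per decimal position, each row
    holding ten weights (one per digit 0-9). The chosen digits form a
    decimal fraction that is mapped into [lo, hi)."""
    digits = [random.choices(range(10), weights=row)[0] for row in pheromone]
    frac = sum(d * 10 ** -(i + 1) for i, d in enumerate(digits))
    return lo + frac * (hi - lo)
```

Reinforcing the rows of `pheromone` for digits that led to good function values then plays the same role as trail updating on the edges of a TSP graph.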


Proceedings ArticleDOI
26 Aug 2004
TL;DR: In this paper, the hybrid of EN and inver-over algorithm is proposed to solve the dynamic TSP problem, and the results of the experiment show that this algorithm is effective.
Abstract: Most research in evolutionary computation focuses on the optimization of static, unchanging problems. Many real-world optimization problems, however, are actually dynamic, and optimization methods capable of continuously adapting the solution to a changing environment are needed. In this paper, we introduce an approach to solving the dynamic TSP. A dynamic TSP is harder than a general TSP, which is an NP-hard problem, because the city number and the cost matrix of a dynamic TSP are time-varying. We propose an algorithm to solve the dynamic TSP, which is a hybrid of the EN and inver-over algorithms. From the results of the experiment, we conclude that this algorithm is effective.

01 Jan 2004
TL;DR: This thesis presents an introduction to stochastic routing problems followed by the description and the mathematical model of the Vehicle Routing Problem with Stochastic Demands (VRPSD) and the development and comparison of metaheuristics to tackle the VRPSD.
Abstract: The work presented in this thesis is part of the research carried out in the Metaheuristics Network, a Research Training Network sponsored by the Improving Human Potential Program of the European Community (HPRN-CT1999-00106). This thesis consists of two parts. The first part (Chapters 1 and 2) presents an introduction to stochastic routing problems, followed by the description and the mathematical model of the Vehicle Routing Problem with Stochastic Demands (VRPSD). As stochastic routing problems are NP-hard combinatorial optimization problems, quite a lot of research has been devoted to the development of metaheuristic methods to tackle them, so a literature review is also given. Metaheuristic methods are approximate methods which combine basic heuristic methods in a higher-level framework aimed at efficiently exploring a search space. Chapter 2 introduces the concept of a metaheuristic, gives some criteria for the classification of metaheuristics, and then describes today's most important metaheuristics. The second part of the thesis (Chapters 3 and 4) summarizes the research results of the Metaheuristics Network on the development and comparison of metaheuristics to tackle the VRPSD. The Metaheuristics Network aims at the comparison of metaheuristics on different combinatorial optimization problems. For each combinatorial optimization problem considered, five metaheuristics are implemented by different people at the different sites involved in the Metaheuristics Network. The five metaheuristics considered are: Ant Colony Optimization, Evolutionary Computation, Iterated Local Search, Tabu Search, and Simulated Annealing. 
The comparison of the experimental results of the implemented metaheuristics [11] will be presented at the 8th International Conference on Parallel Problem Solving from Nature (PPSN VIII) that will be held in Birmingham, UK, on 18-22 September 2004 and will be published in a forthcoming volume of the series Lecture Notes in Computer Science.

Journal ArticleDOI
TL;DR: An extended random walk is constructed and used to show that fitness threshold accepting is optimal also for several other measures of algorithm performance, such as maximizing the expected probability of seeing the ground state and minimizing the expected value of the lowest energy seen.
Abstract: We treat the problem of selecting the next degree of freedom for update in an extremal optimization algorithm designed to find the ground state of a system with a complex energy landscape. We show that there exists a best distribution for selecting the next degree of freedom in order to optimize any linear function of the state probabilities, e.g., the expected number of visits to the ground state. We dub the class of algorithms using this best distribution in conjunction with extremal optimization fitness threshold accepting. In addition, we construct an extended random walk and use it to show that fitness threshold accepting is optimal also for several other measures of algorithm performance, such as maximizing the expected probability of seeing the ground state and minimizing the expected value of the lowest energy seen.
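The selection rule can be sketched as follows; the cutoff placement is an assumption for illustration, since the optimal threshold derived in the paper depends on the performance measure being optimized. In contrast to tau-EO's power-law over ranks, fitness threshold accepting selects uniformly among all degrees of freedom whose fitness falls below the threshold:

```python
import random

def select_dof(fitnesses, cutoff_frac=0.25):
    """Fitness-threshold-accepting selection sketch: rank the degrees of
    freedom worst-first (lower fitness = worse, as in EO) and pick
    uniformly at random among the worst `cutoff_frac` fraction, i.e. a
    rectangular distribution over ranks with a fitness cutoff.
    `cutoff_frac` is a hypothetical threshold, not the paper's value."""
    n = len(fitnesses)
    ranked = sorted(range(n), key=lambda i: fitnesses[i])  # worst first
    k = max(1, int(cutoff_frac * n))
    return random.choice(ranked[:k])
```

Replacing tau-EO's power-law rank weights with this flat-with-cutoff distribution is the entire change; the rest of the EO loop (unconditional update of the selected degree of freedom) is untouched.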

Book ChapterDOI
06 Jun 2004
TL;DR: The convergence of a Monte Carlo (MC) method for Combinatorial Optimization Problems (COPs) with Additional Reinforcement (ACO-AR) of the pheromone to the unused movements is proved.
Abstract: In this paper we prove the convergence of a Monte Carlo (MC) method for Combinatorial Optimization Problems (COPs). Ant Colony Optimization (ACO) is an MC method created to solve COPs efficiently, and ACO algorithms are being applied successfully to diverse hard problems. To show that ACO algorithms could be good alternatives to existing algorithms for hard combinatorial optimization problems, recent research in this area has mainly focused on the development of algorithmic variants which achieve better performance than previous ones. In this paper we present an ACO algorithm with Additional Reinforcement (ACO-AR) of the pheromone on the unused movements. ACO-AR differs from ACO algorithms in several important aspects, and we prove its convergence.

Journal ArticleDOI
TL;DR: The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find extrema of a function defined in most cases on a finite set.
Abstract: The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find the extrema of a function defined, in most cases, on a finite set. While in combinatorial optimization the emphasis is on developing efficient algorithms and heuristics for solving specified types of problems, extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions between the two theories and propose some conclusions.

Journal ArticleDOI
01 May 2004-EPL
TL;DR: It is shown that if one wishes to minimize any linear function of the state probabilities, e.g. the final energy, then the best distribution for selecting the next DoF is a rectangular distribution with a cutoff for the fitness; the family of algorithms using rectangular distributions in combination with extremal optimization is dubbed Fitness Threshold Accepting.
Abstract: We consider the problem of selecting the next degree of freedom (DoF) for update in an extremal optimization algorithm designed to find the ground state of a system with a complex energy landscape. We show that if we wish to minimize any linear function of the state probabilities, e.g. the final energy, then the best distribution for selecting the next DoF is a rectangular distribution with a cutoff for the fitness. We dub the family of algorithms using rectangular distributions in combination with extremal optimization Fitness Threshold Accepting.

Proceedings ArticleDOI
14 Sep 2004
TL;DR: GSO algorithm is essentially a population-based heuristic search technique which can be used to solve combinatorial optimization problems, modeled on the concept of natural selection but also based on cultural and social evolution.
Abstract: This paper presents a new hybrid evolutionary algorithm combining Particle Swarm Optimization and Genetic Algorithms, called GSO (Genetical Swarm Optimization). GSO algorithm is essentially a population-based heuristic search technique which can be used to solve combinatorial optimization problems, modeled on the concept of natural selection but also based on cultural and social evolution. Numerical results and comparison of the different techniques are presented for an electromagnetic optimization problem.

Journal ArticleDOI
TL;DR: Several popular heuristic algorithms, namely: descent local search, simulated annealing, tabu search, genetic algorithms, ant algorithms, and iterated local search are discussed, and the unified paradigms of these heuristics are given.
Abstract: Heuristic algorithms (or simply heuristics) are methods that seek high-quality solutions within a reasonable (limited) amount of time without being able to guarantee optimality. They often come about as a result of imitation of the real world (physics, nature, biology, etc.). In this paper, we give an overview of some heuristic algorithms for combinatorial optimization problems. At the beginning, some definitions related to combinatorial optimization, as well as the principle (framework) and basic features of heuristics for combinatorial problems, are presented. Then, several popular heuristic algorithms are discussed, namely: descent local search, simulated annealing, tabu search, genetic algorithms, ant algorithms, and iterated local search. The unified paradigms of these heuristics are given. Finally, we present some results of comparisons of these algorithms on the well-known combinatorial problem, the quadratic assignment problem.