
Showing papers on "Metaheuristic published in 2003"


Journal ArticleDOI
TL;DR: A conceptual survey of the most important current metaheuristics that introduces a framework, called the I&D frame, for relating different intensification and diversification components to one another.
Abstract: The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of the most important metaheuristics in use today from a conceptual point of view. We outline the different components and concepts that are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the two forces that largely determine the behavior of a metaheuristic. They are in some way contrary, but also complementary, to each other. We introduce a framework, which we call the I&D frame, in order to relate different intensification and diversification components to one another. Outlining the advantages and disadvantages of different metaheuristic approaches, we conclude by pointing out the importance of hybridizing metaheuristics as well as integrating metaheuristics with other optimization methods.

3,287 citations


Journal ArticleDOI
TL;DR: The particle swarm optimization algorithm is analyzed using standard results from dynamic system theory, and graphical parameter-selection guidelines are derived, yielding results superior to those previously published.

2,554 citations


Book
01 Jan 2003
TL;DR: This book discusses metaheuristic class libraries, hyper-heuristics, and artificial neural networks for combinatorial optimization, all concerned with metaheuristic algorithms and their applications in search technology.
Abstract: List of Contributing Authors. Preface. 1. Scatter Search and Path Relinking: Advances and Applications F. Glover, et al. 2. An Introduction to Tabu Search M. Gendreau. 3. Genetic Algorithms C. Reeves. 4. Genetic Programming. 5. A Gentle Introduction to Memetic Algorithms P. Moscato, C. Cotta. 6. Variable Neighborhood Search P. Hansen, N. Mladenovic. 7. Guided Local Search C. Voudouris, E. Tsang. 8. Greedy Randomized Adaptive Search Procedures M. Resende, C. Ribeiro. 9. The Ant Colony Optimization Metaheuristic: Algorithms, Applications, and Advances M. Dorigo, T. Stutzle. 10. The Theory and Practice of Simulated Annealing D. Henderson, et al. 11. Iterated Local Search H. Lourenco, et al. 12. Multi-Start Methods R. Marti. 13. Local Search and Constraint Programming F. Focacci, et al. 14. Constraint Satisfaction E. Freuder, M. Wallace. 15. Artificial Neural Networks for Combinatorial Optimization J.-Y. Potvin, K. Smith. 16. Hyper-Heuristics: An Emerging Direction in Modern Search Technology E. Burke, et al. 17. Parallel Strategies for Meta-Heuristics T.G. Crainic, M. Toulouse. 18. Metaheuristic Class Libraries A. Fink, et al. 19. Asynchronous Teams S. Talukdar, et al. Index.

2,284 citations


Book ChapterDOI
01 Jan 2003
TL;DR: Iterated Local Search (ILS) as mentioned in this paper is a general purpose metaheuristic for finding good solutions of combinatorial optimization problems, which is based on building a sequence of (locally optimal) solutions by perturbing the current solution and applying local search to that modified solution.
Abstract: This is a survey of "Iterated Local Search", a general purpose metaheuristic for finding good solutions of combinatorial optimization problems. It is based on building a sequence of (locally optimal) solutions by: (1) perturbing the current solution; (2) applying local search to that modified solution. At a high level, the method is simple, yet it allows for a detailed use of problem-specific properties. After giving a general framework, we cover the uses of Iterated Local Search on a number of well studied problems.
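The perturb-then-descend loop described above is compact enough to sketch. Below is a minimal, hedged illustration in Python on a toy integer objective; the objective, the neighborhood, and the kick strength are illustrative assumptions, not taken from the survey.

```python
import random

def iterated_local_search(f, x0, perturb, local_search, iters=100):
    # ILS skeleton: perturb the incumbent, re-optimize locally,
    # and keep the better of the two locally optimal solutions.
    best = local_search(f, x0)
    for _ in range(iters):
        candidate = local_search(f, perturb(best))
        if f(candidate) < f(best):  # "accept if better" criterion
            best = candidate
    return best

def hill_climb(f, x):
    # Descent to a local optimum on the integer line.
    while True:
        nbr = min((x - 1, x + 1), key=f)
        if f(nbr) >= f(x):
            return x
        x = nbr

# Toy multimodal objective; the perturbation strength is an arbitrary choice.
f = lambda x: (x - 37) ** 2 + 10 * (x % 7)
perturb = lambda x: x + random.randint(-10, 10)
print(iterated_local_search(f, 0, perturb, hill_climb))
```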

969 citations


Book ChapterDOI
01 Jan 2003
TL;DR: The field of ACO algorithms is very lively, as testified, for example, by the successful biannual workshop (ANTS—From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms; http://iridia.ulb.ac.be/~ants/) where researchers meet to discuss the properties ofACO and other ant algorithms.
Abstract: The field of ACO algorithms is very lively, as testified, for example, by the successful biannual workshop (ANTS—From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms; http://iridia.ulb.ac.be/~ants/) where researchers meet to discuss the properties of ACO and other ant algorithms, both theoretically and experimentally.

890 citations


Book
08 Aug 2003
TL;DR: In this article, the authors discuss the principles of multiobjective optimization methods and the criteria for choice of a method, and evaluate the performance of methods and their performance measurement criteria.
Abstract: I. Principles of multiobjective optimization methods: 1. Introduction: multiobjective optimization and domination. 2. Scalar methods. 3. Interactive methods. 4. Fuzzy methods. 5. Multiobjective methods using metaheuristics. 6. Decision aid methods. II. Evaluation of methods, and criteria for choice of method: 7. Performance measurement. 8. Test functions for multiobjective optimization methods. 9. An attempt to classify multiobjective optimization methods. III. Case studies: 10. Case study 1: qualification of scientific software. 11. Case study 2: study of the extension of a telecommunication network. 12. Case study 3: multicriteria decision tools to deal with bids. 13. Conclusion. References.

684 citations


Proceedings ArticleDOI
24 Apr 2003
TL;DR: This method combines the traditional velocity and position update rules with Gaussian mutation, and achieves better results than GA or PSO alone.
Abstract: In this paper we present particle swarm optimization with Gaussian mutation, combining the idea of the particle swarm with concepts from evolutionary algorithms. The method combines the traditional velocity and position update rules with Gaussian mutation. The model is tested and compared with standard PSO and a standard GA. The comparative experiments were conducted on unimodal and multimodal functions, and PSO with Gaussian mutation obtains results superior to GA. We also apply PSO with Gaussian mutation to a gene network, where it achieves better results than either GA or PSO alone.
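As a rough illustration of the mechanism the abstract describes, the sketch below combines the standard velocity and position update rules with an occasional Gaussian perturbation. The inertia and acceleration coefficients, the mutation rate, and the mutation scale are assumed values for illustration, not the paper's settings.

```python
import random

def pso_gaussian(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                 p_mut=0.1, sigma=0.1, lo=-5.0, hi=5.0):
    # Initialize positions, velocities, personal bests and the global best.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    g = min(P, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Standard velocity and position update rules.
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
                # Gaussian mutation applied with small probability (assumed rate/scale).
                if random.random() < p_mut:
                    X[i][d] += random.gauss(0.0, sigma * (hi - lo))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)  # a unimodal test function
print(pso_gaussian(sphere))
```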

553 citations


Proceedings ArticleDOI
24 Apr 2003
TL;DR: A particle swarm optimization toolbox for use with the Matlab scientific programming environment has been developed and PSO is introduced briefly and the use of the toolbox is explained with some examples.
Abstract: A particle swarm optimization toolbox (PSOt) for use with the Matlab scientific programming environment has been developed. PSO is introduced briefly and then the use of the toolbox is explained with some examples. A link to downloadable code is provided.

504 citations


Book ChapterDOI
01 Jan 2003
TL;DR: This chapter presents practical guidelines for the implementation of simulated annealing in terms of cooling schedules, neighborhood functions, and appropriate applications, as well as recent advances in the analysis of finite time performance.
Abstract: Simulated annealing is a popular local search meta-heuristic used to address discrete and, to a lesser extent, continuous optimization problems. The key feature of simulated annealing is that it provides a means to escape local optima by allowing hill-climbing moves (i.e., moves which worsen the objective function value) in hopes of finding a global optimum. A brief history of simulated annealing is presented, including a review of its application to discrete and continuous optimization problems. Convergence theory for simulated annealing is reviewed, as well as recent advances in the analysis of finite time performance. Other local search algorithms are discussed in terms of their relationship to simulated annealing. The chapter also presents practical guidelines for the implementation of simulated annealing in terms of cooling schedules, neighborhood functions, and appropriate applications.
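A minimal sketch of the two ingredients named above, Metropolis-style acceptance of hill-climbing moves and a cooling schedule, is shown below. The geometric schedule, initial temperature, and step size are illustrative assumptions; the chapter itself surveys many alternatives.

```python
import math, random

def simulated_annealing(f, x0, neighbor, T0=1.0, alpha=0.95, steps=5000):
    # Metropolis acceptance with a geometric cooling schedule
    # (one common choice among the many schedules surveyed).
    x, T = x0, T0
    best = x
    for _ in range(steps):
        y = neighbor(x)
        delta = f(y) - f(x)
        # Always accept improving moves; accept worsening ("hill-climbing")
        # moves with probability exp(-delta / T).
        if delta <= 0 or random.random() < math.exp(-delta / T):
            x = y
        if f(x) < f(best):
            best = x
        T *= alpha  # cool down
    return best

f = lambda x: x * x + 10 * math.sin(3 * x)          # toy multimodal objective
neighbor = lambda x: x + random.uniform(-0.5, 0.5)  # step size is an assumption
print(simulated_annealing(f, 5.0, neighbor))
```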

481 citations


Book ChapterDOI
01 Jan 2003
TL;DR: The generic denomination 'Memetic Algorithms' (MAs) is used to encompass a broad class of metaheuristics (i.e., general-purpose methods aimed at guiding an underlying heuristic), which have proved practically successful in a variety of problem domains, in particular for the approximate solution of NP optimization problems.
Abstract: The generic denomination of 'Memetic Algorithms' (MAs) is used to encompass a broad class of metaheuristics (i.e., general-purpose methods aimed at guiding an underlying heuristic). The method is based on a population of agents and has proved practically successful in a variety of problem domains, in particular for the approximate solution of NP optimization problems. Unlike traditional Evolutionary Computation (EC) methods, MAs are intrinsically concerned with exploiting all available knowledge about the problem under study. The incorporation of problem domain knowledge is not an optional mechanism, but a fundamental feature that characterizes MAs. This functioning philosophy is perfectly illustrated by the term "memetic". Coined by R. Dawkins [52], the word 'meme' denotes an analog of the gene in the context of cultural evolution [154]. In Dawkins' words:
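To make the "evolutionary loop plus individual learning" structure concrete, here is a minimal memetic sketch: a small genetic loop whose offspring are refined by a cheap local search. All operators and parameters are illustrative assumptions, not a prescription from the chapter.

```python
import math, random

def memetic(f, dim=5, pop=20, gens=100, lo=-5.0, hi=5.0):
    # Minimal memetic loop: GA-style variation followed by a local-search
    # refinement ("individual learning") step on each offspring.
    def local_search(x, step=0.1, tries=10):
        for _ in range(tries):
            d = random.randrange(dim)
            for delta in (-step, step):
                y = x[:]
                y[d] += delta
                if f(y) < f(x):
                    x = y
        return x

    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        children = []
        while len(children) < pop // 2:
            a, b = random.sample(P[:pop // 2], 2)                # mate the fitter half
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child = [v + random.gauss(0, 0.05) for v in child]   # Gaussian mutation
            children.append(local_search(child))                 # memetic refinement
        P = P[:pop // 2] + children                              # elitist replacement
    return min(P, key=f)

rastrigin = lambda x: 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
print(memetic(rastrigin))
```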

479 citations


Journal ArticleDOI
TL;DR: The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer.
Abstract: Gerhard Venter (Vanderplaats Research and Development, Inc., 1767 S 8th Street, Suite 100, Colorado Springs, CO 80906) and Jaroslaw Sobieszczanski-Sobieski (NASA Langley Research Center, MS 240, Hampton, VA 23681-2199). The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.

Proceedings ArticleDOI
10 Nov 2003
TL;DR: A hybrid particle swarm with a differential evolution operator, termed DEPSO, is proposed; it provides bell-shaped mutations informed by the population diversity as evolution proceeds, while keeping the self-organized particle swarm dynamics.
Abstract: A hybrid particle swarm with a differential evolution operator, termed DEPSO, is proposed; it provides bell-shaped mutations informed by the population diversity as evolution proceeds, while keeping the self-organized particle swarm dynamics. It is then applied to a set of benchmark functions, and the experimental results illustrate its efficiency.
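The following sketch illustrates one plausible reading of such a hybrid: particle swarm updates alternating with a DE-style mutation of the personal bests, where averaging two difference vectors yields an approximately bell-shaped perturbation. The alternation scheme and all constants are assumptions, not the authors' exact formulation.

```python
import random

def depso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, CR=0.9):
    # Hybrid sketch: even iterations use a canonical PSO update; odd
    # iterations mutate candidates with a DE-style difference vector.
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    g = min(P, key=f)[:]
    for t in range(iters):
        for i in range(n):
            if t % 2 == 0:  # PSO move
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * random.random() * (P[i][d] - X[i][d])
                               + c2 * random.random() * (g[d] - X[i][d]))
                    X[i][d] += V[i][d]
            else:           # DE-style mutation around the global best
                a, b, c, e = random.sample(range(n), 4)
                trial = P[i][:]
                for d in range(dim):
                    if random.random() < CR:
                        # Averaging two difference vectors gives an
                        # approximately bell-shaped perturbation.
                        trial[d] = g[d] + ((P[a][d] - P[b][d]) + (P[c][d] - P[e][d])) / 2.0
                X[i] = trial
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

print(depso(lambda x: sum(v * v for v in x)))
```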

Journal Article
TL;DR: A theorem is presented showing that the maximization of this scalar value constitutes a necessary and sufficient condition for the function's arguments to be maximally diverse Pareto optimal solutions of a discrete multi-objective optimization problem.
Abstract: This article describes a set function that maps a set of Pareto optimal points to a scalar. A theorem is presented showing that the maximization of this scalar value constitutes a necessary and sufficient condition for the function's arguments to be maximally diverse Pareto optimal solutions of a discrete multi-objective optimization problem. This scalar quantity, a hypervolume based on a Lebesgue measure, is therefore the best metric to assess the quality of multiobjective optimization algorithms. Moreover, it can be used as the objective function in simulated annealing (SA) to induce convergence in probability to the Pareto optima. An efficient, polynomial-time algorithm for calculating this scalar and an analysis of its complexity are also presented.
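For the two-objective case the hypervolume reduces to a simple sweep over the sorted front, as in the sketch below (both objectives minimized, reference point assumed); the paper's polynomial-time algorithm handles the general case.

```python
def hypervolume_2d(points, ref):
    # Hypervolume (Lebesgue measure) dominated by a 2-D point set,
    # relative to reference point `ref`; both objectives minimized.
    pts = sorted(points)          # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:          # skip points dominated in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1, 4), (2, 2), (4, 1)]
print(hypervolume_2d(front, ref=(5, 5)))  # 4 + 6 + 1 = 11
```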

Proceedings ArticleDOI
01 Jan 2003
TL;DR: This paper develops special methods for solving the TSP using PSO, proposing the concepts of the swap operator and swap sequence and redefining some operators on their basis to design a special PSO.
Abstract: This paper proposes a new application of particle swarm optimization to the traveling salesman problem. We have developed some special methods for solving the TSP using PSO, proposing the concepts of the swap operator and swap sequence and redefining some operators on their basis; in this way we design a special PSO. The experiments show that it can achieve good results.
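A minimal sketch of the swap-operator idea: a "velocity" becomes a sequence of transpositions that turns one tour into another, and applying each swap with some probability plays the role of a velocity coefficient. Function names and the application rate are illustrative assumptions.

```python
import random

def swap_sequence(src, dst):
    # Velocity analogue: the list of swaps transforming tour `src` into `dst`.
    s, swaps = src[:], []
    for i in range(len(s)):
        if s[i] != dst[i]:
            j = s.index(dst[i])
            swaps.append((i, j))
            s[i], s[j] = s[j], s[i]
    return swaps

def apply_swaps(tour, swaps, rate=1.0):
    # Each swap fires with some probability, mimicking a velocity coefficient.
    t = tour[:]
    for i, j in swaps:
        if random.random() < rate:
            t[i], t[j] = t[j], t[i]
    return t

tour = [0, 2, 3, 1, 4]
best = [0, 1, 2, 3, 4]
v = swap_sequence(tour, best)   # [(1, 3), (2, 3)]
print(apply_swaps(tour, v))     # with rate=1.0, recovers `best`
```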

Proceedings ArticleDOI
24 Apr 2003
TL;DR: The paper presents a modified particle swarm optimization (PSO) algorithm for engineering optimization problems with constraints and shows that PSO is an efficient and general approach for solving most nonlinear optimization problems with inequality constraints.
Abstract: The paper presents a modified particle swarm optimization (PSO) algorithm for engineering optimization problems with constraints. PSO is started with a group of feasible solutions, and a feasibility function is used to check whether newly explored solutions satisfy all the constraints. All the particles keep only feasible solutions in their memory. Several engineering design optimization problems were tested, and the results show that PSO is an efficient and general approach for solving most nonlinear optimization problems with inequality constraints.
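The memory rule described above is easy to isolate: a particle's pBest changes only when a new point is both feasible and better. A hedged sketch, with an assumed toy objective and constraint:

```python
def update_memory(p_best, candidate, f, feasible):
    # Feasibility-preserving pBest rule (sketch): the memory is replaced
    # only when the new point satisfies every constraint and improves f.
    if feasible(candidate) and f(candidate) < f(p_best):
        return candidate
    return p_best

# Example: minimize f subject to g(x) <= 0 (an assumed toy constraint).
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
feasible = lambda x: x[0] + x[1] - 2 <= 0
print(update_memory([0.0, 0.0], [1.2, 0.8], f, feasible))  # feasible & better -> adopted
```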

Journal ArticleDOI
TL;DR: A hybrid method combining two algorithms, called the continuous hybrid algorithm (CHA), is proposed for the global optimization of multiminima functions; it performs exploration with a GA and exploitation with a Nelder–Mead simplex search, and its results are compared with those supplied by other competitive methods.
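A rough sketch of that exploration/exploitation split, assuming SciPy is available for the simplex step: a crude GA narrows the search region, then Nelder-Mead polishes the elite. The GA operators and all parameters are illustrative stand-ins, not the CHA's actual components.

```python
import random
from scipy.optimize import minimize

def continuous_hybrid(f, dim=2, pop=30, gens=50, lo=-5, hi=5):
    # Phase 1 (exploration): a crude GA with arithmetic crossover + mutation.
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        P = P[:pop // 2] + [[(a + b) / 2 + random.gauss(0, 0.3)
                             for a, b in zip(*random.sample(P[:pop // 2], 2))]
                            for _ in range(pop - pop // 2)]
    elite = min(P, key=f)
    # Phase 2 (exploitation): Nelder-Mead simplex search from the GA elite.
    return minimize(f, elite, method='Nelder-Mead').x

print(continuous_hybrid(lambda x: sum(v * v for v in x)))
```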

Journal ArticleDOI
Olli Bräysy1
TL;DR: The findings indicate that the proposed procedure outperforms other recent local searches and metaheuristics and the best solution obtained is improved by modifying the objective function to escape from a local minimum.
Abstract: The purpose of this paper is to present a new deterministic metaheuristic based on a modification of the variable neighborhood search of Mladenovic and Hansen (1997) for solving the vehicle-routing problem with time windows. Results are reported for the standard 100, 200, and 400 customer data sets by Solomon (1987) and Gehring and Homberger (1999), and two real-life problems by Russell (1995). The findings indicate that the proposed procedure outperforms other recent local searches and metaheuristics. In addition, four new best-known solutions were obtained. The proposed procedure is based on a new four-phase approach. In this approach an initial solution is first created using new route-construction heuristics followed by a route-elimination procedure to improve the solutions regarding the number of vehicles. In the third phase the solutions are improved in terms of total traveled distance using four new local-search procedures proposed in this paper. Finally, in phase four, the best solution obtained is improved by modifying the objective function to escape from a local minimum.
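The variable neighborhood search skeleton that the paper modifies can be sketched as below: shake in neighborhood k, descend, recentre on improvement, otherwise widen the neighborhood. The toy objective and the two shake operators are assumptions for illustration.

```python
import random

def descend(f, x):
    # Local search: steepest descent on the integer line.
    while True:
        y = min((x - 1, x + 1), key=f)
        if f(y) >= f(x):
            return x
        x = y

def vns(f, x0, shakes, iters=100):
    # Basic VNS: shake in neighborhood k, descend, restart from k = 0 on
    # improvement, otherwise escalate to a wider neighborhood.
    x = descend(f, x0)
    for _ in range(iters):
        k = 0
        while k < len(shakes):
            y = descend(f, shakes[k](x))
            if f(y) < f(x):
                x, k = y, 0
            else:
                k += 1
    return x

f = lambda x: abs(x - 42) + 5 * (x % 5)            # toy objective with many local optima
shakes = [lambda x: x + random.randint(-3, 3),     # N1: small kick
          lambda x: x + random.randint(-15, 15)]   # N2: larger kick
print(vns(f, 0, shakes))
```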

Journal ArticleDOI
TL;DR: In this article, a Tabu search meta-heuristic has been developed and successfully demonstrated to provide solutions to the system reliability optimization problem of redundancy allocation, which generally involves the selection of components and redundancy levels to maximize system reliability given various system-level constraints.
Abstract: A tabu search meta-heuristic has been developed and successfully demonstrated to provide solutions to the system reliability optimization problem of redundancy allocation. Tabu search is particularly well-suited to this problem and it offers distinct advantages compared to alternative optimization methods. While there are many forms of the problem, the redundancy allocation problem generally involves the selection of components and redundancy levels to maximize system reliability given various system-level constraints. This is a common and extensively studied problem involving system design, reliability engineering and operations research. It is becoming increasingly important to develop efficient solutions to this reliability optimization problem because many telecommunications (and other) systems are becoming more complex, yet with short development schedules and very stringent reliability requirements. Tabu search can be applied to a more diverse problem domain compared to mathematical programming methods.
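The core tabu mechanism, always moving to the best admissible neighbor while a short-term list forbids recently visited solutions, can be sketched as below on an assumed toy 0/1 vector; real redundancy-allocation encodings, aspiration criteria, and long-term memory are omitted.

```python
def tabu_search(f, x0, neighbors, tenure=5, iters=200):
    # Core tabu loop: move to the best non-tabu neighbor (even if worse),
    # keeping a short-term memory of recently visited solutions.
    x, best = x0, x0
    tabu = []
    for _ in range(iters):
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:
            break
        x = min(candidates, key=f)   # best admissible move
        tabu.append(x)
        if len(tabu) > tenure:
            tabu.pop(0)              # expire the oldest entry
        if f(x) < f(best):
            best = x
    return best

# Toy 0/1 vector; flip one bit per move. The objective is an assumed stand-in.
f = lambda x: abs(sum(x) - 3) + x[0]
neighbors = lambda x: [tuple((b ^ 1) if j == i else b for j, b in enumerate(x))
                       for i in range(len(x))]
print(tabu_search(f, (0, 0, 0, 0, 0), neighbors))
```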

BookDOI
01 Jan 2003
Abstract: Simulation-Based Optimization.

Proceedings ArticleDOI
24 Apr 2003
TL;DR: This paper presents a modified dynamic neighborhood particle swarm optimization (DNPSO) algorithm that is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives.
Abstract: This paper presents a modified dynamic neighborhood particle swarm optimization (DNPSO) algorithm for multiobjective optimization problems. PSO is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives. An extended memory is introduced to store global Pareto optimal solutions to reduce computation time. Several benchmark cases were tested and the results show that the modified DNPSO is much more efficient than the original DNPSO and other multiobjective optimization techniques.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: Two test problems on multiobjective optimization (one simple general problem and a second on an engineering application, the cantilever design problem) are solved using differential evolution (DE), an improved version of the genetic algorithm.
Abstract: Two test problems on multiobjective optimization (one simple general problem, and a second on an engineering application, the cantilever design problem) are solved using differential evolution (DE). DE is a population-based search algorithm, an improved version of the genetic algorithm (GA). The simulations involved solving (1) both problems using the penalty function method, and (2) the first problem using the weighting factor method to find the Pareto optimal set; DE was found to be robust and faster in optimization. To consolidate the power of DE, the classical Himmelblau function, with bounds on the variables, is also solved using both DE and GA. DE was found to give the exact optimum value in fewer generations than the simple GA.
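For reference, a canonical DE/rand/1/bin sketch applied to the Himmelblau function mentioned above; the control parameters F and CR are common defaults assumed here, not necessarily the paper's settings.

```python
import random

def differential_evolution(f, dim=2, n=20, gens=200, F=0.5, CR=0.9, lo=-5, hi=5):
    # Canonical DE/rand/1/bin: mutate with a scaled difference vector,
    # binomially cross with the parent, keep the better of the two.
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(gens):
        for i in range(n):
            a, b, c = random.sample([j for j in range(n) if j != i], 3)
            jrand = random.randrange(dim)  # guarantees at least one mutated gene
            trial = [P[a][d] + F * (P[b][d] - P[c][d])
                     if (random.random() < CR or d == jrand) else P[i][d]
                     for d in range(dim)]
            if f(trial) <= f(P[i]):        # greedy one-to-one selection
                P[i] = trial
    return min(P, key=f)

himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
print(differential_evolution(himmelblau))  # approaches one of the four global minima (f = 0)
```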

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The results show that mutation hinders the motion of the swarm on the sphere but the combination of CPSO with mutation provides a significant improvement in performance for the Rastrigin and Rosenbrock functions for all dimensions and the Ackley function for dimensions 20 and 30, with no improvement for the 10 dimensional case.
Abstract: The particle swarm optimization algorithm converges rapidly during the initial stages of a search, but often slows considerably and can get trapped in local optima. This paper examines the use of mutation both to speed up convergence and to escape local minima. It compares the effectiveness of the basic particle swarm optimization scheme (BPSO) with each of BPSO with mutation, constriction particle swarm optimization (CPSO) with mutation, and CPSO without mutation. The four test functions used were the Sphere, Ackley, Rastrigin and Rosenbrock functions of dimensions 10, 20 and 30. The results show that mutation hinders the motion of the swarm on the Sphere function, but the combination of CPSO with mutation provides a significant improvement in performance for the Rastrigin and Rosenbrock functions in all dimensions, and for the Ackley function in dimensions 20 and 30, with no improvement in the 10-dimensional case.
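CPSO refers to the constriction-factor variant of PSO; a sketch of the Clerc-Kennedy coefficient, with the commonly assumed phi = c1 + c2 = 4.1, is given below.

```python
import math

def constriction(c1=2.05, c2=2.05):
    # Clerc-Kennedy constriction factor: chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|
    # for phi = c1 + c2 > 4; with phi = 4.1 this gives chi ~= 0.7298.
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()
print(round(chi, 4))  # 0.7298
# Velocity update then becomes:
#   v = chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
```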

Journal ArticleDOI
TL;DR: A tabu-embedded simulated annealing algorithm that restarts a search procedure, yielding a metaheuristic for the pickup and delivery problem with time windows.
Abstract: In this paper, we propose a metaheuristic to solve the pickup and delivery problem with time windows. Our approach is a tabu-embedded simulated annealing algorithm which restarts a search procedure...


Proceedings ArticleDOI
07 Dec 2003
TL;DR: This paper summarizes some of the most relevant approaches that have been developed for the purpose of optimizing simulated systems and focuses on the metaheuristic black-box approach that leads the field of practical applications.
Abstract: The merging of optimization and simulation technologies has seen a rapid growth in recent years. A Google search on "Simulation Optimization" returns more than six thousand pages where this phrase appears. The content of these pages ranges from articles, conference presentations and books to software, sponsored work and consultancy. This is an area that has sparked as much interest in the academic world as in practical settings. In this paper, we first summarize some of the most relevant approaches that have been developed for the purpose of optimizing simulated systems. We then concentrate on the metaheuristic black-box approach that leads the field of practical applications and provide some relevant details of how this approach has been implemented and used in commercial software. Finally, we present an example of simulation optimization in the context of a simulation model developed to predict performance and measure risk in a real world project selection problem.
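The black-box pattern described here is simple to sketch: the optimizer sees only parameter-to-estimate pairs, while the simulation averages replications to tame noise. Random search stands in for the metaheuristic, and the toy "simulation" below is an assumption.

```python
import random

def simulate(params, replications=10):
    # Stand-in stochastic simulation: returns a noisy cost estimate.
    # A real model would be a discrete-event simulation.
    x, y = params
    return sum((x - 3) ** 2 + (y - 1) ** 2 + random.gauss(0, 0.5)
               for _ in range(replications)) / replications

def black_box_optimize(evaluate, iters=200):
    # The optimizer only sees parameter -> estimate pairs, never model internals.
    best_p, best_v = None, float('inf')
    for _ in range(iters):
        p = (random.uniform(0, 6), random.uniform(0, 6))
        v = evaluate(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v

print(black_box_optimize(simulate))
```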

Book ChapterDOI
01 Jan 2003
TL;DR: This chapter suggests a classification of dynamic optimization problems, and survey and classify a number of the most widespread techniques that have been published in the literature so far to make evolutionary algorithms suitable for changing optimization problems.
Abstract: Most research in evolutionary computation focuses on optimization of static, non-changing problems. Many real-world optimization problems, however, are dynamic, and optimization methods are needed that are capable of continuously adapting the solution to a changing environment. If the optimization problem is dynamic, the goal is no longer to find the extrema, but to track their progression through the space as closely as possible. In this chapter, we suggest a classification of dynamic optimization problems, and survey and classify a number of the most widespread techniques that have been published in the literature so far to make evolutionary algorithms suitable for changing optimization problems. After this introduction to the basics, we will discuss in more detail two specific approaches, pointing out their deficiencies and potential. The first approach is based on memorization, the other one uses a novel multi-population structure.
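A hedged sketch of the memorization idea discussed here: archive recent elites and re-inject them when the environment changes. The drifting toy objective, the assumption that change times are known, and the archive size are all illustrative choices, not the chapter's specific algorithms.

```python
import random

def dynamic_ea(f_of_t, dim=2, pop=20, gens=100, change_every=25):
    # Memory-based EA sketch for a drifting objective: archive elites and
    # re-inject them whenever the environment is known to have changed.
    P = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    memory = []
    for g in range(gens):
        f = lambda x, g=g: f_of_t(x, g)               # objective at time g
        if g % change_every == 0 and memory:
            P[-len(memory):] = [m[:] for m in memory]  # reseed from memory
        P.sort(key=f)
        memory = (memory + [P[0][:]])[-5:]             # keep the five latest elites
        # Truncation selection plus Gaussian mutation of the survivors.
        P = P[:pop // 2] + [[v + random.gauss(0, 0.2) for v in x]
                            for x in P[:pop // 2]]
    return P[0]

moving = lambda x, g: sum((v - g / 25.0) ** 2 for v in x)  # optimum drifts over time
print(dynamic_ea(moving))
```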

Proceedings ArticleDOI
24 Apr 2003
TL;DR: A modified particle swarm optimizer which deals with permutation problems and preliminary study on the n-queens problem shows that the modified PSO is promising in solving constraint satisfaction problems.
Abstract: This paper introduces a modified particle swarm optimizer which deals with permutation problems. Particles are defined as permutations of a group of unique values. Velocity updates are redefined based on the similarity of two particles. Particles change their permutations with a random rate defined by their velocities. A mutation factor is introduced to prevent the current pBest from becoming stuck at local minima. Preliminary study on the n-queens problem shows that the modified PSO is promising in solving constraint satisfaction problems.

01 Jan 2003
TL;DR: This paper studies a parallel version of the Vector Evaluated Particle Swarm Optimization (VEPSO) method for multiobjective problems, investigating both the efficiency and the advantages of the parallel implementation.
Abstract: This paper studies a parallel version of the Vector Evaluated Particle Swarm Optimization (VEPSO) method for multiobjective problems. Experiments on well-known and widely used test problems are performed, aiming to investigate both the efficiency of VEPSO and the advantages of the parallel implementation. The obtained results are compared with the corresponding results of the Vector Evaluated Genetic Algorithm approach, demonstrating the superiority of VEPSO.

Book
01 Jan 2003
TL;DR: An attempt is made to describe the theoretical properties of several stochastic adaptive search methods, which may allow us to better predict algorithm performance and ultimately design new and improved algorithms.
Abstract: The field of global optimization has been developing at a rapid pace. There is a journal devoted to the topic, as well as many publications and notable books discussing various aspects of global optimization. This book is intended to complement these other publications with a focus on stochastic methods for global optimization. Stochastic methods, such as simulated annealing and genetic algorithms, are gaining in popularity among practitioners and engineers because they are relatively easy to program on a computer and may be applied to a broad class of global optimization problems. However, the theoretical performance of these stochastic methods is not well understood. In this book, an attempt is made to describe the theoretical properties of several stochastic adaptive search methods. Such a theoretical understanding may allow us to better predict algorithm performance and ultimately design new and improved algorithms. This book consolidates a collection of papers on the analysis and development of stochastic adaptive search. The first chapter introduces random search algorithms. Chapters 2-5 describe the theoretical analysis of a progression of algorithms. A main result is that the expected number of iterations for pure adaptive search is linear in dimension for a class of Lipschitz global optimization problems. Chapter 6 discusses algorithms, based on the Hit-and-Run sampling method, that have been developed to approximate the ideal performance of pure random search. The final chapter discusses several applications in engineering that use stochastic adaptive search methods.

Journal ArticleDOI
TL;DR: The aim of this paper is to propose an adaptation of the variable neighborhood search method to the graph coloring problem.