
Showing papers on "Metaheuristic published in 2002"


Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.
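The particle swarm update underlying these paradigms can be sketched as follows. This is a minimal global-best variant with an inertia weight; the parameter values and function names are illustrative, not the authors' exact implementation:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity: inertia + pull toward own memory + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Example: sphere function, whose minimum is 0 at the origin.
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```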

35,104 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems and it maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight.
Abstract: This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and it maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight. The approach is validated using several standard test functions from the specialized literature. Our results indicate that our approach is highly competitive with current evolutionary multiobjective optimization techniques.
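The two ingredients the abstract describes, Pareto dominance and an external repository of nondominated vectors, can be sketched as follows (a minimal illustration; minimization of all objectives is assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repo, candidate):
    """Insert candidate into the external archive of nondominated vectors,
    dropping any archive members the candidate dominates."""
    if any(dominates(r, candidate) for r in repo):
        return repo                                  # candidate is dominated: reject
    repo = [r for r in repo if not dominates(candidate, r)]
    repo.append(candidate)
    return repo

repo = []
for v in [(2, 3), (1, 4), (3, 1), (2, 2), (5, 5)]:
    repo = update_repository(repo, v)
# repo now holds only mutually nondominated vectors
```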

1,842 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: The effects of various population topologies on the particle swarm algorithm were systematically investigated and it was discovered that previous assumptions may not have been correct.
Abstract: The effects of various population topologies on the particle swarm algorithm were systematically investigated. Random graphs were generated to specifications, and their performance on several criteria was compared. What makes a good population structure? We discovered that previous assumptions may not have been correct.
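One family of topologies studied in this line of work is the lbest ring, where each particle is informed only by a few neighbours rather than by the whole swarm. A minimal sketch (function names are mine, not the paper's):

```python
def neighborhood_best(i, pbest_val, k=1):
    """Index of the best personal best among particle i's ring
    neighbourhood (k neighbours on each side, plus i itself)."""
    n = len(pbest_val)
    ring = [(i + d) % n for d in range(-k, k + 1)]
    return min(ring, key=lambda j: pbest_val[j])

vals = [5.0, 1.0, 4.0, 0.0, 9.0]   # personal-best costs of 5 particles
# Particle 0 sees only particles 4, 0, 1 -- not the global best (particle 3),
# so information about good regions spreads more slowly than in gbest PSO.
lb = neighborhood_best(0, vals)
```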

1,589 citations


Journal ArticleDOI
TL;DR: A Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.
Abstract: This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.

1,436 citations


Journal ArticleDOI
TL;DR: This survey examines the state of the art of a variety of problems related to pseudo-Boolean optimization, i.e. to the optimization of set functions represented by closed algebraic expressions.

903 citations


Journal ArticleDOI
TL;DR: A taxonomy of hybrid metaheuristics is presented in an attempt to provide a common terminology and classification mechanisms and is also applicable to most types of heuristics and exact optimization algorithms.
Abstract: Hybrid metaheuristics have received considerable interest in recent years in the field of combinatorial optimization. A wide variety of hybrid approaches have been proposed in the literature. In this paper, a taxonomy of hybrid metaheuristics is presented in an attempt to provide a common terminology and classification mechanisms. The taxonomy, while presented in terms of metaheuristics, is also applicable to most types of heuristics and exact optimization algorithms. As an illustration of the usefulness of the taxonomy, an annotated bibliography is given which classifies a large number of hybrid approaches according to the taxonomy.

829 citations


Proceedings ArticleDOI
11 Mar 2002
TL;DR: Critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems.
Abstract: This paper constitutes a first study of the Particle Swarm Optimization (PSO) method in Multiobjective Optimization (MO) problems. The ability of PSO to detect Pareto Optimal points and capture the shape of the Pareto Front is studied through experiments on well-known non-trivial test functions. The Weighted Aggregation technique with fixed or adaptive weights is considered. Furthermore, critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems. Conclusions are derived and ideas for further research are proposed.

674 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents a particle swarm optimization algorithm modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives for multiobjective optimization problems.
Abstract: This paper presents a particle swarm optimization (PSO) algorithm for multiobjective optimization problems. PSO is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives. Several benchmark cases were tested and showed that PSO could efficiently find multiple Pareto optimal solutions.

671 citations


Journal ArticleDOI
TL;DR: There is a disconnect between research in simulation optimization--which has addressed the stochastic nature of discrete-event simulation by concentrating on theoretical results of convergence and specialized algorithms that are mathematically elegant--and the recent software developments, which implement very general algorithms adopted from techniques in the deterministic optimization metaheuristic literature.
Abstract: Probably one of the most successful interfaces between operations research and computer science has been the development of discrete-event simulation software. The recent integration of optimization techniques into simulation practice, specifically into commercial software, has become nearly ubiquitous, as most discrete-event simulation packages now include some form of "optimization" routine. The main thesis of this article, however, is that there is a disconnect between research in simulation optimization--which has addressed the stochastic nature of discrete-event simulation by concentrating on theoretical results of convergence and specialized algorithms that are mathematically elegant--and the recent software developments, which implement very general algorithms adopted from techniques in the deterministic optimization metaheuristic literature (e.g., genetic algorithms, tabu search, artificial neural networks). A tutorial exposition that summarizes the approaches found in the research literature is included, as well as a discussion contrasting these approaches with the algorithms implemented in commercial software. The article concludes with the author's speculations on promising research areas and possible future directions in practice.

652 citations




Book
01 Jan 2002
TL;DR: Pardalos and Resende edit a comprehensive handbook of applied optimization covering algorithms (including metaheuristics such as ant systems, tabu search, simulated annealing, and variable neighborhood search), application areas, and optimization software.
Abstract: Preface: Panos M. Pardalos and Mauricio G. C. Resende. Introduction: Panos M. Pardalos and Mauricio G. C. Resende. Part One: Algorithms 1: Linear Programming 1.1: Tamas Terlaky: Introduction 1.2: Tamas Terlaky: Simplex-Type Algorithms 1.3: Kees Roos: Interior-Point Methods for Linear Optimization 2: Henry Wolkowicz: Semidefinite Programming 3: Combinatorial Optimization 3.1: Panos M. Pardalos and Mauricio G. C. Resende: Introduction 3.2: Eva K. Lee: Branch-and-Bound Methods 3.3: John E. Mitchell: Branch-and-Cut Algorithms for Combinatorial Optimization Problems 3.4: Augustine O. Esogbue: Dynamic Programming Approaches 3.5: Mutsunori Yagiura and Toshihide Ibaraki: Local Search 3.6: Metaheuristics 3.6.1: Bruce L. Golden and Edward A. Wasil: Introduction 3.6.2: Eric D. Taillard: Ant Systems 3.6.3: John E. Beasley: Population Heuristics 3.6.4: Pablo Moscato: Memetic Algorithms 3.6.5: Leonidas S. Pitsoulis and Mauricio G. C. Resende: Greedy Randomized Adaptive Search Procedures 3.6.6: Manuel Laguna: Scatter Search 3.6.7: Fred Glover and Manuel Laguna: Tabu Search 3.6.8: E. H. L. Aarts and H. M. M. Ten Eikelder: Simulated Annealing 3.6.9: Pierre Hansen and Nenad Mladenović: Variable Neighborhood Search 4: Yinyu Ye: Quadratic Programming 5: Nonlinear Programming 5.1: Gianni Di Pillo and Laura Palagi: Introduction 5.2: Gianni Di Pillo and Laura Palagi: Unconstrained Nonlinear Programming 5.3: Gianni Di Pillo and Laura Palagi: Constrained Nonlinear Programming 5.4: Manlio Gaudioso: Nonsmooth Optimization 6: Christodoulos A. Floudas: Deterministic Global Optimization and Its Applications 7: Philippe Mahey: Decomposition Methods for Mathematical Programming 8: Network Optimization 8.1: Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin: Introduction 8.2: Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin: Maximum Flow Problem 8.3: Edith Cohen: Shortest-Path Algorithms 8.4: S. Thomas McCormick: Minimum-Cost Single-Commodity Flow 8.5: Pierre Chardaire and Abdel Lisser: Minimum-Cost Multicommodity Flow 8.6: Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin: Minimum Spanning Tree Problem 9: Integer Programming 9.1: Nelson Maculan: Introduction 9.2: Nelson Maculan: Linear 0-1 Programming 9.3: Yves Crama and Peter L. Hammer: Pseudo-Boolean Optimization 9.4: Christodoulos A. Floudas: Mixed-Integer Nonlinear Optimization 9.5: Monique Guignard: Lagrangian Relaxation 9.6: Arne Løkketangen: Heuristics for 0-1 Mixed-Integer Programming 10: Theodore B. Trafalis and Suat Kasap: Artificial Neural Networks in Optimization and Applications 11: John R. Birge: Stochastic Programming 12: Hoang Tuy: Hierarchical Optimization 13: Michael C. Ferris and Christian Kanzow: Complementarity and Related Problems 14: Jose H. Dula: Data Envelopment Analysis 15: Yair Censor and Stavros A. Zenios: Parallel Algorithms in Optimization 16: Sanguthevar Rajasekaran: Randomization in Discrete Optimization: Annealing Algorithms Part Two: Applications 17: Problem Types 17.1: Chung-Yee Lee and Michael Pinedo: Optimization and Heuristics of Scheduling 17.2: John E. Beasley, Abilio Lucena, and Marcus Poggi de Aragao: The Vehicle Routing Problem 17.3: Ding-Zhu Du: Network Designs: Approximations for Steiner Minimum Trees 17.4: Edward G. Coffman, Jr., Janos Csirik, and Gerhard J. Woeginger: Approximate Solutions to Bin Packing Problems 17.5: Rainer E. Burkard: The Traveling Salesman Problem 17.6: Dukwon Kim and Boghos D. Sivazlian: Inventory Management 17.7: Zvi Drezner: Location 17.8: Jun Gu, Paul W. Purdom, John Franco, and Benjamin W. Wah: Algorithms for the Satisfiability (SAT) Problem 17.9: Eranda Cela: Assignment Problems 18: Application Areas 18.1: Warren B. Powell: Transportation and Logistics 18.2: Gang Yu and Benjamin G. Thengvall: Airline Optimization 18.3: Alexandra M. Newman, Linda K. Nozick, and Candace Arai Yano: Optimization in the Rail Industry 18.4: Andres Weintraub Pohorille and John Hof: Forestry Industry 18.5: Stephen C. Graves: Manufacturing Planning and Control 18.6: Robert C. Leachman: Semiconductor Production Planning 18.7: Matthew E. Berge, John T. Betts, Sharon K. Filipowski, William P. Huffman, and David P. Young: Optimization in the Aerospace Industry 18.8: Energy 18.8.1: Gerson Couto de Oliveira, Sergio Granville, and Mario Pereira: Optimization in Electrical Power Systems 18.8.2: Roland N. Horne: Optimization Applications in Oil and Gas Recovery 18.8.3: Roger Z. Rios-Mercado: Natural Gas Pipeline Optimization 18.9: G. Anandalingam: Optimization of Telecommunications Networks 18.10: Stanislav Uryasev: Optimization of Test Intervals in Nuclear Engineering 18.11: Hussein A. Y. Etawil and Anthony Vannelli: Optimization in VLSI Design: Target Distance Models for Cell Placement 18.12: Michael Florian and Donald W. Hearn: Optimization Models in Transportation Planning 18.13: Guoliang Xue: Optimization in Computational Molecular Biology 18.14: Anna Nagurney: Optimization in the Financial Services Industry 18.15: J. B. Rosen, John H. Glick, and E. Michael Gertz: Applied Large-Scale Nonlinear Optimization for Optimal Control of Partial Differential Equations and Differential Algebraic Equations 18.16: Kumaraswamy Ponnambalam: Optimization in Water Reservoir Systems 18.17: Ivan Dimov and Zahari Zlatev: Optimization Problems in Air-Pollution Modeling 18.18: Charles B. Moss: Applied Optimization in Agriculture 18.19: Petra Mutzel: Optimization in Graph Drawing 18.20: G. E. Stavroulakis: Optimization for Modeling of Nonlinear Interactions in Mechanics Part Three: Software 19: Emmanuel Fragniere and Jacek Gondzio: Optimization Modeling Languages 20: Stephen J. Wright: Optimization Software Packages 21: Andreas Fink, Stefan Voß, and David L. Woodruff: Optimization Software Libraries 22: John E. Beasley: Optimization Test Problem Libraries 23: Simone de L. Martins, Celso C. Ribeiro, and Noemi Rodriguez: Parallel Computing Environment 24: Catherine C. McGeoch: Experimental Analysis of Optimization Algorithms 25: Andreas Fink, Stefan Voß, and David L. Woodruff: Object-Oriented Programming 26: Michael A. Trick: Optimization and the Internet Directory of Contributors Index

Proceedings Article
09 Jul 2002
TL;DR: A procedure is proposed that empirically evaluates a set of candidate configurations, discarding bad ones as soon as statistically sufficient evidence is gathered against them, which allows the search to focus on the most promising candidates.
Abstract: This paper describes a racing procedure for finding, in a limited amount of time, a configuration of a metaheuristic that performs as well as possible on a given instance class of a combinatorial optimization problem. Taking inspiration from methods proposed in the machine learning literature for model selection through cross-validation, we propose a procedure that empirically evaluates a set of candidate configurations by discarding bad ones as soon as statistically sufficient evidence is gathered against them. We empirically evaluate our procedure using as an example the configuration of an ant colony optimization algorithm applied to the traveling salesman problem. The experimental results show that our procedure is able to quickly reduce the number of candidates, allowing us to focus on the most promising ones.
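The elimination loop of such a racing procedure can be sketched as follows. The statistical test here is a crude two-standard-error cutoff, a stand-in for the nonparametric tests used in this line of work, and the tuning task in the example is entirely hypothetical:

```python
import math
import statistics

def race(configs, evaluate, instances, min_runs=3):
    """Toy racing procedure: run every surviving configuration on each
    instance in turn; after min_runs instances, discard configurations
    whose mean cost is worse than the best survivor's mean by more than
    two standard errors of the best survivor's results."""
    results = {c: [] for c in configs}
    alive = list(configs)
    for t, inst in enumerate(instances, 1):
        for c in alive:
            results[c].append(evaluate(c, inst))
        if t >= min_runs and len(alive) > 1:
            means = {c: statistics.mean(results[c]) for c in alive}
            best = min(alive, key=lambda c: means[c])
            sd = statistics.stdev(results[best])
            cutoff = means[best] + 2 * sd / math.sqrt(t)
            alive = [c for c in alive if c == best or means[c] <= cutoff]
    return alive

# Hypothetical tuning task: each config is a parameter whose true cost grows
# with its value; the instance index adds deterministic variation.
survivors = race([0.1, 0.2, 5.0],
                 lambda c, inst: c + 0.05 * (inst % 3),
                 instances=range(10))
```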

Journal ArticleDOI
TL;DR: Results of the presented experiment indicate that the algorithm outperforms other multi-objective methods based on GLS and a Pareto ranking-based multi-objective genetic algorithm (GA) on the travelling salesperson problem (TSP).

01 Jan 2002
TL;DR: A feasibility-preserving strategy is employed to deal with constraints, and experiments show that Particle Swarm Optimization is an efficient and general method for solving most nonlinear optimization problems with nonlinear inequality constraints.

Abstract: This paper presents a Particle Swarm Optimization (PSO) algorithm for constrained nonlinear optimization problems. In PSO, the potential solutions, called particles, are "flown" through the problem space by learning from the current optimal particle and their own memory. In this paper, a feasibility-preserving strategy is employed to deal with constraints. PSO is started with a group of feasible solutions, and a feasibility function is used to check whether newly explored solutions satisfy all the constraints. All particles keep only feasible solutions in their memory. Eleven test cases were run and showed that PSO is an efficient and general method for solving most nonlinear optimization problems with nonlinear inequality constraints.
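The feasibility-preserving memory rule described above can be sketched as follows (the constraint and function names are illustrative, not taken from the paper):

```python
def update_memory(pbest, pbest_val, pos, f, feasible):
    """A particle's personal best is replaced only by a position that is
    both feasible and better -- infeasible points never enter the memory."""
    if feasible(pos):
        v = f(pos)
        if v < pbest_val:
            return list(pos), v
    return pbest, pbest_val

# Toy problem: minimize x^2 + y^2 subject to x + y >= 1.
f = lambda p: p[0] ** 2 + p[1] ** 2
feasible = lambda p: p[0] + p[1] >= 1

pb, pv = update_memory([1.0, 1.0], 2.0, [0.6, 0.5], f, feasible)    # feasible, better: accepted
pb2, pv2 = update_memory([1.0, 1.0], 2.0, [0.1, 0.1], f, feasible)  # infeasible: memory kept
```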

Journal ArticleDOI
TL;DR: In this paper, the authors compare the performance of two multiple-objective genetic local search (MOGLS) algorithms to the best performers in previous experiments using the same test instances, and conclude that their MOGLS algorithm generates better approximations to the nondominated set than the other algorithms within the same number of function evaluations.
Abstract: Multiple-objective metaheuristics, e.g., multiple-objective evolutionary algorithms, constitute one of the most active fields of multiple-objective optimization. Since 1985, a significant number of different methods have been proposed. However, only a few comparative studies of the methods were performed on large-scale problems. We continue two comparative experiments on the multiple-objective 0/1 knapsack problem reported in the literature. We compare the performance of two multiple-objective genetic local search (MOGLS) algorithms to the best performers in the previous experiments using the same test instances. The results of our experiment indicate that our MOGLS algorithm generates better approximations to the nondominated set than the other algorithms within the same number of function evaluations.

Proceedings ArticleDOI
07 Aug 2002
TL;DR: The foundations and performance of the two algorithms when applied to the design of a profiled corrugated horn antenna are investigated, as is the possibility of hybridizing the two algorithms.
Abstract: Genetic algorithms (GA) have proven to be a useful method of optimization for difficult and discontinuous multidimensional engineering problems. A new method of optimization, particle swarm optimization (PSO), is able to accomplish the same goal as GA optimization in a new and faster way. The purpose of this paper is to investigate the foundations and performance of the two algorithms when applied to the design of a profiled corrugated horn antenna. Also investigated is the possibility of hybridizing the two algorithms.

Journal ArticleDOI
TL;DR: A new optimization algorithm to solve multiobjective design optimization problems based on behavioral concepts similar to that of a real swarm is presented, indicating that the swarm algorithm is capable of generating an extended Pareto front with significantly fewer function evaluations when compared to the nondominated sorting genetic algorithm (NSGA).
Abstract: This paper presents a new optimization algorithm to solve multiobjective design optimization problems based on behavioral concepts similar to that of a real swarm. The individuals of a swarm update their flying direction through communication with their neighboring leaders with an aim to collectively attain a common goal. The success of the swarm is attributed to three fundamental processes: identification of a set of leaders, selection of a leader for information acquisition, and finally a meaningful information transfer scheme. The proposed algorithm mimics the above behavioral processes of a real swarm. The algorithm employs a multilevel sieve to generate a set of leaders, a probabilistic crowding radius-based strategy for leader selection, and a simple generational operator for information transfer. Two test problems, one with a discontinuous Pareto front and the other with a multi-modal Pareto front, are solved to illustrate the capabilities of the algorithm in handling mathematically complex problems. ...

Book ChapterDOI
01 Jan 2002
TL;DR: This paper is an annotated bibliography of the GRASP literature from 1989 to 2001, covering a wide range of combinatorial optimization problems, ranging from scheduling and routing to drawing and turbine balancing.
Abstract: A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. It is a multi-start or iterative process, in which each GRASP iteration consists of two phases, a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Since 1989, numerous papers on the basic aspects of GRASP, as well as enhancements to the basic metaheuristic have appeared in the literature. GRASP has been applied to a wide range of combinatorial optimization problems, ranging from scheduling and routing to drawing and turbine balancing. This paper is an annotated bibliography of the GRASP literature from 1989 to 2001.
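The two-phase GRASP iteration described above can be sketched generically. The toy instance and the size-4 restricted candidate list (RCL) below are illustrative choices of mine, not drawn from the bibliography:

```python
import random

def grasp(construct, local_search, cost, iters=50, seed=0):
    """Generic GRASP loop: each iteration builds a greedy randomized
    solution, then improves it with local search, keeping the best."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        s = local_search(construct(rng))
        c = cost(s)
        if c < best_cost:
            best, best_cost = s, c
    return best, best_cost

# Toy instance: minimize (x - 3)^2 over the integers 0..10.
cost = lambda x: (x - 3) ** 2

def construct(rng):
    # Greedy randomized construction: pick uniformly from the 4 cheapest
    # candidates (the restricted candidate list).
    rcl = sorted(range(11), key=cost)[:4]
    return rng.choice(rcl)

def local_search(x):
    # Move to a strictly better integer neighbour until none exists.
    while True:
        nb = min((n for n in (x - 1, x + 1) if 0 <= n <= 10), key=cost, default=x)
        if cost(nb) >= cost(x):
            return x
        x = nb

best, best_cost = grasp(construct, local_search, cost)
```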

Proceedings ArticleDOI
06 Oct 2002
TL;DR: The fundamentals of the method are described, and an application to the problem of loss minimization and voltage control is presented, with very good results.
Abstract: This paper presents a new optimization model, EPSO (evolutionary particle swarm optimization), inspired by both evolutionary algorithms and particle swarm optimization algorithms. The fundamentals of the method are described, and an application to the problem of loss minimization and voltage control is presented, with very good results.

Journal ArticleDOI
TL;DR: The Intelligent Particle Swarm Optimization (IPSO) algorithm as mentioned in this paper uses concepts such as group experiences, unpleasant memories (tabu to be avoided), local landscape models based on virtual neighbors, and memetic replication of successful behavior parameters.
Abstract: The paper describes a new stochastic heuristic algorithm for global optimization. The new optimization algorithm, called intelligent-particle swarm optimization (IPSO), offers more intelligence to particles by using concepts such as group experiences, unpleasant memories (tabu to be avoided), local landscape models based on virtual neighbors, and memetic replication of successful behavior parameters. The new individual complexity is amplified at the group level and consequently generates a more efficient optimization procedure. A simplified version of the IPSO algorithm was implemented and compared with the classical PSO algorithm for a simple test function and for Loney's solenoid.

Journal ArticleDOI
TL;DR: A survey of various evolutionary methods for MO optimization is provided, considering the usual performance measures in MO optimization as well as a few metrics that examine the strengths and weaknesses of each evolutionary approach, both quantitatively and qualitatively.
Abstract: Evolutionary techniques for multi-objective (MO) optimization are currently gaining significant attention from researchers in various fields due to their effectiveness and robustness in searching for a set of trade-off solutions. Unlike conventional methods that aggregate multiple attributes to form a composite scalar objective function, evolutionary algorithms with modified reproduction schemes for MO optimization are capable of treating each objective component separately and lead the search in discovering the global Pareto-optimal front. The rapid advance of multi-objective evolutionary algorithms, however, makes it difficult to keep track of the developments in this field as well as to select an existing approach that best suits the optimization problem at hand. This paper thus provides a survey on various evolutionary methods for MO optimization. Many well-known multi-objective evolutionary algorithms have been experimented with and compared extensively on four benchmark problems with different MO optimization difficulties. Besides considering the usual performance measures in MO optimization, e.g., the spread across the Pareto-optimal front and the ability to attain the global trade-offs, the paper also presents a few metrics to examine the strengths and weaknesses of each evolutionary approach both quantitatively and qualitatively. Simulation results for the comparisons are analyzed, summarized and commented.

Journal ArticleDOI
TL;DR: An immunity-based ant colony optimization (ACO) algorithm for solving weapon–target assignment (WTA) problems is proposed; simulations on those WTA problems show that the proposed algorithm is indeed very efficient.

Journal ArticleDOI
TL;DR: A novel incomplete approach for solving constraint satisfaction problems (CSPs) based on the ant colony optimization (ACO) metaheuristic, to use artificial ants to keep track of promising areas of the search space by laying trails of pheromone.
Abstract: We describe a novel incomplete approach for solving constraint satisfaction problems (CSPs) based on the ant colony optimization (ACO) metaheuristic. The idea is to use artificial ants to keep track of promising areas of the search space by laying trails of pheromone. This pheromone information is used to guide the search, as a heuristic for choosing values to be assigned to variables. We first describe the basic ACO algorithm for solving CSPs and we show how it can be improved by combining it with local search techniques. Then, we introduce a preprocessing step, the goal of which is to favor a larger exploration of the search space at a lower cost, and we show that it allows ants to find better solutions faster. Finally, we evaluate our approach on random binary problems.
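The pheromone-guided value choice and the pheromone update that the abstract describes can be sketched as follows. The alpha/beta weighting follows common ACO practice, and the numbers in the example are illustrative:

```python
import random

def choose_value(values, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Pick a value for a CSP variable with probability proportional to
    pheromone^alpha * heuristic^beta (roulette-wheel selection)."""
    weights = [tau[v] ** alpha * eta[v] ** beta for v in values]
    r = rng.random() * sum(weights)
    acc = 0.0
    for v, w in zip(values, weights):
        acc += w
        if r <= acc:
            return v
    return values[-1]

def evaporate_and_deposit(tau, chosen, rho=0.1, deposit=1.0):
    """Evaporate pheromone everywhere, then reinforce the values that
    appeared in the best assignment found this cycle."""
    for v in tau:
        tau[v] *= 1 - rho
    for v in chosen:
        tau[v] += deposit

tau = {"a": 1.0, "b": 1.0}
evaporate_and_deposit(tau, ["a"])                         # 'a' was in the best assignment
v = choose_value(["a", "b"], tau, {"a": 1.0, "b": 0.0})   # heuristic rules out 'b'
```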

Book ChapterDOI
TL;DR: A population based ACO (Ant Colony Optimization) algorithm is proposed where (nearly) all pheromone information corresponds to solutions that are members of the actual population and the results show that the new approach is competitive.
Abstract: A population based ACO (Ant Colony Optimization) algorithm is proposed where (nearly) all pheromone information corresponds to solutions that are members of the actual population. Advantages of the population based approach are that it seems promising for solving dynamic optimization problems, that it has a finite state space, and that it offers opportunities for designing new metaheuristics. We compare the behavior of the new approach to the standard ACO approach on several instances of the TSP and the QAP problem. The results show that the new approach is competitive.

Proceedings Article
01 Jan 2002
TL;DR: The underlying ideas that lead from the biological inspiration to the ACO metaheuristic, which provides a set of rules for applying ACO algorithms to challenging combinatorial problems, are reviewed.
Abstract: Ant Colony Optimization (ACO) is a recent metaheuristic method that is inspired by the behavior of real ant colonies. In this paper, we review the underlying ideas of this approach that lead from the biological inspiration to the ACO metaheuristic, which provides a set of rules for applying ACO algorithms to challenging combinatorial problems. We present some of the algorithms that were developed under this framework, give an overview of current applications, and analyze the relationship between ACO and some of the best known metaheuristics. In addition, we describe recent theoretical developments in the field and we conclude by showing several new trends and new research directions in this field.

Journal ArticleDOI
TL;DR: A greedy randomized adaptive search procedure (GRASP), a variable neighborhood search (VNS), and a path-relinking (PR) intensification heuristic for MAX-CUT are proposed and tested, and computational results indicate that these randomized heuristics find near-optimal solutions.
Abstract: Given an undirected graph with edge weights, the MAX-CUT problem consists in finding a partition of the nodes into two subsets, such that the sum of the weights of the edges having endpoints in different subsets is maximized. It is a well-known NP-hard problem with applications in several fields, including VLSI design and statistical physics. In this article, a greedy randomized adaptive search procedure (GRASP), a variable neighborhood search (VNS), and a path-relinking (PR) intensification heuristic for MAX-CUT are proposed and tested. New hybrid heuristics that combine GRASP, VNS, and PR are also proposed and tested. Computational results indicate that these randomized heuristics find near-optimal solutions. On a set of standard test problems, new best known solutions were produced for many of the instances.
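The MAX-CUT objective and the simplest local-search move used inside such heuristics can be sketched as follows (a one-flip neighbourhood; the triangle instance is an illustrative example of mine):

```python
def cut_value(weights, side):
    """Total weight of edges whose endpoints lie on different sides
    (side[v] is 0 or 1)."""
    return sum(w for (u, v), w in weights.items() if side[u] != side[v])

def one_flip_local_search(weights, side):
    """Greedily flip single nodes while any flip strictly increases the cut."""
    improved = True
    while improved:
        improved = False
        for node in side:
            current = cut_value(weights, side)
            side[node] ^= 1                    # tentatively flip
            if cut_value(weights, side) > current:
                improved = True                # keep the flip
            else:
                side[node] ^= 1                # revert
    return side

# Triangle with unit weights: the maximum cut has weight 2.
triangle = {(0, 1): 1, (1, 2): 1, (0, 2): 1}
side = one_flip_local_search(triangle, {0: 0, 1: 0, 2: 0})
```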

Journal ArticleDOI
TL;DR: Several parallel decomposition strategies are examined in Ant Colony Optimization applied to a specific problem, namely the travelling salesman problem, with encouraging speedup and efficiency results.

Journal ArticleDOI
TL;DR: This paper proposes a new algorithm for learning BNs based on a recently introduced metaheuristic, which has been successfully applied to solve a variety of combinatorial optimization problems: ant colony optimization (ACO).

Book ChapterDOI
27 Aug 2002
TL;DR: In this paper, an unbiased comparison of the performance of straightforward implementations of five different metaheuristics on a university course timetabling problem is presented, and the results show that no metaheuristic is best on all the timetabling instances considered.
Abstract: The main goal of this paper is to attempt an unbiased comparison of the performance of straightforward implementations of five different metaheuristics on a university course timetabling problem. In particular, the metaheuristics under consideration are Evolutionary Algorithms, Ant Colony Optimization, Iterated Local Search, Simulated Annealing, and Tabu Search. To attempt fairness, the implementations of all the algorithms use a common solution representation, and a common neighbourhood structure or local search. The results show that no metaheuristic is best on all the timetabling instances considered. Moreover, even when instances are very similar, from the point of view of the instance generator, it is not possible to predict the best metaheuristic, even if some trends appear when focusing on particular instance classes. These results underline the difficulty of finding the best metaheuristics even for very restricted classes of timetabling problem.

Journal ArticleDOI
TL;DR: Several new heuristics for solving the one-dimensional bin packing problem are presented, and the most effective algorithm turned out to be one based on running one of the former to provide an initial solution for the latter.