
Showing papers on "Metaheuristic published in 2001"


Book
01 Jan 2001
TL;DR: This text provides an excellent introduction to the use of evolutionary algorithms in multi-objective optimization, allowing use as a graduate course text or for self-study.
Abstract: From the Publisher: Evolutionary algorithms are relatively new, but very powerful techniques used to find solutions to many real-world search and optimization problems. Many of these problems have multiple objectives, which leads to the need to obtain a set of optimal solutions, known as Pareto-optimal solutions. It has been found that using evolutionary algorithms is a highly effective way of finding multiple Pareto-optimal solutions in a single simulation run. · Comprehensive coverage of this growing area of research · Carefully introduces each algorithm with examples and in-depth discussion · Includes many applications to real-world problems, including engineering design and scheduling · Includes discussion of advanced topics and future research · Features exercises and solutions, enabling use as a course text or for self-study · Accessible to those with limited knowledge of classical multi-objective optimization and evolutionary algorithms The integrated presentation of theory, algorithms and examples will benefit those working and researching in the areas of optimization, optimal design and evolutionary computing. This text provides an excellent introduction to the use of evolutionary algorithms in multi-objective optimization, allowing use as a graduate course text or for self-study.

12,134 citations


Journal ArticleDOI
01 Feb 2001
TL;DR: A new heuristic algorithm, mimicking the improvisation of music players, has been developed and named Harmony Search (HS), which is illustrated with a traveling salesman problem (TSP), a specific academic optimization problem, and a least-cost pipe network design problem.
Abstract: Many optimization problems in various fields have been solved using diverse optimization algorithms. Traditional optimization techniques such as linear programming (LP), non-linear programming (NL...
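The improvisation mechanics the paper describes can be sketched in a few lines. The following is a minimal, illustrative Harmony Search for continuous minimisation, not the authors' implementation; the function name, parameter defaults, and toy objective in the example are all assumptions.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimise f over the box `bounds` with a basic Harmony Search.
    hms: harmony memory size, hmcr: memory consideration rate,
    par: pitch adjustment rate, bw: pitch adjustment bandwidth."""
    rng = random.Random(seed)
    # Initialise the harmony memory with random solutions.
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:             # take a value from memory...
                v = hm[rng.randrange(hms)][j]
                if rng.random() < par:          # ...optionally pitch-adjusted
                    v += rng.uniform(-bw, bw)
            else:                               # ...or improvise a fresh one
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=scores.__getitem__)
        s = f(new)
        if s < scores[worst]:                   # replace the worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return hm[best], scores[best]
```

Each improvised value is drawn from memory with probability `hmcr` and otherwise at random, mirroring how a musician either replays, adjusts, or invents a pitch.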

5,136 citations


Journal ArticleDOI
TL;DR: In this article, a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS), is presented, which can easily be implemented using any local search algorithm as a subroutine.

1,824 citations


01 Jan 2001
TL;DR: The genetic algorithm using a float representation is found to be superior to both a binary genetic algorithm and simulated annealing in terms of efficiency and quality of solution.
Abstract: A genetic algorithm implemented in Matlab is presented. Matlab is used for the following reasons: it provides many built-in auxiliary functions useful for function optimization; it is completely portable; and it is efficient for numerical computations. The genetic algorithm toolbox developed is tested on a series of non-linear, multi-modal, non-convex test problems and compared with results using simulated annealing. The genetic algorithm using a float representation is found to be superior to both a binary genetic algorithm and simulated annealing in terms of efficiency and quality of solution. The use of the genetic algorithm toolbox as well as the code is introduced in the paper.

1,318 citations


Proceedings ArticleDOI
27 May 2001
TL;DR: The experimental results illustrate that the fuzzy adaptive PSO is a promising optimization method, which is especially useful for optimization problems with a dynamic environment.
Abstract: A fuzzy system is implemented to dynamically adapt the inertia weight of the particle swarm optimization algorithm (PSO). Three benchmark functions with asymmetric initial range settings are selected as the test functions. The same fuzzy system has been applied to all three test functions with different dimensions. The experimental results illustrate that the fuzzy adaptive PSO is a promising optimization method, which is especially useful for optimization problems with a dynamic environment.
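The PSO update the paper builds on can be sketched as follows. For simplicity this illustrative version uses a linearly decreasing inertia weight rather than the paper's fuzzy controller (which adapts the weight from the observed search state); all names, parameter values, and the toy objective in the test are assumptions.

```python
import random

def pso(f, bounds, n=20, iters=300, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, seed=1):
    """Minimise f with a basic particle swarm. The inertia weight w decays
    linearly over the run -- a fixed schedule standing in for the paper's
    fuzzy adaptation of w."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pscore = [f(p) for p in pos]
    g = min(range(n), key=pscore.__getitem__)
    gbest, gscore = pbest[g][:], pscore[g]       # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters  # inertia weight schedule
        for i in range(n):
            for j, (lo, hi) in enumerate(bounds):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            s = f(pos[i])
            if s < pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s < gscore:
                    gbest, gscore = pos[i][:], s
    return gbest, gscore
```

A large early weight favours exploration; a small late weight favours exploitation, which is exactly the trade-off the fuzzy system tunes online.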

1,132 citations


Journal ArticleDOI
TL;DR: It is argued that software engineering is ideal for the application of metaheuristic search techniques, such as genetic algorithms, simulated annealing and tabu search, which could provide solutions to the difficult problems of balancing competing constraints.
Abstract: This paper claims that a new field of software engineering research and practice is emerging: search-based software engineering. The paper argues that software engineering is ideal for the application of metaheuristic search techniques, such as genetic algorithms, simulated annealing and tabu search. Such search-based techniques could provide solutions to the difficult problems of balancing competing (and sometimes inconsistent) constraints and may suggest ways of finding acceptable solutions in situations where perfect solutions are either theoretically impossible or practically infeasible. In order to develop the field of search-based software engineering, a reformulation of classic software engineering problems as search problems is required. The paper briefly sets out key ingredients for successful reformulation and evaluation criteria for search-based software engineering.

761 citations


Book ChapterDOI
07 Mar 2001
TL;DR: This paper uses an abstract building-block problem to illustrate how 'multi-objectivizing' a single-objective optimization (SOO) problem can remove local optima, and investigates small instances of the travelling salesman problem where additional objectives are defined using arbitrary sub-tours.
Abstract: One common characterization of how simple hill-climbing optimization methods can fail is that they become trapped in local optima - a state where no small modification of the current best solution will produce a solution that is better. This measure of 'better' depends on the performance of the solution with respect to the single objective being optimized. In contrast, multi-objective optimization (MOO) involves the simultaneous optimization of a number of objectives. Accordingly, the multi-objective notion of 'better' permits consideration of solutions that may be superior in one objective but not in another. Intuitively, we may say that this gives a hill-climber in multi-objective space more freedom to explore and less likelihood of becoming trapped. In this paper, we investigate this intuition by comparing the performance of simple hill-climber-style algorithms on single-objective problems and multi-objective versions of those same problems. Using an abstract building-block problem we illustrate how 'multi-objectivizing' a single-objective optimization (SOO) problem can remove local optima. Then we investigate small instances of the travelling salesman problem where additional objectives are defined using arbitrary sub-tours. Results indicate that multi-objectivization can reduce local optima and facilitate improved optimization in some cases. These results enlighten our intuitions about the nature of search in multi-objective optimization and sources of difficulty in single-objective optimization.

369 citations


Journal ArticleDOI
TL;DR: The methodological issues that must be confronted by researchers undertaking experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks are highlighted.
Abstract: Heuristic optimization algorithms seek good feasible solutions to optimization problems in circumstances where the complexities of the problem or the limited time available for solution do not allow exact solution. Although worst case and probabilistic analysis of algorithms have produced insight on some classic models, most of the heuristics developed for large optimization problems must be evaluated empirically—by applying procedures to a collection of specific instances and comparing the observed solution quality and computational burden. This paper focuses on the methodological issues that must be confronted by researchers undertaking such experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks. The questions are difficult, and there are no clear right answers. We seek only to highlight the main issues, present alternative ways of addressing them under different circumstances, and caution about pitfalls to avoid.

319 citations


Journal ArticleDOI
TL;DR: In this article, an alternative mixed integer linear disjunctive formulation was proposed, which has better conditioning properties than the standard nonlinear mixed integer formulation, where an upper bound provided by a heuristic solution is used to reduce the tree search.
Abstract: The classical nonlinear mixed integer formulation of the transmission network expansion problem cannot guarantee finding the optimal solution due to its nonconvex nature. We propose an alternative mixed integer linear disjunctive formulation, which has better conditioning properties than the standard disjunctive model. The mixed integer program is solved by a commercial branch and bound code, where an upper bound provided by a heuristic solution is used to reduce the tree search. The heuristic solution is obtained using a GRASP metaheuristic, capable of finding sub-optimal solutions with an affordable computing effort. Combining the upper bound given by the heuristic and the mixed integer disjunctive model, optimality can be proven for several hard problem instances.

295 citations


Book
25 Oct 2001
TL;DR: This book presents genetic and evolutionary algorithms as tools for engineering design optimization, covering single-criterion and multicriteria methods together with worked design examples.
Abstract: Introduction to Design Optimization.- Genetic and Evolutionary Algorithms as a Design Optimization Tool.- Advanced Evolutionary Algorithm Techniques.- Evolutionary Algorithms for Single Criterion Optimization.- Evolutionary Algorithms for Multicriteria Optimization.- Some Other Evolutionary Algorithms Based Methods.- Design Optimization Examples and Their Solution by Evolutionary Algorithms.- Appendix: Evolutionary Optimization System.- Appendix: C Codes for Two Design Optimization.

294 citations


Journal ArticleDOI
TL;DR: This work presents a robust genetic algorithm for the single-mode resource constrained project scheduling problem, proposes a new representation for the solutions, based on the standard activity list representation and develops new crossover techniques with good performance in a wide sample of projects.
Abstract: Genetic algorithms have been applied to many different optimization problems and they are one of the most promising metaheuristics. However, there are few published studies concerning the design of efficient genetic algorithms for resource allocation in project scheduling. In this work we present a robust genetic algorithm for the single-mode resource constrained project scheduling problem. We propose a new representation for the solutions, based on the standard activity list representation, and develop new crossover techniques with good performance in a wide sample of projects. Through an extensive computational experiment, using standard sets of project instances, we evaluate our genetic algorithm and demonstrate that our approach outperforms the best algorithms appearing in the literature.

Book
01 Mar 2001
TL;DR: In this article, principles and methods of vector optimization are presented, including the connection with scalar-valued optimization, the manifold of stationary points, and homotopy strategies.
Abstract: 1 Introduction.- 2 Vector Optimization in Industrial Applications.- 3 Principles and Methods of Vector Optimization.- 4 The Connection with Scalar-Valued Optimization.- 5 The Manifold of Stationary Points.- 6 Homotopy Strategies.- 7 Numerical Results.

Journal ArticleDOI
TL;DR: A parallel and easily implemented hybrid optimization framework is presented, which reasonably combines genetic algorithm with simulated annealing, and applies it to job-shop scheduling problems.

Book ChapterDOI
07 Mar 2001
TL;DR: This paper introduces two methods for co-operation between the colonies and compares them with a multistart ant algorithm that corresponds to the case of no cooperation.
Abstract: In this paper we propose a new approach to solve bi-criterion optimization problems with ant algorithms where several colonies of ants cooperate in finding good solutions. We introduce two methods for co-operation between the colonies and compare them with a multistart ant algorithm that corresponds to the case of no cooperation. Heterogeneous colonies are used in the algorithm, i.e. the ants differ in their preferences between the two criteria. Every colony uses two pheromone matrices -- each suitable for one optimization criterion. As a test problem we use the Single Machine Total Tardiness problem with changeover costs.

01 Jan 2001
TL;DR: These preliminary results suggest that these two modifications of the particle swarm optimizer allow PSO to search in both static and dynamic environments.
Abstract: In this paper the authors propose a method for adapting the particle swarm optimizer for dynamic environments. The process consists of causing each particle to reset its record of its best position as the environment changes, to avoid making direction and velocity decisions on the basis of outdated information. Two methods for initiating this process are examined: periodic resetting, based on the iteration count, and triggered resetting, based on the magnitude of the change in the environment. These preliminary results suggest that these two modifications allow PSO to search in both static and dynamic environments.
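The triggered-resetting idea can be sketched with a toy PSO that re-evaluates its stored global best each iteration and wipes the personal-best records when that value shifts, signalling that the environment has changed. Everything here (names, constants, the moving toy objective) is illustrative, not the authors' code.

```python
import random

def dynamic_pso(make_f, bounds, n=15, iters=120, seed=2):
    """PSO for a changing objective. make_f(t) returns the objective at
    iteration t. Before each iteration the stored global best is
    re-evaluated; if its value has shifted, every particle's personal-best
    record is reset to its current position ('triggered resetting')."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    f = make_f(0)
    pbest = [p[:] for p in pos]
    pscore = [f(p) for p in pos]
    g = min(range(n), key=pscore.__getitem__)
    gbest, gscore = pbest[g][:], pscore[g]
    for t in range(iters):
        f = make_f(t)
        if abs(f(gbest) - gscore) > 1e-12:        # change detected:
            pbest = [p[:] for p in pos]           # discard outdated records
            pscore = [f(p) for p in pos]
            g = min(range(n), key=pscore.__getitem__)
            gbest, gscore = pbest[g][:], pscore[g]
        for i in range(n):
            for j, (lo, hi) in enumerate(bounds):
                vel[i][j] = (0.7 * vel[i][j]
                             + 1.5 * rng.random() * (pbest[i][j] - pos[i][j])
                             + 1.5 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            s = f(pos[i])
            if s < pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s < gscore:
                    gbest, gscore = pos[i][:], s
    return gbest, gscore
```

Without the reset, particles would keep steering toward records earned under the old objective; with it, the swarm re-converges on the moved optimum.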

Journal ArticleDOI
TL;DR: In this paper, a two level VNS, called Variable Neighborhood Decomposition Search (VNDS), is presented and illustrated on the p-median problem, and results on 1400, 3038 and 5934 node instances from the TSP library show VNDS improves upon VNS in less computing time, and gives much better results than Fast Interchange (FI), in the same time that FI takes for a single descent.
Abstract: The recent Variable Neighborhood Search (VNS) metaheuristic combines local search with systematic changes of neighborhood in the descent and escape from local optimum phases. When solving large instances of various problems, its efficiency may be enhanced through decomposition. The resulting two level VNS, called Variable Neighborhood Decomposition Search (VNDS), is presented and illustrated on the p-median problem. Results on 1400, 3038 and 5934 node instances from the TSP library show VNDS improves notably upon VNS in less computing time, and gives much better results than Fast Interchange (FI), in the same time that FI takes for a single descent. Moreover, Reduced VNS (RVNS), which does not use a descent phase, gives results similar to those of FI in much less computing time.
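The basic VNS loop described above (shake in neighborhood k, descend, recentre on improvement or else widen k) can be sketched as follows. The coordinate-descent subroutine, neighborhood radii, and the double-well test objective are illustrative assumptions, not the authors' implementation.

```python
import random

def vns(f, x0, kmax=5, iters=50, seed=3):
    """Basic Variable Neighborhood Search on a continuous toy problem.
    Shaking draws a random point in the k-th neighborhood (radius k);
    any local search could be plugged in as the descent subroutine."""
    rng = random.Random(seed)

    def local_search(x):
        # Simple coordinate descent with a shrinking step size.
        x = x[:]
        step = 0.5
        while step > 1e-4:
            improved = False
            for j in range(len(x)):
                for d in (step, -step):
                    y = x[:]
                    y[j] += d
                    if f(y) < f(x):
                        x, improved = y, True
            if not improved:
                step /= 2
        return x

    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= kmax:
            # Shaking: random point in the k-th neighborhood of the incumbent.
            y = local_search([xi + rng.uniform(-k, k) for xi in best])
            if f(y) < f(best):
                best, k = y, 1      # improvement: recentre and restart from N_1
            else:
                k += 1              # no improvement: widen the neighborhood
    return best
```

The systematic growth of k is what lets the search escape a local optimum that small shakes cannot leave.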

Proceedings ArticleDOI
07 Nov 2001
TL;DR: This approach is a tabu-embedded simulated annealing algorithm which restarts a search procedure from the current best solution after several non-improving search iterations, and is the first approach to solve large multiple-vehicle PDPTW problem instances with various distribution properties.
Abstract: In this paper, we propose a metaheuristic to solve the pickup and delivery problem with time windows. Our approach is a tabu-embedded simulated annealing algorithm which restarts the search procedure from the current best solution after several non-improving search iterations. Computational experiments on six newly generated data sets mark our algorithm as the first approach to solve large multiple-vehicle PDPTW problem instances with various distribution properties.
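The restart rule can be sketched with a plain simulated annealing loop; the tabu component of the paper's hybrid is omitted here, and the Gaussian neighbourhood, cooling schedule, and parameter values are illustrative assumptions applied to a toy continuous objective rather than the PDPTW.

```python
import math
import random

def sa_with_restarts(f, x0, iters=4000, t0=1.0, alpha=0.999,
                     patience=300, seed=4):
    """Simulated annealing that, after `patience` non-improving iterations,
    restarts the search from the best solution found so far."""
    rng = random.Random(seed)
    x, fx = x0[:], f(x0)
    best, fbest = x[:], fx
    t, stall = t0, 0
    for _ in range(iters):
        y = [xi + rng.gauss(0, 0.3) for xi in x]    # random neighbour
        fy = f(y)
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
        if fx < fbest:
            best, fbest, stall = x[:], fx, 0
        else:
            stall += 1
            if stall >= patience:                   # restart from the best
                x, fx, stall = best[:], fbest, 0
        t *= alpha                                  # geometric cooling
    return best, fbest
```

Restarting from the incumbent best keeps the annealing trajectory from drifting away permanently once the temperature is low.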

Journal ArticleDOI
TL;DR: In this paper, the authors emulate the behavior of a colony of ants to achieve this optimization, using the fact that ants are capable of finding the shortest path from a food source to their nest by depositing a trail of pheromone during their walk.

Journal ArticleDOI
TL;DR: This paper presents a parallel model for ant colonies to solve the quadratic assignment problem (QAP), where the cooperation between simulated ants is provided by a pheromone matrix that plays the role of a global memory.

Book ChapterDOI
01 Jan 2001
TL;DR: This chapter will describe evolutionary algorithms that seem to respond to the characteristics required by soft computing, both with regard to versatility and to the efficiency and goodness of the results obtained.
Abstract: As pointed out in the previous chapters, both fuzzy logic and neural networks imply optimization processes. For fuzzy logic in particular, optimization algorithms are needed that will allow determinations of the number of rules, the number of fuzzy sets and their position in the universe of discourse to be based on optimum criteria instead of on empirical techniques. This process generally involves a large number of variables and thus requires particularly efficient optimization algorithms. Similarly, in the field of neural networks, what can be of considerable use are optimization algorithms capable of finding the global minimum of a function with many variables, in order to overcome the intrinsic limitations inherent in learning algorithms based on the gradient technique. Therefore, this chapter will describe evolutionary algorithms that seem to respond to the characteristics required by soft computing, both with regard to versatility and to the efficiency and goodness of the results obtained. Genetic algorithms have proved to be a valid procedure for global optimization, applicable in very many sectors of engineering [10–15]. Ease of implementation and the potentiality inherent in an evolutionist approach make genetic algorithms a powerful optimization tool for non-convex functions. The genetic algorithms (GA) represent a new optimization procedure based on Darwin’s natural evolution principle. Adopting this analogy, inside a population in continuous evolution, the individual who best adapts to environmental constraints corresponds to the optimal solution of the problem to be solved.
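The Darwinian loop the chapter describes (a population evolving under selection, crossover, and mutation) can be sketched minimally, here with a real-coded representation. The operators and parameter values are common textbook defaults, not the chapter's; the test objective is a toy.

```python
import random

def genetic_algorithm(f, bounds, pop=40, gens=120, pc=0.8, pm=0.1, seed=5):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (the best individual always survives)."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]

    def tournament():
        a, b = rng.choice(P), rng.choice(P)
        return a if f(a) < f(b) else b          # fitter individual wins

    for _ in range(gens):
        P.sort(key=f)
        nxt = [P[0][:]]                         # elitism: keep the best
        while len(nxt) < pop:
            p, q = tournament(), tournament()
            if rng.random() < pc:               # blend (arithmetic) crossover
                a = rng.random()
                child = [a * pi + (1 - a) * qi for pi, qi in zip(p, q)]
            else:
                child = p[:]
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < pm:           # Gaussian mutation
                    child[j] += rng.gauss(0, 0.1 * (hi - lo))
                child[j] = min(max(child[j], lo), hi)
            nxt.append(child)
        P = nxt
    best = min(P, key=f)
    return best, f(best)
```

Selection supplies the "survival of the fittest" pressure; mutation keeps the population from collapsing prematurely, which matters on the non-convex functions the chapter targets.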

Journal ArticleDOI
TL;DR: Experimental results show that the ACO approach is competitive with these other approaches in terms of performance and CPU requirements.

Journal ArticleDOI
TL;DR: The objective of this paper is to present and categorise the solution approaches in the literature for 2D regular and irregular strip packing problems; the focus is hereby on the analysis of the methods involving genetic algorithms.
Abstract: This paper is a review of the approaches developed to solve 2D packing problems with meta-heuristic algorithms. As packing tasks are combinatorial problems with very large search spaces, the recent literature encourages the use of meta-heuristic search methods, in particular genetic algorithms. The objective of this paper is to present and categorise the solution approaches in the literature for 2D regular and irregular strip packing problems. The focus is hereby on the analysis of the methods involving genetic algorithms. An overview of the methods applying other meta-heuristic algorithms including simulated annealing, tabu search, and artificial neural networks is also given.

Journal ArticleDOI
TL;DR: In this paper, a GA metaheuristic based cell formation procedure is presented to simultaneously group machines and part-families into cells, so that intercellular movements are minimized.

Journal ArticleDOI
TL;DR: In this paper, an implementation of Tabu Search to cope with long-term transmission network expansion planning problems is described, and the results obtained by their approach let us conclude that TS is a robust and promising technique to be applied in this problem.
Abstract: This paper describes an implementation of Tabu Search to cope with long-term transmission network expansion planning problems. Tabu Search is a metaheuristic proposed in 1989 to be applied to combinatorial problems. To assess the potential of our approach we test it with two cases of transmission network expansion planning. The results obtained by our approach let us conclude that TS is a robust and promising technique to be applied to this problem.
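A generic tabu search step, applied here to a toy bit-vector problem rather than the paper's transmission-expansion model, can be sketched as follows. The tenure and aspiration rule are textbook defaults, not the authors' settings.

```python
from collections import deque

def tabu_search(f, x0, tenure=5, iters=100):
    """Skeleton tabu search over bit vectors: each iteration takes the best
    admissible single-bit flip; recently flipped positions are tabu unless
    the move beats the best solution found so far (aspiration criterion)."""
    x = list(x0)
    best, fbest = x[:], f(x)
    tabu = deque(maxlen=tenure)        # short-term memory of flipped bits
    for _ in range(iters):
        candidates = []
        for j in range(len(x)):
            y = x[:]
            y[j] ^= 1                  # flip bit j
            fy = f(y)
            if j not in tabu or fy < fbest:   # aspiration overrides tabu
                candidates.append((fy, j, y))
        fy, j, y = min(candidates)     # best admissible move (may be uphill)
        x = y
        tabu.append(j)
        if fy < fbest:
            best, fbest = y[:], fy
    return best, fbest
```

Accepting the best admissible move even when it is uphill, while the tabu list forbids immediately undoing it, is what lets the method climb out of local optima.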

Journal ArticleDOI
TL;DR: Algorithms that learn to improve search performance on large-scale optimization tasks, including STAGE, which works by learning an evaluation function that predicts the outcome of a local search algorithm from features of states visited during search.
Abstract: This paper describes algorithms that learn to improve search performance on large-scale optimization tasks. The main algorithm, STAGE, works by learning an evaluation function that predicts the outcome of a local search algorithm, such as hillclimbing or Walksat, from features of states visited during search. The learned evaluation function is then used to bias future search trajectories toward better optima on the same problem. Another algorithm, X-STAGE, transfers previously learned evaluation functions to new, similar optimization problems. Empirical results are provided on seven large-scale optimization domains: bin-packing, channel routing, Bayesian network structure-finding, radiotherapy treatment planning, cartogram design, Boolean satisfiability, and Boggle board setup.

Journal ArticleDOI
TL;DR: A comparative study among GA, SA, and TS, which shows that these algorithms have many similarities, but they also possess distinctive features, mainly in their strategies for searching the solution state space.

Journal ArticleDOI
TL;DR: This technique is a hybrid multi-pass method that combines random sampling procedures with a backward–forward method, and it greatly outperforms both the heuristics and metaheuristics currently available for the RCPSP, being thus competitive with the best heuristic solution techniques for this problem.
Abstract: In this work a new heuristic solution technique for the Resource-Constrained Project Scheduling Problem (RCPSP) is proposed. This technique is a hybrid multi-pass method that combines random sampling procedures with a backward–forward method. The impact of each component of the algorithm is evaluated through a step-wise computational analysis which in addition permits the value of their parameters to be specified. Furthermore, the performance of the new technique is evaluated against the best currently available heuristics using a well known set of instances. The results obtained point out that the new technique greatly outperforms both the heuristics and metaheuristics currently available for the RCPSP, being thus competitive with the best heuristic solution techniques for this problem.

Journal ArticleDOI
TL;DR: The purpose of this review is to give a detailed description of this metaheuristic and to show where it stands in terms of performance.
Abstract: Iterated Local Search (ILS) has many of the desirable features of a metaheuristic: it is simple, easy to implement, robust, and highly effective. The essential idea of ILS lies in focusing the search not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for a given optimization engine. The success of ILS lies in the biased sampling of this set of local optima. How effective this approach turns out to be depends mainly on the choice of the local search, the perturbations, and the acceptance criterion. So far, it has led to a number of state-of-the-art results without the use of too much problem-specific knowledge; ILS can often become a competitive or even state-of-the-art algorithm. The purpose of this review is both to give a detailed description of ILS and to show where it stands in terms of performance.
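The three ingredients named above (local search, perturbation, acceptance criterion) fit in a short skeleton. The coordinate-descent engine, kick size, and better-accepts criterion below are illustrative choices on a toy double-well objective, not prescriptions from the review.

```python
import random

def iterated_local_search(f, x0, iters=80, kick=3.0, seed=6):
    """ILS skeleton: descend to a local optimum, apply a perturbation
    ('kick'), descend again, and keep the new optimum only if it is
    better -- so the search samples the space of local optima rather
    than the full solution space."""
    rng = random.Random(seed)

    def local_search(x):
        # Coordinate descent with a shrinking step size.
        x = x[:]
        step = 0.25
        while step > 1e-4:
            improved = False
            for j in range(len(x)):
                for d in (step, -step):
                    y = x[:]
                    y[j] += d
                    if f(y) < f(x):
                        x, improved = y, True
            if not improved:
                step /= 2
        return x

    best = local_search(x0)
    for _ in range(iters):
        # Perturbation: kick the incumbent out of its current basin.
        cand = local_search([xi + rng.uniform(-kick, kick) for xi in best])
        if f(cand) < f(best):          # acceptance criterion: better accepts
            best = cand
    return best
```

The kick must be large enough to leave the current basin but small enough to keep the biased sampling of nearby local optima that the review credits for ILS's success.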

Book
01 Jun 2001
TL;DR: In this paper, the multilevel paradigm and its potential to aid the solution of combinatorial optimisation problems are considered. However, with the exception of the graph partitioning problem, multilevel techniques have not been widely applied to combinatorial optimisation problems.
Abstract: We consider the multilevel paradigm and its potential to aid the solution of combinatorial optimisation problems. The multilevel paradigm is a simple one, which involves recursive coarsening to create a hierarchy of approximations to the original problem. An initial solution is found (sometimes for the original problem, sometimes the coarsest) and then iteratively refined at each level. As a general solution strategy, the multilevel paradigm has been in use for many years and has been applied to many problem areas (most notably in the form of multigrid techniques). However, with the exception of the graph partitioning problem, multilevel techniques have not been widely applied to combinatorial optimisation problems. In this paper we address the issue of multilevel refinement for such problems and, with the aid of examples and results in graph partitioning, graph colouring and the travelling salesman problem, make a case for its use as a metaheuristic. The results provide compelling evidence that, although the multilevel framework cannot be considered as a panacea for combinatorial problems, it can provide an extremely useful addition to the combinatorial optimisation toolkit. We also give a possible explanation for the underlying process and extract some generic guidelines for its future use on other combinatorial problems.

Proceedings Article
01 Jan 2001
TL;DR: This paper primarily focuses on the significant progress that general frameworks within the meta-heuristics field have brought to solving combinatorial optimization problems, mainly those for planning and scheduling.
Abstract: Meta-heuristics support managers in decision-making with robust tools that provide high-quality solutions to important applications in business, engineering, economics and science in reasonable time horizons. In this paper we give some insight into the state of the art of meta-heuristics. The paper primarily focuses on the significant progress that general frameworks within the meta-heuristics field have brought to solving combinatorial optimization problems, mainly those for planning and scheduling.