Showing papers on "Metaheuristic published in 2000"


Proceedings ArticleDOI
16 Jul 2000
TL;DR: It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension.
Abstract: The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors.
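The constriction-factor update described in the abstract can be sketched compactly. The following is a minimal illustration, not the authors' code: the parameter values (c1 = c2 = 2.05, 30 particles) and the sphere test function are assumptions chosen only to make the sketch runnable; the elements taken from the abstract are the constriction factor chi applied to the whole velocity update and the clamping of velocities to Vmax = Xmax on each dimension.

```python
import random

def pso_constriction(f, dim, xmax, n_particles=30, iters=1000):
    """Minimal PSO sketch (minimization): constriction factor chi applied to the whole
    velocity update, with velocities clamped to Vmax = Xmax on each dimension.
    Parameter values and the search range [-xmax, xmax]^dim are illustrative assumptions."""
    c1 = c2 = 2.05
    phi = c1 + c2                                               # phi > 4 is required below
    chi = 2.0 / abs(2.0 - phi - (phi ** 2 - 4.0 * phi) ** 0.5)  # Clerc's constriction (~0.729)
    vmax = xmax                                                 # clamp |v| to the variable's range

    pos = [[random.uniform(-xmax, xmax) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = chi * (vel[i][d]
                           + c1 * r1 * (pbest[i][d] - pos[i][d])
                           + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))            # Vmax clamping
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: 10-dimensional sphere function
if __name__ == "__main__":
    print(pso_constriction(lambda x: sum(xi * xi for xi in x), dim=10, xmax=100.0)[1])
```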

2,922 citations


Journal ArticleDOI
TL;DR: Computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MMAS is currently among the best performing algorithms for these problems.

2,739 citations


Journal ArticleDOI
TL;DR: The notion of using co-evolution to adapt the penalty factors of a fitness function incorporated in a genetic algorithm (GA) for numerical optimization is introduced.

1,096 citations


Journal ArticleDOI
TL;DR: This paper overviews some models derived from the observation of real ants, emphasizing the role played by stigmergy as distributed communication paradigm, and shows how these models have inspired a number of novel algorithms for the solution of distributed optimization and distributed control problems.

821 citations


Journal Article
TL;DR: The features of Scatter Search and Path Relinking that set them apart from other evolutionary approaches, and that offer opportunities for creating increasingly more versatile and effective methods in the future, are described.
Abstract: —The evolutionary approach called Scatter Search, and its generalized form called Path Relinking, have proved unusually effective for solving a diverse array of optimization problems from both classical and real world settings. Scatter Search and Path Relinking differ from other evolutionary procedures, such as genetic algorithms, by providing unifying principles for joining solutions based on generalized path constructions (in both Euclidean and neighborhood spaces) and by utilizing strategic designs where other approaches resort to randomization. Scatter Search and Path Relinking are also intimately related to the Tabu Search metaheuristic, and derive additional advantages by making use of adaptive memory and associated memory-exploiting mechanisms that are capable of being adapted to particular contexts. We describe the features of Scatter Search and Path Relinking that set them apart from other evolutionary approaches, and that offer opportunities for creating increasingly more versatile and effective methods in the future.
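As a rough illustration of the path-construction idea (not the authors' implementation, which also covers Euclidean spaces and adaptive memory), the sketch below relinks two binary solutions by greedily walking from an initiating solution toward a guiding one; the binary representation and the greedy move choice are assumptions made for brevity.

```python
def path_relinking(start, guide, f):
    """Illustrative path relinking step for binary solution vectors (minimization):
    walk from `start` toward `guide` by changing, one position at a time, the entries
    where the two solutions differ, always taking the greedily best move and keeping
    the best intermediate solution visited. `f` is the objective function."""
    current = list(start)
    best, best_val = list(current), f(current)
    diff = [i for i in range(len(start)) if start[i] != guide[i]]
    while diff:
        # among the remaining differing positions, adopt the guide's value that helps most
        i_best = min(diff, key=lambda i: f(current[:i] + [guide[i]] + current[i + 1:]))
        current[i_best] = guide[i_best]
        diff.remove(i_best)
        val = f(current)
        if val < best_val:
            best, best_val = list(current), val
    return best, best_val
```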

801 citations


Proceedings Article
10 Jul 2000
TL;DR: An ant colony optimization (ACO) approach for the resource-constrained project scheduling problem (RCPSP) is presented, in which the ants use a combination of two pheromone evaluation methods to find new solutions.
Abstract: An ant colony optimization (ACO) approach for the resource-constrained project scheduling problem (RCPSP) is presented. Combinations of two pheromone evaluation methods are used by the ants to find new solutions. We tested our ACO algorithm on a set of large benchmark problems from the PSPLIB. Compared to several other heuristics for the RCPSP, including genetic algorithms, simulated annealing, tabu search, and different sampling methods, our algorithm performed best on average. For some test instances the algorithm was able to find new best solutions.
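The abstract does not spell out the selection rule; purely as an illustrative sketch, an ant choosing the activity for position i could combine a direct pheromone evaluation tau[i][j] with a summation evaluation over earlier positions as below. The weight gamma, the exponents alpha and beta, the heuristic values eta, and the eligible-set handling are all assumptions, not details taken from the paper.

```python
import random

def select_activity(i, eligible, tau, eta, alpha=1.0, beta=1.0, gamma=0.5):
    """Illustrative ACO selection rule for the activity placed at position i of a schedule.
    Combines a direct pheromone evaluation tau[i][j] with a summation evaluation
    sum(tau[k][j] for k <= i), mixed by the assumed weight gamma; eta[j] is a heuristic
    value and `eligible` holds the activities whose predecessors are already scheduled."""
    def combined_pheromone(j):
        direct = tau[i][j]
        summed = sum(tau[k][j] for k in range(i + 1))
        return gamma * direct + (1.0 - gamma) * summed

    weights = [(combined_pheromone(j) ** alpha) * (eta[j] ** beta) for j in eligible]
    r = random.uniform(0.0, sum(weights))      # roulette-wheel selection over the weights
    acc = 0.0
    for j, w in zip(eligible, weights):
        acc += w
        if acc >= r:
            return j
    return eligible[-1]
```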

699 citations


Journal ArticleDOI
TL;DR: This article surveys heuristics for the Vehicle Routing Problem, covering well-known schemes such as the savings method, the sweep algorithm, and various two-phase approaches, as well as tabu search heuristics, which have proved to be the most successful metaheuristic approach.

666 citations


Journal ArticleDOI
TL;DR: This paper overviews the methods developed since 1977 for solving various reliability optimization problems and their applications to various types of design problems, covering heuristics, metaheuristic algorithms, exact methods, reliability-redundancy allocation, multi-objective optimization, and the assignment of interchangeable components in reliability systems.
Abstract: This paper provides: an overview of the methods that have been developed since 1977 for solving various reliability optimization problems; applications of these methods to various types of design problems; and a discussion of heuristics, metaheuristic algorithms, exact methods, reliability-redundancy allocation, multi-objective optimization, and the assignment of interchangeable components in reliability systems. As in other application areas, exact solutions to reliability optimization problems are not necessarily desirable, because they are difficult to obtain and, even when available, their utility is marginal. A majority of the work in this area is devoted to developing heuristic and metaheuristic algorithms for solving optimal redundancy-allocation problems.

636 citations


Journal ArticleDOI
TL;DR: The performance of several state-of-the-art heuristics from the literature is evaluated on a standard set of test instances, and the most promising procedures are pointed out.

445 citations


Posted Content
TL;DR: In this paper, the performance of several state-of-the-art heuristics from the literature is evaluated on a standard set of test instances, and the most promising procedures are pointed out.
Abstract: We consider heuristic algorithms for the resource-constrained project scheduling problem. Starting with a literature survey, we summarize the basic components of heuristic approaches. We briefly describe so-called X-pass methods, which are based on priority rules, as well as metaheuristic algorithms. Subsequently, we present the results of our in-depth computational study. Here, we evaluate the performance of several state-of-the-art heuristics from the literature on the basis of a standard set of test instances and point out the most promising procedures. Moreover, we analyze the behavior of the heuristics with respect to their components, such as priority rules and metaheuristic strategy. Finally, we examine the impact of problem characteristics such as project size and resource scarceness on the performance.

438 citations


Journal ArticleDOI
16 May 2000
TL;DR: This paper proposes three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic that incorporates novel optimizations that improve efficiency greatly.
Abstract: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive, and explore a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.
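The specific Volcano-SH, Volcano-RU, and greedy algorithms are not reproduced in the abstract; as a generic sketch of the greedy idea of materializing shared sub-expressions, one might write the following, where `total_cost` is a hypothetical cost-estimation callback standing in for the optimizer's cost model.

```python
def greedy_materialize(candidates, total_cost):
    """Generic greedy sketch for multi-query optimization, not the paper's Volcano-SH or
    Volcano-RU: repeatedly materialize the shared sub-expression that most reduces the
    estimated cost of the whole query batch, stopping when no candidate helps.
    `candidates` is a set of sub-expression identifiers and `total_cost(S)` is an assumed
    callback estimating the batch cost when the set S of sub-expressions is materialized."""
    chosen = set()
    best_cost = total_cost(chosen)
    while True:
        best_candidate = None
        for c in candidates - chosen:
            cost = total_cost(chosen | {c})
            if cost < best_cost:
                best_cost, best_candidate = cost, c
        if best_candidate is None:
            return chosen, best_cost
        chosen.add(best_candidate)
```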

Journal ArticleDOI
TL;DR: A new optimization principle is presented that is particularly suited for more complex optimization problems (“discontinuous” ones, problems with hard-to-find admissible solutions, problems with complex objectives or many constraints).

Journal ArticleDOI
TL;DR: A new local optimizer called SOP-3-exchange is presented for the sequential ordering problem that extends a local search for the traveling salesman problem to handle multiple constraints directly without increasing computational complexity.
Abstract: We present a new local optimizer called SOP-3-exchange for the sequential ordering problem that extends a local search for the traveling salesman problem to handle multiple constraints directly without increasing computational complexity. An algorithm that combines the SOP-3-exchange with an Ant Colony Optimization algorithm is described, and we present experimental evidence that the resulting algorithm is more effective than existing methods for the problem. The best-known results for many of a standard test set of 22 problems are improved using the SOP-3-exchange with our Ant Colony Optimization algorithm or in combination with the MPO/AI algorithm (Chen and Smith 1996).

Journal ArticleDOI
TL;DR: It is shown that under certain conditions, the solutions generated in each iteration of this Graph-based Ant System converge with a probability that can be made arbitrarily close to 1 to the optimal solution of the given problem instance.

Journal ArticleDOI
TL;DR: The Nested Partitions (NP) method, a new randomized method for solving global optimization problems that systematically partitions the feasible region and concentrates the search in regions that are the most promising, is proposed.
Abstract: We propose a new randomized method for solving global optimization problems. This method, the Nested Partitions (NP) method, systematically partitions the feasible region and concentrates the search in regions that are the most promising. The most promising region is selected in each iteration based on information obtained from random sampling of the entire feasible region and local search. The method hence combines global and local search. We first develop the method for discrete problems and then show that the method can be extended to continuous global optimization. The method is shown to converge with probability one to a global optimum in finite time. In addition, we provide bounds on the expected number of iterations required for convergence, and we suggest two stopping criteria. Numerical examples are also presented to demonstrate the effectiveness of the method.
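A much-simplified sketch of one way the Nested Partitions idea can be realized for a discrete problem is shown below. Representing regions as fixed bit prefixes, using the best sampled value as the promising index, and backtracking all the way to the whole space are simplifying assumptions, not the paper's exact scheme (which also combines sampling with local search and proves convergence).

```python
import random

def nested_partitions(f, n_bits, samples_per_region=20, iters=200):
    """Much-simplified Nested Partitions sketch, minimizing f over binary strings of length
    n_bits. A region is a fixed prefix of the string; it is partitioned by fixing the next
    bit, the 'promising index' of a region is the best value among random samples drawn
    from it, and backtracking simply returns to the whole space. These choices are
    simplifying assumptions, not the paper's exact scheme."""
    def sample(prefix):
        # random full solution consistent with the given prefix
        return prefix + [random.randint(0, 1) for _ in range(n_bits - len(prefix))]

    prefix, best_val = [], float("inf")
    for _ in range(iters):
        subregions = [prefix + [0], prefix + [1]] if len(prefix) < n_bits else [list(prefix)]
        indices = [min(f(sample(r)) for _ in range(samples_per_region)) for r in subregions]
        surround = (min(f(sample([])) for _ in range(samples_per_region))
                    if prefix else float("inf"))
        best_val = min(best_val, min(indices))
        if surround < min(indices):
            prefix = []                                       # backtrack to the whole space
        else:
            prefix = subregions[indices.index(min(indices))]  # refine into the best subregion
            if len(prefix) == n_bits:
                prefix = []                                   # singleton reached; restart
    return best_val

# Example: maximize the number of ones (written as minimization of the negative count)
print(nested_partitions(lambda x: -sum(x), n_bits=12))
```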

Book
10 Feb 2000
TL;DR: For each algorithm, the authors present the procedures of the algorithm, parameter selection criteria, convergence property analysis, and parallelization, and several real-world examples that illustrate various aspects of the algorithms.
Abstract: For each algorithm, the authors present the procedures of the algorithm, parameter selection criteria, convergence property analysis, and parallelization. There are also several real-world examples that illustrate various aspects of the algorithms. The book includes an introduction to fuzzy logic and its application in the formulation of multi-objective optimization problems, a discussion on hybrid techniques that combine features of heuristics, a survey of recent research work, and examples that illustrate required mathematical concepts.

Book ChapterDOI
01 Jan 2000
TL;DR: Concepts from the "forking GA" (a multi-population evolutionary algorithm proposed to find multiple peaks in a multi-modal landscape) are used to enhance search in a dynamic landscape.
Abstract: Time-dependent optimization problems pose a new challenge to evolutionary algorithms, since they not only require a search for the optimum, but also a continuous tracking of the optimum over time. In this paper, we will use concepts from the "forking GA" (a multi-population evolutionary algorithm proposed to find multiple peaks in a multi-modal landscape) to enhance search in a dynamic landscape. The algorithm uses a number of smaller populations to track the most promising peaks over time, while a larger parent population is continuously searching for new peaks. We will show that this approach is indeed suitable for dynamic optimization problems by testing it on the recently proposed Moving Peaks Benchmark.

Book
01 Oct 2000
TL;DR: The theoretical background and the core univariate case are given, along with a discussion of parallel global optimization algorithms as decision procedures and their generalizations through Peano curves.
Abstract: Preface. Acknowledgements. Part One: Global Optimization Algorithms as Decision Procedures. Theoretical Background and Core Univariate Case. 1. Introduction. 2. Global Optimization Algorithms as Statistical Decision Procedures - The Information Approach. 3. Core Global Search Algorithm and Convergence Study. 4. Global Optimization Methods as Bounding Procedures - The Geometric Approach. Part Two: Generalizations for Parallel Computing, Constrained and Multiple Criteria Problems. 5. Parallel Global Optimization Algorithms and Evaluation of the Efficiency of Parallelism. 6. Global Optimization under Non-Convex Constraints - The Index Approach. 7. Algorithms for Multiple Criteria Multiextremal Problems. Part Three: Global Optimization in Many Dimensions. Generalizations through Peano Curves. 8. Peano-Type Space-Filling Curves as Means for Multivariate Problems. 9. Multidimensional Parallel Algorithms. 10. Multiple Peano Scannings and Multidimensional Problems. References. List of Algorithms. List of Figures. List of Tables. Index.

Journal ArticleDOI
TL;DR: Two main advantages of ECTS are pointed out: first, its principle is rather basic, directly inspired by combinatorial Tabu Search; second, it shows good performance for functions with a large number of variables (more than 10).

Journal ArticleDOI
TL;DR: The results obtained show that the new approach to handle constraints using evolutionary algorithms can consistently outperform the other techniques using relatively small sub-populations, and without a significant sacrifice in terms of performance.
Abstract: This paper presents a new approach to handle constraints using evolutionary algorithms. The new technique treats constraints as objectives, and uses a multiobjective optimization approach to solve the re-stated single-objective optimization problem. The new approach is compared against other numerical and evolutionary optimization techniques in several engineering optimization problems with different kinds of constraints. The results obtained show that the new approach can consistently outperform the other techniques using relatively small sub-populations, and without a significant sacrifice in terms of performance.
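The core restatement, treating each constraint violation as an additional objective to be minimized, can be illustrated in a few lines. This is a generic sketch of the idea, not necessarily the paper's exact formulation; the helper name `as_multiobjective` and the example problem are assumptions.

```python
def as_multiobjective(f, constraints):
    """Generic illustration of treating constraints as objectives (not necessarily the
    paper's exact formulation): 'minimize f(x) subject to g_i(x) <= 0' is restated as the
    vector-valued problem 'minimize (f(x), max(0, g_1(x)), ..., max(0, g_m(x)))', which a
    multiobjective evolutionary algorithm can tackle by driving every violation to zero."""
    def objectives(x):
        return [f(x)] + [max(0.0, g(x)) for g in constraints]
    return objectives

# Example: minimize x0 + x1 subject to x0^2 + x1^2 - 1 <= 0
obj = as_multiobjective(lambda x: x[0] + x[1],
                        [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0])
print(obj([0.5, 0.5]))   # [1.0, 0.0] -- feasible point, so zero constraint violation
```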

Journal ArticleDOI
TL;DR: Using the concept of min–max optimum, a new GA-based multiobjective optimization technique is proposed and two truss design problems are solved with it, showing that this technique generates better trade-offs and that the genetic algorithm can be used as a reliable numerical optimization tool.

Journal ArticleDOI
TL;DR: A Reactive GRASP is proposed, in which the basic parameter that defines the restrictiveness of the candidate list during the construction phase is self-adjusted according to the quality of the solutions previously found; on most of the literature problems considered, it matches the optimal solution found by an exact column-generation with branch-and-bound algorithm.
Abstract: A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. In this paper, we describe a GRASP for a matrix decomposition problem arising in the context of traffic assignment in communication satellites. We review basic concepts of GRASP: construction and local search algorithms. The local search phase is based on the use of a new type of neighborhood defined by constructive and destructive moves. The implementation of a GRASP for the matrix decomposition problem is described in detail. Extensive computational experiments on literature and randomly generated problems are reported. Moreover, we propose a new procedure, Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list during the construction phase is self-adjusted according to the quality of the solutions previously found. The approach is robust and does not require calibration efforts. On most of the literature problems considered, the new Reactive GRASP heuristic matches the optimal solution found by an exact column-generation with branch-and-bound algorithm.
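The self-adjustment of the candidate-list parameter can be sketched as follows. This is an illustrative outline of the Reactive GRASP idea rather than the authors' implementation: `construct` and `local_search` are assumed user-supplied callbacks, and the set of alpha values, the re-estimation period, the quality weights, and the assumption of positive objective values are illustrative choices.

```python
import random

def reactive_grasp(construct, local_search, alphas=(0.1, 0.3, 0.5, 0.7, 0.9), iterations=200):
    """Sketch of the Reactive GRASP idea (minimization, positive objective values assumed).
    `construct(alpha)` is an assumed user-supplied greedy-randomized construction whose
    candidate-list restrictiveness is governed by alpha; `local_search(sol)` returns an
    improved (solution, value) pair. The probability of picking each alpha is periodically
    re-weighted by the average quality of the solutions it produced."""
    probs = [1.0 / len(alphas)] * len(alphas)
    sums = [0.0] * len(alphas)                 # accumulated objective values per alpha
    counts = [0] * len(alphas)
    best_sol, best_val = None, float("inf")

    for it in range(1, iterations + 1):
        k = random.choices(range(len(alphas)), weights=probs)[0]
        sol, val = local_search(construct(alphas[k]))
        sums[k] += val
        counts[k] += 1
        if val < best_val:
            best_sol, best_val = sol, val
        if it % 50 == 0:                       # periodically re-estimate the probabilities
            avg = [sums[i] / counts[i] if counts[i] else best_val for i in range(len(alphas))]
            q = [best_val / a if a > 0 else 1.0 for a in avg]   # better average -> larger weight
            probs = [qi / sum(q) for qi in q]
    return best_sol, best_val
```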

Journal ArticleDOI
TL;DR: In this article, the problem of finding extremal graphs for expressions involving one or more invariants is viewed as a problem of combinatorial optimization and solved using the Variable Neighborhood Search metaheuristic.

Journal ArticleDOI
01 May 2000
TL;DR: The proposed parallel tabu search algorithm has been shown to be effective in exploring this type of optimization landscape and is the most comprehensive combinatorial optimization technique available for treating difficult problems such as transmission expansion planning.
Abstract: Large scale combinatorial problems such as the network expansion problem present an amazingly high number of alternative configurations with practically the same investment, but with substantially different structures (configurations obtained with different sets of circuit/transformer additions). The proposed parallel tabu search algorithm has been shown to be effective in exploring this type of optimization landscape. The algorithm is a third generation tabu search procedure with several advanced features. This is the most comprehensive combinatorial optimization technique available for treating difficult problems such as transmission expansion planning. The method includes features of a variety of other approaches such as heuristic search, simulated annealing and genetic algorithms. In all test cases studied there are new generation and load sites which can be connected to an existing main network; such connections may require more than one line or transformer addition, which makes the problem harder in the sense that more combinations have to be considered.

Journal ArticleDOI
TL;DR: This paper describes a novel implementation of the Simulated Annealing algorithm designed to explore the trade-off between multiple objectives in optimization problems and concludes that the proposed algorithm offers an effective and easily implemented method for exploring thetrade-off in multiobjective optimization problems.
Abstract: This paper describes a novel implementation of the Simulated Annealing algorithm designed to explore the trade-off between multiple objectives in optimization problems. During search, the algorithm maintains and updates an archive of non-dominated solutions between each of the competing objectives. At the end of search, the final archive corresponds to a number of optimal solutions from which the designer may choose a particular configuration. A new acceptance probability formulation based on an annealing schedule with multiple temperatures (one for each objective) is proposed along with a novel restart strategy. The performance of the algorithm is demonstrated on three examples. It is concluded that the proposed algorithm offers an effective and easily implemented method for exploring the trade-off in multiobjective optimization problems.
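The two distinctive ingredients, an acceptance probability built from one temperature per objective and an archive of non-dominated solutions, can be sketched as below. This is written in the spirit of the abstract, not as the paper's exact acceptance formula; the product-of-Boltzmann-factors rule is an assumption.

```python
import math
import random

def accept(deltas, temps):
    """Illustrative acceptance rule for multiobjective simulated annealing with one
    temperature per objective (a sketch in the spirit of the paper, not its exact
    formula): each worsened objective k contributes a Boltzmann factor exp(-delta_k / T_k)."""
    p = 1.0
    for d, t in zip(deltas, temps):
        if d > 0:                              # only increases (minimization) are penalized
            p *= math.exp(-d / t)
    return random.random() < p

def update_archive(archive, candidate):
    """Keep only mutually non-dominated objective vectors (minimization)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    if any(dominates(a, candidate) for a in archive):
        return archive                         # candidate is dominated; archive unchanged
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

# Example: a candidate that trades one objective for another joins the archive
print(update_archive([[1.0, 2.0]], [2.0, 1.0]))   # both kept: mutually non-dominated
```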

Book ChapterDOI
18 Sep 2000
TL;DR: An application of the Ant Colony Optimization (ACO) metaheuristic to the single machine total weighted tardiness problem is presented, yielding a novel ACO algorithm that uses a heterogeneous colony of ants and is highly effective in finding the best-known solutions on all instances of a widely used set of benchmark problems.
Abstract: In this article we present an application of the Ant Colony Optimization (ACO) metaheuristic to the single machine total weighted tardiness problem. First, we briefly discuss the constructive phase of ACO, in which a colony of artificial ants generates a set of feasible solutions. Then, we introduce some simple but very effective local search procedures. Last, we combine the constructive phase with local search, obtaining a novel ACO algorithm that uses a heterogeneous colony of ants and is highly effective in finding the best-known solutions on all instances of a widely used set of benchmark problems.

Proceedings ArticleDOI
06 Sep 2000
TL;DR: A robust surrogate-model-based optimization method is presented that has good global search properties and proven local convergence results, providing a provably convergent way of ensuring local optimality.
Abstract: This paper describes an algorithm and provides test results for surrogate-model-based optimization. In this type of optimization, the objective and constraint functions are represented by global "surrogates", i.e. response models, of the "true" problem responses. In general, guarantees of global optimality are not possible. However, a robust surrogate-model-based optimization method is presented here that has good global search properties, and proven local convergence results. This paper describes methods for handling three key issues in surrogate-model-based optimization. These issues are maintaining a balance of effort between global design space exploration and local optimizer region refinement, maintaining good surrogate model conditioning as points "pile up" in local regions, and providing a provably convergent method for ensuring local optimality.
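A heavily simplified sketch of a surrogate-model-based optimization loop is given below to illustrate the balance between global exploration and local refinement that the paper discusses. It is not the paper's algorithm: the inverse-distance-weighted surrogate, the alternating global/local schedule, and the fixed local radius are assumptions chosen only to keep the sketch short and runnable.

```python
import random

def surrogate_optimize(f, bounds, n_init=10, iters=40):
    """Heavily simplified surrogate-model-based optimization loop (not the paper's
    algorithm): an inverse-distance-weighted interpolant stands in for the true objective,
    iterations alternate a global random sample with a cheap local search of the surrogate
    around the incumbent, and only one true evaluation of f is spent per iteration."""
    lows, highs = zip(*bounds)

    def rand_point():
        return [random.uniform(l, h) for l, h in zip(lows, highs)]

    X = [rand_point() for _ in range(n_init)]
    Y = [f(x) for x in X]

    def surrogate(x):
        # inverse-distance-weighted prediction from all points evaluated so far
        num = den = 0.0
        for xi, yi in zip(X, Y):
            d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
            if d2 == 0.0:
                return yi
            num += yi / d2
            den += 1.0 / d2
        return num / den

    for it in range(iters):
        if it % 2 == 0:
            cand = rand_point()                       # global exploration sample
        else:
            best = X[Y.index(min(Y))]                 # local refinement near the incumbent
            radius = [(h - l) * 0.05 for l, h in zip(lows, highs)]
            trials = [[min(h, max(l, b + random.uniform(-r, r)))
                       for b, l, h, r in zip(best, lows, highs, radius)]
                      for _ in range(200)]
            cand = min(trials, key=surrogate)         # search the surrogate, not the true f
        X.append(cand)
        Y.append(f(cand))                             # single true evaluation per iteration
    i = Y.index(min(Y))
    return X[i], Y[i]
```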

Journal ArticleDOI
TL;DR: This work presents a class of direct search algorithms to provide limit points that satisfy some appropriate necessary conditions for local optimality for such problems and gives a more expensive version of the algorithm that guarantees additional necessary optimality conditions.
Abstract: Many engineering optimization problems involve a special kind of discrete variable that can be represented by a number, but this representation has no significance. Such variables arise when a decision involves some situation like a choice from an unordered list of categories. This has two implications: The standard approach of solving problems with continuous relaxations of discrete variables is not available, and the notion of local optimality must be defined through a user-specified set of neighboring points. We present a class of direct search algorithms to provide limit points that satisfy some appropriate necessary conditions for local optimality for such problems. We give a more expensive version of the algorithm that guarantees additional necessary optimality conditions. A small example illustrates the differences between the two versions. A real thermal insulation system design problem illustrates the efficacy of the user controls for this class of algorithms.
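The role of the user-specified neighborhood can be illustrated with a simple poll-based sketch. This is an illustration of the general idea, not the authors' algorithm class: the coordinate polling of continuous variables, the step-halving rule, and the `neighbors` callback signature are assumptions.

```python
def mixed_variable_poll(f, x_cont, x_cat, neighbors, step=0.5, min_step=1e-3):
    """Sketch of a direct-search poll for mixed variables (an illustration of the idea,
    not the authors' algorithm class). Continuous variables are polled along coordinate
    directions with the current step size; categorical variables are polled over the
    user-supplied `neighbors(x_cat)` set, which encodes the problem-specific notion of a
    local optimum. The step size is halved whenever a complete poll fails to improve."""
    x_cont = list(x_cont)
    best_val = f(x_cont, x_cat)
    while step >= min_step:
        improved = False
        # poll the continuous coordinate directions
        for i in range(len(x_cont)):
            for delta in (step, -step):
                trial = x_cont[:i] + [x_cont[i] + delta] + x_cont[i + 1:]
                val = f(trial, x_cat)
                if val < best_val:
                    x_cont, best_val, improved = trial, val, True
        # poll the user-defined neighbors of the categorical part
        for cat in neighbors(x_cat):
            val = f(x_cont, cat)
            if val < best_val:
                x_cat, best_val, improved = cat, val, True
        if not improved:
            step *= 0.5                        # refine the mesh when the poll fails
    return x_cont, x_cat, best_val
```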

Journal ArticleDOI
TL;DR: Computational results, obtained on a number of standard problem instances, testify to the effectiveness of the proposed ANTS metaheuristic, that is, an approach following the ant colony optimization paradigm.

Journal ArticleDOI
TL;DR: The use of response surface estimation in collaborative optimization, an architecture for large-scale multidisciplinary design, is described, and it is demonstrated how response surface models of subproblem optimization results improve the performance of collaborative optimization.
Abstract: The use of response surface estimation in collaborative optimization, an architecture for large-scale multidisciplinary design, is described. Collaborative optimization preserves the autonomy of individual disciplines while providing a mechanism for coordinating the overall design problem and progressing toward improved designs. Collaborative optimization is a two-level optimization architecture, with discipline-specific optimizations free to specify local designs, and a global optimization that ensures that all of the discipline designs eventually agree on a single value for those variables that are shared in common. Results demonstrate how response surface models of subproblem optimization results improve the performance of collaborative optimization. The utility of response surface estimation in collaborative optimization depends on the generation of inexpensive accurate response surface models and the refinement of these models over several fitting cycles. Special properties of the subproblem optimization formulation are exploited to reduce the number of required subproblem optimizations to develop a quadratic model from O(n^2) to O(n/2). Response surface refinement is performed using ideas from trust region methods. Results for the combined approaches are compared through the design optimization of a tailless unmanned air vehicle in 44 design variables.