
Showing papers on "Multi-swarm optimization published in 2000"


Proceedings ArticleDOI
16 Jul 2000
TL;DR: It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension.
Abstract: The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors.
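As a concrete illustration of the update rule being compared, here is a minimal sketch of one PSO step using Clerc's constriction factor, with the velocity clamped to the dynamic range Xmax on each dimension as the paper recommends. The function and parameter names are ours, not the paper's, and the symmetric search range is an assumption.

```python
import numpy as np

def constriction_pso_step(x, v, pbest, gbest, xmax, c1=2.05, c2=2.05, rng=np.random):
    """One particle-swarm update using Clerc's constriction factor, with the
    velocity additionally clamped to the dynamic range xmax of each dimension.
    Names and defaults are illustrative only."""
    phi = c1 + c2                                               # must exceed 4
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi)) # ~0.729 for phi = 4.1
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    v = np.clip(v, -xmax, xmax)        # Vmax limited to Xmax per dimension
    x = np.clip(x + v, -xmax, xmax)    # assumes a search range symmetric about 0
    return x, v
```

With the default c1 = c2 = 2.05, the constriction factor works out to roughly 0.729, the value most later PSO implementations adopt.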

2,922 citations


Journal ArticleDOI
TL;DR: An overview of the methods developed since 1977 for solving various reliability optimization problems and of their applications to various types of design problems, covering heuristics, metaheuristic algorithms, exact methods, reliability-redundancy allocation, multi-objective optimization, and the assignment of interchangeable components in reliability systems.
Abstract: This paper provides an overview of the methods that have been developed since 1977 for solving various reliability optimization problems, together with applications of these methods to various types of design problems; the topics covered include heuristics, metaheuristic algorithms, exact methods, reliability-redundancy allocation, multi-objective optimization and the assignment of interchangeable components in reliability systems. As in other application areas, exact solutions to reliability optimization problems are not necessarily desirable, because they are difficult to obtain and, even when they are available, their utility is marginal. A majority of the work in this area is devoted to developing heuristic and metaheuristic algorithms for solving optimal redundancy-allocation problems.

636 citations


Journal Article
TL;DR: This paper presents a method for employing particle swarm optimizers in a cooperative configuration by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm.
Abstract: This paper presents a method to employ particle swarm optimizers in a cooperative configuration. This is achieved by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm. The application of this technique to neural network training is investigated, with promising results.
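A minimal sketch of the cooperative split described above, assuming a generic objective f over an n-dimensional vector; the per-swarm optimizer is abstracted behind a callable, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def cooperative_optimize(f, dim, n_parts, sub_optimizer, iters=100):
    """Split an n-dimensional problem into n_parts sub-vectors, each handled by
    its own swarm; a sub-vector is evaluated by plugging it into a context
    vector built from the other swarms' current best pieces."""
    parts = np.array_split(np.arange(dim), n_parts)     # index sets of the sub-vectors
    context = np.random.uniform(-1.0, 1.0, dim)         # current best full solution

    for _ in range(iters):
        for idx in parts:
            def sub_objective(piece, idx=idx):
                trial = context.copy()
                trial[idx] = piece                       # evaluate the piece in full context
                return f(trial)
            context[idx] = sub_optimizer(sub_objective, len(idx))  # best sub-vector found
    return context, f(context)
```

Here `sub_optimizer(g, k)` stands in for a full PSO run over a k-dimensional sub-space; any black-box minimizer can be substituted to experiment with the decomposition itself.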

344 citations


Book ChapterDOI
01 Jan 2000
TL;DR: Concepts from the "forking GA" (a multi-population evolutionary algorithm proposed to find multiple peaks in a multi-modal landscape) are used to enhance search in a dynamic landscape.
Abstract: Time-dependent optimization problems pose a new challenge to evolutionary algorithms, since they not only require a search for the optimum, but also a continuous tracking of the optimum over time. In this paper, we will use concepts from the "forking GA" (a multi-population evolutionary algorithm proposed to find multiple peaks in a multi-modal landscape) to enhance search in a dynamic landscape. The algorithm uses a number of smaller populations to track the most promising peaks over time, while a larger parent population is continuously searching for new peaks. We will show that this approach is indeed suitable for dynamic optimization problems by testing it on the recently proposed Moving Peaks Benchmark.
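A rough sketch of this division of labour; the generation step, the fork test, and all parameters are our own simplifications for illustration, not the forking GA itself.

```python
import numpy as np

def evolve(pop, f, sigma=0.05):
    """One generation of a stand-in EA: Gaussian mutation plus truncation selection."""
    offspring = [p + sigma * np.random.randn(*p.shape) for p in pop]
    return sorted(pop + offspring, key=f, reverse=True)[:len(pop)]

def multi_population_step(parent, children, f, fork_radius=0.5, bounds=(-5.0, 5.0)):
    """Parent population keeps exploring for new peaks while each small child
    population tracks one peak found earlier; a new child is forked when the
    parent's best lies in a region no child currently covers."""
    parent = evolve(parent, f)
    children = [evolve(c, f) for c in children]
    best = max(parent, key=f)
    if all(np.linalg.norm(best - max(c, key=f)) > fork_radius for c in children):
        children.append([best + 0.1 * np.random.randn(*best.shape) for _ in range(10)])
        # re-seed the parent so it keeps exploring instead of re-converging on this peak
        parent = [np.random.uniform(*bounds, best.shape) for _ in range(len(parent))]
    return parent, children
```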

342 citations


Book
01 Oct 2000
TL;DR: Theoretical background and the core univariate case are given, along with a discussion of parallel global optimization algorithms as decision procedures and their generalizations through Peano curves.
Abstract: Preface. Acknowledgements. Part One: Global Optimization Algorithms as Decision Procedures. Theoretical Background and Core Univariate Case. 1. Introduction. 2. Global Optimization Algorithms as Statistical Decision Procedures - The Information Approach. 3. Core Global Search Algorithm and Convergence Study. 4. Global Optimization Methods as Bounding Procedures - The Geometric Approach. Part Two: Generalizations for Parallel Computing, Constrained and Multiple Criteria Problems. 5. Parallel Global Optimization Algorithms and Evaluation of the Efficiency of Parallelism. 6. Global Optimization under Non-Convex Constraints - The Index Approach. 7. Algorithms for Multiple Criteria Multiextremal Problems. Part Three: Global Optimization in Many Dimensions. Generalizations through Peano Curves. 8. Peano-Type Space-Filling Curves as Means for Multivariate Problems. 9. Multidimensional Parallel Algorithms. 10. Multiple Peano Scannings and Multidimensional Problems. References. List of Algorithms. List of Figures. List of Tables. Index.

297 citations


Journal ArticleDOI
TL;DR: The results obtained show that the new constraint-handling approach can consistently outperform the other techniques while using relatively small sub-populations and without a significant sacrifice in performance.
Abstract: This paper presents a new approach to handling constraints with evolutionary algorithms. The technique treats constraints as objectives and uses a multiobjective optimization approach to solve the restated problem. The new approach is compared against other numerical and evolutionary optimization techniques on several engineering optimization problems with different kinds of constraints. The results obtained show that the new approach can consistently outperform the other techniques while using relatively small sub-populations, and without a significant sacrifice in performance.
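The restatement itself is easy to illustrate: each inequality constraint contributes its violation as an additional objective next to the original one. This is a generic sketch of that idea; the paper's actual selection scheme over sub-populations is not reproduced here.

```python
def as_multiobjective(f, constraints):
    """Restate min f(x) subject to g_i(x) <= 0 as a vector of objectives:
    the original f plus one violation term max(0, g_i(x)) per constraint.
    A multiobjective evolutionary algorithm is then applied to this vector."""
    def objectives(x):
        return [f(x)] + [max(0.0, g(x)) for g in constraints]
    return objectives

# Example: minimize x^2 + y^2 subject to x + y >= 1, written as 1 - x - y <= 0.
objs = as_multiobjective(lambda v: v[0] ** 2 + v[1] ** 2,
                         [lambda v: 1.0 - v[0] - v[1]])
print(objs([0.5, 0.5]))   # [0.5, 0.0] -- feasible point, zero violation
```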

271 citations


Journal ArticleDOI
TL;DR: Using the concept of min–max optimum, a new GA-based multiobjective optimization technique is proposed and two truss design problems are solved using it, proving that this technique generates better trade-offs and that the genetic algorithm can be used as a reliable numerical optimization tool.

254 citations


Proceedings ArticleDOI
06 Sep 2000
TL;DR: A robust surrogate-model-based optimization method is presented that has good global search properties and proven local convergence results, providing a provably convergent way of ensuring local optimality.
Abstract: This paper describes an algorithm and provides test results for surrogate-model-based optimization. In this type of optimization, the objective and constraint functions are represented by global "surrogates", i.e. response models, of the "true" problem responses. In general, guarantees of global optimality are not possible. However, a robust surrogate-model-based optimization method is presented here that has good global search properties, and proven local convergence results. This paper describes methods for handling three key issues in surrogate-model-based optimization. These issues are maintaining a balance of effort between global design space exploration and local optimizer region refinement, maintaining good surrogate model conditioning as points "pile up" in local regions, and providing a provably convergent method for ensuring local optimality.

210 citations


Journal ArticleDOI
TL;DR: The use of response surface estimation in collaborative optimization, an architecture for large-scale multidisciplinary design, is described, and it is demonstrated how response surface models of subproblem optimization results improve the performance of collaborative optimization.
Abstract: The use of response surface estimation in collaborative optimization, an architecture for large-scale multidisciplinary design, is described. Collaborative optimization preserves the autonomy of individual disciplines while providing a mechanism for coordinating the overall design problem and progressing toward improved designs. Collaborative optimization is a two-level optimization architecture, with discipline-specific optimizations free to specify local designs, and a global optimization that ensures that all of the discipline designs eventually agree on a single value for those variables that are shared in common. Results demonstrate how response surface models of subproblem optimization results improve the performance of collaborative optimization. The utility of response surface estimation in collaborative optimization depends on the generation of inexpensive, accurate response surface models and the refinement of these models over several fitting cycles. Special properties of the subproblem optimization formulation are exploited to reduce the number of subproblem optimizations required to develop a quadratic model from O(n^2) to O(n/2). Response surface refinement is performed using ideas from trust region methods. Results for the combined approaches are compared through the design optimization of a tailless unmanned air vehicle in 44 design variables.

157 citations


Proceedings ArticleDOI
06 Sep 2000
TL;DR: The performance of the SAO strategy on this second test case demonstrates the utility of using this optimization method on engineering optimization problems, many of which contain multiple local optima.
Abstract: A trust region-based optimization method has been incorporated into the DAKOTA optimization software toolkit. This trust region approach is designed to manage surrogate models of the objective and constraint functions during the optimization process. In this method, the surrogate functions are employed in a sequence of optimization steps, where the original expensive objective and constraint functions are used to update the surrogates during the optimization process. This sequential approximate optimization (SAO) strategy is demonstrated on two test cases, with comparisons to optimization results obtained with a quasi-Newton method. For both test cases the SAO strategy exhibits desirable convergence trends. In the first test case involving a smooth function, the SAO strategy converges to a slightly better minimum than the quasi-Newton method, although it uses twice as many function evaluations. In the second test case involving a function with many local minima, the SAO strategy generally finds better local minima than does the quasi-Newton method. The performance of the SAO strategy on this second test case demonstrates the utility of using this optimization method on engineering optimization problems, many of which contain multiple local optima.
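A sketch of the sequential approximate optimization loop in its simplest form, with a linear least-squares surrogate standing in for the toolkit's surrogate models; the sampling plan, acceptance rule, and trust-region update below are illustrative assumptions, not DAKOTA's implementation.

```python
import numpy as np

def sao_minimize(f, x0, radius=1.0, iters=60, n_samples=15, shrink=0.5, grow=2.0, seed=0):
    """Trust-region sequential approximate optimization: fit a cheap surrogate to
    the expensive function inside the trust region, step to the surrogate's
    minimizer on the region boundary, then accept or reject the step and resize
    the region according to the true improvement."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        pts = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
        vals = np.array([f(p) for p in pts])
        A = np.hstack([pts, np.ones((n_samples, 1))])        # linear surrogate by least squares
        coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
        g = coef[:-1]
        cand = x - radius * g / (np.linalg.norm(g) + 1e-12)  # steepest descent of the surrogate
        fcand = f(cand)
        if fcand < fx:                                       # true improvement: accept and expand
            x, fx, radius = cand, fcand, radius * grow
        else:                                                # no improvement: reject and shrink
            radius *= shrink
    return x, fx

# Toy usage on the two-dimensional Rosenbrock function
rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
print(sao_minimize(rosen, [-1.5, 2.0]))
```

Even in this toy version, the expensive function is consulted only to build the surrogate and to accept or reject each step, which is the property the abstract highlights.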

106 citations


Posted Content
12 Apr 2000
TL;DR: In this paper, a framework for automated optimization of stochastic simulation models using Response Surface Methodology is developed, which is especially intended for simulation models where the calculation of the corresponding response function is very expensive or time-consuming.
Abstract: We develop a framework for automated optimization of stochastic simulation models using Response Surface Methodology. The framework is especially intended for simulation models where the calculation of the corresponding stochastic response function is very expensive or time-consuming. Response Surface Methodology is frequently used for the optimization of stochastic simulation models in a non-automated fashion. In scientific applications there is a clear need for a standardized algorithm based on Response Surface Methodology. In addition, an automated algorithm is less time-consuming, since there is no need to interfere in the optimization process. In our framework for automated optimization we describe all choices that have to be made in constructing such an algorithm.
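The kind of iteration such a framework automates can be sketched as follows: average replicated (noisy) simulation runs over a small design around the current settings, fit a first-order response surface, and move along the estimated steepest-descent direction. This is a generic RSM sketch under our own simplifications, not the framework proposed in the paper.

```python
import numpy as np

def rsm_step(simulate, x, delta=0.5, reps=5, step=1.0):
    """One automated response-surface iteration for a noisy simulation model:
    average `reps` replications at axial design points around x, fit a
    first-order model by least squares, and move along steepest descent."""
    n = len(x)
    design = np.vstack([x] + [x + delta * e for e in np.eye(n)]
                           + [x - delta * e for e in np.eye(n)])
    y = np.array([np.mean([simulate(p) for _ in range(reps)]) for p in design])
    A = np.hstack([design, np.ones((len(design), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    slope = coef[:-1]
    return x - step * slope / (np.linalg.norm(slope) + 1e-12)

# Toy noisy response: a quadratic bowl plus simulation noise
rng = np.random.default_rng(1)
noisy = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2 + rng.normal(0.0, 0.1)
x = np.array([0.0, 0.0])
for k in range(10):
    x = rsm_step(noisy, x, step=1.0 / (k + 1))   # diminishing steps
print(x)   # moves toward the optimum near (2, -1)
```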

Journal ArticleDOI
11 Jun 2000
TL;DR: A powerful new Aggressive Space Mapping (ASM) optimization algorithm is presented in this paper, which draws upon recent developments in both surrogate-based optimization and microwave device neuromodeling.
Abstract: A powerful new Aggressive Space Mapping (ASM) optimization algorithm is presented. It draws upon recent developments in both surrogate-based optimization and microwave device neuromodeling. Our surrogate formulation (new to microwave engineering) exploits, in a novel way, a linear frequency-space mapping. This is a powerful approach to severe response misalignments.

Proceedings ArticleDOI
27 Jul 2000
TL;DR: The paper discusses the problems with using gradient descent to train product unit neural networks, and shows that particle swarm optimization, genetic algorithms and LeapFrog are efficient alternatives to successfully train product units.
Abstract: Product units in the hidden layer of multilayer neural networks provide a powerful mechanism for neural networks to efficiently learn higher-order combinations of inputs. Training product unit networks using local optimization algorithms is difficult due to an increased number of local minima and increased chances of network paralysis. The paper discusses the problems with using gradient descent to train product unit neural networks, and shows that particle swarm optimization, genetic algorithms and LeapFrog are efficient alternatives to successfully train product unit neural networks.
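For reference, a product unit computes a weighted product of its inputs rather than a weighted sum, which is what yields higher-order input combinations in a single unit but also the rugged error surface described above. A minimal illustration (names are ours):

```python
import numpy as np

def product_unit(x, w):
    """Output of a single product unit: prod_i x_i ** w_i.
    For positive inputs this equals exp(sum_i w_i * log x_i), which shows how
    one unit can represent a higher-order combination of its inputs."""
    x = np.asarray(x, dtype=float)
    return float(np.prod(np.power(x, w)))

# x1^2 * x2^-1: a second-order interaction captured with just two weights
print(product_unit([3.0, 2.0], [2.0, -1.0]))   # -> 4.5
```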

Journal ArticleDOI
TL;DR: In this article, the authors present an overview of the recent advances in deterministic global optimization approaches and their applications in the areas of process design and control, focusing on global optimization methods for (a) twice-differentiable constrained nonlinear optimization problems, (b) mixed-integer nonlinear optimization problems, and (c) locating all solutions of nonlinear systems of equations.


Book ChapterDOI
01 Jan 2000
TL;DR: A parallel version of the Bayesian Optimization Algorithm is proposed in which the optimization time decreases linearly with the number of processors.
Abstract: In the last few years there has been a growing interest in the field of Estimation of Distribution Algorithms (EDAs), where the crossover and mutation genetic operators are replaced by probability estimation and sampling techniques. The Bayesian Optimization Algorithm incorporates methods for learning Bayesian networks and uses these to model the promising solutions and to generate new ones. The aim of this paper is to propose a parallel version of this algorithm, in which the optimization time decreases linearly with the number of processors. During the parallel construction of the network, an explicit topological ordering of the variables is used to keep the model acyclic. The performance of the optimization process does not seem to be affected by this constraint, and our version of the algorithm was successfully tested on the discrete combinatorial problem of graph partitioning as well as on deceptive functions.


Journal ArticleDOI
TL;DR: An ANSI FORTRAN implementation of a comprehensive algorithm for the global (or near-global) optimization of DR systems within a radial region of experimentation is presented; the results show that DR2 is more effective at locating optimal operating conditions even if the DR system is degenerate.
Abstract: During exploration of an industrial process, the engineer/experimenter must take into account both the mean and variance of the system in order to seek the appropriate parameter settings for better

Journal ArticleDOI
TL;DR: The paper presents a method called MOGA-INS for Multidisciplinary Design Optimization (MDO) of systems that involve multiple competing objectives with a mix of continuous and discrete variables based on the Immune Network Simulation approach that has been extended by combining it with a Multi-Objective Genetic Algorithm.
Abstract: The paper presents a method called MOGA-INS for Multidisciplinary Design Optimization (MDO) of systems that involve multiple competing objectives with a mix of continuous and discrete variables. The method is based on the Immune Network Simulation (INS) approach that has been extended by combining it with a Multi-Objective Genetic Algorithm (MOGA). MOGA obtains Pareto solutions for multiple objective optimization problems in an all-at-once manner. INS provides a coordination strategy for subsystems in MDO to interact and is naturally suited for genetic algorithm-based optimization methods. The MOGA-INS method is demonstrated with a speed-reducer example, formulated as a two-level two-objective design optimization problem.

Journal ArticleDOI
TL;DR: This work uses a recently developed coordination language, called Manifold, to implement a distributed optimization of Rosenbrock's function, and shows that this implementation outperforms a sequential optimization algorithm based on standard genetic algorithms.

Proceedings ArticleDOI
07 Apr 2000
TL;DR: This research demonstrates how PSO can be modified to solve multiobjective optimization problems (MOPs) and shows its effectiveness on two MOPs.
Abstract: Evolutionary algorithms (EAs) are search procedures based on natural selection [2]. They have been successfully applied to a wide variety of optimization problems [4]. Particle Swarm Optimization (PSO) [1,7] is a new type of evolutionary paradigm that has been successfully used to solve a number of single objective optimization problems (SOPs). However, to date, no one has applied PSO in an effort to solve multiobjective optimization problems (MOPs). The purpose of our research is to demonstrate how PSO can be modified to solve MOPs. In addition to showing how this can be done, we demonstrate its effectiveness on two MOPs.

Posted Content
TL;DR: The paper first compares the use of optimization heuristics to classical optimization techniques for the selection of optimal portfolios; the heuristic approach is then applied to problems outside the standard mean-variance framework, where classical optimization fails.
Abstract: The paper first compares the use of optimization heuristics to the classical optimization techniques for the selection of optimal portfolios. Second, the heuristic approach is applied to problems other than those in the standard mean-variance framework where the classical optimization fails.

Proceedings ArticleDOI
04 Dec 2000
TL;DR: In this article, a distribution state estimation method using a hybrid particle swarm optimization (HPSO) is proposed, which considers practical measurements in distribution systems and assumes that absolute values of voltage and current can be measured at the secondary side buses of substations (S/Ss) and RTUs (remote terminal units).
Abstract: This paper proposes a distribution state estimation method using a hybrid particle swarm optimization (HPSO). The proposed method considers practical measurements in distribution systems and assumes that absolute values of voltage and current can be measured at the secondary side buses of substations (S/Ss) and RTUs (remote terminal units) in distribution systems. The method can estimate load and distributed generation output values at each node considering nonlinear characteristics of the practical equipment in distribution systems. The feasibility of the proposed method is demonstrated and compared with the original PSO on practical distribution system models. The results indicate the applicability of the proposed state estimation method to the practical distribution systems.


Proceedings Article
01 May 2000
TL;DR: It is demonstrated how extremal optimization can be implemented for a variety of hard optimization problems, and it is argued that the method will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
Abstract: The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
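The core move is simple to sketch: score each component of the current solution, replace the worst-scoring component with a random value, and remember the best configuration seen. The sketch below is a basic variant that always replaces the worst component; the paper's single adjustable parameter governs a power-law choice among the ranked components, which is omitted here. The toy bit-matching problem is purely illustrative and not one of the hard instances studied in the paper.

```python
import random

def extremal_optimization(target, steps=2000, seed=0):
    """Basic extremal optimization on a toy problem: each bit's fitness is 1 if
    it matches the target and 0 otherwise; at every step the worst component is
    replaced with a random value, with no accept/reject test."""
    rng = random.Random(seed)
    n = len(target)
    s = [rng.randint(0, 1) for _ in range(n)]
    best, best_score = s[:], sum(a == b for a, b in zip(s, target))
    for _ in range(steps):
        fitness = [int(a == b) for a, b in zip(s, target)]
        worst = min(range(n), key=lambda i: (fitness[i], rng.random()))  # random tie-breaking
        s[worst] = rng.randint(0, 1)            # unconditional random replacement
        score = sum(a == b for a, b in zip(s, target))
        if score > best_score:
            best, best_score = s[:], score      # keep the best configuration seen so far
    return best, best_score

print(extremal_optimization([1, 0, 1, 1, 0, 0, 1, 0]))
```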

Journal ArticleDOI
D. Hilding1
TL;DR: This paper presents a heuristic smoothing procedure (HSP) that lessens the risk that gradient-based optimization algorithms get stuck in (nonglobal) local optima of structural optimization problems including unilateral constraints.
Abstract: Structural optimization problems are often solved by gradient-based optimization algorithms, e.g. sequential quadratic programming or the method of moving asymptotes. If the structure is subject to unilateral constraints, then the gradient may be nonexistent for some designs. It follows that difficulties may arise when such structures are to be optimized using gradient-based optimization algorithms. Unilateral constraints arise, for instance, if the structure may come in frictionless contact with an obstacle. This paper presents a heuristic smoothing procedure (HSP) that lessens the risk that gradient-based optimization algorithms get stuck in (nonglobal) local optima of structural optimization problems including unilateral constraints. In the HSP, a sequence of optimization problems must be solved. All these optimization problems have well-defined gradients and are therefore well-suited for gradient-based optimization algorithms. It is proven that the solutions of this sequence of optimization problems converge to the solution of the original structural optimization problem. The HSP is illustrated in a few numerical examples. The computational results show that the HSP can be an effective method for avoiding local optima.


Journal ArticleDOI
TL;DR: The proposed optimization methodology is based on an analogy between steady-state operation periods in process operation and iterations in numerical optimization, which is also used by optimization-based run-to-run (RtR) control for batch processes.
Abstract: We present a complementary approach to real-time optimization (RTO) for maximizing the operating profit of an existing chemical plant without requiring a model-updating procedure, which is cumbersome and which may not necessarily improve the model. The proposed optimization methodology is based on an analogy between steady-state operation periods in process operation and iterations in numerical optimization. This analogy is also used by optimization-based run-to-run (RtR) control for batch processes. The process measurements are utilized to correct the gradient information used in optimization computations, resulting in better operating conditions. The plant operation is an integral part of the optimization, and this necessitates certain modifications to the optimization algorithm that we use (feasible sequential quadratic programming). The methodology is tested with a CSTR process and is shown to be robust in the presence of substantial model-plant mismatch.

Book ChapterDOI
18 Sep 2000
TL;DR: It is believed that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
Abstract: We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire "genepool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.

Journal ArticleDOI
TL;DR: A structure of classes based on Object-Oriented Programming, which allows the development of an Optimization Library, where deterministic and stochastic optimization algorithms are considered, as well as algorithms that work with constrained or unconstrained objective functions.
Abstract: This paper presents a structure of classes based on Object-Oriented Programming, which allows the development of an Optimization Library. In this library, deterministic and stochastic optimization algorithms are considered, as well as algorithms that work with constrained or unconstrained objective functions. We present the characteristics of some of the main optimization methods used in recent years, mainly in the electromagnetics area. Then, based on these characteristics, we show the classes created for the implementation of this optimization library. Finally, we present the communication architecture used for data exchange between this library and Finite Element Method software.