
Showing papers on "Multi-swarm optimization" published in 1998


Proceedings ArticleDOI
04 May 1998
TL;DR: A new parameter, called inertia weight, is introduced into the original particle swarm optimizer, an algorithm that resembles a school of flying birds in which each particle adjusts its flying according to its own flying experience and its companions' flying experience.
Abstract: Evolutionary computation techniques such as genetic algorithms, evolution strategies and genetic programming are motivated by the evolution of nature. A population of individuals, which encode the problem solutions, is manipulated according to the rule of survival of the fittest through "genetic" operations such as mutation, crossover and reproduction. The best solution is evolved through the generations. In contrast to evolutionary computation techniques, Eberhart and Kennedy developed a different algorithm by simulating social behavior (R.C. Eberhart et al., 1996; R.C. Eberhart and J. Kennedy, 1996; J. Kennedy and R.C. Eberhart, 1995; J. Kennedy, 1997). As in other algorithms, a population of individuals exists. This algorithm is called particle swarm optimization (PSO) since it resembles a school of flying birds. In a particle swarm optimizer, instead of using genetic operators, these individuals are "evolved" by cooperation and competition among the individuals themselves through generations. Each particle adjusts its flying according to its own flying experience and its companions' flying experience. We introduce a new parameter, called inertia weight, into the original particle swarm optimizer. Simulations have been done to illustrate the significant and effective impact of this new parameter on the particle swarm optimizer.

9,373 citations
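To make the new parameter concrete, here is a minimal sketch of a particle swarm optimizer with the inertia weight w applied to the velocity term, in Python with NumPy; the coefficients c1 = c2 = 2.0, the fixed w = 0.9, the search bounds, and the sphere objective are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=200, w=0.9, c1=2.0, c2=2.0, seed=0):
    """Particle swarm optimizer with an inertia weight w on the velocity term."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.array([objective(p) for p in x])    # personal best values
    gbest = pbest[pbest_f.argmin()].copy()           # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # The inertia weight w scales the previous velocity (the "momentum" of flight).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

best, best_f = pso(lambda p: float(np.sum(p**2)))    # sphere function as a toy objective
print(best, best_f)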


Book ChapterDOI
TL;DR: This paper first analyzes the impact that inertia weight and maximum velocity have on the performance of the particle swarm optimizer, and then provides guidelines for selecting these two parameters.
Abstract: This paper first analyzes the impact that inertia weight and maximum velocity have on the performance of the particle swarm optimizer, and then provides guidelines for selecting these two parameters. Analysis of experiments demonstrates the validity of these guidelines.

3,557 citations
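A common reading of such guidelines pairs a maximum-velocity clamp with an inertia weight that decreases linearly over the run; the sketch below shows both mechanisms, with the 0.9 to 0.4 range and the value of v_max chosen for illustration rather than taken from the chapter.

```python
import numpy as np

def inertia_schedule(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: explore early, exploit late."""
    return w_start - (w_start - w_end) * t / t_max

def clamp_velocity(v, v_max):
    """Limit each velocity component to [-v_max, +v_max]."""
    return np.clip(v, -v_max, v_max)

# Inside a PSO loop (see the earlier sketch) one would write:
#   w = inertia_schedule(t, iters)
#   v = clamp_velocity(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), v_max)
```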


Book ChapterDOI
TL;DR: This paper compares two evolutionary computation paradigms: genetic algorithms and particle swarm optimization, and suggests ways in which performance might be improved by incorporating features from one paradigm into the other.
Abstract: This paper compares two evolutionary computation paradigms: genetic algorithms and particle swarm optimization. The operators of each paradigm are reviewed, focusing on how each affects search behavior in the problem space. The goals of the paper are to provide additional insights into how each paradigm works, and to suggest ways in which performance might be improved by incorporating features from one paradigm into the other.

1,661 citations
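To ground the operator-level comparison, the sketch below lines up the two paradigms' core steps: GA survival selection, crossover and mutation against the PSO velocity update, in which no individual is ever discarded. All parameter values are illustrative assumptions, and pop and fitness are assumed to be NumPy arrays.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- GA operators: fitness-based survival, recombination, mutation ---
def tournament_select(pop, fitness, k=2):
    """Pick the best of k randomly chosen individuals (minimization)."""
    idx = rng.integers(0, len(pop), k)
    return pop[idx[np.argmin(fitness[idx])]].copy()

def blend_crossover(a, b):
    """Arithmetic recombination of two real-valued parents."""
    alpha = rng.random(a.shape)
    return alpha * a + (1.0 - alpha) * b

def mutate(child, rate=0.1, sigma=0.1):
    """Perturb a random subset of genes with Gaussian noise."""
    mask = rng.random(child.shape) < rate
    return child + mask * rng.normal(0.0, sigma, child.shape)

# --- PSO operator: every particle survives; only its motion is updated ---
def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update driven by individual and social memory."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```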


Proceedings ArticleDOI
04 May 1998
TL;DR: A hybrid of the particle swarm algorithm and a standard selection mechanism from evolutionary computation is described; a comparison with the ordinary particle swarm shows that selection provides an advantage for some (but not all) complex functions.
Abstract: This paper describes an evolutionary optimization algorithm that is a hybrid based on the particle swarm algorithm but with the addition of a standard selection mechanism from evolutionary computation. A comparison is performed between the hybrid swarm and the ordinary particle swarm, showing that selection provides an advantage for some (but not all) complex functions.

897 citations
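One plausible realization of such a hybrid, sketched under assumptions the paper may not share (rank-based replacement of the worse half of the swarm): after each ordinary PSO update, losing particles copy positions and velocities from winners while keeping their own personal bests.

```python
import numpy as np

def selection_step(x, v, f, rng, keep_frac=0.5):
    """Selection grafted onto PSO: the worst particles are overwritten with
    copies (position and velocity) of randomly chosen good particles.
    The 50% replacement fraction is an illustrative assumption."""
    order = np.argsort(f)                      # ascending: best first (minimization)
    n_keep = max(1, int(len(x) * keep_frac))
    winners, losers = order[:n_keep], order[n_keep:]
    src = rng.choice(winners, size=len(losers))
    x[losers], v[losers] = x[src], v[src]      # personal bests are left untouched
    return x, v
```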



Proceedings ArticleDOI
04 May 1998
TL;DR: A multimodal problem generator was used to test three versions of a genetic algorithm and the binary particle swarm algorithm in a factorial time-series experiment.
Abstract: A multimodal problem generator was used to test three versions of a genetic algorithm and the binary particle swarm algorithm in a factorial time-series experiment. Specific strengths and weaknesses of the various algorithms were identified.

450 citations
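For reference, the binary particle swarm used in this experiment updates real-valued velocities as usual but resamples each bit as 1 with probability given by a sigmoid of the velocity; a minimal sketch of one step, with parameter values as illustrative assumptions:

```python
import numpy as np

def binary_pso_step(x, v, pbest, gbest, rng, w=1.0, c1=2.0, c2=2.0, v_max=4.0):
    """One update of the binary PSO: x, pbest, gbest are 0/1 arrays,
    velocities stay real-valued and drive per-bit probabilities."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                -v_max, v_max)                 # clamping keeps probabilities off 0 and 1
    prob_one = 1.0 / (1.0 + np.exp(-v))        # sigmoid squashing
    x = (rng.random(x.shape) < prob_one).astype(int)
    return x, v
```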



Journal ArticleDOI
TL;DR: The objective of this paper is to investigate the efficiency of combinatorial optimization methods, in particular algorithms based on evolution strategies (ES), when incorporated into the solution of large-scale, continuous or discrete, structural optimization problems.

205 citations


Journal ArticleDOI
TL;DR: In this paper, genetic algorithm (GA) optimization, a global search technique, is used to determine both the active control and passive mechanical parameters of a vehicle suspension system.

168 citations


Journal ArticleDOI
TL;DR: This paper discusses three classes of dynamic optimization problems with discontinuities: path-constrained problems, hybrid discrete/continuous problems, and mixed-integer dynamic optimization problems.
Abstract: Many engineering tasks can be formulated as dynamic optimization or open-loop optimal control problems, where we search a priori for the input profiles to a dynamic system that optimize a given performance measure over a certain time period. Further, many systems of interest in the chemical processing industries experience significant discontinuities during transients of interest in process design and operation. This paper discusses three classes of dynamic optimization problems with discontinuities: path-constrained problems, hybrid discrete/continuous problems, and mixed-integer dynamic optimization problems. In particular, progress toward a general numerical technology for the solution of large-scale discontinuous dynamic optimization problems is discussed.

126 citations
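As a toy illustration of the dynamic-optimization formulation (though not of the discontinuity handling that is the paper's real subject), the sketch below applies control vector parameterization: a piecewise-constant input profile for an assumed one-state linear system is optimized as an ordinary finite-dimensional NLP with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(u, x0=1.0, dt=0.05):
    """Euler-integrate dx/dt = -x + u over the horizon; return the trajectory."""
    x = np.empty(len(u) + 1)
    x[0] = x0
    for k, uk in enumerate(u):
        x[k + 1] = x[k] + dt * (-x[k] + uk)
    return x

def cost(u, dt=0.05):
    """Drive the state to zero with a small control-effort penalty."""
    x = simulate(u, dt=dt)
    return dt * np.sum(x[1:] ** 2 + 0.1 * np.asarray(u) ** 2)

n = 40                                     # piecewise-constant control segments
res = minimize(cost, np.zeros(n), method="SLSQP",
               bounds=[(-2.0, 2.0)] * n)   # input bounds as simple box constraints
print(res.fun)
```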


Proceedings ArticleDOI
07 Jun 1998
TL;DR: This work resolves the well-publicized open problem on the approximability of the rooted "orienteering problem" for the case in which the sites are given as points in the plane and the network required is a cycle.
Abstract: We study a variety of geometric network optimization problems on a set of points, in which we are given a resource bound on the total length of the network, and our objective is to maximize the number of points visited (or the total "value" of points visited). In particular, we resolve the well-publicized open problem on the approximability of the rooted "orienteering problem" for the case in which the sites are given as points in the plane and the network required is a cycle. We obtain a 2-approximation for this problem. We also obtain approximation algorithms for variants of this problem in which the network required is a tree (5-approximation) or a path (2-approximation). No prior approximation bounds were known for any of these problems. We also obtain improved approximation algorithms for geometric instances of the unrooted orienteering problem, where we obtain a 2-approximation for both the cycle and tree versions of the problem on points in the plane, as well as a 6-approximation for the tree version in edge-weighted graphs. Further, we study generalizations of the basic orienteering problem to the case of multiple roots, sites that are polygonal regions, etc., where we again give the first known approximation results. Our methods are based on some new tools which may be of interest in their own right: (1) some new results on m-

Proceedings Article
01 Jul 1998
TL;DR: STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hillclimbing or WALKSAT, as a function of state features along its search trajectories; the learned function is used to bias future search trajectories toward better optima.
Abstract: This paper describes STAGE, a learning approach to automatically improving search performance on optimization problems. STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hillclimbing or WALKSAT, as a function of state features along its search trajectories. The learned evaluation function is used to bias future search trajectories toward better optima. We present positive results on six large-scale optimization domains.
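A hedged sketch of the STAGE loop on a toy continuous problem: run hillclimbing from sampled starts, record (start features, final objective) pairs, fit a linear evaluation function to predict the outcome, then hillclimb on the learned function to choose a promising restart. The feature map, learner, and objective below are illustrative assumptions; the paper works on large-scale domains with searchers such as WALKSAT.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                          # toy multimodal function to minimize
    return float(np.sum(x**2) + 2.0 * np.sum(np.sin(3.0 * x) ** 2))

def hillclimb(x, f, steps=200, sigma=0.1):
    """Simple stochastic hillclimbing; returns final point and value."""
    fx = f(x)
    for _ in range(steps):
        y = x + rng.normal(0.0, sigma, x.shape)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def features(x):                           # simple state features (an assumption)
    return np.concatenate([x, x**2, [1.0]])

# 1) Collect training data: where hillclimbing from each start ends up.
starts = rng.uniform(-3, 3, (60, 2))
X = np.array([features(s) for s in starts])
y = np.array([hillclimb(s, objective)[1] for s in starts])

# 2) Fit a linear evaluation function predicting the hillclimbing outcome.
wts, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = lambda x: float(features(x) @ wts)

# 3) Hillclimb on the *learned* function to choose a promising restart point,
#    then run the real local search from there.
guide, _ = hillclimb(rng.uniform(-3, 3, 2), predicted)
best, best_f = hillclimb(guide, objective)
print(best_f)
```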

01 Jan 1998
TL;DR: Empirical results demonstrate that using the GADO system can greatly decrease the cost of design space search, and can also improve the quality of the resulting designs.
Abstract (of the dissertation "GADO: A Genetic Algorithm for Continuous Design Optimization" by Khaled Mohamed Rasheed; dissertation director: Haym Hirsh): Genetic algorithms (GAs) have been extensively used as a means for performing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains a general-purpose GA is often inefficient and unable to reach the global optimum. In this thesis we describe a GA for continuous design-space optimization that uses new GA operators and strategies tailored to the structure and properties of engineering design domains. Empirical results in several realistic engineering design domains as well as benchmark design domains demonstrate that using our system can greatly decrease the cost of design space search, and can also improve the quality of the resulting designs.

Journal ArticleDOI
TL;DR: This work addresses the robustness of population-based versus point-based optimization on a range of parameter optimization problems when noise is added to the deterministic objective function values, and investigates the performance of these methods under varying levels of additive, normally distributed, fitness-independent noise.
Abstract: Practical optimization problems often require the evaluation of solutions through experimentation, stochastic simulation, sampling, or even interaction with the user. Thus, most practical problems involve noise. We address the robustness of population-based versus point-based optimization on a range of parameter optimization problems when noise is added to the deterministic objective function values. Population-based optimization is realized by a genetic algorithm and an evolution strategy. Point-based optimization is implemented as the classical Hooke-Jeeves pattern search strategy and threshold accepting as a modern local search technique. We investigate the performance of these optimization methods for varying levels of additive normally distributed fitness-independent noise and different sample sizes for evaluating individual solutions. Our results strongly favour population-based optimization, and the evolution strategy in particular.
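A hedged miniature of the experimental setup: the same deterministic sphere objective with additive, normally distributed, fitness-independent noise, optimized by a point-based (1+1) hillclimber and a population-based (mu, lambda) evolution strategy under an equal evaluation budget. All parameter values are illustrative assumptions, and the Hooke-Jeeves and threshold-accepting variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, NOISE = 5, 1.0

def noisy_f(x):
    """Deterministic sphere objective plus additive Gaussian noise."""
    return float(np.sum(x**2)) + rng.normal(0.0, NOISE)

def point_search(evals=2000, sigma=0.3):
    """(1+1) hillclimbing: accepts moves based on single noisy comparisons."""
    x = rng.uniform(-3, 3, DIM)
    fx = noisy_f(x)
    for _ in range(evals - 1):
        y = x + rng.normal(0.0, sigma, DIM)
        fy = noisy_f(y)
        if fy < fx:                        # noise can make this comparison lie
            x, fx = y, fy
    return float(np.sum(x**2))             # report true (noise-free) quality

def evolution_strategy(gens=100, mu=5, lam=20, sigma=0.3):
    """(mu, lambda)-ES: ranking a whole population averages out some noise."""
    pop = rng.uniform(-3, 3, (mu, DIM))
    for _ in range(gens):
        parents = pop[rng.integers(0, mu, lam)]
        offspring = parents + rng.normal(0.0, sigma, (lam, DIM))
        noisy = np.array([noisy_f(o) for o in offspring])
        pop = offspring[np.argsort(noisy)[:mu]]
    return min(float(np.sum(p**2)) for p in pop)

print("point-based     :", point_search())
print("population-based:", evolution_strategy())
```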

Patent
29 Oct 1998
TL;DR: In this article, a system for determining schedules and processing other optimization problems includes a local optimization engine, which operates on heuristics and uses a prioritizer, a constructor, and an analyzer to make large coherent moves in the search space, together with a global optimization engine.
Abstract: A system for determining schedules and processing other optimization problems includes a local optimization engine and a global optimization engine. The local optimization engine operates based on heuristics, and includes a prioritizer, a constructor, and an analyzer to make large “coherent” moves in the search space, thus helping to avoid local optima without relying entirely on random moves. The global optimization engine takes the individual schedules produced by the local optimization engine and optimizes them using Linear Programming/Integer Programming techniques.

Journal ArticleDOI
TL;DR: The aim of the present paper is to compare two different approaches which make use of anti-optimization: a nested optimization, where the search for the worst case is integrated with the main optimization, and a two-step optimization, where anti-optimization is solved once for all constraints before starting the optimization, allowing a great computational saving with respect to the first.

Patent
15 May 1998
TL;DR: In this article, prediction methods are used in conjunction with actual optimization to improve response time and reduce required computational resources for optimization problems having a hierarchical structure, allowing more decompositions to be examined within the available time in order to arrive at a more nearly optimal decomposition.
Abstract: Prediction methods that anticipate the outcome of a detailed optimization step are used in lieu of or in conjunction with actual optimization to improve response time and reduce required computational resources for optimization problems having a hierarchical structure. Decomposition of the optimization problem into sub-problems and sub-sub-problems is, itself, an optimization process which is iteratively performed while preferably guided by prediction of the quality of solutions to the problems into which the “master” optimization problem may be decomposed. Prediction also reduces the requirements for computational resources and allows more decompositions to be examined within the available time in order to arrive at a more nearly optimal decomposition as well as a more nearly optimal solution. Prediction is selectively used when it is determined that such a benefit is probable.

Proceedings ArticleDOI
13 Sep 1998
TL;DR: A genetic algorithm based optimization method is proposed for a multi-objective design problem of an automotive engine; it introduces a Pareto-optimality-based fitness function, similarity-based selection and direct real-number crossover.
Abstract: A genetic algorithm based optimization method is proposed for a multi-objective design problem of an automotive engine that includes several difficulties found in practical engineering optimization problems. While various optimization techniques have been applied to engineering design problems, a class of realistic engineering design problems faces a mixture of different optimization difficulties, such as the rugged nature of the system response and the numbers of design variables and objectives. In order to overcome such a situation, this paper proposes a genetic algorithm based multi-objective optimization method that introduces a Pareto-optimality-based fitness function, similarity-based selection and direct real-number crossover. This optimization method is also applied to the design problem of an automotive engine with design criteria on the total power train. The computational examples show the ability of the proposed method to find a relevant set of Pareto optima.
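A minimal sketch of the Pareto-optimality-based fitness assignment such a method rests on: an individual's rank is the number of population members that dominate it, so rank 0 marks the current Pareto set (all objectives are assumed to be minimized; the paper's similarity-based selection and real-number crossover are not reproduced).

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_rank(objs):
    """Rank = number of individuals dominating each one (0 = Pareto optimal)."""
    n = len(objs)
    rank = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[j], objs[i]):
                rank[i] += 1
    return rank

# Example: three 2-objective vectors; the first two are mutually non-dominated.
print(pareto_rank(np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 5.0]])))  # -> [0 0 2]
```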

Book ChapterDOI
27 Sep 1998
TL;DR: This work empirically investigates the robustness of population-based versus point-based optimization methods on a range of parameter optimization problems when noise is added; the results favor population-based optimization, and the evolution strategy in particular.
Abstract: In the optimization literature it is frequently assumed that the quality of solutions can be determined by calculating deterministic objective function values. Practical optimization problems, however, often require the evaluation of solutions through experimentation, stochastic simulation, sampling, or even interaction with the user. Thus, most practical problems involve noise. We empirically investigate the robustness of population-based versus point-based optimization methods on a range of parameter optimization problems when noise is added. Our results favor population-based optimization, and the evolution strategy in particular.

Proceedings ArticleDOI
16 May 1998
TL;DR: The proposed indexes provide a way to compare the performance of different swarm intelligent systems; they are applied to evaluate two swarm intelligent systems using computer simulation, and the results are discussed.
Abstract: Many studies on swarm intelligent systems have been presented. However, analytical treatment of swarm intelligence has not been performed sufficiently, because of difficulties in finding general criteria to evaluate system performance. In this paper, we regard flexibility as one property of robustness, and evaluate the flexibility of swarm intelligent systems. We propose indexes to evaluate the behavior of swarm intelligent systems, which focus on "flexibility". The proposed indexes provide a way to compare the performance of different swarm intelligent systems. We apply the indexes to evaluate two swarm intelligent systems using computer simulation, and discuss the results.

Journal ArticleDOI
TL;DR: Real-time collision-free trajectory control is dealt with by semi-infinite optimization techniques, which allow an optimal control problem incorporating a robot/obstacle distance function for collision detection to be reduced to a finite-dimensional parameter-optimization problem.
Abstract: Real-time collision-free trajectory control is dealt with by semi-infinite optimization techniques. This allows an optimal control problem incorporating a robot/obstacle distance function for collision detection to be reduced to a finite-dimensional parameter-optimization problem. This reduced problem can be solved efficiently by the numerical parameter-optimization method of sequential quadratic programming. In the case of a time-varying robot environment, a series of such optimization problems is solved in an iterative time frame, constituting a real-time optimization loop.

Journal ArticleDOI
01 Jun 1998-Top
TL;DR: A “multi-local” optimization procedure using inexact line searches is presented, and an application of the method to a semi-infinite programming procedure is included.
Abstract: The development of efficient algorithms that provide all the local minima of a function is crucial to solve certain subproblems in many optimization methods. A “multi-local” optimization procedure using inexact line searches is presented, and numerical experiments are also reported. An application of the method to a semi-infinite programming procedure is included.
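A hedged sketch of a multi-local procedure: local searches from many scattered starts, with the resulting minimizers clustered by distance so each local minimum is reported once. It uses SciPy's BFGS (with its own line search) in place of the paper's specific inexact line-search method; the objective and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # toy multimodal objective
    return float(np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2)

def multi_local(n_starts=50, tol=1e-2, seed=0):
    """Collect all distinct local minimizers found from random restarts."""
    rng = np.random.default_rng(seed)
    minima = []
    for _ in range(n_starts):
        x0 = rng.uniform(-5.0, 5.0, 1)
        res = minimize(f, x0, method="BFGS")   # line-search based local solver
        if res.success and not any(np.linalg.norm(res.x - m) < tol for m in minima):
            minima.append(res.x)
    return sorted(float(m[0]) for m in minima)

print(multi_local())                        # several local minimizers of f
```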

Journal ArticleDOI
TL;DR: A neural scheme for hierarchical optimization of nonlinear large-scale systems is presented; local stability of the neural network is obtained with the aid of a convexification procedure, under which separability for the application of the decomposition and coordination strategy is preserved.
Abstract: A neural scheme for hierarchical optimization of nonlinear large-scale systems is presented. The local optimization subnetworks and co-ordination subnetwork, comprising the hierarchical optimization neural network, work simultaneously to provide the optimal solutions to the original optimization problems, so the waiting time that exists in the processes of local optimization and co-ordination of conventional numerical methods is eliminated naturally. Moreover, local stability of the neural network is obtained with the aid of a convexification procedure, under which separability for the application of decomposition and coordination strategy is preserved. The neural network is efficient in solving large-scale optimization problems and is suitable for real-time applications.

Journal ArticleDOI
TL;DR: A method for determining the compromise region in multistage axial flow compressor stochastic optimization problems is discussed, based on the 2-D axisymmetrical mathematical model of the compressor and on a new multicriteria optimization procedure.
Abstract: The aim of this paper is to discuss a method for determining the compromise region in multistage axial flow compressor stochastic optimization problems. This method is based on the 2-D axisymmetrical mathematical model of the compressor and on a new multicriteria optimization procedure. A specific feature of the multicriteria optimization procedure is the possibility of obtaining a set of Edgeworth-Pareto optimal solutions within the frame of a single optimization task. The paper presents some examples of multicriteria optimization of the compressor's geometrical parameters.

Journal ArticleDOI
TL;DR: This paper considers the evaluation of pGAs for engineering problems, where function evaluations are costly, in contrast to earlier pGA tests on functions of relatively low evaluation cost, and where the best traditional methods may only perform well within a narrow class of problems.
Abstract: Genetic Algorithm (GA) based optimizers are adaptive search algorithms that combine principles of population genetics and natural selection. These algorithms have been successfully applied to several optimization problems which are difficult to solve by conventional mathematical programming. In engineering, GAs are rapidly becoming an important tool for general purpose optimization because the best traditional methods may only perform well within a narrow class of problems. However, in the case of small to medium size problems, GA-based optimizers are generally out-performed by conventional optimizers in terms of computational effort. In order to circumvent this problem, a number of parallel Genetic Algorithms (pGAs) have already been proposed and analysed for different types of functions. In general, these pGAs have been tested on unconstrained optimization which requires function evaluations of a relatively low cost. This paper considers the evaluation of pGAs for engineering problems where fun...
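The core pGA ingredient can be sketched with Python's standard library: the genetic operators run serially while the costly fitness evaluations, the dominant expense in engineering problems, are farmed out to worker processes. The surrogate "expensive" objective and all GA settings below are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def expensive_fitness(x):
    """Stand-in for a costly engineering analysis (e.g. a simulation run)."""
    return float(np.sum(np.asarray(x) ** 2))

def ga_generation(pop, fitness, rng, sigma=0.1):
    """Binary tournament selection plus Gaussian mutation; one generation."""
    new = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), 2)
        parent = pop[i] if fitness[i] < fitness[j] else pop[j]
        new.append(parent + rng.normal(0.0, sigma, parent.shape))
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = [rng.uniform(-3, 3, 4) for _ in range(32)]
    with ProcessPoolExecutor() as pool:
        for _ in range(20):
            # The expensive evaluations run in parallel across processes.
            fitness = list(pool.map(expensive_fitness, pop))
            pop = ga_generation(pop, fitness, rng)
        fitness = list(pool.map(expensive_fitness, pop))  # final evaluation
    print(min(fitness))
```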

Journal ArticleDOI
TL;DR: It is shown how this optimization problem can be formulated as a parametric optimization involving parameters that have a clear engineering meaning; this leads to the definition of a control problem requiring a feedback implementation in the form of an adaptive regulator combined with a software sensor.

Journal ArticleDOI
TL;DR: The technique improves the running times obtained by parametric search by a log n factor; it is applied, for example, to finding two strips that cover a given set S of n points in the plane so as to minimize the width of the larger of the two strips.
Abstract: In this paper we apply the selection and optimization technique of Frederickson and Johnson to a number of geometric selection and optimization problems, some of which have previously been solved by parametric search, and provide efficient and simple algorithms. Our technique improves the running times obtained by parametric search by a log n factor. For example, we apply the technique to the two-line center problem, where we want to find two strips that cover a given set S of n points in the plane, so as to minimize the width of the larger of the two strips.


17 Mar 1998
TL;DR: The application of domain decomposition and multigrid techniques to optimization is studied and the resulting algorithms are illustrated by applying them to optimization problems derived from discretizations of partial differential equations, as well as to purely algebraic optimization problems arising in mathematical finance.
Abstract: In this work we study the application of domain decomposition and multigrid techniques to optimization. We illustrate the resulting algorithms by applying them to optimization problems derived from discretizations of partial differential equations, as well as to purely algebraic optimization problems arising in mathematical finance. For the analysis of the presented algorithms we utilize the subspace correction framework (cf. Xu, 1992). We discuss the cases of convex non-smooth and smooth non-convex optimization, as well as constrained optimization, and present the convergence analysis for the multiplicative Schwarz algorithms for these problems. For PDE-based optimization problems we also discuss the effect of coarse grid correction and analyze the convergence rate of the corresponding multiplicative and additive Schwarz methods. We consider the application of the multiplicative subspace correction method to the variational formulation of the elliptic eigenvalue problem and show that, as in the linear case, if the coarse grid correction is used, the convergence rate is independent of both the number of subdomains and the meshsize. We discuss the generalization of this method for simultaneous computation of several eigenfunctions and its applications to the problem of partitioning a graph based on spectral bisection. In the final chapter we consider the application of the subspace correction methods to some algebraic optimization problems arising in mathematical finance. We restrict our attention to the minimization of the Frobenius distance used in covariance matrix estimation, the factor analysis problem, and the gain-loss optimization problem. Numerical results illustrating the convergence behavior of the subspace correction methods applied to these problems are presented.
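A toy rendering of the multiplicative Schwarz subspace-correction idea in the optimization setting: minimize a convex quadratic by cycling through subspaces (here, disjoint blocks of variables) and solving each restricted subproblem exactly. The small random SPD system is an illustrative assumption; the work's PDE, eigenvalue and finance applications are far richer.

```python
import numpy as np

def multiplicative_schwarz(A, b, blocks, iters=50):
    """Minimize 0.5 x^T A x - b^T x by exact minimization over each block in turn."""
    x = np.zeros(len(b))
    for _ in range(iters):
        for idx in blocks:                     # one subspace correction per block
            r = b - A @ x                      # negative gradient of the quadratic
            # Solve the subproblem restricted to the variables in `idx`.
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
A = M @ M.T + 8 * np.eye(8)                    # symmetric positive definite matrix
b = rng.normal(size=8)
blocks = [np.arange(0, 4), np.arange(4, 8)]    # two "subdomains" of variables
x = multiplicative_schwarz(A, b, blocks)
print(np.linalg.norm(A @ x - b))               # should be near zero at convergence
```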