
Showing papers on "Multi-swarm optimization published in 2002"


Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced; the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.
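
For orientation, here is a minimal sketch of the particle swarm update in its now-standard global-best form; the inertia weight w and the coefficient values are later conventions and illustrative choices, not details taken from this paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise f over a box using a basic global-best particle swarm (illustrative sketch)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                          # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()                  # best point found by the swarm
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val                            # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()              # update global best
    return gbest, float(pbest_val.min())

# Example: minimise the sphere function in five dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```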

35,104 citations


Journal ArticleDOI
TL;DR: This paper analyzes a particle's trajectory as it moves in discrete time, then progresses to the view of it in continuous time, leading to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies.
Abstract: The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. This paper analyzes a particle's trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions.
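
The best-known practical outcome of this kind of trajectory analysis is the constricted velocity update. The form below is the commonly cited one, quoted here as an illustration rather than reproduced from the abstract.

```latex
\[
v_{t+1} = \chi \left[ v_t + \varphi_1 r_1 (p - x_t) + \varphi_2 r_2 (g - x_t) \right],
\qquad x_{t+1} = x_t + v_{t+1},
\]
\[
\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^{2} - 4\varphi} \right|},
\qquad \varphi = \varphi_1 + \varphi_2 > 4 .
\]
```

With φ = 4.1 this gives χ ≈ 0.7298, the value most often quoted in later constricted PSO implementations.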

8,287 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems; the approach maintains previously found nondominated vectors in a global repository that other particles later use to guide their own flight.
Abstract: This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and it maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight. The approach is validated using several standard test functions from the specialized literature. Our results indicate that our approach is highly competitive with current evolutionary multiobjective optimization techniques.
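
As a rough illustration of the two ingredients named above, the sketch below shows a Pareto-dominance test (for minimisation) and a nondominated-repository update; the function names and the random-guide comment are illustrative, not the paper's implementation.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def update_repository(repository, candidate):
    """Keep the archive nondominated: reject dominated candidates, evict dominated members."""
    if any(dominates(r, candidate) for r in repository):
        return repository                                   # candidate is dominated, archive unchanged
    kept = [r for r in repository if not dominates(candidate, r)]
    kept.append(np.asarray(candidate))
    return kept

# A particle could then pick a repository member (e.g. at random) as its global guide:
# guide = repository[np.random.randint(len(repository))]
```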

1,842 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: The effects of various population topologies on the particle swarm algorithm were systematically investigated and it was discovered that previous assumptions may not have been correct.
Abstract: The effects of various population topologies on the particle swarm algorithm were systematically investigated. Random graphs were generated to specifications, and their performance on several criteria was compared. What makes a good population structure? We discovered that previous assumptions may not have been correct.
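
To make the notion of a population topology concrete, here is a small hedged sketch contrasting a fully connected (gbest) structure with a ring (lbest) structure; the function name and the neighbourhood size k are illustrative.

```python
import numpy as np

def neighbourhood_best(pbest_val, topology="ring", k=2):
    """For each particle, return the index of the best personal best in its neighbourhood."""
    n = len(pbest_val)
    if topology == "gbest":                                 # fully connected: everyone sees the swarm best
        return np.full(n, int(np.argmin(pbest_val)))
    best = np.empty(n, dtype=int)
    for i in range(n):                                      # ring (lbest): k neighbours on each side
        idx = [(i + d) % n for d in range(-k, k + 1)]
        best[i] = idx[int(np.argmin([pbest_val[j] for j in idx]))]
    return best
```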

1,589 citations


Dissertation
01 Jan 2002
TL;DR: This thesis presents a theoretical model that describes the long-term behaviour of the Particle Swarm Optimiser; empirical results on synthetic benchmark functions are presented to support the theoretical properties predicted by the various models.
Abstract: Many scientific, engineering and economic problems involve the optimisation of a set of parameters. These problems include examples like minimising the losses in a power grid by finding the optimal configuration of the components, or training a neural network to recognise images of people's faces. Numerous optimisation algorithms have been proposed to solve these problems, with varying degrees of success. The Particle Swarm Optimiser (PSO) is a relatively new technique that has been empirically shown to perform well on many of these optimisation problems. This thesis presents a theoretical model that can be used to describe the long-term behaviour of the algorithm. An enhanced version of the Particle Swarm Optimiser is constructed and shown to have guaranteed convergence on local minima. This algorithm is extended further, resulting in an algorithm with guaranteed convergence on global minima. A model for constructing cooperative PSO algorithms is developed, resulting in the introduction of two new PSO-based algorithms. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties. The various PSO-based algorithms are then applied to the task of training neural networks, corroborating the results obtained on the synthetic benchmark functions.

1,498 citations


Journal ArticleDOI
TL;DR: A Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.
Abstract: This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.

1,436 citations


Journal ArticleDOI
TL;DR: In this paper, an evolutionary-based approach to solve the optimal power flow (OPF) problem is presented. The proposed approach has been examined and tested on the standard IEEE 30-bus test system with different objectives that reflect fuel cost minimization, voltage profile improvement, and voltage stability enhancement.

1,209 citations


Journal ArticleDOI
TL;DR: This survey examines the state of the art of a variety of problems related to pseudo-Boolean optimization, i.e. to the optimization of set functions represented by closed algebraic expressions.

903 citations


Proceedings ArticleDOI
11 Mar 2002
TL;DR: Critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems.
Abstract: This paper constitutes a first study of the Particle Swarm Optimization (PSO) method in Multiobjective Optimization (MO) problems. The ability of PSO to detect Pareto Optimal points and capture the shape of the Pareto Front is studied through experiments on well-known non-trivial test functions. The Weighted Aggregation technique with fixed or adaptive weights is considered. Furthermore, critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems. Conclusions are derived and ideas for further research are proposed.

674 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents a particle swarm optimization algorithm modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives for multiobjective optimization problems.
Abstract: This paper presents a particle swarm optimization (PSO) algorithm for multiobjective optimization problems. PSO is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives. Several benchmark cases were tested and showed that PSO could efficiently find multiple Pareto optimal solutions.

671 citations


Journal ArticleDOI
TL;DR: In attaining the approximate region of the optimum, the implementation suggests that the PSOA is superior to the GA, and comparable to gradient based algorithms.
Abstract: Shape and size optimization problems in structural design are addressed using the particle swarm optimization algorithm (PSOA). In our implementation of the PSOA, the social behaviour of birds is mimicked. Individual birds exchange information about their position, velocity and fitness, and the behaviour of the flock is then influenced to increase the probability of migration to regions of high fitness. New operators in the PSOA, namely the elite velocity and the elite particle, are introduced.

Journal ArticleDOI
TL;DR: In this article, a hybrid particle swarm optimization (HPSO) was proposed for a practical distribution state estimation, which considers nonlinear characteristics of the practical equipment and actual limited measurements in distribution systems.
Abstract: This paper proposes a hybrid particle swarm optimization (HPSO) for a practical distribution state estimation. The proposed method considers nonlinear characteristics of the practical equipment and actual limited measurements in distribution systems. The method can estimate load and distributed generation output values at each node by minimizing the difference between measured and calculated voltages and currents. The feasibility of the proposed method is demonstrated and compared with an original particle swarm optimization-based method on practical distribution system models. Effectiveness of the constriction factor approach of particle swarm optimization is also investigated. The results indicate the applicability of the proposed state estimation method to the practical distribution systems.
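
Read as an optimization problem, the estimation step amounts to a weighted least-squares fit over the unknown node injections. The objective below is a plausible sketch under that reading; the exact weighting and measurement set are assumptions, not details taken from the paper.

```latex
\[
\min_{\mathbf{s}} \; J(\mathbf{s}) \;=\;
\sum_{i \in \mathcal{M}} w_i \left( z_i^{\mathrm{meas}} - z_i^{\mathrm{calc}}(\mathbf{s}) \right)^{2},
\]
```

Here s collects the node loads and distributed-generation outputs to be estimated, M is the set of measured voltages and currents, and each candidate particle position would be scored by a power-flow calculation that produces z^calc(s).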

01 Jan 2002
TL;DR: Particle Swarm Optimization, with a feasibility-preserving strategy employed to deal with constraints, is shown to be an efficient and general approach to most nonlinear optimization problems with nonlinear inequality constraints.
Abstract: This paper presents a Particle Swarm Optimization (PSO) algorithm for constrained nonlinear optimization problems. In PSO, the potential solutions, called particles, are "flown" through the problem space by learning from the current optimal particle and their own memory. In this paper, a feasibility-preserving strategy is employed to deal with constraints. PSO is started with a group of feasible solutions, and a feasibility function is used to check whether newly explored solutions satisfy all the constraints. All particles keep only feasible solutions in their memory. Eleven test cases were tested and showed that PSO is an efficient and general approach to most nonlinear optimization problems with nonlinear inequality constraints.
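
A hedged sketch of the feasibility-preserving strategy described above: particles start only from feasible points, and a personal best is replaced only by a feasible improvement. The function names and the rejection-sampling initialiser are illustrative, not the paper's code.

```python
import numpy as np

def feasible(x, constraints):
    """All inequality constraints g(x) <= 0 must hold."""
    return all(g(x) <= 0.0 for g in constraints)

def init_feasible_swarm(n, dim, lo, hi, constraints, max_tries=10000):
    """Start the swarm from feasible points only (rejection-sampling sketch)."""
    swarm = []
    for _ in range(max_tries):
        x = np.random.uniform(lo, hi, dim)
        if feasible(x, constraints):
            swarm.append(x)
        if len(swarm) == n:
            return np.array(swarm)
    raise RuntimeError("could not sample enough feasible starting points")

def update_memory(x, fx, pbest, pbest_val, constraints):
    """A personal best accepts a new point only if it is feasible and better (minimisation)."""
    if feasible(x, constraints) and fx < pbest_val:
        return x.copy(), fx
    return pbest, pbest_val
```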

Proceedings ArticleDOI
06 Oct 2002
TL;DR: This paper introduces a new Particle Swarm Optimisation (PSO) algorithm with strong local convergence properties, which performs much better with a smaller number of particles, compared to the original PSO.
Abstract: This paper introduces a new Particle Swarm Optimisation (PSO) algorithm with strong local convergence properties. The new algorithm performs much better with a smaller number of particles, compared to the original PSO. This property is desirable when designing a niching PSO algorithm.

Proceedings ArticleDOI
08 May 2002
TL;DR: In this article, an individual-based continuous time model for swarm aggregation in n-dimensional space and its stability properties were studied. And they showed that the individuals (autonomous agents or biological creatures) will form a cohesive swarm in a finite time.
Abstract: We specify an "individual-based" continuous time model for swarm aggregation in n-dimensional space and study its stability properties. We show that the individuals (autonomous agents or biological creatures) will form a cohesive swarm in a finite time. Moreover, we obtain an explicit bound on the swarm size, which depends only on the parameters of the swarm model.
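
The abstract does not reproduce the interaction rule, but individual-based aggregation models in this line of work are usually written with an attraction–repulsion function of the following form; the parameter names a, b, c are illustrative.

```latex
\[
\dot{x}^{i} \;=\; \sum_{j \ne i} g\!\left(x^{i} - x^{j}\right),
\qquad
g(y) \;=\; -\,y \left( a - b\, e^{-\|y\|^{2}/c} \right),
\qquad a, b, c > 0 .
\]
```

Attraction dominates at long range and repulsion at short range, which is what produces a cohesive swarm whose ultimate size can be bounded in terms of the model parameters alone, consistent with the claim in the abstract.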

Proceedings ArticleDOI
07 Aug 2002
TL;DR: The foundations and performance of the two algorithms when applied to the design of a profiled corrugated horn antenna are investigated, as is the possibility of hybridizing the two algorithms.
Abstract: Genetic algorithms (GA) have proven to be a useful method of optimization for difficult and discontinuous multidimensional engineering problems. A new method of optimization, particle swarm optimization (PSO), is able to accomplish the same goal as GA optimization in a new and faster way. The purpose of this paper is to investigate the foundations and performance of the two algorithms when applied to the design of a profiled corrugated horn antenna. Also investigated is the possibility of hybridizing the two algorithms.

Journal ArticleDOI
TL;DR: A new optimization algorithm to solve multiobjective design optimization problems based on behavioral concepts similar to that of a real swarm is presented, indicating that the swarm algorithm is capable of generating an extended Pareto front with significantly fewer function evaluations when compared to the nondominated sorting genetic algorithm (NSGA).
Abstract: This paper presents a new optimization algorithm to solve multiobjective design optimization problems based on behavioral concepts similar to that of a real swarm. The individuals of a swarm update their flying direction through communication with their neighboring leaders with an aim to collectively attain a common goal. The success of the swarm is attributed to three fundamental processes: identification of a set of leaders, selection of a leader for information acquisition, and finally a meaningful information transfer scheme. The proposed algorithm mimics the above behavioral processes of a real swarm. The algorithm employs a multilevel sieve to generate a set of leaders, a probabilistic crowding radius-based strategy for leader selection and a simple generational operator for information transfer. Two test problems, one with a discontinuous Pareto front and the other with a multi-modal Pareto front, are solved to illustrate the capabilities of the algorithm in handling mathematically complex problems. ...

Journal ArticleDOI
TL;DR: Comparisons show that on average, the bacteria chemotaxis algorithm performs similarly to standard evolution strategies and worse than evolution strategies with enhanced convergence properties.
Abstract: We present an optimization algorithm based on a model of bacterial chemotaxis. The original biological model is used to formulate a simple optimization algorithm, which is evaluated on a set of standard test problems. Based on this evaluation, several features are added to the basic algorithm using evolutionary concepts in order to obtain an improved optimization strategy, called the bacteria chemotaxis (BC) algorithm. This strategy is evaluated on a number of test functions for local and global optimization, compared with other optimization techniques, and applied to the problem of inverse airfoil design. The comparisons show that on average, BC performs similarly to standard evolution strategies and worse than evolution strategies with enhanced convergence properties.

Proceedings ArticleDOI
12 May 2002
TL;DR: Three variants of PSO are compared with the widely used branch and bound technique on several integer programming test problems; results indicate that PSO handles such problems efficiently, and in most cases it outperforms the branch and bound technique.
Abstract: The investigation of the performance of the particle swarm optimization (PSO) method in integer programming problems, is the main theme of the present paper. Three variants of PSO are compared with the widely used branch and bound technique, on several integer programming test problems. Results indicate that PSO handles efficiently such problems, and in most cases it outperforms the branch and bound technique.
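
The abstract does not say how integer variables are handled; a common device, shown here purely as an assumption, is to let the particles move in continuous space and round their positions to the nearest integers only when evaluating the objective.

```python
import numpy as np

def evaluate_integer(f, x_continuous):
    """Round the continuous particle position to an integer vector before scoring it."""
    x_int = np.rint(x_continuous).astype(int)
    return f(x_int), x_int

# Inside a standard PSO loop the particles keep continuous positions and velocities;
# only the rounded copy is passed to the objective:
#   fx, x_int = evaluate_integer(f, x[i])
```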


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation and shows that the SEPSO indeed managed to keep diversity in the search space and yielded superior results.
Abstract: In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed to keep diversity in the search space and yielded superior results.
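
A hedged sketch of the spatial-extension idea: each particle is given a radius, and when two particles overlap one of them "bounces" so the swarm cannot collapse onto a single point. The radius and the velocity-rescaling bounce rule below are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def resolve_collisions(x, v, radius=0.1, bounce=-1.5):
    """If two spatially extended particles overlap, bounce one by rescaling its velocity."""
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(x[i] - x[j]) < 2.0 * radius:  # equal-radius spheres overlap
                v[j] = bounce * v[j]                        # reverse and amplify to escape the cluster
    return v
```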

Proceedings ArticleDOI
06 Oct 2002
TL;DR: The fundamentals of the method are described, and an application to the problem of loss minimization and voltage control is presented, with very good results.
Abstract: This paper presents a new optimization model, EPSO (evolutionary particle swarm optimization), inspired by both evolutionary algorithms and particle swarm optimization algorithms. The fundamentals of the method are described, and an application to the problem of loss minimization and voltage control is presented, with very good results.

Journal ArticleDOI
TL;DR: The Intelligent Particle Swarm Optimization (IPSO) algorithm as mentioned in this paper uses concepts such as group experiences, unpleasant memories (tabu to be avoided), local landscape models based on virtual neighbors, and memetic replication of successful behavior parameters.
Abstract: The paper describes a new stochastic heuristic algorithm for global optimization. The new optimization algorithm, called intelligent-particle swarm optimization (IPSO), offers more intelligence to particles by using concepts such as: group experiences, unpleasant memories (tabu to be avoided), local landscape models based on virtual neighbors, and memetic replication of successful behavior parameters. The new individual complexity is amplified at the group level and consequently generates a more efficient optimization procedure. A simplified version of the IPSO algorithm was implemented and compared with the classical PSO algorithm for a simple test function and for the Loney's solenoid.

Proceedings ArticleDOI
12 May 2002
TL;DR: The differential evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach and performs well when applied to several test optimization problems from the literature.
Abstract: Differential evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the differential evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
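
A hedged sketch of one generation of the idea described above: standard DE/rand/1/bin variation, with the parent replaced only when the trial vector Pareto-dominates it. The control parameters and the exact replacement rule are illustrative.

```python
import numpy as np

def pareto_de_step(pop, fitness, f_obj, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation with Pareto-dominance replacement (minimisation sketch)."""
    n, dim = pop.shape
    new_pop, new_fit = pop.copy(), list(fitness)
    for i in range(n):
        others = [k for k in range(n) if k != i]
        a, b, c = pop[np.random.choice(others, 3, replace=False)]
        mutant = a + F * (b - c)                            # differential mutation
        cross = np.random.rand(dim) < CR
        cross[np.random.randint(dim)] = True                # ensure at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        trial_fit = f_obj(trial)                            # vector of objective values
        dominates = all(t <= p for t, p in zip(trial_fit, fitness[i])) and any(
            t < p for t, p in zip(trial_fit, fitness[i]))
        if dominates:                                       # replace parent only if the trial dominates it
            new_pop[i], new_fit[i] = trial, trial_fit
    return new_pop, new_fit
```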

Proceedings ArticleDOI
12 May 2002
TL;DR: A dissipative particle swarm optimization is developed according to the self-organization of dissipative structures, where negative entropy is introduced to construct an open, far-from-equilibrium dissipative system so as to drive the irreversible evolution process toward better fitness.
Abstract: A dissipative particle swarm optimization is developed according to the self-organization of dissipative structures. Negative entropy is introduced to construct an open dissipative system that is far from equilibrium, so as to drive the irreversible evolution process toward better fitness. Testing on two multimodal functions indicates that it improves performance effectively.
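
The abstract does not spell out how negative entropy enters the swarm; one simple reading, shown purely as an assumption, is to re-randomise a small fraction of velocities and positions every iteration so the system never settles into equilibrium. The probabilities c_v and c_l below are illustrative names and values.

```python
import numpy as np

def inject_negative_entropy(x, v, lo, hi, v_max, c_v=0.1, c_l=0.001):
    """Randomly re-energise velocities and relocate positions to keep the swarm far from equilibrium."""
    n, dim = x.shape
    for i in range(n):
        for d in range(dim):
            if np.random.rand() < c_v:
                v[i, d] = np.random.rand() * v_max          # fresh random velocity component
            if np.random.rand() < c_l:
                x[i, d] = np.random.uniform(lo, hi)         # fresh random location component
    return x, v
```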


Journal ArticleDOI
TL;DR: This paper provides a survey of various evolutionary methods for MO optimization, considering the usual performance measures in MO optimization as well as a few metrics that examine the strengths and weaknesses of each evolutionary approach both quantitatively and qualitatively.
Abstract: Evolutionary techniques for multi-objective (MO) optimization are currently gaining significant attention from researchers in various fields due to their effectiveness and robustness in searching for a set of trade-off solutions. Unlike conventional methods that aggregate multiple attributes to form a composite scalar objective function, evolutionary algorithms with modified reproduction schemes for MO optimization are capable of treating each objective component separately and lead the search in discovering the global Pareto-optimal front. The rapid advances of multi-objective evolutionary algorithms, however, pose the difficulty of keeping track of the developments in this field as well as selecting an existing approach that best suits the optimization problem at hand. This paper thus provides a survey on various evolutionary methods for MO optimization. Many well-known multi-objective evolutionary algorithms have been experimented with and compared extensively on four benchmark problems with different MO optimization difficulties. Besides considering the usual performance measures in MO optimization, e.g., the spread across the Pareto-optimal front and the ability to attain the global trade-offs, the paper also presents a few metrics to examine the strength and weakness of each evolutionary approach both quantitatively and qualitatively. Simulation results for the comparisons are analyzed, summarized and commented upon.

Proceedings ArticleDOI
M. Lovbjerg, T. Krink
12 May 2002
TL;DR: Self-organized criticality (SOC) can help control the PSO and add diversity; extending the PSO with SOC seems promising, reaching faster convergence and better solutions.
Abstract: Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper examines a particle swarm algorithm which has been applied to the generation of interactive, improvised music and suggests that the algorithm may have applications to dynamic optimisation problems.
Abstract: This paper examines a particle swarm algorithm which has been applied to the generation of interactive, improvised music. An important feature of this algorithm is a balance between particle attraction to the centre of mass and repulsive, collision avoiding forces. These forces are not present in the classic particle swarm optimisation algorithms. A number of experiments illuminate the nature of these new forces and it is suggested that the algorithm may have applications to dynamic optimisation problems.

Proceedings ArticleDOI
26 Aug 2002
TL;DR: An adaptive particle swarm optimization (PSO) at the individual level is presented; a replacement criterion, based on the diversity of fitness between the current particle and the best historical experience, is introduced to adaptively maintain the social attributes of the swarm by removing inactive particles.
Abstract: An adaptive particle swarm optimization (PSO) at the individual level is presented. By analyzing the social model of PSO, a replacement criterion, based on the diversity of fitness between the current particle and the best historical experience, is introduced to adaptively maintain the social attributes of the swarm by removing inactive particles. Testing on three benchmark functions indicates that it improves average performance effectively.
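
A hedged sketch of the replacement idea: when a particle's current fitness shows too little diversity relative to the best historical experience, it is treated as inactive and replaced by a freshly initialised particle. The threshold eps and the exact comparison are illustrative assumptions.

```python
import numpy as np

def replace_inactive(x, v, current_val, gbest_val, lo, hi, eps=1e-4):
    """Re-initialise particles whose fitness has collapsed onto the best historical experience."""
    n, dim = x.shape
    for i in range(n):
        if abs(current_val[i] - gbest_val) < eps:           # little fitness diversity left: inactive
            x[i] = np.random.uniform(lo, hi, dim)           # take the inactive particle off and
            v[i] = np.zeros(dim)                            # replace it with a fresh one
    return x, v
```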