
Showing papers on "Multi-swarm optimization published in 2004"


Journal ArticleDOI
TL;DR: An approach in which Pareto dominance is incorporated into particle swarm optimization (PSO) in order to allow this heuristic to handle problems with several objective functions; results indicate that the approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.
Abstract: This paper presents an approach in which Pareto dominance is incorporated into particle swarm optimization (PSO) in order to allow this heuristic to handle problems with several objective functions. Unlike other current proposals to extend PSO to solve multiobjective optimization problems, our algorithm uses a secondary (i.e., external) repository of particles that is later used by other particles to guide their own flight. We also incorporate a special mutation operator that enriches the exploratory capabilities of our algorithm. The proposed approach is validated using several test functions and metrics taken from the standard literature on evolutionary multiobjective optimization. Results indicate that the approach is highly competitive and that it can be considered a viable alternative for solving multiobjective optimization problems.

3,474 citations
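The external-repository idea in the abstract above is easy to sketch. The helpers below (`dominates`, `update_archive`) are illustrative names, not the authors' implementation, and use two-objective minimization for concreteness:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate into the external repository, keeping only
    mutually non-dominated objective vectors."""
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated; archive unchanged
    # drop members the candidate dominates, then add it
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive

archive = []
for point in [(3, 4), (2, 5), (1, 1), (2, 2)]:
    archive = update_archive(archive, point)
print(archive)  # only (1, 1) survives: it dominates every other point
```

In the full algorithm, particles would draw their "flight leaders" from this archive rather than only from their own best positions.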


Journal ArticleDOI
TL;DR: A novel parameter automation strategy for the particle swarm algorithm, together with two further extensions that improve its performance after a predefined number of generations, including a time-varying mutation step size that overcomes the difficulty of selecting an appropriate step size for different problems.
Abstract: This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum solution, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia weight factor in particle swarm optimization (PSO). On the basis of TVAC, two new strategies are discussed to improve the performance of the PSO. First, the concept of "mutation" is introduced to particle swarm optimization along with TVAC (MPSO-TVAC), by adding a small perturbation to a randomly selected modulus of the velocity vector of a random particle with a predefined probability. Second, we introduce a novel particle swarm concept, the "self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC)". Under this method, only the "social" part and the "cognitive" part of the particle swarm strategy are considered in estimating the new velocity of each particle, and particles are reinitialized whenever they stagnate in the search space. In addition, to overcome the difficulty of selecting an appropriate mutation step size for different problems, a time-varying mutation step size is introduced. Further, for most of the benchmarks, the performance of the MPSO-TVAC method is found to be insensitive to the mutation probability. The effect of the reinitialization velocity on the performance of the HPSO-TVAC method is also examined, and a time-varying reinitialization step size is found to be an efficient parameter optimization strategy for HPSO-TVAC. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. Furthermore, both the MPSO and HPSO strategies are observed to perform poorly when the acceleration coefficients are fixed at two.

2,753 citations
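As a rough sketch of the time-varying schedules this abstract describes, the helper below linearly interpolates the acceleration coefficients and the inertia weight over the run. The ranges shown (c1: 2.5→0.5, c2: 0.5→2.5, w: 0.9→0.4) are commonly quoted settings for TVAC-style PSO, not necessarily the paper's exact values:

```python
def tvac(t, t_max, c1_range=(2.5, 0.5), c2_range=(0.5, 2.5), w_range=(0.9, 0.4)):
    """Linearly interpolate the acceleration coefficients c1, c2 and the
    inertia weight w from their initial to their final values over the run."""
    frac = t / t_max
    lerp = lambda pair: pair[0] + (pair[1] - pair[0]) * frac
    return lerp(c1_range), lerp(c2_range), lerp(w_range)

print(tvac(0, 100))    # (2.5, 0.5, 0.9): exploratory at the start
print(tvac(100, 100))  # (0.5, 2.5, 0.4): exploitative at the end
```

Early in the run the large cognitive coefficient encourages particles to roam; late in the run the large social coefficient pulls the swarm toward the global best.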


Journal ArticleDOI
TL;DR: A study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques and is integrated into a representative example of optimization of a profiled corrugated horn antenna.
Abstract: The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in a PSO algorithm developed by the authors at UCLA specifically for engineering optimizations (UCLA-PSO). Also discussed is recent progress in the development of the PSO and the special considerations needed for engineering implementation including suggestions for the selection of parameter values. Additionally, a study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques. These concepts are then integrated into a representative example of optimization of a profiled corrugated horn antenna.

2,165 citations


Journal ArticleDOI
TL;DR: A variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm.
Abstract: The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.

2,038 citations
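The cooperative decomposition idea can be illustrated as follows: each sub-swarm optimizes one slice of the solution vector, and a candidate slice is scored by splicing it into a context vector assembled from the other sub-swarms' best slices. The function below is a hedged sketch with invented names, shown for 1-D sub-swarms:

```python
def evaluate_with_context(context, part_index, part_value, objective):
    """CPSO-style evaluation: substitute one sub-swarm's candidate part
    into the context vector (the other sub-swarms' best parts)."""
    trial = list(context)
    trial[part_index] = part_value
    return objective(trial)

sphere = lambda x: sum(v * v for v in x)
context = [3.0, 4.0, 5.0]  # current best part contributed by each 1-D sub-swarm
# sub-swarm 1 evaluates a candidate 0.0 for its component
print(evaluate_with_context(context, 1, 0.0, sphere))  # 9 + 0 + 25 = 34.0
```

Because each sub-swarm only perturbs its own components, improvements in one dimension are never undone by simultaneous changes in another, which is the source of the reported performance gain.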


Proceedings ArticleDOI
19 Jun 2004
TL;DR: The results from this study show that DE generally outperforms the other algorithms, however, on two noisy functions, both DE and PSO were outperformed by the EA.
Abstract: Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA.

1,252 citations


Journal ArticleDOI
TL;DR: The particle swarm optimizer shares the ability of the genetic algorithm to handle arbitrary nonlinear cost functions, but with a much simpler implementation, which clearly demonstrates strong potential for widespread use in electromagnetic optimization.
Abstract: Particle swarm optimization is a recently invented high-performance optimizer that is very easy to understand and implement. It is similar in some ways to genetic algorithms or evolutionary algorithms, but requires less computational bookkeeping and generally only a few lines of code. In this paper, a particle swarm optimizer is implemented and compared to a genetic algorithm for phased array synthesis of a far-field sidelobe notch, using amplitude-only, phase-only, and complex tapering. The results show that some optimization scenarios are better suited to one method than the other (i.e., particle swarm optimization performs better in some cases while genetic algorithms perform better in others), which implies that the two methods traverse the problem hyperspace differently. The particle swarm optimizer shares the ability of the genetic algorithm to handle arbitrary nonlinear cost functions, but with a much simpler implementation, which clearly demonstrates strong potential for widespread use in electromagnetic optimization.

877 citations
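The claim that PSO needs "only a few lines of code" is easy to illustrate. The update below is the standard velocity/position rule; the parameter values (w=0.7, c1=c2=1.5) are typical textbook choices, not taken from the paper:

```python
import random

random.seed(1)  # fixed seed so the example is reproducible

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle, per dimension."""
    new_v = [w * vi + c1 * random.random() * (pb - xi)
                    + c2 * random.random() * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

x, v = [5.0, -3.0], [0.0, 0.0]
x, v = pso_step(x, v, pbest=[1.0, 1.0], gbest=[0.0, 0.0])
print(x)  # moved from (5, -3) in the direction of the attractors
```

The full optimizer just loops this step over a population while tracking personal and global bests, which is why the implementation burden is so much lighter than a GA's selection/crossover/mutation machinery.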


Journal ArticleDOI
TL;DR: The approaches include transformations of the objective function through the recently proposed deflection and stretching techniques, as well as a repulsion source at each detected minimizer, resulting in an efficient algorithm which has the ability to avoid previously detected solutions and, thus, detect all global minimizers of a function.
Abstract: This paper presents approaches for effectively computing all global minimizers of an objective function. The approaches include transformations of the objective function through the recently proposed deflection and stretching techniques, as well as a repulsion source at each detected minimizer. The aforementioned techniques are incorporated in the context of the particle swarm optimization (PSO) method, resulting in an efficient algorithm which has the ability to avoid previously detected solutions and, thus, detect all global minimizers of a function. Experimental results on benchmark problems originating from the fields of global optimization, dynamical systems, and game theory, are reported, and conclusions are derived.

718 citations


Proceedings ArticleDOI
01 Dec 2004
TL;DR: A so-called mainstream thought of the population is introduced to evaluate the search scope of a particle and thus a novel parameter control method of QPSO is proposed.
Abstract: Based on the quantum-behaved particle swarm optimization (QPSO) algorithm, we formulate the philosophy of QPSO and introduce a so-called mainstream thought of the population to evaluate the search scope of a particle, and thus propose a novel parameter control method for QPSO. After that, we test the revised QPSO algorithm on several benchmark functions, and the experimental results show its superiority.

676 citations


Journal ArticleDOI
TL;DR: A suite of five test problems offering different patterns of such changes and different difficulties in tracking the dynamic Pareto-optimal front by a multiobjective optimization algorithm is presented.
Abstract: After demonstrating adequately the usefulness of evolutionary multiobjective optimization (EMO) algorithms in finding multiple Pareto-optimal solutions for static multiobjective optimization problems, there is now a growing need for solving dynamic multiobjective optimization problems in a similar manner. In this paper, we focus on addressing this issue by developing a number of test problems and by suggesting a baseline algorithm. Since in a dynamic multiobjective optimization problem, the resulting Pareto-optimal set is expected to change with time (or, iteration of the optimization process), a suite of five test problems offering different patterns of such changes and different difficulties in tracking the dynamic Pareto-optimal front by a multiobjective optimization algorithm is presented. Moreover, a simple example of a dynamic multiobjective optimization problem arising from a dynamic control loop is presented. An extension to a previously proposed direction-based search method is proposed for solving such problems and tested on the proposed test problems. The test problems introduced in this paper should encourage researchers interested in multiobjective optimization and dynamic optimization problems to develop more efficient algorithms in the near future.

557 citations


Proceedings ArticleDOI
19 Jun 2004
TL;DR: This paper reviews the development of the particle swarm optimization method in recent years and modifications to adapt to different and complex environments are reviewed, and real world applications are listed.
Abstract: This paper reviews the development of the particle swarm optimization method in recent years. Included are brief discussions of various parameters. Modifications to adapt to different and complex environments are reviewed, and real world applications are listed.

501 citations


Book ChapterDOI
01 Jan 2004
TL;DR: Results show Discrete PSO is certainly not as powerful as some specific algorithms, but, on the other hand, it can easily be modified for any discrete/combinatorial problem for which the authors have no good specialized algorithm.
Abstract: The classical Particle Swarm Optimization is a powerful method to find the minimum of a numerical function, on a continuous definition domain. As some binary versions have already successfully been used, it seems quite natural to try to define a framework for a discrete PSO. In order to better understand both the power and the limits of this approach, we examine in detail how it can be used to solve the well known Traveling Salesman Problem, which is in principle very “bad” for this kind of optimization heuristic. Results show Discrete PSO is certainly not as powerful as some specific algorithms, but, on the other hand, it can easily be modified for any discrete/combinatorial problem for which we have no good specialized algorithm.

Journal ArticleDOI
TL;DR: A new evolutionary approach, particle swarm optimization, is adapted for single-slice 3D-to-3D biomedical image registration and a new hybrid particle swarm technique is proposed that incorporates initial user guidance.
Abstract: Biomedical image registration, or geometric alignment of two-dimensional and/or three-dimensional (3D) image data, is becoming increasingly important in diagnosis, treatment planning, functional studies, computer-guided therapies, and in biomedical research. Registration based on intensity values usually requires optimization of some similarity metric between the images. Local optimization techniques frequently fail because functions of these metrics with respect to transformation parameters are generally nonconvex and irregular and, therefore, global methods are often required. In this paper, a new evolutionary approach, particle swarm optimization, is adapted for single-slice 3D-to-3D biomedical image registration. A new hybrid particle swarm technique is proposed that incorporates initial user guidance. Multimodal registrations with initial orientations far from the ground truth were performed on three volumes from different modalities. Results of optimizing the normalized mutual information similarity metric were compared with various evolutionary strategies. The hybrid particle swarm technique produced more accurate registrations than the evolutionary strategies in many cases, with comparable convergence. These results demonstrate that particle swarm approaches, along with evolutionary techniques and local methods, are useful in image registration, and emphasize the need for hybrid approaches for difficult registration problems.

Book ChapterDOI
TL;DR: It is shown that the multi-swarm optimizer significantly outperforms single population PSO on this problem, and that multi-quantum swarms are superior to multi-charged swarms and SOS.
Abstract: Many real-world problems are dynamic, requiring an optimization algorithm which is able to continuously track a changing optimum over time. In this paper, we present new variants of Particle Swarm Optimization (PSO) specifically designed to work well in dynamic environments. The main idea is to extend the single population PSO and Charged Particle Swarm Optimization (CPSO) methods by constructing interacting multi-swarms. In addition, a new algorithmic variant, which broadens the implicit atomic analogy of CPSO to a quantum model, is introduced. The multi-swarm algorithms are tested on a multi-modal dynamic function – the moving peaks benchmark – and results are compared to the single population approach of PSO and CPSO, and to results obtained by a state-of-the-art evolutionary algorithm, namely self-organizing scouts (SOS). We show that our multi-swarm optimizer significantly outperforms single population PSO on this problem, and that multi-quantum swarms are superior to multi-charged swarms and SOS.
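The "quantum" swarm variant mentioned above replaces the conventional velocity update for some particles with resampling around the swarm attractor at every step. The sketch below samples uniformly inside a hypercube of half-width `rcloud` around the global best; this is a simplification of the atomic analogy (the original work uses other cloud distributions), and the names are invented:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def quantum_position(gbest, rcloud):
    """Resample a 'quantum' particle uniformly inside a hypercube of
    half-width rcloud centred on the swarm's global best."""
    return [g + random.uniform(-rcloud, rcloud) for g in gbest]

gbest = [1.0, 2.0, 3.0]
pos = quantum_position(gbest, rcloud=0.5)
print(pos)  # each coordinate lies within 0.5 of the corresponding gbest value
```

Because quantum particles never build up momentum, they keep a constant-size cloud of diversity around each swarm's peak, which is what lets the multi-swarm re-acquire an optimum immediately after it moves.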

Journal ArticleDOI
TL;DR: An improved particle swarm optimizer (PSO) for solving mechanical design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables is presented.
Abstract: This paper presents an improved particle swarm optimizer (PSO) for solving mechanical design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. A constraint handling method called the ‘fly-back mechanism’ is introduced to maintain a feasible population. The standard PSO algorithm is also extended to handle mixed variables using a simple scheme. Five benchmark problems commonly used in the literature on engineering optimization and nonlinear programming are successfully solved by the proposed algorithm. The proposed algorithm is easy to implement, and its results and convergence performance are better than those of other techniques.
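The ‘fly-back mechanism’ lends itself to a very short sketch: if a move would leave the feasible region, the particle simply returns to its previous (feasible) position. The helper below is illustrative, not the paper's code:

```python
def fly_back_step(x_old, v, feasible):
    """Fly-back sketch: apply the velocity; if the new position violates
    the constraints, fly back to the previous feasible position."""
    x_new = [xi + vi for xi, vi in zip(x_old, v)]
    return x_new if feasible(x_new) else list(x_old)

in_unit_box = lambda x: all(0.0 <= xi <= 1.0 for xi in x)
print(fly_back_step([0.5, 0.5], [0.2, 0.3], in_unit_box))  # feasible move kept
print(fly_back_step([0.5, 0.5], [0.9, 0.0], in_unit_box))  # infeasible: stays at [0.5, 0.5]
```

Since every particle starts feasible and infeasible moves are rejected, the population remains feasible throughout the run without penalty terms.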

01 Dec 2004
TL;DR: The results suggest that (1) parallel PSO exhibits excellent parallel performance under load‐balanced conditions, (2) an asynchronous implementation would be valuable for real‐life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available.
Abstract: Present day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the Particle Swarm Optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima - large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that 1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, 2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and 3) larger population sizes should be considered when multiple processors are available.

Journal ArticleDOI
01 May 2004
TL;DR: The results obtained seem to indicate that Particle Swarm Data Mining Algorithms are competitive, not only with other evolutionary techniques, but also with industry standard algorithms such as the J48 algorithm, and can be successfully applied to more demanding problem domains.
Abstract: Particle Swarm Optimisers are inherently distributed algorithms where the solution for a problem emerges from the interactions between many simple individual agents called particles. This article proposes the use of the Particle Swarm Optimiser as a new tool for Data Mining. In the first phase of our research, three different Particle Swarm Data Mining Algorithms were implemented and tested against a Genetic Algorithm and a Tree Induction Algorithm (J48). From the obtained results, Particle Swarm Optimisers proved to be a suitable candidate for classification tasks. The second phase was dedicated to improving one of the Particle Swarm Optimiser variants in terms of attribute type support and temporal complexity. The data sources used here for experimental testing are commonly used and considered a de facto standard for ranking the reliability of rule discovery algorithms. The results obtained in these domains seem to indicate that Particle Swarm Data Mining Algorithms are competitive, not only with other evolutionary techniques, but also with industry standard algorithms such as the J48 algorithm, and can be successfully applied to more demanding problem domains.

Book ChapterDOI
Xiaodong Li
26 Jun 2004
TL;DR: This paper proposes an improved particle swarm optimizer using the notion of species to determine its neighbourhood best values, for solving multimodal optimization problems.
Abstract: This paper proposes an improved particle swarm optimizer using the notion of species to determine its neighbourhood best values, for solving multimodal optimization problems. In the proposed species-based PSO (SPSO), the swarm population is divided into species sub-populations based on their similarity. Each species is grouped around a dominating particle called the species seed. At each iteration step, species seeds are identified from the entire population and then adopted as neighbourhood bests for these individual species groups separately. Species are formed adaptively at each step based on the feedback obtained from the multimodal fitness landscape. Over successive iterations, species are able to simultaneously optimize towards multiple optima, regardless of whether they are global or local optima. Our experiments demonstrated that SPSO is very effective in dealing with multimodal optimization functions with lower dimensions.

Journal ArticleDOI
TL;DR: Various novel heuristic stochastic search techniques have been proposed for optimization of proportional–integral–derivative gains used in Sugeno fuzzy logic based automatic generation control of multi-area thermal generating plants.

Journal ArticleDOI
TL;DR: The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented, and recommendations for the utilization of the algorithm in future multidisciplinary optimization applications are presented.
Abstract: The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations for the utilization of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization, as well as the numerical noise and discrete variables present in the current example problem.

Proceedings ArticleDOI
19 Jun 2004
TL;DR: A new discrete particle swarm optimization algorithm based on quantum individuals is proposed, which is simpler and more powerful than existing algorithms.
Abstract: The particle swarm optimization algorithm is a new methodology in evolutionary computation. It has been found to be extremely effective in solving a wide range of engineering problems; however, it is inefficient in dealing with discrete problems. In this paper, a new discrete particle swarm optimization algorithm based on quantum individuals is proposed. It is simpler and more powerful than the available algorithms. Simulation experiments and an application in CDMA also demonstrate its high efficiency.

Proceedings ArticleDOI
19 Jun 2004
TL;DR: A heuristic rule, the smallest position value (SPV) rule, is developed to enable the continuous particle swarm optimization algorithm to be applied to all classes of sequencing problems, which are NP-hard in the literature.
Abstract: In this work, we present a particle swarm optimization algorithm to solve the single machine total weighted tardiness problem. A heuristic rule, the smallest position value (SPV) rule, is developed to enable the continuous particle swarm optimization algorithm to be applied to all classes of sequencing problems, which are NP-hard in the literature. A simple but very efficient local search method is embedded in the particle swarm optimization algorithm. The computational results show that the particle swarm algorithm is able to find the optimal and best-known solutions on all instances of widely used benchmarks from the OR library.
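The SPV rule itself is essentially a one-liner: sort the job indices by their continuous coordinates, so the job with the smallest position value is sequenced first. The function name below is invented for illustration:

```python
def spv_permutation(position):
    """Smallest position value (SPV) rule: map a continuous particle
    position to a job sequence by sorting indices by coordinate value."""
    return sorted(range(len(position)), key=lambda j: position[j])

# a 4-dimensional continuous position decodes to a processing order
print(spv_permutation([1.8, -0.99, 3.01, 0.72]))  # [1, 3, 0, 2]
```

This lets the unmodified continuous PSO dynamics search over permutations, since any real-valued position decodes to a valid sequence.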

Journal ArticleDOI
TL;DR: The PSO and its variants are applied to a synthetic test system of five types of candidate units with 6- and 14-year planning horizon and the results obtained are compared with dynamic programming in terms of speed and efficiency.

Proceedings ArticleDOI
19 Jun 2004
TL;DR: This work presents a simple mechanism to handle constraints with a particle swarm optimization algorithm that uses a simple criterion based on closeness of a particle to the feasible region in order to select a leader.
Abstract: This work presents a simple mechanism to handle constraints with a particle swarm optimization algorithm. Our proposal uses a simple criterion based on closeness of a particle to the feasible region in order to select a leader. Additionally, our algorithm incorporates a turbulence operator that improves the exploratory capabilities of our particle swarm optimization algorithm. Despite its relative simplicity, our comparison of results indicates that the proposed approach is highly competitive with respect to three constraint-handling techniques representative of the state-of-the-art in the area.

Proceedings ArticleDOI
27 Sep 2004
TL;DR: In this paper, a particle swarm optimization algorithm-based technique, called PSO-clustering, is proposed to search the cluster center in the arbitrary data set automatically, which can help the user to distinguish the structure of data and simplify the complexity of data from mass information.
Abstract: Clustering analysis is widely applied to pattern recognition, color quantization and image classification. It can help the user distinguish the structure of data and reduce the complexity of mass information, uncovering the information implicit in the data. In real cases, data distributions can be of any size and shape. A particle swarm optimization algorithm-based technique, called PSO-clustering, is proposed in this article. We adopt particle swarm optimization to automatically search for cluster centers in an arbitrary data set. PSO searches for the best solution by probabilistically combining the social-only model and the cognition-only model. The method is simple and effective, and it can avoid local minima. Finally, the effectiveness of PSO-clustering is demonstrated on four artificial data sets.
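A typical fitness function for this kind of PSO clustering encodes a full set of cluster centres in each particle and scores it by the total distance from every data point to its nearest centre. The 1-D sketch below is illustrative, not the paper's implementation:

```python
def quantization_error(centers, data):
    """Clustering fitness: total distance from each 1-D point to its
    nearest cluster centre (a particle encodes the full set of centres)."""
    return sum(min(abs(p - c) for c in centers) for p in data)

data = [0.0, 0.2, 5.0, 5.1]
print(quantization_error([0.1, 5.05], data))  # well-placed centres -> small error
print(quantization_error([2.5, 2.6], data))   # poorly placed centres -> large error
```

The swarm then minimizes this error over centre positions, so the best particle's coordinates are the discovered cluster centres.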


Journal Article
TL;DR: This paper describes a procedure that uses particle swarm optimization (PSO) combined with the Lagrangian Relaxation (LR) framework to solve a power-generator scheduling problem known as the unit commitment problem (UCP).
Abstract: This paper describes a procedure that uses particle swarm optimization (PSO) combined with the Lagrangian Relaxation (LR) framework to solve a power-generator scheduling problem known as the unit commitment problem (UCP). The UCP consists of determining the schedule and production amount of generating units within a power system subject to operating constraints. The LR framework is applied to relax coupling constraints of the optimization problem. Thus, the UCP is separated into independent optimization functions for each generating unit. Each of these sub-problems is solved using Dynamic Programming (DP). PSO is used to evolve the Lagrangian multipliers. PSO is a population based search technique, which belongs to the swarm intelligence paradigm that is motivated by the simulation of social behavior to manipulate individuals towards better solution areas. The performance of the PSO-LR procedure is compared with results of other algorithms in the literature used to solve the UCP. The comparison shows that the PSO-LR approach is efficient in terms of computational time while providing good solutions.

Book ChapterDOI
26 Jun 2004
TL;DR: An extension of the heuristic called “particle swarm optimization” (PSO) that is able to deal with multiobjective optimization problems that uses the concept of Pareto dominance to determine the flight direction of a particle.
Abstract: In this paper, we present an extension of the heuristic called “particle swarm optimization” (PSO) that is able to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and is based on the idea of having a set of sub-swarms instead of single particles. In each sub-swarm, a PSO algorithm is executed and, at some point, the different sub-swarms exchange information. Our proposed approach is validated using several test functions taken from the evolutionary multiobjective optimization literature. Our results indicate that the approach is highly competitive with respect to algorithms representative of the state-of-the-art in evolutionary multiobjective optimization.

Proceedings ArticleDOI
01 Dec 2004
TL;DR: In this article, a novel particle swarm optimization algorithm based on the Gaussian probability distribution is proposed, which improves the convergence ability of PSO without the necessity of tuning these parameters.
Abstract: In this paper, a novel particle swarm optimization algorithm based on the Gaussian probability distribution is proposed. The standard particle swarm optimization (PSO) algorithm has some parameters that need to be specified before using the algorithm, e.g., the acceleration constants c1 and c2, the inertia weight w, the maximum velocity Vmax, and the number of particles in the swarm. The purpose of this work is the development of an algorithm based on the Gaussian distribution, which improves the convergence ability of PSO without the necessity of tuning these parameters. The only parameter to be specified by the user is the number of particles. The Gaussian PSO algorithm was tested on a suite of well-known benchmark functions and the results were compared with the results of the standard PSO algorithm. The simulation results show that the Gaussian swarm outperforms the standard one.
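A minimal sketch of the Gaussian update, assuming the common formulation in which the uniform random coefficients and the tuned constants w, c1, c2 are replaced by absolute values of N(0,1) draws (some variants also keep a velocity term; this one omits it for brevity):

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

def gaussian_pso_step(x, pbest, gbest):
    """Move each coordinate by |N(0,1)|-weighted pulls toward the personal
    and global bests; no w, c1, c2, or Vmax to tune."""
    new_x = []
    for xi, pb, gb in zip(x, pbest, gbest):
        step = abs(random.gauss(0, 1)) * (pb - xi) + abs(random.gauss(0, 1)) * (gb - xi)
        new_x.append(xi + step)
    return new_x

print(gaussian_pso_step([5.0, 5.0], [1.0, 1.0], [0.0, 0.0]))  # pulled toward the bests
```

Since the Gaussian draws supply all the stochastic scaling, the only remaining user choice is the swarm size, matching the abstract's claim.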

Journal ArticleDOI
TL;DR: In this paper, three approaches are presented for generating scenario trees for financial portfolio problems based on simulation, optimization and hybrid simulation/optimization.

Journal ArticleDOI
TL;DR: The particle swarm algorithm is modified to detect the pareto-optimal front, and this paper shows how this can be used to solve multiobjective optimization problems.
Abstract: Real-world optimization problems often require the minimization/maximization of more than one objective, which, in general, conflict with each other. These problems (multiobjective optimization problems, vector optimization problems) are usually treated by using weighted sums or other decision-making schemes. An alternative way is to look for the pareto-optimal front. In this paper, the particle swarm algorithm is modified to detect the pareto-optimal front.