
Showing papers on "Particle swarm optimization published in 2003"


Journal ArticleDOI
TL;DR: The particle swarm optimization algorithm is analyzed using standard results from dynamic system theory, graphical parameter selection guidelines are derived, and the resulting performance is shown to be superior to previously published results.

2,554 citations


Journal ArticleDOI
TL;DR: In this paper, a particle swarm optimization (PSO) method for solving the economic dispatch (ED) problem in power systems is proposed, and the experimental results show that the proposed PSO method was indeed capable of obtaining higher quality solutions efficiently in ED problems.
Abstract: This paper proposes a particle swarm optimization (PSO) method for solving the economic dispatch (ED) problem in power systems. Many nonlinear characteristics of the generator, such as ramp rate limits, prohibited operating zone, and nonsmooth cost functions are considered using the proposed method in practical generator operation. The feasibility of the proposed method is demonstrated for three different systems, and it is compared with the GA method in terms of the solution quality and computation efficiency. The experimental results show that the proposed PSO method was indeed capable of obtaining higher quality solutions efficiently in ED problems.

1,635 citations


Journal ArticleDOI
TL;DR: This paper summarizes the development of SFLANET, a computer model that links SFLA with the hydraulic simulation software EPANET and its library functions, and describes the application of SFLANET to network design problems from the literature.
Abstract: Shuffled Frog Leaping Algorithm (SFLA) is a meta-heuristic for solving discrete optimization problems. Here it is applied to determine optimal discrete pipe sizes for new pipe networks and for network expansions. SFLA is a population based, cooperative search metaphor inspired by natural memetics. The algorithm uses memetic evolution in the form of infection of ideas from one individual to another in a local search. The local search is similar in concept to particle swarm optimization. A shuffling strategy allows for the exchange of information between local searches to move toward a global optimum. This paper summarizes the development of SFLANET, a computer model that links SFLA and the hydraulic simulation software EPANET and its library functions. Application of SFLANET to literature network design problems is then described. Although the algorithm is in its initial stages of development, promising results were obtained.

1,288 citations
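As a rough, heavily simplified sketch of the SFLA idea described in the abstract above: frogs are sorted by fitness, dealt round-robin into memeplexes, the worst frog in each memeplex leaps toward that memeplex's best frog (or is randomly repositioned if the leap fails), and the population is then re-shuffled. The discrete pipe-sizing encoding and the EPANET coupling of SFLANET are not shown, and all function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def sfla(f, dim, n_memeplexes=5, frogs_per_memeplex=6, iters=50, local_steps=5,
         lower=-5.0, upper=5.0, seed=0):
    """Simplified shuffled frog leaping: sort frogs, deal them into memeplexes,
    improve each memeplex's worst frog by leaping toward its best frog
    (falling back to a random reposition), then shuffle and repeat."""
    rng = np.random.default_rng(seed)
    n = n_memeplexes * frogs_per_memeplex
    frogs = rng.uniform(lower, upper, size=(n, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, frogs)
        order = np.argsort(fit)                      # best first (minimisation)
        frogs, fit = frogs[order], fit[order]
        for m in range(n_memeplexes):
            idx = np.arange(m, n, n_memeplexes)      # deal frogs round-robin
            for _ in range(local_steps):
                mem_fit = fit[idx]
                best, worst = idx[mem_fit.argmin()], idx[mem_fit.argmax()]
                step = rng.random(dim) * (frogs[best] - frogs[worst])
                candidate = np.clip(frogs[worst] + step, lower, upper)
                if f(candidate) < fit[worst]:
                    frogs[worst], fit[worst] = candidate, f(candidate)
                else:                                # censor: replace with a random frog
                    frogs[worst] = rng.uniform(lower, upper, size=dim)
                    fit[worst] = f(frogs[worst])
        # shuffling happens implicitly at the next global sort-and-deal
    best = fit.argmin()
    return frogs[best], fit[best]
```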


Proceedings ArticleDOI
08 Dec 2003
TL;DR: This paper proposes two new approaches to using PSO to cluster data: one in which PSO alone finds the cluster centroids, and another in which K-means clustering seeds the initial swarm so that PSO refines the clusters formed by K-means.
Abstract: This paper proposes two new approaches to using PSO to cluster data. It is shown how PSO can be used to find the centroids of a user specified number of clusters. The algorithm is then extended to use K-means clustering to seed the initial swarm. This second algorithm basically uses PSO to refine the clusters formed by K-means. The new PSO algorithms are evaluated on six data sets, and compared to the performance of K-means clustering. Results show that both PSO clustering techniques have much potential.

766 citations
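As an illustration of the centroid-encoding idea described in the abstract above, the Python sketch below treats each particle as a flattened set of K centroids, scores it by the mean distance of points to their nearest centroid, and optionally seeds one particle with K-means output. All function names, the quantization-error fitness, and the parameter defaults are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def quantization_error(centroids, data):
    """Mean distance from each data point to its nearest centroid (lower is better)."""
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

def pso_cluster(data, k, n_particles=20, iters=100,
                w=0.72, c1=1.49, c2=1.49, seed_centroids=None, seed=0):
    """PSO over flattened centroid sets; optionally seed one particle with K-means output."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    # initialise each particle with k randomly chosen data points as centroids
    pos = data[rng.integers(0, n, size=(n_particles, k))].reshape(n_particles, k * dim).astype(float)
    if seed_centroids is not None:
        pos[0] = np.asarray(seed_centroids, dtype=float).reshape(-1)  # K-means seeding
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([quantization_error(p.reshape(k, dim), data) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([quantization_error(p.reshape(k, dim), data) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest.reshape(k, dim)
```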


Proceedings ArticleDOI
24 Apr 2003
TL;DR: Some of the mysteries of the particle swarm algorithm are revealed, its similarity to other stochastic population-based problem solving methods is discovered, and new avenues of investigation are suggested or implied.
Abstract: The particle swarm algorithm has just enough moving parts to make it hard to understand. The formula is very simple, and the working of the algorithm is easy to describe verbally, yet it is very difficult to grasp how the particles oscillate around centers that are constantly changing; how they influence one another; how the various parameters affect the trajectory of the particle; how the topology of the swarm affects its performance; and so on. This paper strips away some traditional features of the particle swarm in the search for the properties that make it work. The particle swarm algorithm is modified by eliminating the velocity formula, and the resulting variations are compared. In the process, some of the mysteries of the algorithm are revealed, its similarity to other stochastic population-based problem-solving methods is discovered, and new avenues of investigation are suggested or implied.

696 citations
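The abstract above describes removing the velocity update. A well-known velocity-free formulation in this spirit (often called the "bare bones" swarm) replaces each particle's move with a Gaussian sample centred midway between its personal best and the global (or neighbourhood) best, with spread equal to their separation. The Python below is a minimal sketch under those assumptions, not necessarily the exact variants compared in the paper.

```python
import numpy as np

def bare_bones_pso(f, bounds, n_particles=30, iters=200, seed=0):
    """Velocity-free swarm: sample each coordinate from N(mean, sd) where
    mean = (pbest + gbest) / 2 and sd = |pbest - gbest|, per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
    pbest = pos.copy()
    pbest_f = np.apply_along_axis(f, 1, pos)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        mean = 0.5 * (pbest + gbest)
        sd = np.abs(pbest - gbest) + 1e-12          # avoid zero spread
        pos = rng.normal(mean, sd)
        fx = np.apply_along_axis(f, 1, pos)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# usage: minimum of the sphere function in 5 dimensions
best_x, best_f = bare_bones_pso(lambda x: np.sum(x * x),
                                (-5 * np.ones(5), 5 * np.ones(5)))
```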


Proceedings ArticleDOI
24 Apr 2003
TL;DR: The Sigma method is introduced as a new method for finding the best local guide for each particle of the population from a set of Pareto-optimal solutions, and the results are compared with those of a multi-objective evolutionary algorithm (MOEA).
Abstract: In multi-objective particle swarm optimization (MOPSO) methods, selecting the best local guide (the global best particle) for each particle of the population from a set of Pareto-optimal solutions has a great impact on the convergence and diversity of solutions, especially when optimizing problems with a high number of objectives. This paper introduces the Sigma method as a new method for finding the best local guides for each particle of the population. The Sigma method is implemented and compared with another method, which uses the strategy of an existing MOPSO method for finding the local guides. These methods are examined on different test functions, and the results are compared with the results of a multi-objective evolutionary algorithm (MOEA).

679 citations
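The Sigma method is only named in the abstract above. The sketch below illustrates its commonly described two-objective form: each point's sigma value identifies the line through the origin on which it lies, and each particle takes as its guide the archive member with the closest sigma. The helper names and the epsilon guard are assumptions for illustration.

```python
import numpy as np

def sigma_value(f):
    """Sigma of a 2-objective vector f = (f1, f2): points on the same line
    through the origin share the same sigma value."""
    f1, f2 = f
    return (f1 ** 2 - f2 ** 2) / (f1 ** 2 + f2 ** 2 + 1e-12)

def select_guides(particles_f, archive_f):
    """For each particle, pick the archive member with the closest sigma."""
    sig_p = np.array([sigma_value(f) for f in particles_f])
    sig_a = np.array([sigma_value(f) for f in archive_f])
    return np.abs(sig_p[:, None] - sig_a[None, :]).argmin(axis=1)

# usage: three particles choose guides from a two-member Pareto archive
guides = select_guides([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)],
                       [(0.5, 3.0), (3.0, 0.5)])
```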


Proceedings ArticleDOI
24 Apr 2003
TL;DR: This method combines the traditional velocity and position update rules with the ideas of Gaussian mutation and has succeeded in acquiring better results than those by GA and PSO alone.
Abstract: In this paper we present particle swarm optimization with Gaussian mutation combining the idea of the particle swarm with concepts from evolutionary algorithms. This method combines the traditional velocity and position update rules with the ideas of Gaussian mutation. This model is tested and compared with the standard PSO and standard GA. The comparative experiments have been conducted on unimodal functions and multimodal functions. PSO with Gaussian mutation is able to obtain a result superior to GA. We also apply the PSO with Gaussian mutation to a gene network. Consequently, it has succeeded in acquiring better results than those by GA and PSO alone.

553 citations
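As a sketch of the hybrid described above, the following Python combines the canonical velocity and position update with a Gaussian perturbation applied to a randomly chosen fraction of particles each iteration. The mutation probability, step size, and coefficient values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso_gaussian_mutation(f, dim, n_particles=30, iters=200,
                          w=0.72, c1=1.49, c2=1.49, p_mut=0.1, sigma=0.1, seed=0):
    """Canonical velocity/position update followed by Gaussian mutation of a
    random fraction of particles (hybridising PSO with an EA-style operator)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.apply_along_axis(f, 1, pos)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        mutate = rng.random(n_particles) < p_mut          # particles to perturb
        pos[mutate] += rng.normal(0.0, sigma, size=(mutate.sum(), dim))
        fx = np.apply_along_axis(f, 1, pos)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```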


Book ChapterDOI
Xiaodong Li1
12 Jul 2003
TL;DR: This paper introduces a modified PSO, the Non-dominated Sorting Particle Swarm Optimizer (NSPSO), for better multiobjective optimization; it makes better use of particles' personal bests and offspring for more effective nondomination comparisons.

Abstract: This paper introduces a modified PSO, the Non-dominated Sorting Particle Swarm Optimizer (NSPSO), for better multiobjective optimization. NSPSO extends the basic form of PSO by making better use of particles' personal bests and offspring for more effective nondomination comparisons. Instead of a single comparison between a particle's personal best and its offspring, NSPSO compares all particles' personal bests and their offspring in the entire population. This proves to be effective in providing an appropriate selection pressure to propel the swarm population towards the Pareto-optimal front. By using the non-dominated sorting concept and two parameter-free niching methods, NSPSO and its variants have shown remarkable performance on a set of well-known difficult test functions (the ZDT series). Our results and comparison with NSGA-II show that NSPSO is highly competitive with existing evolutionary and PSO multiobjective algorithms.

511 citations


Proceedings ArticleDOI
24 Apr 2003
TL;DR: A particle swarm optimization toolbox for use with the Matlab scientific programming environment has been developed; PSO is introduced briefly, and the use of the toolbox is explained with some examples.
Abstract: A particle swarm optimization toolbox (PSOt) for use with the Matlab scientific programming environment has been developed. PSO is introduced briefly and then the use of the toolbox is explained with some examples. A link to downloadable code is provided.

504 citations


Proceedings ArticleDOI
24 Apr 2003
TL;DR: A modification of the particle swarm optimization algorithm (PSO) intended to combat the problem of premature convergence observed in many applications of PSO, which is shown to perform significantly better than the original PSO algorithm and some of its variants, on many different benchmark optimization problems.
Abstract: This paper presents a modification of the particle swarm optimization algorithm (PSO) intended to combat the problem of premature convergence observed in many applications of PSO. The proposed new algorithm moves particles towards nearby particles of higher fitness, instead of attracting each particle towards just the best position discovered so far by any particle. This is accomplished by using the ratio of the relative fitness and the distance of other particles to determine the direction in which each component of the particle position needs to be changed. The resulting algorithm (FDR-PSO) is shown to perform significantly better than the original PSO algorithm and some of its variants, on many different benchmark optimization problems. Empirical examination of the evolution of the particles demonstrates that the convergence of the algorithm does not occur at an early phase of particle evolution, unlike PSO. Avoiding premature convergence allows FDR-PSO to continue search for global optima in difficult multimodal optimization problems.

461 citations
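The fitness-distance-ratio idea in the abstract above can be sketched as follows: for each particle and each dimension, choose the neighbour whose personal best offers the largest fitness improvement per unit of distance along that dimension, and add an extra attraction term toward it. The Python below is a minimal illustration (minimisation assumed); the coefficient values and helper names are assumptions, not taken from the paper.

```python
import numpy as np

def fdr_neighbor(i, d, pos, pbest, pbest_f):
    """Index of the particle whose personal best maximises the fitness-distance
    ratio for particle i in dimension d (minimisation)."""
    gains = pbest_f[i] - pbest_f                       # fitness improvement offered
    dists = np.abs(pbest[:, d] - pos[i, d]) + 1e-12    # distance along dimension d
    ratio = gains / dists
    ratio[i] = -np.inf                                 # exclude the particle itself
    return ratio.argmax()

def fdr_velocity_update(pos, vel, pbest, pbest_f, gbest,
                        w=0.72, c1=1.0, c2=1.0, c3=2.0, rng=None):
    """One FDR-PSO velocity update: the usual pbest/gbest pulls plus a third
    attraction toward the FDR-selected neighbour's pbest, per dimension."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, dim = pos.shape
    nbest = np.empty_like(pos)
    for i in range(n):
        for d in range(dim):
            nbest[i, d] = pbest[fdr_neighbor(i, d, pos, pbest, pbest_f), d]
    r1, r2, r3 = rng.random(pos.shape), rng.random(pos.shape), rng.random(pos.shape)
    return (w * vel + c1 * r1 * (pbest - pos)
            + c2 * r2 * (gbest - pos) + c3 * r3 * (nbest - pos))
```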


Journal ArticleDOI
TL;DR: The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer.
Abstract: Gerhard Venter (Vanderplaats Research and Development, Colorado Springs, CO) and Jaroslaw Sobieszczanski-Sobieski (NASA Langley Research Center, Hampton, VA). The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.

Proceedings ArticleDOI
10 Nov 2003
TL;DR: A hybrid particle swarm with a differential evolution operator, termed DEPSO, is proposed; it provides bell-shaped mutations that reflect the population diversity over the course of the evolution, while keeping the self-organized particle swarm dynamics.
Abstract: A hybrid particle swarm with a differential evolution operator, termed DEPSO, is proposed. It provides bell-shaped mutations that reflect the population diversity over the course of the evolution, while keeping the self-organized particle swarm dynamics. The algorithm is then applied to a set of benchmark functions, and the experimental results illustrate its efficiency.

Proceedings ArticleDOI
01 Jan 2003
TL;DR: This paper develops special methods for solving the TSP using PSO, proposing the concepts of the swap operator and swap sequence and redefining the PSO operators on that basis to design a special-purpose PSO.
Abstract: This paper proposes a new application of particle swarm optimization to the traveling salesman problem. We develop special methods for solving the TSP using PSO, introducing the concepts of the swap operator and swap sequence and redefining the PSO operators on that basis; in this way a special-purpose PSO is designed. The experiments show that it can achieve good results.
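A minimal sketch of the swap-operator idea described above: a swap operator exchanges two cities in a tour, a swap sequence is an ordered list of such swaps, and a "velocity" is built by keeping, with some probability, the swaps that would move the current tour toward the personal and global best tours. The Python below illustrates only these operators; the retention probabilities and the surrounding PSO loop are assumptions.

```python
import random

def swaps_to_reach(source, target):
    """Swap sequence that transforms permutation `source` into `target`."""
    s = list(source)
    seq = []
    for i in range(len(s)):
        if s[i] != target[i]:
            j = s.index(target[i])
            seq.append((i, j))
            s[i], s[j] = s[j], s[i]
    return seq

def apply_swaps(perm, seq):
    """Position update: apply a swap sequence to a tour."""
    p = list(perm)
    for i, j in seq:
        p[i], p[j] = p[j], p[i]
    return p

def velocity(perm, pbest, gbest, alpha=0.5, beta=0.5):
    """Keep each swap toward pbest with probability alpha and toward gbest with beta."""
    v = [sw for sw in swaps_to_reach(perm, pbest) if random.random() < alpha]
    v += [sw for sw in swaps_to_reach(perm, gbest) if random.random() < beta]
    return v

# one position update: tour := tour "+" velocity
tour, pbest, gbest = [0, 2, 1, 3], [0, 1, 2, 3], [3, 1, 2, 0]
tour = apply_swaps(tour, velocity(tour, pbest, gbest))
```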

Proceedings ArticleDOI
23 Jun 2003
TL;DR: It appears that a fully informed particle swarm is more susceptible to alterations in the topology, but with a good topology it can outperform the canonical version.
Abstract: We vary the way an individual in the particle swarm interacts with its neighbors. Performance depends on population topology as well as algorithm version.
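The "fully informed" variation studied above replaces attraction to a single best position with attraction to the personal bests of every neighbour. A minimal sketch, assuming the usual constriction coefficients (chi near 0.73, phi = 4.1) and a ring topology for illustration:

```python
import numpy as np

def fips_velocity(i, pos, vel, pbest, neighbors, chi=0.7298, phi=4.1, rng=None):
    """Fully informed update: particle i is pulled toward the personal bests of
    all its neighbours, each weighted by a random coefficient summing to phi."""
    if rng is None:
        rng = np.random.default_rng(0)
    nbrs = neighbors[i]
    acc = np.zeros_like(pos[i])
    for k in nbrs:
        acc += rng.uniform(0.0, phi / len(nbrs), size=pos[i].shape) * (pbest[k] - pos[i])
    return chi * (vel[i] + acc)

# ring topology: each particle is informed by itself and its two ring neighbours
n = 20
neighbors = [[(i - 1) % n, i, (i + 1) % n] for i in range(n)]
```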

Proceedings ArticleDOI
02 Nov 2003
TL;DR: From the experiments, it is clear that a PSO with increasing inertia weight outperforms one with decreasing inertia weight, in both convergence speed and solution precision, with no additional computing load.
Abstract: A PSO with increasing inertia weight, as opposed to the widely used PSO with decreasing inertia weight, is proposed in this paper. Rather than being based on empirical study alone or on rules of thumb, the algorithm is derived from a study of particle trajectories and a convergence analysis. Four standard test functions are finally used to confirm its validity. The experiments show that a PSO with increasing inertia weight outperforms one with decreasing inertia weight in both convergence speed and solution precision, with no additional computing load.
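For reference, the inertia weight w scales the previous velocity in the update v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); the study above argues for increasing rather than decreasing it over the run. A small sketch of the two linear schedules follows; the endpoint values are illustrative, not the paper's.

```python
def inertia_weight(t, t_max, w_start, w_end):
    """Linear inertia-weight schedule from w_start at t = 0 to w_end at t = t_max."""
    return w_start + (w_end - w_start) * t / t_max

# conventional decreasing schedule (e.g. 0.9 -> 0.4) versus the increasing
# schedule studied here (e.g. 0.4 -> 0.9); endpoints are illustrative only
w_decreasing = [inertia_weight(t, 100, 0.9, 0.4) for t in range(101)]
w_increasing = [inertia_weight(t, 100, 0.4, 0.9) for t in range(101)]
```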

Proceedings ArticleDOI
13 Jul 2003
TL;DR: In this paper, a hybrid particle swarm optimization for a practical distribution state estimation is proposed, which can estimate load and distributed generation output values at each node by minimizing difference between measured and calculated voltages and currents.
Abstract: This paper proposes a hybrid particle swarm optimization for a practical distribution state estimation. The proposed method considers nonlinear characteristics of the practical equipment and actual limited measurements in distribution systems. The method can estimate load and distributed generation output values at each node by minimizing difference between measured and calculated voltages and currents. The feasibility of the proposed method is demonstrated and compared with an original particle swarm optimization based method on practical distribution system models. Effectiveness of the constriction factor approach of particle swarm optimization is also investigated. The results indicate the applicability of the proposed state estimation method to the practical distribution systems.

Proceedings ArticleDOI
24 Apr 2003
TL;DR: This paper presents a modified dynamic neighborhood particle swarm optimization (DNPSO) algorithm that is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives.
Abstract: This paper presents a modified dynamic neighborhood particle swarm optimization (DNPSO) algorithm for multiobjective optimization problems. PSO is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives. An extended memory is introduced to store global Pareto optimal solutions to reduce computation time. Several benchmark cases were tested and the results show that the modified DNPSO is much more efficient than the original DNPSO and other multiobjective optimization techniques.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The results show that mutation hinders the motion of the swarm on the sphere but the combination of CPSO with mutation provides a significant improvement in performance for the Rastrigin and Rosenbrock functions for all dimensions and the Ackley function for dimensions 20 and 30, with no improvement for the 10 dimensional case.
Abstract: The particle swarm optimization algorithm converges rapidly during the initial stages of a search, but often slows considerably and can get trapped in local optima. This paper examines the use of mutation to both speed up convergence and escape local minima. It compares the effectiveness of the basic particle swarm optimization scheme (BPSO) with each of BPSO with mutation, constriction particle swarm optimization (CPSO) with mutation, and CPSO without mutation. The four test functions used were the Sphere, Ackley, Rastrigin and Rosenbrock functions of dimensions 10, 20 and 30. The results show that mutation hinders the motion of the swarm on the Sphere function, but the combination of CPSO with mutation provides a significant improvement in performance for the Rastrigin and Rosenbrock functions for all dimensions and the Ackley function for dimensions 20 and 30, with no improvement for the 10 dimensional case.
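For context, the constriction PSO (CPSO) referred to above multiplies the whole velocity update by Clerc's constriction coefficient chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2 > 4 (about 0.7298 for c1 = c2 = 2.05). The sketch below shows one constricted step followed by an optional Gaussian mutation; the mutation rate and scale are illustrative assumptions.

```python
import math
import numpy as np

def constriction(c1=2.05, c2=2.05):
    """Clerc's constriction coefficient; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def cpso_mutation_step(pos, vel, pbest, gbest, rng,
                       c1=2.05, c2=2.05, p_mut=0.05, sigma=0.5):
    """One constricted velocity/position update, then per-component Gaussian mutation."""
    chi = constriction(c1, c2)                      # ~0.7298 for c1 = c2 = 2.05
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
    pos = pos + vel
    mask = rng.random(pos.shape) < p_mut            # components selected for mutation
    pos = pos + mask * rng.normal(0.0, sigma, size=pos.shape)
    return pos, vel
```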

Journal ArticleDOI
TL;DR: This paper describes a method of designing a reconfigurable dual‐beam antenna array using a new evolutionary algorithm called particle swarm optimization (PSO) to find element excitations that will result in a sector pattern main beam with low side lobes.
Abstract: Multiple-beam antenna arrays have important applications in communications and radar. This paper describes a method of designing a reconfigurable dual-beam antenna array using a new evolutionary algorithm called particle swarm optimization (PSO). The design problem is to find element excitations that will result in a sector-pattern main beam with low side lobes, with the additional requirement that the same excitation amplitudes applied to the array with zero phase should result in a high-directivity, low side lobe, pencil-shaped main beam. Two approaches to the optimization are detailed. First, the PSO is used to optimize the coefficients of the Woodward-Lawson array synthesis method. Second, the element excitations are optimized directly using PSO. The performance of the two methods is compared and the viability of the resulting designs is discussed in terms of sensitivity to errors in the excitation. Additionally, a parallel version of the particle swarm code developed for a multi-node Beowulf cluster and the benefits that multi-node computing brings to global optimization are discussed. © 2003 Wiley Periodicals, Inc. Microwave Opt Technol Lett 38: 168-175, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.11005

Proceedings ArticleDOI
24 Apr 2003
TL;DR: This paper investigates the performance of the guaranteed convergence PSO (GCPSO) using different neighbourhood topologies and compares the results with their standard PSO counterparts.
Abstract: The standard particle swarm optimiser (PSO) may prematurely converge on suboptimal solutions that are not even guaranteed to be local extrema. The guaranteed convergence modifications to the PSO algorithm ensure that the PSO converges on at least a local extremum, but at the cost of even faster convergence. This faster convergence means that less of the search space is explored, reducing the opportunity for the swarm to find better local extrema. Various neighbourhood topologies inhibit premature convergence by preserving swarm diversity during the search. This paper investigates the performance of the guaranteed convergence PSO (GCPSO) using different neighbourhood topologies and compares the results with their standard PSO counterparts.
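For reference, the guaranteed-convergence modification discussed above is usually described as changing the update of only the single global-best particle: it is reset onto the best known position, keeps its momentum, and receives a random perturbation whose radius rho grows after consecutive successes and shrinks after consecutive failures. A minimal sketch under that description (the threshold values s_c and f_c and the coefficient w are illustrative):

```python
import numpy as np

def gcpso_best_particle_update(gbest, vel_best, rho, rng, w=0.72):
    """Position update for the global-best particle only: reset onto the best known
    point, keep momentum, and add a random perturbation of radius rho so the
    particle keeps searching instead of stalling."""
    dim = gbest.size
    return gbest + w * vel_best + rho * (1.0 - 2.0 * rng.random(dim))

def adapt_rho(rho, successes, failures, s_c=15, f_c=5):
    """Grow rho after repeated improvements, shrink it after repeated failures."""
    if successes > s_c:
        return 2.0 * rho
    if failures > f_c:
        return 0.5 * rho
    return rho
```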

Proceedings ArticleDOI
24 Apr 2003
TL;DR: A modified particle swarm optimizer which deals with permutation problems and preliminary study on the n-queens problem shows that the modified PSO is promising in solving constraint satisfaction problems.
Abstract: This paper introduces a modified particle swarm optimizer which deals with permutation problems. Particles are defined as permutations of a group of unique values. Velocity updates are redefined based on the similarity of two particles. Particles change their permutations with a random rate defined by their velocities. A mutation factor is introduced to prevent the current pBest from becoming stuck at local minima. Preliminary study on the n-queens problem shows that the modified PSO is promising in solving constraint satisfaction problems.

01 Jan 2003
TL;DR: This paper studies a parallel version of the Vector Evaluated Particle Swarm Optimization (VEPSO) method for multiobjective problems, investigating both the efficiency and the advantages of the parallel implementation.
Abstract: This paper studies a parallel version of the Vector Evaluated Particle Swarm Optimization (VEPSO) method for multiobjective problems. Experiments on well known and widely used test problems are performed, aiming at investigating both the efficiency of VEPSO as well as the advantages of the parallel implementation. The obtained results are compared with the corresponding results of the Vector Evaluated Genetic Algorithm approach, indicating the superiority of VEPSO.

Proceedings ArticleDOI
22 Apr 2003
TL;DR: In the proposed algorithm, the rate of convergence is improved by adding a conscience factor to the self-organizing maps algorithm and the robustness of the result is measured by using a resampling technique.
Abstract: Gene clustering, the process of grouping related genes in the same cluster, is at the foundation of different genomic studies that aim at analyzing the function of genes. Microarray technologies have made it possible to measure gene expression levels for thousands of genes simultaneously. For knowledge to be extracted from the datasets generated by these technologies, the datasets have to be presented to a scientist in a meaningful way. Gene clustering methods serve this purpose. In this paper, a hybrid clustering approach that is based on self-organizing maps and particle swarm optimization is proposed. In the proposed algorithm, the rate of convergence is improved by adding a conscience factor to the self-organizing maps algorithm. The robustness of the result is measured by using a resampling technique. The algorithm is implemented on a cluster of workstations.

Book ChapterDOI
12 Jul 2003
TL;DR: The resulting algorithm, known as Fitness-Distance-Ratio based PSO (FDR-PSO), is shown to perform significantly better than the original PSO algorithm and several of its variants, on many different benchmark optimization problems.
Abstract: This paper presents a modification of the particle swarm optimization algorithm (PSO) intended to combat the problem of premature convergence observed in many applications of PSO. In the new algorithm, each particle is attracted towards the best previous positions visited by its neighbors, in addition to the other aspects of particle dynamics in PSO. This is accomplished by using the ratio of the relative fitness and the distance of other particles to determine the direction in which each component of the particle position needs to be changed. The resulting algorithm, known as Fitness-Distance-Ratio based PSO (FDR-PSO), is shown to perform significantly better than the original PSO algorithm and several of its variants, on many different benchmark optimization problems. Avoiding premature convergence allows FDR-PSO to continue search for global optima in difficult multimodal optimization problems, reaching better solutions than PSO and several of its variants.

Journal ArticleDOI
TL;DR: The optimal sizing design of truss structures is studied using the recently proposed particle swarm optimization algorithm (PSOA), which mimics the social behavior of birds.
Abstract: The optimal sizing design of truss structures is studied using the recently proposed particle swarm optimization algorithm (PSOA). The algorithm mimics the social behavior of birds. Individual birds in the flock exchange information about their position, velocity and fitness, and the behavior of the flock is then influenced to increase the probability of migration to regions of high fitness. A simple approach is presented to accommodate the stress and displacement constraints in the initial stages of the swarm searches. Increased social pressure, at the cost of cognitive learning, is exerted on infeasible birds to increase their rate of migration to feasible regions. Numerical results are presented for a number of well-known test functions, with dimensionality of up to 21.
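One way to read the "increased social pressure at the cost of cognitive learning" mentioned above is to reduce the cognitive coefficient and raise the social coefficient for particles that currently violate the stress or displacement constraints, pulling them toward the feasible region located by the rest of the swarm. The sketch below encodes that interpretation; the specific coefficient values, and indeed this reading of the mechanism, are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def velocity_update(pos, vel, pbest, gbest, infeasible, rng,
                    w=0.72, c1=1.5, c2=1.5, c1_inf=0.5, c2_inf=2.5):
    """Feasible particles use the usual coefficients; infeasible particles get a
    smaller cognitive and a larger social coefficient (increased social pressure)."""
    c1v = np.where(infeasible[:, None], c1_inf, c1)
    c2v = np.where(infeasible[:, None], c2_inf, c2)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    return w * vel + c1v * r1 * (pbest - pos) + c2v * r2 * (gbest - pos)
```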

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The proposed approach is based on the particle swarm optimization method and it is used for the detection of proper weight matrices that lead the fuzzy cognitive map to desired steady states.
Abstract: We introduce a new algorithm for fuzzy cognitive maps learning. The proposed approach is based on the particle swarm optimization method and it is used for the detection of proper weight matrices that lead the fuzzy cognitive map to desired steady states. For this purpose a properly defined objective function that incorporates experts' knowledge is constructed and minimized. The application of the proposed methodology to an industrial control problem supports the claim that the proposed technique is efficient and robust.

Proceedings ArticleDOI
24 Apr 2003
TL;DR: New ways an individual can be influenced by its neighbors are introduced in particle swarm optimization, where a population of candidate problem solution vectors evolves "social" norms by being influenced by their topological neighbors.
Abstract: Particle swarm optimization is a novel algorithm where a population of candidate problem solution vectors evolves "social" norms by being influenced by their topological neighbors. Until now, an individual was influenced by its best performance acquired in the past and the best experience observed in its neighborhood. In this paper, we introduce new ways an individual can be influenced by its neighbors.

Proceedings ArticleDOI
13 Jul 2003
TL;DR: The simulation results show that the proposed BPSO method is indeed capable of obtaining higher quality solutions, and the test results are compared with those obtained by the GA method in terms of solution quality and convergence characteristic.
Abstract: This paper proposes integrating a discrete binary particle swarm optimization (BPSO) method with the Lambda-iteration method for solving unit commitment (UC) problems. The UC problem is considered as two linked optimization sub-problems: the unit-scheduling problem, which is solved by the BPSO method to minimize the transition cost, and the economic dispatch (ED) problem, which is solved by the Lambda-iteration method to minimize the production cost. The feasibility of the proposed method is demonstrated for 10- and 26-unit systems, and the test results are compared with those obtained by the GA method in terms of solution quality and convergence characteristics. The simulation results show that the proposed method is indeed capable of obtaining higher quality solutions.
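The lambda-iteration sub-problem mentioned above dispatches the committed units so that all operate at equal marginal cost while meeting demand. Assuming quadratic fuel costs a_i + b_i*P + c_i*P^2 (the paper's cost model and unit data may differ), a minimal bisection on lambda looks like the following; the coefficients in the usage line are purely illustrative.

```python
import numpy as np

def lambda_iteration_ed(demand, b, c, pmin, pmax, tol=1e-6):
    """Economic dispatch of committed units with cost a_i + b_i*P + c_i*P^2:
    find lambda such that P_i = clip((lambda - b_i) / (2*c_i)) sums to demand."""
    b, c, pmin, pmax = map(np.asarray, (b, c, pmin, pmax))
    lo, hi = (b + 2 * c * pmin).min(), (b + 2 * c * pmax).max()
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        p = np.clip((lam - b) / (2 * c), pmin, pmax)
        # bisection works because total output is nondecreasing in lambda
        if p.sum() < demand:
            lo = lam
        else:
            hi = lam
    return np.clip((0.5 * (lo + hi) - b) / (2 * c), pmin, pmax)

# usage: dispatch 400 MW over three units (illustrative coefficients)
p = lambda_iteration_ed(400.0, b=[7.0, 7.8, 8.2], c=[0.004, 0.003, 0.005],
                        pmin=[50, 50, 50], pmax=[250, 200, 150])
```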

Book ChapterDOI
14 Sep 2003
TL;DR: This paper studies aggregation in a swarm of simple robots, called s-bots, having the capability to self-organize and self-assemble to form a robotic system, called a swarm-bot, and shows that artificial evolution is able to produce simple but general solutions to the aggregation problem.
Abstract: In this paper, we study aggregation in a swarm of simple robots, called s-bots, having the capability to self-organize and self-assemble to form a robotic system, called a swarm-bot. The aggregation process, observed in many biological systems, is of fundamental importance since it is the prerequisite for other forms of cooperation that involve self-organization and self-assembling. We consider the problem of defining the control system for the swarm-bot using artificial evolution. The results obtained in a simulated 3D environment are presented and analyzed. They show that artificial evolution, exploiting the complex interactions among s-bots and between s-bots and the environment, is able to produce simple but general solutions to the aggregation problem.

Journal ArticleDOI
TL;DR: In this paper, the particle swarm was used to find solutions to a specific dual-beam array problem in two ways: first, the PSO optimized the Woodward-Lawson coefficients, and second, the element excitation amplitudes and phases directly.
Abstract: The particle swarm was used to find solutions to a specific dual-beam array problem in two ways. First, the PSO optimized the Woodward–Lawson coefficients. Second, the PSO optimized the element excitation amplitudes and phases directly. Both methods provided acceptable solutions to the problem, but the second method was found to be more straightforward both conceptually and in practice. In real-life applications, errors in power dividing networks and sampling error associated with binary phase shifters are inevitable, and practical arrays must be able to maintain acceptable performance in spite of these imperfections. The particle swarm-optimized reconfigurable array designs were found to be resistant to simulated variations in the excitation coefficients. Future work with the particle swarm might extend to many different areas of antenna design and analysis. The PSO program developed at UCLA has the ability to be linked to nearly any numerical simulation program available, and is therefore capable of optimizing any structure that can be numerically simulated. In particular, multiple-beam array problems could be approached using more degrees of freedom than are utilized in the present work. For example, the element position and geometry could be optimized in addition to the array excitation to achieve even more complex multiple-beam patterns.