
Showing papers on "Multi-swarm optimization published in 2005"


Journal ArticleDOI
TL;DR: The formulations and results of five recent evolutionary-based algorithms are compared: genetic algorithms, memetic algorithms, particle swarm, ant-colony systems, and shuffled frog leaping.

1,268 citations


Journal ArticleDOI
Bo Liu, Ling Wang, Yihui Jin, Fang Tang, Dexian Huang
TL;DR: Simulation results and comparisons with the standard PSO and several meta-heuristics show that the CPSO can effectively enhance the searching efficiency and greatly improve the searching quality.
Abstract: As a novel optimization technique, chaos has gained much attention and some applications during the past decade. For a given energy or cost function, by following chaotic ergodic orbits, a chaotic dynamic system may eventually reach the global optimum or its good approximation with high probability. To enhance the performance of particle swarm optimization (PSO), an evolutionary computation technique based on individual improvement plus population cooperation and competition, a hybrid particle swarm optimization algorithm is proposed by incorporating chaos. Firstly, an adaptive inertia weight factor (AIWF) is introduced in PSO to efficiently balance the exploration and exploitation abilities. Secondly, PSO with AIWF and chaos are hybridized to form a chaotic PSO (CPSO), which reasonably combines the population-based evolutionary searching ability of PSO and chaotic searching behavior. Simulation results and comparisons with the standard PSO and several meta-heuristics show that the CPSO can effectively enhance the searching efficiency and greatly improve the searching quality.

879 citations
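The two ingredients the abstract above names, the adaptive inertia weight factor and the chaotic search, can be sketched roughly as follows. This is an illustrative Python reconstruction, not the authors' code; the weight bounds, search radius, and logistic-map parameter are assumptions.

```python
import random

def logistic_map(z, mu=4.0):
    """One step of the logistic map; with mu = 4 the orbit is chaotic on (0, 1)."""
    return mu * z * (1.0 - z)

def adaptive_inertia(f_i, f_avg, f_min, w_min=0.4, w_max=0.9):
    """AIWF in the spirit of the paper: particles better than the swarm
    average (minimization) get a smaller inertia weight, favoring
    exploitation; worse particles keep w_max for exploration.
    The exact bounds are assumptions."""
    if f_i <= f_avg and f_avg != f_min:
        return w_min + (w_max - w_min) * (f_i - f_min) / (f_avg - f_min)
    return w_max

def chaotic_local_search(x_best, f, radius=0.1, steps=50):
    """Perturb the incumbent along a logistic-map orbit, keeping any
    improvement -- the chaotic-search half of CPSO."""
    z = random.uniform(0.01, 0.99)
    best_x, best_f = list(x_best), f(x_best)
    for _ in range(steps):
        z = logistic_map(z)
        cand = [xi + radius * (2.0 * z - 1.0) for xi in best_x]
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f
```

Since only improvements are accepted, the chaotic search never degrades the incumbent, which matches the hybrid's role of refining the best solution found by PSO.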


Book ChapterDOI
09 Mar 2005
TL;DR: A new Multi-Objective Particle Swarm Optimizer is proposed, which is based on Pareto dominance and the use of a crowding factor to filter out the list of available leaders and incorporates the ε-dominance concept to fix the size of the set of final solutions produced by the algorithm.
Abstract: In this paper, we propose a new Multi-Objective Particle Swarm Optimizer, which is based on Pareto dominance and the use of a crowding factor to filter out the list of available leaders. We also propose the use of different mutation (or turbulence) operators which act on different subdivisions of the swarm. Finally, the proposed approach also incorporates the ε-dominance concept to fix the size of the set of final solutions produced by the algorithm. Our approach is compared against five state-of-the-art algorithms, including three PSO-based approaches recently proposed. The results indicate that the proposed approach is highly competitive, being able to approximate the front even in cases where all the other PSO-based approaches fail.

659 citations


Journal ArticleDOI
TL;DR: The results obtained from the computational study have shown that the proposed algorithm is a viable and effective approach for the multi-objective FJSP, especially for problems on a large scale.

639 citations


Journal ArticleDOI
TL;DR: This paper describes the synthesis method of linear array geometry with minimum sidelobe level and null control using the particle swarm optimization (PSO) algorithm, a newly discovered, high-performance evolutionary algorithm capable of solving general N-dimensional, linear and nonlinear optimization problems.
Abstract: This paper describes the synthesis method of linear array geometry with minimum sidelobe level and null control using the particle swarm optimization (PSO) algorithm. The PSO algorithm is a newly discovered, high-performance evolutionary algorithm capable of solving general N-dimensional, linear and nonlinear optimization problems. Compared to other evolutionary methods such as genetic algorithms and simulated annealing, the PSO algorithm is much easier to understand and implement and requires minimal mathematical preprocessing. The array geometry synthesis is first formulated as an optimization problem with the goal of sidelobe level (SLL) suppression and/or null placement in certain directions, and then solved by the PSO algorithm for the optimum element locations. Three design examples are presented that illustrate the use of the PSO algorithm, and the optimization goal in each example is easily achieved. The results of the PSO algorithm are validated by comparing with results obtained using the quadratic programming method (QPM).

634 citations


Proceedings ArticleDOI
25 Jun 2005
TL;DR: An approach that extends the Particle Swarm Optimization algorithm to handle multiobjective optimization problems by incorporating crowding distance computation into PSO, specifically in global best selection and in the deletion method of an external archive of nondominated solutions.
Abstract: In this paper, we present an approach that extends the Particle Swarm Optimization (PSO) algorithm to handle multiobjective optimization problems by incorporating the mechanism of crowding distance computation into the algorithm of PSO, specifically on global best selection and in the deletion method of an external archive of nondominated solutions. The crowding distance mechanism together with a mutation operator maintains the diversity of nondominated solutions in the external archive. The performance of this approach is evaluated on test functions and metrics from literature. The results show that the proposed approach is highly competitive in converging towards the Pareto front and generates a well distributed set of nondominated solutions.

578 citations
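The crowding distance the abstract above borrows is the NSGA-II-style measure: for each objective, sort the archive and sum the normalized gaps between each solution's two neighbors, with boundary solutions given infinite distance. A minimal sketch (our reconstruction, not the paper's code):

```python
def crowding_distance(front):
    """Crowding distance over a list of objective vectors.
    Returns one distance per solution; larger = less crowded.
    Boundary solutions on each objective get infinity so they are
    always preferred as leaders and never deleted from the archive."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')
        lo, hi = front[order[0]][k], front[order[-1]][k]
        if hi == lo:
            continue  # degenerate objective: all values equal
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist
```

In the paper's usage, global best selection favors high-distance archive members, and archive truncation deletes low-distance ones.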


Proceedings ArticleDOI
12 Dec 2005
TL;DR: This study reports how the differential evolution (DE) algorithm performed on the test bed developed for the CEC05 contest for real parameter optimization.
Abstract: This study reports how the differential evolution (DE) algorithm performed on the test bed developed for the CEC05 contest for real parameter optimization. The test bed includes 25 scalable functions, many of which are both non-separable and highly multi-modal. Results include DE's performance on the 10- and 30-dimensional versions of each function.

555 citations
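For reference, the classic DE/rand/1/bin scheme benchmarked in studies like the one above looks as follows; the control parameters here are common defaults, not necessarily the paper's settings.

```python
import random

def de_rand_1_bin(f, bounds, np_=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Classic DE/rand/1/bin for minimization.
    bounds is a list of (lo, hi) per dimension; F is the differential
    weight, CR the crossover rate."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # three distinct donors, none equal to the target i
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated dim
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

The greedy one-to-one selection makes the population's best fitness monotonically non-increasing, one reason DE is a strong baseline on the CEC05 functions.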


Journal ArticleDOI
TL;DR: A solution to the reactive power dispatch problem with a novel particle swarm optimization approach based on multiagent systems (MAPSO) is presented and it is shown that the proposed approach converges to better solutions much faster than the earlier reported approaches.
Abstract: Reactive power dispatch in power systems is a complex combinatorial optimization problem involving nonlinear functions having multiple local minima and nonlinear and discontinuous constraints. In this paper, a solution to the reactive power dispatch problem with a novel particle swarm optimization approach based on multiagent systems (MAPSO) is presented. This method integrates the multiagent system (MAS) and the particle swarm optimization (PSO) algorithm. An agent in MAPSO represents a particle to PSO and a candidate solution to the optimization problem. All agents live in a lattice-like environment, with each agent fixed on a lattice point. In order to obtain optimal solution quickly, each agent competes and cooperates with its neighbors, and it can also learn by using its knowledge. Making use of these agent-agent interactions and evolution mechanism of PSO, MAPSO realizes the purpose of optimizing the value of objective function. MAPSO applied to optimal reactive power dispatch is evaluated on an IEEE 30-bus power system and a practical 118-bus power system. Simulation results show that the proposed approach converges to better solutions much faster than the earlier reported approaches. The optimization strategy is general and can be used to solve other power system optimization problems as well.

550 citations
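The lattice-based interaction described above can be sketched as follows. The Von Neumann (four-neighbor) wrap-around neighborhood and the survival rule are our assumptions about the competition operator, in the spirit of the abstract rather than the paper's exact mechanics.

```python
def von_neumann_neighbors(i, j, size):
    """The four lattice neighbors of the agent at (i, j) on a size x size
    wrap-around grid -- in MAPSO each agent is fixed on a lattice point and
    competes/cooperates only with these neighbors."""
    return [((i - 1) % size, j), ((i + 1) % size, j),
            (i, (j - 1) % size), (i, (j + 1) % size)]

def compete(agent_fit, i, j, size):
    """Competition step (minimization): an agent survives if it is no worse
    than its best neighbor; otherwise it would be replaced by, or learn
    from, that neighbor."""
    best = min(von_neumann_neighbors(i, j, size), key=lambda ij: agent_fit[ij])
    return agent_fit[(i, j)] <= agent_fit[best]
```

Restricting interaction to the local lattice slows information spread through the population, which is what lets MAPSO preserve diversity while PSO-style velocity updates drive convergence.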


Proceedings ArticleDOI
08 Jun 2005
TL;DR: A novel dynamic multi-swarm particle swarm optimizer (PSO) is introduced and results show its better performance when compared with some recent PSO variants.
Abstract: In this paper, a novel dynamic multi-swarm particle swarm optimizer (PSO) is introduced. Different from the existing multi-swarm PSOs and the local version of PSO, the swarms are dynamic and the swarms' size is small. The whole population is divided into many small swarms, these swarms are regrouped frequently by using various regrouping schedules and information is exchanged among the swarms. Experiments are conducted on a set of shifted rotated benchmark functions and results show its better performance when compared with some recent PSO variants.

481 citations
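The periodic regrouping step described above can be sketched as a random repartition of particle indices into equal small swarms; the regrouping schedule itself (e.g. every R generations) is applied outside this helper and is one of the variations the paper explores.

```python
import random

def regroup(indices, n_swarms, rng):
    """Randomly repartition the population into equally sized small swarms --
    the regrouping step of a dynamic multi-swarm PSO. Assumes the population
    size is divisible by n_swarms."""
    shuffled = list(indices)
    rng.shuffle(shuffled)
    size = len(shuffled) // n_swarms
    return [shuffled[i * size:(i + 1) * size] for i in range(n_swarms)]
```

Because membership is reshuffled frequently, information found by one small swarm eventually reaches all others without a fixed neighborhood topology.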


Journal ArticleDOI
TL;DR: In this article, a particle swarm optimization (PSO) technique is applied to loss reduction in the IEEE 118-bus system, using an optimal power flow based on a loss-minimization function developed by expanding the original PSO.
Abstract: This paper presents a particle swarm optimization (PSO) as a tool for loss reduction study. This issue can be formulated as a nonlinear optimization problem. The proposed application consists of using a developed optimal power flow based on a loss minimization function by expanding the original PSO. The study is carried out in two steps. First, by using the tangent vector technique, the critical area of the power system is identified from the point of view of voltage instability. Second, once this area is identified, the PSO technique calculates the amount of shunt reactive power compensation that takes place in each bus. The proposed approach has been examined and tested with promising numerical results using the IEEE 118-bus system.

467 citations


Proceedings ArticleDOI
06 Nov 2005
TL;DR: A general formulation of MO optimization is given in this chapter, the Pareto optimality concepts are introduced, and solution approaches with examples of MO problems in the power systems field are given.
Abstract: The goal of this chapter is to give fundamental knowledge on solving multi-objective optimization problems. The focus is on intelligent metaheuristic approaches (evolutionary algorithms or swarm-based techniques), in particular on techniques for efficient generation of the Pareto frontier. A general formulation of MO optimization is given, the Pareto optimality concepts are introduced, and solution approaches with examples of MO problems in the power systems field are given.

Proceedings ArticleDOI
25 Jun 2005
TL;DR: Two new, improved variants of differential evolution are presented, shown to be statistically significantly better on a seven-function test bed for the following performance measures: solution quality, time to find the solution, frequency of finding the solutions, and scalability.
Abstract: Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. In this paper we present two new, improved variants of DE. Performance comparisons of the two proposed methods are provided against (a) the original DE, (b) the canonical particle swarm optimization (PSO), and (c) two PSO-variants. The new DE-variants are shown to be statistically significantly better on a seven-function test bed for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.

Journal ArticleDOI
01 Dec 2005
TL;DR: A hierarchical version of the particle swarm optimization (PSO) metaheuristic, in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm, is introduced.
Abstract: A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method, called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant assigns different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes.
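The hierarchy update can be sketched with an array-encoded binary tree: one bottom-up pass swaps any particle whose best-found fitness beats its parent's, so good particles bubble toward the root and gain influence. This is a sketch of the mechanism, not the paper's exact procedure.

```python
def sift_up(fitness, tree):
    """One maintenance pass over an array-encoded tree of particle ids
    (root at tree[0]; the parent of node i sits at (i - 1) // 2).
    A child whose best-found fitness (minimization) beats its parent's
    swaps places with it, moving good particles up the hierarchy."""
    for i in range(len(tree) - 1, 0, -1):
        parent = (i - 1) // 2
        if fitness[tree[i]] < fitness[tree[parent]]:
            tree[i], tree[parent] = tree[parent], tree[i]
    return tree
```

In H-PSO each particle is then attracted by its parent's best position instead of a single global best, so the root acts as the swarm's most influential leader.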

Journal ArticleDOI
TL;DR: It is shown that inclusion of dynamic inertia renders the PSOA relatively insensitive to the values of the cognitive and social scaling factors, and a parameter sensitivity analysis is performed for these two variants.
Abstract: A number of recently proposed variants of the particle swarm optimization algorithm (PSOA) are applied to an extended Dixon-Szegö unconstrained test set in global optimization. Of the variants considered, it is shown that constriction as proposed by Clerc, and dynamic inertia and maximum velocity reduction as proposed by Fourie and Groenwold, represent the main contenders from a cost efficiency point of view. A parameter sensitivity analysis is then performed for these two variants in the interests of finding a reliable general purpose off-the-shelf PSOA for global optimization. In doing so, it is shown that inclusion of dynamic inertia renders the PSOA relatively insensitive to the values of the cognitive and social scaling factors.

Book
01 Jan 2005
TL;DR: An edited volume on evolutionary multi-objective optimization, with chapters ranging from scalable test problems and Pareto-set data structures to the particle swarm inspired evolutionary algorithm (PS-EA) for multi-criteria optimization problems.
Abstract: Contents: Evolutionary Multiobjective Optimization; Recent Trends in Evolutionary Multiobjective Optimization; Self-adaptation and Convergence of Multiobjective Evolutionary Algorithms in Continuous Search Spaces; A Simple Approach to Evolutionary Multi-objective Optimization; Quad-trees: A Data Structure for Storing Pareto-sets in Multi-objective Evolutionary Algorithms with Elitism; Scalable Test Problems for Evolutionary Multi-Objective Optimization; Particle Swarm Inspired Evolutionary Algorithm (PS-EA) for Multi-Criteria Optimization Problems; Evolving Continuous Pareto Regions; MOGADES: Multi-Objective Genetic Algorithm with Distributed Environment Scheme; Use of Multiobjective Optimization Concepts to Handle Constraints in Genetic Algorithms; Multi-Criteria Optimization of Finite State Automata: Maximizing Performance while Minimizing Description Length; Multi-objective Optimization of Space Structures under Static and Seismic Loading Conditions.

Journal ArticleDOI
TL;DR: An image clustering method that is based on the particle swarm optimizer (PSO) that is applied to synthetic, MRI and satellite images and shows that the PSO image classifier performs better than state-of-the-art image classifiers in all measured criteria.
Abstract: An image clustering method that is based on the particle swarm optimizer (PSO) is developed in this paper. The algorithm finds the centroids of a user-specified number of clusters, where each cluster groups together similar image primitives. To illustrate its wide applicability, the proposed image classifier has been applied to synthetic, MRI and satellite images. Experimental results show that the PSO image classifier performs better than state-of-the-art image classifiers (namely, K-means, Fuzzy C-means, K-Harmonic means and Genetic Algorithms) in all measured criteria. The influence of different values of PSO control parameters on performance is also illustrated.
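A common fitness for PSO-based clustering, and close in spirit to the criteria used in such work, is the quantization error: the mean, over clusters, of the average point-to-centroid distance. The exact fitness used in the paper is an assumption here.

```python
import math

def quantization_error(centroids, data, assign):
    """Quantization error for a candidate clustering (lower is better).
    centroids: list of centroid coordinate tuples; data: list of points;
    assign: index of the centroid assigned to each point.
    Empty clusters are skipped."""
    per_cluster = []
    for c, centroid in enumerate(centroids):
        members = [p for p, a in zip(data, assign) if a == c]
        if not members:
            continue
        d = [math.dist(p, centroid) for p in members]
        per_cluster.append(sum(d) / len(d))
    return sum(per_cluster) / len(per_cluster)
```

In a PSO clustering loop, each particle encodes a full set of centroids, points are assigned to their nearest centroid, and this error serves as the particle's fitness.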

Proceedings ArticleDOI
12 Dec 2005
TL;DR: Sequential parameter optimization as discussed by the authors is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms, and it can be performed algorithmically and requires basically the specification of the relevant algorithm's parameters.
Abstract: Sequential parameter optimization is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms. To demonstrate its flexibility, three scenarios are discussed: (1) no experience in choosing an algorithm's parameter setting is available, (2) a comparison with other algorithms is needed, and (3) an optimization algorithm has to be applied effectively and efficiently to a complex real-world optimization problem. Although sequential parameter optimization relies on enhanced statistical techniques such as design and analysis of computer experiments, it can be performed algorithmically and requires basically the specification of the relevant algorithm's parameters.

Proceedings ArticleDOI
12 Dec 2005
TL;DR: The quasi-Newton method is combined to improve its local search ability and the performance of a modified dynamic multi-swarm particle swarm optimizer (DMS-PSO) on the set of benchmark functions provided by CEC2005 is reported.
Abstract: In this paper, the performance of a modified dynamic multi-swarm particle swarm optimizer (DMS-PSO) on the set of benchmark functions provided by CEC2005 is reported. Different from the existing multi-swarm PSOs and local versions of PSO, the swarms are dynamic and the swarms' size is small. The whole population is divided into many small swarms, these swarms are regrouped frequently by using various regrouping schedules and information is exchanged among the swarms. The quasi-Newton method is combined to improve its local search ability.

Journal Article
TL;DR: A parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data, which demonstrates the usefulness of the proposed PPSO algorithm.
Abstract: Particle swarm optimization (PSO) is an alternative population-based evolutionary computation technique. It has been shown to be capable of optimizing hard mathematical problems in continuous or binary space. We present here a parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data. The first strategy is designed for solution parameters that are independent or are only loosely correlated, such as the Rosenbrock and Rastrigin functions. The second communication strategy can be applied to parameters that are more strongly correlated such as the Griewank function. In cases where the properties of the parameters are unknown, a third hybrid communication strategy can be used. Experimental results demonstrate the usefulness of the proposed PPSO algorithm.

Journal ArticleDOI
TL;DR: The results show that the proposed SAPSO- based ANN has a better ability to escape from a local optimum and is more effective than the conventional PSO-based ANN.

Book ChapterDOI
27 Aug 2005
TL;DR: A penalty function approach is employed and the algorithm is modified to preserve feasibility of the encountered solutions to investigate the performance of the recently proposed Unified Particle Swarm Optimization method on constrained engineering optimization problems.
Abstract: We investigate the performance of the recently proposed Unified Particle Swarm Optimization method on constrained engineering optimization problems. For this purpose, a penalty function approach is employed and the algorithm is modified to preserve feasibility of the encountered solutions. The algorithm is illustrated on four well-known engineering problems with promising results. Comparisons with the standard local and global variants of Particle Swarm Optimization are reported and discussed.

Proceedings ArticleDOI
10 Oct 2005
TL;DR: This paper focuses on discussing two adaptive parameter control methods for QPSO, a quantum-behaved particle swarm optimization algorithm that outperforms traditional PSOs in search ability while having fewer parameters to control.
Abstract: Particle swarm optimization (PSO) is a population-based evolutionary search technique, which has comparable performance with genetic algorithms. The existing PSOs, however, are not global-convergence-guaranteed algorithms. In previous work, we proposed the quantum-behaved particle swarm optimization (QPSO) algorithm, which outperforms traditional PSOs in search ability and has fewer parameters to control. This paper focuses on two adaptive parameter control methods for QPSO. After the QPSO methodology is formulated, stochastic simulation results are given to show how to select the parameter value to guarantee the convergence of the particle in QPSO. Finally, two adaptive parameter control methods are presented, and experimental results on benchmark functions testify to their efficiency.
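For context, the QPSO position update (the delta-potential-well model) replaces the velocity update of standard PSO: each coordinate jumps to a random convex combination of the personal and global bests, perturbed by a term proportional to the distance from the mean best position. A sketch, with the contraction-expansion coefficient alpha as the single control parameter the abstract alludes to:

```python
import math
import random

def qpso_position(pbest_i, gbest, mbest, x, alpha, rng):
    """One QPSO position update for a single particle:
    x'_j = p_j +/- alpha * |mbest_j - x_j| * ln(1/u),
    where p_j = phi * pbest_j + (1 - phi) * gbest_j and phi, u ~ U(0, 1).
    mbest is the mean of all particles' personal bests."""
    phi, u = rng.random(), rng.random()
    new_x = []
    for j in range(len(x)):
        p = phi * pbest_i[j] + (1 - phi) * gbest[j]
        step = alpha * abs(mbest[j] - x[j]) * math.log(1.0 / u)
        new_x.append(p + step if rng.random() < 0.5 else p - step)
    return new_x
```

The adaptive parameter control methods the paper proposes concern how alpha is varied over the run; the convergence analysis mentioned in the abstract concerns which alpha values keep the particle's position from diverging.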

Proceedings ArticleDOI
12 Dec 2005
TL;DR: DynDE is described, a multipopulation DE algorithm developed specifically to solve dynamic optimization problems that doesn't need any parameter control strategy for the F or CR parameters.
Abstract: This paper presents an approach of using differential evolution (DE) to solve dynamic optimization problems. Careful setting of parameters is necessary for DE algorithms to successfully solve optimization problems. This paper describes DynDE, a multipopulation DE algorithm developed specifically to solve dynamic optimization problems that doesn't need any parameter control strategy for the F or CR parameters. Experimental evidence has been gathered to show that this new algorithm is capable of efficiently solving the moving peaks benchmark.

Journal ArticleDOI
TL;DR: The paper presents a new algorithm based on a hybrid method of chaotic particle swarm optimization and a linear interior point method, addressing the time-consuming convergence and demanding initial values that remain problems for traditional algorithms.


Book ChapterDOI
09 Mar 2005
TL;DR: An optimization procedure which specializes in solving multi-objective, multi-global problems, carefully designed so as to degenerate to efficient algorithms for solving other simpler optimization problems, such as single-objective uni-global problems.
Abstract: Due to the vagaries of optimization problems encountered in practice, users resort to different algorithms for solving different optimization problems. In this paper, we suggest an optimization procedure which specializes in solving multi-objective, multi-global problems. The algorithm is carefully designed so as to degenerate to efficient algorithms for solving other simpler optimization problems, such as single-objective uni-global problems, single-objective multi-global problems and multi-objective uni-global problems. The efficacy of the proposed algorithm in solving various problems is demonstrated on a number of test problems. Because of its efficiency in handling different types of problems with equal ease, this algorithm should find increasing use in real-world optimization problems.

Journal ArticleDOI
TL;DR: A particle swarm optimization algorithm embedded with a constraint fitness priority-based ranking method is proposed in this paper to solve nonlinear programming problems, and is shown to be efficient and robust on examples and benchmarks of constrained nonlinear programming problems.
Abstract: Particle swarm optimization (PSO) is an optimization technique based on population, which has similarities to other evolutionary algorithms. It is initialized with a population of random solutions and searches for optima by updating generations. Particle swarm optimization has become a hotspot of evolutionary computation because of its excellent performance and simple implementation. After introducing the basic principle of the PSO, a particle swarm optimization algorithm embedded with a constraint fitness priority-based ranking method is proposed in this paper to solve nonlinear programming problems. By designing the fitness function and constraint-handling method, the proposed PSO can evolve with a dynamic neighborhood and varied inertia weight to find the global optimum. The results from this preliminary investigation are quite promising and show that this algorithm is reliable and applicable to almost all problems of multi-dimensional, nonlinear and complex constrained programming. It is proved to be efficient and robust by testing on examples and benchmarks of constrained nonlinear programming problems.
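The constraint-handling idea above, ranking feasibility ahead of raw fitness, can be sketched with a Deb-style comparison; the paper's exact ranking rule is an assumption here, but this captures the "constraint fitness priority" in spirit.

```python
def ranks_before(a, b):
    """True if solution a outranks solution b (minimization).
    Each solution is a pair (objective_value, total_constraint_violation),
    where violation 0 means feasible. Feasibility takes priority over
    the objective value."""
    (fa, va), (fb, vb) = a, b
    if va == 0 and vb == 0:
        return fa < fb        # both feasible: better objective wins
    if va == 0 or vb == 0:
        return va == 0        # a feasible solution always beats an infeasible one
    return va < vb            # both infeasible: smaller violation wins
```

Sorting the swarm with such a comparator lets personal and global bests be chosen without tuning penalty coefficients.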

Journal Article
Wei Pang, Kangping Wang, Chunguang Zhou, Lan Huang, Xiaohui Ji
TL;DR: A modified particle swarm optimization (PSO) algorithm was proposed to solve a typical combinatorial optimization problem: the traveling salesman problem (TSP), a well-known NP-hard problem.
Abstract: Particle Swarm Optimization has succeeded on many continuous problems, but little research has been done on discrete problems, especially routing problems. In this paper, an improved Particle Swarm Optimization (PSO) algorithm to solve the Traveling Salesman Problem is proposed. A fuzzy matrix is used to represent the position and velocity of the particles in PSO, and the operators in the original PSO formulas are redefined. The algorithm is then tested on several concrete examples from TSPLIB; experiments show that the algorithm can achieve good results.

Journal ArticleDOI
01 Jul 2005
TL;DR: A new learning algorithm for Fuzzy Cognitive Maps, which is based on the application of a swarm intelligence algorithm, namely Particle Swarm Optimization, is introduced; it overcomes some deficiencies of other learning algorithms and improves the efficiency and robustness of Fuzzy Cognitive Maps.
Abstract: This paper introduces a new learning algorithm for Fuzzy Cognitive Maps, which is based on the application of a swarm intelligence algorithm, namely Particle Swarm Optimization. The proposed approach is applied to detect weight matrices that lead the Fuzzy Cognitive Map to desired steady states, thereby refining the initial weight approximation provided by the experts. This is performed through the minimization of a properly defined objective function. This novel method overcomes some deficiencies of other learning algorithms and, thus, improves the efficiency and robustness of Fuzzy Cognitive Maps. The operation of the new method is illustrated on an industrial process control problem, and the obtained simulation results support the claim that it is robust and efficient.

Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper investigates the use of evolutionary multi-objective optimization methods (EMOs) for solving single-objectives optimization problems in dynamic environments and adopts the non-dominated sorting genetic algorithm version 2 (NSGA2).
Abstract: This paper investigates the use of evolutionary multi-objective optimization methods (EMOs) for solving single-objective optimization problems in dynamic environments. A number of authors proposed the use of EMOs for maintaining diversity in a single objective optimization task, where they transform the single objective optimization problem into a multi-objective optimization problem by adding an artificial objective function. We extend this work by looking at the dynamic single objective task and examine a number of different possibilities for the artificial objective function. We adopt the non-dominated sorting genetic algorithm version 2 (NSGA2). The results show that the resultant formulations are promising and competitive to other methods for handling dynamic environments.