
Showing papers on "Multi-swarm optimization published in 2007"


Journal ArticleDOI
TL;DR: Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm that is used for optimizing multivariable functions and the results showed that ABC outperforms the other algorithms.
Abstract: Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds, or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. The Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of a honey bee swarm. In this work, the ABC algorithm is used for optimizing multivariable functions, and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.

6,377 citations
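The entry above describes ABC only at a high level. As an illustration, here is a minimal Python sketch of the canonical ABC loop (employed, onlooker, and scout phases) for minimization; the function names and parameter defaults are our own choices, not the authors' implementation.

```python
import random

def abc(f, dim, lo, hi, n_sources=10, iters=100, limit=20):
    """Minimal Artificial Bee Colony sketch: minimise f over the box [lo, hi]^dim."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    fit = [f(x) for x in X]
    trials = [0] * n_sources

    def try_neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) for one random dimension j
        k = random.choice([j for j in range(n_sources) if j != i])
        j = random.randrange(dim)
        v = X[i][:]
        v[j] += random.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
        v[j] = min(max(v[j], lo), hi)
        fv = f(v)
        if fv < fit[i]:                          # greedy selection
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):               # employed-bee phase
            try_neighbour(i)
        # onlooker-bee phase: sources picked with probability ~ quality
        # (quality is not refreshed inside the phase; good enough for a sketch)
        quality = [1.0 / (1.0 + fi) if fi >= 0 else 1.0 + abs(fi) for fi in fit]
        for i in random.choices(range(n_sources), weights=quality, k=n_sources):
            try_neighbour(i)
        for i in range(n_sources):               # scout-bee phase
            if trials[i] > limit:
                X[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(X[i]), 0

    best = min(range(n_sources), key=lambda i: fit[i])
    return fit[best], X[best]

best_val, best_x = abc(lambda x: sum(v * v for v in x), dim=3, lo=-5.0, hi=5.0)
```

On a simple sphere function this sketch reliably converges; the `limit` parameter controls when an exhausted food source is abandoned to a scout.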


Proceedings ArticleDOI
01 Apr 2007
TL;DR: A standard algorithm is defined here which is designed to be a straightforward extension of the original algorithm while taking into account more recent developments that can be expected to improve performance on standard measures.
Abstract: Particle swarm optimization has become a common heuristic technique in the optimization community, with many researchers exploring the concepts, issues, and applications of the algorithm. In spite of this attention, there has as yet been no standard definition representing exactly what is involved in modern implementations of the technique. A standard is defined here which is designed to be a straightforward extension of the original algorithm while taking into account more recent developments that can be expected to improve performance on standard measures. This standard algorithm is intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community.

1,269 citations
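As a rough illustration of what such a baseline involves (not the standard's exact specification: the inertia and acceleration constants below are widely used defaults, and the global-best topology is only one of several options), a minimal global-best PSO can be sketched as:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.72984, c1=1.49618, c2=1.49618):
    """Minimise f over a box with a global-best PSO (common constriction-style defaults)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                        # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    gbest, G = pbest[g], P[g][:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return gbest, G

best, pos = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-10.0, 10.0))
```

The velocity update is the core of every PSO variant discussed on this page: an inertia term plus stochastic attraction toward the personal and global bests.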


Book ChapterDOI
18 Jun 2007
TL;DR: The ABC algorithm, first proposed for unconstrained optimization, is extended to solve constrained optimization problems and applied to a set of constrained test problems.
Abstract: This paper presents comparison results on the performance of the Artificial Bee Colony (ABC) algorithm for constrained optimization problems. The ABC algorithm was first proposed for unconstrained optimization problems, where it showed superior performance. In this paper, the ABC algorithm is extended to solve constrained optimization problems and applied to a set of constrained test problems.

1,218 citations


Journal ArticleDOI
Qie He1, Ling Wang1
TL;DR: A co-evolutionary particle swarm optimization approach (CPSO) for constrained optimization problems, where PSO is applied with two kinds of swarms for evolutionary exploration and exploitation in spaces of both solutions and penalty factors.

939 citations


Journal ArticleDOI
TL;DR: Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995 as discussed by the authors, and the authors have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm.
Abstract: Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. As researchers have learned about the technique, they have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper comprises a snapshot of particle swarming from the authors' perspective, including variations in the algorithm, current and ongoing research, applications and open problems.

720 citations


Journal ArticleDOI
TL;DR: Improvements, the effects of the different setting parameters, and the functionality of the algorithm are shown in the scope of classical structural optimization problems, and results show the ability of the proposed methodology to find better optimal solutions for structural optimization tasks than other optimization algorithms.

646 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the back-propagation (BP) algorithm, also referred to as the PSO-BP algorithm, is proposed to train the weights of a feedforward neural network (FNN); the hybrid algorithm can make use of not only the strong global searching ability of the PSO algorithm but also the strong local searching ability of the BP algorithm.

591 citations


Journal ArticleDOI
TL;DR: This paper aims to offer a compendious and timely review of the field and the challenges and opportunities offered by this welcome addition to the optimization toolbox.
Abstract: Particle Swarm Optimization (PSO), in its present form, has been in existence for roughly a decade, with formative research in related domains (such as social modelling, computer graphics, simulation and animation of natural swarms or flocks) for some years before that; a relatively short time compared with some of the other natural computing paradigms such as artificial neural networks and evolutionary computation. However, in that short period, PSO has gained widespread appeal amongst researchers and has been shown to offer good performance in a variety of application domains, with potential for hybridisation and specialisation, and demonstration of some interesting emergent behaviour. This paper aims to offer a compendious and timely review of the field and the challenges and opportunities offered by this welcome addition to the optimization toolbox. Part I discusses the location of PSO within the broader domain of natural computing, considers the development of the algorithm, and refinements introduced to prevent swarm stagnation and tackle dynamic environments. Part II considers current research in hybridisation, combinatorial problems, multicriteria and constrained optimization, and a range of indicative application areas.

585 citations


Journal ArticleDOI
TL;DR: This work calls this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that merges the strengths of different optimization algorithms.
Abstract: In the last few decades, evolutionary algorithms have emerged as a revolutionary approach for solving search and optimization problems involving multiple conflicting objectives. Beyond their ability to search intractably large spaces for multiple solutions, these algorithms are able to maintain a diverse population of solutions and exploit similarities of solutions by recombination. However, existing theory and numerical experiments have demonstrated that it is impossible to develop a single algorithm for population evolution that is always efficient for a diverse set of optimization problems. Here we show that significant improvements in the efficiency of evolutionary search can be achieved by running multiple optimization algorithms simultaneously using new concepts of global information sharing and genetically adaptive offspring creation. We call this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that merges the strengths of different optimization algorithms. Benchmark results using a set of well known multiobjective test problems show that AMALGAM approaches a factor of 10 improvement over current optimization algorithms for the more complex, higher dimensional problems. The AMALGAM method provides new opportunities for solving previously intractable optimization problems.

548 citations


Journal ArticleDOI
TL;DR: A new diversity parameter has been used to ensure sufficient diversity amongst the solutions of the non-dominated fronts, while retaining at the same time the convergence to the Pareto-optimal front.

482 citations


Journal ArticleDOI
TL;DR: A hybrid approach involving genetic algorithms (GA) and bacterial foraging algorithms for function optimization problems and results clearly illustrate that the proposed approach is very efficient and could easily be extended for other global optimization problems.

Journal ArticleDOI
Qie He1, Ling Wang1
TL;DR: A hybrid PSO (HPSO) with a feasibility-based rule is proposed to solve constrained optimization problems and it is shown that the rule requires no additional parameters and can guide the swarm to the feasible region quickly.

Journal ArticleDOI
TL;DR: This letter presents a formal stochastic convergence analysis of the standard particle swarm optimization (PSO) algorithm, which involves randomness.

Journal ArticleDOI
TL;DR: A novel particle swarm optimization (PSO)-based algorithm for the traveling salesman problem (TSP) is presented and it has been shown that the size of the solved problems could be increased by using the proposed algorithm.

Journal ArticleDOI
TL;DR: The efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated by the underlying chaotic/stochastic algorithms, and by the location of the global optimum of the nonlinear functions.
Abstract: Chaos optimization algorithms, as a novel method of global optimization, have attracted much attention; they have all been based on the Logistic map. However, we have noticed that the probability density function of the chaotic sequences derived from the Logistic map is a Chebyshev-type one, which may considerably affect the global searching capacity and computational efficiency of chaos optimization algorithms. Considering the statistical properties of the chaotic sequences of the Logistic map and the Kent map, the improved hybrid chaos-BFGS optimization algorithm and the Kent map based hybrid chaos-BFGS algorithm are proposed. Five typical nonlinear functions with multimodal characteristics are tested to compare the performance of five hybrid optimization algorithms: the conventional Logistic map based chaos-BFGS algorithm, the improved Logistic map based chaos-BFGS algorithm, the Kent map based chaos-BFGS algorithm, the Monte Carlo-BFGS algorithm, and the mesh-BFGS algorithm. The computational performance of the five algorithms is compared, and the numerical results make us question the high efficiency of the chaos optimization algorithms claimed in some references. It is concluded that the efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated by the chaotic/stochastic algorithms, and by the location of the global optimum of the nonlinear functions. In addition, it is inappropriate to advocate the high efficiency of global optimization algorithms based only on several numerical examples of low-dimensional functions.
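The two maps at issue can be stated in a few lines. The sketch below (our own illustration, not the paper's code) iterates each map and measures the non-uniformity the abstract refers to: Logistic-map iterates pile up near 0 and 1, while Kent-map iterates are close to uniform on (0, 1).

```python
def logistic_map(x):
    """Logistic map with mu = 4; its invariant density peaks near 0 and 1."""
    return 4.0 * x * (1.0 - x)

def kent_map(x, a=0.7):
    """Kent (skew tent) map; its invariant density is uniform on (0, 1)."""
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def chaotic_sequence(step, x0, n):
    """Iterate a 1-D map n times from seed x0 and collect the orbit."""
    seq, x = [], x0
    for _ in range(n):
        x = step(x)
        seq.append(x)
    return seq

log_seq = chaotic_sequence(logistic_map, 0.31, 10000)
kent_seq = chaotic_sequence(kent_map, 0.31, 10000)

# Fraction of iterates in the middle band (0.25, 0.75): about 1/3 for the
# Logistic map (Chebyshev-type density) versus about 1/2 for the Kent map.
log_mid = sum(0.25 < v < 0.75 for v in log_seq) / len(log_seq)
kent_mid = sum(0.25 < v < 0.75 for v in kent_seq) / len(kent_seq)
```

The gap between `log_mid` and `kent_mid` is the statistical property the abstract argues drives the efficiency difference between the Logistic-based and Kent-based chaos searches.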

Proceedings ArticleDOI
27 Jun 2007
TL;DR: This algorithm is shown to be a better interpretation of continuous PSO into discrete PSO than the older versions, and a number of benchmark optimization problems are solved using this concept with quite satisfactory results.
Abstract: Particle swarm optimization (PSO), as a novel computational intelligence technique, has succeeded in many continuous problems, but in the discrete or binary version there are still some difficulties. In this paper a novel binary PSO is proposed. This algorithm proposes a new definition for the velocity vector of binary PSO. It will be shown that this algorithm is a better interpretation of continuous PSO into discrete PSO than the older versions. A number of benchmark optimization problems are also solved using this concept, and quite satisfactory results are obtained.
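The paper's new velocity definition is not reproduced here. For context, this is a minimal sketch of the classic sigmoid-based binary PSO (Kennedy and Eberhart's discrete variant) that it aims to improve on; parameter values and the one-max test problem are illustrative choices of ours.

```python
import math
import random

def binary_pso(f, n_bits, n_particles=20, iters=150,
               w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    """Classic binary PSO: sigmoid(velocity) gives the probability a bit is 1."""
    X = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    P = [x[:] for x in X]
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    gbest, G = pbest[g], P[g][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # velocity clamp
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))   # sigmoid transfer
                X[i][d] = 1 if random.random() < prob else 0
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return gbest, G

# "One-max" as a minimisation problem: count zero bits; optimum is all ones.
best, bits = binary_pso(lambda x: len(x) - sum(x), n_bits=16)
```

Note how the velocity no longer moves a position directly, as in continuous PSO, but is squashed into a bit-flip probability; that indirection is exactly the interpretation issue the paper's new velocity definition targets.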

Journal ArticleDOI
TL;DR: This paper develops, analyzes, and tests a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives, and shows that the resulting algorithm is highly competitive with other global optimization methods also based on function values.
Abstract: In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.
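The pattern-search backbone the abstract describes can be sketched compactly. Below is a plain coordinate search with shrinking steps (our illustration only; the paper additionally runs a particle swarm scheme in the optional search step, which is omitted here):

```python
def coordinate_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Derivative-free coordinate (pattern) search with a shrinking step size."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            # Poll both directions along coordinate d.
            for s in (step, -step):
                y = x[:]
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= shrink   # no poll direction improved: refine the mesh

    return fx, x

fx, x = coordinate_search(lambda p: sum(v * v for v in p), [2.3, -1.7])
```

The shrink-on-failure rule is what yields the convergence-to-stationary-points guarantee mentioned in the abstract; the swarm-based search step only adds global exploration on top of it.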

Journal ArticleDOI
TL;DR: The hybrid NM-PSO algorithm based on the Nelder–Mead (NM) simplex search method and particle swarm optimization (PSO) for unconstrained optimization is proposed to demonstrate how the standard particle swarm optimizers can be improved by incorporating a hybridization strategy.

Journal ArticleDOI
TL;DR: A simple pheromone-guided mechanism to improve the performance of PSO method for optimization of multimodal continuous functions is explored and numerical results comparisons with different metaheuristics demonstrate the effectiveness and efficiency of the proposed PSACO method.

Proceedings ArticleDOI
24 Jun 2007
TL;DR: In this paper, the optimal location and size of capacitors on radial distribution systems are determined by loss sensitivity factors and particle swarm optimization, respectively, to improve the voltage profile and reduce the active power loss.
Abstract: This paper presents a novel approach that determines the optimal location and size of capacitors on radial distribution systems to improve the voltage profile and reduce the active power loss. Capacitor placement and sizing are done by loss sensitivity factors and particle swarm optimization, respectively. The concept of loss sensitivity factors can be considered a new contribution in the area of distribution systems. Loss sensitivity factors offer important information about the sequence of potential nodes for capacitor placement. These factors are determined using a single base-case load flow study. Particle swarm optimization is well applied and found to be very effective in radial distribution systems. The proposed method is tested on 10-, 15-, 34-, 69-, and 85-bus distribution systems.

Journal ArticleDOI
TL;DR: The honey-bee mating optimization (HBMO) algorithm is presented and tested with a nonlinear, continuous constrained problem with continuous decision and state variables to demonstrate the efficiency of the algorithm in handling the single reservoir operation optimization problems.
Abstract: In recent years, evolutionary and meta-heuristic algorithms have been extensively used as search and optimization tools in various problem domains, including science, commerce, and engineering. Ease of use, broad applicability, and global perspective may be considered as the primary reason for their success. The honey-bee mating process has been considered as a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of real honey-bee mating. In this paper, the honey-bee mating optimization (HBMO) algorithm is presented and tested with a nonlinear, continuous constrained problem with continuous decision and state variables to demonstrate the efficiency of the algorithm in handling the single reservoir operation optimization problems. It is shown that the performance of the model is quite comparable with the results of the well-developed traditional linear programming (LP) solvers such as LINGO 8.0. Results obtained are quite promising and compare well with the final results of the other approach.

Journal ArticleDOI
TL;DR: In this paper, an improved particle swarm optimization (IPSO) algorithm is proposed, where a population of points sampled randomly from the feasible space is partitioned into several sub-swarms, each of which is made to evolve based on the PSO algorithm.

01 Jan 2007
TL;DR: A new optimization algorithm, namely Cat Swarm Optimization (CSO), is proposed; CSO is inspired by observing the behavior of cats and is composed of two sub-models that simulate that behavior.
Abstract: Optimization problems are very important in many fields. To the present, many optimization algorithms based on computational intelligence have been proposed, such as the Genetic Algorithm, Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). In this paper, a new optimization algorithm, namely, Cat Swarm Optimization (CSO) is proposed. CSO is generated by observing the behavior of cats, and composed of two sub-models by simulating the behavior of cats. According to the experiments, the results reveal that CSO is superior to PSO.

Proceedings ArticleDOI
01 Sep 2007
TL;DR: An Opposition-based PSO (OPSO) to accelerate the convergence of PSO and avoid premature convergence is presented, which employs opposition-based learning for each particle and applies a dynamic Cauchy mutation on the best particle.
Abstract: Particle swarm optimization (PSO) has shown its fast search speed in many complicated optimization and search problems. However, PSO could often easily fall into local optima. This paper presents an Opposition-based PSO (OPSO) to accelerate the convergence of PSO and avoid premature convergence. The proposed method employs opposition-based learning for each particle and applies a dynamic Cauchy mutation on the best particle. Experimental results on many well-known benchmark optimization problems have shown that OPSO could successfully deal with those difficult multimodal functions while maintaining fast search speed on those simple unimodal functions in the function optimization.
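The opposition-based learning idea is simple to state: the opposite of a point x in the box [lo, hi] is lo + hi − x, and evaluating both a candidate and its opposite roughly doubles the chance of starting near an optimum. A minimal sketch of the general OBL idea follows (our illustration; it omits the paper's full OPSO loop and dynamic Cauchy mutation):

```python
import random

def opposite(x, lo, hi):
    """Opposition-based learning: reflect a point through the box midpoint."""
    return [lo + hi - xi for xi in x]

def obl_init(f, dim, lo, hi, n):
    """Sample n points, add their opposites, keep the n fittest (minimisation)."""
    pts = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pts += [opposite(p, lo, hi) for p in pts]
    pts.sort(key=f)          # best (lowest) objective values first
    return pts[:n]

pop = obl_init(lambda x: sum(v * v for v in x), dim=4, lo=-10.0, hi=10.0, n=15)
```

In OPSO the same reflection is applied to particles during the run, not just at initialisation, which is what accelerates convergence relative to plain PSO.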

Journal ArticleDOI
TL;DR: Two CPSO methods based on the logistic equation and the Tent equation are presented to solve economic dispatch (ED) problems with generator constraints and applied in two power system cases.

Journal ArticleDOI
TL;DR: In this study the standard particle swarm optimization (PSO) algorithm is further improved by incorporating a new strategic mechanism called elitist mutation, and EMPSO is seen to yield better-quality solutions with fewer function evaluations.
Abstract: This paper presents an efficient and reliable swarm intelligence-based approach, namely the elitist-mutated particle swarm optimization (EMPSO) technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization (PSO) algorithm is further improved by incorporating a new strategic mechanism called elitist mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system used by earlier researchers. EMPSO showed promising results when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle the multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO consistently performs better than the standard PSO and genetic algorithm techniques. EMPSO also yields better-quality solutions with fewer function evaluations.

Journal ArticleDOI
TL;DR: This work chooses the training of feed-forward neural networks for pattern classification as a test case for a first ACO variant for continuous optimization, and proposes hybrid algorithm variants that incorporate short runs of classical gradient techniques such as backpropagation.
Abstract: Ant colony optimization (ACO) is an optimization technique that was inspired by the foraging behaviour of real ant colonies. Originally, the method was introduced for the application to discrete optimization problems. Recently we proposed a first ACO variant for continuous optimization. In this work we choose the training of feed-forward neural networks for pattern classification as a test case for this algorithm. In addition, we propose hybrid algorithm variants that incorporate short runs of classical gradient techniques such as backpropagation. For evaluating our algorithms we apply them to classification problems from the medical field, and compare the results to some basic algorithms from the literature. The results show, first, that the best of our algorithms are comparable to gradient-based algorithms for neural network training, and second, that our algorithms compare favorably with a basic genetic algorithm.

Journal ArticleDOI
01 Feb 2007
TL;DR: A new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning.
Abstract: In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution.

Journal ArticleDOI
TL;DR: This paper proposes and compares different one-dimensional maps as chaotic search patterns in constrained nonlinear optimization problems, applies them in a specific optimization algorithm (the weighted gradient direction based chaos optimization algorithm), and compares them based on numerical simulation results.

Proceedings ArticleDOI
01 Apr 2007
TL;DR: A multi-robot search algorithm inspired by particle swarm optimization is presented; the particle swarm optimization algorithm is in turn modified to mimic the multi-robot search process, allowing the effects of changing aspects and parameters of the system to be modeled at an abstracted level.
Abstract: Within the field of multi-robot systems, multi-robot search is one area which is currently receiving a lot of research attention. One major challenge within this area is to design effective algorithms that allow a team of robots to work together to find their targets. Techniques have been adopted for multi-robot search from the particle swarm optimization algorithm, which uses a virtual multi-agent search to find optima in a multi-dimensional function space. We present here a multi-robot search algorithm inspired by particle swarm optimization. Additionally, we exploit this inspiration by modifying the particle swarm optimization algorithm to mimic the multi-robot search process, thereby allowing us to model at an abstracted level the effects of changing aspects and parameters of the system, such as the number of robots and the communication range.