
Showing papers on "Multi-swarm optimization published in 2012"


Journal ArticleDOI
TL;DR: An efficient optimization method called Teaching–Learning-Based Optimization (TLBO) is proposed in this paper for finding global solutions to large-scale non-linear optimization problems.

1,359 citations


Journal ArticleDOI
TL;DR: A new nature‐inspired metaheuristic optimization algorithm, called the bat algorithm (BA) and based on the echolocation behavior of bats, is introduced; the optimal solutions obtained are better than the best solutions obtained by the existing methods.
Abstract: Nature‐inspired algorithms are among the most powerful algorithms for optimization. The purpose of this paper is to introduce a new nature‐inspired metaheuristic optimization algorithm, called the bat algorithm (BA), for solving engineering optimization tasks. The proposed BA is based on the echolocation behavior of bats. After a detailed formulation and explanation of its implementation, BA is verified on eight well‐known nonlinear engineering optimization problems reported in the specialized literature, and a comparison is made between the proposed algorithm and other existing algorithms. The optimal solutions obtained by the proposed algorithm are better than the best solutions obtained by the existing methods. The unique search features used in BA are analyzed, and their implications for future research are also discussed in detail.

1,316 citations
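The core of BA is a frequency-tuned velocity update plus a local random walk around the current best solution. Below is a minimal, hedged sketch, not the authors' code: the frequency range, loudness, pulse rate, walk step, bounds, and pulling the velocity toward the best bat (the published formulation writes the velocity term as (x_i − x*)·f_i) are simplifications chosen for a toy run on the sphere function.

```python
import random

def bat_algorithm(obj, dim, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  loudness=0.5, pulse_rate=0.5, bounds=(-5.0, 5.0)):
    """Minimal bat-algorithm sketch: frequency tuning + local walk."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    best = min(xs, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * random.random()   # frequency tuning
            for d in range(dim):
                vs[i][d] += (best[d] - xs[i][d]) * freq      # pull toward best
            cand = [min(max(xs[i][d] + vs[i][d], lo), hi) for d in range(dim)]
            if random.random() > pulse_rate:                 # local walk near best
                cand = [min(max(best[d] + 0.01 * random.gauss(0, 1), lo), hi)
                        for d in range(dim)]
            # accept an improving candidate only with probability `loudness`
            if obj(cand) <= obj(xs[i]) and random.random() < loudness:
                xs[i] = cand
            if obj(xs[i]) < obj(best):
                best = xs[i][:]
    return best

# Toy usage: minimize the 2-D sphere function
best = bat_algorithm(lambda x: sum(v * v for v in x), dim=2)
```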


Journal ArticleDOI
TL;DR: A novel hybrid Genetic Algorithm (GA) / Particle Swarm Optimization (PSO) for solving the problem of optimal location and sizing of DG on distribution systems is presented, to minimize network power loss and achieve better voltage regulation in radial distribution systems.

920 citations


Journal ArticleDOI
TL;DR: The experimental results and analysis suggest that CCPSO2 is a highly competitive optimization algorithm for solving large-scale and complex multimodal optimization problems.
Abstract: This paper presents a new cooperative coevolving particle swarm optimization (CCPSO) algorithm in an attempt to address the issue of scaling up particle swarm optimization (PSO) algorithms in solving large-scale optimization problems (up to 2000 real-valued variables). The proposed CCPSO2 builds on the success of an early CCPSO that employs an effective variable grouping technique, random grouping. CCPSO2 adopts a new PSO position update rule that relies on Cauchy and Gaussian distributions to sample new points in the search space, and a scheme to dynamically determine the coevolving subcomponent sizes of the variables. On high-dimensional problems (ranging from 100 to 2000 variables), the performance of CCPSO2 compared favorably against a state-of-the-art evolutionary algorithm, sep-CMA-ES, two existing PSO algorithms, and a cooperative coevolving differential evolution algorithm. In particular, CCPSO2 performed significantly better than sep-CMA-ES and the two existing PSO algorithms on more complex multimodal problems (which more closely resemble real-world problems), though not as well as the existing algorithms on unimodal functions. Our experimental results and analysis suggest that CCPSO2 is a highly competitive optimization algorithm for solving large-scale and complex multimodal optimization problems.

649 citations
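The Cauchy/Gaussian position-update rule can be sketched for a single dimension. This is a hedged illustration only: the mixing probability `p` and using |pbest − lbest| as the sampling scale for both distributions follow the CCPSO2 description loosely; the full algorithm also involves random variable grouping and dynamically sized coevolving subcomponents, omitted here.

```python
import random
import math

def ccpso2_position_update(pbest_i, lbest_i, p=0.5):
    """Sample a new position for one dimension, CCPSO2-style:
    heavy-tailed Cauchy around the personal best for exploration,
    or Gaussian around the neighbourhood best for exploitation."""
    scale = abs(pbest_i - lbest_i)
    if random.random() < p:
        # Cauchy sample via inverse CDF (heavy tails -> long jumps)
        return pbest_i + scale * math.tan(math.pi * (random.random() - 0.5))
    # Gaussian sample (tight moves around the neighbourhood best)
    return lbest_i + scale * random.gauss(0.0, 1.0)

samples = [ccpso2_position_update(1.0, 2.0) for _ in range(1000)]
```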


Journal ArticleDOI
TL;DR: The pyOpt framework as discussed by the authors is an object-oriented framework for formulating and solving nonlinear constrained optimization problems in an efficient, reusable and portable manner, which allows for easy integration of optimization software programmed in Fortran, C, C++, and other languages.
Abstract: We present pyOpt, an object-oriented framework for formulating and solving nonlinear constrained optimization problems in an efficient, reusable and portable manner. The framework uses object-oriented concepts, such as class inheritance and operator overloading, to maintain a distinct separation between the problem formulation and the optimization approach used to solve the problem. This creates a common interface in a flexible environment where both practitioners and developers alike can solve their optimization problems or develop and benchmark their own optimization algorithms. The framework is developed in the Python programming language, which allows for easy integration of optimization software programmed in Fortran, C, C++, and other languages. A variety of optimization algorithms are integrated in pyOpt and are accessible through the common interface. We solve a number of problems of increasing complexity to demonstrate how a given problem is formulated using this framework, and how the framework can be used to benchmark the various optimization algorithms.

434 citations


Journal ArticleDOI
01 Jun 2012
TL;DR: A novel algorithm, called the self-learning particle swarm optimizer (SLPSO), is proposed for global optimization problems; its adaptive learning framework enables a particle to choose the optimal strategy according to its own local fitness landscape.
Abstract: Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. The experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has a superior performance in comparison with several other peer algorithms.

348 citations
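The individual-level strategy adaptation in SLPSO can be illustrated with a generic roulette-wheel selection over per-strategy success rates. The strategy names and the reward floor below are hypothetical placeholders; SLPSO's actual probability-update rules differ in detail.

```python
import random

def choose_strategy(success, reward_floor=0.01):
    """Roulette-wheel selection over per-strategy success rates.
    `success` maps strategy name -> recent success ratio in [0, 1].
    A small floor keeps every strategy selectable."""
    weights = {k: v + reward_floor for k, v in success.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for name, w in weights.items():
        acc += w
        if r <= acc:
            return name
    return name  # guard against floating-point edge cases

# Hypothetical success rates for four strategies
success = {"exploitation": 0.6, "jumping-out": 0.1,
           "exploration": 0.2, "convergence": 0.1}
picks = [choose_strategy(success) for _ in range(2000)]
```

Strategies that have recently worked for a particle are chosen more often, which is the adaptive-selection idea the paper builds on.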


Journal ArticleDOI
Jun Sun1, Wei Fang1, Xiaojun Wu1, Vasile Palade2, Wenbo Xu1 
TL;DR: This paper presents a comprehensive analysis of the QPSO algorithm, and performs empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient in order to obtain generally good algorithmic performance in real world applications.
Abstract: Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods.

289 citations
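The role of the CE coefficient is easiest to see in the QPSO position update itself. A one-dimensional sketch follows; α = 0.75 and the attractor construction follow the commonly published QPSO formulation, while the paper's contribution is the bound on α (values around e^γ ≈ 1.781 are often cited in the QPSO literature) within which particle positions remain convergent or bounded.

```python
import random
import math

def qpso_step(positions, pbests, gbest, alpha=0.75):
    """One QPSO iteration (1-D sketch):
    x_new = p ± alpha * |mbest - x| * ln(1/u),
    where p is a random convex combination of personal and global best
    and mbest is the mean of all personal bests.
    alpha is the contraction-expansion (CE) coefficient."""
    mbest = sum(pbests) / len(pbests)            # mean best position
    new_positions = []
    for x, pb in zip(positions, pbests):
        phi = random.random()
        p = phi * pb + (1.0 - phi) * gbest       # local attractor
        u = random.random() or 1e-12             # avoid log(0)
        step = alpha * abs(mbest - x) * math.log(1.0 / u)
        new_positions.append(p + step if random.random() < 0.5 else p - step)
    return new_positions

positions = [0.0, 1.0, 2.0]
pbests = [0.1, 0.9, 2.1]
new_positions = qpso_step(positions, pbests, gbest=0.5)
```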


Journal ArticleDOI
TL;DR: An efficient optimization algorithm called teaching–learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems and the results show the better performance of the proposed algorithm.
Abstract: An efficient optimization algorithm called teaching–learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The algorithm is tested on 25 different unconstrained benchmark functions and 35 constrained benchmark functions with different characteristics. For the constrained benchmark functions, TLBO is tested with different constraint handling techniques such as superiority of feasible solutions, self-adaptive penalty, ϵ-constraint, stochastic ranking and ensemble of constraints. The performance of the TLBO algorithm is compared with that of other optimization algorithms and the results show the better performance of the proposed algorithm.

267 citations
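One TLBO iteration (teacher phase then learner phase, each with greedy replacement) can be sketched as follows. This is an illustrative reconstruction from the usual TLBO description, not the article's code; the population size, bounds, and sphere objective are arbitrary choices for a toy run.

```python
import random

def tlbo_iteration(pop, obj):
    """One TLBO iteration. Teacher phase: move each learner toward the
    best solution relative to the class mean. Learner phase: learn
    pairwise from a random peer. Keep a move only if it improves."""
    dim = len(pop[0])
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    teacher = min(pop, key=obj)
    new_pop = []
    for x in pop:
        tf = random.choice((1, 2))                       # teaching factor
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        x = cand if obj(cand) < obj(x) else x            # teacher phase
        j = random.choice(pop)                           # random peer
        if obj(x) < obj(j):                              # move away from worse peer
            cand = [x[d] + random.random() * (x[d] - j[d]) for d in range(dim)]
        else:                                            # move toward better peer
            cand = [x[d] + random.random() * (j[d] - x[d]) for d in range(dim)]
        new_pop.append(cand if obj(cand) < obj(x) else x)
    return new_pop

sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(50):
    pop = tlbo_iteration(pop, sphere)
```

Note that TLBO has no algorithm-specific parameters beyond population size and iteration count, which is one of its selling points.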


Journal ArticleDOI
TL;DR: A new bare-bones multi-objective particle swarm optimization algorithm which has three distinctive features: a particle updating strategy which does not require tuning up control parameters; a mutation operator with action range varying over time to expand the search capability; and an approach based on particle diversity to update the global particle leaders.

264 citations


Journal ArticleDOI
TL;DR: The application of Ant Colony Optimization and Particle Swarm Optimization on the optimization of the membership functions' parameters of a fuzzy logic controller in order to find the optimal intelligent controller for an autonomous wheeled mobile robot is described.

258 citations


Journal ArticleDOI
TL;DR: A new method based on the firefly algorithm to construct the codebook of vector quantization, called the FF-LBG algorithm, which shows that the reconstructed images achieve higher quality than those generated from the LBG, PSO and QPSO algorithms, though with no significant superiority over the HBMO algorithm.

Abstract: Vector quantization (VQ) is a powerful technique in digital image compression applications. Traditional widely used methods such as the Linde-Buzo-Gray (LBG) algorithm always generate a locally optimal codebook. Recently, particle swarm optimization (PSO) was adapted to obtain a near-globally optimal codebook for vector quantization. An alternative method, called quantum particle swarm optimization (QPSO), had been developed to improve the results of the original PSO algorithm. The honey bee mating optimization (HBMO) was also used to develop a vector quantization algorithm. In this paper, we propose a new method based on the firefly algorithm to construct the codebook of vector quantization. The proposed method uses the LBG method to initialize the firefly (FF) algorithm, and is called the FF-LBG algorithm. The FF-LBG algorithm is compared with four other methods: LBG, particle swarm optimization, quantum particle swarm optimization and honey bee mating optimization. Experimental results show that the proposed FF-LBG algorithm is faster than the other four methods. Furthermore, the reconstructed images are of higher quality than those generated by LBG, PSO and QPSO, though with no significant superiority over the HBMO algorithm.
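The core firefly move that FF-LBG applies to codebook vectors can be sketched in isolation. The β0, γ, and α values below are hypothetical; in FF-LBG the vectors would be LBG-initialized codewords rather than arbitrary points.

```python
import random
import math

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1):
    """Move firefly i toward a brighter firefly j.
    Attractiveness beta decays with squared distance;
    alpha adds a small uniform random walk per dimension."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)          # attractiveness
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

moved = firefly_move([0.0, 0.0], [1.0, 1.0])
```

Because attractiveness falls off with distance, nearby bright fireflies dominate the move, which lets the swarm split across multiple optima.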

BookDOI
01 Jan 2012
TL;DR: This paper presents a work inspired by the behavior of Pachycondyla apicalis ants for the clustering problem, which combines API with the ability of ants to sort and cluster, and introduces new concepts to ant-based models.
Abstract: This paper presents a work inspired by the behavior of Pachycondyla apicalis ants for the clustering problem. These ants have a simple but efficient prey search strategy: when they capture their prey, they return straight to their nest, drop off the prey and systematically return to their original position. This behavior has already been applied to optimization, as the API meta-heuristic (API is short for apicalis). Here, we combine API with the ability of ants to sort and cluster. We provide a comparison against the Ant Clustering Algorithm and K-Means using Machine Learning repository datasets. API introduces new concepts to ant-based models and gives us promising results.

Journal ArticleDOI
01 Mar 2012
TL;DR: Experimental results demonstrated good performance of the θ-QPSO in planning a safe and flyable path for UAV when compared with the GA, DE, and three other PSO-based algorithms.
Abstract: A new variant of particle swarm optimization (PSO), named phase angle-encoded and quantum-behaved particle swarm optimization (θ-QPSO), is proposed. Six versions of θ-QPSO using different mappings are presented and compared through their application to solving continuous function optimization problems. Several representative benchmark functions are selected as testing functions. The real-valued genetic algorithm (GA), differential evolution (DE), standard particle swarm optimization (PSO), phase angle-encoded particle swarm optimization (θ-PSO), quantum-behaved particle swarm optimization (QPSO), and θ-QPSO are tested and compared with each other on the selected unimodal and multimodal functions. To corroborate the results obtained on the benchmark functions, a new route planner for unmanned aerial vehicles (UAVs) is designed to generate a safe and flyable path in the presence of different threat environments based on the θ-QPSO algorithm. The PSO, θ-PSO, and QPSO are presented and compared with the θ-QPSO algorithm as well as GA and DE through the UAV path planning application. Each particle in the swarm represents a potential path in the search space. To prune the search space, constraints are incorporated into the pre-specified cost function, which is used to evaluate whether a particle is good or not. Experimental results demonstrated the good performance of the θ-QPSO in planning a safe and flyable path for UAVs when compared with the GA, DE, and three other PSO-based algorithms.

Journal ArticleDOI
TL;DR: A novel combined genetic algorithm (GA)/particle swarm optimization (PSO) is presented for optimal location and sizing of DG on distribution systems to minimize network power losses, to obtain better voltage regulation, and to improve the voltage stability within the framework of system operation and security constraints in radial distribution systems.
Abstract: Distributed generation (DG) sources are becoming more prominent in distribution systems due to the incremental demands for electrical energy. Locations and capacities of DG sources have profoundly impacted on the system losses in a distribution network. In this paper, a novel combined genetic algorithm (GA)/particle swarm optimization (PSO) is presented for optimal location and sizing of DG on distribution systems. The objective is to minimize network power losses, to obtain better voltage regulation, and to improve the voltage stability within the framework of system operation and security constraints in radial distribution systems. This multi-objective optimization problem is transformed to single objective problem by employing fuzzy optimal theory. A detailed performance analysis is carried out on 33 and 69 bus systems to demonstrate the effectiveness of the proposed methodology.

Journal ArticleDOI
TL;DR: This paper focuses on three very similar evolutionary algorithms: genetic algorithm, particle swarm optimization (PSO), and differential evolution (DE); while GA is more suitable for discrete optimization, PSO and DE are more natural for continuous optimization.
Abstract: This paper focuses on three very similar evolutionary algorithms: genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE). While GA is more suitable for discrete optimization, PSO and DE are more natural for continuous optimization. The paper first gives a brief introduction to the three EA techniques to highlight the common computational procedures. The general observations on the similarities and differences among the three algorithms based on computational steps are discussed, contrasting the basic performances of algorithms. Summary of relevant literatures is given on job shop, flexible job shop, vehicle routing, location-allocation, and multimode resource constrained project scheduling problems.

Journal ArticleDOI
TL;DR: The analysis results reveal that the proposed MOL-based PID controller for the AVR system performs better than the other similar recently reported population-based optimization algorithms.
Abstract: This paper presents the design and performance analysis of a Proportional Integral Derivative (PID) controller for an Automatic Voltage Regulator (AVR) system using the recently proposed simplified Particle Swarm Optimization (PSO), also called the Many Optimizing Liaisons (MOL) algorithm. MOL simplifies the original PSO by randomly choosing the particle to update, instead of iterating over the entire swarm, thus eliminating the particle's best-known position and making it easier to tune the behavioral parameters. The design problem of the proposed PID controller is formulated as an optimization problem and the MOL algorithm is employed to search for the optimal controller parameters. For the performance analysis, different analysis methods such as transient response analysis, root locus analysis and Bode analysis are performed. The superiority of the proposed approach is shown by comparing the results with some recently published modern heuristic optimization algorithms such as the Artificial Bee Colony (ABC) algorithm, the Particle Swarm Optimization (PSO) algorithm and the Differential Evolution (DE) algorithm. Further, robustness analysis of the AVR system tuned by the MOL algorithm is performed by varying the time constants of the amplifier, exciter, generator and sensor in the range of −50% to +50% in steps of 25%. The analysis results reveal that the proposed MOL-based PID controller for the AVR system performs better than the other similar recently reported population-based optimization algorithms.
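The MOL simplification described above, no personal-best term and one randomly chosen particle updated per step, can be sketched as follows. The inertia weight, attraction coefficient, bounds, and evaluation budget are assumed values for a toy run on the sphere function, not the paper's tuned settings.

```python
import random

def mol_minimize(obj, dim, swarm_size=30, evals=3000,
                 w=0.3, c=2.0, bounds=(-5.0, 5.0)):
    """Many Optimizing Liaisons (MOL) sketch: simplified PSO that drops
    the personal-best term and updates one random particle per step,
    pulled only toward the swarm's best-known position."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vs = [[0.0] * dim for _ in range(swarm_size)]
    gbest = min(xs, key=obj)[:]
    for _ in range(evals):
        i = random.randrange(swarm_size)         # update one random particle
        r = random.random()
        for d in range(dim):
            vs[i][d] = w * vs[i][d] + c * r * (gbest[d] - xs[i][d])
            xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
        if obj(xs[i]) < obj(gbest):
            gbest = xs[i][:]
    return gbest

best = mol_minimize(lambda x: sum(v * v for v in x), dim=2)
```

With only the inertia weight and one attraction coefficient left, the tuning burden is much smaller than in full PSO, which is the point of the simplification.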

Journal ArticleDOI
TL;DR: According to the obtained results, the relative estimation errors of the HAPE model are the lowest among them, and the quadratic form (HAPEQ) provides better-fitting solutions under fluctuations of the socio-economic indicators.

Journal ArticleDOI
TL;DR: A comparison of state-of-the-art optimization techniques to solve multi-pass turning optimization problems is presented and a hybrid technique based on differential evolution algorithm is introduced for solving manufacturing optimization problems.

Journal ArticleDOI
TL;DR: In this paper, two types of meta-heuristics called Particle Swarm Optimization (PSO) and Firefly algorithms were devised to find optimal solutions of noisy non-linear continuous mathematical models.
Abstract: There are various noisy non-linear mathematical optimization problems that can be effectively solved by metaheuristic algorithms. These are iterative search processes that efficiently perform exploration and exploitation in the solution space, aiming to find near-optimal solutions. Considering the solution space in a specified region, some models contain a global optimum and multiple local optima. In this context, two types of meta-heuristics, called Particle Swarm Optimization (PSO) and the Firefly algorithm, were devised to find optimal solutions of noisy non-linear continuous mathematical models. The Firefly Algorithm is one of the recent evolutionary computing models, inspired by the behavior of fireflies in nature. PSO is a population-based optimization technique inspired by the social behavior of bird flocking or fish schooling. A series of computational experiments using each algorithm was conducted. The results of these experiments were analyzed and compared to the best solutions found so far, on the basis of the mean execution time to converge to the optimum. The Firefly algorithm seems to perform better for higher levels of noise.

Journal ArticleDOI
TL;DR: Compared to the basic Binary Particle Swarm Optimization (BPSO), this improved algorithm introduces a new probability function which maintains diversity in the swarm and makes it more explorative, effective and efficient in solving knapsack problems (KPs).

Journal ArticleDOI
TL;DR: This paper presents a variant of single-objective PSO called the Dynamic Neighborhood Learning Particle Swarm Optimizer (DNLPSO), which uses a learning strategy whereby other particles' historical best information is used to update a particle's velocity, as in CLPSO.

Journal ArticleDOI
TL;DR: A metaheuristic algorithm inspired by evolutionary computation, swarm intelligence concepts and the fundamentals of the echolocation of micro bats is presented, showing the feasibility of this newly introduced technique for highly nonlinear problems in electromagnetics.
Abstract: This paper presents a metaheuristic algorithm inspired by evolutionary computation and swarm intelligence concepts, and by the fundamentals of the echolocation of micro bats. The aim is to solve mono- and multiobjective optimization problems related to the brushless DC wheel motor, which has 5 design parameters and 6 constraints in the mono-objective formulation, and 2 objectives, 5 design parameters, and 5 constraints in the multiobjective version. Furthermore, results are compared with other optimization approaches proposed in the recent literature, showing the feasibility of this newly introduced technique for highly nonlinear problems in electromagnetics.

Journal ArticleDOI
01 Jun 2012-Energy
TL;DR: In this paper, a data-driven approach was used to optimize the energy consumption of a heating, ventilating, and air conditioning (HVAC) system by using a dynamic neural network.

Journal ArticleDOI
TL;DR: A new multiobjective evolutionary algorithm (MOEA) is proposed by extending the existing cat swarm optimization (CSO) and finds the nondominated solutions along the search process using the concept of Pareto dominance and uses an external archive for storing them.
Abstract: Highlights: (1) A new multiobjective cat swarm optimization (MOCSO) algorithm is proposed. (2) MOCSO is more efficient than MOPSO and NSGA-II. (3) The algorithm is tested using benchmark functions. (4) Sensitivity analysis of different parameters of the MOCSO algorithm is carried out. This paper proposes a new multiobjective evolutionary algorithm (MOEA) by extending the existing cat swarm optimization (CSO). It finds the nondominated solutions along the search process using the concept of Pareto dominance and uses an external archive for storing them. The performance of our proposed approach is demonstrated using standard test functions. A quantitative assessment of the proposed approach and a sensitivity test of different parameters are carried out using several performance metrics. The simulation results reveal that the proposed approach can be a better candidate for solving multiobjective problems (MOPs).

Journal ArticleDOI
01 Sep 2012
TL;DR: A new hybrid intrusion detection system using an intelligent dynamic swarm-based rough set (IDS-RS) for feature selection and simplified swarm optimization (SSO) for intrusion data classification, with a new weighted local search (WLS) strategy incorporated in SSO.
Abstract: Network intrusion detection techniques are important to protect our systems and networks from malicious behaviors. However, traditional network intrusion prevention measures such as firewalls, user authentication and data encryption have failed to completely protect networks and systems from increasingly sophisticated attacks and malware. In this paper, we propose a new hybrid intrusion detection system using an intelligent dynamic swarm-based rough set (IDS-RS) for feature selection and simplified swarm optimization (SSO) for intrusion data classification. IDS-RS is proposed to select the most relevant features that can represent the pattern of the network traffic. In order to improve the performance of the SSO classifier, a new weighted local search (WLS) strategy incorporated in SSO is proposed. The purpose of this new local search strategy is to discover better solutions in the neighborhood of the current solution produced by SSO. The performance of the proposed hybrid system on the KDDCup 99 dataset has been evaluated by comparing it with standard particle swarm optimization (PSO) and two other popular benchmark classifiers. The testing results showed that the proposed hybrid system achieves a higher classification accuracy than the others, at 93.3%, and that it can be one of the competitive classifiers for intrusion detection systems.

Journal ArticleDOI
TL;DR: A novel local search technique is integrated with some existing PSO based multimodal optimization algorithms to enhance their local search ability and the experimental results suggest that the proposed technique not only increases the probability of finding both global and local optima but also reduces the average number of function evaluations.

Journal ArticleDOI
TL;DR: A Fuzzy Self Adaptive Particle Swarm Optimization (FSAPSO) algorithm is proposed and implemented to dispatch the generations in a typical micro-grid considering economy and emission as competitive objectives.
Abstract: Nowadays, it has become a chief concern for many modern power grids and energy management systems to derive an optimal operational plan with regard to energy cost minimization, pollutant emission reduction and better utilization of renewable energy resources such as wind and solar. Considering all the above objectives in a unified problem provides the desired optimal solution. In this paper, a Fuzzy Self Adaptive Particle Swarm Optimization (FSAPSO) algorithm is proposed and implemented to dispatch the generation in a typical micro-grid, considering economy and emission as competitive objectives. The problem is formulated as a nonlinear constrained multi-objective optimization problem with different equality and inequality constraints, to minimize the total operating cost of the micro-grid while considering environmental issues at the same time. The superior performance of the proposed algorithm is shown in comparison with other evolutionary optimization methods such as conventional PSO and the genetic algorithm (GA), and its efficiency is verified over the test cases.

Journal ArticleDOI
TL;DR: This work considers the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy systems.

Journal ArticleDOI
TL;DR: This paper compares the performance of RCCRO with a large number of optimization techniques on a large set of standard continuous benchmark functions and finds that RCCRO outperforms all the others on average, showing that CRO is suitable for solving problems in the continuous domain.
Abstract: Optimization problems can generally be classified as continuous and discrete, based on the nature of the solution space. A recently developed chemical-reaction-inspired metaheuristic, called chemical reaction optimization (CRO), has been shown to perform well in many optimization problems in the discrete domain. This paper is dedicated to proposing a real-coded version of CRO, namely, RCCRO, to solve continuous optimization problems. We compare the performance of RCCRO with a large number of optimization techniques on a large set of standard continuous benchmark functions. We find that RCCRO outperforms all the others on the average. We also propose an adaptive scheme for RCCRO which can improve the performance effectively. This shows that CRO is suitable for solving problems in the continuous domain.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This article illustrates that particles tend to leave the boundaries of the search space irrespective of the initialization approach, resulting in wasted search effort, and shows that random initialization increases the number of roaming particles, and that this has a negative impact on convergence time.
Abstract: Since its birth in 1995, particle swarm optimization (PSO) has been well studied and successfully applied. While a better understanding of PSO and particle behaviors has been obtained through theoretical and empirical analysis, some issues about the behavior of particles remain unanswered. One such issue is how velocities should be initialized. Though zero initial velocities have been advocated, a popular initialization strategy is to set initial velocities to random values within the domain of the optimization problem. This article first illustrates that particles tend to leave the boundaries of the search space irrespective of the initialization approach, resulting in wasted search effort. It is also shown that random initialization increases the number of roaming particles, and that this has a negative impact on convergence time. It is further shown that enforcing a boundary constraint on personal best positions does not help much to address this problem. The main objective of the article is to show that the best approach is to initialize velocities to zero, or to random values close to zero, without imposing a personal best bound.
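The initialization strategies compared in the article can be sketched side by side. The "small" range of ±0.1 below is an assumed stand-in for "random values close to zero"; the article does not prescribe a specific range.

```python
import random

def init_swarm(n, dim, bounds=(-5.0, 5.0), velocity_mode="zero"):
    """Initialize particle positions (uniform in the domain) and
    velocities under one of three strategies:
    "zero"   - advocated: no initial velocity,
    "small"  - random values close to zero,
    "domain" - random values across the whole domain (risky: more
               particles immediately roam outside the search space)."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    if velocity_mode == "zero":
        vs = [[0.0] * dim for _ in range(n)]
    elif velocity_mode == "small":
        vs = [[random.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n)]
    else:  # "domain"
        vs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    return xs, vs

xs, vs = init_swarm(10, 3, velocity_mode="zero")
```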