Journal ArticleDOI

Self-adaptive learning based particle swarm optimization

01 Oct 2011 - Information Sciences (Elsevier) - Vol. 181, Iss. 20, pp. 4515-4538
TL;DR: A self-adaptive learning based PSO (SLPSO) is proposed to overcome the demerits of conventional PSO, notably its weak robustness to different problem structures; it is evaluated on 26 numerical optimization problems with different characteristics such as uni-modality, multi-modality, rotation, ill-conditioning, mis-scaling, and noise.
About: This article was published in Information Sciences on 2011-10-01 and has received 313 citations to date. The article focuses on the topics: Particle swarm optimization & Evolutionary algorithm.
Citations
Journal ArticleDOI
TL;DR: This paper introduces social learning mechanisms into particle swarm optimization (PSO) to develop a social learning PSO (SL-PSO), which performs well on low-dimensional problems and is promising for solving large-scale problems as well.

566 citations


Cites background from "Self-adaptive learning based partic..."

  • ...The poor performance of PSO can largely be attributed to its weak robustness to various problem structures [84]....


Journal ArticleDOI
TL;DR: This paper reviews recent studies on the Particle Swarm Optimization (PSO) algorithm and presents some potential areas for future study.
Abstract: This paper reviews recent studies on the Particle Swarm Optimization (PSO) algorithm. The review focuses on high-impact recent articles that have analyzed and/or modified PSO algorithms. This paper also presents some potential areas for future study.

532 citations


Cites methods from "Self-adaptive learning based partic..."

  • ...For example, a self-adaptive method was proposed in which the most effective PSO variant was selected during the run for the problem at hand (Wang et al., 2011)....


  • ...Wang et al. (2011) and later Changhe et al. (2012) used this idea and selected different update rules randomly during the run while the probability of selection was updated....

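The mechanism described in these excerpts, maintaining a pool of candidate update rules and adapting each rule's selection probability from its recent success, can be sketched as follows. This is a generic adaptive-operator-selection skeleton, not the exact scheme of Wang et al. (2011); the function names, the probability floor, and the success-ratio update are illustrative assumptions.

```python
import random

# Generic sketch of self-adaptive update-rule selection (illustrative,
# not the published SLPSO scheme): each candidate velocity-update rule
# keeps a success counter, and its selection probability is recomputed
# from the observed success ratios.

def select_rule(probs):
    """Roulette-wheel selection over the current rule probabilities."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

def update_probs(successes, trials, floor=0.05):
    """Map per-rule success ratios to a normalized probability vector.

    The floor keeps every rule selectable, so a temporarily poor rule
    can recover later in the run.
    """
    scores = [floor + (s / t if t else 0.0) for s, t in zip(successes, trials)]
    total = sum(scores)
    return [s / total for s in scores]
```

During a run, a rule index would be drawn with select_rule for each particle update; whenever the chosen rule improves the particle's personal best, its success counter is incremented, and update_probs is called periodically to refresh the selection probabilities.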

Journal ArticleDOI
TL;DR: Several classes of optimization problems, such as discrete, continuous, constrained, multi-objective and characterized by uncertainties, are addressed by indicating the memetic “recipes” proposed in the literature.
Abstract: Memetic computing is a subject in computer science which considers complex structures such as the combination of simple agents and memes, whose evolutionary interactions lead to intelligent complexes capable of problem-solving. The founding cornerstone of this subject has been the concept of memetic algorithms, that is, a class of optimization algorithms whose structure is characterized by an evolutionary framework and a list of local search components. This article presents a broad literature review on this subject focused on optimization problems. Several classes of optimization problems, such as discrete, continuous, constrained, multi-objective and characterized by uncertainties, are addressed by indicating the memetic “recipes” proposed in the literature. In addition, this article focuses on implementation aspects and especially the coordination of memes, which is the most important and characterizing aspect of a memetic structure. Finally, some considerations about future trends in the subject are given.

522 citations


Cites methods from "Self-adaptive learning based partic..."

  • ...By means of a similar logic, in [222] a memetic swarm intelligence approach is used for multimodal optimization....


Journal ArticleDOI
TL;DR: A novel swarm algorithm called the Social Spider Optimization (SSO) is proposed for solving optimization tasks based on the simulation of cooperative behavior of social-spiders, and is compared to other well-known evolutionary methods.
Abstract: Swarm intelligence is a research field that models the collective behavior in swarms of insects or animals. Several algorithms arising from such models have been proposed to solve a wide range of complex optimization problems. In this paper, a novel swarm algorithm called the Social Spider Optimization (SSO) is proposed for solving optimization tasks. The SSO algorithm is based on the simulation of cooperative behavior of social-spiders. In the proposed algorithm, individuals emulate a group of spiders which interact with each other based on the biological laws of the cooperative colony. The algorithm considers two different search agents (spiders): males and females. Depending on gender, each individual is conducted by a set of different evolutionary operators which mimic different cooperative behaviors that are typically found in the colony. In order to illustrate the proficiency and robustness of the proposed approach, it is compared to other well-known evolutionary methods. The comparison examines several standard benchmark functions that are commonly considered within the literature of evolutionary algorithms. The results show the high performance of the proposed method in searching for a global optimum on several benchmark functions.

427 citations


Cites background from "Self-adaptive learning based partic..."

  • ...Although PSO and ABC are the most popular swarm algorithms for solving complex optimization problems, they present serious flaws such as premature convergence and difficulty to overcome local minima [10,11]....


  • ...However, they present serious flaws such as premature convergence and difficulty to overcome local minima [10,11]....


Journal ArticleDOI
TL;DR: A hybrid PSO algorithm is proposed, called DNSPSO, which employs a diversity enhancing mechanism and neighborhood search strategies to achieve a trade-off between exploration and exploitation abilities.

366 citations


Cites background from "Self-adaptive learning based partic..."

  • ...[56] introduced a self-adaptive learning strategy to improve the performance of CLPSO....


References
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

35,104 citations


"Self-adaptive learning based partic..." refers background or methods in this paper

  • ...In original PSO, the velocity $V_i^d$ and position $X_i^d$ of the dth dimension of the ith particle are updated as follows [23]:...

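The update equations themselves are cut off in this excerpt. For reference, the standard original-PSO update that the passage points to (with $P_i$ the personal best of particle $i$, $P_g$ the swarm's global best, $c_1, c_2$ acceleration coefficients, and $r_1, r_2$ uniform random numbers in $[0, 1]$) is:

```latex
V_i^d \leftarrow V_i^d + c_1 r_1 \left(P_i^d - X_i^d\right) + c_2 r_2 \left(P_g^d - X_i^d\right),
\qquad
X_i^d \leftarrow X_i^d + V_i^d
```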

  • ...Inspired by the concerted actions of flocks of birds, shoals of fish, and swarms of insects searching for food, Kennedy and Eberhart originally proposed particle swarm optimization (PSO) in the mid-1990s [23,24]....


Proceedings ArticleDOI
04 May 1998
TL;DR: A new parameter, called inertia weight, is introduced into the original particle swarm optimizer, in which each particle adjusts its flying according to its own flying experience and its companions' flying experience, so that the swarm resembles a school of flying birds.
Abstract: Evolutionary computation techniques, genetic algorithms, evolutionary strategies and genetic programming are motivated by the evolution of nature. A population of individuals, which encode the problem solutions, is manipulated according to the rule of survival of the fittest through "genetic" operations, such as mutation, crossover and reproduction. A best solution is evolved through the generations. In contrast to evolutionary computation techniques, Eberhart and Kennedy developed a different algorithm through simulating social behavior (R.C. Eberhart et al., 1996; R.C. Eberhart and J. Kennedy, 1996; J. Kennedy and R.C. Eberhart, 1995; J. Kennedy, 1997). As in other algorithms, a population of individuals exists. This algorithm is called particle swarm optimization (PSO) since it resembles a school of flying birds. In a particle swarm optimizer, instead of using genetic operators, these individuals are "evolved" by cooperation and competition among the individuals themselves through generations. Each particle adjusts its flying according to its own flying experience and its companions' flying experience. We introduce a new parameter, called inertia weight, into the original particle swarm optimizer. Simulations have been done to illustrate the significant and effective impact of this new parameter on the particle swarm optimizer.

9,373 citations


"Self-adaptive learning based partic..." refers background in this paper

  • ...PSO-w: PSO with inertia weight [49]; PSO-cf: PSO with constriction factor [12]; PSO-cf-local: local version of PSO with constriction factor [25]; FIPS-PSO: fully informed PSO [36]; FDR-PSO: Fitness-distance-ratio based PSO [41]; CPSO-H: cooperative based PSO [4]; CLPSO: comprehensive learning PSO [28]....


  • ...Moreover, in order to strengthen the local search ability, [49] introduced an inertia weight parameter w, which is usually set to be in (0,1), into the velocity updating:...

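The truncated rule is the standard inertia-weight velocity update from [49], in the same notation as the original PSO update above:

```latex
V_i^d \leftarrow w\, V_i^d + c_1 r_1 \left(P_i^d - X_i^d\right) + c_2 r_2 \left(P_g^d - X_i^d\right)
```

A larger $w$ favors global exploration while a smaller $w$ strengthens local search, which is why $w$ is usually kept in $(0, 1)$ and often decreased over the run.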

Journal ArticleDOI
TL;DR: This paper analyzes a particle's trajectory as it moves in discrete time, then progresses to the view of it in continuous time, leading to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies.
Abstract: The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. This paper analyzes a particle's trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions.
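The best-known result of this trajectory analysis is the constriction coefficient. As a point of reference, the standard Clerc-Kennedy formulation (not quoted from this page) is:

```latex
V_i^d \leftarrow \chi \left[ V_i^d + c_1 r_1 \left(P_i^d - X_i^d\right) + c_2 r_2 \left(P_g^d - X_i^d\right) \right],
\qquad
\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|}, \quad \varphi = c_1 + c_2 > 4
```

The common setting $c_1 = c_2 = 2.05$ gives $\varphi = 4.1$ and $\chi \approx 0.7298$, which yields convergent particle trajectories without an explicit velocity clamp.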

8,287 citations

Journal ArticleDOI
TL;DR: This paper puts forward two useful methods for self-adaptation of the mutation distribution, the concepts of derandomization and cumulation, and reveals local and global search properties of the evolution strategy with and without covariance matrix adaptation.
Abstract: This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
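In modern CMA-ES notation, the two ideas highlighted in the abstract, cumulation via an evolution path and adaptation of the full covariance matrix, combine in the rank-one covariance update. This is a simplified form under assumed notation, not the paper's exact 2001 formulation: $m^{(t)}$ is the distribution mean, $\sigma$ the step size, and the coefficients $c_c$, $c_1$ and the variance-effective selection mass $\mu_{\mathrm{eff}}$ are the usual learning rates.

```latex
p_c \leftarrow (1 - c_c)\, p_c
  + \sqrt{c_c (2 - c_c)\, \mu_{\mathrm{eff}}}\; \frac{m^{(t+1)} - m^{(t)}}{\sigma},
\qquad
C \leftarrow (1 - c_1)\, C + c_1\, p_c\, p_c^{\top}
```

Cumulation enters through the evolution path $p_c$: consistently aligned mean shifts reinforce one another, so the covariance is stretched along directions of sustained progress rather than along any single step.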

3,752 citations


"Self-adaptive learning based partic..." refers methods in this paper

  • ...As discussed in the above subsection, the uni-modal problem by Rosenbrock and the multi-modal problem by Rastrigin are two typical hard tasks and we generalize three mis-scaled test problems of these two functions from group 2, which is similar to [18]....


Book ChapterDOI
TL;DR: This paper first analyzes the impact that inertia weight and maximum velocity have on the performance of the particle swarm optimizer, and then provides guidelines for selecting these two parameters.
Abstract: This paper first analyzes the impact that inertia weight and maximum velocity have on the performance of the particle swarm optimizer, and then provides guidelines for selecting these two parameters. Analysis of experiments demonstrates the validity of these guidelines.

3,557 citations


"Self-adaptive learning based partic..." refers background in this paper

  • ...analyzed the impact of the inertia weight and maximum velocity in PSO [50]....
