Author

M. Clerc

Bio: M. Clerc is an academic researcher at Orange S.A. The author has contributed to research on the topics Multi-swarm optimization and Swarm behaviour. The author has an h-index of 1 and has co-authored 1 publication receiving 1,493 citations.

Papers
Proceedings ArticleDOI
M. Clerc
06 Jul 1999
TL;DR: A very simple iterative particle swarm optimization algorithm is presented, with just one equation and one social/confidence parameter; the results are promising enough to make the method worth trying on more complex problems.
Abstract: A very simple iterative particle swarm optimization algorithm is presented, with just one equation and one social/confidence parameter. We define a "no-hope" convergence criterion and a "re-hope" method so that, from time to time, the swarm re-initializes its position according to some gradient estimations of the objective function and to the previous re-initialization (giving the swarm a kind of rudimentary memory). We then study two different cases, a rather "easy" one (the Alpine function) and a "difficult" one (the Banana function), both in dimension two only. The process is improved by taking into account the swarm's gravity center (the "queen"), and the results are good enough that the method is certainly worth trying on more complex problems.

1,550 citations
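As a reading aid, the following is a minimal sketch of the kind of one-equation, one-parameter swarm with a "no-hope"/"re-hope" restart that the abstract describes. The update rule, the swarm-diameter test, and the restart scheme are assumptions for illustration; the paper's gradient-based re-initialization and the "queen" (gravity center) refinement are omitted.

import numpy as np

def simple_pso(f, lower, upper, n_particles=20, phi=0.7, iters=200, no_hope_radius=1e-3, seed=None):
    # Minimal sketch of a one-equation swarm with a "no-hope"/"re-hope" restart.
    # Assumed details (not from the paper): the single update x <- x + phi*r*(best - x),
    # a swarm-diameter "no-hope" test, and re-seeding around the best point found so far.
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper, size=(n_particles, lower.size))
    best = min(x, key=f).copy()
    for _ in range(iters):
        # One-equation move: each particle drifts toward the best known point,
        # scaled by the single social/confidence parameter phi and a random factor.
        x += phi * rng.random((n_particles, 1)) * (best - x)
        candidate = min(x, key=f)
        if f(candidate) < f(best):
            best = candidate.copy()
        # "No-hope" test: the swarm has collapsed onto the best point, so "re-hope"
        # by re-seeding the particles around it (a crude restart with memory).
        if np.max(np.linalg.norm(x - best, axis=1)) < no_hope_radius:
            x = best + rng.uniform(-0.1, 0.1, size=x.shape) * (upper - lower)
    return best

# Example on the 2-D Banana (Rosenbrock) function mentioned in the abstract:
# simple_pso(lambda z: (1 - z[0])**2 + 100*(z[1] - z[0]**2)**2, [-2, -2], [2, 2])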


Cited by
Proceedings ArticleDOI
R. Eberhart, Yuhui Shi
27 May 2001
TL;DR: Developments in the particle swarm algorithm since its origin in 1995 are reviewed, with brief discussions of constriction factors, inertia weights, and tracking dynamic systems.
Abstract: This paper focuses on the engineering and computer science aspects of developments, applications, and resources related to particle swarm optimization. Developments in the particle swarm algorithm since its origin in 1995 are reviewed. Included are brief discussions of constriction factors, inertia weights, and tracking dynamic systems. Applications, both those already developed and promising future application areas, are reviewed. Finally, resources related to particle swarm optimization are listed, including books, Web sites, and software. A particle swarm optimization bibliography is included at the end of the paper.

4,041 citations
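For reference, the inertia-weighted velocity update that the review's discussion of inertia weights refers to looks roughly as follows; this is a generic sketch with commonly used default parameter values, not code or settings taken from the paper.

import numpy as np

def inertia_weight_update(v, x, pbest, gbest, w=0.729, c1=1.49, c2=1.49, rng=None):
    # Generic inertia-weighted PSO velocity update (a reference sketch, not code
    # from the paper); the parameter values are commonly used defaults, assumed here.
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(v.shape)  # independent uniform factors per component
    r2 = rng.random(v.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)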

Proceedings ArticleDOI
16 Jul 2000
TL;DR: It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension.
Abstract: The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known to the authors.

2,922 citations
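The recommendation above, using a constriction factor while clamping Vmax to the dynamic range of each variable, can be sketched roughly as follows; the constriction formula and coefficient values follow the commonly used Clerc-Kennedy formulation and are assumptions here rather than text from the paper.

import numpy as np

def constricted_update(v, x, pbest, gbest, xmin, xmax, c1=2.05, c2=2.05, rng=None):
    # Constriction-factor velocity update with per-dimension Vmax clamping (sketch).
    # The constriction coefficient follows the widely used Clerc-Kennedy formula for
    # phi = c1 + c2 > 4 (an assumption here, not text quoted from the paper); Vmax is
    # set to the dynamic range of each variable, as the abstract above recommends.
    rng = np.random.default_rng() if rng is None else rng
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))  # ~0.7298 for phi = 4.1
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v_new = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    vmax = np.asarray(xmax, float) - np.asarray(xmin, float)     # Vmax = dynamic range, per dimension
    return np.clip(v_new, -vmax, vmax)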

Journal ArticleDOI
TL;DR: A novel parameter automation strategy for the particle swarm algorithm is introduced, along with two further extensions that improve its performance after a predefined number of generations, including a time-varying mutation step size that overcomes the difficulty of selecting an appropriate step size for different problems.
Abstract: This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum solution, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia weight factor in particle swarm optimization (PSO). On the basis of TVAC, two new strategies are discussed to improve the performance of the PSO. First, the concept of "mutation" is introduced to particle swarm optimization along with TVAC (MPSO-TVAC), by adding a small perturbation to a randomly selected modulus of the velocity vector of a random particle with a predefined probability. Second, we introduce a novel particle swarm concept, the "self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC)". Under this method, only the "social" part and the "cognitive" part of the particle swarm strategy are considered to estimate the new velocity of each particle, and particles are reinitialized whenever they stagnate in the search space. In addition, to overcome the difficulty of selecting an appropriate mutation step size for different problems, a time-varying mutation step size was introduced. Further, for most of the benchmarks, the performance of the MPSO-TVAC method is found to be insensitive to the mutation probability. The effect of the reinitialization velocity on the performance of the HPSO-TVAC method is also observed; a time-varying reinitialization step size is found to be an efficient parameter optimization strategy for the HPSO-TVAC method. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. Furthermore, it has also been observed that both the MPSO and HPSO strategies perform poorly when the acceleration coefficients are fixed at two.

2,753 citations
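The time-varying coefficients and the mutation step described above can be sketched as simple linear schedules plus an occasional perturbation; the end-point values and probability below are commonly reported settings used as assumptions, not values quoted from the abstract.

import numpy as np

def tvac_coefficients(t, t_max, c1_range=(2.5, 0.5), c2_range=(0.5, 2.5), w_range=(0.9, 0.4)):
    # Linear schedules for the time-varying acceleration coefficients (TVAC) and the
    # time-varying inertia weight. The end-point values are commonly reported settings,
    # used here as assumptions rather than values quoted from the abstract.
    frac = t / float(t_max)
    c1 = c1_range[0] + (c1_range[1] - c1_range[0]) * frac  # cognitive: large early, small late
    c2 = c2_range[0] + (c2_range[1] - c2_range[0]) * frac  # social: small early, large late
    w = w_range[0] + (w_range[1] - w_range[0]) * frac      # inertia: decreases over time
    return w, c1, c2

def mutate_one_component(v, step_size, p_mut=0.05, rng=None):
    # Illustrative MPSO-TVAC-style step: with a predefined probability, add a small
    # perturbation to one randomly selected velocity component of one random particle.
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < p_mut:
        i = rng.integers(v.shape[0])   # random particle
        d = rng.integers(v.shape[1])   # random component ("modulus") of its velocity
        v[i, d] += (rng.random() - 0.5) * step_size
    return v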

Journal ArticleDOI
TL;DR: The particle swarm optimization algorithm is analyzed using standard results from dynamic system theory, and graphical parameter selection guidelines are derived, yielding performance superior to previously published results.

2,554 citations
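The graphical guidelines mentioned in the TL;DR come from treating a simplified, deterministic PSO model as a discrete dynamic system; a commonly quoted convergence condition for that model is sketched below as an illustration (the inequality is standard in this line of analysis, not a formula quoted from the paper).

def in_convergence_region(w, c1, c2):
    # A commonly quoted sufficient condition for convergence of the simplified,
    # deterministic PSO model treated as a discrete dynamic system (the random
    # factors are replaced by their expected value, phi = (c1 + c2) / 2). This
    # inequality is standard in this line of analysis and is an assumption here,
    # not a formula quoted from the paper.
    phi = (c1 + c2) / 2.0
    return -1.0 < w < 1.0 and 0.0 < phi < 2.0 * (1.0 + w)

# Example: the frequently used setting w = 0.729, c1 = c2 = 1.49 lies inside the region.
# in_convergence_region(0.729, 1.49, 1.49)  -> True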

Journal ArticleDOI
TL;DR: A variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer (CPSO), is presented; it employs cooperative behavior to significantly improve the performance of the original algorithm.
Abstract: The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.

2,038 citations
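The cooperative scheme described above splits the solution vector across several swarms, each optimizing a subset of components while the remaining components are filled in from a shared context vector; the helper functions below sketch that split-and-evaluate step under assumed names and an assumed round-robin grouping.

import numpy as np

def split_dimensions(dim, n_swarms):
    # Assumed round-robin split of the solution vector's indices across sub-swarms;
    # the paper's own grouping may differ.
    return [np.arange(dim)[k::n_swarms] for k in range(n_swarms)]

def evaluate_component(f, context, indices, candidate):
    # Score a sub-swarm candidate by plugging its components into the shared
    # context vector (the best known full solution) and evaluating the whole vector.
    trial = context.copy()
    trial[indices] = candidate
    return f(trial)

# Usage sketch: each sub-swarm runs ordinary PSO on its own index block, calls
# evaluate_component() for fitness, and writes its best components back into `context`.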