Author

Rui Mendes

Other affiliations: National Central University
Bio: Rui Mendes is an academic researcher from the University of Minho. The author has contributed to research in topics including particle swarm optimization and metaheuristics. The author has an h-index of 12 and has co-authored 37 publications receiving 4,644 citations. Previous affiliations of Rui Mendes include National Central University.

Papers
Journal ArticleDOI
TL;DR: The canonical particle swarm algorithm is an approach to optimization that draws inspiration from group behavior and the establishment of social norms; this work argues that an individual should not be influenced only by the best performer among its neighbors and instead makes each particle "fully informed" by all of them.
Abstract: The canonical particle swarm algorithm is a new approach to optimization, drawing inspiration from group behavior and the establishment of social norms. It is gaining popularity, especially because of the speed of convergence and the fact that it is easy to use. However, we feel that each individual is not simply influenced by the best performer among his neighbors. We, thus, decided to make the individuals "fully informed." The results are very promising, as informed individuals seem to find better solutions in all the benchmark functions.

1,682 citations
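To make the contrast concrete, here is a minimal sketch of the two velocity updates described above: the canonical rule pulls a particle toward its own best and the single best neighbor, while the fully informed rule distributes the same total acceleration across all neighbors. The constriction coefficient, acceleration value, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the two velocity updates (assumed parameter values).
CHI = 0.7298   # constriction coefficient (a commonly used value, assumed here)
PHI = 4.1      # total acceleration, split among informants

def canonical_update(v, x, p_best, n_best, rng):
    """Canonical PSO: pulled toward own best and the best neighbor only."""
    c = PHI / 2.0
    u1, u2 = rng.random(x.shape), rng.random(x.shape)
    return CHI * (v + c * u1 * (p_best - x) + c * u2 * (n_best - x))

def fully_informed_update(v, x, neighbor_bests, rng):
    """FIPS: every neighbor's best contributes equally to the attraction."""
    k = len(neighbor_bests)
    c = PHI / k
    pull = sum(c * rng.random(x.shape) * (pb - x) for pb in neighbor_bests)
    return CHI * (v + pull)

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=3)
v = np.zeros(3)
bests = [rng.uniform(-5, 5, size=3) for _ in range(4)]
print(canonical_update(v, x, bests[0], bests[1], rng))
print(fully_informed_update(v, x, bests, rng))
```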

Proceedings ArticleDOI
12 May 2002
TL;DR: The effects of various population topologies on the particle swarm algorithm were systematically investigated and it was discovered that previous assumptions may not have been correct.
Abstract: The effects of various population topologies on the particle swarm algorithm were systematically investigated. Random graphs were generated to specifications, and their performance on several criteria was compared. What makes a good population structure? We discovered that previous assumptions may not have been correct.

1,589 citations
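The population structures investigated above can be represented as simple adjacency lists. The sketch below shows a few standard constructions (fully connected, ring, and random k-neighbor graphs) and how a particle would pick its best informant from one of them; the specific graphs generated to specification in the paper are not reproduced here.

```python
import random

def gbest_topology(n):
    """Fully connected: every particle sees every other particle."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def ring_topology(n):
    """lbest ring: each particle sees its two immediate neighbors."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def random_topology(n, k, seed=0):
    """Random graph where each particle is wired to k distinct others."""
    rng = random.Random(seed)
    return {i: rng.sample([j for j in range(n) if j != i], k) for i in range(n)}

def best_informant(i, topology, fitness):
    """Index of the best-performing neighbor of particle i (lower is better)."""
    return min(topology[i], key=lambda j: fitness[j])

fitness = [3.2, 0.5, 7.1, 1.8, 4.4, 2.9]
print(best_informant(0, ring_topology(6), fitness))
print(best_informant(0, random_topology(6, 3), fitness))
```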

Journal ArticleDOI
01 Jul 2006
TL;DR: It appears that a fully informed particle swarm is more susceptible to alterations in the topology, but with a good topology it can outperform the canonical version.
Abstract: In this study, we vary the way an individual in the particle swarm interacts with its neighbors. The performance of an individual depends on the population topology as well as on the algorithm version. It appears that a fully informed particle swarm is more susceptible to alterations in the topology, but with a good topology it can outperform the canonical version.

331 citations

Proceedings ArticleDOI
07 Aug 2002
TL;DR: Particle swarm is an optimization paradigm for real-valued functions, based on the social dynamics of group interaction, and its application to the training of neural networks is proposed.
Abstract: Particle swarm is an optimization paradigm for real-valued functions, based on the social dynamics of group interaction. We propose its application to the training of neural networks. Comparative tests were carried out, for classification and regression tasks.

274 citations
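A rough sketch of the idea of training a neural network with a particle swarm: the network's weights are flattened into a particle's position and the training loss serves as the fitness function. The network size, toy data, and PSO constants below are assumptions for illustration only.

```python
import numpy as np

# Sketch: treat the flattened weights of a tiny one-hidden-layer network as a
# particle's position and use the training loss as its fitness. Network size,
# data, and PSO constants are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # toy classification target

N_HIDDEN = 5
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1       # weights + biases

def loss(w):
    """Mean squared error of the network encoded by the weight vector w."""
    W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

# Minimal gbest PSO over the weight space.
swarm = rng.uniform(-1, 1, size=(20, DIM))
vel = np.zeros_like(swarm)
pbest, pbest_f = swarm.copy(), np.array([loss(p) for p in swarm])
for _ in range(200):
    g = pbest[np.argmin(pbest_f)]
    u1, u2 = rng.random(swarm.shape), rng.random(swarm.shape)
    vel = 0.7 * vel + 1.5 * u1 * (pbest - swarm) + 1.5 * u2 * (g - swarm)
    swarm = swarm + vel
    f = np.array([loss(p) for p in swarm])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = swarm[improved], f[improved]
print("best training loss:", pbest_f.min())
```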

Proceedings ArticleDOI
23 Jun 2003
TL;DR: It appears that a fully informed particle swarm is more susceptible to alterations in the topology, but with a good topology it can outperform the canonical version.
Abstract: We vary the way an individual in the particle swarm interacts with its neighbors. Performance depends on population topology as well as algorithm version.

268 citations


Cited by
Journal ArticleDOI
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.

4,321 citations
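As a concrete illustration of the mutation scheme the abstract describes, the sketch below implements the classic DE/rand/1/bin generation step: each offspring is built from the scaled difference of randomly selected, distinct population members, followed by binomial crossover and greedy selection. The control parameters F and CR are common defaults, assumed here rather than taken from the survey.

```python
import numpy as np

def de_rand_1_bin(pop, fitness, func, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: mutate with a scaled difference of
    distinct random members, then binomial crossover and greedy selection."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])        # scaled difference
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                     # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f = func(trial)
        if f <= fitness[i]:                               # greedy replacement
            new_pop[i], new_fit[i] = trial, f
    return new_pop, new_fit

sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))
fit = np.array([sphere(x) for x in pop])
for _ in range(100):
    pop, fit = de_rand_1_bin(pop, fit, sphere, rng=rng)
print("best:", fit.min())
```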

Journal ArticleDOI
TL;DR: The comprehensive learning particle swarm optimizer (CLPSO) is presented, which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity.
Abstract: This paper presents a variant of particle swarm optimizers (PSOs) that we call the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy enables the diversity of the swarm to be preserved to discourage premature convergence. Experiments were conducted (using codes available from http://www.ntu.edu.sg/home/epnsugan) on multimodal test functions such as Rosenbrock, Griewank, Rastrigin, Ackley, and Schwefel and composition functions both with and without coordinate rotation. The results demonstrate good performance of the CLPSO in solving multimodal problems when compared with eight other recent variants of the PSO.

3,217 citations
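The sketch below illustrates the comprehensive-learning idea: each dimension of a particle's velocity learns from some particle's historical best, either its own or, with a learning probability, another particle's chosen by a small tournament. The constants and the exemplar-selection details are simplified assumptions, not the exact procedure from the paper.

```python
import numpy as np

def clpso_velocity(i, vel, pos, pbest, pbest_f, w=0.7, c=1.5, Pc=0.3, rng=None):
    """Comprehensive-learning velocity update (simplified sketch)."""
    rng = rng or np.random.default_rng()
    n, d = pos.shape
    exemplar = np.empty(d)
    for dim in range(d):
        if rng.random() < Pc:
            # Tournament between two other particles' personal bests.
            a, b = rng.choice([j for j in range(n) if j != i], 2, replace=False)
            exemplar[dim] = pbest[a, dim] if pbest_f[a] < pbest_f[b] else pbest[b, dim]
        else:
            exemplar[dim] = pbest[i, dim]   # learn from own best in this dimension
    u = rng.random(d)
    return w * vel[i] + c * u * (exemplar - pos[i])

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, size=(10, 4))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([np.sum(p ** 2) for p in pos])
print(clpso_velocity(0, vel, pos, pbest, pbest_f, rng=rng))
```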

Journal ArticleDOI
TL;DR: Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems.
Abstract: A new differential evolution (DE) algorithm, JADE, is proposed to improve optimization performance by implementing a new mutation strategy, "DE/current-to-pbest", with an optional external archive and by updating the control parameters in an adaptive manner. DE/current-to-pbest is a generalization of the classic "DE/current-to-best", while the optional archive operation utilizes historical data to provide information about the direction of progress. Both operations diversify the population and improve convergence performance. The parameter adaptation automatically updates the control parameters to appropriate values and avoids the need for a user's prior knowledge of the relationship between the parameter settings and the characteristics of the optimization problem. It is thus helpful in improving the robustness of the algorithm. Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance on a set of 20 benchmark problems. JADE with an external archive shows promising results for relatively high-dimensional problems. In addition, it clearly shows that there is no fixed control parameter setting suitable for various problems or even for different optimization stages of a single problem.

2,778 citations
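A sketch of the DE/current-to-pbest/1 mutation that JADE is built around: the base vector moves toward a randomly chosen member of the best 100p% of the population, plus a scaled difference in which one vector may be drawn from the optional archive. The parameter adaptation is omitted here, the values of F and p are assumptions, and the index bookkeeping is simplified.

```python
import numpy as np

def current_to_pbest_mutation(i, pop, fitness, archive, F=0.5, p=0.1, rng=None):
    """JADE-style DE/current-to-pbest/1 mutation (simplified sketch):
    x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2), where x_pbest is drawn from the
    top 100*p% individuals and x_r2 may come from the optional archive."""
    rng = rng or np.random.default_rng()
    n = len(pop)
    # Pick a random member of the best 100*p% of the population.
    top = np.argsort(fitness)[:max(1, int(round(p * n)))]
    x_pbest = pop[rng.choice(top)]
    # r1 from the population, r2 from the population united with the archive.
    r1 = rng.choice([j for j in range(n) if j != i])
    union = np.vstack([pop] + ([archive] if len(archive) else []))
    r2 = rng.integers(len(union))
    return pop[i] + F * (x_pbest - pop[i]) + F * (pop[r1] - union[r2])

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 5))
fit = np.array([np.sum(x ** 2) for x in pop])
archive = np.empty((0, 5))            # replaced parents would be appended here
print(current_to_pbest_mutation(0, pop, fit, archive, rng=rng))
```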

Journal ArticleDOI
TL;DR: This paper presents a detailed overview of the basic concepts of PSO and its variants, and provides a comprehensive survey on the power system applications that have benefited from the powerful nature ofPSO as an optimization technique.
Abstract: Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. Also, it provides a comprehensive survey on the power system applications that have benefited from the powerful nature of PSO as an optimization technique. For each application, technical details that are required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions are also discussed.

2,147 citations

Journal ArticleDOI
TL;DR: A variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm.
Abstract: The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.

2,038 citations
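The cooperative idea can be sketched compactly: the solution vector is split into blocks, each block gets its own small swarm, and a candidate block is evaluated inside a context vector assembled from the other swarms' current bests. Block sizes, swarm sizes, and update constants below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

# Sketch of the cooperative decomposition behind CPSO: several small swarms,
# each responsible for one block of the solution vector, evaluated against a
# shared context vector built from the other swarms' best blocks.
def sphere(x):
    return float(np.sum(x ** 2))

DIM, N_BLOCKS, SWARM = 12, 3, 10
rng = np.random.default_rng(0)
blocks = np.array_split(np.arange(DIM), N_BLOCKS)

positions = [rng.uniform(-5, 5, size=(SWARM, len(b))) for b in blocks]
velocities = [np.zeros_like(p) for p in positions]
pbest = [p.copy() for p in positions]
context = np.concatenate([p[0] for p in pbest])     # one block per swarm

def eval_in_context(block_idx, candidate):
    """Fitness of a candidate block placed inside the shared context vector."""
    trial = context.copy()
    trial[blocks[block_idx]] = candidate
    return sphere(trial)

pbest_f = [np.array([eval_in_context(k, p) for p in positions[k]])
           for k in range(N_BLOCKS)]

for _ in range(100):
    for k in range(N_BLOCKS):
        g = pbest[k][np.argmin(pbest_f[k])]
        u1 = rng.random(positions[k].shape)
        u2 = rng.random(positions[k].shape)
        velocities[k] = (0.7 * velocities[k]
                         + 1.5 * u1 * (pbest[k] - positions[k])
                         + 1.5 * u2 * (g - positions[k]))
        positions[k] = positions[k] + velocities[k]
        f = np.array([eval_in_context(k, p) for p in positions[k]])
        improved = f < pbest_f[k]
        pbest[k][improved], pbest_f[k][improved] = positions[k][improved], f[improved]
        context[blocks[k]] = pbest[k][np.argmin(pbest_f[k])]   # refresh this block

print("best cooperative solution value:", sphere(context))
```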