Adaptive Particle Swarm Optimization
Citations
Parameter tuning for configuring and analyzing evolutionary algorithms
Book review: particle swarm optimization for single objective continuous space problems: A review
Particle Swarm Optimization With an Aging Leader and Challengers
A Self-Learning Particle Swarm Optimizer for Global Optimization Problems
Improved artificial bee colony algorithm for global optimization
References
Particle swarm optimization
A modified particle swarm optimizer
Evolutionary programming made faster
Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients
Using selection to improve particle swarm optimization
Related Papers (5)
A variant with a time varying PID controller of particle swarm optimizers
Frequently Asked Questions (10)
Q2. What future work is mentioned in the paper "Adaptive particle swarm optimization"?
Further work includes research into adaptive control of topological structures based on ESE and applications of the ESE technique to other evolutionary computation algorithms.
Q3. What are the main techniques used to combine with PSO?
In addition to the standard GA operators, e.g., selection [21], crossover [22], and mutation [23], other techniques such as local search [24] and differential evolution [39] have been combined with PSO.
Q4. What is the importance of a search algorithm in a unimodal space?
In a unimodal space, it is important for an optimization or search algorithm to converge fast and to refine the solution for high accuracy.
Q5. What is the simplest way to represent a swarm of particles?
In PSO, a swarm of particles is used to represent potential solutions, and each particle i is associated with two vectors: the velocity vector Vi = [v1i, v2i, ..., vDi] and the position vector Xi = [x1i, x2i, ..., xDi], where D stands for the dimension of the solution space.
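The representation above can be sketched in a few lines of Python. This is an illustrative minimal structure, not the paper's implementation; the bounds, swarm size, and field names are assumptions.

```python
import random

D = 2  # dimension of the solution space (illustrative)

def make_particle(lower=-5.0, upper=5.0):
    """Build one particle i with a D-dimensional position vector Xi,
    a D-dimensional velocity vector Vi, and a personal-best record."""
    position = [random.uniform(lower, upper) for _ in range(D)]
    velocity = [0.0] * D
    return {"position": position, "velocity": velocity,
            "pbest": position[:]}  # personal best starts at the initial position

# A swarm is simply a list of such particles.
swarm = [make_particle() for _ in range(10)]
assert all(len(p["position"]) == D and len(p["velocity"]) == D for p in swarm)
```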
Q6. What is the consequence of a swarm jumping out of the local optimum?
The consequence is that the swarm will strongly be attracted by the current best region, causing premature convergence, which is harmful if the current best region is a local optimum.
Q7. What is the effect of elitist learning on the swarm?
Trials in elitist learning perturb the particle that leads the swarm, which is reflected in the slight divergence between c1 and c2 that follows.
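Such a perturbation of the swarm leader can be sketched as a Gaussian trial on one randomly chosen dimension. This is a hedged illustration: the bounds, the standard deviation `sigma`, and the function name are assumptions, not the paper's exact scheme.

```python
import random

def elitist_learning(gbest, sigma=0.1, lower=-5.0, upper=5.0):
    """Perturb one randomly chosen dimension of the leader's position
    with Gaussian noise scaled by the search range, then clamp to bounds.
    (Parameter names and bounds are illustrative.)"""
    trial = gbest[:]                       # work on a copy of the leader
    d = random.randrange(len(trial))       # pick one dimension to perturb
    trial[d] += (upper - lower) * random.gauss(0.0, sigma)
    trial[d] = max(lower, min(upper, trial[d]))  # keep the trial in bounds
    return trial
```

If the trial position is better than the leader, it would replace it; otherwise the leader is kept unchanged.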
Q8. What is the APSO's ability to find a potential optimal region in a multimodal space?
These plots confirm that, in a multimodal space, the APSO can also find a potentially optimal region (possibly a local optimum) quickly in an early phase and converge fast with a rapidly decreasing diversity.
Q9. What is the simplest way to denote the PSO?
In this paper, the authors focus on the PSO with an inertia weight and use a global version of PSO (GPSO) [13] to denote the traditional global-version PSO with an inertia weight as given by (3).
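The inertia-weight update that GPSO refers to can be sketched for a single particle as follows. The velocity rule, v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), matches the standard inertia-weight formulation; the default coefficient values and function name here are illustrative assumptions.

```python
import random

def gpso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One inertia-weight PSO update for a single particle:
        v_d <- w*v_d + c1*r1*(pbest_d - x_d) + c2*r2*(gbest_d - x_d)
        x_d <- x_d + v_d
    r1, r2 are fresh uniform random numbers in [0, 1) per dimension."""
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

With c1 = c2 = 0 and w = 1 the particle simply drifts with its current velocity, which makes the roles of the inertia and acceleration terms easy to see in isolation.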
Q10. What is the significance of the constriction factor in PSO?
In Kennedy’s two extreme cases [36], i.e., the “social-only” model and the “cognitive-only” model, experiments have shown that both acceleration coefficients are essential to the success of PSO.