About: The article was published on 2007-01-01 and is currently open access. It has received 192 citations to date. The article focuses on the topics: Multi-swarm optimization & Metaheuristic.
Particle Swarm Optimization (PSO) is a versatile population-based optimization technique, in many respects similar to evolutionary algorithms (EAs).
(Moving peaks, arguably representative of real-world problems, consist of a number of peaks of changing width and height, in lateral motion [12, 7].)
There are four principal mechanisms for either re-diversification or diversity maintenance: randomization [23], repulsion [5], dynamic networks [24, 36] and multi-populations [29, 6].
Multi-swarms combine repulsion with multi-populations [6, 7].
This chapter starts with a description of the canonical PSO algorithm and then, in Section 3, explains why dynamic environments pose particular problems for unmodified PSO.
2 Canonical PSO
In PSO, population members possess a memory of the best (with respect to an objective function) location that they have visited in the past, pbest, and of its fitness.
Every particle will share information with every other particle in the swarm so that there is a single gbest global best attractor representing the best location found by the entire swarm.
Convergence towards a good solution will not follow from these dynamics alone; the particle flight must progressively contract.
For the purposes of this chapter, the Clerc-Kennedy PSO will be taken as the canonical swarm; the constriction factor χ replaces other energy-draining factors extant in the literature, such as a decreasing 'inertia weight' and velocity clamping.
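As a concrete sketch, the constricted update can be written as follows. This is an illustrative NumPy implementation of the Clerc-Kennedy update with a star topology and φ = 4.1; the function name and array layout are this sketch's own conventions, not the chapter's.

```python
import numpy as np

def clerc_kennedy_step(x, v, pbest, pbest_f, f, rng, phi=4.1):
    """One iteration of the Clerc-Kennedy (constricted) PSO.

    x, v, pbest: (N, d) arrays of positions, velocities and personal bests.
    f: objective function to MAXIMIZE, mapping a d-vector to a scalar.
    phi: sum of the acceleration coefficients (phi > 4 for convergence).
    """
    # Constriction coefficient chi, derived from phi (Clerc & Kennedy).
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))
    gbest = pbest[np.argmax(pbest_f)]   # star topology: one global attractor
    c = phi / 2.0                       # phi split equally between the two terms
    n, d = x.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = chi * (v + c * r1 * (pbest - x) + c * r2 * (gbest - x))
    x = x + v
    fx = np.array([f(xi) for xi in x])
    improved = fx > pbest_f             # maximization, as in the chapter
    pbest = np.where(improved[:, None], x, pbest)
    pbest_f = np.where(improved, fx, pbest_f)
    return x, v, pbest, pbest_f
```

With φ = 4.1 the constriction factor comes out at χ ≈ 0.7298, the value most often quoted for this parameterization.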
3 PSO problems with moving peaks
As mentioned in Section 1, PSO must be modified for optimal results on dynamic environments typified by the moving peaks benchmark (MPB).
These modifications must solve the problems of outdated memory, and of lost diversity.
This section explains the origins of these problems in the context of the MPB, and shows how outdated memory is easily addressed.
The following section then considers the second, more severe, problem.
3.1 Moving Peaks
The dynamic objective function of MPB, f(x, t), is optimized at ‘peak’ locations x∗ and has a global optimum at x∗∗ = argmax{f(x∗)} (once more, assuming optimization means maximizing).
There are p peaks in total, although some peaks may become obscured.
This scenario, which is not the most general, has nevertheless been put forward as representative of real-world dynamic problems [12], and a benchmark function is publicly available for download [11].
Note that small changes in f(x∗) can still invoke large changes in x∗∗ due to peak promotion, so the many peaks model is far from trivial.
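A minimal illustrative version of such a landscape might look like the following. This is a simplified cone-peak sketch with hypothetical parameter values; the published benchmark [11] offers many more controls (change severity schedules, correlated movement, and so on).

```python
import numpy as np

class MovingPeaks:
    """Simplified moving-peaks sketch (illustrative, not Branke's full
    generator): p cone-shaped peaks whose heights, widths and positions
    change on each call to change()."""

    def __init__(self, p=10, d=5, extent=100.0, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng(0)
        self.pos = self.rng.uniform(0, extent, (p, d))   # peak locations x*
        self.height = self.rng.uniform(30.0, 70.0, p)
        self.width = self.rng.uniform(1.0, 12.0, p)
        self.extent, self.p, self.d = extent, p, d

    def __call__(self, x):
        # f(x, t) = max over peaks of (height - width * distance to peak)
        dist = np.linalg.norm(self.pos - x, axis=1)
        return float(np.max(self.height - self.width * dist))

    def change(self, s=1.0, h_sev=7.0, w_sev=1.0):
        # Shift each peak by a random vector of length s (the shift
        # length); jitter heights and widths.
        step = self.rng.normal(size=(self.p, self.d))
        step *= s / np.linalg.norm(step, axis=1, keepdims=True)
        self.pos = np.clip(self.pos + step, 0, self.extent)
        self.height += h_sev * self.rng.normal(size=self.p)
        self.width = np.abs(self.width + w_sev * self.rng.normal(size=self.p))
```

Note how peak promotion arises naturally here: a small height jitter can swap which peak attains the global maximum, moving x** a long way even though every f(x*) changed only slightly.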
3.2 The problem of outdated memory
Outdated memory arises at an environment change, when the optima may shift in location and/or value.
Particle memory (namely the best location visited in the past, and its corresponding fitness) may no longer be true at change, with potentially disastrous effects on the search.
The problem of outdated memory is typically solved by either assuming that the algorithm knows just when the environment change occurs, or that it can detect change.
In either case, the algorithm must invoke an appropriate response.
One possible drawback is that the function may not have changed at the chosen test point p_i, yet has changed elsewhere.
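A change-detection response along these lines might be sketched as follows. The helper names are hypothetical; the sentinel is typically a stored pbest position, re-evaluated once per iteration.

```python
def detect_change(f, sentinel, cached_value, tol=1e-12):
    """Detect an environment change by re-evaluating a sentinel point and
    comparing with its cached fitness. Costs one extra evaluation per
    iteration, and misses changes that happen to leave f unchanged at
    the sentinel (the drawback noted above)."""
    return abs(f(sentinel) - cached_value) > tol

def refresh_memories(f, x, pbest):
    """Response to a detected change: re-evaluate both the stored bests
    and the current positions, and reset each memory to whichever of the
    two is better under the new landscape (maximizing)."""
    new_pbest, new_f = [], []
    for xi, pi in zip(x, pbest):
        fxi, fpi = f(xi), f(pi)
        if fxi > fpi:
            new_pbest.append(xi); new_f.append(fxi)
        else:
            new_pbest.append(pi); new_f.append(fpi)
    return new_pbest, new_f
```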
3.3 The problem of lost diversity
Once converged, the population takes time to re-diversify and re-converge, and is effectively unable to track a moving optimum.
It is helpful at this stage to introduce the swarm diameter |S|, defined as the largest distance, along any axis, between any two particles [2], as a measure of swarm diversity (Fig. 1).
If the optimum shift is significantly far from the swarm, the low velocities of the particles (which are of order |S|) will inhibit re-diversification and tracking, and the swarm can even oscillate about a false attractor and along a line perpendicular to the true optimum, in a phenomenon known as linear collapse [5].
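The diameter measure can be computed directly from the position matrix; a small NumPy sketch of the definition above:

```python
import numpy as np

def swarm_diameter(x):
    """|S|: the largest distance, along any single axis, between any two
    particles [2] -- i.e. the longest side of the swarm's axis-aligned
    bounding box. x is an (N, d) array of particle positions."""
    return float(np.max(x.max(axis=0) - x.min(axis=0)))
```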
These considerations can be quantified with the help of a prediction for the rate of diversity loss [2, 3, 10].
The number of function evaluations between changes, K, can be converted into a period measured in iterations, L, by considering the total number of function evaluations per iteration including, where necessary, the extra test-for-change evaluations.
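Under the simple accounting just described, the conversion might be sketched as follows; the per-iteration cost model here (one evaluation per particle plus a fixed number of change tests) is an assumption of this sketch, not the chapter's exact bookkeeping.

```python
def iterations_per_change(K, n_particles, n_change_tests=1):
    """Convert K (function evaluations between environment changes) into
    L (iterations between changes). Each iteration is assumed to cost
    one evaluation per particle, plus any extra test-for-change
    evaluations."""
    return K // (n_particles + n_change_tests)
```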
4 Diversity lost and diversity regained
There are two solutions in the literature to the problem of insufficient diversity.
These modifications are the subject of this section.
4.1 Re-diversification
Re-diversification schemes all involve randomization of the entire swarm, or of part of it.
Clearly the problem with this approach is the arbitrariness of the extra parameters.
Since randomization implies information loss, there is a danger of erasing too much information and effectively re-starting the swarm.
Such an adaptive scheme could infer details about f's dynamism during the run, making appropriate adjustments to the re-diversification parameters.
So far, though, higher level modifications such as these have not been studied.
4.2 Maintaining diversity by repulsion
A constant, and hopefully good enough, degree of swarm diversity can be maintained at all times either through some type of repulsive mechanism, or by adjustments to the information sharing neighborhood.
Neither technique, however, has been applied to the dynamic scenario.
The model can be depicted as a cloud of charged particles orbiting a contracting, neutral, PSO nucleus (Fig. 3). The charged particles can be either classical or quantum particles; either type is discussed in some depth in references [6, 7] and in the following section.
Charge enhances diversity in the vicinity of the converging PSO sub-swarm, so that optimum shifts within this cloud should be trackable.
Good tracking (outperforming canonical PSO) has been demonstrated for unimodal dynamic environments of varying severities [3].
4.3 Maintaining diversity with dynamic network topology
Adjustments to the information sharing topology can be made with the intention of reducing, maybe temporarily, the desire to move towards the global best position, thereby enhancing population diversity.
Li and Dam use a gridlike neighborhood structure, and Jansen and Middendorf test a hierarchical structure, reporting improvements over unmodified PSO for unimodal dynamic environments [24, 36].
4.4 Maintaining diversity with multi-populations
The multi-population idea is particularly helpful in multi-modal environments such as many peaks.
Multi-population techniques include niching, speciation and multi-swarms.
In the static context, the niching PSO of Brits et al. [16] can successfully optimize some static benchmark problems.
This technique, as the authors point out, would fail in a dynamic environment because niching depends on a homogeneous distribution of particles in the search space, and on a training phase.
Other related work includes using different swarms in cooperation to optimize different parts of a solution [35], a two swarm min-max optimization algorithm [33] and iteration by iteration clustering of particles into sub-swarms [26].
5 Multi-swarms
A combined approach might incorporate the virtues of the multi-population approach and of swarm-diversity-enhancing mechanisms such as repulsion.
Such an optimizer would be well suited to the many peaks environment.
The extension of multi-swarms to a dynamic optimizer was made by Blackwell and Branke [6], and is inspired by Branke's own self-organizing scouts (SOS) [13].
The scouts have been shown to give excellent results on the many peaks benchmark.
The motivation for these operators is that a mechanism must be found to prevent two or more swarms from trying to optimize the same peak and also to maintain multi-swarm diversity, that is to say the diversity amongst the population of swarms as a whole (anti-convergence).
5.1 Atom Analogy
All particles are in fact members of the same information network, so that they all (in the star topology) have access to pg.

Algorithm 2 Multi-Swarm
  // Initialization
  FOR EACH particle ni
    Randomly initialize vni, xni = pni
    Evaluate f(pni)
  FOR EACH swarm n
    png := argmax{f(pni)}
  REPEAT
    // Anti-Convergence
    IF all swarms have converged THEN
      Re-initialize worst swarm
    // Exclusion
    FOR EACH swarm m ≠ n
      IF swarm attractor png is within rexcl of pmg THEN
        IF f(png) ≤ f(pmg) THEN
          Re-initialize swarm n
        ELSE
          Re-initialize swarm m
    FOR EACH particle in re-initialized swarm
      Re-evaluate function value
  UNTIL number of function evaluations performed > max
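The two re-initialization triggers of Algorithm 2 can be sketched in isolation. This is illustrative Python; the attractor layout, the callback interface and the surrounding swarm bookkeeping are this sketch's own conventions.

```python
import numpy as np

def apply_exclusion(attractors, fitnesses, r_excl, reinit):
    """Exclusion over all swarm pairs: if two swarm attractors fall
    within r_excl of each other, re-initialize the worse swarm.
    `reinit` is a caller-supplied callback taking a swarm index."""
    M = len(attractors)
    for n in range(M):
        for m in range(n + 1, M):
            if np.linalg.norm(attractors[n] - attractors[m]) < r_excl:
                loser = n if fitnesses[n] <= fitnesses[m] else m
                reinit(loser)

def apply_anti_convergence(diameters, fitnesses, r_conv, reinit):
    """Anti-convergence: if every swarm has converged (diameter below
    r_conv), free the worst swarm by re-initializing it, so that at
    least one swarm keeps patrolling the search space."""
    if all(d < r_conv for d in diameters):
        reinit(int(np.argmin(fitnesses)))
```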
So far, two uniform distributions have been tested.
An order-of-magnitude estimate for the parameters rcloud (for quantum swarms) or Q (for classically charged clouds) can be made by supposing that good tracking will occur if the mean charged-particle separation ⟨|x − pg|⟩ is comparable to s.
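The uniform-volume quantum cloud can be sampled as follows. This is a standard ball-sampling sketch, assuming the uniform volume distribution mentioned above; the direction/radius decomposition is this sketch's own choice.

```python
import numpy as np

def quantum_positions(pg, n, r_cloud, rng):
    """Sample n quantum particle positions uniformly within a ball of
    radius r_cloud centred on the swarm attractor pg: direction uniform
    on the sphere, radius proportional to U**(1/d) so that the density
    is uniform over the ball's volume."""
    d = len(pg)
    direction = rng.normal(size=(n, d))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    radius = r_cloud * rng.random(n) ** (1.0 / d)
    return pg + direction * radius[:, None]
```

Unlike the neutral particles, quantum particles have no velocity and never converge: the cloud radius stays fixed at r_cloud, which is what maintains diversity around pg.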
5.2 Exclusion
A many-swarm has M swarms and each swarm, for symmetrical configurations, has N0 neutral and N− charged particles.
The multi-swarm approach is to seek a spatial interaction between swarms.
Exclusion is inspired by the exclusion principle in atomic and molecular physics.
An order-of-magnitude estimate for rexcl can be made by assuming that all p peaks are evenly distributed in X^d.
It is reasonable to assume that swarms that are closer than this distance should experience exclusion, since the overall strategy is to place one swarm on each peak.
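Under the even-distribution assumption, a back-of-envelope estimate might read as follows. This is illustrative, in the spirit of the text rather than a tuned value; X is taken to be the side length of the search space X^d.

```python
def r_excl_estimate(X, p, d):
    """Order-of-magnitude exclusion radius: if p peaks are evenly spread
    through a search space of side X in d dimensions, each peak occupies
    a cell of linear scale X / p**(1/d). Taking half that spacing as
    r_excl means two swarms closer than a typical inter-peak distance
    are presumed to sit on the same peak, and the worse is excluded."""
    return X / (2.0 * p ** (1.0 / d))
```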
5.3 Anti-Convergence
A free swarm is one that is patrolling the search space rather than converging on a peak.
The idea is that if the number of swarms is less than the number of peaks, all swarms may converge, leaving some peaks unwatched.
One of these unwatched peaks may later become optimal.
On the other hand, rconv should certainly be less than rexcl because exclusion occurs before convergence.
Note that there are two levels of diversity.
5.5 Results
An exhaustive series of multi-swarm experiments has been conducted for the many peaks benchmark.
Each experiment actually consists of 50 runs for a given set of multi-swarm parameters.
Other non-standard MPB’s were also tested for comparisons.
Multi-swarm performance is quantified by the offline error which is the average, at any point in time, of the error of the best solution found since the last environment change.
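The offline error measure can be sketched as follows; the evaluation-indexed bookkeeping is this sketch's convention.

```python
def offline_error(errors, change_points):
    """Offline error: at each function evaluation, take the error of the
    best solution found SINCE THE LAST ENVIRONMENT CHANGE, then average
    over the whole run. `errors` holds per-evaluation errors (true
    optimum value minus found value); `change_points` are the evaluation
    indices at which the environment changed."""
    best_since_change, total = float("inf"), 0.0
    changes = set(change_points)
    for t, e in enumerate(errors):
        if t in changes:
            best_since_change = float("inf")   # memory resets at a change
        best_since_change = min(best_since_change, e)
        total += best_since_change
    return total / len(errors)
```

The reset at each change is what distinguishes this measure from a plain best-so-far average: a run that found the old optimum but fails to track the new one is penalized immediately.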
6 Self-adapting multi-swarms
The multi-swarm model of the previous section introduced a number of new parameters.
A general purpose method that might perform reasonably well across a spectrum of problems is certainly attractive.
Here the authors will describe self-adaptations at the level of the multi-swarm.
This recipe gave the best results for the environments studied in the previous section.
(Previously, rexcl, rconv and M were determined with knowledge of p and K.)
6.1 Swarm birth and death
The basic idea is to allow the multi-swarm to regulate its size by bringing new swarms into existence, or by removing redundant swarms.
The aim, as before, is to place a swarm on each peak, and to maintain multi-swarm diversity with (at least one) patrolling swarm.
Alternatively, if there are too many free swarms (i.e. those that fail the convergence criterion), a free swarm should be removed.
A simple choice is to suppose that nexcess = 1, but this may not give sufficient diversity if there are many peaks.
The exclusion radius is replaced by r(t).
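The birth-and-death rule can be sketched as follows. This is illustrative: the swarm representation, the callbacks and the free/converged test are this sketch's assumptions, not the chapter's implementation.

```python
def adapt_swarm_count(swarms, is_free, n_excess, spawn, kill):
    """Self-adaptation sketch: count the free (non-converged) swarms.
    If there are none, spawn a new patrolling swarm so no peak change
    goes unwatched; if there are more than n_excess, kill the worst
    free swarm as redundant. `spawn` and `kill` are caller-supplied;
    the free/converged test (e.g. diameter < r_conv) lives elsewhere."""
    free = [i for i, s in enumerate(swarms) if is_free(s)]
    if not free:
        spawn()
    elif len(free) > n_excess:
        kill(min(free, key=lambda i: swarms[i]["best_f"]))  # worst free swarm
```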
6.2 Results
A number of experiments using the MPB of Section 5.5 with 10 and 200 peaks were conducted to test the efficacy of the self-adapting multi-swarm for various values of nexcess.
The uniform volume distribution described in Section 5.1 was used.
An empirical investigation revealed that rcloud = 0.5s for shift length s yields optimum tracking for these MPBs, and this was the value used here.
Only the rounded errors are significant, but the pre-rounded values have been reported in order to examine algorithm functionality.
6.3 Discussion
Inspection of the numbers of free and converged swarms for a single function instance revealed that the self-adaptation mechanism at nexcess = 1 frequently adds a swarm, only to remove one at the subsequent iteration.
This will cause the generation of another free swarm, with no means of removal.
This will only be a problem when Mconv > p, a situation that might not even happen within the time-scale of the run.
The results for p = 10 and p = 200 indicate that tuning of nexcess can improve performance for runs where the multi-swarm has found all the peaks.
The convergence criterion could take into account both the swarm diameter and the rate of improvement of f(pg).
7 Summary
This chapter has reviewed the application of particle swarms to dynamic optimization.
The canonical PSO algorithm must be modified for good performance in environments such as many peaks.
New work on self-adaptation has also been presented here.
Some progress has been made at the multi-swarm level, where a mechanism for swarm birth and death has been suggested; this scheme eliminates one operator and allows the number of swarms and an exclusion parameter to adjust dynamically.
Self-adaptations at the level of each swarm, in particular allowing particles to be born and to die, and self-regulation of the charged cloud radius remain unexplored.