# Particle Swarm Optimization in Dynamic Environments

## Summary (5 min read)

### 1 Introduction

- Particle Swarm Optimization (PSO) is a versatile population-based optimization technique, in many respects similar to evolutionary algorithms (EAs).
- Moving peaks, arguably representative of real-world problems, consist of a number of peaks changing in width and height and undergoing lateral motion [12, 7].
- There are four principal mechanisms for either re-diversification or diversity maintenance: randomization [23], repulsion [5], dynamic networks [24, 36] and multi-populations [29, 6].
- Multi-swarms combine repulsion with multi-populations [6, 7].
- This chapter starts with a description of the canonical PSO algorithm and then, in Section 3, explains why dynamic environments pose particular problems for unmodified PSO.

### 2 Canonical PSO

- In PSO, population members possess a memory of the best (with respect to an objective function) location that they have visited in the past, pbest, and of its fitness.
- Every particle will share information with every other particle in the swarm so that there is a single gbest global best attractor representing the best location found by the entire swarm.
- Convergence towards a good solution will not follow from these dynamics alone; the particle flight must progressively contract.
- For the purposes of this chapter, the Clerc-Kennedy PSO will be taken as the canonical swarm; χ replaces other energy-draining factors extant in the literature, such as a decreasing ‘inertial weight’ and velocity clamping.
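As a concrete illustration, here is a minimal sketch of the constricted velocity/position update, assuming the standard Clerc-Kennedy values χ ≈ 0.7298 and c1 = c2 = 2.05; the NumPy formulation, array shapes and variable names are illustrative choices, not the chapter's:

```python
import numpy as np

# Standard Clerc-Kennedy constriction coefficients.
CHI = 0.7298
C1 = C2 = 2.05

def pso_step(x, v, pbest, gbest, rng):
    """One constricted velocity/position update for every particle.
    x, v, pbest have shape (num_particles, dim); gbest has shape (dim,)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # chi multiplies the whole velocity update, contracting the flight.
    v = CHI * (v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x))
    return x + v, v

# One step on a small random swarm; pbest and gbest here are placeholders --
# in a full run they are updated after every evaluation of f.
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, size=(20, 2))
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0], rng=rng)
```

Without χ < 1 (or some equivalent energy drain), the particle flight would not contract and convergence toward a good solution would not follow.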

### 3 PSO problems with moving peaks

- As mentioned in Section 1, PSO must be modified for optimal results in dynamic environments typified by the moving peaks benchmark (MPB).
- These modifications must solve the problems of outdated memory, and of lost diversity.
- This section explains the origins of these problems in the context of MPB, and shows how memory loss is easily addressed.
- The following section then considers the second, more severe, problem.

### 3.1 Moving Peaks

- The dynamic objective function of MPB, f(x, t), is optimized at ‘peak’ locations x∗ and has a global optimum at x∗∗ = argmax{f(x∗)} (once more, assuming optimization means maximizing).
- There are p peaks in total, although some peaks may become obscured.
- This scenario, which is not the most general, nevertheless has been put forward as representative of real world dynamic problems [12] and a benchmark function is publicly available for download from [11].
- Note that small changes in f(x∗) can still invoke large changes in x∗∗ due to peak promotion, so the many peaks model is far from trivial.

### 3.2 The problem of outdated memory

- Outdated memory arises at an environment change, when the optima may shift in location and/or value.
- Particle memory (namely the best location visited in the past, and its corresponding fitness) may no longer be true at change, with potentially disastrous effects on the search.
- The problem of outdated memory is typically solved by either assuming that the algorithm knows just when the environment change occurs, or that it can detect change.
- In either case, the algorithm must invoke an appropriate response.
- One possible drawback is that the function has not changed at the chosen pi, but has changed elsewhere.
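A minimal sketch of one such detect-and-respond scheme follows; the sentry-point test, the toy objective and all names are hypothetical illustrations of the idea, not the chapter's implementation:

```python
import numpy as np

def detect_change(f, sentries, remembered_values):
    """Re-evaluate fixed sentry points; any discrepancy signals a change.
    (The drawback noted above applies: a change away from the sentries
    goes unnoticed.)"""
    return any(f(s) != r for s, r in zip(sentries, remembered_values))

def refresh_memories(f, pbest):
    """Response to a detected change: re-evaluate every remembered best
    location so that stale fitness values cannot mislead the search."""
    return np.array([f(p) for p in pbest])

# Toy demonstration: a single peak that jumps from 0 to 1.
peak = np.array([0.0])
f = lambda x: -np.sum((x - peak) ** 2)
sentries = [np.array([0.2])]
remembered = [f(s) for s in sentries]

peak = np.array([1.0])                              # the environment changes...
changed = detect_change(f, sentries, remembered)    # ...and is detected

pbest = np.array([[0.3], [0.7]])
pbest_vals = refresh_memories(f, pbest)             # stale fitnesses replaced
```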

### 3.3 The problem of lost diversity

- The population takes time to re-diversify and re-converge, effectively unable to track a moving optimum.
- It is helpful at this stage to introduce the swarm diameter |S|, defined as the largest distance, along any axis, between any two particles [2], as a measure of swarm diversity (Fig. 1).
- If the optimum shift is significantly far from the swarm, the low velocities of the particles (which are of order |S|) will inhibit re-diversification and tracking, and the swarm can even oscillate about a false attractor and along a line perpendicular to the true optimum, in a phenomenon known as linear collapse [5].
- These considerations can be quantified with the help of a prediction for the rate of diversity loss [2, 3, 10].
- The number of function evaluations between change, K, can be converted into a period measured in iterations, L by considering the total number of function evaluations per iteration including, where necessary, the extra test-for-change evaluations.
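The diversity measure used above can be sketched directly; the per-axis formulation follows the definition of |S| given in the text, while the example values are illustrative:

```python
import numpy as np

def swarm_diameter(x):
    """|S|: the largest distance, along any single axis, between any two
    particles -- a cheap per-iteration measure of swarm diversity."""
    return float(np.max(x.max(axis=0) - x.min(axis=0)))

# Three particles in 2-D: axis 0 spans 3.0, axis 1 spans 4.0, so |S| = 4.0.
positions = np.array([[0.0, 1.0], [3.0, 2.0], [1.0, 5.0]])
diameter = swarm_diameter(positions)
```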

### 4 Diversity lost and diversity regained

- There are two solutions in the literature to the problem of insufficient diversity.
- These modifications are the subject of this section.

### 4.1 Re-diversification

- These all involve randomization of the entire, or part of, the swarm.
- Clearly the problem with this approach is the arbitrariness of the extra parameters.
- Since randomization implies information loss, there is a danger of erasing too much information and effectively re-starting the swarm.
- Such a scheme could infer details about f ’s dynamism during the run, making appropriate adjustments to the re-diversification parameters.
- So far, though, higher level modifications such as these have not been studied.

### 4.2 Maintaining diversity by repulsion

- A constant, and hopefully good enough, degree of swarm diversity can be maintained at all times either through some type of repulsive mechanism, or by adjustments to the information sharing neighborhood.
- Neither technique, however, had previously been applied to the dynamic scenario.
- The model can be depicted as a cloud of charged particles orbiting a contracting, neutral PSO nucleus (Fig. 3). The charged particles can be either classical or quantum particles; both types are discussed in some depth in references [6, 7] and in the following section.
- Charge enhances diversity in the vicinity of the converging PSO sub-swarm, so that optimum shifts within this cloud should be trackable.
- Good tracking (outperforming canonical PSO) has been demonstrated for unimodal dynamic environments of varying severities [3].

### 4.3 Maintaining diversity with dynamic network topology

- Adjustments to the information sharing topology can be made with the intention of reducing, maybe temporarily, the desire to move towards the global best position, thereby enhancing population diversity.
- Li and Dam use a gridlike neighborhood structure, and Jansen and Middendorf test a hierarchical structure, reporting improvements over unmodified PSO for unimodal dynamic environments [24, 36].

### 4.4 Maintaining diversity with multi-populations

- The multi-population idea is particularly helpful in multi-modal environments such as many peaks.
- Multi-population techniques include niching, speciation and multi-swarms.
- In the static context, the niching PSO of Brits et al [16] can successfully optimize some static benchmark problems.
- This technique, as the authors point out, would fail in a dynamic environment because niching depends on a homogeneous distribution of particles in the search space, and on a training phase.
- Other related work includes using different swarms in cooperation to optimize different parts of a solution [35], a two swarm min-max optimization algorithm [33] and iteration by iteration clustering of particles into sub-swarms [26].

### 5 Multi-swarms

- A combined approach might be to incorporate the virtues of the multipopulation approach and of swarm diversity enhancing mechanisms such as repulsion.
- Such an optimizer would be well suited to the many peaks environment.
- The extension of multi-swarms to a dynamic optimizer was made by Blackwell and Branke [6], and is inspired by Branke’s own selforganizing scouts (SOS) [13].
- The scouts have been shown to give excellent results on the many peaks benchmark.
- The motivation for these operators is that a mechanism must be found to prevent two or more swarms from trying to optimize the same peak and also to maintain multi-swarm diversity, that is to say the diversity amongst the population of swarms as a whole (anti-convergence).

### 5.1 Atom Analogy

- All particles are in fact members of the same information network, so that they all (in the star topology) have access to pg.

```
Algorithm 2 Multi-Swarm
// Initialization
FOR EACH particle ni
    Randomly initialize v_ni, x_ni = p_ni
    Evaluate f(p_ni)
FOR EACH swarm n
    p_ng := argmax{f(p_ni)}
REPEAT
    // Anti-Convergence
    IF all swarms have converged THEN
        Re-initialize worst swarm
    // Exclusion
    FOR EACH swarm n
        FOR EACH swarm m ≠ n
            IF swarm attractor p_ng is within r_excl of p_mg THEN
                IF f(p_ng) ≤ f(p_mg) THEN
                    Re-initialize swarm n
                ELSE
                    Re-initialize swarm m
                FOR EACH particle in the re-initialized swarm
                    Re-evaluate function value
UNTIL number of function evaluations performed > max
```
- So far, two uniform distributions have been tested.
- An order-of-magnitude estimate for the parameters rcloud (for quantum swarms) or Q (for classically charged clouds) can be made by supposing that good tracking will occur if the mean charged-particle separation ⟨|x − pg|⟩ is comparable to the shift length s.
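Of the two uniform distributions mentioned above, the one sketched here is uniform sampling over the ball volume (the "uniform volume distribution" referred to later in Section 6.2). The sampling construction — Gaussian directions plus an r^(1/d)-scaled radius — is a standard technique, assumed here rather than taken from the chapter:

```python
import numpy as np

def quantum_positions(pg, r_cloud, n, dim, rng):
    """Sample n quantum-particle positions uniformly within the
    d-dimensional ball of radius r_cloud centred on the attractor pg."""
    # Normalized Gaussian vectors are uniformly distributed on the sphere.
    u = rng.normal(size=(n, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    # Scaling a uniform variate by the 1/dim power makes the density
    # uniform over the ball's volume rather than clustered at the centre.
    r = r_cloud * rng.random(n) ** (1.0 / dim)
    return pg + u * r[:, None]

rng = np.random.default_rng(1)
pts = quantum_positions(np.zeros(3), r_cloud=0.5, n=100, dim=3, rng=rng)
```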

### 5.2 Exclusion

- A many-swarm has M swarms, and each swarm, for symmetrical configurations, has N0 neutral and N− charged particles.
- The multi-swarm approach is to seek a spatial interaction between swarms.
- Exclusion is inspired by the exclusion principle in atomic and molecular physics.
- An order of magnitude estimation for rexcl can be made by assuming that all p peaks are evenly distributed in Xd.
- It is reasonable to assume that swarms that are closer than this distance should experience exclusion, since the overall strategy is to place one swarm on each peak.
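The even-distribution argument above can be sketched numerically: p peaks spread evenly through a space of side X in each of d dimensions sit roughly X·p^(−1/d) apart, and halving that spacing gives a candidate exclusion radius. The specific formula below follows that argument and should be read as an order-of-magnitude guide, not a tuned value; the MPB numbers in the example are the commonly used settings, assumed here for illustration:

```python
def r_excl(extent, p, d):
    """Order-of-magnitude exclusion radius: half the nearest-neighbour
    spacing of p peaks evenly distributed in a search space of side
    `extent` in each of d dimensions."""
    return extent / (2.0 * p ** (1.0 / d))

# e.g. a [0, 100]^5 search space with 10 peaks.
r = r_excl(extent=100.0, p=10, d=5)
```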

### 5.3 Anti-Convergence

- A free swarm is one that is patrolling the search space rather than converging on a peak.
- The idea is that if the number of swarms is less than the number of peaks, all swarms may converge, leaving some peaks unwatched.
- One of these unwatched peaks may later become optimal.
- On the other hand, rconv should certainly be less than rexcl because exclusion occurs before convergence.
- Note that there are two levels of diversity.

### 5.5 Results

- An exhaustive series of multi-swarm experiments has been conducted for the many peaks benchmark.
- Each experiment actually consists of 50 runs for a given set of multi-swarm parameters.
- Other non-standard MPBs were also tested for comparison.
- Multi-swarm performance is quantified by the offline error which is the average, at any point in time, of the error of the best solution found since the last environment change.

### 6 Self-adapting multi-swarms

- The multi-swarm model of the previous section introduced a number of new parameters.
- A general purpose method that might perform reasonably well across a spectrum of problems is certainly attractive.
- Here the authors will describe self-adaptations at the level of the multi-swarm.
- This recipe gave the best results for the environments studied in the previous section.
- (Previously, rexcl, rconv and M were determined with knowledge of p and K.)

### 6.1 Swarm birth and death

- The basic idea is to allow the multi-swarm to regulate its size by bringing new swarms into existence, or by removing redundant swarms.
- The aim, as before, is to place a swarm on each peak, and to maintain multi-swarm diversity with (at least one) patrolling swarm.
- Alternatively, if there are too many free swarms (i.e. those that fail the convergence criterion), a free swarm should be removed.
- A simple choice is to suppose that nexcess = 1, but this may not give sufficient diversity if there are many peaks.
- The exclusion radius is replaced by r(t).
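The birth-and-death rule above can be sketched as a single decision function, with `nexcess` as in the text; the selection of *which* free swarm to remove (the worst) is omitted, and the function name is a hypothetical label:

```python
def regulate(num_free_swarms, nexcess=1):
    """Self-adaptation of the multi-swarm size: spawn a patrolling swarm
    when none are free, cull one when more than nexcess are patrolling."""
    if num_free_swarms == 0:
        return 'add'      # every swarm has converged: peaks may go unwatched
    if num_free_swarms > nexcess:
        return 'remove'   # surplus patrolling swarms waste evaluations
    return 'keep'
```

As the text notes, nexcess = 1 is the simplest choice but may give insufficient diversity when there are many peaks.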

### 6.2 Results

- A number of experiments using the MPB of Section 5.5 with 10 and 200 peaks were conducted to test the efficacy of the self-adapting multi-swarm for various values of nexcess.
- The uniform volume distribution described in Section 5.1 was used.
- An empirical investigation revealed that rcloud = 0.5s for shift length s yields optimum tracking for these MPBs, and this was the value used here.
- Only the rounded errors are significant, but the pre-rounded values have been reported in order to examine algorithm functionality.

### 6.3 Discussion

- Inspection of the numbers of free and converged swarms for a single function instance revealed that the self-adaptation mechanism at nexcess = 1 frequently adds a swarm, only to remove one at the subsequent iteration.
- This will cause the generation of another free swarm, with no means of removal.
- This will only be a problem when Mconv > p, a situation that might not even happen within the time-scale of the run.
- The results for p = 10 and p = 200 indicate that tuning of nexcess can improve performance for runs where the multi-swarm has found all the peaks.
- The convergence criterion could take into account both the swarm diameter and the rate of improvement of f(pg).

### 7 Summary

- This chapter has reviewed the application of particle swarms to dynamic optimization.
- The canonical PSO algorithm must be modified for good performance in environments such as many peaks.
- New work on self-adaptation has also been presented here.
- Some progress has been made at the multi-swarm level, where a mechanism for swarm birth and death has been suggested; this scheme eliminates one operator and allows the number of swarms and an exclusion parameter to adjust dynamically.
- Self-adaptations at the level of each swarm, in particular allowing particles to be born and to die, and self-regulation of the charged cloud radius remain unexplored.
