Book Chapter

Particle Swarm Optimization in Dynamic Environments

01 Jan 2007-pp 29-49
About: The article was published on 2007-01-01 and is currently open access. It has received 192 citations to date. The article focuses on the topics: Multi-swarm optimization & Metaheuristic.

Summary (5 min read)

1 Introduction

  • Particle Swarm Optimization (PSO) is a versatile population-based optimization technique, in many respects similar to evolutionary algorithms (EAs).
  • (Moving peaks, arguably representative of real-world problems, consist of a number of peaks changing in width and height and in lateral motion [12, 7].)
  • There are four principal mechanisms for either re-diversification or diversity maintenance: randomization [23], repulsion [5], dynamic networks [24, 36] and multi-populations [29, 6].
  • Multi-swarms combine repulsion with multi-populations [6, 7].
  • This chapter starts with a description of the canonical PSO algorithm and then, in Section 3, explains why dynamic environments pose particular problems for unmodified PSO.

2 Canonical PSO

  • In PSO, population members possess a memory of the best (with respect to an objective function) location that they have visited in the past, pbest, and of its fitness.
  • Every particle will share information with every other particle in the swarm so that there is a single gbest global best attractor representing the best location found by the entire swarm.
  • Convergence towards a good solution will not follow from these dynamics alone; the particle flight must progressively contract.
  • For the purposes of this chapter, the Clerc-Kennedy PSO will be taken as the canonical swarm; χ replaces other energy-draining factors extant in the literature such as a decreasing ‘inertial weight’ and velocity clamping.

3 PSO problems with moving peaks

  • As has been mentioned in Sect. 1, PSO must be modified for optimal results on dynamic environments typified by the moving peaks benchmark (MPB).
  • These modifications must solve the problems of outdated memory, and of lost diversity.
  • This section explains the origins of these problems in the context of MPB, and shows how outdated memory is easily addressed.
  • The following section then considers the second, more severe, problem.

3.1 Moving Peaks

  • The dynamic objective function of MPB, f(x, t), is optimized at ‘peak’ locations x∗ and has a global optimum at x∗∗ = argmax{f(x∗)} (once more, assuming optimization means maximizing).
  • There are p peaks in total, although some peaks may become obscured.
  • This scenario, which is not the most general, nevertheless has been put forward as representative of real world dynamic problems [12] and a benchmark function is publicly available for download from [11].
  • Note that small changes in f(x∗) can still invoke large changes in x∗∗ due to peak promotion, so the many peaks model is far from trivial.

3.2 The problem of outdated memory

  • Outdated memory happens at environment change when the optima may shift in location and/or value.
  • Particle memory (namely the best location visited in the past, and its corresponding fitness) may no longer be true at change, with potentially disastrous effects on the search.
  • The problem of outdated memory is typically solved by either assuming that the algorithm knows just when the environment change occurs, or that it can detect change.
  • In either case, the algorithm must invoke an appropriate response.
  • One possible drawback is that the function has not changed at the chosen pi, but has changed elsewhere.

3.3 The problem of lost diversity

  • The population takes time to re-diversify and re-converge, effectively unable to track a moving optimum.
  • It is helpful at this stage to introduce the swarm diameter |S|, defined as the largest distance, along any axis, between any two particles [2], as a measure of swarm diversity (Fig. 1).
  • If the optimum shift is significantly far from the swarm, the low velocities of the particles (which are of order |S|) will inhibit re-diversification and tracking, and the swarm can even oscillate about a false attractor and along a line perpendicular to the true optimum, in a phenomenon known as linear collapse [5].
  • These considerations can be quantified with the help of a prediction for the rate of diversity loss [2, 3, 10].
  • The number of function evaluations between changes, K, can be converted into a period measured in iterations, L, by considering the total number of function evaluations per iteration including, where necessary, the extra test-for-change evaluations.
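The cited prediction for diversity loss decays geometrically, |S(t)| = Cα^t with α ≈ 0.92 [2, 3, 10], and together with the evaluation budget this yields back-of-envelope estimates. A rough sketch under those assumed constants (the function names are illustrative, not from the chapter):

```python
import math

def iterations_until_collapse(C, s, alpha=0.92):
    """Iterations until the predicted swarm diameter C * alpha**t falls below
    a shift of size s, i.e. until the swarm is too contracted to cover it."""
    # |S(t)| = C * alpha**t < s  =>  t > log(s / C) / log(alpha)
    return math.ceil(math.log(s / C) / math.log(alpha))

def change_period_iterations(K, n_particles, test_evals=1):
    """Convert K function evaluations between changes into a period L in
    iterations, counting one extra test-for-change evaluation per iteration."""
    return K // (n_particles + test_evals)
```

For example, a swarm of initial diameter 100 facing unit shifts is predicted to lose useful diversity after a few dozen iterations, which can then be compared against L.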

4 Diversity lost and diversity regained

  • There are two solutions in the literature to the problem of insufficient diversity.
  • These modifications are the subject of this section.

4.1 Re-diversification

  • These all involve randomization of the entire, or part of, the swarm.
  • Clearly the problem with this approach is the arbitrariness of the extra parameters.
  • Since randomization implies information loss, there is a danger of erasing too much information and effectively re-starting the swarm.
  • Such a scheme could infer details about f ’s dynamism during the run, making appropriate adjustments to the re-diversification parameters.
  • So far, though, higher level modifications such as these have not been studied.

4.2 Maintaining diversity by repulsion

  • A constant, and hopefully good enough, degree of swarm diversity can be maintained at all times either through some type of repulsive mechanism, or by adjustments to the information sharing neighborhood.
  • Neither technique, however, has been applied to the dynamic scenario.
  • The model can be depicted as a cloud of charged particles orbiting a contracting, neutral, PSO nucleus (Fig. 3). The charged particles can be either classical or quantum particles; both types are discussed in some depth in references [6, 7] and in the following section.
  • Charge enhances diversity in the vicinity of the converging PSO sub-swarm, so that optimum shifts within this cloud should be trackable.
  • Good tracking (outperforming canonical PSO) has been demonstrated for unimodal dynamic environments of varying severities [3].

4.3 Maintaining diversity with dynamic network topology

  • Adjustments to the information sharing topology can be made with the intention of reducing, maybe temporarily, the desire to move towards the global best position, thereby enhancing population diversity.
  • Li and Dam use a gridlike neighborhood structure, and Jansen and Middendorf test a hierarchical structure, reporting improvements over unmodified PSO for unimodal dynamic environments [24, 36].

4.4 Maintaining diversity with multi-populations

  • The multi-population idea is particularly helpful in multi-modal environments such as many peaks.
  • Multi-population techniques include niching, speciation and multi-swarms.
  • In the static context, the niching PSO of Brits et al [16] can successfully optimize some static benchmark problems.
  • This technique, as the authors point out, would fail in a dynamic environment because niching depends on a homogeneous distribution of particles in the search space, and on a training phase.
  • Other related work includes using different swarms in cooperation to optimize different parts of a solution [35], a two swarm min-max optimization algorithm [33] and iteration by iteration clustering of particles into sub-swarms [26].

5 Multi-swarms

  • A combined approach might be to incorporate the virtues of the multipopulation approach and of swarm diversity enhancing mechanisms such as repulsion.
  • Such an optimizer would be well suited to the many peaks environment.
  • The extension of multi-swarms to a dynamic optimizer was made by Blackwell and Branke [6], and is inspired by Branke’s own self-organizing scouts (SOS) [13].
  • The scouts have been shown to give excellent results on the many peaks benchmark.
  • The motivation for these operators is that a mechanism must be found to prevent two or more swarms from trying to optimize the same peak and also to maintain multi-swarm diversity, that is to say the diversity amongst the population of swarms as a whole (anti-convergence).

5.1 Atom Analogy

  • All particles are in fact members of the same information network, so that they all (in the star topology) have access to pg.

Algorithm 2 Multi-Swarm
// Initialization
FOR EACH particle ni
    Randomly initialize vni, xni = pni
    Evaluate f(pni)
FOR EACH swarm n
    png := argmax{f(pni)}
REPEAT
    // Anti-Convergence
    IF all swarms have converged THEN
        Re-initialize worst swarm
    // Exclusion
    FOR EACH swarm m ≠ n
        IF swarm attractor png is within rexcl of pmg THEN
            IF f(png) ≤ f(pmg) THEN
                Re-initialize swarm n
            ELSE
                Re-initialize swarm m
    FOR EACH particle in re-initialized swarm
        Re-evaluate function value
UNTIL number of function evaluations performed > max
  • So far, two uniform distributions have been tested.
  • An order of magnitude estimation for the parameters rcloud (for quantum swarms), or Q, for classically charged clouds, can be made by supposing that good tracking will occur if the mean charged particle separation ⟨|x − pg|⟩ is comparable to s.
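One natural choice of uniform quantum cloud is to sample positions uniformly by volume from a ball of radius rcloud about the attractor; the sketch below assumes that particular distribution (the chapter mentions two uniform distributions without specifying them here, so this is an illustration, not the chapter's exact sampler):

```python
import numpy as np

rng = np.random.default_rng(1)

def quantum_position(p_g, r_cloud):
    """Sample a 'quantum' particle uniformly (by volume) from the ball of
    radius r_cloud centred on the swarm attractor p_g."""
    d = len(p_g)
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                     # uniform random direction
    r = r_cloud * rng.random() ** (1.0 / d)    # radius giving uniform volume density
    return p_g + r * u
```

With this sampler the mean separation ⟨|x − pg|⟩ is a fixed fraction of rcloud (3/4 of it in three dimensions), so tuning rcloud against the shift length s is straightforward.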

5.2 Exclusion

  • A many-swarm has M swarms and each swarm, for symmetrical configurations, has N0 neutral and N− charged particles.
  • The multi-swarm approach is to seek a spatial interaction between swarms.
  • Exclusion is inspired by the exclusion principle in atomic and molecular physics.
  • An order of magnitude estimation for rexcl can be made by assuming that all p peaks are evenly distributed in Xd.
  • It is reasonable to assume that swarms that are closer than this distance should experience exclusion, since the overall strategy is to place one swarm on each peak.
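Following the even-distribution argument above, the characteristic spacing of p peaks in [0, X]^d is X / p^(1/d); taking half of that as the exclusion radius is one plausible order-of-magnitude choice (the factor of one half is an assumption of this sketch, not a value quoted in the text above):

```python
def r_excl_estimate(X, p, d):
    """Order-of-magnitude exclusion radius: half the characteristic spacing
    X / p**(1/d) of p peaks spread evenly through the box [0, X]**d."""
    return X / (2.0 * p ** (1.0 / d))
```

As expected, the estimate shrinks as the number of peaks grows, so densely packed peaks demand a tighter exclusion test.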

5.3 Anti-Convergence

  • A free swarm is one that is patrolling the search space rather than converging on a peak.
  • The idea is that if the number of swarms is less than the number of peaks, all swarms may converge, leaving some peaks unwatched.
  • One of these unwatched peaks may later become optimal.
  • On the other hand, rconv should certainly be less than rexcl because exclusion occurs before convergence.
  • Note that there are two levels of diversity.

5.5 Results

  • An exhaustive series of multi-swarm experiments has been conducted for the many peaks benchmark.
  • Each experiment actually consists of 50 runs for a given set of multi-swarm parameters.
  • Other non-standard MPBs were also tested for comparison.
  • Multi-swarm performance is quantified by the offline error which is the average, at any point in time, of the error of the best solution found since the last environment change.
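The offline error defined above can be computed with simple bookkeeping; a small sketch (the list-and-set interface is illustrative, not from the chapter):

```python
def offline_error(errors, change_steps):
    """Offline error: at every step, take the best (smallest) error achieved
    since the last environment change, then average over the whole run.

    errors[t]   -- error of the solution evaluated at step t
    change_steps -- set of steps at which the environment changed"""
    total, best = 0.0, float("inf")
    for t, e in enumerate(errors):
        if t in change_steps:
            best = float("inf")   # bests from the old environment no longer count
        best = min(best, e)
        total += best
    return total / len(errors)
```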

6 Self-adapting multi-swarms

  • The multi-swarm model of the previous section introduced a number of new parameters.
  • A general purpose method that might perform reasonably well across a spectrum of problems is certainly attractive.
  • Here the authors will describe self-adaptations at the level of the multi-swarm.
  • This recipe gave the best results for the environments studied in the previous section.
  • (Previously, rexcl, rconv and M were determined with knowledge of p and K.)

6.1 Swarm birth and death

  • The basic idea is to allow the multi-swarm to regulate its size by bringing new swarms into existence, or by removing redundant swarms.
  • The aim, as before, is to place a swarm on each peak, and to maintain multi-swarm diversity with (at least one) patrolling swarm.
  • Alternatively, if there are too many free swarms (i.e. those that fail the convergence criterion), a free swarm should be removed.
  • A simple choice is to suppose that nexcess = 1, but this may not give sufficient diversity if there are many peaks.
  • The exclusion radius is replaced by r(t).
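The birth-and-death regulation described above can be sketched as follows; the callback-style interface and the choice of which free swarm to remove are simplifications of this sketch, not the chapter's specification:

```python
def regulate(swarms, is_converged, n_excess, spawn, remove):
    """One self-adaptation step for the multi-swarm population:
    - if every swarm has converged, spawn a new (free, patrolling) swarm;
    - if more than n_excess swarms are free, remove one of them."""
    free = [s for s in swarms if not is_converged(s)]
    if not free:
        spawn()
    elif len(free) > n_excess:
        remove(free[0])
```

With nexcess = 1 this reproduces the behaviour discussed in Section 6.3: a swarm is added as soon as all have converged, then removed again once it counts as an excess free swarm.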

6.2 Results

  • A number of experiments using the MPB of Section 5.5 with 10 and 200 peaks were conducted to test the efficacy of the self-adapting multi-swarm for various values of nexcess.
  • The uniform volume distribution described in Section 5.1 was used.
  • An empirical investigation revealed that rcloud = 0.5s for shift length s yields optimum tracking for these MPBs, and this was the value used here.
  • Only the rounded errors are significant, but the pre-rounded values have been reported in order to examine algorithm functionality.

6.3 Discussion

  • Inspection of the numbers of free and converged swarms for a single function instance revealed that the self-adaptation mechanism at nexcess = 1 frequently adds a swarm, only to remove one at the subsequent iteration.
  • This will cause the generation of another free swarm, with no means of removal.
  • This will only be a problem when Mconv > p, a situation that might not even happen within the time-scale of the run.
  • The results for p = 10 and p = 200 indicate that tuning of nexcess can improve performance for runs where the multi-swarm has found all the peaks.
  • The convergence criterion could take into account both the swarm diameter and the rate of improvement of f(pg).

7 Summary

  • This chapter has reviewed the application of particle swarms to dynamic optimization.
  • The canonical PSO algorithm must be modified for good performance in environments such as many peaks.
  • New work on self-adaptation has also been presented here.
  • Some progress has been made at the multi-swarm level, where a mechanism for swarm birth and death has been suggested; this scheme eliminates one operator and allows the number of swarms and an exclusion parameter to adjust dynamically.
  • Self-adaptations at the level of each swarm, in particular allowing particles to be born and to die, and self-regulation of the charged cloud radius remain unexplored.


Particle Swarm Optimization in Dynamic
Environments
Tim Blackwell
Department of Computing, Goldsmiths College London SE14 6NW, UK
t.blackwell@gold.ac.uk
1 Introduction
Particle Swarm Optimization (PSO) is a versatile population-based optimization technique, in many respects similar to evolutionary algorithms (EAs). PSO has been shown to perform well for many static problems [30]. However, many real-world problems are dynamic in the sense that the global optimum location and value may change with time. The task for the optimization algorithm is to track this shifting optimum. It has been argued [14] that EAs are potentially well-suited to such tasks, and a review of EA variants tested in the dynamic problem is given in [13, 15]. It might be wondered, therefore, what promise PSO holds for dynamic problems.
Optimization with particle swarms has two major ingredients, the particle dynamics and the particle information network. The particle dynamics are derived from swarm simulations in computer graphics [21], and the information sharing component is inspired by social networks [32, 25]. These ingredients combine to make PSO a robust and efficient optimizer of real-valued objective functions (although PSO has also been successfully applied to combinatorial and discrete problems too). PSO is an accepted computational intelligence technique, sharing some qualities with Evolutionary Computation [1].

The application of PSO to dynamic problems has been explored by various authors [30, 23, 17, 9, 6, 24]. The overall consequence of this work is that PSO, just like EAs, must be modified for optimal results on dynamic environments typified by the moving peaks benchmark (MPB). (Moving peaks, arguably representative of real-world problems, consist of a number of peaks changing in width and height and in lateral motion [12, 7].) The origin of the difficulty lies in the dual problems of outdated memory due to environment dynamism, and diversity loss, due to convergence.

Of these two problems, diversity loss is by far the more serious; it has been demonstrated that the time taken for a partially converged swarm to re-diversify, find the shifted peak, and then re-converge is quite deleterious to performance [3]. Clearly, either a re-diversification mechanism must be employed at (or before) function change, and/or a measure of diversity can be maintained throughout the run. There are four principal mechanisms for either re-diversification or diversity maintenance: randomization [23], repulsion [5], dynamic networks [24, 36] and multi-populations [29, 6].

Multi-swarms combine repulsion with multi-populations [6, 7]. Interestingly, the repulsion occurs between particles, and between swarms. The multi-population in this case is an interacting super-swarm of charged swarms. A charged swarm is inspired by models of the atom: a conventional PSO nucleus is surrounded by a cloud of ‘charged’ particles. The charged particles are responsible for maintaining the diversity of the swarm. Furthermore, in analogy to the exclusion principle in atomic physics, each swarm is subject to an exclusion pressure that operates when the swarms collide. This prohibits two or more swarms from surrounding a single peak, thereby enabling swarms to watch secondary peaks in the eventuality that these peaks might become optimal. This strategy has proven to be very effective for MPB environments.

This chapter starts with a description of the canonical PSO algorithm and then, in Section 3, explains why dynamic environments pose particular problems for unmodified PSO. The MPB framework is also introduced in this section. The following section describes some PSO variants that have been proposed to deal with diversity loss. Section 5 outlines the multi-swarm approach and the subsequent section presents new results for a self-adapting multi-swarm, a multi-population with swarm birth and death.
2 Canonical PSO
In PSO, population members (particles) possess a memory of the best (with respect to an objective function) location that they have visited in the past, pbest, and of its fitness. In addition, particles have access to the best location of any other particle in their own network. These two locations (which will coincide for the best particle in any network) become attractors in the search space of the swarm. Each particle will be repeatedly drawn back to spatial neighborhoods close to these two attractors, which themselves will be updated if the global best and/or particle best is bettered at each particle update. Several network topologies have been tried, with the star or fully connected network remaining a popular choice for unimodal functions. In this network, every particle will share information with every other particle in the swarm so that there is a single gbest global best attractor representing the best location found by the entire swarm.

Particles possess a velocity which influences position updates according to a simple discretization of particle motion

v(t + 1) = v(t) + a(t + 1)    (1)
x(t + 1) = x(t) + v(t + 1)    (2)

where a, v, x and t are acceleration, velocity, position and time (iteration counter) respectively. Eqs. 1, 2 are similar to particle dynamics in swarm simulations, but PSO particles do not follow a smooth trajectory, instead moving in jumps, in a motion known as a flight [28] (notice that the time increment dt is missing from these rules). The particles experience a linear or spring-like attraction, weighted by a random number (particle mass is set to unity), towards each attractor. Convergence towards a good solution will not follow from these dynamics alone; the particle flight must progressively contract. This contraction is implemented by Clerc and Kennedy with a constriction factor χ, χ < 1 [20]. For our purposes here, the Clerc-Kennedy PSO will be taken as the canonical swarm; χ replaces other energy-draining factors extant in the literature such as a decreasing ‘inertial weight’ and velocity clamping. Moreover the constricted swarm is replete with a convergence proof, albeit about a static attractor (although there is some experimental and theoretical support for convergence in the fully interacting swarm where particles can move attractors [10]).

Explicitly, the acceleration of particle i in Eq. 1 is given by

ai = χ[cε·(pg − xi) + cε·(pi − xi)] − (1 − χ)vi    (3)

where ε are vectors of random numbers drawn from the uniform distribution U[0, 1], c > 2 is the spring constant and pi, pg are particle and global attractors. This formulation of the particle dynamics has been chosen to demonstrate explicitly constriction as a frictional force, opposite in direction, and proportional to, velocity. Clerc and Kennedy derive a relation for χ(c): standard values are c = 2.05 and χ = 0.729843788. The complete PSO algorithm for maximizing an objective function f is summarized as Algorithm 1.
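The constricted dynamics of Eqs. 1-3 can be sketched in a few lines of Python. This is a minimal sketch, not the chapter's reference code: only c and χ come from the text, while the sphere objective, swarm size and iteration budget are arbitrary choices of the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def constricted_step(x, v, p_i, p_g, c=2.05, chi=0.729843788):
    """One particle update: the acceleration of Eq. 3, then Eqs. 1 and 2.
    Note v + a simplifies to chi * (v + c*eps*(pg-x) + c*eps*(pi-x))."""
    eps_g, eps_i = rng.random(x.shape), rng.random(x.shape)
    a = chi * (c * eps_g * (p_g - x) + c * eps_i * (p_i - x)) - (1.0 - chi) * v
    v = v + a   # Eq. 1
    x = x + v   # Eq. 2
    return x, v

# usage: maximize the (static) sphere function f(x) = -|x|^2 with 10 particles
f = lambda x: -float(np.sum(x ** 2))
X = rng.uniform(-5.0, 5.0, (10, 2))   # positions
V = np.zeros_like(X)                  # velocities
P = X.copy()                          # personal bests
fP = np.array([f(p) for p in P])
for _ in range(200):
    g = int(np.argmax(fP))            # star topology: one global best attractor
    for i in range(len(X)):
        X[i], V[i] = constricted_step(X[i], V[i], P[i], P[g])
        fx = f(X[i])
        if fx > fP[i]:                # update personal best (and hence gbest pool)
            P[i], fP[i] = X[i].copy(), fx
```

Because the frictional term −(1 − χ)v drains energy each step, the swarm contracts onto the attractors, which is exactly the behaviour that becomes a liability once the optimum starts to move.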
3 PSO problems with moving peaks
As has been mentioned in Sect. 1, PSO must be modified for optimal results on dynamic environments typified by the moving peaks benchmark (MPB). These modifications must solve the problems of outdated memory, and of lost diversity. This section explains the origins of these problems in the context of MPB, and shows how outdated memory is easily addressed. The following section then considers the second, more severe, problem.
3.1 Moving Peaks
The dynamic objective function of MPB, f(x, t), is optimized at ‘peak’ locations x∗ and has a global optimum at x∗∗ = argmax{f(x∗)} (once more, assuming optimization means maximizing). Dynamism entails a small movement of magnitude s, and in a random direction, of each x∗. This happens every K evaluations and is accompanied by small changes of peak height and width. There are p peaks in total, although some peaks may become obscured. The peaks are constrained to move in a search space of extent X in each of the d dimensions, [0, X]^d.

Algorithm 1 Canonical PSO
FOR EACH particle i
    Randomly initialize vi, xi = pi
    Evaluate f(pi)
g = argmax f(pi)
REPEAT
    FOR EACH particle i
        Update particle position xi according to Eqs. 1, 2 and 3
        Evaluate f(xi)
        // Update personal best
        IF f(xi) > f(pi) THEN
            pi = xi
        // Update global best
        IF f(xi) > f(pg) THEN
            pg = argmax f(pi)
UNTIL termination criterion reached

This scenario, which is not the most general, nevertheless has been put forward as representative of real world dynamic problems [12] and a benchmark function is publicly available for download from [11]. Note that small changes in f(x∗) can still invoke large changes in x∗∗ due to peak promotion, so the many peaks model is far from trivial.
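A miniature stand-in for such a landscape makes the setup concrete. The toy class below is hypothetical and much simpler than the downloadable benchmark of [11]: it keeps only cone-shaped peaks, a fixed shift length s, and random height/width initialization.

```python
import numpy as np

rng = np.random.default_rng(2)

class MiniMovingPeaks:
    """A toy moving-peaks landscape: p cone peaks in [0, X]^d whose locations
    shift by a random vector of length s at each change (a simplified
    stand-in for the MPB generator of [11, 12], not the benchmark itself)."""

    def __init__(self, p=5, d=2, X=100.0, s=1.0):
        self.X, self.s = X, s
        self.pos = rng.uniform(0.0, X, (p, d))      # peak locations x*
        self.height = rng.uniform(30.0, 70.0, p)
        self.width = rng.uniform(1.0, 12.0, p)

    def __call__(self, x):
        # each peak is a cone: height - width * distance; f is their upper envelope
        dists = np.linalg.norm(self.pos - x, axis=1)
        return float(np.max(self.height - self.width * dists))

    def change(self):
        """Shift every peak by a length-s vector in a random direction."""
        v = rng.normal(size=self.pos.shape)
        v *= self.s / np.linalg.norm(v, axis=1, keepdims=True)
        self.pos = np.clip(self.pos + v, 0.0, self.X)
```

Even in this toy, peak promotion is visible: a small height change of a secondary cone can relocate the argmax to the other side of the box.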
3.2 The problem of outdated memory
Outdated memory happens at environment change when the optima may shift in location and/or value. Particle memory (namely the best location visited in the past, and its corresponding fitness) may no longer be true at change, with potentially disastrous effects on the search.

The problem of outdated memory is typically solved by either assuming that the algorithm knows just when the environment change occurs, or that it can detect change. In either case, the algorithm must invoke an appropriate response. One method of detecting change is a re-evaluation of f at one or more of the personal bests pi [17, 23]. A simple and effective response is to re-set all particle memories to the current particle position and f value at this position, ensuring that pg = argmax f(pi). One possible drawback is that the function has not changed at the chosen pi, but has changed elsewhere. This can be remedied by re-evaluating f at all personal bests, at the expense of doubling the total number of function evaluations per iteration.
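The detect-and-respond scheme can be sketched as follows. This is an illustration only: the function names and the use of a single sentinel memory are choices of this sketch, and the arrays mirror the toy swarm representation used earlier rather than any code from the chapter.

```python
import numpy as np

def detect_change(f, p, f_p_stored):
    """Re-evaluate f at a stored personal best p; a mismatch with the
    stored fitness signals that the environment has changed."""
    return not np.isclose(f(p), f_p_stored)

def reset_memories(f, X, P, fP):
    """Response to a detected change: re-anchor every particle memory at the
    particle's current position, re-evaluate, and return the new gbest index."""
    P[:] = X
    fP[:] = [f(x) for x in X]
    return int(np.argmax(fP))
```

Checking a single sentinel is cheap but can miss changes that leave that one point untouched; re-evaluating every pi closes the gap at the cost of the extra evaluations noted above.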

3.3 The problem of lost diversity
Equally troubling as outdated memory is insufficient diversity at change. The population takes time to re-diversify and re-converge, effectively unable to track a moving optimum.

It is helpful at this stage to introduce the swarm diameter |S|, defined as the largest distance, along any axis, between any two particles [2], as a measure of swarm diversity (Fig. 1). Loss of diversity arises when a swarm is converging on a peak. There are two possibilities: when change occurs, the new optimum location may either be within or outside the collapsing swarm. In the former case, there is a good chance that a particle will find itself close to the new optimum within a few iterations and the swarm will successfully track the moving target. The swarm as a whole has sufficient diversity. However, if the optimum shift is significantly far from the swarm, the low velocities of the particles (which are of order |S|) will inhibit re-diversification and tracking, and the swarm can even oscillate about a false attractor and along a line perpendicular to the true optimum, in a phenomenon known as linear collapse [5]. This effect is illustrated in Fig. 2.

Fig. 1. The swarm diameter

These considerations can be quantified with the help of a prediction for the rate of diversity loss [2, 3, 10]. In general, the swarm shrinks at a rate determined by the constriction factor and by the local environment at the optimum. For static functions with spherically symmetric basins of attraction, the theoretical and empirical analysis of the above references suggest that the rate of shrinkage (and hence diversity loss) is scale invariant and is given by a scaling law

|S(t)| = Cα^t    (4)

for constants C and α < 1, where α ≈ 0.92 and C is the swarm diameter at iteration t = 0. The number of function evaluations between changes, K, can be converted into a period measured in iterations, L, by considering the total number of function evaluations per iteration including, where necessary, the extra test-for-change evaluations.

Citations
More filters
Journal ArticleDOI
TL;DR: The components and concepts that are used in various metaheuristics are outlined in order to analyze their similarities and differences and the classification adopted in this paper differentiates between single solution based metaheURistics and population based meta heuristics.
Abstract: Metaheuristics are widely recognized as efficient approaches for many hard optimization problems. This paper provides a survey of some of the main metaheuristics. It outlines the components and concepts that are used in various metaheuristics in order to analyze their similarities and differences. The classification adopted in this paper differentiates between single solution based metaheuristics and population based metaheuristics. The literature survey is accompanied by the presentation of references for further details, including applications. Recent trends are also briefly discussed.

1,343 citations


Cites methods from "Particle Swarm Optimization in Dyna..."

  • ...Considerable research has been also conducted into further refinement of the original formulat ion of PSO in both continuous and discrete problem spaces [146], and areas such as dynamic environments [29], parallel implementati on [16] and MultiObject ive Optimiza tion [223]....

    [...]

Journal ArticleDOI
TL;DR: Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995 as discussed by the authors, and the authors have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm.
Abstract: Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. As researchers have learned about the technique, they have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper comprises a snapshot of particle swarming from the authors' perspective, including variations in the algorithm, current and ongoing research, applications and open problems.

720 citations

Journal ArticleDOI
TL;DR: An in-depth survey of the state-of-the-art of academic research in the field of EDO and other meta-heuristics in four areas: benchmark problems/generators, performance measures, algorithmic approaches, and theoretical studies is carried out.
Abstract: Optimization in dynamic environments is a challenging but important task since many real-world optimization problems are changing over time. Evolutionary computation and swarm intelligence are good tools to address optimization problems in dynamic environments due to their inspiration from natural self-organized systems and biological evolution, which have always been subject to changing environments. Evolutionary optimization in dynamic environments, or evolutionary dynamic optimization (EDO), has attracted a lot of research effort during the last 20 years, and has become one of the most active research areas in the field of evolutionary computation. In this paper we carry out an in-depth survey of the state-of-the-art of academic research in the field of EDO and other meta-heuristics in four areas: benchmark problems/generators, performance measures, algorithmic approaches, and theoretical studies. The purpose is to for the first time (i) provide detailed explanations of how current approaches work; (ii) review the strengths and weaknesses of each approach; (iii) discuss the current assumptions and coverage of existing EDO research; and (iv) identify current gaps, challenges and opportunities in EDO.

566 citations

Journal ArticleDOI
TL;DR: A broad review on SI dynamic optimization (SIDO) focused on several classes of problems, such as discrete, continuous, constrained, multi-objective and classification problems, and real-world applications, and some considerations about future directions in the subject are given.
Abstract: Swarm intelligence (SI) algorithms, including ant colony optimization, particle swarm optimization, bee-inspired algorithms, bacterial foraging optimization, firefly algorithms, fish swarm optimization and many more, have been proven to be good methods to address difficult optimization problems under stationary environments. Most SI algorithms have been developed to address stationary optimization problems and hence, they can converge on the (near-) optimum solution efficiently. However, many real-world problems have a dynamic environment that changes over time. For such dynamic optimization problems (DOPs), it is difficult for a conventional SI algorithm to track the changing optimum once the algorithm has converged on a solution. In the last two decades, there has been a growing interest of addressing DOPs using SI algorithms due to their adaptation capabilities. This paper presents a broad review on SI dynamic optimization (SIDO) focused on several classes of problems, such as discrete, continuous, constrained, multi-objective and classification problems, and real-world applications. In addition, this paper focuses on the enhancement strategies integrated in SI algorithms to address dynamic changes, the performance measurements and benchmark generators used in SIDO. Finally, some considerations about future directions in the subject are given.

421 citations


Cites methods from "Particle Swarm Optimization in Dyna..."

  • ...Blackwell [163] proposed the self-adaptive multi-swarm optimizer (SAMO) which is the first adaptive method regarding the number of populations....

    [...]

  • ...[140, 141, 142, 143, 144, 145, 146, 147, 148, 98, 149, 99, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 97, 164, 102, 165, 166, 101, 167, 168] [169, 170, 171, 172] [173, 174, 175] [176, 177] [178, 179, 180, 181, 182, 183]...

    [...]

  • ...To efficiently solve DOPs by the multi-swarm scheme, one key issue is to adapt the number of swarms [200, 163, 164]....

    [...]

Journal ArticleDOI
TL;DR: An adaptive LS starting strategy is proposed by utilizing the proposed quasi-entropy index to address its key issue, i.e., when to start LS.
Abstract: A comprehensive learning particle swarm optimizer (CLPSO) embedded with local search (LS) is proposed to pursue higher optimization performance by combining CLPSO's strong global search capability with LS's fast convergence. This paper proposes an adaptive LS starting strategy that uses our proposed quasi-entropy index to address the key issue of when to start LS. The changes of the index as the optimization proceeds are analyzed in theory and via numerical tests. The proposed algorithm is tested on multimodal benchmark functions, and parameter sensitivity analysis is performed to demonstrate its robustness. The comparison results reveal overall higher convergence rate and accuracy than those of CLPSO and state-of-the-art particle swarm optimization variants.
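The quasi-entropy index itself is defined in the paper; as a hedged sketch of the general idea (start local search once population diversity, here measured by the Shannon entropy of a histogram of particle positions, drops below a threshold), one might write the following. The histogram entropy and the threshold value are illustrative assumptions, not the paper's actual index:

```python
import math
from collections import Counter

def position_entropy(positions, bins=10, lo=-10.0, hi=10.0):
    """Shannon entropy of a 1-D histogram of particle positions:
    high while particles are spread out, approaching zero as they cluster."""
    width = (hi - lo) / bins
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in positions)
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def should_start_local_search(positions, threshold=0.5, **kwargs):
    """Trigger LS once diversity (entropy) falls below the threshold."""
    return position_entropy(positions, **kwargs) < threshold
```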

288 citations

References
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.
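The particle swarm method summarized above is compactly expressed by its velocity and position updates. A minimal sketch in the constriction-factor (Clerc-Kennedy) form, with χ ≈ 0.7298 and c1 = c2 = 2.05, follows; the 2-D sphere function in the usage line is an illustrative objective:

```python
import random

def pso(f, dims, bounds, n=20, iters=200, chi=0.7298, c=2.05, seed=1):
    """Minimal constriction-factor (Clerc-Kennedy) PSO minimising f."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dims)] for _ in range(n)]
    V = [[0.0] * dims for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests (pbest)
    pf = [f(x) for x in X]                     # pbest fitness values
    g = min(range(n), key=pf.__getitem__)      # index of global best (gbest)
    for _ in range(iters):
        for i in range(n):
            for d in range(dims):
                V[i][d] = chi * (V[i][d]
                                 + c * rng.random() * (P[i][d] - X[i][d])
                                 + c * rng.random() * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:                     # particle improved its pbest
                pf[i], P[i] = fx, X[i][:]
                if fx < pf[g]:                 # ...and possibly the gbest
                    g = i
    return P[g], pf[g]

# Usage: minimise the 2-D sphere function (illustrative objective).
best, value = pso(lambda x: sum(xi * xi for xi in x), dims=2, bounds=(-5.0, 5.0))
```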

35,104 citations

Book
01 Jan 1982
TL;DR: This book is a blend of erudition, popularization, and exposition, and the illustrations include many superb examples of computer graphics that are works of art in their own right.
Abstract: "...a blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) ...and the illustrations include many superb examples of computer graphics that are works of art in their own right." Nature

24,199 citations

Proceedings ArticleDOI
01 Aug 1987
TL;DR: An approach based on simulation, rather than scripting the path of each bird individually, is explored: the simulated birds are particles, and the aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course.
Abstract: The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle system, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the "animator." The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.
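The distributed behavioral model described above combines three steering rules per bird: separation, alignment and cohesion. A minimal 2-D sketch follows; the rule weights and separation radius are illustrative, and for brevity each boid here perceives the whole flock rather than only a local neighborhood, unlike Reynolds' original model:

```python
def boid_step(boids, sep_d=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One update of Reynolds-style flocking in 2-D.
    Each boid is ((x, y), (vx, vy)); steering combines cohesion
    (move toward the flock centre), alignment (match the average
    heading) and separation (avoid crowding near neighbours)."""
    n = len(boids)
    cx0 = sum(p[0] for p, _ in boids) / n      # flock centre
    cy0 = sum(p[1] for p, _ in boids) / n
    ax0 = sum(v[0] for _, v in boids) / n      # average velocity
    ay0 = sum(v[1] for _, v in boids) / n
    new = []
    for (px, py), (vx, vy) in boids:
        sx = sy = 0.0                          # separation steering
        for (qx, qy), _ in boids:
            dx, dy = px - qx, py - qy
            if 0 < dx * dx + dy * dy < sep_d ** 2:
                sx, sy = sx + dx, sy + dy
        vx += w_coh * (cx0 - px) + w_ali * (ax0 - vx) + w_sep * sx
        vy += w_coh * (cy0 - py) + w_ali * (ay0 - vy) + w_sep * sy
        new.append(((px + vx, py + vy), (vx, vy)))
    return new
```

Iterating `boid_step` on a scattered flock draws the birds together through cohesion while alignment gradually matches their headings.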

7,365 citations

Journal ArticleDOI
TL;DR: A variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer (CPSO), employs cooperative behavior to significantly improve the performance of the original algorithm.
Abstract: The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.
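The key idea, optimizing different components of the solution vector with separate swarms evaluated inside a shared context vector, can be sketched as below. This is a simplified CPSO-S-style sketch with one 1-D sub-swarm per dimension; the parameter values are illustrative:

```python
import random

def cpso(f, dims, bounds, n=10, iters=100, chi=0.7298, c=2.05, seed=3):
    """Cooperative PSO sketch: one 1-D sub-swarm per dimension.
    Each sub-swarm optimises its own coordinate, which is evaluated
    inside a 'context vector' of the other swarms' best coordinates."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(dims)]
    V = [[0.0] * n for _ in range(dims)]
    P = [x[:] for x in X]                      # per-particle best coordinate
    context = [X[d][0] for d in range(dims)]   # best coordinate per dimension
    best = f(context)
    pf = [[f(context[:d] + [P[d][i]] + context[d + 1:]) for i in range(n)]
          for d in range(dims)]                # per-particle best fitness
    for _ in range(iters):
        for d in range(dims):
            for i in range(n):
                V[d][i] = chi * (V[d][i]
                                 + c * rng.random() * (P[d][i] - X[d][i])
                                 + c * rng.random() * (context[d] - X[d][i]))
                X[d][i] += V[d][i]
                trial = context[:]
                trial[d] = X[d][i]             # evaluate inside the context
                fx = f(trial)
                if fx < pf[d][i]:
                    pf[d][i], P[d][i] = fx, X[d][i]
                    if fx < best:
                        best, context[d] = fx, X[d][i]
    return context, best
```

Decomposing a `dims`-dimensional search into `dims` one-dimensional searches is what gives CPSO its advantage on separable problems; on strongly non-separable problems the stored fitness values can go stale as the context vector moves, a known trade-off of the scheme.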

2,038 citations

Journal ArticleDOI
TL;DR: A Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.
Abstract: This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.

1,436 citations

Frequently Asked Questions (1)
Q1. What contributions have the authors mentioned in the paper "Particle swarm optimization in dynamic environments" ?

Particle Swarm Optimization (PSO) is a population-based optimization technique, in many respects similar to evolutionary algorithms (EAs).