Proceedings ArticleDOI

Particle swarm optimization with thresheld convergence

20 Jun 2013-pp 510-516
TL;DR: Experiments show that the addition of thresheld convergence to particle swarm optimization can lead to large performance improvements in multi-modal search spaces.
Abstract: Many heuristic search techniques have concurrent processes of exploration and exploitation. In particle swarm optimization, an improved pbest position can represent a new more promising region of the search space (exploration) or a better solution within the current region (exploitation). The latter can interfere with the former since the identification of a new more promising region depends on finding a (random) solution in that region which is better than the current pbest. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum - finding the best region to exploit then becomes the problem of finding the best random solution. However, a locally optimized solution from a poor region of the search space can be better than a random solution from a good region of the search space. Since exploitation can interfere with subsequent/concurrent exploration, it should be prevented during the early stages of the search process. In thresheld convergence, early exploitation is “held” back by a threshold function. Experiments show that the addition of thresheld convergence to particle swarm optimization can lead to large performance improvements in multi-modal search spaces.

Summary (2 min read)

II. BACKGROUND

  • The first work by the authors that used threshold functions to control the rate of convergence was also an application to particle swarm optimization [2].
  • Using a threshold function to ensure that new pbest positions are kept a minimum distance from all existing pbest positions is not more efficient than standard crowding – it still requires p distance calculations.
  • In [2], this was implemented by ensuring that a particle would not update its pbest position to be within the threshold of its attracting lbest position – a requirement which only needs a single distance measurement to enforce.
  • This initial implementation has some similarities to niching (e.g. [6]) – as the pbest positions are kept a minimum distance apart, they encourage exploration around a more diverse group of attraction basins.
  • Compared to niching and crowding, thresheld convergence prevents both convergence and local search.

III. PARTICLE SWARM OPTIMIZATION WITH THRESHELD CONVERGENCE

  • The development of particle swarm optimization (PSO) includes inspirations from "bird flocking, fish schooling, and swarming theory in particular" [8].
  • Rather than a simple line search between a current position and a best position, the velocity and momentum of each particle encourage a more explorative search path.
  • Algorithm 2 then shows the new update condition for particle swarm optimization with thresheld convergence.
  • In general, the results with thresheld convergence are better and more consistent on the functions with global structure (BBOB set 4) than without global structure (BBOB set 5).
  • Of note are the Gallagher functions [11] (BBOB 21 and 22) – the previous work with threshold functions [2] had large improvements on these functions while the current results have essentially no change in performance.

IV. AN ADAPTIVE THRESHOLD FUNCTION

  • The key parameter affecting the performance of thresheld convergence is α.
  • The basic premise is that a threshold value that is too high will prevent any improvements from being made.
  • It appears that the adaptive threshold function does have some ability to find and maintain an appropriate threshold value regardless of the initial value of α.

The results in Table IV

  • In Fig. 3, the threshold value drops off more slowly, but it does not approach the scheduled threshold function with α = 0.10 (which led to the best performance on BBOB 18) until very late in the search process.
  • In general, there are no visible plateaus (except the initial stage where pbests are being improved from their very poor initial random solutions), so the idea of an ideal threshold value may have to be revisited.

V. ANOTHER ADAPTATION

  • The addition of thresheld convergence to particle swarm optimization increases the distances among the pbest positions.
  • Due to the distances among the pbest attractors, the particles can reaccelerate.
  • The results in Table V again show percent difference (%diff = (b-a)/b) of mean performance between particle swarm optimization with thresheld convergence (a) and standard PSO (b).

VI. DISCUSSION

  • There are many ways to improve the performance of PSO from the standard baseline version [1] .
  • Convergence occurs in PSO when a particle with zero speed has the same position as all of its pbest attractors.
  • As previously discussed in Section III, it appears that the "niching effect" may be more suitable for the Gallagher functions (BBOB 21 and 22) than the current implementation of thresheld convergence.
  • In general, simple modifications (e.g. the switch from a GBest/star topology to an LBest/ring topology [1]) are more likely to gain widespread adoption than more complex modifications (e.g. niching [6]).
  • The large potential benefits, computational efficiency, and general ease of adding thresheld convergence make improved threshold functions a promising area for further research.

VII. SUMMARY

  • The addition of thresheld convergence to particle swarm optimization can lead to large performance improvements on multi-modal functions with adequate global structure.
  • A simple, effective, and robust adaptive threshold function has been developed to replace the originally developed scheduled threshold functions.
  • The simplicity and effectiveness of the proposed modifications make thresheld convergence a promising area for further research.


Deposited in the ANU Research Repository. This is the accepted version of:

Chen, Stephen & Montgomery, James (2013) Particle swarm optimization with thresheld convergence. Paper to be presented at the IEEE Congress on Evolutionary Computation (CEC2013), June 20-23, 2013, Cancun, Mexico.

© 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Particle Swarm Optimization with Thresheld
Convergence
Stephen Chen
School of Information Technology
York University
Toronto, Canada
sychen@yorku.ca
James Montgomery
Research School of Computer Science
Australian National University
Canberra, Australia
james.montgomery@anu.edu.au
Abstract—Many heuristic search techniques have concurrent
processes of exploration and exploitation. In particle swarm
optimization, an improved pbest position can represent a new
more promising region of the search space (exploration) or a
better solution within the current region (exploitation). The latter
can interfere with the former since the identification of a new
more promising region depends on finding a (random) solution in
that region which is better than the current pbest. Ideally, every
sampled solution will have the same relative fitness with respect
to its nearby local optimum – finding the best region to exploit
then becomes the problem of finding the best random solution.
However, a locally optimized solution from a poor region of the
search space can be better than a random solution from a good
region of the search space. Since exploitation can interfere with
subsequent/concurrent exploration, it should be prevented during
the early stages of the search process. In thresheld convergence,
early exploitation is “held” back by a threshold function.
Experiments show that the addition of thresheld convergence to
particle swarm optimization can lead to large performance
improvements in multi-modal search spaces.
Keywords—particle swarm optimization; thresheld
convergence; niching; crowding; exploration; exploitation
I. INTRODUCTION
An attraction basin represents the region of a search space
that will lead to a given (local) optimum when greedy local
search is used. Let us define the fitness of an attraction basin as
the fitness of the optimum within it. During the explorative
phase(s) of a search technique, the goal is to find the fittest
attraction basin. The exploitative phase(s) will then be
responsible for finding the exact optimum within the initially
discovered attraction basin. Since precise measurement of the
fitness of an attraction basin is not possible without using local
search to find the actual optimum, the fitness of a search
point’s attraction basin is often estimated by the fitness of the
search point itself.
Particle swarm optimization (PSO) [1] can be viewed as a
system with two populations: a population of current positions
which search for better solutions, and a population of pbest
positions which store the best found solutions. PSO can also be
viewed as a system with two phases. During the initial phase
when the system has large velocities, the current solutions
focus more on exploration. Later, as velocities slow towards
zero, the current solutions will focus more on exploitation. This
exploitation will occur around the pbest positions, so the goal
of the initial phase is to find pbest positions that are members
of the fittest attraction basins.
In PSO, the fitness of an attraction basin is (implicitly)
estimated by the fitness of a sample solution found from within
that basin. Specifically, given two positions which represent
two attraction basins, the position stored by pbest (which
represents the most promising attraction basin to exploit during
the later exploitative phase) will be the position with the better
fitness. In order to improve the attraction basin represented by
the pbest position, it is not sufficient to find a new position in a
fitter attraction basin – it is necessary to find a position in a
fitter attraction basin that is also fitter than the existing pbest
position.
For an attraction basin represented by an “average” sample
solution, it should be relatively easy to find a fitter sample
solution from a fitter attraction basin. Consider a search space
with attraction basins that have similar shapes and sizes (e.g. a
sinusoid super-imposed over a linear slope). In this search
space, the average fitness of a random solution is correlated to
the fitness of its attraction basin (see Fig. 1). In particular, the
expected difference in fitness between two random solutions
from different attraction basins will be equal to the difference
in fitness between the optima from these attraction basins. With
“average” sample solutions, the task of finding the fittest attraction basin is equivalent to finding the fittest random solution. However, this task becomes more complicated if an existing attraction basin is represented by a better-than-average solution.

Fig. 1. The horizontal lines represent the average/expected fitness of random sample solutions in each attraction basin. If an attraction basin is represented by a better-than-average solution (see dot), a random solution from a fitter attraction basin may no longer have a better expected fitness.

One way to find a better-than-average solution from an
attraction basin is to perform local search. Starting from an
initial solution, let us define any change that leads to a solution
in a new attraction basin as an explorative/global search step
and any change that leads to a solution in the same attraction
basin as an exploitative/local search step. Without any other
information, the first solution from an attraction basin can be
considered to be a random solution. The expected fitness of a
random solution is the average fitness of all solutions in an
attraction basin, so a second solution in the same attraction
basin that is better than the first solution can be expected to be
a better-than-average solution. Referring again to Fig. 1,
concurrent exploitation which can lead to better-than-average
solutions from an existing attraction basin can interfere with a
search technique’s explorative processes which are tasked with
finding new, more promising attraction basins.
The goal of “thresheld convergence” is to delay local search
and thus prevent “uneven” sampling from attraction basins.
Convergence is “held” back as (local) search steps that are less
than a threshold function are disallowed. As this threshold
function decays towards zero, greedier local search steps
become allowed. Conversely, until the threshold is sufficiently
small, the search technique is forced to focus on the global
search aspect of finding the best attraction basin/region of the
search space in which a local optimum will eventually be
found.
Before applying thresheld convergence to particle swarm
optimization, a brief background on the development of
threshold functions and other diversification techniques is
provided in Section II. Preliminary results for particle swarm
optimization with thresheld convergence are then presented in
Section III. A simple and robust adaptive threshold function is
used to improve performance in Section IV. Performance is
again improved by adding an adaptive velocity update in
Section V. Finally, all of these results are discussed in Section
VI before a summary is given in Section VII.
II. BACKGROUND
The first work by the authors that used threshold functions
to control the rate of convergence was also an application to
particle swarm optimization [2]. This application was
conceived of and implemented as an efficient version of
crowding [3] (applied to the population of pbest positions). To
prevent crowding in a population, a new solution that is
accepted into the population should replace its nearest
neighbour. The main weakness of crowding is that it is
either slow (requiring p = population size distance calculations
to find the nearest neighbour) or prone to “replacement errors”
(if crowding is applied to only a subset of the population).
Using a threshold function to ensure that new pbest
positions are kept a minimum distance from all existing pbest
positions is not more efficient than standard crowding – it still
requires p distance calculations. The efficiency gain comes
from allowing crowds but disallowing communication within
crowds. In differential evolution (DE) [4], crowded solutions
create short difference vectors which lead to the creation of
new solutions close to existing solutions – i.e. crowding begets
more crowding. This “cascading convergence” was reduced by
requiring new solutions to be a minimum distance from their
base solutions – a requirement which needs only a single
distance measurement to enforce [5].
In a particle swarm with a ring topology [1], each particle
only communicates with two neighbouring particles. As long as
a particle does not create a crowd with its two neighbours, a
cascading effect of neighbour after neighbour after neighbour
joining this crowd will not occur. In [2], this was implemented
by ensuring that a particle would not update its pbest position
to be within the threshold of its attracting lbest position – a
requirement which only needs a single distance measurement
to enforce.
In the previous work [2], the goal was to prevent pbest
positions from forming crowds, but it did not prevent the
improvement of existing pbest positions. This initial
implementation has some similarities to niching (e.g. [6]) – as
the pbest positions are kept a minimum distance apart, they
encourage exploration around a more diverse group of
attraction basins. However, maintaining diversity among a set
of known attraction basins does not address how new attraction
basins are discovered and tested.
Recent work with the use of threshold functions to control
the rate of convergence has led to the new technique of
“thresheld convergence” [7]. Compared to niching and
crowding, thresheld convergence prevents both convergence
and local search. Similar to niching and crowding, thresheld
convergence promotes diversity which increases the chances of
finding highly fit (local) optima. The key difference between thresheld convergence and niching or crowding is the prevention of local search, which reduces the bias of the search technique to over-exploit the current attraction basins (see Fig. 1).
III. PARTICLE SWARM OPTIMIZATION WITH THRESHELD CONVERGENCE
The development of particle swarm optimization (PSO)
includes inspirations from “bird flocking, fish schooling, and
swarming theory in particular” [8]. Each particle (e.g. a
simulated bird) is attracted to its personal best position and the
best position of a neighbouring member in the swarm. Rather
than a simple line search between a current position and a best
position, the velocity and momentum of each particle
encourage a more explorative search path. However, elitism is
applied to the storage of best positions, so PSO will eventually
be highly exploitative around the regions/attraction basins of
the best positions as velocities slow to zero.
The following experiments build from the published source
code of a PSO benchmark implemented by El-Abd and Kamel
[9]. This benchmark implementation is for a GBest swarm
using a star topology, so it requires a slight modification to
become an LBest swarm with a ring topology. After this
modification, the benchmark becomes an implementation of
standard PSO [1] in which each dimension d of a particle’s
velocity v is updated for the next iteration i+1 by

v_{i+1,d} = χ ( v_{i,d} + c_1 ε_1 (pbest_{i,d} − x_{i,d}) + c_2 ε_2 (lbest_{i,d} − x_{i,d}) )    (1)

where χ is the constriction factor, c_1 and c_2 are weights which vary the contributions of the personal best and local best attractors, ε_1 and ε_2 are independent uniform random numbers in the range [0,1], x is the position of the particle, pbest is the best position found by the current particle, and lbest is the best position found by any particle communicating with the current particle (i.e. two neighbours in an LBest ring topology). Key parameters in [9] include χ = 0.792 and χ·c_1 = χ·c_2 = 1.4944, i.e. c_1 = c_2 = 1.887, and p = 40 particles.
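For concreteness, here is a minimal Python sketch of update (1), using the parameter values reported for the benchmark implementation [9]; the function and variable names are illustrative, not taken from that code.

```python
import numpy as np

CHI, C1, C2 = 0.792, 1.887, 1.887   # chi * c1 = chi * c2 = 1.4944, as in [9]

def update_velocity(v, x, pbest, lbest, rng):
    """One application of update (1) for a single particle, vectorized over
    all dimensions; eps1 and eps2 are fresh uniform draws from [0, 1]."""
    eps1 = rng.random(v.shape)
    eps2 = rng.random(v.shape)
    return CHI * (v + C1 * eps1 * (pbest - x) + C2 * eps2 * (lbest - x))
```

The position update x_{i+1,d} = x_{i,d} + v_{i+1,d} then proceeds as in standard PSO [1].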
The application of thresheld convergence requires a
threshold function. The threshold function (2) developed in [2]
has two parameters: α represents the initial minimum distance
as a fraction of the search space diagonal and γ represents the
decay factor. For γ = 1, the threshold decays with a linear slope
as the iteration k goes from 0 to the maximum number of
allowed function evaluations n.



threshold = α · diagonal · ( (n − k) / n )^γ    (2)
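As a sketch, (2) transcribes directly into Python; the search-space diagonal is assumed to be precomputed from the problem bounds, and the names are illustrative.

```python
def scheduled_threshold(k, n, diagonal, alpha, gamma):
    """Threshold function (2): starts at alpha * diagonal when k = 0 and
    decays to zero as the evaluation count k approaches the budget n."""
    return alpha * diagonal * ((n - k) / n) ** gamma
```

With γ = 1 the threshold decays linearly; the experiments below use γ = 3, under which the threshold falls more quickly early in the search.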
The threshold function is used during the update of pbest
positions. In Algorithm 1, the normal update condition for
standard PSO [1] is shown. Algorithm 2 then shows the new
update condition for particle swarm optimization with thresheld
convergence. Since the two distance measurements are only
required when improving positions are found, the addition of
thresheld convergence has a negligible effect on the
computational efficiency of the underlying implementation of
standard PSO.
The following analysis of particle swarm optimization with
thresheld convergence focuses on two sets from the Black-Box
Optimization Benchmarking (BBOB) functions [10]: set 4,
multi-modal functions with adequate global structure, and set 5,
multi-modal functions with weak global structure. See Table I
for more information on the BBOB functions. To be consistent
with previous results (i.e. [2]), the following experiments
perform 25 independent trials on each function (5 trials on each
of the first 5 instances – each instance has a different randomly
shifted location for its global optimum) with a fixed limit of
5000*D function evaluations. These experiments also use D =
20 dimensions.
In Table II, results for particle swarm optimization with
thresheld convergence are presented for γ = 3 and α = 0.01,
0.02, 0.05, 0.10, 0.20, and 0.50. Experiments were also
conducted with γ = 2 (the best results in [2] were with γ = 2 or
3), but they were consistently a little worse, so they have been
omitted for clarity and brevity. The results show the percent
difference (%-diff = (b-a)/b) in the means for 25 independent
trials of particle swarm optimization with thresheld
convergence (a) and standard PSO (b).
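For reference, the comparison metric used throughout the tables is a one-liner; here a is the mean error of PSO with thresheld convergence and b the mean error of standard PSO, so positive values favour thresheld convergence.

```python
def percent_diff(a, b):
    # %-diff = (b - a) / b, e.g. for BBOB 17 in Table III:
    # percent_diff(1.64e-1, 6.61e-1) ≈ 0.752, i.e. 75.2%.
    return (b - a) / b
```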
In general, the results with thresheld convergence are better
and more consistent on the functions with global structure
(BBOB set 4) than without global structure (BBOB set 5). Of
note are the Gallagher functions [11] (BBOB 21 and 22) – the
previous work with threshold functions [2] had large
improvements on these functions while the current results have
essentially no change in performance. Given the random
Algorithm 1 Normal pbest update in PSO
if f(x) < f(pbest)
pbest = x;
end if
Algorithm 2 Modified pbest update
if f(x) < f(pbest)
AND distance (x, pbest) > threshold
AND distance (x, lbest) > threshold
pbest = x;
end if
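Both update conditions translate directly into code. The following Python sketch (illustrative names, not the benchmark's) implements Algorithm 2; because the two distance measurements are only made once an improving position has been found, its overhead relative to Algorithm 1 is negligible.

```python
import numpy as np

def update_pbest_thresheld(x, fx, pbest, f_pbest, lbest, threshold):
    """Algorithm 2: accept an improving position as the new pbest only if it
    is more than `threshold` away from both its pbest and lbest attractors.
    Returns the (possibly updated) pbest, its fitness, and an update flag."""
    if fx < f_pbest:  # distances are only measured for improving positions
        if (np.linalg.norm(x - pbest) > threshold and
                np.linalg.norm(x - lbest) > threshold):
            return x.copy(), fx, True
    return pbest, f_pbest, False
```

The returned flag makes it easy to count pbest updates per iteration, which the adaptive threshold function of Section IV relies on.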
TABLE I. BBOB FUNCTIONS

Set  fn  Function Name                              s  u  gs
 1    1  Sphere                                     X  X  X
      2  Ellipsoidal, original                      X  X  X
      3  Rastrigin                                  X     X
      4  Büche-Rastrigin                            X     X
      5  Linear Slope                               X  X
 2    6  Attractive Sector                             X
      7  Step Ellipsoidal                               X
      8  Rosenbrock, original
      9  Rosenbrock, rotated
 3   10  Ellipsoidal, rotated                          X  X
     11  Discus                                        X  X
     12  Bent Cigar                                    X
     13  Sharp Ridge                                   X
     14  Different Powers                              X
 4   15  Rastrigin, rotated                               X
     16  Weierstrass                                      X
     17  Schaffers F7                                     X
     18  Schaffers F7, moderately ill-conditioned         X
     19  Composite Griewank-Rosenbrock F8F2               X
 5   20  Schwefel
     21  Gallagher’s Gaussian 101-me Peaks
     22  Gallagher’s Gaussian 21-hi Peaks
     23  Katsuura
     24  Lunacek bi-Rastrigin

Names and selected attributes of the 24 functions in the BBOB problem set – separable (s), unimodal (u), global structure (gs).
TABLE II. EFFECTS OF INITIAL THRESHOLD SIZE
BBOB α
fn 0.01 0.02 0.05 0.1 0.2 0.5
15 9.2% 15.9% 12.5% 6.8% -3.5% -2.6%
16 20.4% 25.6% 33.8% 21.2% -1.2% -3.1%
17 34.1% 40.7% 67.0% 75.2% 61.6% 52.2%
18 13.0% 24.9% 41.1% 55.6% 46.7% 33.5%
19 -1.7% -0.3% -0.2% -8.1% -6.5% -4.0%
15-19 15.0% 21.4% 30.8% 30.1% 19.4% 15.2%
20 13.1% 16.1% 18.8% 10.9% 0.4% -9.0%
21 -5.4% -1.4% -20.2% 3.9% -2.3% 9.3%
22 -3.2% -13.9% -16.1% -18.2% -20.4% -6.0%
23 -18.3% -7.4% -13.5% -14.2% -16.9% -24.5%
24 2.2% -0.5% 2.3% -2.0% -3.1% -7.7%
20-24 -2.3% -1.4% -5.8% -3.9% -8.5% -7.6%
The best overall results occur with α = 0.05. The benefits of thresheld convergence appear to depend on the global structure of the search space.

optima in the Gallagher functions, exploring multiple distinct
optima (e.g. through the effects of niching) improves the
chances that one of these optima will be highly fit. Conversely,
the existence of random attraction basins of different shapes
and sizes is a distinct contradiction to the premise of similarly
sized and shaped attraction basins used in the development of
thresheld convergence (see Fig. 1). The remainder of this paper
will thus focus on BBOB set 4.
It should also be noted that the conducted experiments with
thresheld convergence included the full set of BBOB functions.
On set 1 – separable functions, neither version exploits
separability so the results on this set match the results on other
sets – thresheld convergence improved performance on the
multi-modal functions and had worse performance on the
unimodal functions. On set 2 – functions with low or moderate
conditioning, neither version addresses the effects of ill-
conditioning so the dominant trend again matches the modality
of the underlying functions. On set 3 – unimodal functions with
high conditioning, thresheld convergence is explicitly designed
for multi-modal functions so it is specifically ill-equipped to
perform well on unimodal functions. As the threshold size
increases (e.g. α), less exploitation is possible and worse results
are obtained (across all functions for all parameter settings).
For brevity and clarity, results and analysis for these three sets
are omitted from the rest of the paper.
The addition of thresheld convergence is particularly
effective on BBOB 17 and 18 – large improvements for all
tested values of α. On BBOB 15 and 16, thresheld convergence
is effective for smaller values of α, but larger values of α
probably lead to too little exploitation. The best results on these
four functions are statistically significant as indicated by the t-
tests shown in Table III. For all of set 4 (BBOB15-19), the best
results with any value of α have a mean improvement of
36.1%. Compared to the best results for a single value of α =
0.05 which has a mean improvement of 30.8%, these best
overall results show the room for improvement that should be
attainable with adaptive threshold functions.
IV. AN ADAPTIVE THRESHOLD FUNCTION
The key parameter affecting the performance of thresheld
convergence is α. On some functions (e.g. BBOB 18 – see
Table II), the best results are achieved with larger values of α
whereas smaller values of α lead to the best results on other
functions (e.g. BBOB 15). It is hypothesized that the ideal
threshold value is related to the size of the attraction basins in
the search space. For a threshold function with specific α and γ
parameters, a certain amount of time will be spent near the
ideal threshold value. The development of adaptive threshold
functions which can spend more time near this ideal threshold
value should improve the performance of thresheld
convergence.
An adaptive threshold function has been developed for
particle swarm optimization. The basic premise is that a
threshold value that is too high will prevent any improvements
from being made. Conversely, if improving solutions are being
found, the current threshold value may be at the ideal level, so
it should be left unchanged. Thus, the number of pbest updates
that occur during an iteration i is recorded (see Algorithm 2),
and the threshold value is decreased if the number of updates is
zero (see Algorithm 3).
In Table IV, the results on BBOB sets 4 and 5 are given for
PSO with the new adaptive threshold function. The initial
threshold value (α) is 0.01, 0.02, 0.05, 0.10, 0.20, and 0.50 and
the “decay factor” is 0.995 – the threshold decreases by 0.5%
after any iteration in which no pbest improvements are made.
Unreported experiments also tried rates of decrease of 0.25%,
1%, and 2%. Similar but slightly worse results were achieved
with 1% while 0.25% and 2% had larger drop-offs in
performance. Despite its simplicity, the adaptive threshold
function in Algorithm 3 appears to provide relatively stable and
predictable performance.
The results in Table IV again show the percent difference (%-diff = (b-a)/b) of mean performance between particle swarm
optimization with thresheld convergence (a) and standard PSO
(b). Compared to the results in Table II, the results in Table IV
TABLE III. BEST RESULTS FOR INITIAL THRESHOLD SIZES

BBOB   standard PSO           PSO with thresholds
fn     mean      std dev      mean      std dev      α     %-diff   p-value
15     6.05e+1   1.46e+1      5.08e+1   1.49e+1      0.02  15.9%    0.01
16     5.37e+0   1.53e+0      3.55e+0   1.28e+0      0.05  33.8%    0.00
17     6.61e-1   2.64e-1      1.64e-1   1.08e-1      0.10  75.2%    0.00
18     2.87e+0   1.28e+0      1.28e+0   6.70e-1      0.10  55.6%    0.00
19     3.61e+0   4.32e-1      3.62e+0   4.26e-1      0.05  -0.2%    0.47
15-19                                                      36.1%
20     1.14e+0   1.38e-1      9.22e-1   1.77e-1      0.05  18.8%    0.00
21     1.41e+0   1.21e+0      1.28e+0   1.28e+0      0.50  9.3%     0.35
22     1.69e+0   1.51e+0      1.75e+0   1.77e+0      0.01  -3.2%    0.45
23     1.33e+0   2.49e-1      1.43e+0   2.55e-1      0.02  -7.4%    0.09
24     1.13e+2   1.12e+1      1.10e+2   1.60e+1      0.05  2.3%     0.26
20-24                                                      3.9%

The addition of thresheld convergence leads to a significant improvement (%-diff > 10% and p < 0.05 for the t-test) on four of the five functions in BBOB set 4.
Algorithm 3 Adaptive threshold function
if no pbest updates
threshold = threshold * decay factor;
end if
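As a sketch in the same style, Algorithm 3 is a one-line rule; `updates` is assumed to be the per-iteration count of successful pbest updates (see Algorithm 2), and the decay factor 0.995 corresponds to the 0.5% decrease described above.

```python
def adapt_threshold(threshold, updates, decay_factor=0.995):
    # Algorithm 3: hold the threshold while pbest improvements are still
    # being found; otherwise shrink it geometrically (by 0.5% per iteration).
    return threshold * decay_factor if updates == 0 else threshold
```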
TABLE IV. EFFECTS OF INITIAL THRESHOLD SIZE
BBOB α
fn 0.01 0.02 0.05 0.1 0.2 0.5
15 13.0% 10.6% 18.9% 17.7% 5.3% 14.7%
16 16.7% 13.0% 17.1% 5.6% 11.7% 3.8%
17 31.4% 35.8% 66.4% 75.6% 72.3% 64.8%
18 1.0% 20.5% 50.2% 47.1% 52.6% 49.1%
19 -0.1% 1.3% 4.2% -0.8% -0.4% -2.3%
15-19 12.4% 16.3% 31.4% 29.0% 28.3% 26.0%
20 16.2% 14.7% 12.7% 13.0% 5.9% 2.8%
21 37.4% -1.5% 0.4% -35.0% 18.9% -33.8%
22 -7.7% -42.4% -7.8% -17.8% -37.7% -12.7%
23 -0.4% -4.9% -7.6% -13.5% -6.3% -12.8%
24 7.4% 1.7% 5.1% 4.3% -1.1% -1.5%
20-24 10.6% -6.5% 0.5% -9.8% -4.1% -11.6%
For each value of α, the performance of the adaptive threshold
function is remarkably similar to the performance with the scheduled
threshold function – see Table II (especially for BBOB set 4).

Citations
Journal ArticleDOI
TL;DR: The experimental results show that the proposed novel metaheuristic optimization algorithm, named BRO (battle royale optimization), is an efficient method and provides promising and competitive results.
Abstract: Recently, several metaheuristic optimization approaches have been developed for solving many complex problems in various areas. Most of these optimization algorithms are inspired by nature or the social behavior of some animals. However, there is no optimization algorithm which has been inspired by a game. In this paper, a novel metaheuristic optimization algorithm, named BRO (battle royale optimization), is proposed. The proposed method is inspired by a genre of digital games known as “battle royale.” BRO is a population-based algorithm in which each individual is represented by a soldier/player that would like to move toward the safest (best) place and ultimately survive. The proposed scheme has been compared with the well-known PSO algorithm and six recent proposed optimization algorithms on nineteen benchmark optimization functions. Moreover, to evaluate the performance of the proposed algorithm on real-world engineering problems, the inverse kinematics problem of the 6-DOF PUMA 560 robot arm is considered. The experimental results show that, according to both convergence and accuracy, the proposed algorithm is an efficient method and provides promising and competitive results.

72 citations

Proceedings ArticleDOI
25 May 2015
TL;DR: This paper addresses the design of thresheld convergence in the context of evolution strategies by analyzing the behavior of the standard (μ, λ)-ES on multi-modal landscapes and arguing that part of its shortcomings are due to an ineffective balance between exploration and exploitation.
Abstract: When optimizing multi-modal spaces, effective search techniques must carefully balance two conflicting tasks: exploration and exploitation. The first refers to the process of identifying promising areas in the search space. The second refers to the process of actually finding the local optima in these areas. This balance becomes increasingly important in stochastic search, where the only knowledge about a function's landscape relies on the relative comparison of random samples. Thresheld convergence is a technique designed to effectively separate the processes of exploration and exploitation. This paper addresses the design of thresheld convergence in the context of evolution strategies. We analyze the behavior of the standard (μ, λ)-ES on multi-modal landscapes and argue that part of its shortcomings are due to an ineffective balance between exploration and exploitation. Afterwards we present a design for thresheld convergence tailored to ES, as a simple yet effective mechanism to increase the performance of (μ, λ)-ES on multimodal functions.

22 citations


Cites background from "Particle swarm optimization with th..."

  • ...A new solutions replaces the personal attractor only if its distance to the local attractor is larger than the threshold [3]....

    [...]

  • ...Thresheld convergence has proven useful to increase the search performance when applied to rapidly converging search techniques such as simulated annealing (SA) [2], particle swarm optimization (PSO) [3] and differential evolution (DE) [4]....

    [...]

  • ...In PSO, the minimum step is enforced between the local and personal attractors (best solutions found)....

    [...]


Proceedings ArticleDOI
20 Jun 2013
TL;DR: This paper presents a new adaptive thresheld convergence mechanism which helps DE achieve large performance improvements in multi-modal search spaces.
Abstract: During the search process of differential evolution (DE), each new solution may represent a new more promising region of the search space (exploration) or a better solution within the current region (exploitation). This concurrent exploitation can interfere with exploration since the identification of a new more promising region depends on finding a (random) solution in that region which is better than its target solution. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum - finding the best region to exploit then becomes the problem of finding the best random solution. However, differential evolution is characterized by an initial period of exploration followed by rapid convergence. Once the population starts converging, the difference vectors become shorter, more exploitation is performed, and an accelerating convergence occurs. This rapid convergence can occur well before the algorithm's budget of function evaluations is exhausted; that is, the algorithm can converge prematurely. In thresheld convergence, early exploitation is “held” back by a threshold function, allowing a longer exploration phase. This paper presents a new adaptive thresheld convergence mechanism which helps DE achieve large performance improvements in multi-modal search spaces.

18 citations

Proceedings ArticleDOI
20 Jun 2013
TL;DR: Computational results show that this new heuristic can achieve the benefits of smaller populations and largely avoid the risk of premature convergence.
Abstract: Population-based heuristics can be effective at optimizing difficult multi-modal problems. However, population size has to be selected correctly to achieve the best results. Searching with a smaller population increases the chances of convergence and the efficient use of function evaluations, but it also induces the risk of premature convergence. Larger populations can reduce this risk but can cause poor efficiency. This paper presents a new method specifically designed to work with very small populations. Computational results show that this new heuristic can achieve the benefits of smaller populations and largely avoid the risk of premature convergence.

18 citations


Cites methods from "Particle swarm optimization with th..."

  • ...They are updated by a rule similar to that used in previous attempts to control convergence for PSO [20] and DE [21] in which an initial threshold is selected that then decays over the course of the search process....

    [...]

  • ...The implementation of standard PSO [3] is the same as that described more fully in [20]....

    [...]

Proceedings ArticleDOI
02 Jul 2018
TL;DR: This paper provides an overview of developments on termination conditions in evolutionary algorithms over the past decades, and reviews recent research on threshold strategy, statistical inference, i.e., Kalman filters, as well as Fuzzy methods, and other methods.
Abstract: This paper provides an overview of developments on termination conditions in evolutionary algorithms (EAs). It seeks to give a representative picture of the termination conditions in EAs over the past decades, segment the contributions of termination conditions into progress indicators and termination criteria. With respect to progress indicators, we consider a variety of indicators, in particular in convergence indicators and diversity indicators. With respect to termination criteria, this paper reviews recent research on threshold strategy, statistical inference, i.e., Kalman filters, as well as Fuzzy methods, and other methods. Key developments on termination conditions over decades include: (i) methods of judging the algorithm's search behavior based on statistics, and (ii) methods of detecting the termination based on different distance formulations.

11 citations


Cites background from "Particle swarm optimization with th..."

  • ...In [7, 20, 47], termination criteria of EAs has been analyzed and the threshold strategy has been addressed....

    [...]

References
Book ChapterDOI

[...]

01 Jan 2012

139,059 citations


"Particle swarm optimization with th..." refers background or result in this paper

  • ...This unexpected result is inconsistent with previous work involving threshold functions [2][5][7] which showed broad benefits across the full range of multi-modal functions (i....

    [...]

  • ...Recent work with the use of threshold functions to control the rate of convergence has led to the new technique of “thresheld convergence” [7]....

    [...]

Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, and the evolution of several paradigms is outlined, and an implementation of one of the paradigm is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

35,104 citations


"Particle swarm optimization with th..." refers background in this paper

  • ...switching from a GBest/star topology [8] to an LBest/ring topology [1])....

    [...]

  • ...The development of particle swarm optimization (PSO) includes inspirations from “bird flocking, fish schooling, and swarming theory in particular” [8]....

    [...]

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.

24,053 citations

01 Jan 2010

6,571 citations

Frequently Asked Questions (2)
Q1. What have the authors contributed in "Particle swarm optimization with thresheld convergence" ?

Experiments show that the addition of thresheld convergence to particle swarm optimization can lead to large performance improvements in multi-modal search spaces. 

Future research will study the Gallagher functions more closely with an emphasis on achieving the simultaneous benefits of niching and thresheld convergence. Future research will also study the effects of each parameter more closely (e.g. α, vf, and the threshold decay factor). The large potential benefits, computational efficiency, and general ease of adding thresheld convergence make improved threshold functions a promising area for further research. This variation suggests that more improvements can be achieved through the development of improved (adaptive) threshold functions.