Proceedings ArticleDOI

Particle swarm hybridized with differential evolution: black box optimization benchmarking for noisy functions

08 Jul 2009, pp. 2343–2350
TL;DR: This work evaluates a Particle Swarm Optimizer hybridized with Differential Evolution on the Black-Box Optimization Benchmarking testbed for noisy functions (BBOB 2009) and obtained a reasonable coverage (success) rate despite the simplicity of the model.
Abstract: In this work we evaluate a Particle Swarm Optimizer hybridized with Differential Evolution and apply it to the Black-Box Optimization Benchmarking for noisy functions (BBOB 2009). We have performed the complete procedure established in this special session, dealing with noisy functions of dimension 2, 3, 5, 10, 20, and 40 variables. Our proposal obtained a reasonable coverage (success) rate, despite the simplicity of the model and the relatively small number of function evaluations used.

Summary (1 min read)

1. INTRODUCTION

  • Sections 3 and 4 present the experimentation procedure and the results obtained, respectively.
  • Finally, conclusions and remarks are given in Section 6.

2. THE ALGORITHM: DEPSO

  • Algorithm 1 shows pseudocode of the hybrid DEPSO algorithm developed for this work.
  • First, an initialization of all particles in the swarm S (as stated in [6]) and their initial evaluation are carried out (line 1).
  • After this, at each evolution step the particles' positions are updated following the differential variation model of the equations explained in the paper (lines 4 to 18).
  • In addition, the global best position reached so far is updated in order to guide the rest of the swarm.
  • Finally, the algorithm returns the best solution found during the whole process.

3. EXPERIMENTAL PROCEDURE

  • The authors' proposal was tested by performing 15 independent runs for each noisy function and each dimension.
  • Table 1 shows the parameter setting used to configure DEPSO.
  • These parameters were tuned in the context of the special session of CEC'05 for real parameter optimization [11, 5] reaching results statistically similar to the best participant algorithms (G-CMA-ES [2] and K-PCX [10]) in that session.
  • This parameterization was kept the same for all the experiments, and therefore the crafting effort [6] is zero.

5. CPU TIMING EXPERIMENT

  • For the timing experiment, the same DEPSO algorithm was run on f8 until at least 30 seconds had passed (according to the exampletiming procedure available in BBOB 2009 [6] ).
  • These experiments were conducted on an Intel(R) Core(TM)2 CPU at 1.66 GHz with 1 GB RAM, running Linux Ubuntu 8.10, using the C-code provided.

6. CONCLUSION

  • DEPSO is a simple and easy-to-implement optimization algorithm constructed by hybridizing the Particle Swarm Optimizer with Differential Evolution operations.
  • The experiments have been made in the context of the special session of real parameter Black-Box Optimization Benchmarking (GECCO BBOB 2009), performing the complete procedure previously established, and dealing with noisy functions with dimension: 2, 3, 5, 10, 20, and 40 variables.
  • The authors' proposal obtained a reasonable coverage (success) rate for dimensions 2, 3, 5, and 10, specifically on moderate-noise and severe-noise multimodal functions.
  • Using the same parameter setting for all functions (and dimensions), together with the relatively small number of function evaluations used (1000 × DIM), leads the authors to think that DEPSO can be easily improved to better cover noisy functions of higher dimension.


Particle Swarm Hybridized with Differential Evolution:
Black-Box Optimization Benchmarking for Noisy Functions
José García-Nieto
Dept. de Lenguajes y Ciencias
de la Computación
University of Málaga,
ETSI Informática,
Málaga - 29071, Spain
jnieto@lcc.uma.es
Enrique Alba
Dept. de Lenguajes y Ciencias
de la Computación
University of Málaga,
ETSI Informática,
Málaga - 29071, Spain
eat@lcc.uma.es
Javier Apolloni
LIDIC - Departamento de
Informática
University of San Luis,
Ejército de los Andes 950,
5700, Argentina
javierma@unsl.edu.ar
ABSTRACT
In this work we evaluate a Particle Swarm Optimizer hybridized with Differential Evolution and apply it to the Black-Box Optimization Benchmarking for noisy functions (BBOB 2009). We have performed the complete procedure established in this special session, dealing with noisy functions of dimension 2, 3, 5, 10, 20, and 40 variables. Our proposal obtained a reasonable coverage (success) rate, despite the simplicity of the model and the relatively small number of function evaluations used.
Keywords
Benchmarking, Black-Box Optimization, Noisy Functions, Hybrid Algorithms, Particle Swarm, Differential Evolution
1. INTRODUCTION
Particle Swarm Optimization (PSO) [8] and Differential Evolution (DE) [9] have been successfully used in real parameter function optimization, since both algorithms are well adapted to continuous solution encodings. Real parameter optimization problems consist basically in this: find x* such that f(x*) ≤ f(x) for all x (minimization). Here, f(·) is a function over a real domain that models an optimization problem, x = {x_1, x_2, ..., x_DIM} is a solution to such a problem, and DIM is the number of variables, with x_i ∈ [x_i^low, x_i^upp] (1 ≤ i ≤ DIM). Finally, x_i^low, x_i^upp ∈ ℝ correspond to the lower (low) and upper (upp) limits of the variable domain, respectively.

Swagatam Das et al. [3] proposed an initial hybridization of PSO and DE for continuous optimization. Based on this first idea, in this work we propose a PSO algorithm which uses the differential variation scheme employed in DE to adjust the velocity of particles. By combining the search strategies, parameter adaptation, and differential operators present in PSO and DE, we expect to improve the performance of the resulting technique. Our proposal, called DEPSO, is evaluated by means of the Black-Box Optimization Benchmarking for noisy functions (BBOB 2009) [4, 7], according to the experimental design from [6]. We have performed the complete procedure established in this GECCO'09 workshop, dealing with noisy functions of dimension 2, 3, 5, 10, 20, and 40 variables. Our proposal will be shown to obtain a reasonable coverage rate, even for the higher dimensions, and despite the relatively small number of function evaluations used (1000 × DIM).

The remainder of the paper is organized as follows. In Section 2 the DEPSO algorithm is briefly described. Sections 3 and 4 present the experimentation procedure and the results obtained, respectively. In Section 5, a brief analysis of the CPU timing experiment is reported. Finally, conclusions and remarks are given in Section 6.
2. THE ALGORITHM: DEPSO
Our proposal, DEPSO, is basically a PSO algorithm into which we incorporate ideas from DE. For each particle p (with velocity and position vectors v and x, respectively), the velocity is updated according to two main influences: a social factor and a differential variation factor. The social factor concerns the local or global best position g of a given neighborhood of particles (global when this neighborhood is the entire swarm). The differential variation factor is composed, as in DE, from two randomly selected particle positions. This way, for each particle x_i of the swarm, a differential vector w = x_r1 − x_r2 is generated, where the particles x_r1 and x_r2 are randomly (uniformly) selected with x_i ≠ x_r1 ≠ x_r2. Then, the new velocity v'_i of particle i is calculated by means of the following equation:

$$v'_i \leftarrow \omega \cdot v_i + \mu \cdot w + \varphi \cdot (g_i - x_i), \qquad (1)$$

where ω is the inertia weight of the particle, which controls the trade-off between global and local experience, and μ is a scaling factor applied to the differential vector (μ = UN(0,1)). The third component corresponds to the social factor, which is influenced by the global best g of the swarm and is directly proportional to the social coefficient φ (in this case φ = UN(0,1)). Therefore, in the velocity calculation, the standard historical influence used in PSO is replaced in our proposal by the differential vector w.

Similarly to DE, the update of each component j of the velocity vector of a given particle i is carried out by means of Equation 2 as follows:

$$v'_i(j) = \begin{cases} v'_i(j) & \text{if } r \le Cr, \\ v_i(j) & \text{otherwise.} \end{cases} \qquad (2)$$

Here, r ∈ [0,1] is a uniformly distributed value which determines whether the j-th component is taken from the new velocity or kept from the current velocity, based on the crossover probability Cr ∈ [0,1]. This mechanism is used to increase the exploitation ability of the algorithm through the search space [9]. Finally, a particle i changes its position (moves) only if the new position x'_i is at least as good as the current one, i.e., f(x'_i) ≤ f(x_i) (minimizing in this work). Otherwise, the particle keeps its current position (equations 3 and 4):

$$x''_i = \begin{cases} x'_i & \text{if } f(x'_i) \le f(x_i), \\ x_i & \text{otherwise,} \end{cases} \qquad (3)$$

being

$$x'_i \leftarrow x_i + v'_i. \qquad (4)$$

Additionally, with a certain probability p_mut, a mutation operation is applied to each particle in order to avoid early convergence to a local optimum. The new mutated position x''_i is generated by means of Equation 5:

$$x'' \leftarrow x^{low} + UN(0,1) \cdot (x^{upp} - x^{low}). \qquad (5)$$

Vectors x^low and x^upp correspond to the lower and upper limits of each dimension of the function to optimize, respectively.
Algorithm 1 shows the pseudocode of the hybrid DEPSO algorithm developed for this work. First, an initialization of all particles in the swarm S (as stated in [6]) and their initial evaluation are carried out (line 1). After this, at each evolution step the particles' positions are updated following the differential variation model of the equations explained above (lines 4 to 18). In addition, the global best position reached so far is updated in order to guide the rest of the swarm. Then, with probability p_mut, a mutation operation is applied (lines 20 to 24). Finally, the algorithm returns the best solution found during the whole process.
3. EXPERIMENTAL PROCEDURE
The experimentation procedure has been carried out according to [6] on the benchmark of 30 noisy functions given in [4, 7]. The DEPSO algorithm was implemented in C++ using the MALLBA library [1] of metaheuristics. The noisy functions were tackled by connecting the C-code of the Black-Box Optimization Benchmarking to our implementation of DEPSO. Each candidate solution was sampled uniformly in [−5, 5]^DIM, where DIM represents the search space dimension. The maximal number of function evaluations was set to 1000 × DIM.
Algorithm 1 Pseudocode of DEPSO
1:  initialize(S)
2:  while not stop condition is met do
3:    for each particle position x_i of the swarm S do
4:      /* Differential Variation */
5:      for each dimension j of the particle position x_i do
6:        w(j) ← x_r1(j) − x_r2(j)
7:        if r ≤ Cr then
8:          v'_i(j) ← ω · v_i(j) + μ · w(j) + φ · (g(j) − x_i(j))
9:        end if
10:     end for
11:     for each dimension j of the particle position x_i do
12:       x'_i(j) ← x_i(j) + v'_i(j)
13:     end for
14:     if f(x'_i) ≤ f(x_i) then
15:       x''_i ← x'_i
16:     else
17:       x''_i ← x_i
18:     end if
19:     /* Mutation */
20:     if UN(0,1) < p_mut then
21:       for each dimension j of the particle position x_i do
22:         x''_i(j) ← x^low(j) + UN(0,1) · (x^upp(j) − x^low(j))
23:       end for
24:     end if
25:   end for
26: end while
27: Output: Best solution found
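As a concrete reading of equations (1)-(5) and Algorithm 1, the following is a minimal C++ sketch of one DEPSO generation. It is not the authors' MALLBA-based implementation: the names (Particle, depso_step) are ours, and since the paper does not state whether a rejected trial also discards its updated velocity, this sketch simply keeps the old particle whole.

// Minimal sketch of one DEPSO generation (equations (1)-(5), Algorithm 1).
// Assumption: a rejected trial keeps the old particle, velocity included.
#include <random>
#include <vector>

struct Particle {
    std::vector<double> x, v;  // position and velocity
    double fit;                // cached objective value f(x)
};

void depso_step(std::vector<Particle>& S, std::vector<double>& g, double& gfit,
                double (*f)(const std::vector<double>&),
                const std::vector<double>& xlow, const std::vector<double>& xupp,
                double omega, double Cr, double pmut, std::mt19937& rng) {
    std::uniform_real_distribution<double> UN(0.0, 1.0);
    std::uniform_int_distribution<std::size_t> pick(0, S.size() - 1);
    const std::size_t DIM = g.size();

    for (std::size_t i = 0; i < S.size(); ++i) {
        // Two distinct companions r1, r2 != i build the differential vector.
        std::size_t r1 = pick(rng), r2 = pick(rng);
        while (r1 == i) r1 = pick(rng);
        while (r2 == i || r2 == r1) r2 = pick(rng);

        const double mu = UN(rng), phi = UN(rng);  // mu, phi ~ UN(0,1), Table 1
        Particle trial = S[i];
        for (std::size_t j = 0; j < DIM; ++j) {
            const double w = S[r1].x[j] - S[r2].x[j];    // w = x_r1 - x_r2
            if (UN(rng) <= Cr)                           // crossover test, eq. (2)
                trial.v[j] = omega * trial.v[j] + mu * w
                           + phi * (g[j] - trial.x[j]);  // velocity update, eq. (1)
            trial.x[j] += trial.v[j];                    // move, eq. (4)
        }
        trial.fit = f(trial.x);
        if (trial.fit <= S[i].fit) S[i] = trial;         // greedy selection, eq. (3)

        if (UN(rng) < pmut) {                            // uniform mutation, eq. (5)
            for (std::size_t j = 0; j < DIM; ++j)
                S[i].x[j] = xlow[j] + UN(rng) * (xupp[j] - xlow[j]);
            S[i].fit = f(S[i].x);
        }
        if (S[i].fit < gfit) { gfit = S[i].fit; g = S[i].x; }  // update global best
    }
}

Every call to f counts against the evaluation budget, so the caller stops the while-loop of Algorithm 1 once 1000 × DIM evaluations have been consumed.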
Our proposal was tested by performing 15 independent runs for each noisy function and each dimension. Table 1 shows the parameter setting used to configure DEPSO. These parameters were tuned in the context of the CEC'05 special session on real parameter optimization [11, 5], reaching results statistically similar to those of the best participant algorithms (G-CMA-ES [2] and K-PCX [10]) in that session. This parameterization was kept the same for all the experiments, and therefore the crafting effort [6] is zero.
Table 1: Parameter setting used in DEPSO

  Description            Parameter  Value
  Swarm size             Ss         20
  Crossover probability  Cr         0.9
  Inertia weight         ω          0.1
  Differential mutation  μ          UN(0,1)
  Social coefficient     φ          1 · UN(0,1)
  Mutation probability   p_mut      1/DIM
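For concreteness, the Table 1 setting and the protocol of this section can be wired together as in the skeleton below. DepsoParams, run_depso and bbob_f are assumed names, not the paper's code nor the BBOB API; in the real experiments the BBOB C-code supplies the objective and the bookkeeping.

// Table 1 parameters plus the Section 3 protocol: 15 independent runs,
// sampling in [-5,5]^DIM, budget of 1000*DIM evaluations per run.
#include <vector>

struct DepsoParams {
    int    swarm_size = 20;   // Ss
    double Cr         = 0.9;  // crossover probability
    double omega      = 0.1;  // inertia weight
    // mu and phi are redrawn from UN(0,1) at every velocity update
    static double pmut(int dim) { return 1.0 / dim; }  // mutation probability
};

double bbob_f(const std::vector<double>& x);  // one BBOB noisy function (assumed hook)
void run_depso(double (*f)(const std::vector<double>&),
               const std::vector<double>& xlow, const std::vector<double>& xupp,
               long budget, const DepsoParams& p);  // wraps the generation loop above

int main() {
    const DepsoParams p;
    for (int dim : {2, 3, 5, 10, 20, 40}) {
        std::vector<double> xlow(dim, -5.0), xupp(dim, 5.0);  // sampling box
        const long budget = 1000L * dim;                      // maximal #FEs
        for (int run = 0; run < 15; ++run)                    // independent trials
            run_depso(bbob_f, xlow, xupp, budget, p);
    }
}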
4. RESULTS AND DISCUSSION

In this section, the results are presented and discussed in terms of the number of successful trials (reaching f_opt + Δf with Δf = 10^−8) for each kind of noisy function: moderate (f101-f106), severe (f107-f121), and severe multimodal (f122-f139).
Figure 1 shows the Expected Running Time (ERT, •) to reach f_opt + Δf and the median number of function evaluations of successful trials (+). As we can observe, our proposal obtains the highest number of successful trials on moderate noise functions, specifically in dimensions 2, 3, 5, and 10 on f101 and f102. With regard to severe noise functions, the target value is reached in 9 out of 15 trials for the lower dimensions. On severe noise multimodal functions, DEPSO obtains successful trials on f125 and f130 for dimension 2, and on f128 for dimensions 2, 3, and 5. As a summary, Table 2 ranks the functions by the number of successful trials that DEPSO obtained on them (best ranked functions at the top).

For the higher dimensions (20 and 40), the best behavior of our proposal can be observed when facing moderate noise functions, as shown in Table 3, concretely on functions f101 and f102, where error precisions slightly higher than 10^−5 were reached. Secondly, for severe noise and severe noise multimodal functions (Tables 3 and 4, DIM=20), the best achieved Δf precisions (of the median trials) are always close to 1e+0, although some of them are close to 1e−2, as on f109, f125, and f127. We suspect that the relatively low convergence rate on severe noise functions (for the higher dimensions) can be due to the small maximal number of function evaluations employed (1000 × DIM).
Concerning this last issue, the empirical cumulative distribution function (ECDF) of the running time (in terms of function evaluations) is shown in Figure 2 (left). We can observe that the ECDF improves with the number of function evaluations for all noisy functions. Specifically for severe noise and severe noise multimodal functions, the ECDF of the best achieved Δf (Figure 2, right) shows that the difference to the optimal function value is reduced noticeably (with the proportion of trials).
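As a reading aid for Figure 2: an ECDF is simply, for each budget t, the fraction of trials whose recorded value (e.g. number of evaluations to reach the target) is at most t. A minimal sketch follows; ecdf is our name, not a BBOB routine.

// Empirical cumulative distribution of a sample: fraction of trials whose
// recorded value (e.g. #FEs to reach the target) is <= t.
#include <vector>

double ecdf(const std::vector<double>& samples, double t) {
    double hits = 0.0;
    for (double v : samples)
        if (v <= t) ++hits;          // this trial is "done" within budget t
    return hits / samples.size();
}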
Table 2: Noisy functions ranked by the number of successful trials obtained by DEPSO. Columns indicate dimensions.

  2     3     5     10    20   40
  f101  f101  f101  f101  -    -
  f102  f102  f102  f102  -    -
  f113  f107  f107  -     -    -
  f128  f113  f113  -     -    -
  f107  f128  f128  -     -    -
  f103  f103  -     -     -    -
  f115  f115  -     -     -    -
  f125  -     -     -     -    -
  f130  -     -     -     -    -
  f116  -     -     -     -    -
  f105  -     -     -     -    -
5. CPU TIMING EXPERIMENT

For the timing experiment, the same DEPSO algorithm was run on f8 until at least 30 seconds had passed (according to the exampletiming procedure available in BBOB 2009 [6]). These experiments were conducted on an Intel(R) Core(TM)2 CPU at 1.66 GHz with 1 GB RAM, running Linux Ubuntu 8.10, using the C-code provided. The results were 1.1, 0.9, 1.3, 2.3, 3.9, and 7.1 × 10^−6 seconds per function evaluation in dimensions 2, 3, 5, 10, 20, and 40, respectively.
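A minimal sketch of an exampletiming-style measurement is given below; the real procedure from [6] times the complete optimizer loop, and run_depso_counting_evals is an assumed helper standing in for it (it runs the optimizer once and returns how many evaluations it spent).

// Run the optimizer repeatedly until >= 30 s have elapsed, then report the
// average wall-clock time per function evaluation, as in the timing setup.
#include <chrono>
#include <cstdio>

long run_depso_counting_evals(int dim);  // assumed helper, returns #FEs spent

int main() {
    const int dim = 20;
    long evals = 0;
    const auto t0 = std::chrono::steady_clock::now();
    double secs = 0.0;
    do {
        evals += run_depso_counting_evals(dim);
        secs = std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - t0).count();
    } while (secs < 30.0);  // keep going until at least 30 s have passed
    std::printf("%.1e seconds per function evaluation in %d-D\n",
                secs / evals, dim);
}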
6. CONCLUSION

In this work we have experimentally studied DEPSO, a simple and easy-to-implement optimization algorithm constructed by hybridizing the Particle Swarm Optimizer with Differential Evolution operations. The experiments have been made in the context of the special session on real parameter Black-Box Optimization Benchmarking (GECCO BBOB 2009), performing the complete procedure previously established and dealing with noisy functions of dimension 2, 3, 5, 10, 20, and 40 variables. Our proposal obtained a reasonable coverage (success) rate for dimensions 2, 3, 5, and 10, specifically on moderate noise and severe noise multimodal functions. Successful trials were found for 11 out of 30 functions. The use of the same parameter setting for all functions (and dimensions), together with the relatively small number of function evaluations used (1000 × DIM), leads us to think that DEPSO can be easily improved to better cover noisy functions of higher dimension. Future experiments with different parameter settings depending on each family of noisy functions, and performing a larger number of evaluations, can be made in this sense.
7. ACKNOWLEDGMENTS

The authors acknowledge funds from the Spanish Ministry of Science and Innovation and European FEDER under contract TIN2008-06491-C04-01 (M* project, http://mstar.lcc.uma.es) and from CICE, Junta de Andalucía, under contract P07-TIC-03044 (DIRICOM project, http://diricom.lcc.uma.es).
8. REFERENCES

[1] E. Alba, G. Luque, J. García-Nieto, G. Ordonez, and G. Leguizamón. MALLBA: a software library to design efficient optimisation algorithms. International Journal of Innovative Computing and Applications (IJICA), 1(1):74–85, 2007.

[2] A. Auger and N. Hansen. A restart CMA evolution strategy with increasing population size. In IEEE Congress on Evolutionary Computation, volume 2, pages 1769–1776, 2005.

[3] S. Das, A. Abraham, and A. Konar. Particle swarm optimization and differential evolution algorithms: Technical analysis, applications and hybridization perspectives. In Advances of Computational Intelligence in Industrial Systems, 2008.

[4] S. Finck, N. Hansen, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2009: Presentation of the noisy functions. Technical Report 2009/21, Research Center PPE, 2009.

[5] J. García-Nieto, J. Apolloni, E. Alba, and G. Leguizamón. Algoritmo basado en cúmulos de partículas y evolución diferencial para la resolución de problemas de optimización continua. In VI Congreso Español sobre Metaheurísticas, Algoritmos Evolutivos y Bioinspirados (MAEB'09), pages 433–440, Málaga, February 11–13, 2009.

[6] N. Hansen, A. Auger, S. Finck, and R. Ros. Real-parameter black-box optimization benchmarking 2009: Experimental setup. Technical Report RR-6828, INRIA, 2009.

[7] N. Hansen, S. Finck, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2009: Noisy functions definitions. Technical Report RR-6869, INRIA, 2009.
[8] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, pages 1942–1948, Piscataway, NJ, 1995.

[9] K. V. Price, R. M. Storn, and J. A. Lampinen. Differential Evolution: A Practical Approach to Global Optimization. Springer, 2005.

[10] A. Sinha, S. Tiwari, and K. Deb. A population-based, steady-state procedure for real-parameter optimization. In IEEE Congress on Evolutionary Computation, 2005.

[11] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical report, Nanyang Technological University, Singapore, 2005.

[Figure 1: 30 log-log panels, one per noisy function f101-f130, each plotting ERT against dimension (2, 3, 5, 10, 20, 40). Panels: 101 Sphere moderate Gauss, 102 Sphere moderate unif, 103 Sphere moderate Cauchy; 104-106 Rosenbrock moderate Gauss/unif/Cauchy; 107-109 Sphere Gauss/unif/Cauchy; 110-112 Rosenbrock Gauss/unif/Cauchy; 113-115 Step-ellipsoid Gauss/unif/Cauchy; 116-118 Ellipsoid Gauss/unif/Cauchy; 119-121 Sum of different powers Gauss/unif/Cauchy; 122-124 Schaffer F7 Gauss/unif/Cauchy; 125-127 Griewank-Rosenbrock Gauss/unif/Cauchy; 128-130 Gallagher Gauss/unif/Cauchy. Legend: target exponents +1, +0, -1, -2, -3, -5, -8.]
Figure 1: Expected Running Time (ERT, •) to reach f_opt + Δf and median number of function evaluations of successful trials (+), shown for Δf = 10, 1, 10^−1, 10^−2, 10^−3, 10^−5, 10^−8 (the exponent is given in the legend of f101 and f130) versus dimension in log-log presentation. The ERT(Δf) equals #FEs(Δf) divided by the number of successful trials, where a trial is successful if f_opt + Δf was surpassed during the trial. The #FEs(Δf) are the total number of function evaluations while f_opt + Δf was not surpassed during the trial, from all respective trials (successful and unsuccessful), and f_opt denotes the optimal function value. Crosses (×) indicate the total number of function evaluations, #FEs(−∞). Numbers above ERT-symbols indicate the number of successful trials. Annotated numbers on the ordinate are decimal logarithms. Additional grid lines show linear and quadratic scaling.
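The ERT recipe in this caption is short enough to state in code. The Trial record and ert function below are our illustration, not the BBOB post-processing sources.

// ERT(Δf) = (sum over all trials of #FEs spent while f_opt + Δf was not yet
// surpassed) / (number of successful trials).
#include <limits>
#include <vector>

struct Trial {
    long fes;      // evaluations until the target was surpassed, or the
                   // whole trial length if it never was
    bool success;  // whether f_opt + Δf was surpassed in this trial
};

double ert(const std::vector<Trial>& trials) {
    long fes = 0, succ = 0;
    for (const Trial& t : trials) {
        fes += t.fes;
        if (t.success) ++succ;
    }
    return succ > 0 ? static_cast<double>(fes) / succ
                    : std::numeric_limits<double>::infinity();  // no success
}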

Δf      #  ERT    10%    90%    RTsucc  |   #  ERT    10%    90%    RTsucc

f101 in 5-D, N=15, mFE=3600             |  f101 in 20-D, N=15, mFE=40040
10      15 8.2e1  6.6e1  9.9e1  8.2e1   |  15 1.3e3  1.2e3  1.4e3  1.3e3
1       15 4.0e2  3.7e2  4.2e2  4.0e2   |  15 3.6e3  3.4e3  3.8e3  3.6e3
1e-1    15 7.3e2  7.0e2  7.5e2  7.3e2   |  15 7.9e3  7.6e3  8.2e3  7.9e3
1e-3    15 1.4e3  1.4e3  1.5e3  1.4e3   |  15 2.3e4  2.1e4  2.4e4  2.3e4
1e-5    15 2.1e3  2.1e3  2.2e3  2.1e3   |   7 7.9e4  5.9e4  1.2e5  3.7e4
1e-8    15 3.2e3  3.2e3  3.3e3  3.2e3   |   0 27e-6  29e-8  12e-5  4.0e4

f102 in 5-D, N=15, mFE=3480             |  f102 in 20-D, N=15, mFE=40040
10      15 8.6e1  6.5e1  1.1e2  8.6e1   |  15 1.5e3  1.4e3  1.5e3  1.5e3
1       15 3.2e2  2.7e2  3.6e2  3.2e2   |  15 3.5e3  3.4e3  3.6e3  3.5e3
1e-1    15 6.5e2  6.0e2  7.0e2  6.5e2   |  15 7.5e3  7.3e3  7.8e3  7.5e3
1e-3    15 1.3e3  1.2e3  1.4e3  1.3e3   |  13 3.0e4  2.5e4  3.7e4  2.5e4
1e-5    15 2.1e3  2.0e3  2.1e3  2.1e3   |   3 2.0e5  1.2e5  6.0e5  4.0e4
1e-8    15 3.2e3  3.1e3  3.3e3  3.2e3   |   0 46e-6  61e-7  14e-4  4.0e4

f103 in 5-D, N=15, mFE=10040            |  f103 in 20-D, N=15, mFE=40040
10      15 1.3e2  1.0e2  1.6e2  1.3e2   |  15 1.3e3  1.3e3  1.4e3  1.3e3
1       15 3.9e2  3.5e2  4.3e2  3.9e2   |  15 3.8e3  3.6e3  3.9e3  3.8e3
1e-1    15 7.4e2  7.0e2  7.9e2  7.4e2   |  12 2.4e4  1.8e4  3.1e4  2.0e4
1e-3    15 2.2e3  1.9e3  2.5e3  2.2e3   |   0 34e-3  16e-3  12e-2  3.2e4
1e-5     8 1.4e4  1.1e4  2.0e4  8.4e3   |   .
1e-8     0 33e-7  79e-8  16e-5  7.9e3   |   .

f104 in 5-D, N=15, mFE=10040            |  f104 in 20-D, N=15, mFE=40040
10      15 7.7e2  7.0e2  8.4e2  7.7e2   |   0 19e+0  18e+0  82e+0  3.5e4
1       10 8.1e3  6.5e3  1.0e4  6.8e3   |   .
1e-1     2 7.1e4  3.7e4  >1e5   1.0e4   |   .
1e-3     0 53e-2  60e-3  19e-1  8.9e3   |   .

f105 in 5-D, N=15, mFE=10040            |  f105 in 20-D, N=15, mFE=40040
10      15 8.9e2  8.0e2  9.8e2  8.9e2   |   0 20e+0  15e+0  84e+0  3.5e4
1        5 2.3e4  1.6e4  4.1e4  8.8e3   |   .
1e-1     2 7.2e4  3.7e4  >1e5   1.0e4   |   .
1e-3     0 17e-1  87e-3  30e-1  7.9e3   |   .

f106 in 5-D, N=15, mFE=10040            |  f106 in 20-D, N=15, mFE=40040
10      15 8.7e2  7.9e2  9.5e2  8.7e2   |   0 25e+0  18e+0  83e+0  3.5e4
1        6 1.8e4  1.3e4  3.1e4  7.5e3   |   .
1e-1     1 1.5e5  7.3e4  >1e5   1.0e4   |   .
1e-3     0 14e-1  18e-2  31e-1  8.9e3   |   .

f107 in 5-D, N=15, mFE=10040            |  f107 in 20-D, N=15, mFE=40040
10      15 1.3e2  9.4e1  1.6e2  1.3e2   |   0 38e+0  18e+0  71e+0  1.4e4
1       15 7.2e2  6.2e2  8.1e2  7.2e2   |   .
1e-1    15 1.6e3  1.5e3  1.8e3  1.6e3   |   .
1e-3    15 3.5e3  3.2e3  3.8e3  3.5e3   |   .
1e-5    13 7.1e3  5.8e3  9.0e3  5.6e3   |   .
1e-8    11 1.2e4  9.6e3  1.5e4  8.3e3   |   .

f108 in 5-D, N=15, mFE=10040            |  f108 in 20-D, N=15, mFE=40040
10      15 2.0e2  9.4e1  3.2e2  2.0e2   |   0 81e+0  59e+0  14e+1  1.1e4
1        3 4.8e4  2.8e4  1.5e5  8.3e3   |   .
1e-1     0 19e-1  74e-2  47e-1  2.8e3   |   .

f109 in 5-D, N=15, mFE=10040            |  f109 in 20-D, N=15, mFE=40040
10      15 1.1e2  8.9e1  1.3e2  1.1e2   |  15 2.4e3  2.0e3  2.8e3  2.4e3
1       15 3.8e2  3.4e2  4.2e2  3.8e2   |   8 6.1e4  4.6e4  9.0e4  3.3e4
1e-1    15 1.2e3  1.0e3  1.5e3  1.2e3   |   0 99e-2  67e-2  28e-1  3.2e4
1e-3     2 7.2e4  3.6e4  >1e5   9.0e3   |   .
1e-5     0 51e-4  97e-5  44e-3  7.1e3   |   .

f110 in 5-D, N=15, mFE=10040            |  f110 in 20-D, N=15, mFE=40040
10      15 1.6e3  1.4e3  1.9e3  1.6e3   |   0 72e+2  14e+2  22e+3  2.5e4
1        4 3.4e4  2.2e4  7.0e4  9.1e3   |   .
1e-1     0 20e-1  38e-2  31e-1  7.1e3   |   .

f111 in 5-D, N=15, mFE=10040            |  f111 in 20-D, N=15, mFE=40040
10       1 1.5e5  7.4e4  >1e5   1.0e4   |   0 77e+3  42e+3  14e+4  3.2e3
1        0 31e+0  12e+0  94e+0  5.0e3   |   .

f112 in 5-D, N=15, mFE=10040            |  f112 in 20-D, N=15, mFE=40040
10      15 7.9e2  7.4e2  8.4e2  7.9e2   |   0 37e+0  24e+0  13e+1  3.2e4
1        2 6.7e4  3.6e4  >1e5   1.0e4   |   .
1e-1     1 1.5e5  7.4e4  >1e5   1.0e4   |   .
1e-3     0 25e-1  31e-2  30e-1  7.1e3   |   .

f113 in 5-D, N=15, mFE=10040            |  f113 in 20-D, N=15, mFE=40040
10      15 4.1e2  3.3e2  5.0e2  4.1e2   |   0 14e+1  91e+0  29e+1  1.6e4
1       15 1.9e3  1.5e3  2.2e3  1.9e3   |   .
1e-1    12 8.1e3  6.5e3  1.0e4  6.4e3   |   .
1e-3     4 3.4e4  2.1e4  7.0e4  8.8e3   |   .
1e-5     4 3.4e4  2.2e4  7.0e4  8.8e3   |   .
1e-8     4 3.4e4  2.2e4  7.0e4  9.0e3   |   .

f114 in 5-D, N=15, mFE=10040            |  f114 in 20-D, N=15, mFE=40040
10      13 3.6e3  2.1e3  5.4e3  2.7e3   |   0 49e+1  44e+1  83e+1  2.5e4
1        0 42e-1  15e-1  15e+0  5.0e3   |   .

f115 in 5-D, N=15, mFE=10040            |  f115 in 20-D, N=15, mFE=40040
10      15 3.1e2  2.7e2  3.6e2  3.1e2   |   9 4.5e4  3.6e4  6.1e4  3.1e4
1       13 2.7e3  1.6e3  4.0e3  2.5e3   |   0 93e-1  64e-1  24e+0  3.2e4
1e-1     8 1.2e4  8.6e3  1.7e4  7.0e3   |   .
1e-3     0 58e-3  11e-3  10e-1  6.3e3   |   .

f116 in 5-D, N=15, mFE=10040            |  f116 in 20-D, N=15, mFE=40040
10       9 1.1e4  7.8e3  1.5e4  6.5e3   |   0 16e+3  94e+2  38e+3  1.4e4
1        2 6.9e4  3.6e4  >1e5   1.0e4   |   .
1e-1     2 7.1e4  3.6e4  >1e5   1.0e4   |   .
1e-3     0 73e-1  45e-3  26e+0  7.9e3   |   .

f117 in 5-D, N=15, mFE=10040            |  f117 in 20-D, N=15, mFE=40040
10       0 15e+1  61e+0  29e+1  3.2e3   |   0 26e+3  18e+3  43e+3  7.1e3

f118 in 5-D, N=15, mFE=10040            |  f118 in 20-D, N=15, mFE=40040
10      11 7.1e3  5.3e3  9.5e3  5.2e3   |   0 55e+1  38e+1  73e+1  3.5e4
1        3 4.6e4  2.6e4  1.4e5  7.7e3   |   .
1e-1     1 1.5e5  7.5e4  >2e5   1.0e4   |   .
1e-3     0 37e-1  21e-2  19e+0  7.9e3   |   .

f119 in 5-D, N=15, mFE=10040            |  f119 in 20-D, N=15, mFE=40040
10      15 1.8e1  1.0e1  2.6e1  1.8e1   |  10 3.5e4  2.6e4  4.9e4  2.4e4
1       15 6.8e2  5.8e2  8.0e2  6.8e2   |   0 83e-1  56e-1  15e+0  1.8e4
1e-1    14 3.1e3  2.4e3  4.0e3  2.9e3   |   .
1e-3     7 1.9e4  1.4e4  2.8e4  9.7e3   |   .
1e-5     0 19e-4  17e-5  29e-3  7.9e3   |   .

f120 in 5-D, N=15, mFE=10040            |  f120 in 20-D, N=15, mFE=40040
10      15 3.7e1  1.6e1  6.0e1  3.7e1   |   0 31e+0  21e+0  42e+0  3.5e3
1        7 1.4e4  9.6e3  2.4e4  5.9e3   |   .
1e-1     0 10e-1  60e-2  23e-1  2.8e3   |   .
Table 3: Shown are, for functions f101–f120 and for a given target difference to the optimal function value Δf: the number of successful trials (#); the expected running time to surpass f_opt + Δf (ERT, see Figure 1); the 10%-tile and 90%-tile of the bootstrap distribution of ERT; the average number of function evaluations in successful trials or, if none was successful, as last entry the median number of function evaluations to reach the best function value (RT_succ). If f_opt + Δf was never reached, the figures (set in italics in the original layout) denote the best achieved Δf-value of the median trial and of the 10% and 90%-tile trials. Furthermore, N denotes the number of trials, and mFE denotes the maximum number of function evaluations executed in one trial. Δf rows with no data in either dimension are omitted, and a '.' marks a dimension block without data. See Figure 1 for the names of the functions.

Citations
Journal ArticleDOI
TL;DR: An attempt is made to review hybrid optimization techniques in which one main algorithm is a well-known metaheuristic, particle swarm optimization (PSO); three hybrid PSO algorithms are compared on a test suite of nine conventional benchmark problems.

246 citations


Cites background or methods from "Particle swarm hybridized with diff..."

  • ...[50] evaluated a Particle Swarm Optimizer hybridized with Differential Evolution and applied it to the Black-Box Optimization Benchmarking for noisy functions....

  • ...[50] 2009 DEPSO Noisy functions Zhang et al....

Journal ArticleDOI
01 Jan 2016
TL;DR: Numerical, statistical, and graphical analysis reveals the competency of the proposed MBDE, which is employed to solve 12 basic, 25 CEC 2005, and 30 CEC 2014 unconstrained benchmark functions.
Abstract: A novel "Memory Based DE" (MBDE) algorithm is proposed for unconstrained optimization. The algorithm relies on "swarm mutation" and "swarm crossover", and its robustness is increased greatly by a "use of memory" mechanism; it obtains competitive performance with state-of-the-art methods, with a better convergence rate and better efficiency. In optimization, the performance of differential evolution (DE) and of the hybrid versions existing in the literature is highly affected by an inappropriate choice of operators like mutation and crossover. In general practice, DE does not employ any strategy of memorizing the so-far-best results obtained in the initial part of the previous generation. In this paper, a new "Memory based DE (MBDE)" is presented, where two "swarm operators" have been introduced; these operators are based on the pBEST and gBEST mechanisms of particle swarm optimization. The proposed MBDE is employed to solve 12 basic, 25 CEC 2005, and 30 CEC 2014 unconstrained benchmark functions. In order to further test its efficacy, five different test systems of the model order reduction (MOR) problem for single-input single-output systems are solved by MBDE. The results of MBDE are compared with state-of-the-art algorithms that also solved those problems. Numerical, statistical, and graphical analysis reveals the competency of the proposed MBDE.

59 citations

Journal ArticleDOI
TL;DR: The empirical results indicate that the proposed FA model outperforms other state-of-the-art FA variants and classical metaheuristic search methods in solving diverse complex unimodal and multimodal optimization and ensemble reduction problems.
Abstract: In this research, we propose a variant of the firefly algorithm (FA) for classifier ensemble reduction. It incorporates both accelerated attractiveness and evading strategies to overcome the premature convergence problem of the original FA model. The attractiveness strategy takes not only the neighboring but also global best solutions into account, in order to guide the firefly swarm to reach the optimal regions with fast convergence while the evading action employs both neighboring and global worst solutions to drive the search out of gloomy regions. The proposed algorithm is subsequently used to conduct discriminant base classifier selection for generating optimized ensemble classifiers without compromising classification accuracy. Evaluated with standard, shifted, and composite test functions, as well as the Black-Box Optimization Benchmarking test suite and several high dimensional UCI data sets, the empirical results indicate that, based on statistical tests, the proposed FA model outperforms other state-of-the-art FA variants and classical metaheuristic search methods in solving diverse complex unimodal and multimodal optimization and ensemble reduction problems. Moreover, the resulting ensemble classifiers show superior performance in comparison with those of the original, full-sized ensemble models.

42 citations


Cites methods from "Particle swarm hybridized with diff..."

  • ...PSO variants, i.e. PSO_Bounds (El-Abd and Kamel, 2009b), EDA-PSO (El-Abd and Kamel, 2009a), DE-PSO (García-Nieto et al., 2009), and other methods such as DASA (Korošec and Šilc, 2009) and BayEDAcG (Gallagher, 2009) embed diverse search strategies to overcome local optima traps and show competitive performance in tackling BBOB problems....

  • ...Other selected methods such as DE-PSO, SNES and BayEDAcG employ the maximum of 104D number of function evaluations....

  • ...PSO variants, i.e. PSO_Bounds (El-Abd and Kamel, 2009b), EDA-PSO (El-Abd and Kamel, 2009a), DE-PSO (García-Nieto et al., 2009), and other methods such as DASA (Korošec and Šilc, 2009) and BayEDAcG (Gallagher, 2009) embed diverse search strategies to overcome local optima traps and show competitive…...

  • ...The selected methods include the best 2009 optimizer (provided automatically by the COCO platform) (Hansen et al., 2010a), Separable Natural Evolution Strategies (SNES) (Schaul, 2012a), Exponential NES (xNES) (Schaul, 2012b), xNES with Adaptation Sampling (xNESas) (Schaul, 2012c), Differential Ant-Stigmergy Algorithm (DASA) (Korošec and Šilc, 2009), Simultaneous Perturbation Stochastic Approximation (SPSA) (Finck and Beyer, 2010), PSO hybridized with Estimation of Distribution Algorithm (EDA) (EDA-PSO) (El-Abd and Kamel, 2009a), PSO with adaptive bounds (PSO_Bounds) (El-Abd and Kamel, 2009b), PSO incorporated with DE (DE-PSO) (García-Nieto et al., 2009), BayEDAcG (Gallagher, 2009) and the Pure-Random-Search algorithm (RANDOMSEARCH) (Auger and Ros, 2009)....

  • ...…of Distribution Algorithm (EDA) (EDA-PSO) (El-Abd and Kamel, 2009a), PSO with adaptive bounds (PSO_Bounds) (El-Abd and Kamel, 2009b), PSO incorporated with DE (DE-PSO) (García-Nieto et al., 2009), BayEDAcG (Gallagher, 2009) and the Pure-Random-Search algorithm (RANDOMSEARCH) (Auger and Ros, 2009)....

Proceedings ArticleDOI
04 Jun 2014
TL;DR: A differential particle swarm evolution (DPSE) algorithm which combines the basic idea of velocity and position update rules from particle swarm optimization (PSO) and the concept of differential mutation from differential evolution (DE) in a new way is presented.
Abstract: We present a differential particle swarm evolution (DPSE) algorithm which combines the basic idea of velocity and position update rules from particle swarm optimization (PSO) and the concept of differential mutation from differential evolution (DE) in a new way. With the goal of optimizing within a limited number of function evaluations, the algorithm is tested and compared with the standard PSO and DE methods on 14 benchmark problems to illustrate that DPSE has the potential to achieve a faster convergence and a better solution. Simulation results show that, on average, DPSE outperforms DE by 39.20% and PSO by 14.92% on the 14 benchmark problems. To show the feasibility of the proposed strategy on a real-world optimization problem, an application of DPSE to optimize the parameters of active disturbance rejection control (ADRC) in a PUMA-560 robot is presented.

15 citations


Cites background from "Particle swarm hybridized with diff..."

  • ...The basic idea of DE is to take the difference vector between two individuals and add a scaled version of the difference vector (a mutant vector) to a third individual to create a new trial vector to update each individual [5]....

Journal ArticleDOI
TL;DR: The numerical, statistical and graphical analyses reveal the robustness of the proposed DPD, which uses DE–PSO–DE on the sub-populations of the same population.
Abstract: This paper presents a novel hybridization between differential evolution (DE) and particle swarm optimization (PSO), based on ‘tri-population’ environment. Initially, the whole population (in increasing order of fitness) is divided into three groups—inferior group, mid group and superior group. DE is employed in the inferior and superior groups, whereas PSO is used in the mid-group. This proposed method is named as DPD as it uses DE–PSO–DE on the sub-populations of the same population. Two more strategies namely Elitism (to retain the best obtained values so far) and Non-Redundant Search (to improve the solution quality) have been incorporated in DPD cycle. Considering eight variants of popular mutation operators in one DE, a total of 64 variants of DPD are formed. The top four DPDs have been pointed out through 13 constrained benchmark functions and five engineering design problems. Further, based on the ‘performance’ analysis the best DPD is reported. Later to show superiority and effectiveness, the best DPD is compared with various state-of-the-art approaches. The numerical, statistical and graphical analyses reveal the robustness of the proposed DPD.

8 citations

References
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, and the evolution of several paradigms is outlined, and an implementation of one of the paradigm is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

35,104 citations

Book
13 Dec 2005
TL;DR: This volume explores the differential evolution (DE) algorithm in both principle and practice and is a valuable resource for professionals needing a proven optimizer and for students wanting an evolutionary perspective on global numerical optimization.
Abstract: Problems demanding globally optimal solutions are ubiquitous, yet many are intractable when they involve constrained functions having many local optima and interacting, mixed-type variables.The differential evolution (DE) algorithm is a practical approach to global numerical optimization which is easy to understand, simple to implement, reliable, and fast. Packed with illustrations, computer code, new insights, and practical advice, this volume explores DE in both principle and practice. It is a valuable resource for professionals needing a proven optimizer and for students wanting an evolutionary perspective on global numerical optimization.

5,607 citations


"Particle swarm hybridized with diff..." refers methods in this paper

  • ...Particle Swarm Optimization (PSO) [8] and Differential Evolution (DE) [9] have been successfully used on real parameter function optimization since they are two well adapted algorithms for continuous solution encoding....

  • ...This mechanism is used to increase the exploitation ability of the algorithm through the search space [9]....


01 Jan 2005
TL;DR: This special session is devoted to the approaches, algorithms and techniques for solving real parameter single objective optimization without making use of the exact equations of the test functions.
Abstract: Single objective optimization algorithms are the basis of the more complex optimization algorithms such as multi-objective optimizations algorithms, niching algorithms, constrained optimization algorithms and so on. Research on the single objective optimization algorithms influence the development of these optimization branches mentioned above. In the recent years various kinds of novel optimization algorithms have been proposed to solve real-parameter optimization problems. Eight years have passed since the CEC'05 Special Session on Real-Parameter Optimization [1]. Considering the comments on the CEC'05 test suite received by us, we propose to organize a new competition on real parameter single objective optimization. In the CEC'13 test suite, the previously proposed composition functions [2] are improved and additional test functions are included. This special session is devoted to the approaches, algorithms and techniques for solving real parameter single objective optimization without making use of the exact equations of the test functions. We encourage all researchers to test their algorithms on the CEC'13 test suite which includes 28 benchmark functions. The participants are required to send the final results in the format specified in the technical report to the organizers. The organizers will present an overall analysis and comparison based on these results. We will also use statistical tests on convergence performance to compare algorithms that eventually generate similar final solutions. Papers on novel concepts that help us in understanding problem characteristics are also welcome.

2,989 citations


"Particle swarm hybridized with diff..." refers methods in this paper

  • ...These parameters were tuned in the context of the special session of CEC'05 for real parameter optimization [11, 5] reaching results statistically similar to the best participant algorithms (G-CMA-ES [2] and K-PCX [10]) in that session....

Proceedings ArticleDOI
Anne Auger1, Nikolaus Hansen1
12 Dec 2005
TL;DR: The IPOP-CMA-ES is evaluated on the test suit of 25 functions designed for the special session on real-parameter optimization of CEC 2005, where the population size is increased for each restart (IPOP).
Abstract: In this paper we introduce a restart-CMA-evolution strategy, where the population size is increased for each restart (IPOP). By increasing the population size the search characteristic becomes more global after each restart. The IPOP-CMA-ES is evaluated on the test suit of 25 functions designed for the special session on real-parameter optimization of CEC 2005. Its performance is compared to a local restart strategy with constant small population size. On unimodal functions the performance is similar. On multi-modal functions the local restart strategy significantly outperforms IPOP in 4 test cases whereas IPOP performs significantly better in 29 out of 60 tested cases.

961 citations


"Particle swarm hybridized with diff..." refers methods in this paper

  • ...These parameters were tuned in the context of the special session of CEC'05 for real parameter optimization [11, 5] reaching results statistically similar to the best participant algorithms (G-CMA-ES [2] and K-PCX [10]) in that session....

Frequently Asked Questions (1)
Q1. What contributions have the authors mentioned in the paper "Particle swarm hybridized with differential evolution: black-box optimization benchmarking for noisy functions" ?

In this work the authors evaluate a Particle Swarm Optimizer hybridized with Differential Evolution and apply it to the BlackBox Optimization Benchmarking for noisy functions ( BBOB 2009 ). The authors have performed the complete procedure established in this special session dealing with noisy functions with dimension: 2, 3, 5, 10, 20, and 40 variables.