Differential Evolution With an Individual-Dependent Mechanism

Lixin Tang, Senior Member, IEEE, Yun Dong, and Jiyin Liu

This paper can be cited as “Tang, L., Dong, Y. and Liu, J. (2015) Differential Evolution with an Individual-Dependent Mechanism, IEEE Transactions on
Evolutionary Computation, 19(4), pp.560-574.”
Abstract—Differential evolution is a well-known optimization
algorithm that utilizes the difference of positions between
individuals to perturb base vectors and thus generate new mutant
individuals. However, the difference between the fitness values of
individuals, which may be helpful to improve the performance of
the algorithm, has not been used to tune parameters and choose
mutation strategies. In this paper, we propose a novel variant of
differential evolution with an individual-dependent mechanism
that includes an individual-dependent parameter setting and an
individual-dependent mutation strategy. In the individual-
dependent parameter setting, control parameters are set for
individuals according to the differences in their fitness values. In
the individual-dependent mutation strategy, four mutation
operators with different searching characteristics are assigned to
the superior and inferior individuals, respectively, at different
stages of the evolution process. The performance of the proposed
algorithm is then extensively evaluated on a suite of the 28 latest
benchmark functions developed for the 2013 Congress on
Evolutionary Computation special session. Experimental results
demonstrate the algorithm’s outstanding performance.
Index Terms—Differential evolution, global numerical
optimization, individual-dependent, mutation strategy, parameter
setting.
I. INTRODUCTION
Differential Evolution (DE), proposed by Storn and Price
[1] in 1995, is an efficient population-based global
searching algorithm for solving optimization problems [2]–[4].
Over the years, different variants of DE have been developed to
solve complicated optimization problems in a wide range of
application fields, such as auction [5], decision-making [6],
neural network training [7], chemical engineering [8], robot
control [9], data clustering [10], gene regulatory network [11],
nonlinear system control [12], and aircraft control [13].
DE is a typical evolutionary algorithm (EA) which employs strategies inspired by biological evolution to evolve a population.

This research is partly supported by the State Key Program of the National Natural Science Foundation of China (Grant No. 71032004), the Fund for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 71321001), and the Fund for the National Natural Science Foundation of China (Grant No. 61374203).
Lixin Tang is with The Logistics Institute, Liaoning Key Laboratory of Manufacturing System and Logistics, Northeastern University, Shenyang, 110819, China (e-mail: lixintang@mail.neu.edu.cn).
Yun Dong is with the State Key Laboratory of Synthetical Automation for Process Industry, The Logistics Institute, Northeastern University, Shenyang, 110819, China (e-mail: dydexter@hotmail.com).
Jiyin Liu is with the School of Business and Economics, Loughborough University, Leicestershire LE11 3TU, UK (e-mail: J.Y.Liu@lboro.ac.uk).

Throughout this paper, we suppose that DE is used for
solving minimization problems, and that the objective function
can be expressed as: f(x), x = (x_1, …, x_D) ∈ R^D. We take the
objective function value of each solution as its fitness value in
DE. Note that with this setting a solution with a lower fitness
value is a better solution. Some previous studies (e.g., [6], [7]) also used this setting, while [2], [3] used the objective function only, without defining fitness. To solve the problem, the classic
DE process starts from an initial population of NP individuals
(vectors) in the solution space, with each individual
representing a feasible solution to the problem. At each
generation g, the individuals in the current population are
selected as parents to undergo mutation and crossover to
generate offspring (trial) individuals. Each individual x_{i,g} (i = 1,2,…,NP) in the population of the current generation is chosen once to be a parent (target individual) for crossover with a mutant individual v_{i,g} obtained from a mutation operation in which the base individual is perturbed by a scaled difference of several randomly selected individuals. The offspring individual u_{i,g} generated from the crossover contains genetic information obtained from both the mutant individual and the parent individual. Each element u^j_{i,g} (j = 1,2,…,D) of u_{i,g} should be
restricted within the corresponding upper and lower boundaries.
Otherwise, it will be reinitialized within the solution space. The
mutation and crossover operations of classic DE are illustrated
in Fig. 1. The fitness value (objective function value) of the
offspring individual u_{i,g} is then compared with the fitness value of the corresponding parent individual x_{i,g}, and the superior
individual is chosen to enter the next generation. Then, the new
population is taken as the current population for further
evolution operations. This continues until specific termination
conditions are satisfied. In the final generation, the best
individual will be taken as the solution to the problem.
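To make the loop described above concrete, the following minimal sketch implements classic DE/rand/1/bin for a minimization problem. It is an illustration only, not the authors' code; the function name classic_de and all parameter defaults are ours.

```python
import numpy as np

def classic_de(f, lower, upper, NP=100, F=0.5, CR=0.9, max_gen=1000, seed=0):
    """Minimal sketch of classic DE/rand/1/bin for a minimization problem."""
    rng = np.random.default_rng(seed)
    D = lower.size
    pop = lower + rng.random((NP, D)) * (upper - lower)   # initial population
    fit = np.array([f(x) for x in pop])                    # fitness = objective value

    for g in range(max_gen):
        for i in range(NP):
            # Mutation: perturb a random base vector with one scaled difference vector.
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # Elements outside the bounds are reinitialized within the solution space.
            bad = (v < lower) | (v > upper)
            v[bad] = lower[bad] + rng.random(bad.sum()) * (upper[bad] - lower[bad])
            # Binomial crossover with the target (parent) individual.
            mask = rng.random(D) < CR
            mask[rng.integers(D)] = True      # keep at least one mutant element
            u = np.where(mask, v, pop[i])
            # Selection: the better of parent and trial enters the next generation.
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    return pop[fit.argmin()], fit.min()

# Example use on the 30-D Sphere Function.
best_x, best_f = classic_de(lambda x: np.sum(x ** 2),
                            lower=np.full(30, -100.0), upper=np.full(30, 100.0))
```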
There are only a few DE control parameters that need to be
set: the population size NP, the mutation factor (scale factor) F,
and the crossover probability (crossover rate) CR. Different
parameter settings have different characteristics. In addition,
the strategies used for each operation in DE, especially the
mutation strategies, can vary to obtain diverse searching
features that fit different problems. Therefore, the ability of DE
to solve a specific problem depends heavily on the choice of
strategies [4], and the setting of control parameters [14], [15].
Inappropriate configurations of mutation strategies and control
parameters can cause stagnation due to over-exploration, or can cause premature convergence due to over-exploitation.
Exploration can make the algorithm search every promising
solution area with good diversity, while exploitation can make
the algorithm execute a local search in some promising solution
areas to find the optimal point with a high convergence rate.
Therefore, choosing strategies and setting parameters aimed at
getting a good balance between the algorithm’s effectiveness
(solution quality) and efficiency (convergence rate) have
always been subjects of research.
In the evolution process of a DE population, superior
individuals and inferior individuals play different roles. The
former guide the searching direction, and the latter maintain
population diversity. Generally, in the solution space, better
solutions are more likely to be found in the neighborhoods of
superior individuals or in areas relatively far from inferior
individuals. Unfortunately, these differences have not been
reflected in the control parameter setting and mutation strategy
selection in classic DE and its variants.
In this paper, a novel DE variant (IDE) with an individual-
dependent mechanism is presented. The new mechanism in
IDE includes an individual-dependent parameter (IDP) setting
and an individual-dependent mutation (IDM) strategy. The IDP
setting first ranks individuals based on their fitness values and
then determines the parameter values of the mutation and
crossover operations for each individual based on its rank
value. In IDP, superior individuals tend to be assigned with
smaller parameter values so as to exploit their neighborhoods in
which better solutions are likely contained, while inferior
individuals tend to be assigned with larger parameter values so
as to explore further areas in the solution space. The IDM
strategy assigns four mutation operators with different
searching characteristics for superior and inferior individuals at
different stages of the iteration process. Further, the IDM
strategy introduces perturbations to the difference vector by
employing diverse elements generated from the solution space.
In response to the decreased diversity of the population, we
propose to increase the proportions of the explorative mutation
operator and the perturbations following an exponential
function. Extensive experiments are carried out to evaluate the
performance of IDE by comparing it with five IDE variants,
five state-of-the-art DE variants, ten up-to-date DE variants,
and three other well-known EAs on the latest 28 standard
benchmark functions listed in the CEC 2013 contest.
The remainder of this paper is organized as follows. In
Section II, the related work on DE mutation strategy selection and control parameter setting is reviewed. After the
development of the IDP setting and the IDM strategy, the
complete procedure of IDE is presented in Section III. In
Section IV, experimental results are reported to demonstrate the
effectiveness of IDE. Section V gives conclusions and indicates
possible future work.
II. LITERATURE REVIEW
Over the past decades, researchers have developed many
different mutation strategies [16], [17]. It has been shown that
some of them are fit for global search with good exploration
ability and others are good at local search with good
exploitation ability [3]. For example, mutation strategies with
two difference vectors (e.g., DE/rand/2) can increase the
population diversity compared to strategies with a single
difference vector (e.g., classic DE/rand/1), while mutation
strategies taking the best individual of the current generation in
the base vector (e.g., DE/best/1 and DE/current-to-best/1) can
strengthen exploitive search and so speed up convergence.
However, to improve algorithm robustness, exploration and
exploitation must be simultaneously combined into the
evolution strategy. Therefore, single-mutation-operator strategies with composite searching features (such as BDE [18], DEGL [19], JADE [20], ProDE [21], BoRDE [22], and TDE [23]) and multi-mutation-operator strategies with different searching features (such as SaDE [24], [25], TLBSaDE [26], CoDE [27], EPSDE [28], [29], SMADE [30], jDEsoo [31], and SPSRDEMMS [32]) were proposed. Epitropakis et al. [18]
linearly combined an explorative and an exploitive mutation
operator to form a hybrid approach (BDE) with an attempt to
balance these two operators. DEGL, proposed by Das et al.
[19], combines global and local mutant individuals to form the
actual mutant individuals with a weight coefficient in each
generation. In JADE, proposed by Zhang and Sanderson [20],
an optional external archive is combined with a mutation
strategy DE/current-to-pbest/1 that utilizes historical
information to direct population searching. The individuals in
the neighborhood of the target individual are selected to
participate in the mutation operation in ProDE, proposed by
Epitropakis et al. [21]. To obtain both good diversity and fast
convergence rate, Lin et al. [22] proposed a DE variant with a
best of random mutation strategy (BoRDE), in which the
individual with the best fitness value among several randomly
chosen individuals is selected as the base vector in the mutation
operation. Fan and Lampinen [23] introduced a trigonometric
mutation operation to form TDE (trigonometric mutation
differential evolution), in which the searching direction of each
new mutant individual is biased to the best among three
different individuals randomly chosen in the mutation. Each of
the four given effective trial vector generation strategies
(mutation and crossover operations) is selected to be
implemented according to a self-adaptive probability in SaDE,
proposed by Qin et al. [25]. In addition to a mutation strategy
pool, a teaching and learning mechanism is employed by
TLBSaDE, i.e., a new variant of SaDE, to generate the mutant
individuals [26]. Wang et al. [27] proposed CoDE in which
three mutation strategies are implemented in parallel to create
three new mutant individuals for crossover with the target
individual before selecting one to enter the next generation.
Mallipeddi et al. [28] proposed EPSDE, in which a set of
mutation strategies along with a set of parameter values
compete to generate offspring. Zhao et al. [29] modified
EPSDE by incorporating an SaDE type learning scheme.

Fig. 1. The 2-D plot of the evolution operation of classic DE: the base individual x_b is perturbed by the scaled difference of two randomly selected individuals, v_i = x_b + F · (x_{r1} − x_{r2}), and crossover between the mutant individual v_i and the target individual x_i produces the trial individual u_i.
SMADE, proposed in [30], makes use of a multiple mutation
strategy consisting of four different mutation operators and
selects one operator via a roulette-wheel selection scheme for
each target individual in the current generation. Researchers in [31]
proposed a variant of jDE, i.e., jDEsoo (jDE for single
objective optimization), which concurrently applies three
different DE strategies in three sub-populations. In
SPSRDEMMS [32], one mutation operator is selected from
DE/rand/1 and DE/best/1 to generate the mutant individual
according to the population size, which is gradually reduced as the iteration progresses. A ranking-based mutation operator is
introduced in Rank-DE [33] where the base and terminal
(guiding) individuals in mutation operators are proportionally
selected according to their ranking values in the current generation.
With respect to parameter setting methods, we can classify
them into three categories: constant, random, and adaptive
(including self-adaptive). In constant parameter setting, such as
that used in classic DE, parameters are preset before the search
and kept invariant during the entire iteration process. Storn and
Price [2] stated that it is not difficult to choose control
parameters for obtaining good results. In their opinion, a
promising range of NP is between 5D and 10D, where D is the
dimension of the individual, 0.5 represents a reasonable value
of F, and 0.1 is a first choice for CR for general situations and
0.9 for situations in which quick solutions are desired.
However, based on the results of tests on four benchmark
functions, Gämperle et al. [15] concluded that DE performance
depends heavily on the setting of control parameters. The
researchers stated that a promising range of NP is between 3D
and 8D, an efficient initial value for F is 0.6, and a promising
range for CR is [0.3, 0.9]. In [34], Rönkkönen et al. suggested
that a reasonable range of NP is between 2D and 40D, that F
should be sampled between 0.4 and 0.95 (and that 0.9 offered a
compromise between exploration and exploitation), and that
CR should be drawn from the range (0.0, 0.2) if the problem is
separable, or [0.9, 1] if the problem is both non-separable and
multimodal. They set F = 0.9, CR = 0.9, and NP = 30 for all
experimental functions in the CEC 2005 contest benchmark
suite. In CoDE [27], each trial vector generation strategy
randomly selects one parameter setting from a pool consisting
of three constant parameter settings. OXDE, proposed in [35],
employs a new orthogonal crossover operator to improve the
searching ability of classic DE with F = 0.9, CR = 0.9, and NP =
D. In [36], DE-APC randomly assigns the F and CR values from two preset constant sets F_set and CR_set, respectively, to
each individual. The diversity of the conclusions reached by these researchers shows how many different claims have been made about the DE control parameter setting. It is unlikely that one constant parameter setting fits all problems; instead, effective constant parameters should be problem-dependent.
To avoid manually tuning parameters, researchers have
developed some techniques to automatically set the parameter
values, one of which is random parameter setting. Linear
variation, probability distribution, and specified heuristic rules
are usually employed to generate the diverse parameters in
random parameter setting. For example, Das et al. [37]
presented two ways to set F in DE, i.e., random and time
varying. In the random way, F is set to be a random real number
from the range (0.5, 1); in the time varying way, F is linearly
reduced within a given interval. In SaDE [25], the value of
parameter F is randomly drawn from a normal distribution
N(0.5, 0.3) for each target individual in the current population.
Abbass [38] generated F from the standard normal distribution
N(0, 1). CR in [39] is generated from a normal distribution
N(0.5, 0.15). CR is set to be [2^{1/(n·α_e)}]^{-1} in DEcfbLS [40], where n is the dimension of the problem and α_e is the inheritance factor [41]. The random setting can improve the exploration ability by
increasing searching diversity.
Another automatic parameter setting method focuses on the
adaptation of control parameters. In adaptive parameter setting,
the control parameters are adjusted according to the feedback
from the searching process [42], [43], or undergo evolutionary
operation [38], [44]. Incorporating the individuals of successive
generations and relative objective function values as inputs, Liu
and Lampinen [42] employed a fuzzy logic controller to tune
the parameters adaptively in FADE (fuzzy adaptive differential
evolution). A self-adaptive DE (jDE) proposed by Brest et al.
[43] assigns the values from the ranges [0.1, 1.0] and [0.0, 1.0] in an adaptive manner with probabilities τ_1 and τ_2 to F and CR, respectively, for each mutation and crossover operation. In
JADE [20], according to the historical successful information
of parameters, F is generated by a Cauchy distribution, and CR
is sampled from a normal distribution for each individual at
each generation. SHADE [45] is an improved version of JADE
which employs a different success-history based mechanism to
update F and CR. SaDE [25] adaptively adjusts CR values
following a normal distribution with the mean value depending
on the previous successful CR values. The parameters in [29]
are gradually self-adapted by learning from the success history.
In [46], for each individual, control parameters are selected
adaptively from a set of 12 different DE parameter settings with
a probability depending on the corresponding success rate.
Instead of a single F value for each individual, PVADE, presented in [47], adopts a scaling factor vector F_m, calculated from a diversity measure of the population at each dimension level. The mutation and crossover rates of the evolutionary
algorithms [38], [44], proposed based on DE, are self-adapted
by the DE mutation operation.
We have observed that the existing mutation strategies and
control parameter setting methods reviewed above are used at
the population level, i.e., all individuals in the current
population are treated identically without consideration of the
differences in their fitness values. Consideration of this
difference may be helpful for finding better solutions in the
next generation. We therefore propose a new difference-based
mechanism in which the control parameters and mutation
operators are set for each individual according to the information
on its fitness value in the current generation to improve the DE
performance in both convergence rate and diversity.
III. THE PROPOSED IDE
In essence, the purposes of DE mutation operations and
control parameters are to determine the searching direction and
the size of the searching area, respectively. In this section, we

first discuss the influences of the control parameters on DE and
give the IDP setting method. Then, we investigate the
convergence features of DE with different mutation operators
and propose the IDM strategy. Finally, we present the complete
IDE procedure with these new strategies.
Specifically, to demonstrate the validity of the proposed
strategies in this section, four typical functions introduced in
the CEC 2013 benchmark suite [48] are employed to run the
pilot experiments. The contour maps and the 3-D (three-
dimensional) maps for 2-D benchmark functions are plotted in
Fig. 2. The Sphere Function (a) is unimodal, continuous,
differentiable, and separable. It is the benchmark for studying
the convergence features. The Rotated Schwefel’s Function (b)
is multimodal, rotated, non-separable, and asymmetrical. Its
number of local optima is huge, and its second best local
optimum is far from the global optimum [48]. The function is
very difficult to optimize and is a good candidate for studying
the ability to jump out of local optima. The Rotated Schaffers
F7 Function (c) is multimodal, non-separable, asymmetrical,
and has a huge number of local optima. It is very suitable for
investigating the exploration ability. The Rotated Expanded
Scaffer’s F6 Function (d) is multimodal, non-separable, and
asymmetrical. Through it, the exploitation ability can be
examined. All these benchmark functions are minimization
problems.
A. Individual-Dependent Parameter Setting
From the illustration of the evolution operation of classic DE
in Fig. 1, we can observe that the scale factor F is the parameter
that controls the size of the searching area around the base
individual x_b, and the crossover rate CR indicates the probability of inheriting elements from the mutant individual v_i in the process of constructing the trial individual u_i.
We look at the effects of the control parameters in two cases,
i.e., the unimodal Sphere Function and the multimodal Rotated
Schwefel’s Function. For the Sphere Function expressed in Fig.
2 (a), it can be seen from the diagram that the individuals with
better (smaller) fitness values have shorter distances to the
global optimum in the center (marked by a black star) of
concentric circles, whereas individuals with worse (larger)
fitness values have longer distances. This suggests that to find
better solutions, for superior base individuals, the mutation
operations should exploit the searching area with a relatively
small radius by employing a relatively small F value, and for
inferior base individuals, they should jump out of the
neighborhood area to explore more promising areas by using a
relatively large value of scale factor. In terms of crossover, the
offspring individuals should inherit more elements from the
corresponding parent (target) individuals with a smaller CR
value when the parent individuals are superior. In contrast, if
the parent individuals are far away from the optimum, more
promising offspring individuals are likely to be obtained by
accepting more mutant elements with a larger CR value.
Based on this analysis, we propose to set the parameters F
and CR in accordance with the differences between individuals’
fitness values. In other words, we propose that parameter
setting should follow a general principle that the parameters for
individuals with higher fitness values are larger, and vice versa.
Using this principle, specific individual-dependent parameter
setting can be devised. Two natural schemes are given below:
1) A rank-based scheme
Re-index all individuals in the current population in
ascending order of their fitness (objective function) values, i.e.,
individual x_i is the ith most superior one. The control parameter F_b associated with base individual x_b and CR_i associated with target individual x_i can be set using (1) and (2), respectively.

F_b = b / NP    (1)
Fig. 2. The contour maps and the 3-D maps of four benchmark functions for the pilot experiments: (a) the Sphere Function, (b) the Rotated Schwefel's Function, (c) the Rotated Schaffers F7 Function, and (d) the Rotated Expanded Scaffer's F6 Function.

CR_i = i / NP    (2)
2) A value-based scheme
Denote f_i as the fitness value of individual x_i in the current population. F_b and CR_i can be set as follows.
F_b = (f_b − f_L + δ_L) / (f_U − f_L + δ_L)    (3)

CR_i = (f_i − f_L + δ_L) / (f_U − f_L + δ_L)    (4)
where f_L and f_U are the minimum and maximum fitness values, respectively, in the current population, and δ_L is the absolute value of the difference between the minimum fitness value and the second minimum (different from the minimum value) in the searching history up to the current population. Note that there may be more than one solution with the minimum fitness value, and the second minimum value has to be different from this minimum. The inclusion of δ_L prevents the parameters from being set to 0. Note also that it is not necessary to
use the same scheme to set F and CR.
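As a concrete reading of (1)-(4), both IDP schemes can be sketched as plain functions. The names below are illustrative rather than taken from the paper, and the rank-based scheme assumes the population has already been re-indexed in ascending order of fitness.

```python
def idp_rank_based(b, i, NP):
    """Rank-based IDP, equations (1) and (2): b and i are the 1-based ranks of the
    base and target individuals after sorting by ascending fitness."""
    F_b = b / NP
    CR_i = i / NP
    return F_b, CR_i

def idp_value_based(f_b, f_i, f_L, f_U, delta_L):
    """Value-based IDP, equations (3) and (4): f_L and f_U are the minimum and
    maximum fitness values in the current population; delta_L is the absolute
    gap between the best and second-best distinct fitness values found so far."""
    denom = f_U - f_L + delta_L
    F_b = (f_b - f_L + delta_L) / denom
    CR_i = (f_i - f_L + delta_L) / denom
    return F_b, CR_i
```

With either scheme, a superior (low-fitness) individual receives small F and CR values and an inferior one receives values close to 1, which follows the principle stated above.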
The situation for multimodal functions is more complicated
than that for unimodal functions. We take as an example a
typical multimodal case in Fig. 2 (b), i.e., the Rotated
Schwefel’s Function. As we can observe, superior individuals
are not always close to the global minimum (marked by a black
star) because many local minima spread all over the searching
space and confuse the search. In fact, relative to the superior
individuals, the inferior individuals may be closer to the global
minimum. Based on this observation, we randomize the
parameters using a normal distribution with the mean specified
to the original value and the standard deviation specified to 0.1.
Denote a random number from a normal distribution by
randn(mean, std) [20]. Then, the IDP setting can be modified as
follows.
F'_b = randn(F_b, 0.1)    (5)

CR'_i = randn(CR_i, 0.1)    (6)
With the randomization, the control parameter values near
their mean values maintain search efficiency, while the
parameter values farther from the means improve search diversity. If
the parameter value drawn from the normal distribution is
beyond the range of (0, 1), it will be ignored and a new value
will be drawn until the parameter value is within the range.
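A small sketch of the randomized IDP setting in (5) and (6), including the redraw rule for values outside (0, 1), is given below; the helper name and the rng object are ours, not from the paper.

```python
import numpy as np

def randomized_idp(mean, rng, std=0.1):
    """Draw randn(mean, std) and redraw until the value lies in (0, 1),
    as required by the randomized IDP setting in (5) and (6)."""
    while True:
        value = rng.normal(mean, std)
        if 0.0 < value < 1.0:
            return value

# Example: perturb rank-based parameters of one individual (NP = 100, ranks 30 and 70).
rng = np.random.default_rng(1)
F_b_prime = randomized_idp(30 / 100, rng)    # F'_b = randn(F_b, 0.1)
CR_i_prime = randomized_idp(70 / 100, rng)   # CR'_i = randn(CR_i, 0.1)
```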
To verify the idea of IDP setting, we run a pilot experiment
on the four benchmark functions introduced above. The
dimension of the problems (D) and the population size (NP) are
set to 30 and 100, respectively. Based on the strategy DE/rand/1
/bin, DEs with different parameter settings are applied to
optimize the benchmark functions. These include [F = 0.5/ CR
= 0.9] [2], [F = 0.6/ CR = 0.5] [15], [F = 0.7/ CR = 0.2] [34], the
parameter setting in SaDE [25] (denoted by pSaDE), and the
IDPs presented above. Four versions of IDP are tested. We use
two letters to indicate the IDP parameter setting schemes, the
first for F and the second for CR. For example, IDP_RV means
that F is set using the rank-based scheme and CR is set using the
value-based scheme. For all pilot experiments, the maximum
function evaluations (MaxFES) is set to 300,000 (i.e., the
iteration number g
max
is 3,000 = MaxFES/NP) and the fitness
error value of the best individual is sampled every 30,000 FES.
For convenient illustration, the convergence curves of different
DEs on four problems are plotted in Fig. 3. The horizontal axis
is the number of FES, and the vertical axis is the average of the
fitness error values over 51 [48] independent runs.
The parameters generated by the value-based scheme adjust
the size of the searching area more precisely, and the
parameters generated by the rank-based scheme are uniformly
distributed in the range [0, 1]. The diagrams clearly show that
the IDP settings outperform other parameter settings in all cases.
Among the four IDPs, IDP_RR achieves the best performance
and is selected as the parameter setting in IDE.
A recently proposed DE variant Rank-DE [33] employs a
similar ranking-based scheme to select some individuals in
mutation operators. This is different from IDE, which uses the
scheme to calculate the values of the control parameters. The
comparison of these two algorithms is carried out in Section IV.
B. Individual-Dependent Mutation Strategy
Mutation is an operation for obtaining the mutant individuals
in the area around the base individual. In the mutation operator
of classic DE, the base individual serves as the center point of
the searching area, the difference vector sets the searching
direction, and the scale factor controls the movement step.
Considering the mutation and crossover strategies used, the
evolutionary operation of DE can be denoted using a
four-parameter notation DE/base/differ/cross [3]. There are
several widely used mutation strategies with different kinds of
base vectors and different numbers of difference vectors, e.g.,
DE/rand/1, DE/best/1, DE/current-to-best/1, DE/rand/2, and
DE/best/2. The single base vector mutation strategy DE/best/
Fig. 3. Convergence curves of DEs with different parameter settings on four 30-D benchmark functions: (a) the Sphere Function, (b) the Rotated Schwefel's Function, (c) the Rotated Schaffers F7 Function, and (d) the Rotated Expanded Scaffer's F6 Function.
