
Adaptive Multimodal Continuous Ant Colony Optimization
Qiang Yang, Student Member, IEEE, Wei-Neng Chen, Member, IEEE, Zhengtao Yu, Tianlong Gu, Yun Li, Member, IEEE, Huaxiang Zhang, and Jun Zhang, Senior Member, IEEE
Abstract—Seeking multiple optima simultaneously, which multimodal optimization aims at, has attracted increasing attention but remains challenging. Taking advantage of ant colony optimization (ACO) algorithms in preserving high diversity, this paper intends to extend ACO algorithms to deal with multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ACO algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed, which takes the difference among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Then, to enhance the exploitation, a local search scheme based on Gaussian distribution is self-adaptively performed around the seeds of niches. Together, the proposed algorithm affords a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with high numbers of local optima.
Manuscript received November 11, 2015; revised May 8, 2016 and
July 4, 2016; accepted July 6, 2016. Date of publication July 13, 2016;
date of current version March 28, 2017. This work was supported in part by
the National Natural Science Foundation of China under Project 61379061,
Project 61332002, Project 61511130078, and Project 6141101191, in part
by the Natural Science Foundation of Guangdong for Distinguished Young
Scholars under Project 2015A030306024, in part by the Guangdong
Special Support Program under Project 2014TQ01X550, and in part by
the Guangzhou Pearl River New Star of Science and Technology under
Project 201506010002 and Project 151700098. (Corresponding authors:
Wei-Neng Chen and Jun Zhang.)
Q. Yang is with the School of Computer Science and Engineering,
South China University of Technology, Guangzhou 51006, China, and also
with the School of Data and Computer Science, Sun Yat-sen University,
Guangzhou 510006, China.
W.-N. Chen and J. Zhang are with the School of Computer Science
and Engineering, South China University of Technology, Guangzhou 51006,
China (e-mail: cwnraul634@aliyun.com; junzhang@ieee.org).
Z. Yu is with the School of Information Engineering and Automation,
Kunming University of Science and Technology, Kunming 650504, China.
T. Gu is with the School of Computer Science and Engineering, Guilin
University of Electronic Technology, Guilin 541004, China.
Y. Li is with the School of Engineering, University of Glasgow,
Glasgow G12 8LT, U.K.
H. Zhang is with the School of Information Science and Engineering,
Shandong Normal University, Jinan 250014, China.
This paper has supplementary downloadable multimedia material available
at http://ieeexplore.ieee.org provided by the authors.
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TEVC.2016.2591064
Index Terms—Ant colony optimization (ACO), multimodal optimization, multiple global optima, niching.
I. INTRODUCTION
MULTIPLE optimal solutions, representing various designs with the same or very similar performance, are in demand in many practical applications, so that decision makers can have multiple choices [1]. To obtain multiple optima of a problem, practitioners turn their attention to population-based evolutionary algorithms (EAs), which possess the potential to locate and preserve multiple optima simultaneously.
Even though different kinds of EAs [2]–[7], such as particle swarm optimization (PSO) [8]–[11], differential evolution (DE) [12]–[16], ant colony optimization (ACO) [17]–[22], and estimation of distribution algorithms (EDAs) [23]–[27], have been successfully applied to solve various problems [28]–[35], most of them focus on single optimization rather than multimodal optimization. Owing to the global learning and updating schemes used, these EAs usually drive the whole population toward only one global optimum. Therefore, these EAs cannot be directly applied to deal with multimodal optimization. To solve multimodal problems efficiently, special tactics should be designed to cooperate with classical EAs.
So far, the most widely adopted approach to aid classical EAs in dealing with multimodal optimization is niching [36]–[43], which divides the whole population into smaller niches. Generally, each niche is responsible for seeking one or a small number of optima. Along this promising avenue, researchers have proposed various niching strategies [38]–[40], [42]–[49]. Then, by utilizing a certain niching method, a number of new updating schemes for classical EAs [47], [50]–[53] have emerged to deal with multimodal optimization. Recently, some researchers have even applied multiobjective techniques to tackle multimodal optimization [54]–[57]. The related work on these three aspects is detailed in the following section.
In spite of the effectiveness of existing multimodal algorithms on tested problems, they are known to suffer from various drawbacks, such as inferior performance on irregular multimodal surfaces [41], a serious reliance on particular landscapes, and sensitive parameter settings [38], [39].
In particular, most existing multimodal algorithms lose efficiency when the dimensionality of multimodal problems increases [38], [39], [45]–[48], [51]–[57]. Such inferior performance may be attributed to the exponentially increasing number of optima resulting from the growth of dimensionality. In such an environment, preserving high diversity is especially important for EAs that deal with multimodal optimization.
In the literature, GA [40], [42], [43], [48], [57], PSO [47], [53], [58], and DE [38], [39], [44], [46], [50]–[52], [55] are often employed to evolve the population. Although new learning or updating strategies [47], [50]–[53] have been developed especially to aid these optimizers, they still locate only a very small number of global optima when solving complex problems with a large number of local optima. In contrast, in this paper, we take advantage of ACO algorithms in preserving high diversity to deal with multimodal optimization.
ACO [59]–[62], a nature-inspired method in evolutionary computation, was originally designed for optimizing discrete problems. Recently, Socha and Dorigo [63] extended ACO to a continuous variant named ACO_R, which solves continuous problems by shifting a discrete probability distribution to a continuous one. In ACO_R, each ant constructs solutions using a Gaussian kernel function based on solutions selected probabilistically from an archive. This solution construction strategy arms ACO_R with high diversity [63], which is valuable for multimodal optimization. However, ACO_R cannot be directly utilized to locate multiple optima, because its solution selection and construction strategies are based on global information, which is only fit for single optimization. As far as we know, there is no previous work on extending ACO to cope with multimodal optimization.
The above motivations stimulate the proposal of an adaptive multimodal continuous ACO (AM-ACO) for multimodal optimization in this paper. More specifically, the main characteristics of AM-ACO are as follows.
1) Instead of operating on the whole archive as in traditional ACO algorithms, AM-ACO operates at the niche level by incorporating niching methods, and an adaptive parameter adjusting strategy is introduced that takes the differences among niches into consideration.
2) A DE mutation operator is absorbed into AM-ACO so that the convergence speed (CS) can be accelerated.
3) A local search scheme based on the Gaussian distribution is embedded to promote exploitation; it is adaptively conducted around the seeds of niches.
To verify the efficiency and effectiveness of the proposed AM-ACO, extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component and to make wide comparisons with state-of-the-art multimodal algorithms and the winners of the CEC'2013 and CEC'2015 competitions on multimodal optimization.
Following a comprehensive review of recent multimodal algorithms and a brief description of the related ACO algorithms in Section II, the proposed AM-ACO is detailed in Section III. Then, a series of experiments is conducted in Section IV to verify the efficiency and effectiveness of the proposed algorithm. Finally, conclusions together with discussions are given in Section V.
II. RELATED WORK
Without loss of generality, maximization problems are considered in this paper, as in [38]–[40] and [42]–[57]. In addition, this paper aims at seeking the multiple global optima of a problem, which is the main focus of current multimodal optimization research [38]–[40], [42]–[57].
A. Multimodal Optimization Methods
Various multimodal optimization algorithms have been put forward in recent years. To better review these works, we briefly describe them from three aspects.
1) New Niching Strategies: Most of the current research on multimodal optimization focuses on proposing new niching strategies [38]–[40], [42]–[49]. At present, the two most fundamental and famous niching methods are crowding [39] and speciation [38]. However, these two niching strategies are sensitive to their parameters, such as the crowding size in crowding and the species radius in speciation. Therefore, to liberate niching methods from this parameter sensitivity, some researchers have brought up parameter-free or parameter-insensitive niching strategies.
A hill-valley (HV) niching tactic [64], [65] was developed by sampling a sufficient number of intermediate points on the line segment connecting two individuals to detect valleys. If there exists at least one sampled point whose fitness is lower than those of both individuals, a valley is detected, indicating that the two individuals belong to different niches. A drawback of this method is that enough points must be sampled to achieve accurate detection. To reduce the number of sampled points, recursive middling [57], [66] was put forward by borrowing ideas from binary search: it repeatedly samples the middle point of the line segment connecting two updated endpoints until the demanded point is found or the two endpoints converge to the same point. Further, a topological species conservation [48], [67] strategy was proposed, introducing a seed preservation method to avoid the extinction of niches that have very few individuals. Although these methods are promising for partitioning the population into niches, they usually cost a large number of fitness evaluations to detect all valleys reliably. To circumvent this dilemma, a history-based topological speciation (HTS) method [45] was proposed, which maintains a large archive of historical individuals that are used to detect valleys. Though HTS avoids spending fitness evaluations on valley detection, it can detect only few or even no valleys at early stages, because very few historical individuals exist in the archive at that point.
Although the above niching strategies are promising, they encounter two limitations. First, they come either at the expense of fitness evaluations [48], [57], [64]–[67] or at the expense of memory space [45]. Second, such niching strategies may lead to an imbalance in the number of individuals among niches. Consequently, to tackle this predicament, a clustering-based niching method [44], [46] arose. Algorithms 1 and 2 [44], [46] display the clustering frameworks for crowding and speciation, respectively.

Algorithm 1 Clustering for Crowding [36]
Input: population P, cluster size M
Step 1: Randomly generate a reference point R and compute its distance to all individuals;
Step 2: While P is not empty
    Select the individual P_near nearest to R in P;
    Build a crowd by combining P_near and the M-1 individuals nearest to it;
    Eliminate these M individuals from P;
End While
Output: a set of crowds
Algorithm 2 Clustering for Speciation [36]
Input: population P, cluster size M
Step 1: Sort P according to fitness;
Step 2: While P is not empty
    Select the best individual P_best in P as a new seed;
    Build a species containing P_best and the M-1 individuals nearest to it;
    Eliminate these M individuals from P;
End While
Output: a set of species
Both methods transfer the sensitive parameter (the crowding size or the species radius) into a less sensitive parameter (the cluster size).
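For concreteness, the two clustering frameworks can be sketched as follows. This is a minimal Python/NumPy sketch under the assumption that the population is an N x D array with precomputed fitness values; the function names are illustrative, not from the original papers.

```python
import numpy as np

def cluster_for_crowding(pop, cluster_size, rng=np.random.default_rng()):
    """Algorithm 1 sketch: repeatedly take the individual closest to a random
    reference point and group it with its M-1 nearest unassigned neighbours."""
    remaining = list(range(len(pop)))
    ref = rng.uniform(pop.min(axis=0), pop.max(axis=0))  # random reference point R
    crowds = []
    while remaining:
        idx = np.array(remaining)
        near = idx[np.argmin(np.linalg.norm(pop[idx] - ref, axis=1))]
        dist = np.linalg.norm(pop[idx] - pop[near], axis=1)
        members = idx[np.argsort(dist)[:cluster_size]]   # P_near plus its M-1 nearest
        crowds.append(members)
        remaining = [i for i in remaining if i not in set(members)]
    return crowds

def cluster_for_speciation(pop, fitness, cluster_size):
    """Algorithm 2 sketch: the best unassigned individual seeds each species."""
    remaining = list(np.argsort(fitness)[::-1])          # best individual first
    species = []
    while remaining:
        idx = np.array(remaining)
        best = remaining[0]
        dist = np.linalg.norm(pop[idx] - pop[best], axis=1)
        members = idx[np.argsort(dist)[:cluster_size]]   # P_best plus its M-1 nearest
        species.append(members)
        remaining = [i for i in remaining if i not in set(members)]
    return species
```

In both variants the only niching parameter is the cluster size M, which later plays the role of the niche size NS in AM-ACO.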
2) Novel Updating Strategies for EAs: The aforementioned research puts emphasis on the development of niching methods, with the optimizer for evolution set as a basic EA, for instance, the basic GA [40], [42], [43], [48], [57], PSO [47], [53], [58], and DE [38], [39], [44], [46], [50]–[52], [55]. However, these basic EAs may have limitations in exploring and exploiting the search space to locate all global optima [50], [53]. Therefore, taking advantage of the above niching strategies, some researchers have directed their attention to proposing new update strategies for classical EAs to deal with multimodal optimization efficiently.
Li [47] proposed a ring topology-based PSO that utilizes the ring topology to form stable niches across neighborhoods. Qu et al. [53] put forward a distance-based locally informed PSO (LIPS), which uses several local best positions to guide each particle. Then, a local informative niching DE was brought up by Biswas et al. [52], which introduces two different types of individual generation schemes based on selected individuals. Subsequently, they further developed an improved parent-centric normalized neighborhood mutation operator for DE (PNPCDE) [51], which is integrated with crowding [39]. In addition, utilizing speciation [38], Hui and Suganthan [50] enhanced the exploration ability of DE by applying an arithmetic recombination strategy, leading to ARSDE, which is further combined with an ensemble tactic, resulting in EARSDE. Recently, taking advantage of EDAs, Yang et al. [68] developed multimodal EDAs to deal with multimodal optimization.
3) Multiobjective Techniques: In contrast to the above research on integrating a niching scheme with a single-objective EA to cope with multimodal optimization, a few approaches [54]–[57] have recently been proposed to recast multimodal optimization as a multiobjective optimization problem. This is feasible because both multimodal optimization and multiobjective optimization involve multiple optimal solutions.
Generally, the multiobjective techniques [54]–[57] transform a multimodal problem into a bi-objective problem, with the first objective being the multimodal function itself and the second a self-designed function. Thus, the differences among these multiobjective methods mainly lie in the design of the second objective. In [57], the second objective is the absolute value of the gradient of the multimodal function, while in [56] it is constructed based on the norm of the gradient vector. These two algorithms require the multimodal function to be differentiable, which may not always hold in practice. Subsequently, Basak et al. [55] used the mean distance of one individual to all other individuals in the current population as the second objective, which is maximized so that the diversity of the population is improved. Different from the above three techniques, Wang et al. [54] designed a novel transformation that redesigns not only the second objective but also the first one. This transformation makes the two transformed objectives conflict with each other, which better matches the requirements of multiobjective optimization.
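To make the recasting concrete, the following minimal Python sketch builds bi-objective values in the spirit of [55]: the first objective is the multimodal function itself and the second is the mean distance of an individual to all others, both to be maximized. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def biobjective_values(pop, f):
    """Sketch of the transformation in [55]:
    objective 1 -> the multimodal function itself (maximized),
    objective 2 -> mean distance to all other individuals (maximized,
                   to keep the population spread over several optima)."""
    fit = np.array([f(x) for x in pop])
    diff = pop[:, None, :] - pop[None, :, :]          # pairwise differences
    dist = np.linalg.norm(diff, axis=2)               # pairwise Euclidean distances
    mean_dist = dist.sum(axis=1) / (len(pop) - 1)     # self-distance is zero, so exclude it
    return np.stack([fit, mean_dist], axis=1)         # one (f1, f2) row per individual
```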
Even though these techniques are promising for multimodal problems, especially low-dimensional ones, it becomes very difficult for them to locate the global optima of problems with high dimensionality. As the dimensionality increases, the number of local optima usually grows exponentially, which requires optimizers to maintain considerably high diversity. This motivates us to seek an optimizer that can preserve high diversity for multimodal optimization.
B. Ant Colony Optimization
ACO [59]–[61] is inspired by the foraging behavior of real ants. When ants find a food source, they deposit pheromone trails on the ground. The amount of pheromone deposited depends on the quantity and quality of the food, indicating the degree to which other ants are attracted to the food source. This indirect cooperation among ants enables them to find the shortest path between their nest and the food source [69].
Originally, ACO was designed for discrete optimization and has been widely applied to solve real-world problems [70]–[76]. The general framework of an ACO algorithm is displayed in Algorithm 3.

Algorithm 3 ACO
While the termination criterion is not satisfied
    AntBasedSolutionConstruction();
    PheromoneUpdate();
    DaemonAction();
End While

Subsequently, Socha and Dorigo [63] extended ACO to ACO_R by shifting a discrete probability distribution to a continuous one. The brief procedure of ACO_R is as follows.

1) AntBasedSolutionConstruction(): In ACO_R, the construction of new solutions by ants is accomplished in an incremental way, namely, variable by variable. First, before generating a new value for a variable, each ant probabilistically selects one solution from the archive containing the already found solutions. The probability of the jth solution is calculated by

p_j = w_j / \sum_{i=1}^{NP} w_i    (1)

where NP is the archive size and w_j is the weight of the jth solution, given by

w_j = \frac{1}{\sigma NP \sqrt{2\pi}} \exp\left( -\frac{(\mathrm{rank}(j) - 1)^2}{2 \sigma^2 NP^2} \right)    (2)

where rank(j) returns the rank of the jth solution when the archive is sorted in descending order of fitness, and σ is a parameter that has a significant effect on the weight. A small σ indicates that the top-ranked solutions are strongly preferred, while a large σ suggests a nearly uniform probability distribution over the solutions: the larger the value of σ, the more uniform the probability distribution [63].
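As a quick check of (1) and (2), the rank-based selection probabilities can be computed as below; this is a small illustrative sketch, with ranks assumed to run from 1 (best) to NP.

```python
import numpy as np

def selection_probabilities(num_solutions, sigma):
    """Rank-based weights of eq. (2) and the selection probabilities of eq. (1).
    Rank 1 is the best solution in the archive."""
    ranks = np.arange(1, num_solutions + 1)
    w = np.exp(-((ranks - 1) ** 2) / (2 * sigma ** 2 * num_solutions ** 2)) \
        / (sigma * num_solutions * np.sqrt(2 * np.pi))
    return w / w.sum()

# a small sigma concentrates probability on the top-ranked solutions,
# while a large sigma makes the distribution nearly uniform (cf. Fig. 1)
print(selection_probabilities(10, 0.1)[:3], selection_probabilities(10, 1.0)[:3])
```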
Then, based on the selected solutions, an ant samples new values for the variables using a Gaussian distribution defined by

g(x^d; \mu^d, \delta^d) = \frac{1}{\delta^d \sqrt{2\pi}} \exp\left( -\frac{(x^d - \mu^d)^2}{2 (\delta^d)^2} \right)    (3)

where d is the dimension index and \delta^d is computed by

\delta^d = \xi \sum_{i=1}^{NP} \frac{|x_i^d - x_j^d|}{NP - 1}    (4)

where ξ is a parameter that has an effect similar to that of the pheromone persistence in discrete ACO [59]–[61]: the higher the value of ξ, the lower the CS of the algorithm [63]. When sampling the dth dimension of a new solution, μ^d is set to the dth dimension of the selected jth solution.
Through the above process, each ant constructs a new solution. Such random construction based on the Gaussian distribution potentially equips the algorithm with high diversity, which is precious for multimodal optimization.
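The sampling of (3) and (4) can be sketched as follows. For brevity, a single selected solution j guides all dimensions here, which is actually the simplification AM-ACO adopts in Section III-A; ACO_R itself re-selects a guiding solution per variable. The archive is assumed to be an NP x D NumPy array and the names are illustrative.

```python
import numpy as np

def construct_solution(archive, j, xi, rng=np.random.default_rng()):
    """Sketch of Gaussian sampling: for every dimension d, draw from a normal
    distribution centred on x_j^d (eq. (3)) whose spread is the xi-scaled mean
    absolute deviation of the archive from x_j (eq. (4))."""
    NP, _ = archive.shape
    mu = archive[j]                                             # mu^d = x_j^d
    delta = xi * np.abs(archive - archive[j]).sum(axis=0) / (NP - 1)
    return rng.normal(loc=mu, scale=delta)                      # one new D-dimensional solution
```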
2) PheromoneUpdate(): In ACO_R, there are no explicit pheromone representation and updating strategies. Instead, they are embedded in the calculation of the weight of each solution in the archive. In (2), the weight of a solution decreases exponentially with its rank [17], and in (1) this weight determines the probability of the solution being chosen by ants. Thus, the weight operates as the pheromone.
Once NP new solutions have been obtained, they are added to the archive, yielding 2NP solutions in total. Then, the NP best solutions are retained as the new archive. In this way, the search process is biased toward the best solutions found during evolution. Overall, the update of the archive plays the role of updating the pheromone.
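A minimal sketch of this archive update for a maximization problem (consistent with Section II); the array names are illustrative.

```python
import numpy as np

def update_archive(archive, archive_fit, new_solutions, new_fit):
    """Merge old and new solutions and keep the NP best; the survivors' ranks
    then act as the 'pheromone' through the weights of eq. (2)."""
    NP = len(archive)
    pool = np.vstack([archive, new_solutions])        # 2*NP candidate solutions
    fit = np.concatenate([archive_fit, new_fit])
    best = np.argsort(fit)[::-1][:NP]                 # maximization: highest fitness first
    return pool[best], fit[best]
```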
3) DaemonAction(): Daemon actions are optional and can be used to implement centralized actions [63]. Examples include the utilization of local search schemes to refine the obtained solutions, or the collection of global information that can be used to decide whether or not it is useful to deposit additional pheromone to bias the search process. However, the original ACO_R adopts no daemon action.
After ACO_R, researchers developed other variants of continuous ACO [19], [21], [77] to deal with continuous domains and even mixed-variable problems [17], [20]. Even though many attempts have been made [78]–[80], ACO is still restricted to single optimization. To the best of our knowledge, there is no work on applying ACO to multimodal optimization. This observation and the considerable potential of ACO_R in preserving high diversity motivate the following work.
III. PROPOSED ALGORITHM
In this section, taking advantage of ACO_R in preserving high diversity, we propose AM-ACO to deal with multimodal optimization. Furthermore, to accelerate the CS, a basic DE mutation operator is incorporated into AM-ACO. To enhance exploitation, an adaptive local search technique is further absorbed into the algorithm. Finally, a random-based niche size setting strategy is developed for AM-ACO to deal with the dilemma that the niche size is problem-dependent. Each algorithmic component is detailed as follows.
A. Adaptive Multimodal ACO
To make ACO suitable for multimodal optimization, we first couple ACO_R with existing niching methods, resulting in multimodal ACO (M-ACO). Instead of operating on the whole solution archive as ACO_R does, M-ACO operates at the niche level. Thus, before ants construct solutions, the already found solutions in the archive are partitioned into several niches according to the adopted niching strategy.
This paper mainly focuses on developing a new optimizer (the second aspect in Section II-A) for multimodal optimization. Thus, we directly incorporate the clustering-based niching methods [44], [46] presented in Algorithms 1 and 2 into M-ACO. Consequently, two variants of the proposed M-ACO are developed, namely crowding-based M-ACO (MC-ACO) and speciation-based M-ACO (MS-ACO).
Subsequently, we discuss one key parameter of M-ACO, namely σ, which makes a significant difference to M-ACO, and then develop an adaptive adjusting strategy for this parameter, leading to adaptive M-ACO (AM-ACO).
First, suppose the archive size is NP and the number of solutions in each niche, called the niche size, is NS;¹ then the number of niches is T = NP/NS. Generally, NS is much smaller than NP. In this paper, for brevity, the ant colony size is set equal to the archive size, and each niche is assigned NS ants to construct NS new solutions based on AM-ACO.
¹When NP % NS ≠ 0, the remaining NP % NS solutions are set as a new niche; thus, the number of niches is T = ⌊NP/NS⌋ + 1. For convenience of description, however, we generally use NS to denote the number of individuals in each niche and T = NP/NS to denote the number of niches in this paper.
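The niche bookkeeping described above, including the footnote, amounts to integer division with any remainder forming one extra niche, e.g.:

```python
def number_of_niches(NP, NS):
    """T = NP // NS full niches, plus one extra niche that holds the
    NP % NS leftover solutions when NP is not a multiple of NS."""
    return NP // NS + (1 if NP % NS != 0 else 0)

assert number_of_niches(100, 10) == 10
assert number_of_niches(103, 10) == 11   # the 3 leftover solutions form their own niche
```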

Fig. 1. Influence of σ on the weight of each solution.
Then, we discuss the influence of σ on M-ACO in detail. From (1) and (2), we can see that σ plays a key role in determining the selection probability of each solution in the archive, and thus implicitly affects the selection of solutions for ants to construct new ones. To better understand the influence of σ, we plot the weight of each solution with σ varying from 0.1 to 1.0; the results are presented in Fig. 1.
From this figure, we can see that the smaller the value of σ, the bigger the difference in the weights of the solutions, and the larger the value of σ, the more uniform the weights. In other words, a small σ biases selection toward the top-ranked solutions, while a large σ makes the solutions nearly equivalent. In traditional ACO_R for single optimization, a small σ, such as 10^-4 in [63] and 0.05 in [17], is preferred. However, this is not suitable for multimodal optimization.
On one hand, it should be noticed that when locating multiple global optima simultaneously, it is highly possible that one niche may be responsible for locating a small number of global optima rather than just one, especially when the number of global optima is larger than the number of niches. This indicates that solutions with the same or very similar fitness values within a niche should have nearly equal probabilities of being selected by ants. Therefore, in contrast to the original ACO_R, which biases selection toward the top-ranked solutions, a large σ is preferred in M-ACO.
On the other hand, not all solutions in a niche are beneficial, and the worst ones should usually be selected less often. This tells us that σ should not be too large, because the larger the value of σ, the more uniform the probability distribution.
In addition, the solution quality of different niches may differ, and the proportion of good solutions within each niche may differ as well. This indicates that σ should be different for different niches.
Therefore, taking the above into consideration, we propose an adaptive adjusting strategy for σ, formulated as

\sigma_i = 0.1 + 0.3 \, \exp\left( -\frac{FS^{i}_{\max} - FS^{i}_{\min}}{FS_{\max} - FS_{\min} + \eta} \right)    (5)

where σ_i is the σ used in (2) for the ith niche; FS^i_max and FS^i_min are the maximum and minimum fitness values of the ith niche, respectively; FS_max and FS_min are the maximum and minimum fitness values of the whole archive, respectively; and η is a very small value used to avoid the denominator being zero.
Observing (5), we find that for each niche, σ_i ranges within (0.1, 0.4]. Then, observing Fig. 1, we can conclude that, when a significant difference in solution quality exists within a niche, indicated by a large value of FS^i_max - FS^i_min, σ_i tends toward 0.1, biasing selection toward the better solutions. This is beneficial for exploitation. On the contrary, when the fitness values of the solutions in a niche are very close to each other, indicated by a small value of FS^i_max - FS^i_min, σ_i tends toward 0.4, so that each solution is selected nearly without bias. This is profitable for exploration. Therefore, taking into consideration both the difference in solution quality among niches and that among solutions within each niche, this adaptive adjusting strategy for σ can provide proper selection of solutions for ants to construct new ones. Through this, a good balance between exploration and exploitation can also be achieved.
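A short sketch of the adaptive rule (5), assuming the negated ratio in the exponent implied by the (0.1, 0.4] range discussed above; the array names and the value of η are illustrative.

```python
import numpy as np

def adaptive_sigma(niche_fitness, archive_fitness, eta=1e-12):
    """Eq. (5): a large within-niche fitness spread pushes sigma_i toward 0.1
    (bias to the better solutions, exploitation), while a nearly flat niche
    pushes sigma_i toward 0.4 (near-uniform selection, exploration)."""
    niche_spread = np.max(niche_fitness) - np.min(niche_fitness)
    archive_spread = np.max(archive_fitness) - np.min(archive_fitness)
    return 0.1 + 0.3 * np.exp(-niche_spread / (archive_spread + eta))
```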
After obtaining the proper σ for each niche, NS ants start to construct solutions using (3) and (4), where NP is replaced by NS. However, two changes should be noted in AM-ACO.
1) Instead of selecting one solution for each dimension as in ACO_R, we use all dimensions of the selected solution as the base [namely μ in (3)] to construct the corresponding new solution. This operation not only reduces the time complexity, but also potentially takes the correlation among variables into consideration, which is beneficial for preserving useful information as a whole.
2) As for ξ in (4), which affects both diversity and convergence through δ, we set ξ to a uniformly random value generated within (0, 1] for each ant, instead of adopting the fixed value used in ACO_R. The randomness of ξ is utilized because \sum_{i=1}^{NS} |x_i^d - x_j^d| / (NS - 1) in AM-ACO is much smaller than \sum_{i=1}^{NP} |x_i^d - x_j^d| / (NP - 1) in ACO_R. Thus, ξ may differ between ants within one niche as well as between ants in different niches, which is potentially beneficial for obtaining a balance between exploration and exploitation.
Overall, compared with the original ACO_R [63], AM-ACO, operating at the niche level, is relieved from the sensitivity to the parameters σ and ξ by the adaptive adjusting strategy for σ and the random setting of ξ. The efficiency of AM-ACO in multimodal optimization is verified in Section IV-B.
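Pulling the pieces of this subsection together, the construction step of one niche might look as follows. This is a hedged sketch, not the authors' reference implementation: σ_i follows (5), the rank-based selection of (1) and (2) is applied within the niche only, ξ is drawn per ant from (0, 1], and the whole selected solution serves as the base μ. The niche is assumed to be an NS x D NumPy array with a matching fitness array.

```python
import numpy as np

def construct_niche_solutions(niche, niche_fit, fs_max, fs_min, eta=1e-12,
                              rng=np.random.default_rng()):
    """One AM-ACO construction step for a single niche (a sketch)."""
    NS, D = niche.shape
    # adaptive sigma for this niche, eq. (5)
    sigma = 0.1 + 0.3 * np.exp(-(niche_fit.max() - niche_fit.min())
                               / (fs_max - fs_min + eta))
    # rank-based weights over the niche, eqs. (1)-(2) with NP replaced by NS
    order = np.argsort(niche_fit)[::-1]            # indices sorted best-first
    ranks = np.empty(NS)
    ranks[order] = np.arange(1, NS + 1)            # best solution gets rank 1
    w = np.exp(-((ranks - 1) ** 2) / (2 * sigma ** 2 * NS ** 2)) \
        / (sigma * NS * np.sqrt(2 * np.pi))
    p = w / w.sum()
    new_solutions = np.empty_like(niche)
    for ant in range(NS):
        j = rng.choice(NS, p=p)                    # guiding solution for this ant
        xi = 1.0 - rng.uniform(0.0, 1.0)           # per-ant xi, uniform in (0, 1]
        delta = xi * np.abs(niche - niche[j]).sum(axis=0) / max(NS - 1, 1)
        # the whole selected solution serves as the base mu of eq. (3)
        new_solutions[ant] = rng.normal(loc=niche[j], scale=delta)
    return new_solutions
```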
B. Enhancement Using DE Mutation
In AM-ACO, each ant in a niche constructs a new solution using (3) with μ set to the selected solution, namely μ = x_j (supposing the selected solution in the niche is x_j). Such sampling may cause slow convergence, especially when most solutions in a niche are of poor quality. In addition, when most solutions in a niche fall into local regions, it is hard for the ant colony in this niche to escape from them, leading to a waste of fitness evaluations on useless exploration.
Therefore, to counteract this predicament, we introduce a basic DE mutation operator into AM-ACO to shift the base vector [used in (3)] with which an ant constructs solutions, defined as follows:

\mu^d = x_j^d + F \left( x_{\text{seed}}^d - x_j^d \right)    (6)
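A minimal sketch of (6), assuming, following standard DE usage, that F is the scale factor and that x_seed is the seed (best solution) of the niche; the value F = 0.5 below is only a placeholder.

```python
import numpy as np

def de_shifted_base(x_j, x_seed, F=0.5):
    """Eq. (6): shift the base vector mu from the selected solution x_j toward
    the niche seed x_seed, so that ants guided by poor solutions are pulled
    toward the niche's best region before Gaussian sampling with eq. (3)."""
    return x_j + F * (x_seed - x_j)

# usage sketch: mu then replaces x_j as the centre of the Gaussian in eq. (3)
# mu = de_shifted_base(niche[j], niche[np.argmax(niche_fit)])
```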
