
Adaptive Particle Swarm Optimization

Zhi-Hui Zhan, +1 more
pp. 227-234
Abstract
This paper proposes an adaptive particle swarm optimization (APSO) with adaptive parameters and elitist learning strategy (ELS) based on the evolutionary state estimation (ESE) approach. The ESE approach develops an `evolutionary factor' by using the population distribution information and relative particle fitness information in each generation, and estimates the evolutionary state through a fuzzy classification method. According to the identified state and taking into account various effects of the algorithm-controlling parameters, adaptive control strategies are developed for the inertia weight and acceleration coefficients for faster convergence speed. Further, an adaptive `elitist learning strategy' (ELS) is designed for the best particle to jump out of possible local optima and/or to refine its accuracy, resulting in substantially improved quality of global solutions. The APSO algorithm is tested on 6 unimodal and multimodal functions, and the experimental results demonstrate that the APSO generally outperforms the compared PSOs, in terms of solution accuracy, convergence speed and algorithm reliability.


Zhan, Z.-H., Zhang, J., Li, Y. and Chung, H. S.-H. (2009) Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics — Part B: Cybernetics, 39(6), pp. 1362-1381. ISSN 0018-9472
http://eprints.gla.ac.uk/7645/
Deposited on: 12 October 2009
Enlighten – Research publications by members of the University of Glasgow
http://eprints.gla.ac.uk

Adaptive Particle Swarm Optimization
Zhi-Hui Zhan, Student Member, IEEE, Jun Zhang, Senior Member, IEEE, Yun Li, Member, IEEE, and
Henry Shu-Hung Chung, Senior Member, IEEE
Abstract—An adaptive particle swarm optimization (APSO)
that features better search efficiency than classical particle swarm
optimization (PSO) is presented. More importantly, it can per-
form a global search over the entire search space with faster
convergence speed. The APSO consists of two main steps. First,
by evaluating the population distribution and particle fitness, a
real-time evolutionary state estimation procedure is performed to
identify one of the following four defined evolutionary states, in-
cluding exploration, exploitation, convergence, and jumping out in
each generation. It enables the automatic control of inertia weight,
acceleration coefficients, and other algorithmic parameters at run
time to improve the search efficiency and convergence speed. Then,
an elitist learning strategy is performed when the evolutionary
state is classified as convergence state. The strategy will act on
the globally best particle to jump out of the likely local optima.
The APSO has comprehensively been evaluated on 12 unimodal
and multimodal benchmark functions. The effects of parameter
adaptation and elitist learning will be studied. Results show that
APSO substantially enhances the performance of the PSO par-
adigm in terms of convergence speed, global optimality, solution
accuracy, and algorithm reliability. As APSO introduces only two new
parameters to the PSO paradigm, it does not introduce
additional design or implementation complexity.
Index Terms—Adaptive particle swarm optimization (APSO),
evolutionary computation, global optimization, particle swarm
optimization (PSO).
I. INTRODUCTION
Particle swarm optimization (PSO), which was introduced by Kennedy and Eberhart in 1995 [1], [2], is one
of the most important swarm intelligence paradigms [3]. The
PSO uses a simple mechanism that mimics swarm behavior in
birds flocking and fish schooling to guide the particles to search
for globally optimal solutions. As PSO is easy to implement, it
has rapidly progressed in recent years and with many successful
applications seen in solving real-world optimization problems
[4]–[10].
Manuscript received July 31, 2008; revised November 7, 2008 and
January 21, 2009. First published April 7, 2009; current version published
November 18, 2009. This work was supported in part by the National Science
Foundation (NSF) of China under Project 60573066, the NSF of Guangdong
under Project 5003346, the Scientific Research Foundation for the Returned
Overseas Chinese Scholars, State Education Ministry, P.R. China, and the
NSFC Joint Fund with Guangdong under Key Project U0835002. This paper
was recommended by Associate Editor Q. Zhang.
Z.-H. Zhan and J. Zhang are with the Department of Computer Science, Sun
Yat-Sen University, Guangzhou 510275, China (e-mail: junzhang@ieee.org).
Y. Li is with the Department of Electronics and Electrical Engineering,
University of Glasgow, G12 8LT Glasgow, U.K., and also with the University
of Electronic Science and Technology of China (UESTC), Chengdu 610054,
China.
H. S.-H. Chung is with the Department of Electronic Engineering, City
University of Hong Kong, Kowloon, Hong Kong.
Digital Object Identifier 10.1109/TSMCB.2009.2015956
However, similar to other evolutionary computation algo-
rithms, the PSO is also a population-based iterative algorithm.
Hence, the algorithm can computationally be inefficient as
measured by the number of function evaluations (FEs) required
[11]. Further, the standard PSO algorithm can easily get trapped
in the local optima when solving complex multimodal problems
[12]. These weaknesses have restricted wider applications of
the PSO [5].
Therefore, accelerating convergence speed and avoiding the
local optima have become the two most important and appeal-
ing goals in PSO research. A number of variant PSO algorithms
have, hence, been proposed to achieve these two goals [8], [9],
[11], [12]. In this development, control of algorithm parameters
and combination with auxiliary search operators have become
two of the three most salient and promising approaches (the
other being improving the topological structure) [10]. However,
so far, it is seen to be difficult to simultaneously achieve both
goals. For example, the comprehensive-learning PSO (CLPSO)
in [12] focuses on avoiding the local optima, but brings in a
slower convergence as a result.
To achieve both goals, adaptive PSO (APSO) is formulated
in this paper by developing a systematic parameter adaptation
scheme and an elitist learning strategy (ELS). To enable adap-
tation, an evolutionary state estimation (ESE) technique is first
devised. Hence, adaptive parameter control strategies can be de-
veloped based on the identified evolutionary state and by mak-
ing use of existing research results on inertia weight [13]–[16]
and acceleration coefficients [17]–[20].
The time-varying controlling strategies proposed for the PSO
parameters so far are based on the generation number in the
PSO iterations using either linear [13], [18] or nonlinear [15]
rules. Some strategies adjust the parameters with a fuzzy system
using fitness feedback [16], [17]. Some use a self-adaptive
method by encoding the parameters into the particles and
optimizing them together with the position during run time
[19], [20]. Although these generation-number-based strategies
have improved the algorithm, they may run the risk of
inappropriately adjusting the parameters because no informa-
tion on the evolutionary state that reflects the population and
fitness diversity is identified or utilized. To improve efficiency
and to accelerate the search process, it is vital to determine the
evolutionary state and the best values for the parameters.
To avoid possible local optima in the convergence state,
combinations with auxiliary techniques have been developed
elsewhere by introducing operators such as selection [21],
crossover [22], mutation [23], local search [24], reset [25], [26],
reinitialization [27], [28], etc., into PSO. These hybrid oper-
ations are usually implemented in every generation [21]–[23]
or at a prefixed interval [24] or are controlled by adaptive

strategies using stagnated generations as a trigger [25]–[28].
While these methods have brought improvements in PSO, the
performance may further be enhanced if the auxiliary oper-
ations are adaptively performed with a systematic treatment
according to the evolutionary state. For example, the mutation,
reset, and reinitialization operations can be more pertinent
when the algorithm has converged to a local optimum rather
than when it is exploring.
Extending from the existing parameter setting techniques on
inertia weight [13]–[16] and acceleration coefficients [17]–[20],
this paper develops a systematic adaptation scheme. The PSO
parameters are controlled not only by the ESE but also by taking
into account the different effects of these parameters in different
states. In addition, departing from mutation [23], reset [25],
[26], or reinitialization [27], [28] operations, the ELS is pro-
posed in this paper to perform only on the globally best particle
and only in a convergence state. This is not only because
the convergence state needs the ELS most but also because
the ELS incurs a very low computational overhead. Further, the adaptive
ELS will maintain population diversity for jumping out of the
potential local optima. Moreover, tests are to be carried out on
various topological structures in the PSO paradigm to verify
the effectiveness of the APSO and to more comprehensively
compare with other improved PSO algorithms.
In Section II, the PSO and its developments are briefly
reviewed. Section III presents the ESE approach in detail. The
APSO algorithm is proposed in Section IV through the devel-
opments of an adaptive parameter control strategy and the ELS.
Section V experimentally compares the APSO with various
existing PSO algorithms using a set of benchmark functions.
Discussions and further investigations on the APSO are made
in Section VI. Finally, conclusions are drawn in Section VII.
II. PSO AND ITS DEVELOPMENTS
A. PSO Framework
In PSO, a swarm of particles are represented as potential
solutions, and each particle i is associated with two vectors,
i.e., the velocity vector V_i = [v_i^1, v_i^2, ..., v_i^D] and the
position vector X_i = [x_i^1, x_i^2, ..., x_i^D], where D stands for the
dimensions of the solution space. The velocity and the position
of each particle are initialized by random vectors within the
corresponding ranges. During the evolutionary process, the
velocity and position of particle i on dimension d are updated as
v_i^d = ω·v_i^d + c_1·rand_1^d·(pBest_i^d − x_i^d) + c_2·rand_2^d·(nBest^d − x_i^d)    (1)

x_i^d = x_i^d + v_i^d    (2)
where ω is the inertia weight [13], c_1 and c_2 are the acceleration
coefficients [2], and rand_1^d and rand_2^d are two uniformly distributed
random numbers independently generated within [0, 1] for the dth
dimension [1]. In (1), pBest_i is the position with the best fitness
found so far for the ith particle, and nBest is the best position in
the neighborhood. In the literature, instead of using nBest, gBest may
be used in the global-version PSO, whereas lBest may be used in the
local-version PSO (LPSO).
A user-specified parameter V_max^d ∈ ℝ+ is applied to clamp the maximum
velocity of each particle on the dth dimension. Thus, if the magnitude
of the updated velocity |v_i^d| exceeds V_max^d, then v_i^d is assigned
the value sign(v_i^d)·V_max^d. In this paper, the maximum velocity
V_max is set to 20% of the search range, as proposed in [4].
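As an illustration of the update rules (1) and (2) and the velocity clamping described above, the following is a minimal Python sketch of one iteration of the global-version PSO. It is not the authors' implementation; the function name, array layout, fitness convention (minimization), and the default values w = 0.9 and c1 = c2 = 2.0 are assumptions chosen for the example.

```python
import numpy as np

def gpso_step(X, V, pbest_X, pbest_f, gbest_X, fitness,
              w=0.9, c1=2.0, c2=2.0, v_max=None):
    """One iteration of global-version PSO following (1) and (2).

    X, V             : (N, D) arrays of positions and velocities.
    pbest_X, pbest_f : personal best positions and their fitness values.
    gbest_X          : globally best position (plays the role of nBest here).
    fitness          : callable mapping an (N, D) array to N fitness values
                       (minimization is assumed in this sketch).
    v_max            : per-dimension velocity limit, e.g. 20% of the search range.
    """
    N, D = X.shape
    r1 = np.random.rand(N, D)          # rand_1^d, drawn per particle and dimension
    r2 = np.random.rand(N, D)          # rand_2^d

    # Velocity update, Eq. (1)
    V = w * V + c1 * r1 * (pbest_X - X) + c2 * r2 * (gbest_X - X)
    if v_max is not None:
        V = np.clip(V, -v_max, v_max)  # enforce |v_i^d| <= V_max^d

    # Position update, Eq. (2)
    X = X + V

    # Update personal and global bests
    f = fitness(X)
    improved = f < pbest_f
    pbest_X[improved], pbest_f[improved] = X[improved], f[improved]
    gbest_X = pbest_X[np.argmin(pbest_f)]
    return X, V, pbest_X, pbest_f, gbest_X
```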
B. Current Developments of the PSO
Given its simple concept and effectiveness, the PSO has
become a popular optimizer and has widely been applied in
practical problem solving. Thus, theoretical studies and per-
formance improvements of the algorithm have become impor-
tant and attractive. Convergence analysis and stability studies
have been reported by Clerc and Kennedy [29], Trelea [30],
Yasuda et al. [31], Kadirkamanathan et al. [32], and van den
Bergh and Engelbrecht [33]. Meanwhile, much research on per-
formance improvements has been reported, including parameter
studies, combination with auxiliary operations, and topological
structures [4], [5], [10].
The inertia weight ω in (1) was introduced by Shi and
Eberhart [13]. They proposed an ω linearly decreasing with the
iterative generations as
ω = ω_max − (ω_max − ω_min)·(g / G)    (3)
where g is the generation index representing the current number
of evolutionary generations, and G is a predefined maximum
number of generations. Here, the maximal and minimal weights
ω_max and ω_min are usually set to 0.9 and 0.4, respectively
[13], [14].
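For instance, a minimal sketch of the linearly decreasing inertia weight of (3), using the commonly cited bounds 0.9 and 0.4 (the function name and defaults are illustrative, not from the paper):

```python
def linear_inertia_weight(g, G, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, Eq. (3): w_max at g = 0, w_min at g = G."""
    return w_max - (w_max - w_min) * g / G

# Example: at mid-run (g = G/2) the weight is 0.65.
```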
In addition, a fuzzy adaptive ω was proposed in [16], and
a random version setting ω to 0.5 + random(0, 1)/2 was tested
in [34] for dynamic system optimization. As this
random ω has an expectation of 0.75, it is similar in spirit to
Clerc's constriction factor [28], [29]. The constriction factor
has been introduced into PSO for analyzing the convergence
behavior, i.e., by modifying (1) to
v_i^d = χ·[v_i^d + c_1·rand_1^d·(pBest_i^d − x_i^d) + c_2·rand_2^d·(nBest^d − x_i^d)]    (4)
where the constriction factor
χ = 2 / |2 − ϕ − √(ϕ^2 − 4ϕ)|    (5a)
is set to 0.729 with
ϕ = c_1 + c_2 = 4.1    (5b)
where c_1 and c_2 are both set to 2.05 [29]. Mathematically,
the constriction factor is equivalent to the inertia weight, as
Eberhart and Shi pointed out in [35]. In this paper, we focus
on the PSO with an inertia weight and use a global version of
PSO (GPSO) [13] to denote the traditional global-version PSO
with an inertia weight as given by (3).
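As a quick check of (5a) and (5b), evaluating χ for c_1 = c_2 = 2.05 reproduces the commonly quoted value of about 0.729. This is a small illustrative snippet, not part of the original paper:

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc's constriction factor, Eqs. (5a)-(5b)."""
    phi = c1 + c2                      # phi = 4.1 for the values above
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(constriction_factor())           # approximately 0.7298
```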
In addition to the inertia weight and the constriction fac-
tor, the acceleration coefficients c_1 and c_2 are also important

parameters in PSO. In Kennedy’s two extreme cases [36],
i.e., the “social-only” model and the “cognitive-only” model,
experiments have shown that both acceleration coefficients are
essential to the success of PSO. Kennedy and Eberhart [1]
suggested a fixed value of 2.0, and this configuration has been
adopted by many other researchers. Suganthan [37] showed
that using ad hoc values of c_1 and c_2 rather than a fixed value
of 2.0 for different problems could yield better performance.
Ratnaweera et al. [18] proposed a PSO algorithm with linearly
time-varying acceleration coefficients (HPSO-TVAC), where
a larger c_1 and a smaller c_2 were set at the beginning and
were gradually reversed during the search. Among these three
methods, the HPSO-TVAC shows the best overall performance
[18]. This may be owing to the time-varying c_1 and c_2 that
can balance the global and local search abilities, which implies
that adaptation of c_1 and c_2 can be promising in enhancing the
PSO performance. Hence, this paper will further investigate the
effects of c_1 and c_2 and develop an optimal adaptation strategy
according to ESE.
Another active research trend in PSO is hybrid PSO, which
combines PSO with other evolutionary paradigms. Angeline
[21] first introduced into PSO a selection operation similar to
that in a genetic algorithm (GA). Hybridization of GA and PSO
has been used in [38] for recurrent artificial neural network
design. In addition to the normal GA operators, e.g., selection
[21], crossover [22], and mutation [23], other techniques such
as local search [24] and differential evolution [39] have been
combined with PSO. A cooperative approach [40], a self-
organizing hierarchical technique [18], and deflection, stretching,
and repulsion techniques [41] have also been hybridized with
traditional PSO to enhance performance. Inspired by biology,
some researchers introduced niche [42], [43] and speciation
[44] techniques into PSO to prevent the swarm from crowd-
ing too closely and to locate as many optimal solutions as
possible.
In addition to research on parameter control and auxil-
iary techniques, PSO topological structures are also widely
studied. The LPSO with a ring topological structure and the
von Neumann topological structure PSO (VPSO) have been
proposed by Kennedy and Mendes [45], [46] to enhance the
performance in solving multimodal problems. Further, dynam-
ically changing neighborhood structures have been proposed
by Suganthan [37], Hu and Eberhart [47], and Liang and
Suganthan [48] to avoid the deficiencies of fixed neighbor-
hoods. Moreover, in the “fully informed particle swarm” (FIPS)
algorithm [49], the information of the entire neighborhood is
used to guide the particles. The CLPSO in [12] lets the particle
use different pBests to update its velocity on different dimen-
sions for improved performance in multimodal applications.
III. ESE FOR PSO
To more objectively and optimally control the PSO, this
section develops an ESE approach. During a PSO process, the
population distribution characteristics vary not only with the
generation number but also with the evolutionary state. For
example, at an early stage, the particles may be scattered in var-
ious areas, and, hence, the population distribution is dispersive.
As the evolutionary process goes on, however, particles would
cluster together and converge to a locally or globally optimal
area. Hence, the population distribution information would be
different from that in the early stage. Therefore, how to detect
the different population distribution information and how to use
this information to estimate the evolutionary state would be a
significant and promising research topic in PSO. The notion of
evolutionary states was first introduced in [50] and [51], where
a clustering analysis technique was used to determine the states.
This section extends this technique to systematic ESE with a
fuzzy classification option.
A. Population Distribution Information in PSO
In this section, the population distribution characteristics in
a PSO process are first investigated so as to formulate an ESE
approach. For this, a total of 12 commonly used test functions
[12], [53], [54] are adopted to later benchmark the perfor-
mance in this paper (including the tests in Section IV-B on
the effects of parameter adaptation, the benchmark experiments
in Section V, and the merit and sensitivity investigations in
Section VI). These functions are summarized in Table I, where
D represents the number of dimensions of the test function, and
Column 6 defines an “acceptance” value to gauge whether a so-
lution found by the nondeterministic PSO would be acceptable
or not.
To illustrate the dynamics of particle distribution in the PSO
process, we herein take a time-varying 2-D Sphere function
f_1(x, r) = (x_1 − r)^2 + (x_2 − r)^2,   x_i ∈ [−10, 10]    (6)

as an example, where r is initialized to −5 and shifts to 5 at
the fiftieth generation in a 100-generation optimization process.
That is, the theoretical minimum of f_1 shifts from (−5, −5) to
(5, 5) halfway through the search process. Using a GPSO [13] with
100 particles to solve this minimization problem, the population
distributions in various running phases were observed, as shown
in Fig. 1.
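A minimal sketch of this time-varying test function, assuming the reconstructed signs above (r = −5 before the fiftieth generation and r = 5 afterwards; the function and argument names are illustrative):

```python
def shifted_sphere(x, generation, shift_generation=50):
    """Time-varying 2-D Sphere function of Eq. (6); its minimum moves at shift_generation."""
    r = -5.0 if generation < shift_generation else 5.0
    return (x[0] - r) ** 2 + (x[1] - r) ** 2
```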
It can be seen in Fig. 1(a) that following the initialization, the
particles start to explore throughout the search space without
an evident control center. Then, the learning mechanisms of the
PSO pull many particles to swarm together toward the optimal
region, as seen in Fig. 1(b). Then, the population converges to
the best particle [in Fig. 1(c)]. At the fiftieth generation, the
bottom of the sphere is shifted from (−5, −5) to (5, 5). It is
seen in Fig. 1(d) that a new leader quickly emerges somewhat
far away from the current clustering swarm. It leads the swarm
to jump out of the previous optimal region to the new region
[Fig. 1(e)], forming a second convergence [Fig. 1(f)]. From
this simple investigation, it can be seen that the population
distribution information can significantly vary during the run
time, and that the PSO has the ability to adapt to a time-varying
environment.
B. ESE
Based on the search behaviors and the population distribution
characteristics of the PSO, an ESE approach is developed in

TABLE I. Twelve test functions used in this paper, the first six being unimodal and the remaining being multimodal.
Fig. 1. Population distribution observed at various stages in a PSO process. (a) Generation = 1. (b) Generation = 25. (c) Generation = 49. (d) Generation = 50. (e) Generation = 60. (f) Generation = 80.
this section. The distribution information in Fig. 1 can be
formulated as illustrated in Fig. 2 by calculating the mean dis-
tance of each particle to all the other particles. It is reasonable
to expect that the mean distance from the globally best particle
to other particles would be minimal in the convergence state
since the global best tends to be surrounded by the swarm. In
contrast, this mean distance would be maximal in the jumping-
out state, because the global best is likely to be away from
the crowding swarm. Therefore, the ESE approach will take
into account the population distribution information in every
generation, as detailed in the following steps.
Step 1: At the current position, calculate the mean distance
of each particle i to all the other particles. For
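Step 1 computes, for each particle, its mean distance to all the other particles. A brief sketch of that computation is given below, assuming the distance is the Euclidean distance between position vectors (the exact distance measure used in the paper is given later in this section):

```python
import numpy as np

def mean_distances(X):
    """Mean Euclidean distance from each particle to every other particle.

    X is an (N, D) array of particle positions; returns an array of N mean distances.
    """
    diff = X[:, None, :] - X[None, :, :]      # pairwise position differences
    dist = np.sqrt((diff ** 2).sum(axis=2))   # (N, N) pairwise distances
    N = X.shape[0]
    return dist.sum(axis=1) / (N - 1)         # exclude the zero self-distance
```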
