
Adaptive probabilities of crossover and mutation in genetic algorithms

M. Srinivas and L. M. Patnaik
IEEE Transactions on Systems, Man, and Cybernetics, Vol. 24, Iss. 4, pp. 656-667

Adaptive Probabilities of Crossover and Mutation in Genetic Algorithms

M. Srinivas and L. M. Patnaik, Fellow, IEEE
Abstract—In this paper we describe an efficient approach for multimodal function optimization using Genetic Algorithms (GAs). We recommend the use of adaptive probabilities of crossover and mutation to realize the twin goals of maintaining diversity in the population and sustaining the convergence capacity of the GA. In the Adaptive Genetic Algorithm (AGA), the probabilities of crossover and mutation, pc and pm, are varied depending on the fitness values of the solutions. High-fitness solutions are 'protected', while solutions with subaverage fitnesses are totally disrupted. By using adaptively varying pc and pm, we also provide a solution to the problem of deciding the optimal values of pc and pm, i.e., pc and pm need not be specified at all. The AGA is compared with previous approaches for adapting operator probabilities in genetic algorithms. The Schema theorem is derived for the AGA, and the working of the AGA is analyzed. We compare the performance of the AGA with that of the Standard GA (SGA) in optimizing several nontrivial multimodal functions with varying degrees of complexity. For most functions, the AGA converges to the global optimum in far fewer generations than the SGA, and it gets stuck at a local optimum fewer times. Our experiments demonstrate that the relative performance of the AGA as compared to that of the SGA improves as the epistacity and the multimodal nature of the objective function increase. We believe that the AGA is the first step in realizing a class of self-organizing GAs capable of adapting themselves in locating the global optimum in a multimodal landscape.
I. INTRODUCTION

GENETIC Algorithms [2], [7], [10], [17] are robust search and optimization techniques which are finding application in a number of practical problems. The robustness of Genetic Algorithms (hereafter referred to as GAs) is due to their capacity to locate the global optimum in a multimodal landscape. A plethora of such multimodal functions exist in engineering problems; optimization of neural network structure and learning of neural network weights, solving optimal control problems, designing structures, and solving flow problems are a few examples. It is for the above reason that considerable attention has been paid to the design of GAs for optimizing multimodal functions.

GAs employ a random, yet directed, search for locating the globally optimal solution. They are superior to 'gradient descent' techniques as the search is not biased towards the locally optimal solution. On the other hand, they differ from random sampling algorithms due to their ability to direct the search towards relatively 'prospective' regions in the search space. Typically a GA is characterized by the following components:

- a genetic representation (or an encoding) for the feasible solutions to the optimization problem
- a population of encoded solutions
- a fitness function that evaluates the optimality of each solution
- genetic operators that generate a new population from the existing population
- control parameters.

The GA may be viewed as an evolutionary process wherein a population of solutions evolves over a sequence of generations. During each generation, the fitness of each solution is evaluated, and solutions are selected for reproduction based on their fitness. Selection embodies the principle of 'Survival of the fittest.' 'Good' solutions are selected for reproduction while 'bad' solutions are eliminated. The 'goodness' of a solution is determined from its fitness value. The selected solutions then undergo recombination under the action of the crossover and mutation operators. It has to be noted that the genetic representation may differ considerably from the natural form of the parameters of the solutions. Fixed-length, binary-encoded strings for representing solutions have dominated GA research since they provide the maximum number of schemata and are amenable to simple implementation.

The power of GAs arises from crossover. Crossover causes a structured, yet randomized, exchange of genetic material between solutions, with the possibility that 'good' solutions can generate 'better' ones. The following sentences from [10, p. 13] aptly summarize the working of GAs: "..., the population contains not just a sample of n ideas, rather it contains a multitude of notions and rankings of those notions for task performance. Genetic Algorithms ruthlessly exploit this wealth of information by 1) reproducing high quality notions according to their performance and 2) crossing these notions with many other high-performance notions from other strings."

Crossover occurs only with some probability pc (the crossover probability or crossover rate). When the solutions are not subjected to crossover, they remain unmodified. Notable crossover techniques include the single-point, the two-point, and the uniform types [23].

Manuscript received August 4, 1991; revised August 28, 1992, February 25, 1993, and June 11, 1993. Recommended by Associate Editor Bezdek. M. Srinivas is with the Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. L. M. Patnaik is with the Microprocessor Applications Laboratory, Indian Institute of Science, Bangalore 560 012, India. IEEE Log Number 9400454.
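To make the crossover operator described above concrete, here is a minimal sketch of single-point crossover applied with probability pc to a pair of fixed-length binary strings. The function name and the use of Python's random module are our own illustrative choices, not part of the paper.

    import random

    def single_point_crossover(parent1, parent2, pc):
        """Cross two equal-length bit strings at one random site with probability pc."""
        if random.random() < pc:
            site = random.randint(1, len(parent1) - 1)    # crossover site
            return (parent1[:site] + parent2[site:],
                    parent2[:site] + parent1[site:])
        # solutions not selected for crossover remain unmodified
        return parent1, parent2

For instance, single_point_crossover('1100110011', '0011001100', pc=0.9) usually returns a recombined pair and occasionally returns the parents unchanged.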

Simple Genetic Algorithm ()
{
    initialize population;
    evaluate population;
    while convergence not achieved
    {
        scale population fitnesses;
        select solutions for next population;
        perform crossover and mutation;
        evaluate population;
    }
}

Fig. 1. Basic structure of a GA.
Mutation involves the modification of the value of each 'gene' of a solution with some probability pm (the mutation probability). The role of mutation in GAs has been that of restoring lost or unexplored genetic material into the population to prevent the premature convergence of the GA to suboptimal solutions.

Apart from selection, crossover, and mutation, various other auxiliary operations are common in GAs. Of these, scaling mechanisms [16] are widely used. Scaling involves a readjustment of the fitness values of solutions to sustain a steady selective pressure in the population and to prevent the premature convergence of the population to suboptimal solutions.

The basic structure of a GA is illustrated in Fig. 1.
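The pseudocode of Fig. 1 can be rendered as a short Python sketch. The version below is our own illustration, assuming positive fitness values: it uses proportional (roulette-wheel) selection, reuses the single_point_crossover helper sketched earlier, and omits the fitness-scaling step for brevity; the names run_sga, mutate, and fitness are not from the paper.

    import random

    def mutate(solution, pm):
        """Flip each bit ('gene') of the string independently with probability pm."""
        return ''.join(('1' if b == '0' else '0') if random.random() < pm else b
                       for b in solution)

    def run_sga(population, fitness, pc, pm, generations=100):
        """Generational GA mirroring Fig. 1 (fitness scaling omitted)."""
        for _ in range(generations):
            scores = [fitness(s) for s in population]            # evaluate population
            # select solutions for the next population, proportionally to fitness
            mates = random.choices(population, weights=scores, k=len(population))
            nxt = []
            for i in range(0, len(mates) - 1, 2):                # perform crossover and mutation
                c1, c2 = single_point_crossover(mates[i], mates[i + 1], pc)
                nxt += [mutate(c1, pm), mutate(c2, pm)]
            if len(mates) % 2:                                   # keep odd-sized populations intact
                nxt.append(mutate(mates[-1], pm))
            population = nxt
        return max(population, key=fitness)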
In this paper we describe an efficient technique for multimodal function optimization using GAs. We recommend the use of adaptive probabilities of crossover and mutation to realize the twin goals of maintaining diversity in the population and sustaining the convergence capacity of the GA. With the approach of adaptive probabilities of crossover and mutation, we also provide a solution to the problem of choosing the optimal values of the probabilities of crossover and mutation (hereafter referred to as pc and pm, respectively) for the GA. The choice of pc and pm is known to critically affect the behavior and performance of the GA, and a number of guidelines exist in the literature for choosing pc and pm [6], [8], [10], [16], [22]. These generalized guidelines are inadequate as the choice of the optimal pc and pm becomes specific to the problem under consideration. Grefenstette [16] has formulated the problem of selecting pc and pm as an optimization problem in itself, and has recommended the use of a second-level GA to determine the parameters of the GA. The disadvantage of Grefenstette's method is that it could prove to be computationally expensive. In our approach, pc and pm are determined adaptively by the GA itself, and the user is relieved of the burden of specifying the values of pc and pm.
The paper is organized as follows. In Section II we discuss the problems of multimodal function optimization, and the various techniques proposed in the literature to overcome the problems. Section III describes our approach of using adaptively varying probabilities of crossover and mutation for multimodal function optimization. In Section IV we compare the AGA with previous techniques for adapting operator probabilities in GAs. In Section V we derive the Schema theorem for the GA and analyze the variation of schema fitnesses. In Section VI, we present experimental results to compare the performance of the GAs with and without adaptive probabilities of crossover and mutation. The conclusions and directions for future work are presented in Section VII.
II. GENETIC ALGORITHMS AND MULTIMODAL FUNCTION OPTIMIZATION

In optimizing unimodal functions, it is important that the GA should be able to converge to the optimum in as few generations as possible. For multimodal functions, there is a need to be able to locate the region in which the global optimum exists, and then to converge to the optimum. GAs possess hill-climbing properties essential for multimodal function optimization, but they too are vulnerable to getting stuck at a local optimum (notably when the populations are small).

In this section, we discuss the role of the parameters pc and pm (the probabilities of crossover and mutation) in controlling the behavior of the GA. We also discuss the techniques proposed in the literature for enhancing the performance of GAs for optimizing multimodal functions.
The significance of pc and pm in controlling GA performance has long been acknowledged in GA research [7], [10]. Several studies, both empirical [16], [22] and theoretical [20], have been devoted to identifying optimal parameter settings for GAs. The crossover probability pc controls the rate at which solutions are subjected to crossover. The higher the value of pc, the quicker new solutions are introduced into the population. As pc increases, however, solutions can be disrupted faster than selection can exploit them. Typical values of pc are in the range 0.5-1.0. Mutation is only a secondary operator to restore genetic material. Nevertheless the choice of pm is critical to GA performance and has been emphasized in DeJong's inceptional work [6]. Large values of pm transform the GA into a purely random search algorithm, while some mutation is required to prevent the premature convergence of the GA to suboptimal solutions. Typically pm is chosen in the range 0.005-0.05.
Efforts to improve the performance of the GA in optimizing multimodal functions date back to DeJong's work [6]. DeJong introduced the ideas of 'overlapping populations' and 'crowding' in his work. In the case of 'overlapping populations', newly generated offspring replace similar solutions of the population, primarily to sustain the diversity of solutions in the population and to prevent premature convergence. The technique however introduces a parameter CF (the crowding factor), which has to be tuned to ensure optimal performance of the GA. The concept of 'crowding' led to the ideas of 'niche' and 'speciation' in GAs. Goldberg's 'sharing function' has been employed in the context of multimodal function optimization; [15] describes a method of encouraging 'niche' formation and 'speciation' in GAs. More recently, Goldberg has proposed a Boltzmann tournament selection scheme [11] for forming and sizing stable sub-populations. This technique is based on ideas from simulated annealing and promises convergence to the global optimum.

Fig. 2. Variation of fmax − f̄ and fbest (best fitness).
In all the techniques described above, no emphasis is placed on the choice of pc and pm. The choice of pc and pm is still left to the user, to be determined statically prior to the execution of the GA. The idea of adaptive operators to improve GA performance has been employed earlier [1], [3], [9], [24]. Our approach to multimodal function optimization also uses adaptive probabilities of crossover and mutation, but in a manner different from these previous approaches. We devote Section IV to discussing the above approaches and comparing them with the AGA. In the next section, we discuss the motivation for having adaptive probabilities of crossover and mutation, and describe the methods adopted to realize them.

III. ADAPTIVE PROBABILITIES OF CROSSOVER AND MUTATION
A. Motivations

It is essential to have two characteristics in GAs for optimizing multimodal functions. The first characteristic is the capacity to converge to an optimum (local or global) after locating the region containing the optimum. The second characteristic is the capacity to explore new regions of the solution space in search of the global optimum. The balance between these characteristics of the GA is dictated by the values of pc and pm, and the type of crossover employed [23]. Increasing values of pc and pm promote exploration at the expense of exploitation. Moderately large values of pc (0.5-1.0) and small values of pm (0.001-0.05) are commonly employed in GA practice. In our approach, we aim at achieving this trade-off between exploration and exploitation in a different manner, by varying pc and pm adaptively in response to the fitness values of the solutions; pc and pm are increased when the population tends to get stuck at a local optimum and are decreased when the population is scattered in the solution space.
B. Design of Adaptive pc and pm

To vary pc and pm adaptively, for preventing premature convergence of the GA to a local optimum, it is essential to be able to identify whether the GA is converging to an optimum. One possible yardstick is the average fitness value f̄ of the population in relation to the maximum fitness value fmax of the population. fmax − f̄ is likely to be less for a population that has converged to an optimum solution than for a population scattered in the solution space. We have observed the above property in all our experiments with GAs, and Fig. 2 illustrates the property for a typical case. In Fig. 2, we notice that fmax − f̄ decreases when the GA converges to a local optimum with a fitness value of 0.5 (the globally optimal solution has a fitness value of 1.0). We use the difference in the average and maximum fitness values, fmax − f̄, as a yardstick for detecting the convergence of the GA.

Since pc and pm have to be increased when the GA converges to a local optimum, i.e., when fmax − f̄ decreases, pc and pm will have to be varied inversely with fmax − f̄. The expressions that we have chosen for pc and pm are of the form

pc = k1 / (fmax − f̄)

and

pm = k2 / (fmax − f̄).
It has to be observed in the above expressions that pc and pm do not depend on the fitness value of any particular solution, and have the same values for all the solutions of the population. Consequently, solutions with high fitness values as well as solutions with low fitness values are subjected to the same levels of mutation and crossover. When a population converges to a globally optimal solution (or even a locally optimal solution), pc and pm increase and may cause the disruption of the near-optimal solutions. The population may never converge to the global optimum. Though we may prevent the GA from getting stuck at a local optimum, the performance of the GA (in terms of the generations required for convergence) will certainly deteriorate.

To overcome the above-stated problem, we need to preserve 'good' solutions of the population. This can be achieved by having lower values of pc and pm for high fitness solutions and higher values of pc and pm for low fitness solutions. While the high fitness solutions aid in the convergence of the GA, the low fitness solutions prevent the GA from getting stuck at a local optimum.
The value of pm should depend not only on fmax − f̄, but also on the fitness value f of the solution. Similarly, pc should depend on the fitness values of both the parent solutions. The closer f is to fmax, the smaller pm should be, i.e., pm should vary directly as fmax − f. Similarly, pc should vary directly as fmax − f', where f' is the larger of the fitness values of the solutions to be crossed. The expressions for pc and pm now take the forms
pc = k1 (fmax − f') / (fmax − f̄),   k1 ≤ 1.0        (1)

and

pm = k2 (fmax − f) / (fmax − f̄),   k2 ≤ 1.0        (2)

(k1 and k2 have to be less than 1.0 to constrain pc and pm to the range 0.0-1.0.)

Note that pc and pm are zero for the solution with the maximum fitness. Also, pc = k1 for a solution with f' = f̄, and pm = k2 for a solution with f = f̄. For solutions with subaverage fitness values, i.e., f < f̄, pc and pm might assume values larger than 1.0. To prevent the overshooting of pc and pm beyond 1.0, we also have the following constraints:

pc = k3,   f' ≤ f̄        (3)

and

pm = k4,   f ≤ f̄        (4)

where k3, k4 ≤ 1.0.
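Taken together, expressions (1)-(4) define pc and pm piecewise in the fitness values of the solutions involved. The sketch below is our own translation of these four expressions into Python (the function and argument names are illustrative, not from the paper); it assumes fitness is being maximized and that fmax > f̄, i.e., the population has not fully converged.

    def adaptive_probabilities(f_prime, f, f_max, f_avg, k1, k2, k3, k4):
        """Adaptive crossover and mutation probabilities per expressions (1)-(4).

        f_prime -- larger fitness value of the two solutions to be crossed
        f       -- fitness value of the solution to be mutated
        f_max   -- maximum fitness value of the population
        f_avg   -- average fitness value of the population
        """
        spread = f_max - f_avg              # assumed > 0 (population not fully converged)
        # expression (1) above the average fitness, capped by expression (3) below it
        pc = k1 * (f_max - f_prime) / spread if f_prime >= f_avg else k3
        # expression (2) above the average fitness, capped by expression (4) below it
        pm = k2 * (f_max - f) / spread if f >= f_avg else k4
        return pc, pm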
C. Practical Considerations and Choice of Values for k1, k2, k3 and k4

In the previous section, we saw that for a solution with the maximum fitness value, pc and pm are both zero. The best solution in a population is transferred undisrupted into the next generation. Together with the selection mechanism, this may lead to an exponential growth of the solution in the population and may cause premature convergence. To overcome the above stated problem, we introduce a default mutation rate (of 0.005) for every solution in the AGA.

We now discuss the choice of values for k1, k2, k3, and k4. For convenience, the expressions for pc and pm are given as

pc = k1 (fmax − f') / (fmax − f̄),   f' ≥ f̄        (5)
pc = k3,   f' < f̄        (6)
pm = k2 (fmax − f) / (fmax − f̄),   f ≥ f̄        (7)
pm = k4,   f < f̄        (8)

where k1, k2, k3, k4 ≤ 1.0.
It has been well established in GA literature [6], [10] that moderately large values of pc (0.5 < pc < 1.0) and small values of pm (0.001 < pm < 0.05) are essential for the successful working of GAs. The moderately large values of pc promote the extensive recombination of schemata, while small values of pm are necessary to prevent the disruption of the solutions. These guidelines, however, are useful and relevant when the values of pc and pm do not vary.

One of the goals of our approach is to prevent the GA from getting stuck at a local optimum. To achieve this goal, we employ solutions with subaverage fitnesses to search the search space for the region containing the global optimum. Such solutions need to be completely disrupted, and for this purpose we use a value of 0.5 for k4. Since solutions with a fitness value of f̄ should also be disrupted completely, we assign a value of 0.5 to k2 as well.

Based on similar reasoning, we assign k1 and k3 a value of 1.0. This ensures that all solutions with a fitness value less than or equal to f̄ compulsorily undergo crossover. The probability of crossover decreases as the fitness value (maximum of the fitness values of the parent solutions) tends to fmax and is 0.0 for solutions with a fitness value equal to fmax.
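With these choices (k1 = k3 = 1.0 and k2 = k4 = 0.5) and the default mutation rate of 0.005, the rates for one mating pair could be assembled as in the sketch below, reusing the adaptive_probabilities helper from the earlier sketch. Treating the default rate as a lower bound on pm is our own reading of the text, not an explicit prescription of the paper.

    K1, K3 = 1.0, 1.0     # all solutions with fitness at or below the average undergo crossover
    K2, K4 = 0.5, 0.5     # sub-average solutions are disrupted heavily by mutation
    DEFAULT_PM = 0.005    # default mutation rate introduced for every solution in the AGA

    def aga_rates(f_prime, f, f_max, f_avg):
        """pc and pm for one mating pair under the Section III-C settings (illustrative sketch)."""
        pc, pm = adaptive_probabilities(f_prime, f, f_max, f_avg, K1, K2, K3, K4)
        return pc, max(pm, DEFAULT_PM)   # assumption: the default rate acts as a floor on pm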
In the next section, we compare the AGA with previous approaches for employing adaptive operators in GAs.
IV. COMPARISON OF AGA WITH OTHER ADAPTIVE STRATEGIES

The idea of adapting crossover and mutation operators to improve the performance of GAs has been employed earlier [1], [3], [9], [24]. This section reviews these techniques and compares them with our approach.
Schaffer et al. [1] discuss a crossover mechanism wherein the distribution of crossover points is adapted based on the performance of the generated offspring. The distribution information is encoded into each string using additional bits. Selection and recombination of the distribution bits occur in the normal fashion along with the other bits of the solutions.

Davis [3], [4] discusses an effective method of adapting operator probabilities based on the performance of the operators. The adaptation mechanism provides for the alteration of operator probabilities in proportion to the fitnesses of strings created by the operators. Simply stated, operators which create and cause the generation of better strings are allotted higher probabilities. The technique has been developed in the context of a steady-state GA (see [24]), and experimental evidence has demonstrated considerable promise.

Fogarty [9] has studied the effects of varying the mutation rate over generations and integer encodings. Specifically, a mutation rate that decreases exponentially with generations has demonstrated superior performance for a single application.

In an approach employing a form of adaptive mutation, Whitley et al. [24] have reported significant performance improvements. The probability of mutation is a dynamically varying parameter determined from the Hamming distance between the parent solutions. The diversity in the population is sustained by subjecting similar solutions to increased levels of mutation.
The adaptation policy in the AGA is different from all the approaches described above; [1] is not related to adapting mutation and crossover rates. The AGA is different from [3] and [9] as, in the AGA, pc and pm are determined for each individual as a function of its fitness. In [9], pm is varied in a predetermined fashion. In [3] too, the operator probabilities are invariant with the individual fitnesses of solutions, although they are modified periodically based on the average performance of the operators (determined indirectly from the fitnesses of solutions).

The AGA bears closer resemblance to Whitley's adaptive mutation approach [24]. In both cases, the mutation rate is determined specifically for each solution. Both techniques are also derived from the idea of sustaining the diversity in the population without affecting the convergence properties. In Whitley's approach, however, the adaptive mutation technique has been employed in the context of a steady-state GA, while we are concerned with generational replacement in the AGA. Since the steady-state GA employs a form of population elitism, there is no need to 'protect' the best solutions from high levels of disruption. In the AGA, the best solutions are explicitly protected from disruption. The criterion for adaptation is also different in both cases: in [24] pm is varied based on the Hamming distance between solutions, while in our approach pc and pm are adapted based on fitness values.

The experimental results in [24] and our own experiments (Section VI) demonstrate the efficacy of this line of approach.
V. THE SCHEMA THEOREM AND THE 'AGA'

The Schema theorem [7], [10], [17] has been the predominant method for analyzing GAs. Schemata are building blocks that form the solutions, and the Schema theorem predicts the growth of high fitness building blocks at the expense of low fitness ones. The Schema theorem also models the detrimental effects of crossover and mutation on the propagation of schemata from generation to generation. In this section, we derive the Schema theorem for the GA with adaptive pc and pm. The notation that we have used in the derivation is as follows:
h : a schema
fi : the fitness value of an instance (solution) i of the schema h
f̄ : the average fitness value of the population
f̄h : the average fitness value of the schema h
fmax : the maximum fitness value of the population
f̄²h : the average of the square of the fitness values (second moment of the fitness values) for the schema h
ni(t + 1) : the expected number of offspring created in generation t + 1 due to a solution i of the schema h (and of the generation t)
Nh(t) : the number of solutions of generation t which are instances of the schema h
l(h) : the defining length of the schema h
L : the length of the solution, i.e., the number of binary bits in the encoded solution.
We now derive the expression to predict Nh(t + 1) from Nh(t). The selection criterion that we have used for the GA is that of proportional selection. We first consider the effect of crossover and then generalize the results for mutation. The expected number of offspring generated by a solution i of the schema h is given by (9). The expression that we have used for pc is given by

pc = k1 (fmax − fi) / (fmax − f̄),   fi ≥ f̄
pc = k3,   fi < f̄

where k1, k3 ≤ 1.0 and k1 ≤ k3. After substituting for pc in (9), we get the two inequalities (10) and (11).

To transform the two inequalities of (10) and (11) into one inequality, we recall the assumption made in the previous section that k1 = k3. Now we get a single inequality, (12), without any constraints on fi. To get an estimate for Nh(t + 1), we consider the summation of ni over all the solutions, i, that are instances of the schema h, i.e.,

Nh(t + 1) = Σ_{i=1}^{Nh(t)} ni(t + 1).

Equivalently, from (12), we get (13). Since Σ fi = Nh(t) × f̄h and Σ fi² = Nh(t) × f̄²h, (13) gets modified to (14); after rearranging the terms, (14) can be rewritten as (15). Expression (15) represents the schema theorem when adaptive crossover is used in the GA. We now consider some special cases of (15) based on the value of f̄h.

A. Effect of Mutation

When we include the disruptive effects of mutation, (12) may be generalized to the form (16), where (1 − k2 (fmax − fi) / (fmax − f̄))^n gives the probability that the solution i survives disruption due to mutation. For k2 << 1, the right side of (16) may be approximated to

(Footnote 1: In our research, we have used a binary alphabet for encoding the solutions.)
