
Proceedings ArticleDOI

Deceptiveness and neutrality the ND family of fitness landscapes

08 Jul 2006-pp 507-514



Deceptiveness and Neutrality
The ND Family of Fitness Landscapes
William Beaudoin, Sébastien Verel, Philippe Collard, Cathy Escazut
University of Nice-Sophia Antipolis
I3S Laboratory
Sophia Antipolis, France
{beaudoin,verel,pc,escazut}@i3s.unice.fr
ABSTRACT
When a considerable number of mutations have no effect on fitness values, the fitness landscape is said to be neutral. In order to study the interplay between neutrality, which exists in many real-world applications, and the performance of metaheuristics, it is useful to design landscapes which make it possible to tune the neutral degree distribution precisely. Even though many neutral landscape models have already been designed, none of them is general enough to create landscapes with specific neutral degree distributions. We propose three steps to design such landscapes: first, using an algorithm, we construct a landscape whose distribution roughly fits the target one; then we use a simulated annealing heuristic to bring the two distributions closer; finally we assign fitness values to each neutral network. Using this new family of fitness landscapes we are then able to highlight the interplay between deceptiveness and neutrality.
Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods,
and Search.
General Terms
Algorithms, Performance, Design, Experimentation.
Keywords
Fitness landscapes, genetic algorithms, search, benchmark.
1. INTRODUCTION
The Adaptive Landscape metaphor introduced by S. Wright [1] has dominated the view of adaptive evolution: an uphill walk of a population on a mountainous fitness landscape in which it can get stuck on suboptimal peaks. Results from molecular evolution have changed this picture: Kimura's model [2] assumes that the overwhelming majority of mutations are either effectively neutral, or lethal and in the latter case purged by negative selection. This assumption is called the neutral hypothesis. Under this hypothesis, the dynamics of populations evolving on such neutral landscapes differ from those on adaptive landscapes: they are characterized by long periods of fitness stasis (the population staying on a 'neutral network') punctuated by shorter periods of innovation with rapid fitness increases [3]. In the field of evolutionary computation, neutrality plays an important role in real-world problems: in the design of digital circuits [4][5][6], and in evolutionary robotics [7][8]. In those problems, neutrality is implicitly embedded in the genotype-to-phenotype mapping.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
GECCO'06, July 8–12, 2006, Seattle, Washington, USA.
Copyright 2006 ACM 1-59593-186-4/06/0007 ...$5.00.
1.1 Neutrality
We recall a few fundamental concepts about fitness landscapes and neutrality (see [9] for a more detailed treatment). A landscape is a triplet (S, V, f) where S is a set of potential solutions, i.e. a search space; V : S → 2^S, a neighbourhood structure, is a function that assigns to every s ∈ S a set of neighbours V(s); and f : S → IR is a fitness function that can be pictured as the "height" of the corresponding potential solutions. The neighbourhood is often defined by an operator like bit-flip mutation. A neutral neighbour of s is a neighbour with the same fitness f(s). The neutral degree of a solution is the number of its neutral neighbours. A fitness landscape is neutral if there are many solutions with a high neutral degree. A neutral network, denoted NN, is a connected graph whose vertices are solutions with the same fitness value, and two vertices are connected if they are neutral neighbours.
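These definitions can be made concrete in a few lines. The following sketch is our own illustration (the integer encoding of bitstrings and the flat toy landscape are assumptions, not taken from the paper): it computes the neutral degree under the bit-flip neighbourhood and extracts neutral networks as connected components of the "same fitness and Hamming neighbour" graph.

```python
def neighbours(s, n):
    """Bit-flip neighbourhood: all strings at Hamming distance 1 from s."""
    return [s ^ (1 << i) for i in range(n)]

def neutral_degree(s, fitness, n):
    """Number of neighbours of s sharing the fitness of s."""
    return sum(1 for v in neighbours(s, n) if fitness[v] == fitness[s])

def neutral_networks(fitness, n):
    """Connected components among same-fitness Hamming neighbours."""
    seen, networks = set(), []
    for s in range(2 ** n):
        if s in seen:
            continue
        stack, component = [s], set()
        while stack:
            u = stack.pop()
            if u in component:
                continue
            component.add(u)
            stack.extend(v for v in neighbours(u, n)
                         if fitness[v] == fitness[u] and v not in component)
        seen |= component
        networks.append(component)
    return networks

# Toy example: a flat 3-bit landscape is a single neutral network
n = 3
flat = [0.5] * (2 ** n)
assert neutral_degree(0, flat, n) == n
assert len(neutral_networks(flat, n)) == 1
```

On a flat landscape every neighbour is neutral, so the whole search space forms one NN; any fitness function that distinguishes solutions splits it into several components.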
1.2 Fitness Landscapes with Neutrality
In order to study the relationship between neutrality, dynamics of
Evolutionary Algorithms (EA) and search difficulty, some bench-
marks of neutral landscapes have been proposed. More often neu-
trality is either an add-on feature, as in NK-landscapes, or an inci-
dental property, as in Royal-Road functions. In most cases the de-
sign acts upon the amount of solutions withthe same fitness. Royal-
Road functions [10] are defined on binary strings of length N =
n.k where n is the number of blocks and k the size of one block.
The fitness function corresponds to the number of blocks which are
set with k bits value 1 and neutrality increases with k. Numerous
landscapes are variant of NK-Landscapes [11]. The fitness function
of an NK-landscape is a function f : {0, 1}
N
[0, 1) defined
on binary strings with N bits. An ’atom’ with fixed epistasis level
is represented by a fitness component f
i
: {0, 1}
K+1
[0, 1) as-
sociated to each bit i. It depends on the value at bit i and also on
the values at K other epistatic bits. The fitness f is the average
of the values of the N fitness components f
i
. Several variants of
NK-landscapes try to reduce the number of fitness values in or-
507

der to add some neutrality. In NKp-landscapes [12], f
i
(x) has a
probability p to be equal to 0 ;inNKq-landscapes [13], f
i
(x) is
uniformly distributed in the interval [0,q1]IN ;inTechnological
Landscapes [14], tuned by a natural number M, f(x) is rounded
so that it can only take M different values. For all those problems,
neutrality is tuned by one parameter only: neutrality increases ac-
cording to p and decreases with q or M (see for example Figure
2).
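As an illustration of how quantisation induces neutrality, here is a minimal NKq-style sketch (our own, not the authors' code: the lazily filled component tables, the randomly drawn epistatic links, and the scaling of the average into [0, 1) are assumptions). With small q many components collide on the same integer value, so many one-bit mutations leave f unchanged.

```python
import random

def make_nkq(n, k, q, seed=0):
    """NKq-style fitness (sketch): each bit i has a component table over
    itself and k random epistatic bits; component values are integers
    drawn uniformly from {0, ..., q-1}. Small q means few distinct
    component values, hence more neutrality."""
    rng = random.Random(seed)
    links = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{} for _ in range(n)]

    def f(x):  # x is a tuple of n bits
        total = 0
        for i in range(n):
            key = (x[i],) + tuple(x[j] for j in links[i])
            if key not in tables[i]:          # fill table entries lazily
                tables[i][key] = rng.randrange(q)
            total += tables[i][key]
        return total / (n * q)                # average, scaled into [0, 1)
    return f

f = make_nkq(n=16, k=4, q=2)
x = tuple(random.Random(1).randrange(2) for _ in range(16))
assert 0.0 <= f(x) < 1.0
```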
The dynamics of a population on a neutral network are complex, even on flat landscapes, as shown by Derrida [15]. The works [16][17][18][19], at the interplay of molecular evolution and optimization, study the convergence of a population on neutral networks. In the case of an infinite population under mutation and selection, they show that the distribution over a NN is determined only by the topology of this network. That is to say, the population converges to the solutions in the NN with high neutral degree. Thus, the neutral degree distribution is an important feature of neutral landscapes.
In order to study neutrality more precisely, for instance the link between neutrality and search difficulty, we need "neutrality-driven design", where neutrality really guides the design process. In this paper we propose to generate a family of landscapes where it is possible to tune accurately the neutral degree of solutions.
2. ND-LANDSCAPES
In this section, we first present an algorithm to create a landscape with a given neutral degree distribution. Then we refine the method to obtain more accurate landscapes, and finally we study the time and space complexity of the algorithm.
2.1 An algorithm to design small ND-Landscapes
We now introduce a simple model of neutral landscapes called ND-Landscapes, where N refers to the number of bits of a solution and D to the neutral degree distribution. In this first step our aim is to provide an exhaustive definition of the landscape, assigning one fitness value to each solution. We fix N to 16 bits, so the size of the search space is 2^16. Building a ND-Landscape is done by splitting the search space into neutral networks. However, the fitness value of each neutral network has no influence on the neutrality. This is why these fitness values are randomly chosen.
Let D be an array of size N+1 representing a neutral degree distribution. N and D are given as inputs and the algorithm (see algorithm 1) returns a fitness function f from {0,1}^N to IR such that the neutral degree distribution of the fitness landscape is similar to D. For simplicity, we chose to give a different fitness value to each neutral network. We define RouletteWheel(D) as a random variable whose density is given by the distribution D. It is directly inspired by the genetic algorithm selection operator. For example, let Δ be the following distribution: Δ[0] = 0, Δ[1] = 0.25, Δ[2] = 0.5, Δ[3] = 0.25. RouletteWheel(Δ) will return value 1 25% of the time, 2 50% of the time and 3 25% of the time. Figure 1 shows the neutral networks of an ideal ND-Landscape (size = 2^5) for the distribution Δ.
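RouletteWheel(D) can be sketched as the usual cumulative-sum draw from fitness-proportionate selection (a minimal version; the function name and the round-off guard are our own):

```python
import random

def roulette_wheel(d, rng=random):
    """Return index i with probability d[i], where d sums to 1:
    the paper's RouletteWheel(D) random variable."""
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(d):
        acc += p
        if r < acc:
            return i
    return len(d) - 1  # guard against floating-point round-off

delta = [0.0, 0.25, 0.5, 0.25]
counts = [0] * 4
for _ in range(10000):
    counts[roulette_wheel(delta)] += 1
# roughly a quarter / half / quarter of the draws fall on 1 / 2 / 3
```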
2.2 A metaheuristic to improve the ND design
Using algorithm 1, exhaustive fitness allocation does not create a landscape with a neutral degree distribution close enough to the input distribution. The reason is that the fitness function is completely defined before the neutral degree of every solution has been considered. Hence, we use a simulated annealing metaheuristic to improve the landscape created by algorithm 1. Here, simulated annealing is not used to find a good solution of a ND-Landscape but to adjust the landscape by modifying the fitness of some solutions
[Figure 1 omitted: a graph over the 32 bitstrings of length 5.]
Figure 1: Example of a tiny ND-Landscape. Each node represents a solution and two nodes are connected if they have the same fitness value and are Hamming neighbours. In this example there are five neutral networks.
Algorithm 1 Generation of ND-Landscapes
  ∀s ∈ S, f[s] ← unaffected
  randomly choose one solution s0
  CandidatesList ← S sorted by distance from s0
  while not empty(CandidatesList) do
    s ← head(CandidatesList)
    for d = 0 to N do
      if s can't have d neutral neighbours
        then D'[d] ← 0
        else D'[d] ← D[d]
    end for
    n ← RouletteWheel(D')
    Give a fitness value to some neighbours such that s has exactly n neutral neighbours, without changing the neutral degrees of solutions which have already been chosen (∉ CandidatesList)
    D[n] ← D[n] − 1/2^N
    CandidatesList ← next(CandidatesList)
  end while
[Figure 2 plots omitted (frequency vs. neutral degree). Panel statistics:]
NKp with N=16, K=5, p=0.8: average 5.16, std dev 2.37.  NKq with N=16, K=4, q=2: average 3.94, std dev 1.74.
Technological landscape with N=16, K=4, M=20: average 5.49, std dev 1.99.  Royal Road with N=16, n=4, k=4: average 14.0, std dev 2.00.
Figure 2: Neutral degree distribution for some neutral landscapes
so that the neutral distribution of the ND-Landscape comes closer to the input distribution. The local operator is the change of the fitness value of one solution of the landscape, which can alter at most N+1 neutral degrees. The acceptance of a transition is determined by the difference between the distance to the input distribution before and after this transition. The distance we use to compare two distributions is the root mean square:

    dist(D, D') = sqrt( Σ_{i=0}^{N} (D[i] − D'[i])^2 )
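This distance translates directly into code (a sketch following the formula as written, summing over the N+1 entries of the two distribution arrays):

```python
import math

def dist(d0, d1):
    """Root of the summed squared differences between two neutral-degree
    distributions D and D' of size N+1 (the paper's dist(D, D'))."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d0, d1)))

assert dist([0.25] * 4, [0.25] * 4) == 0.0  # identical distributions
```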
Simulated annealing appeared to be a fast and efficient method for this particular task. Improvements made by simulated annealing are shown in figure 3. The easiest neutral distributions to obtain seemed to be smooth ones, like Gaussian distributions (which are the most frequently encountered when dealing with real or artificial neutral problems). On the other hand, sharp distributions (like the middle-right one of figure 3) are really hard to approach. In addition, independently of the shape, distributions with a higher average neutral degree are harder to approximate.
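The adjustment loop relies on the standard Metropolis acceptance rule; the following is our own minimal sketch (the paper does not give its cooling schedule or acceptance details): a move that changes the distance to the target by delta is always accepted if it improves, and accepted with probability exp(−delta/T) otherwise.

```python
import math
import random

def metropolis_accept(delta, temperature, rng=random):
    """Accept a candidate fitness change whose effect on the distance to
    the target distribution is `delta`: improvements (delta <= 0) are
    always accepted, worsening moves with probability exp(-delta/T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

At high temperature almost every move is accepted; as the temperature falls, only moves that reduce the distance survive.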
2.3 Space and Time complexity
To create a landscape with a search space of size 2^N, we use an array of size 2^N containing fitness values and a list of forbidden values for each solution. Thus we need a memory space of size O(2^N × N). Consequently the space complexity is O(2^N × N).
In order to know what the possible neutral degrees of an unaffected solution s are, we must consider every interesting value for s (the fitnesses of all neighbour solutions and a random value), and for each of these values we must find out all possible neutral degrees. This can be done in time O(N^2). We evaluate the possible neutral degrees once for each solution. The time allowed for simulated annealing is proportional to the time elapsed during construction. Thus, the time complexity of the algorithm is O(2^N × N^2).
Consequently we can only construct ND-Landscapes with a small N (≤ 16), but we will see in section 4 how to create Additive Extended ND-Landscapes with far greater search spaces.
2.4 Sizes of the generated Neutral Networks
Figure 4 shows the diversity of sizes of neutral networks for 4 distributions. For every distribution we created 50 different ND-Landscapes. The graphs on the left show the input and the mean resulting distribution. The graphs on the right show all of the networks of these landscapes sorted by decreasing size, on a logarithmic scale. We clearly see that the neutral degree distribution is a truly determining parameter for the structure of the generated landscape.
3. TUNING DECEPTIVENESS OF ND-LANDSCAPES
Once we have generated a landscape with a specific neutral degree distribution, we can change the fitness value of all neutral networks without changing the neutral degree distribution (as long as we do not give the same fitness to two adjacent networks). Hence, for a given neutral distribution, we can tune the difficulty of a ND-Landscape. For instance, if each NN has a random fitness value from [0, 1] then the landscape is very hard to optimize. Here, we use the well known Trap Functions [20] to assign fitnesses to NNs in order to obtain a ND-Landscape with tunable deceptiveness.
The trap functions are defined from the distance to one particular solution. They admit two optima, a global one and a local one. They are parametrized by two values b and r. The first one, b, allows

[Figure 3 plots omitted. Panel distances, left to right and top to bottom:]
dist(D1, D0) = 0.110, dist(D2, D0) = 0.00338;  dist(D1, D0) = 0.119, dist(D2, D0) = 0.0175;
dist(D1, D0) = 0.0937, dist(D2, D0) = 0.00246;  dist(D1, D0) = 0.151, dist(D2, D0) = 0.0693;
dist(D1, D0) = 0.0750, dist(D2, D0) = 0.0103;  dist(D1, D0) = 0.0917, dist(D2, D0) = 0.00403.
Figure 3: Neutral degree distributions obtained by algorithm 1 (D1) and then adjusted by simulated annealing (D2). Neutral degrees are on the abscissa. Impulses represent the target distributions (D0).
to set the width of the attractive basin of each optimum, and r sets their relative importance. The function f_T : {0,1}^N → IR is defined by:

    f_T(x) = 1.0 − d(x)/b              if d(x) ≤ b,
             r (d(x) − b) / (1.0 − b)  otherwise

where d(x) is the Hamming distance between x and one particular solution (the global optimum), divided by N. The problem is more deceptive as r is lower and b is higher. In our experiments, we use two kinds of Trap functions with r = 0.9, one with b = 0.25 and another one with b = 0.75 (see figure 5 (a) and (b)).
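A direct sketch of the trap function on the normalised distance d ∈ [0, 1] (the defaults match the paper's b = 0.25, r = 0.9 setting; the function name is ours):

```python
def trap(d, b=0.25, r=0.9):
    """Trap function f_T on the normalised Hamming distance d in [0, 1]:
    a global optimum at d = 0, a deceptive local optimum at d = 1 with
    relative height r, and basin boundary b."""
    if d <= b:
        return 1.0 - d / b
    return r * (d - b) / (1.0 - b)

assert trap(0.0) == 1.0    # global optimum
assert trap(1.0) == 0.9    # local optimum of height r
assert trap(0.25) == 0.0   # basin boundary
```

With b = 0.25 only a quarter of the distance range leads uphill to the global optimum, so most random starting points climb towards the deceptive local optimum instead.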
To assign a fitness value to each neutral network, we first choose the optimum neutral network, denoted NN_opt (for example the one containing the solution 0^N), and set its fitness to the maximal value 1.0. Then, for each neutral network, we compute the distance d between its centroid(1) and the centroid of NN_opt; finally the fitness value of the NN is set according to a trap function(2) and the distance d. In order to ensure that all adjacent networks have different fitness values, it is possible to add a white noise to the fitness values of each NN. In the following experiments, the length of the bitstring is N = 16. ND-landscapes are constructed with uniform neutral degree distributions. We use the distributions defined by

    D_{p,w}[i] = 1/w  if i ∈ {p, ..., p + w − 1},
                 0    elsewhere

where p ∈ {0, 7}, w ∈ {3, 4}, and the two Trap functions defined in figure 5.

(1) The centroid of a NN is the string of the frequencies of appearance of bit value 1 at each position.
(2) This trap function is defined for all real numbers between 0 and N.
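The centroid of footnote (1) can be computed directly. This sketch is ours: representing solutions as integers, and summing per-position differences as the real-valued extension of the Hamming distance, are our own assumptions.

```python
def centroid(network, n):
    """Centroid of a neutral network: at each position, the frequency of
    bit value 1 among the network's solutions (solutions as integers)."""
    size = len(network)
    return [sum((s >> i) & 1 for s in network) / size for i in range(n)]

def centroid_distance(c1, c2):
    """Per-position distance between two centroids (a real-valued
    extension of the Hamming distance; our choice)."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

# The network {000, 001} has centroid [0.5, 0, 0]: bit 0 is 1 half the time
assert centroid({0b000, 0b001}, 3) == [0.5, 0.0, 0.0]
```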
[Figure 4 plots omitted. For each of the four panels, the left graph shows the target distribution D0 and the obtained distribution D2 (proportion vs. neutral degree); the right graph shows the neutral network sizes by rank on a log-log scale. Mean neutral degrees of the four panels: 5.15, 5.00, 5.50 and 4.00.]
Figure 4: Neutral network sizes for various ND-landscapes
3.1 Fitness Distance Correlation of ND-Landscapes
To estimate the difficulty of searching these landscapes we use a measure introduced by Jones [21] called fitness distance correlation (FDC). Given a set F = {f_1, f_2, ..., f_m} of m individual fitness values and a corresponding set D = {d_1, d_2, ..., d_m} of the m distances to the global optimum, FDC is defined as:

    FDC = C_FD / (σ_F σ_D)

where:

    C_FD = (1/m) Σ_{i=1}^{m} (f_i − f̄)(d_i − d̄)

is the covariance of F and D, and σ_F, σ_D, f̄ and d̄ are the standard deviations and averages of F and D. Thus, by definition, FDC lies in the range [−1, 1]. As we hope that fitness increases as distance to the global optimum decreases, we expect that, with an ideal fitness function, FDC takes the value −1. According to Jones [21], problems can be classified in three classes, depending on the value of the FDC coefficient:
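FDC translates directly into code (a sketch from the definitions above):

```python
import math

def fdc(fitnesses, distances):
    """Fitness distance correlation: covariance of F and D divided by the
    product of their standard deviations. A value near -1 means fitness
    rises as the distance to the optimum falls, i.e. an easy problem for
    a maximiser in Jones's classification."""
    m = len(fitnesses)
    fbar = sum(fitnesses) / m
    dbar = sum(distances) / m
    cov = sum((f - fbar) * (d - dbar)
              for f, d in zip(fitnesses, distances)) / m
    sf = math.sqrt(sum((f - fbar) ** 2 for f in fitnesses) / m)
    sd = math.sqrt(sum((d - dbar) ** 2 for d in distances) / m)
    return cov / (sf * sd)

# Fitness strictly decreasing with distance: ideally easy, FDC = -1
assert abs(fdc([3.0, 2.0, 1.0], [0.0, 1.0, 2.0]) + 1.0) < 1e-12
```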
For each distribution and each Trap function, 30 landscapes were generated.