
A tutorial for competent memetic algorithms: model, taxonomy, and design issues

01 Oct 2005-IEEE Transactions on Evolutionary Computation (Institute of Electrical and Electronics Engineers)-Vol. 9, Iss: 5, pp 474-488
TL;DR: This paper reviews works on the application of MAs to well-known combinatorial optimization problems and places them in a framework defined by a general syntactic model, which yields a classification scheme based on a computable index D that facilitates algorithmic comparisons and suggests areas for future research.
Abstract: The combination of evolutionary algorithms with local search was named "memetic algorithms" (MAs) (Moscato, 1989). These methods are inspired by models of natural systems that combine the evolutionary adaptation of a population with individual learning within the lifetimes of its members. Additionally, MAs are inspired by Richard Dawkins' concept of a meme, which represents a unit of cultural evolution that can exhibit local refinement (Dawkins, 1976). In the case of MAs, "memes" refer to the strategies (e.g., local refinement, perturbation, or constructive methods) that are employed to improve individuals. In this paper, we review some works on the application of MAs to well-known combinatorial optimization problems, and place them in a framework defined by a general syntactic model. This model provides us with a classification scheme based on a computable index D, which facilitates algorithmic comparisons and suggests areas for future research. Also, by having an abstract model for this class of metaheuristics, it is possible to explore their design space and better understand their behavior from a theoretical standpoint. We illustrate the theoretical and practical relevance of this model and taxonomy for MAs in the context of a discussion of important design issues that must be addressed to produce effective and efficient MAs.

Summary (6 min read)

Introduction

  • Specifically, solutions to a given problem are codified in so-called chromosomes.
  • Thus, a memetic model of adaptation exhibits the plasticity of individuals that a strictly genetic model fails to capture.
  • The authors adopt the name of MAs for this metaheuristic, because they think it encompasses all the major concepts involved by the other ones, and for better or worse has become the de facto standard, e.g., [13]–[15].
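The MA template sketched above, an EA whose members also undergo individual learning, can be illustrated with a minimal, self-contained sketch. All names, the OneMax toy problem, and the replacement policy are our own illustrative choices, not the paper's notation:

```python
import random

def memetic_algorithm(init, fitness, crossover, mutate, local_search,
                      pop_size=20, generations=30):
    """Minimal MA skeleton: an EA whose offspring are refined by local
    search before survivor selection (illustrative, not the paper's model)."""
    pop = [init() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)        # parent selection
            child = mutate(crossover(p1, p2))     # genetic variation
            child = local_search(child, fitness)  # memetic refinement
            offspring.append(child)
        # survivor selection: keep the best pop_size of parents + offspring
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# Toy usage on OneMax (maximize the number of 1-bits in a 20-bit string)
N = 20
def init():          return [random.randint(0, 1) for _ in range(N)]
def fitness(x):      return sum(x)
def crossover(a, b): cut = random.randrange(1, N); return a[:cut] + b[cut:]
def mutate(x):       y = x[:]; y[random.randrange(N)] ^= 1; return y

def hill_climb(x, f):
    """Greedy bit-flip local search: accept any improving single flip."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:]; y[i] ^= 1
            if f(y) > f(x):
                x, improved = y, True
    return x

best = memetic_algorithm(init, fitness, crossover, mutate, hill_climb)
```

Because OneMax has no local optima, the hill climber alone solves it; the sketch's purpose is only to show where local search sits inside the evolutionary cycle.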

II. GOALS, AIMS, AND METHODS

  • The process of designing effective and efficient MAs currently remains fairly ad hoc and is frequently hidden behind problem-specific details.
  • The first goal is to define a syntactic model which enables a better understanding of the interplay between the different component parts of an MA.
  • At the same time, it will provide a conceptual framework to deal with more difficult questions about the general behavior of MAs.
  • Section V presents a syntax-only model for MAs and a taxonomy of possible architectures for these metaheuristics is given in Section VI.
  • Finally, the paper closes with a discussion and conclusions in Section VIII.

A. Defining the Subject of Study

  • It has been argued that the success of MAs is due to the tradeoff between the exploration abilities of the EA, and the exploitation abilities of the local search used.
  • A priori formalizations such as [13] and [19] inevitably leave out many demonstrably successful MAs and can seriously limit analysis and generalization of the (already complex) behavior of MAs.
  • This is not because MAs are unsuited to these domains—they have been very successfully applied to the fields of multiobjective optimization (see, e.g., [21]–[24], an extensive bibliography can be found in [25]), and numerical optimization (see, e.g., [26]–[32]).
  • Rather, the reason for this omission is partly practical, to do with the space this large field would demand.
  • Nevertheless, it is worth stressing that these issues merely cloud the exposition rather than invalidate the concept of “schedulers,” which leads to the syntactic model and taxonomy; the subsequent design guidelines can equally well be applied in these more complex domains.

B. Design Issues for MAs

  • Having provided a fairly broad-brush definition of the class of metaheuristics that the authors are concerned with, it is still vital to note that the design of “competent” [33] MAs raises a number of important issues which must be addressed by the practitioner.
  • As the authors will see in the following sections, there are a host of possible answers to these questions, and it is important to use both empirical experience and theoretical reasoning in the search for answers.
  • The aim of their syntactic model is to provide a sound basis for understanding and comparing the effects of different schemes.

A. MAs for the TSP

  • The TSP is one of the most studied combinatorial optimization problems.
  • In [37], early works on the application of MAs to the TSP were commented on.
  • The local search procedure is used after the application of each of the genetic operators and not only once in every iteration of the EA.
  • In [38], the local search used is based on the powerful guided local search (GLS) metaheuristic [39].
  • GLS_Based_Memetic_Algorithm
      Begin
          Initialize population;
          For i := 1 To sizeOf(population) Do
              Evaluate individual i;
          Od
          Repeat Until (termination_condition) Do
              For j := 1 To #recombinations Do
                  selectToMerge a set of parents;
                  recombine and improve the offspring with GLS;
                  Evaluate the offspring; Add it to the population;
              Od
              For j := 1 To #mutations Do ...
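The cited MA couples the EA with guided local search. As a simpler, self-contained stand-in for the local-search component on the TSP, a first-improvement 2-opt improver might look like this (GLS itself is more elaborate; all names here are ours):

```python
import math

def tour_length(tour, dist):
    """Total length of the closed tour, given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """First-improvement 2-opt: keep reversing a segment while doing so
    shortens the tour (a simple stand-in for the guided local search
    actually used in the cited MA)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would reverse the whole tour: no change
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # gain of replacing edges (a,b),(c,d) by (a,c),(b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Tiny usage: four cities on a unit square; 2-opt untangles the crossing tour
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = two_opt([0, 2, 1, 3], dist)  # crossing tour, length 2 + 2*sqrt(2)
```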

B. MAs for the QAP

  • The QAP is found in the core of many practical problems such as facility location, architectural design, VLSI optimization, etc.
  • The problem is formally defined as follows: given two n × n matrices, of flows f and distances d, find a permutation p minimizing the total cost sum over i, j of f(i, j) · d(p(i), p(j)).
  • Because of the nature of QAP, it is difficult to treat with exact methods, and many heuristics and metaheuristics have been used to solve it.
  • As in the approach above, the optimization step is applied only to the newly generated individual, that is, the output of the crossover stage.
  • Regardless of the new representation and crossover on which the MA relies to perform its search, it should be particularly noted that mutation is applied only when a diversity crisis arises, and that immediately after mutating a solution of the population a new local search improvement is performed.

C. MAs for the BQP

  • Binary quadratic programming is defined as follows.
  • Binary Quadratic Programming Problem. Instance: a symmetric rational n × n matrix Q.
  • Aim: a binary vector x maximizing the benefit f(x) = x^T Q x.
  • As well as being a well-known NP-hard problem, BQP has many applications, e.g., financial analysis, CAD problems, machine scheduling, etc. In [47], the authors used an MA with the same architecture as above but tailored for BQP, and they were able to improve over previous approaches based on TS and simulated annealing (SA).
  • They also were able to find new best solutions for instances in the ORLIB [48].
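Assuming the standard BQP formulation (maximize x^T Q x over binary x), the objective and a naive 1-flip hill climber, a much simpler improver than the TS/SA-based methods cited above, can be sketched as:

```python
def bqp_value(Q, x):
    """BQP objective f(x) = x^T Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def one_flip_local_search(Q, x):
    """Greedy 1-flip hill climber: flip any bit that raises x^T Q x,
    until no single flip helps. (Efficient BQP codes use incremental
    gain updates instead of recomputing the objective each time.)"""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            current = bqp_value(Q, x)
            x[i] ^= 1                      # tentative flip
            if bqp_value(Q, x) > current:
                improved = True            # keep the improving flip
            else:
                x[i] ^= 1                  # undo
    return x

# Usage: this Q attains its maximum benefit, 1, at x = [1, 0]
Q = [[1, 2], [2, -5]]
x = one_flip_local_search(Q, [0, 1])
```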

D. MAs for the MGC

  • The MGC is one of the most studied problems in graph theory, with many applications in the area of scheduling and timetabling.
  • In [49], an MA was presented for this problem which used an embedded local search after the mutation stage.
  • The authors reported what, at the time the paper was written, were exciting results.
  • Fleurent and Ferland [50] studied a number of MAs for MGC based on the hybridization of a standard steady-state GA with problem-specific local searchers and TS.
  • The improvement stage was used instead of the mutation stage of the standard GA.

E. MAs for the PSP

  • Protein structure prediction is one of the most exciting problems that computational biology faces today.
  • There remains the problem of how the one-dimensional string of amino acids folds up to form a three-dimensional protein; it would be extremely useful to be able to deduce the three-dimensional form of a protein from the base sequence of the genes coding for it, but this is still beyond us.
  • One well-studied example is Dill’s HP model [53].
  • A replacement strategy was used, together with fitness-proportionate selection for mating.
  • In [58], several MAs for other molecular conformation problems are briefly commented on.

A. Syntactic Model for EAs

  • Following [62], the EA can be formalized within a “generate-and-test” framework, starting from an initial population.
  • Note that in this model, the authors consider the effects of survivor selection at the end of one generation, and parent selection at the start of the next, to be amortized into a single function, which is responsible for updating the working memory of their algorithm.
  • The MAs’ literature, as does the general EA literature, contains examples of the incorporation of diversity-preservation measures into this updating function.
  • This issue will be discussed in more depth in Section VII.
  • Note that the use of the superscript permits the modeling of crossover operators with variable arity, e.g., Moscato’s K-mergers.

B. Extension to MAs

  • The authors will need to extend this notation to include local search operators as new generating functions.
  • Examples of so-called “multimeme algorithms,” where the local search phase has access to several distinct local searchers, can be found in [20] and [67].
  • In general, the authors assume local searchers of arity one and, consequently, drop the subscript for the sake of clarity; as an example of a local searcher of higher arity, the reader might consider Jones’ Crossover Hill Climber [68].
  • To model this, the authors define entities called schedulers which are higher order functions.
  • For an early example of the application of higher order functions to MAs, see [69], where the authors implement Radcliffe and Surry’s formalism [19] in a functional language.

C. Coordinating Local Search With Crossover and Mutation

  • The fine-grain scheduler (fS) coordinates when, where, and with which parameters local searchers from the available set will be applied during the mutation and crossover stages of the evolutionary cycle.
  • The fine-grain scheduler receives three arguments.
  • Usually this parameter will have the value 1: for example, in most of the examples above, local search is applied once after recombination or mutation.
  • The symmetric case is equally valid, i.e., applying mutation to the result of improving a solution with local search.
  • Such a crossover-embedded local searcher is a local search procedure that uses a neighborhood based on two solutions.
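The idea of a fine-grain scheduler as a higher-order function, deciding per offspring whether to apply local search and which searcher from the set to use, can be sketched as follows (the function names and the probability parameter are our assumptions, not the paper's signature):

```python
import random

def make_fine_grain_scheduler(local_searchers, apply_prob=1.0, rng=random):
    """Illustrative fine-grain scheduler: a higher-order function that
    wraps a variation operator so each of its products may be refined
    by a local searcher drawn from the available set."""
    def schedule(operator):
        def wrapped(*parents):
            child = operator(*parents)           # crossover or mutation
            if rng.random() < apply_prob:
                searcher = rng.choice(local_searchers)
                child = searcher(child)          # memetic refinement
            return child
        return wrapped
    return schedule

# Usage: wrap an identity "mutation" so its output is always refined,
# here by a trivial local searcher that just sorts the genome
schedule = make_fine_grain_scheduler([sorted], apply_prob=1.0)
mutate = schedule(lambda x: x)
```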

D. Coordinating Local Search With Population Management

  • An alternative model, as illustrated in Section IV-E, is to coordinate the action of local search with the population management and updating functions.
  • A coarse-grain scheduler (cS) takes as formal parameters the updating function, the set of local searchers, the sets of parents and offspring, and the operator-specific parameter sets.
  • Further, it is possible to model the local search methods described in [26], where statistics from the population are used to apply local search selectively.
  • With the introduction of this scheduler, a new class of metaheuristics becomes available, given by the many possible instantiations of (2), where the use of superscripts recognizes that the several parameters may be time-dependent.
  • As an example of its use, one can imagine that the local searchers are based on TS and that the metascheduler uses information from earlier populations to update their tabu lists, thus combining global and local information across time.
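A coarse-grain scheduler that uses population statistics to apply local search selectively might, under our own illustrative policy, spend the refinement budget only on the top fraction of the population:

```python
def coarse_grain_scheduler(population, fitness, local_search,
                           budget_fraction=0.25):
    """Illustrative coarse-grain policy: rank the population and give
    the local-search budget only to the most promising fraction,
    leaving the rest untouched (parameter names are ours)."""
    ranked = sorted(population, key=fitness, reverse=True)
    k = max(1, int(budget_fraction * len(ranked)))
    refined = [local_search(ind) for ind in ranked[:k]]  # extended budget
    return refined + ranked[k:]                          # others unchanged

# Usage: only the two best of eight individuals get refined (+10 here)
new_pop = coarse_grain_scheduler(list(range(1, 9)), fitness=lambda x: x,
                                 local_search=lambda x: x + 10)
```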

A. Scheduler-Based Taxonomy

  • With the use of (2), it is possible to model the vast majority of the MAs found in the literature, capturing the interaction between local search and the standard evolutionary operators (mutation, crossover, selection).
  • To understand the ordering of the bits, note that the least significant bit is associated with the scheduler that receives at most one solution as an argument, the next bit with the one that receives at most a population of solutions, and the next two bits with the schedulers that employ successively larger sets of solutions in their arguments.
  • Table I classifies the various methods discussed in Section III according to their number, but it will rapidly be seen that only a small fraction of the alternative MAs were employed and investigated, and that the pattern is inconsistent across different problem types.
  • Of particular interest are the frontiers between these scheduler classes.
  • The authors have included in this table a reference to [21].

B. Relationship to Other Taxonomies

  • The taxonomy presented here complements the one introduced in [91] by Calégari et al., who provide a comprehensive taxonomic framework for EAs.
  • They then develop a hierarchical organization for each one.
  • The authors’ approach categorizes the architecture of a subclass of the algorithms that both of the previous taxonomies include.
  • In that way, a more refined classification is obtained for the subclass of EAs and hybrid metaheuristics that are MAs.
  • Of course such a syntactic model and taxonomy is of little interest to the practitioner unless it in some way aids in the conceptualization and design process.

C. Distinguishing Types of Local Search Strategies

  • Making the separation into two sets of objects (candidate solutions in the EA’s population, and local search heuristics), with interactions mediated by a set of schedulers, facilitates a closer examination of the potential nature of the memes.
  • If a meme adapts through changes in its parameters as the search progresses, then the authors call it an adaptive meme.
  • In the same way, if just one meme is self-adaptive, then the entire set is considered self-adaptive.
  • The simplest case uses static memes and requires that the parameter set is enlarged to include a probability distribution function (pdf) for the likelihood of applying the different memes, in addition to their operational parameters.
  • The simplest adaptive case requires that this pdf is time-dependent, with the scheduler becoming responsible for adapting it.
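The "simplest adaptive case", a scheduler-maintained, time-dependent pdf over the memes, can be sketched as follows; the particular update rule (a decayed, floored average of observed fitness gains) is our assumption:

```python
import random

def make_adaptive_meme_scheduler(memes, decay=0.9, floor=0.01, rng=random):
    """Sketch of an adaptive meme scheduler: it keeps a weight per meme,
    samples memes in proportion to those weights, and updates the weight
    of the applied meme from the fitness gain it achieved, so the
    selection pdf changes over time."""
    scores = [1.0] * len(memes)   # optimistic initial weights

    def apply(solution, fitness):
        i = rng.choices(range(len(memes)), weights=scores)[0]
        before = fitness(solution)
        solution = memes[i](solution)
        gain = max(0.0, fitness(solution) - before)
        scores[i] = max(floor, decay * scores[i] + (1 - decay) * gain)
        return solution

    apply.scores = scores         # exposed so callers can inspect the pdf
    return apply

# Usage: an improving meme keeps its weight; a useless one decays
random.seed(1)
sched = make_adaptive_meme_scheduler([lambda v: v + 1, lambda v: v])
x = 0
for _ in range(200):
    x = sched(x, fitness=lambda v: v)
```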

VII. DESIGN ISSUES FOR “COMPETENT” MAS

  • In [33], Goldberg describes “competent” GAs as: Genetic algorithms that solve hard problems quickly, reliably, and accurately.
  • As the authors have described above, for a wide variety of problems, MAs can fulfil these criteria better than traditional EAs.
  • It is now appropriate to revisit these issues, in the light of their syntactic model and taxonomy, in order to see what new insights can be gained.

A. Choice of Local Search Operators

  • The reader will probably not be surprised to find that the answer to the first question is “it depends.”
  • In [67], the authors showed that even within a single problem class (in that case TSP) the choice of which single LS operator gave the best results when incorporated in an MA was entirely instance-specific.
  • It is perhaps worth noting that in [95], it was shown that while coarse-grain adaptation was sufficient for a steepest-ascent LS, the extra noise inherent in a first-ascent approach gave worse results.
  • The hoped-for synergy in such an MA is that the use of genetic variation operators will produce offspring which are more likely to be in the basin of attraction of a high-quality local optimum than simply randomly selecting another point to optimize.

C. Managing the Global-Local Search Tradeoff

  • Although the majority of MAs in the literature apply local search to every individual in every generation of the EA, the model makes it clear that this is not mandatory.
  • They achieve this by providing sophisticated coarse-grain schedulers that measure population statistics and take them into consideration at the time of applying local search.
  • In [74], Land addresses the problem of how to best integrate the local search operators with the genetic operators.
  • That is, instead of performing a complete local search on every solution generated by the evolutionary operators, a partial local search is applied; only those solutions that are in a promising basin of attraction will later be assigned (by the coarse-grain scheduler) an extended CPU budget for local search.

VIII. CONCLUSION AND FURTHER WORK

  • The authors committed themselves to the study of several works on MAs, coming from different sources, with the purpose of designing a syntactical model for MAs.
  • The authors were able to identify two kinds of helpers, static and adaptive, and to generalize a third type: self-adaptive helpers.
  • While examples were found of the first two types, the third type has only recently been explored [93]–[98], suggesting another interesting line of research.
  • Another important avenue of research is the study of which kind of MA, defined by its index, is suitable for different types of problems.
  • Both the syntactic model and the taxonomy aid the understanding of the design issues involved in the engineering of MAs.


474 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 9, NO. 5, OCTOBER 2005
A Tutorial for Competent Memetic Algorithms:
Model, Taxonomy, and Design Issues
Natalio Krasnogor and Jim Smith
Abstract—The combination of evolutionary algorithms with
local search was named “memetic algorithms” (MAs) (Moscato,
1989). These methods are inspired by models of natural systems
that combine the evolutionary adaptation of a population with
individual learning within the lifetimes of its members. Addition-
ally, MAs are inspired by Richard Dawkins’ concept of a meme,
which represents a unit of cultural evolution that can exhibit local
refinement (Dawkins, 1976). In the case of MAs, “memes” refer
to the strategies (e.g., local refinement, perturbation, or construc-
tive methods, etc.) that are employed to improve individuals. In
this paper, we review some works on the application of MAs to
well-known combinatorial optimization problems, and place them
in a framework defined by a general syntactic model. This model
provides us with a classification scheme based on a computable
index
, which facilitates algorithmic comparisons and suggests
areas for future research. Also, by having an abstract model for
this class of metaheuristics, it is possible to explore their design
space and better understand their behavior from a theoretical
standpoint. We illustrate the theoretical and practical relevance of
this model and taxonomy for MAs in the context of a discussion
of important design issues that must be addressed to produce
effective and efficient MAs.
Index Terms—Design issues, evolutionary global–local search
hybrids, memetic algorithms (MAs), model, taxonomy.
I. INTRODUCTION
EVOLUTIONARY ALGORITHMS (EAs) are a class of
search and optimization techniques that work on a prin-
search and optimization techniques that work on a prin-
ciple inspired by nature: Darwinian Evolution. The concept of
natural selection is captured in EAs. Specifically, solutions to a
given problem are codified in so-called chromosomes. The evo-
lution of chromosomes due to the action of crossover, mutation,
and natural selection are simulated through computer code.
It is now well established that pure EAs are not well suited
to fine tuning search in complex combinatorial spaces and that
hybridization with other techniques can greatly improve the ef-
ficiency of search [3]–[6]. The combination of EAs with local
search (LS) was named “memetic algorithms” (MAs) in [1].
MAs are extensions of EAs that apply separate processes to
refine individuals, for example, improving their fitness by hill-
climbing.
Manuscript received July 15, 2003; revised January 27, 2005.
N. Krasnogor is with School of Computer Science and Information
Technology, University of Nottingham, Nottingham NG7 2RD, U.K. (e-mail:
natalio.krasnogor@nottingham.ac.uk).
J. Smith is with the Faculty of Computing, Engineering and Mathematical
Sciences, University of the West of England, Bristol, BS16 1QY England, U.K.
(e-mail: james.smith@uwe.ac.uk).
Digital Object Identifier 10.1109/TEVC.2005.850260
These methods are inspired by models of adaptation in natural
systems that combine the evolutionary adaptation of a popula-
tion with individual learning within the lifetimes of its members.
The choice of name is inspired by Richard Dawkins’ concept of
a meme, which represents a unit of cultural evolution that can
exhibit local refinement [2]. In the context of heuristic optimiza-
tion, a meme is taken to represent a learning or development
strategy. Thus, a memetic model of adaptation exhibits the plas-
ticity of individuals that a strictly genetic model fails to capture.
In the literature, MAs have also been named hybrid genetic
algorithms (GAs) (e.g., [7]–[9]), genetic local searchers (e.g.,
[10]), Lamarckian GAs (e.g., [11]), and Baldwinian GAs (e.g.,
[12]), etc. As noted above, they typically combine local search
heuristics with the EAs’ operators, but combinations with con-
structive heuristics or exact methods may also be considered
within this class of algorithms. We adopt the name of MAs
for this metaheuristic, because we think it encompasses all the
major concepts involved by the other ones, and for better or
worse has become the de facto standard, e.g., [13]–[15].
EAs and MAs have been applied in a number of different
areas, for example, operational research and optimization, auto-
matic programming, and machine and robot learning. They have
also been used to study and optimize models of economies,
immune systems, ecologies, population genetics, and social sys-
tems, and the interaction between evolution and learning, to
name but a few applications.
From an optimization point of view, MAs have been shown
to be both more efficient (i.e., requiring orders of magnitude
fewer evaluations to find optima) and more effective (i.e., iden-
tifying higher quality solutions) than traditional EAs for some
problem domains. As a result, MAs are gaining wide accep-
tance, in particular, in well-known combinatorial optimization
problems where large instances have been solved to optimality
and where other metaheuristics have failed to produce compa-
rable results (see for example [16] for a comparison of MAs
against other approaches for the quadratic assignment problem).
II. GOALS, AIMS, AND METHODS
Despite the impressive results achieved by some MA prac-
titioners, the process of designing effective and efficient MAs
currently remains fairly ad hoc and is frequently hidden behind
problem-specific details. This paper aims to begin the process
of placing MA design on a sounder footing. In order to do this,
we begin by providing some examples of MAs successfully ap-
plied to well-known combinatorial optimization problems, and
draw out those differences which specifically arise from the hy-
bridization of the underlying EA, as opposed to being design
choices within the EA itself. These studies are exemplars, in the
1089-778X/$20.00 © 2005 IEEE

sense that they represent a wide range of applications and algo-
rithmic options for a MA.
The first goal is to define a syntactic model which enables
a better understanding of the interplay between the different
component parts of an MA. A syntactic model is devoid of
the semantic intricacies of each application domain and hence
exposes the bare bones of this metaheuristic to scrutiny. This
model should be able to represent the many different parts that
compose an MA, and determine their roles and interrelations.
With such a model, we can construct a taxonomy of MAs, the
second goal of this paper. This taxonomy is of practical and the-
oretical relevance. It will allow for more sensible and fair com-
parisons of approaches and experimental designs. At the same
time, it will provide a conceptual framework to deal with more
difficult questions about the general behavior of MAs. More-
over, it will suggest directions of innovation in the design and
development of MAs.
Finally, by having a syntactic model and a taxonomy, the
process of more clearly identifying which of the many compo-
nents (and interactions) of these complex algorithms relate to
which of these design issues should be facilitated.
The rest of this paper is organized as follows. In Section III,
we motivate our definition of the class of metaheuristics under
consideration, and give examples of the type of design issues
that have motivated this study. This is followed in Section IV by
a review of some applications of MAs to well-known problems
in combinatorial optimization and bioinformatics. Section V
presents a syntax-only model for MAs and a taxonomy of
possible architectures for these metaheuristics is given in Sec-
tion VI. In Section VII, we return to the discussion of design
issues, showing how some of these can be aided by the insights
given by our model. Finally, we conclude with a discussion and
conclusions in Section VIII.
III. BACKGROUND
A. Defining the Subject of Study
In order to be able to define a syntactic model and taxonomy,
we must first clarify what we mean by an MA. It has been argued
that the success of MAs is due to the tradeoff between the ex-
ploration abilities of the EA, and the exploitation abilities of the
local search used. The well-known results of MAs over multi-
start local search (MSLS) [17] and greedy randomized adaptive
search procedure (GRASP) [8] suggest that, by transferring in-
formation between different runs of the local search (by means
of genetic operators) the MA is capable of performing a much
more efficient search. In this light, MAs have been frequently
described as genetic local search, which might be thought of as the
following process [18]:
In each generation of GA, apply the LS operator to all
solutions in the offspring population, before applying the
selection operator.
Although many MAs indeed use this formula, this is a somewhat
restrictive view of MAs, and we will show in the following sec-
tions that many other ways have been used to hybridize EAs
with LS with impressive results.
In [19], the authors present an algebraic formalization of
MAs. In their approach, an MA is a very special case of GA
where just one period of local search is performed. As we will
show in the following sections, MAs are used in a plethora
of alternative ways and not just in the way the formalism
introduced in [19] suggests.
It has recently been argued by Moscato that the class of MAs
should be extended to contain not only EA-based MAs, but
effectively include any population-based approach based on
a k-merger operator to combine information from solutions
[13], creating a class of algorithms called the polynomial
merger algorithm (PMA). However, PMA ignores mutation
and selection as important components of the evolutionary
metaheuristic. Rather, it focuses exclusively on recombination,
or, in its more general form, the k-merger operator. Therefore,
we do not use this definition here, as we feel that it is both
restrictive (in that it precludes the possibility of EAs which do
not use recombination), and also so broad that it encompasses
such a wide range of algorithms as to make analysis difficult.
As the limits of what is and what is not an MA are
stretched, it becomes more and more difficult to assess the
benefit of each particular component of the metaheuristic in
search or optimization. A priori formalizations such as [13]
and [19] inevitably leave out many demonstrably successful
MAs and can seriously limit analysis and generalization of the
(already complex) behavior of MAs. Our intention is to provide
an a posteriori model of MAs, using algorithms as data; that
is, applications of MAs that have been proven successful. It
will be designed in such a way to encompass those algorithms.
Thus, we use a commonly accepted definition, which may be
summarized as [20]:
An MA is an EA that includes one or more local search
phases within its evolutionary cycle.
While this definition clearly limits the scope of our study, it
does not curtail the range of algorithms that can fit this scope.
As with any formal model and taxonomy, ours will have its own
outsiders, but hopefully they will be less numerous than those
left aside by [13] and [19]. The extension of our model to other
population-based metaheuristics is being considered in a sepa-
rate paper.
Finally, we should note that we have restricted the survey part
of this paper to MAs approaches for single-objective combina-
torial optimization problems (as opposed to multiobjective or
numerical optimization problems). This is not because MAs are
unsuited to these domains; they have been very successfully
applied to the fields of multiobjective optimization (see, e.g.,
[21]–[24]; an extensive bibliography can be found in [25]),
and numerical optimization (see, e.g., [26]–[32]). Rather, the
reason for this omission is partly practical, to do with the space
this large field would demand. It is also partly because we wish
to introduce our ideas in the context of the simple local
search algorithm, where it is straightforward to
define a neighborhood, improvement, and the concept of local
optimality. When we consider multiobjective problems, the
whole concept of optimality becomes clouded by the trade-
offs between objectives, and dominance relations are usually
preferred. Similarly, in the case of numerical optimization, the
concept of local optimality is clouded by the difficulty, in the
absence of derivative information, of knowing when a solution
is truly locally optimal, as opposed to say, a point, a very small

distance away. Nevertheless, it is worth stressing that the issues
cloud the exposition, rather than invalidate the concept of
“schedulers,” which leads to our syntactic model and taxonomy,
and the subsequent design guidelines which can equally well
be applied in these more complex domains.
B. Design Issues for MAs
Having provided a fairly broad-brush definition of the class
of metaheuristics that we are concerned with, it is still vital to
note that the design of “competent” [33] MAs raises a number
of important issues which must be addressed by the practitioner.
Perhaps the foremost of these issues may be stated as:
What is the best tradeoff between local search and the
global search provided by evolution?
This leads naturally to questions such as the following.
Where and when should local search be applied within the
evolutionary cycle?
Which individuals in the population should be improved
by local search, and how should they be chosen?
How much computational effort should be allocated to
each local search?
How can the genetic operators best be integrated with
local search in order to achieve a synergistic effect?
As we will see in the following sections, there are a host
of possible answers to these questions, and it is important to
use both empirical experience and theoretical reasoning in the
search for answers. The aim of our syntactic model is to pro-
vide a sound basis for understanding and comparing the effects
of different schemes. The use of a formal model aids in this by
making some of the design choices more explicit, and by pro-
viding a means of comparing the existing MA literature with the
(far broader) body of research into EAs.
Similarly, while theoretical understanding of the interplay be-
tween local and global search is much less developed than that
of pure EAs, it is possible to look in that literature for tools
and concepts that may aid in the design of competent MAs, for
example:
Is a Baldwinian or Lamarckian model of improvement to
be preferred?
What fitness landscape(s) does the population of the MA
operate on?
What local optima are the MAs operating with?
How can we engineer MAs that efficiently traverse large
neutral plateaus and avoid deep local optima?
IV. SOME EXAMPLE APPLICATIONS OF MAS
IN OPTIMIZATION AND SEARCH
In this section, we will briefly comment on the use of MAs
on different combinatorial optimization problems and adaptive
landscapes. Applications to traveling salesman problem (TSP),
quadratic assignment problem (QAP), binary quadratic pro-
gramming (BQP), minimum graph coloring (MGC), and protein
structure prediction problem (PSP) will be reviewed.
This section does not pretend to be an exhaustive bibliographic
survey, but rather a gallery of well-known applications
of MAs from which some architectural and design conclusions
might be drawn. A comprehensive bibliography can be found
in [34].
For the definition of the problems, the notation in [35] will
be used. The reader interested in the complexity and approx-
imability results of those problems is referred to that
reference. The pseudocode used to illustrate the different algo-
rithms is shown as used by the respective authors, with only
minor changes made for the sake of clarity.
In [36], a standard local search algorithm is defined in terms
of a local search problem. Because this standard algorithm is
implicit in many MAs, we repeat it here.
Begin
  s := produce a starting solution to problem instance x;
  Repeat Until (s is locally optimal) Do
    using s and the neighborhood N, generate the next
      neighbor s' in N(s);
    If (s' is better than s) Then
      s := s';
    Fi
  Od
End.
This algorithm captures the in-
tuitive notion of searching a neighborhood as a means of
identifying a better solution. It does not specify tie-breaking
policies, neighborhood structure, etc.
This algorithm uses a greedy rather than a steepest policy,
i.e., it accepts the first better neighbor that it finds. In general,
a given solution might have several better neighbors, and the
rule that assigns one of the (potentially many) better neigh-
bors to a solution is called a pivot rule. The selection of the
pivot rule or rules to use in a given instantiation of the stan-
dard local search algorithm has a tremendous impact on the com-
plexity of the search and, potentially, on the quality of the solu-
tions explored.
Note also that the algorithm above implies that local search
continues until a local optimum is found. This may take a long
time, and in the continuous domain a proof of local optimality
may be decidedly nontrivial. Many of the local search proce-
dures embedded within the MAs in the literature are not stan-
dard in this sense; that is, they usually perform a shorter, trun-
cated local search.
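The pivot-rule and truncation choices discussed above can be sketched as follows. The bit-flip neighborhood, OneMax objective, and step budget are illustrative assumptions, not taken from [36]:

```python
def neighbors(sol):
    # Illustrative neighborhood: all single bit-flips of a binary list.
    for i in range(len(sol)):
        yield sol[:i] + [1 - sol[i]] + sol[i + 1:]

def local_search(sol, f, pivot="greedy", max_steps=10_000):
    """Standard local search with a configurable pivot rule; capping
    max_steps yields the 'truncated' variant mentioned in the text."""
    for _ in range(max_steps):
        if pivot == "greedy":
            # First-improvement: accept the first better neighbor found.
            nxt = next((n for n in neighbors(sol) if f(n) > f(sol)), None)
        else:
            # Steepest: scan the whole neighborhood, keep the best.
            best = max(neighbors(sol), key=f)
            nxt = best if f(best) > f(sol) else None
        if nxt is None:
            return sol          # locally optimal
        sol = nxt
    return sol                  # budget exhausted: may not be a local optimum

peak = local_search([0, 1, 0, 1], sum, pivot="steepest")  # -> [1, 1, 1, 1]
```

With `max_steps=1` the same routine becomes a one-step truncated search, returning an improved but not necessarily locally optimal solution.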
A. MAs for the TSP
The TSP is one of the most studied combinatorial optimiza-
tion problems. It is defined by the following.
Traveling Salesman Problem
Instance: A set C = {c_1, ..., c_n} of cities and, for each pair of cities
c_i, c_j, a distance d(c_i, c_j).
Solution: A tour of C, i.e.,
a permutation π of {1, ..., n}.
Measure: The length of the tour, i.e., the sum of d(c_π(i), c_π(i+1))
for i = 1, ..., n - 1, plus the closing edge d(c_π(n), c_π(1)).
Aim: A minimum-length tour.
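The tour-length measure is straightforward to evaluate; the distance matrix below is a made-up illustration:

```python
def tour_length(d, tour):
    # Sum of consecutive distances, including the edge that closes
    # the tour back to its starting city.
    n = len(tour)
    return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Four cities; d[i][j] is the distance from city i to city j.
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
print(tour_length(d, [0, 1, 2, 3]))  # perimeter tour: 1+1+1+1 = 4
print(tour_length(d, [0, 2, 1, 3]))  # crossing tour: 2+1+2+1 = 6
```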
KRASNOGOR AND SMITH: TUTORIAL FOR COMPETENT MEMETIC ALGORITHMS: MODEL, TAXONOMY, AND DESIGN ISSUES 477
In [37], a short review of early MAs for the TSP is presented,
where an MA was defined by the following skeleton code.
Begin
  /* initialize a population P = {i_1, ..., i_mu} */
  For j := 1 To mu Do
    i_j := Iterative_Improvement(i_j);
  Od
  stop_criterion := false;
  While (not stop_criterion) Do
    For j := 1 To lambda Do
      /* Mate */
      select parents i_a, i_b from P;
      /* Recombine */
      i'_j := Recombination(i_a, i_b);
      i'_j := Iterative_Improvement(i'_j);
    Od
    /* Select */
    P := the best mu individuals from P and the offspring;
    evaluate stop_criterion;
  Od
End.
Here, we can regard this skeleton as a particular instantiation
of the general model presented earlier, and
appropriate code should be used to initialize the population,
mate solutions, and select the next generation. Note that the
mutation stage was replaced by the local search procedure.
Also, a (mu + lambda) selection strategy was applied. The use of
local search and the absence of mutation are clear differences
between this skeleton and standard EAs.
In [37], early works on the application of MAs to the TSP
were commented on. Those works used different instantiations
of the above skeleton to produce near-optimal solutions for small
instances of the problem. Although the results were not defin-
itive, they were very encouraging, and many of the subsequent
applications of MAs to the TSP (and also to other NPO prob-
lems) were inspired by those early works.
In [38], an MA with several nonstandard features is
used. For details, the
reader is referred to [13] and [38]. We are interested here
in remarking on two important differences from the MA
skeleton shown previously. First, in this MA
the local search procedure is applied after each of
the genetic operators, and not only once in every iteration of the
EA. Second, the two metaheuristics differ in that here
a clear distinction is made between mutation and local search.
In [38], the local search used is based on the powerful guided
local search (GLS) metaheuristic [39]. This algorithm was com-
pared against MSLS, GLS, and a second MA, in which the local
search engine was the same basic move used by GLS but without
the guiding strategy. In this paper, results were presented from
experiments using instances taken from TSPLIB [40] and
fractal instances [41]. In no case was MSLS able to achieve
an optimal tour, unlike the other three approaches. Out of 31
instances tested, the GLS-based MA
solved 24 to optimality, MSLS 0, the MA with simple local search
10, and GLS 16. It is interesting to note that the paper was not
intended as a "better than" paper but rather as a pedagogical
paper, in which the MAs were presented as a new metaheuristic in
optimization.
GLS_Based_Memetic_Algorithm
Begin
  Initialize population;
  For i := 1 To sizeOf(population) Do
    individual := population_i;
    individual := Local_Search(individual);
    Evaluate(individual);
  Od
  Repeat Until (termination_condition) Do
    For i := 1 To #recombinations Do
      selectToMerge a set S_par from the population;
      offspring := Recombine(S_par);
      offspring := Local_Search(offspring);
      Evaluate(offspring);
      Add offspring to population;
    Od
    For i := 1 To #mutations Do
      selectToMutate an individual in population;
      Mutate(individual);
      individual := Local_Search(individual);
      Evaluate(individual);
      Add individual to population;
    Od
    population := SelectPop(population);
    If (population has converged) Then
      population := RestartPop(population);
    Fi
  Od
End.
Merz and Freisleben in [42]-[44] show many different com-
binations of local search and genetic search for the TSP [in both
its symmetric (STSP) and asymmetric (ATSP) versions], while
defining purpose-specific crossover and mutation operators. In
[42], the following code was used to conduct the simulations.
STSP-GA
Begin
  Initialize pop P with m tours;
  For i := 1 To m Do
    p_i := Lin-Kernighan-Opt(p_i);
  Od
  Repeat Until (converged) Do
    For i := 1 To #crossover Do
      Select two parents p_a, p_b from P randomly;
      c := Crossover-STSP(p_a, p_b);
      c := Lin-Kernighan-Opt(c);
      With probability p_mut Do
        c := Mutation-STSP(c);
        c := Lin-Kernighan-Opt(c);
      Od
      Replace an individual of P by c;
    Od
  Od
End.
In this pseudocode, the authors employ specialized crossover
and mutation operators for the TSP (and a similar algorithm
for the ATSP). As in previous examples, the initial popu-
lation is a set of local optima, in this case with respect to
the Lin-Kernighan (LK) heuristic, and the LK heuristic
is also applied to the results of crossover and mutation. The
authors motivate this by saying that they
"...let a GA operate on the set of local optima to
determine the global optimum."
However, they also note that this can lead to a disastrous
loss of diversity, which prompts their use of a selection strategy
that is neither a (mu + lambda) nor a (mu, lambda) strategy but a hybrid between
the two, whereby the new offspring replaces the most similar
member of the population (subject to elitism). As the authors re-
mark, the large-step Markov chains and iterated Lin-Kernighan
techniques are special cases of their algorithm.
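The replace-most-similar policy can be sketched as follows; the Hamming-distance similarity measure and the elitism rule are illustrative assumptions, not necessarily Merz and Freisleben's exact implementation:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def replace_most_similar(pop, child, f):
    # The child displaces the population member closest to it
    # (Hamming distance), but the current best is protected (elitism).
    best = max(pop, key=f)
    candidates = [p for p in pop if p is not best]
    victim = min(candidates, key=lambda p: hamming(p, child))
    pop[pop.index(victim)] = child
    return pop

pop = [[0, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 1]]
pop = replace_most_similar(pop, [1, 0, 0, 0], f=sum)
# [1, 0, 0, 0] replaces its nearest neighbor [0, 0, 0, 0];
# the elite [1, 1, 1, 1] survives.
```

Compared with plain (mu + lambda) or (mu, lambda) selection, this crowding-style replacement slows the loss of diversity by only ever displacing genotypically similar individuals.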
In [44], the authors change their optimization scheme to one
similar to the GLS-based MA shown earlier, with a
more traditional mutation and selection scheme, and in [43] they
use the same scheme as STSP-GA but, after finalization
of the GA run, perform postprocessing by means of local search.
It is important to note that Merz and Freisleben's MAs are
perhaps the most successful metaheuristics for the TSP and ATSP,
and a predecessor of the schemes described was the winning al-
gorithm of the First International Contest on Evolutionary Op-
timization.
In [45], Nagata and Kobayashi described a powerful MA with
an intelligent crossover, in which the local searcher is embedded
in the genetic operator. The authors of [46] describe a detailed
study of Nagata and Kobayashi's work, and relate it to the local
searcher used by Merz and Freisleben.
B. MAs for the QAP
The QAP is found at the core of many practical problems such
as facility location, architectural design, VLSI optimization, etc.
Also, the TSP and GP can be recast as special cases of the QAP. The
problem is formally defined as the following.
Quadratic Assignment Problem
Instance: Two n x n matrices A and B.
Solution: A permutation π of {1, ..., n}.
Measure: The cost of the permutation, i.e., the sum over all
pairs i, j of A(i, j) * B(π(i), π(j)).
Aim: A minimum-cost permutation π.
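The QAP measure is easy to evaluate directly; the 2 x 2 matrices below are a made-up illustration:

```python
def qap_cost(A, B, perm):
    # Cost of assigning unit i to location perm[i]:
    # sum over all i, j of A[i][j] * B[perm[i]][perm[j]].
    n = len(perm)
    return sum(A[i][j] * B[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

A = [[0, 2], [2, 0]]           # e.g., flows between two facilities
B = [[0, 3], [3, 0]]           # e.g., distances between two locations
print(qap_cost(A, B, [0, 1]))  # 2*3 + 2*3 = 12
```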
Because of the nature of the QAP, it is difficult to treat with
exact methods, and many heuristics and metaheuristics have
been used to solve it. In this section, we briefly comment
on the application of MAs to the QAP.
In [9], the following MA, described as a "hybrid GA" meta-
heuristic, is proposed.
Begin
  P := empty set;
  For i := 1 To m Do
    generate a random permutation s;
    s := Init_Heuristic(s);
    Add s to P;
  Od
  Sort P;
  For i := 1 To number_of_generations Do
    For j := 1 To num_offspring_per_
        generation Do
      select two parents s_a, s_b from P;
      s' := Improve_Heuristic(Crossover(s_a, s_b));
      Add s' to P;
    Od
    Sort P;
    P := Reduce(P, m);
  Od
  Return the best s in P;
End.
In the code shown above, two subroutines are initializa-
tion and improvement heuristics, respectively, the first applied to
seed the population and the second to refine each new offspring. In particular, the
authors report on experiments where the improvement heuristic is a tabu search
(TS) heuristic. At the time that paper was written, their MA was
one of the best heuristics available (in terms of solution quality
for standard test instances).
It is interesting to remark that, as in the TSP MAs shown previously, the
GA is seeded with a high-quality initial population, which is
the output of an initial local search strategy (TS in this
case). Again, we find that the selection strategy, represented
by the sorting and reduction of the population, is a (mu + lambda) strategy as in the previous MAs.
The authors further increase the selection pressure through their
mating selection strategy. As in the TSP skeleton,
no explicit mutation strategy is used: Fleurent and Ferland
regard the local search steps as mutations that are applied with
probability 1. As in the previous examples, the opti-
mization step is applied only to the newly generated individual,
that is, the output of the crossover stage.
In [16], results were reported that improve on
those in the paper just discussed, as well as on those of other meta-
heuristics for the QAP. A sketch of the algorithm used is the
following.