Book ChapterDOI

A generic framework for population-based algorithms, implemented on multiple FPGAs

14 Aug 2005, pp. 43-55
TL;DR: This work outlines a generic framework that captures a collection of population-based algorithms, allowing commonalities to be factored out, and properties previously thought particular to one class of algorithms to be applied uniformly across all the algorithms.
Abstract: Many bio-inspired algorithms (evolutionary algorithms, artificial immune systems, particle swarm optimisation, ant colony optimisation,...) are based on populations of agents. Stepney et al [2005] argue for the use of conceptual frameworks and meta-frameworks to capture the principles and commonalities underlying these, and other bio-inspired algorithms. Here we outline a generic framework that captures a collection of population-based algorithms, allowing commonalities to be factored out, and properties previously thought particular to one class of algorithms to be applied uniformly across all the algorithms. We then describe a prototype proof-of-concept implementation of this framework on a small grid of FPGA (field programmable gate array) chips, thus demonstrating a generic architecture for both parallelism (on a single chip) and distribution (across the grid of chips) of the algorithms.

Summary (4 min read)

1 Introduction

  • Many bio-inspired algorithms are based on populations of agents trained to solve some problem such as optimising functions or recognising categories.
  • The authors take up this challenge, and, in section 2, outline a generic framework abstracted from the individual population-based models of the following classes: genetic algorithms (GA), AIS negative selection, AIS clonal selection, PSO, and ant colony optimisation (ACO).
  • The framework provides a basis for factoring out the commonalities, and applying various properties uniformly across all the classes of algorithms, even where they were previously thought particular to one class (section 3).
  • In section 4 the authors describe their proof-of-concept prototype implementation of the generic framework on a platform of multiple field programmable gate array (FPGA) chips.

2 The generic framework for population algorithms

  • There are many specific algorithms and implementation variants of the different classes.
  • It is not their intention to capture every detail of all the variants in the literature; rather, the authors take a step back from the specifics, and abstract the basic underlying concepts, particularly the more bio-inspired ones, of each class of algorithm.
  • The authors unify the similarities between these basics in order to develop a generic framework.
  • The intention is that such a framework provides a useful starting point for the subsequent development of more sophisticated variants of the algorithms.

Basic underlying concepts

  • Each individual contains a set of characteristics, which represent the solution.
  • GA: the individuals are chromosomes; each characteristic is a gene.
  • AIS negative selection: the individuals are antibodies; each characteristic is a shape receptor.
  • AIS clonal selection: there are two populations; in the main population the individuals are antibodies (again with shape-receptor characteristics), and there is also a population of memory cells drawn from this main population.
  • Swarms: the individuals are boids; the characteristics are position, velocity and neighbourhood group (the other visible individuals).
  • Ants: the individuals are the complete paths (not the ants, which are merely mechanisms to construct the complete paths from path steps); the characteristics are the sequence of path steps, where each step has an associated length and pheromone level.

Algorithm stages

  • The different specific algorithms each exhibit six clearly distinct stages, comprising a generation.
  • These are generalised as: 1. Create: make novel members of the population; 2. Evaluate: evaluate each individual for its affinity to the solution; 3. Test: test whether some termination condition has been met; 4. Select: select certain individuals from the current generation, based on their affinity, to be used in the creation of the next generation; 5. Spawn: create new individuals for the next generation; 6. Mutate: change selected individuals.
  • The authors describe each of these stages, covering the generic properties, and how they are instantiated for each specific class of algorithm.
  • Rather than saying that some individuals survive from generation to generation, for uniformity the authors consistently consider each generation to be a completely fresh set of individuals, with some possibly being copies of previous generation individuals.
  • As another example, the pheromone changes in the Ant algorithm are mapped to the generic mutate step.

Create

  • Creation makes novel members of the populations.
  • In the first generation, the whole population is set up, and the members have their characteristics initialised.
  • On subsequent generations, creation “tops up” the population with fresh individuals, as necessary.

Evaluate

  • The affinity measures how well each individual solves (part of) the problem.
  • This function should ideally (but does not always) have the structure of a metric over the space defined by the characteristics.

Test

  • The test for termination is either (a) a sufficiently good solution is found, or (b) enough generations have been run without finding a sufficiently good solution.
  • On termination, the solution is: for GA, Swarms and Ants, the highest-affinity individual; for AIS negative selection, the set of individuals with above-threshold affinities; for AIS clonal selection, the population of memory cells.

Select

  • High affinity individuals are selected to contribute somehow to the next generation’s population.
  • There are several selection algorithms commonly used.
  • n best selects the n highest-affinity individuals from the current population; threshold selects all the individuals with an affinity above some given threshold value.
  • Roulette wheel selection randomly chooses a given number of individuals, with probability of selection proportional to their affinity, or to their ranking.
  • Tournament randomly selects teams of individuals, and then selects a subset of individuals from each team.

Spawn

  • Production of new individuals for the next generation usually involves combining the characteristics of parent individuals from the selected population (ants are a special case).
  • GA: the characteristics of pairs of selected parents are combined using a crossover mask; if the mask is set to the identity, the two new individuals are clones of the two parents.
  • AIS negative selection: the selected parents become the basis of the new generation, which is topped up to the population size by creating sufficient new individuals.
  • Swarms: a new individual is spawned from its sole parent and the highest-affinity individual in the parent’s neighbourhood group; the new position is derived from the parent’s position and velocity, the velocity is modified to point towards the best neighbour, and the neighbourhood group is copied from the parent.
  • Ants : no individuals are specifically spawned for the next generation: each generation is created afresh from the path steps (whose characteristics are changed by the mutate step).

Other generalisations

  • The generic framework allows further features of one specific algorithm to be generalised to the others.
  • Evolutionary Strategies encode the mutation rates as characteristics: a similar approach can be used in the other algorithms.
  • The ant algorithm could allow the pheromone decay rate to be a characteristic.
  • The range of selection strategies can be employed across all the algorithms that have a non-trivial selection stage.
  • In particular, AIS clonal selection has two populations: selection strategies could be used on the memory cell population too.
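
The mutation-rate-as-characteristic generalisation above might be sketched as follows (our own illustration, not from the paper; real evolutionary strategies typically use a log-normal update of the rate, simplified here to a uniform nudge, and the dictionary layout of an individual is an assumption):

    import random

    def mutate_with_encoded_rate(individual, meta_step=0.05):
        # The individual carries its own mutation rate as a characteristic;
        # the rate is applied to the other characteristics and is itself mutated.
        rate = individual["rate"]
        genes = [g + random.gauss(0.0, 1.0) if random.random() < rate else g
                 for g in individual["genes"]]
        new_rate = min(1.0, max(0.0, rate + random.uniform(-meta_step, meta_step)))
        return {"genes": genes, "rate": new_rate}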

4 The prototype implementation

  • There is much opportunity for parallelism in these algorithms: individuals can (to some degree) be evaluated, selected, and created in parallel.
  • This suggests efficiency gains by executing these algorithms on parallel hardware.

FPGAs and Handel-C

  • The authors chose as their prototype implementation platform a small grid of FPGAs, executing the framework implemented in Handel-C.
  • So each individual FPGA can host multiple individuals executing in parallel, and multiple FPGAs allow distributed implementations.
  • Handel-C is essentially an executable subset of CSP [Stepney 2003], with some extensions to support FPGA hardware.
  • It would have been possible to design a protocol carrying Handel-C channel communication across chip boundaries, allowing the distributed program to be (very close to) a pure Handel-C program.
  • Instead, for this prototype, a simple handshaking protocol has been used, and the inter-chip communication hidden in a wrapper.

The implemented framework

  • The prototype implementation of the framework provides much of the functionality described above.
  • The Handel-C compiler optimises away dead code, so options that are not selected by the user (such as various choices of creation or selection functions) do not appear in the compiled code.
  • It is also possible to return intermediate results every generation, to allow investigation of the performance, or for debugging, but this introduces a communication bottleneck.
  • Each FPGA chip holds a certain number of islands, each of which holds its individuals.
  • Then the appropriate selection method is used on each team in parallel.

Restrictions due to the platform choice

  • Some of the design decisions for the framework prototype are due to specific features and limitations of FPGAs and Handel-C, and different platform choices could result in different decisions.
  • The use of families is to cope with the limited size of the FPGAs.
  • Certain parts of the selection can be performed in parallel, for example, to find the n best, where each individual can read the affinity of all its teammates in parallel.
  • Handel-C supports variable bit-width values, requiring explicit casting between values with different widths.
  • This can lead to arcane code, particularly when trying to write generic routines.

5 Preliminary results

  • The number of (families of) individuals possible per chip varies depending on the settings.
  • With all the capabilities turned on, this number drops to about 18 individuals run sequentially, or four if run in parallel, the reduction being due to the increased routing and copies of code.
  • The FPGAs being used (300K gate Xilinx SpartanIIE chips) are relatively small: it was thought more important for this proof of concept work to get the maximum number of FPGAs for the budget, rather than the maximum size of each one.
  • Looking at only the evaluate stage shows the sequential form taking about twice as long as the parallel form.
  • The experiment compares running four individuals in parallel on one chip versus four individuals in parallel on each of the five chips (20 individuals in total), migrating the two best individuals every 100 generations.
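
As a rough sketch of the migration scheme used in this experiment (our own illustration, not the Handel-C implementation; the island count, migration size and interval come from the description above, while the ring topology and replacement of the worst individuals are assumptions):

    import copy

    def migrate(islands, affinity, n_best=2):
        # Copy each island's n_best highest-affinity individuals into the next
        # island in a ring, overwriting that island's worst individuals.
        for i, island in enumerate(islands):
            neighbour = islands[(i + 1) % len(islands)]
            emigrants = sorted(island, key=affinity, reverse=True)[:n_best]
            neighbour.sort(key=affinity)                  # worst individuals first
            neighbour[:n_best] = [copy.deepcopy(e) for e in emigrants]

    # Roughly matching the experiment: 5 islands (chips) of 4 individuals each,
    # migrating the 2 best every 100 generations:
    #     if generation % 100 == 0:
    #         migrate(islands, affinity, n_best=2)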

7 Acknowledgments

  • The authors would like to thank Wilson Ifill and AWE, who provided funding for the FPGAs used in this work.
  • Also thanks to Neil Audsley and Michael Ward for turning a large box of components into a usable FPGA grid, and to Fiona Polack and Jon Timmis for detailed comments on earlier versions.

8 References

  • 4th Asia-Pacific Conference on Simulated Evolution and Learning, 2002. [3].
  • Exploiting Parallelism Inherent in AIRS, an Artificial Immune Classifier.


A generic framework for population-based algorithms,
implemented on multiple FPGAs
John Newborough and Susan Stepney
Department of Computer Science, University of York, Heslington, York, YO10 5DD, UK
Abstract. Many bio-inspired algorithms (evolutionary algorithms, artificial immune
systems, particle swarm optimisation, ant colony optimisation, …) are based on
populations of agents. Stepney et al [2005] argue for the use of conceptual frameworks
and meta-frameworks to capture the principles and commonalities underlying these, and
other bio-inspired algorithms. Here we outline a generic framework that captures a
collection of population-based algorithms, allowing commonalities to be factored out, and
properties previously thought particular to one class of algorithms to be applied uniformly
across all the algorithms. We then describe a prototype proof-of-concept implementation
of this framework on a small grid of FPGA (field programmable gate array) chips, thus
demonstrating a generic architecture for both parallelism (on a single chip) and distribution
(across the grid of chips) of the algorithms.
1 Introduction
Many bio-inspired algorithms are based on populations of agents trained to solve some
problem such as optimising functions or recognising categories. For example,
Evolutionary Algorithms (EA) are based on analogy to populations of organisms
mutating, breeding and selecting to become “fitter” [Mitchell 1996]. The negative and
clonal selection algorithms of Artificial Immune Systems (AIS) use populations of agents
trained to recognise certain aspects of interest (see de Castro & Timmis [2002] for an
overview): negative selection involves essentially random generation of candidate
recognisers, whilst clonal selection uses reinforcement based on selection and mutation of
the best recognisers. Particle swarm optimisation (PSO) [Kennedy & Eberhart 2001] and
social insect algorithms [Bonabeau 1999] use populations of agents whose co-operations
(direct, or stigmergic) result in problem solving.
Stepney et al [2005] argue for the use of conceptual frameworks and meta-frameworks
to capture the principles and commonalities underlying various bio-inspired algorithms.
We take up this challenge, and, in section 2, outline a generic framework abstracted from
the individual population-based models of the following classes: genetic algorithms (GA),
AIS negative selection, AIS clonal selection, PSO, and ant colony optimisation (ACO).
The framework provides a basis for factoring out the commonalities, and applying various
properties uniformly across all the classes of algorithms, even where they were previously
thought particular to one class (section 3).
ICARIS 2005, Banff, Canada, August 2005. LNCS 3627:43-55. Springer, 2005

In section 4 we describe our proof-of-concept prototype implementation of the generic
framework on a platform of multiple field programmable gate array (FPGA) chips. Thus
the generic architecture naturally permits both parallelism (multiple individuals executing
on a single chip) and distribution (multiple individuals executing across the array of chips)
of the algorithms. In section 5 we outline what needs to be done next to take these
concepts into a fully rigorous framework architecture and implementation.
2 The generic framework for population algorithms
There are many specific algorithms and implementation variants of the different classes.
To take one case, AIS clonal selection, see, for example [Cutello et al 2004] [Garrett
2004] [Kim & Bentley 2002]. It is not our intention to capture every detail of all the
variants in the literature. Rather, we take a step back from the specifics, and abstract the
basic underlying concepts, particularly the more bio-inspired ones, of each class of
algorithm. So when we refer to “GA” or “AIS clonal selection”, for example, we are not
referring to any one specific algorithm or implementation, but rather to the general
properties of this class. We unify the similarities between these basics in order to develop
a generic framework. The intention is that such a framework provides a useful starting
point for the subsequent development of more sophisticated variants of the algorithms.
Basic underlying concepts
The generic algorithm is concerned with a population of individuals, each of which
captures a possible solution, or part of a solution. Each individual contains a set of
characteristics, which represent the solution. The characteristics define the (phase or
state) space that the population of individuals inhabit. The goal of the algorithm is to find
“good” regions of this space, based on some affinity (a measure that relates position in the
space to goodness of solution, so defining a landscape). The individuals and
characteristics of the specific classes of algorithm are as follows:
GA : the individuals are chromosomes; each characteristic is a gene.
AIS negative selection : the individuals are antibodies; each characteristic is a shape
receptor.
AIS clonal selection : there are two populations. In the main population the
individuals are antibodies; each characteristic is a shape receptor. There is also a
population of memory cells drawn from this main population.
Swarms : the individuals are boids; the characteristics are position, velocity and
neighbourhood group (the other visible individuals).
Ants: the individuals are the complete paths (not the ants, which are merely
mechanisms to construct the complete paths from path steps); the characteristics are the
sequence of path steps, where each step has an associated characteristic of length and
pheromone level.

Algorithm stages
The different specific algorithms each exhibit six clearly distinct stages, comprising a
generation. These are generalised as:
1. Create : make novel members of the population
2. Evaluate : evaluate each individual for its affinity to the solution
3. Test : test if some termination condition has been met
4. Select : select certain individuals from the current generation, based on their affinity, to
be used in the creation of the next generation
5. Spawn : create new individuals for the next generation
6. Mutate : change selected individuals
We describe each of these stages, covering the generic properties, and how they are
instantiated for each specific class of algorithm. Using this framework results in
descriptions that sometimes differ from, but are equivalent to, the traditional descriptions
of the algorithms. For example, rather than saying that some individuals survive from
generation to generation, for uniformity we consistently consider each generation to be a
completely fresh set of individuals, with some possibly being copies of previous
generation individuals. As another example, the pheromone changes in the Ant algorithm
are mapped to the generic mutate step.
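
To make the structure of a generation concrete, the six stages can be read as the body of a single loop. The following is a minimal Python sketch (ours, not code from the paper); the stage functions are placeholders that a particular class of algorithm would supply.

    def run_generic_framework(pop_size, max_generations, good_enough,
                              create, evaluate, select, spawn, mutate):
        # Minimal sketch of the six-stage generation loop; all stage functions
        # are user-supplied, and nothing here is specific to GA, AIS, PSO or ACO.
        population = [create() for _ in range(pop_size)]           # 1. Create
        for generation in range(max_generations):
            affinities = [evaluate(ind) for ind in population]     # 2. Evaluate
            if max(affinities) >= good_enough:                     # 3. Test
                break
            parents = select(population, affinities)               # 4. Select
            offspring = spawn(parents)                             # 5. Spawn
            population = [mutate(ind) for ind in offspring]        # 6. Mutate
            while len(population) < pop_size:                      # Create tops up
                population.append(create())
        return population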
Create
Creation makes novel members of the populations. In the first generation, the whole
population is set up, and the members have their characteristics initialised. On subsequent
generations, creation “tops up” the population with fresh individuals, as necessary.
GA: an individual chromosome is created usually with random characteristics, giving a
broad coverage of the search space
AIS negative selection : an individual antibody is created usually with random shape
receptors
AIS clonal selection : an individual antibody in the main population is created usually
with random shape receptors; memory cells are not created, rather they are spawned from
the main population
Swarms : an individual boid is created usually with random position and velocity
characteristics, giving a broad coverage of the search space; the neighbourhood
characteristic is usually set to implement a ring, grid or star connection topology
Ants : each path step is initially set up usually with a fixed pheromone level, and with
the relevant (fixed) path length; the population of paths is created by the ants from these
steps each generation

Evaluate
The affinity measures how well each individual solves (part of) the problem. It is a
user-defined function of (some of) an individual’s characteristics. This function should ideally
(but does not always) have the structure of a metric over the space defined by the
characteristics.
GA : the affinity is the fitness function, a function of the values of the genes
AIS : the affinity is a measure of how closely the shape receptors complement the
target of recognition, inspired by the “lock and key” metaphor
Swarms : the affinity, or fitness function, is a function of the current position
Ants : the affinity is the (inverse of the) path length
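
As a toy illustration of an affinity function (ours, not from the paper), the Ants instantiation above could be coded as the inverse of the total path length:

    def ant_path_affinity(path_steps):
        # path_steps: sequence of (length, pheromone) pairs; a shorter total
        # path length gives a higher affinity (illustrative only).
        total_length = sum(length for length, pheromone in path_steps)
        return 1.0 / total_length if total_length > 0 else float("inf")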
Test
The test for termination is either (a) a sufficiently good solution is found, or (b) enough
generations have been run without finding a sufficiently good solution. On termination,
the solution is:
GA, Swarms, Ants : the highest affinity (fittest) individual
AIS negative selection : the set of individuals with above-threshold affinities
AIS clonal selection : the population of memory cells
Select
High affinity individuals are selected to contribute somehow to the next generation’s
population. There are several selection algorithms commonly used. n best selects the n
highest affinity individuals from the current population. Threshold selects all the
individuals with an affinity greater than some given threshold value. Roulette wheel
selection randomly chooses a given number of individuals, with probability of selection
proportional to their affinity, or to their ranking. Tournament randomly selects teams of
individuals, and then selects a subset of individuals from each team.
GA : different variants use any of the above methods of selection, to find the parents
that will produce the next generation
AIS negative selection : threshold selection is used to find the next generation
AIS clonal selection : a combination of n best and threshold selection is used to find
the next generation of the main population; all individuals of the memory cell population
are selected to become the basis of its next generation
Swarms : all individuals are selected to become the basis of the next generation
Ants : no individuals are specifically selected to become the next generation: each
generation is created afresh from the path steps (whose characteristics are changed by the
mutate step)
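
For illustration, roulette wheel and tournament selection might be sketched as follows (our own sketch; the function names and signatures are ours, affinities are assumed non-negative, and keeping only the single best of each team is just one common tournament variant):

    import random

    def roulette_wheel(population, affinities, n):
        # Choose n individuals with probability proportional to affinity.
        return random.choices(population, weights=affinities, k=n)

    def tournament(population, affinities, team_size, n):
        # Repeatedly draw a random team and keep its highest-affinity member.
        winners = []
        for _ in range(n):
            team = random.sample(range(len(population)), team_size)
            best = max(team, key=lambda i: affinities[i])
            winners.append(population[best])
        return winners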

Spawn
Production of new individuals for the next generation usually involves combining the
characteristics of parent individuals from the selected population (ants are a special case).
GA : the characteristics of pairs of selected parents are combined by using a crossover
mask (predefined or randomly generated) to generate two new individuals. If the
crossover mask is set to the identity, then the two new individuals are clones of the two
parents.
AIS negative selection : the selected parents become the basis of the new generation
(which is topped up to the population size by creating sufficient new individuals). If the
threshold is a constant value throughout the run, this has the effect that an individual, once
selected, continues from generation to generation, and only the newly created individuals
need be evaluated.
AIS clonal selection : in the main population new individuals are spawned as clones of
each parent, with the number of clones being produced proportional to the parent’s
affinity; in the memory cell population, the selected parents become the basis of the new
generation, and a new individual is spawned, as (a copy of) the best individual of the main
population.
Swarms : a new individual is spawned from the sole parent and the highest affinity
individual in that parent’s neighbourhood group, with the intention of making the new
individual “move towards” the best neighbour. The new position is derived from the
parent’s position and velocity, the velocity is modified to point towards the best
neighbour, and the neighbourhood group is copied from the parent.
Ants : no individuals are specifically spawned for the next generation: each generation
is created afresh from the path steps (whose characteristics are changed by the mutate
step)
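
The GA crossover mask can be sketched as follows (our own illustration; here a mask bit of 1 takes a gene from the first parent for the first child and from the second parent for the second child, so an all-ones "identity" mask yields clones of the two parents):

    import random

    def crossover(parent1, parent2, mask=None):
        # GA instantiation of Spawn: combine two parents under a crossover mask,
        # which may be predefined or randomly generated.
        if mask is None:
            mask = [random.randint(0, 1) for _ in parent1]
        child1 = [a if m else b for a, b, m in zip(parent1, parent2, mask)]
        child2 = [b if m else a for a, b, m in zip(parent1, parent2, mask)]
        return child1, child2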
Mutate
Mutation involves altering the characteristics of single individuals in the population. It
would be possible to unify spawning and mutation into a single generate stage, but since
most algorithms consider these to be separate processes, we have followed that view,
rather than strive for total generality at this stage. The mutation rate might be globally
random, or based on the value of a characteristic or the affinity of each individual. How a
characteristic is mutated depends on its type: a boolean might be flipped, a numerical
value might be increased or decreased by an additive or multiplicative factor, etc.
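
For example (our own sketch, with an arbitrary per-characteristic rate and step size), the type-dependent mutations just mentioned might look like:

    import random

    def mutate_characteristic(value, rate=0.01, step=0.1):
        # Mutate one characteristic according to its type: flip a boolean, or
        # nudge a numerical value by an additive factor.
        if random.random() >= rate:
            return value                    # most characteristics are unchanged
        if isinstance(value, bool):
            return not value                # boolean: flip
        return value + random.uniform(-step, step)   # numeric: additive change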
GA, Swarms : individuals are mutated, usually randomly, in order to reintroduce lost
values of characteristics; evolutionary strategy algorithms encode mutation rates as
characteristics
AIS negative selection : no mutation occurs. (That is, the next generation consists of
copies of the selected above threshold individuals, topped up with newly created
individuals. An alternative, but equivalent, formulation in terms of this framework would

Citations
Book ChapterDOI
Susan Stepney1
01 Jan 2017
TL;DR: This chapter provides a broad overview of the field of unconventional computation, UComp, and includes discussion of novel hardware and embodied systems; software, particularly bio-inspired algorithms; and emergence and open-endedness.
Abstract: This chapter provides a broad overview of the field of unconventional computation, UComp. It includes discussion of novel hardware and embodied systems; software, particularly bio-inspired algorithms; and emergence and open-endedness.

2 citations

Dissertation
01 Dec 2010
TL;DR: Together, this thesis offers this as a modern reformulation of the interface between computer science and immunology, established in the seminal work of Perelson and collaborators, over 20 years ago.
Abstract: The immune system has long been attributed cognitive capacities such as "recognition" of pathogenic agents; "memory" of previous infections; "regulation" of a cavalry of detector and effector cells; and "adaptation" to a changing environment and evolving threats. Ostensibly, in preventing disease the immune system must be capable of discriminating states of pathology in the organism; identifying causal agents or ``pathogens''; and correctly deploying lethal effector mechanisms. What is more, these behaviours must be learnt insomuch as the paternal genes cannot encode the pathogenic environment of the child. Insights into the mechanisms underlying these phenomena are of interest, not only to immunologists, but to computer scientists pushing the envelope of machine autonomy. This thesis approaches these phenomena from the perspective that immunological processes are inherently inferential processes. By considering the immune system as a statistical decision maker, we attempt to build a bridge between the traditionally distinct fields of biological modelling and statistical modelling. Through a mixture of novel theoretical and empirical analysis we assert the efficacy of competitive exclusion as a general principle that benefits both. For the immunologist, the statistical modelling perspective allows us to better determine that which is phenomenologically sufficient from the mass of observational data, providing quantitative insight that may offer relief from existing dichotomies. For the computer scientist, the biological modelling perspective results in a theoretically transparent and empirically effective numerical method that is able to finesse the trade-off between myopic greediness and intractability in domains such as sparse approximation, continuous learning and boosting weak heuristics. Together, we offer this as a modern reformulation of the interface between computer science and immunology, established in the seminal work of Perelson and collaborators, over 20 years ago.

2 citations

Book ChapterDOI
26 Jul 2010
TL;DR: This paper asserts the primacy of competitive exclusion over selection and mutation; providing theoretical analysis and empirical results that support this position.
Abstract: Clonal selection is the keystone of mainstream immunology and computational systems based on immunological principles. For the latter, clonal selection is often interpreted as an asexual variant of natural selection, and thus, tend to be variations on evolutionary strategies. Retro-fitting immunological sophistication and theoretical rigour onto such systems has proved to be unwieldy. In this paper we assert the primacy of competitive exclusion over selection and mutation; providing theoretical analysis and empirical results that support our position.
Dissertation
01 May 2017
TL;DR: The proposed enhanced technique will depend on the input and output values in the robot’s controller to diagnose other robots within the swarm during the entire swarm operation, and communication plays an important part of the diagnosis procedure.
Abstract: A robotic swarm needs to ensure continuous operation even in the event of a failure of one or more individual robots. If one robot breaks down, another robot can take steps to repair the failed robot, or take over the failed robot’s task. Even with a small number of faulty robots, the expected time to achieve the swarm task will be affected. Observing failure detection techniques requires an investigation of similar techniques in insects. The synchronisation approach of fireflies is an exogenous failure detection technique. This approach requires all the robots in the swarm to be initially synchronised together in order to announce a healthy status for each individual robot. Another exogenous failure detection approach is the Robot Internal Simulator. The concept of this approach is to have robots that are capable of detecting partial failures by possessing a copy of every other robot’s controller, which they then instantiate within an internal simulator on-board to be run for a short period of time to predict the future state of the other robots. The work in this research draws inspiration from both approaches, which both still have several issues when they are implemented in swarm robotics. The enhanced technique developed in this research will depend on the input and output values in the robot’s controller to diagnose other robots within the swarm during the entire swarm operation. In this research, communication plays an important part of the diagnosis procedure. While robots retain possession of their own controller values including their co-ordination, the receiver computes the distance between them based on the signal strength. A fault suspicion is generated if the computed distances do not match and an acknowledgement of the failure will be broadcast to the robotic swarm. This research explores the performance of the simulation experimental results. It has shown that failed robots are rapidly detected failures using the proposed technique. A mitigation procedure takes place after the faulty robot is shut down, either by pushing it away or allowing it to work as a communication bridge to operational robots.
References
Journal ArticleDOI
TL;DR: It is suggested that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method.
Abstract: This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises.

11,419 citations

Book
01 Jan 1996
TL;DR: An Introduction to Genetic Algorithms focuses in depth on a small set of important and interesting topics -- particularly in machine learning, scientific modeling, and artificial life -- and reviews a broad span of research, including the work of Mitchell and her colleagues.
Abstract: From the Publisher: "This is the best general book on Genetic Algorithms written to date. It covers background, history, and motivation; it selects important, informative examples of applications and discusses the use of Genetic Algorithms in scientific models; and it gives a good account of the status of the theory of Genetic Algorithms. Best of all the book presents its material in clear, straightforward, felicitous prose, accessible to anyone with a college-level scientific background. If you want a broad, solid understanding of Genetic Algorithms -- where they came from, what's being done with them, and where they are going -- this is the book. -- John H. Holland, Professor, Computer Science and Engineering, and Professor of Psychology, The University of Michigan; External Professor, the Santa Fe Institute. Genetic algorithms have been used in science and engineering as adaptive algorithms for solving practical problems and as computational models of natural evolutionary systems. This brief, accessible introduction describes some of the most interesting research in the field and also enables readers to implement and experiment with genetic algorithms on their own. It focuses in depth on a small set of important and interesting topics -- particularly in machine learning, scientific modeling, and artificial life -- and reviews a broad span of research, including the work of Mitchell and her colleagues. The descriptions of applications and modeling projects stretch beyond the strict boundaries of computer science to include dynamical systems theory, game theory, molecular biology, ecology, evolutionary biology, and population genetics, underscoring the exciting "general purpose" nature of genetic algorithms as search methods that can be employed across disciplines. An Introduction to Genetic Algorithms is accessible to students and researchers in any scientific discipline. It includes many thought and computer exercises that build on and reinforce the reader's understanding of the text. The first chapter introduces genetic algorithms and their terminology and describes two provocative applications in detail. The second and third chapters look at the use of genetic algorithms in machine learning (computer programs, data analysis and prediction, neural networks) and in scientific models (interactions among learning, evolution, and culture; sexual selection; ecosystems; evolutionary activity). Several approaches to the theory of genetic algorithms are discussed in depth in the fourth chapter. The fifth chapter takes up implementation, and the last chapter poses some currently unanswered questions and surveys prospects for the future of evolutionary computation.

9,933 citations

Book
01 Jan 1985

9,210 citations

Journal ArticleDOI
TL;DR: An Introduction to Genetic Algorithms as discussed by the authors is one of the rare examples of a book in which every single page is worth reading, and the author, Melanie Mitchell, manages to describe in depth many fascinating examples as well as important theoretical issues.
Abstract: An Introduction to Genetic Algorithms is one of the rare examples of a book in which every single page is worth reading. The author, Melanie Mitchell, manages to describe in depth many fascinating examples as well as important theoretical issues, yet the book is concise (200 pages) and readable. Although Mitchell explicitly states that her aim is not a complete survey, the essentials of genetic algorithms (GAs) are contained: theory and practice, problem solving and scientific models, a "Brief History" and "Future Directions." Her book is both an introduction for novices interested in GAs and a collection of recent research, including hot topics such as coevolution (interspecies and intraspecies), diploidy and dominance, encapsulation, hierarchical regulation, adaptive encoding, interactions of learning and evolution, self-adapting GAs, and more. Nevertheless, the book focused more on machine learning, artificial life, and modeling evolution than on optimization and engineering.

7,098 citations


"A generic framework for population-..." refers background in this paper

  • ...For example, Evolutionary Algorithms (EA) are based on analogy to populations of organisms mutating, breeding and selecting to become “fitter” [Mitchell 1996]....


BookDOI
01 Jan 1999
TL;DR: This chapter discusses Ant Foraging Behavior, Combinatorial Optimization, and Routing in Communications Networks, and its application to Data Analysis and Graph Partitioning.
Abstract: 1. Introduction 2. Ant Foraging Behavior, Combinatorial Optimization, and Routing in Communications Networks 3. Division of Labor and Task Allocation 4. Cemetery Organization, Brood Sorting, Data Analysis, and Graph Partitioning 5. Self-Organization and Templates: Application to Data Analysis and Graph Partitioning 6. Nest Building and Self-Assembling 7. Cooperative Transport by Insects and Robots 8. Epilogue

5,822 citations