
Showing papers in "Evolutionary Computation in 1993"


Journal ArticleDOI
TL;DR: Three main streams of evolutionary algorithms (EAs), probabilistic optimization algorithms based on the model of natural evolution, are compared with respect to characteristic components of EAs: the representation scheme of object variables, mutation, recombination, and the selection operator.
Abstract: Three main streams of evolutionary algorithms (EAs), probabilistic optimization algorithms based on the model of natural evolution, are compared in this article: evolution strategies (ESs), evolutionary programming (EP), and genetic algorithms (GAs). The comparison is performed with respect to certain characteristic components of EAs: the representation scheme of object variables, mutation, recombination, and the selection operator. Furthermore, each algorithm is formulated in a high-level notation as an instance of the general, unifying basic algorithm, and the fundamental theoretical results on the algorithms are presented. Finally, after presenting experimental results for three test functions representing a unimodal and a multimodal case as well as a step function with discontinuities, similarities and differences of the algorithms are elaborated, and some hints to open research questions are sketched.

1,960 citations
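The "general, unifying basic algorithm" the comparison is built on can be made concrete. The following Python skeleton is an illustrative reconstruction, not the authors' notation; all function names and parameter values are assumptions. It shows how ESs, EP, and GAs arise as instances of one loop by swapping the representation and the mutation, recombination, and selection operators.

```python
import random

def basic_ea(fitness, init, mutate, recombine, select,
             pop_size=20, generations=60):
    """Generic evolutionary loop: ES, EP, and GA differ only in the
    representation and in how mutate/recombine/select are instantiated."""
    pop = [init() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(recombine(random.sample(pop, 2)))
                     for _ in range(pop_size)]
        pop = select(pop + offspring, pop_size, fitness)  # (mu + lambda) style
    return min(pop, key=fitness)

# Illustrative instantiation: minimize the 1-D sphere f(x) = x^2
random.seed(0)
best = basic_ea(
    fitness=lambda x: x * x,
    init=lambda: random.uniform(-5, 5),
    mutate=lambda x: x + random.gauss(0, 0.1),
    recombine=lambda pair: sum(pair) / 2.0,             # intermediate recombination
    select=lambda cand, n, f: sorted(cand, key=f)[:n],  # truncation selection
)
```

Replacing truncation selection with proportional selection on bit strings would give a GA-flavored instance; dropping recombination and self-adapting the mutation step size would give an EP/ES-flavored one.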


Journal ArticleDOI
TL;DR: The numerical performance of the BGA is demonstrated on a test suite of multimodal functions; the number of function evaluations needed to locate the optimum scales only as n ln(n), where n is the number of parameters.
Abstract: In this paper a new genetic algorithm called the Breeder Genetic Algorithm (BGA) is introduced. The BGA is based on artificial selection similar to that used by human breeders. A predictive model for the BGA is presented that is derived from quantitative genetics. The model is used to predict the behavior of the BGA for simple test functions. Different mutation schemes are compared by computing the expected progress to the solution. The numerical performance of the BGA is demonstrated on a test suite of multimodal functions. The number of function evaluations needed to locate the optimum scales only as n ln(n) where n is the number of parameters. Results up to n = 1000 are reported.

1,267 citations


Journal ArticleDOI
TL;DR: An algorithm based on a traditional genetic algorithm that involves iterating the GA but uses knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found.
Abstract: A technique is described that allows unimodal function optimization methods to be extended to locate all optima of multimodal problems efficiently. We describe an algorithm based on a traditional genetic algorithm (GA). This technique involves iterating the GA but uses knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found. This gain is achieved by applying a fitness derating function to the raw fitness function, so that fitness values are depressed in the regions of the problem space where solutions have already been found. Consequently, the likelihood of discovering a new solution on each iteration is dramatically increased. The technique may be used with various styles of GAs or with other optimization methods, such as simulated annealing. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The technique is at least as fast as fitness sharing methods. It provides an acceleration of between 1 and 10p on a problem with p optima, depending on the value of p and the convergence time complexity.

526 citations
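The derating idea admits a compact sketch. Below is a minimal, hypothetical one-dimensional version for maximization; the power-law derating form and the `radius` and `alpha` parameters are illustrative choices, not the paper's exact functions. Fitness inside a niche around an already-found optimum is scaled down, so the next GA iteration is pushed toward unexplored regions.

```python
def derated_fitness(raw_fitness, found_optima, radius=1.0, alpha=2.0):
    """Wrap a raw fitness function (maximization) so that regions
    around already-found optima are depressed, sequential-niching style."""
    def fitness(x):
        f = raw_fitness(x)
        for s in found_optima:
            d = abs(x - s)
            if d < radius:
                f *= (d / radius) ** alpha  # power-law derating inside the niche
        return f
    return fitness

# Two-peak toy landscape: peaks of height 1 at x = 0 and x = 3.
# Once the peak at x = 0 is recorded, its region is derated away.
raw = lambda x: max(0.0, 1 - min(abs(x), abs(x - 3)))
f = derated_fitness(raw, found_optima=[0.0])
```

After this wrapping, `f(0.0)` is zero while `f(3.0)` keeps its raw value, so a rerun of the GA on `f` favors the remaining peak.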


Journal ArticleDOI
TL;DR: The paper reports simulation experiments on two pattern-recognition problems that are relevant to natural immune systems and reviews the relation between the model and explicit fitness-sharing techniques for genetic algorithms, showing that the immune system model implements a form of implicit fitness sharing.
Abstract: This paper describes an immune system model based on binary strings. The purpose of the model is to study the pattern-recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of the model. The paper reports simulation experiments on two pattern-recognition problems that are relevant to natural immune systems. Finally, it reviews the relation between the model and explicit fitness-sharing techniques for genetic algorithms, showing that the immune system model implements a form of implicit fitness sharing.

277 citations


Journal ArticleDOI
TL;DR: Analysis of a simplified genetics-based machine learning system considers a model of an immune system and shows how GAs can automatically and simultaneously discover effective groups of cooperative computational structures.
Abstract: In typical applications, genetic algorithms (GAs) process populations of potential problem solutions to evolve a single population member that specifies an 'optimized' solution. The majority of GA analysis has focused on these optimization applications. In other applications (notably learning classifier systems and certain connectionist learning systems), a GA searches for a population of cooperative structures that jointly perform a computational task. This paper presents an analysis of this type of GA problem. The analysis considers a simplified genetics-based machine learning system: a model of an immune system. In this model, a GA must discover a set of pattern-matching antibodies that effectively match a set of antigen patterns. Analysis shows how a GA can automatically evolve and sustain a diverse, cooperative population. The cooperation emerges as a natural part of the antigen-antibody matching procedure. This emergent effect is shown to be similar to fitness sharing, an explicit technique for multimodal GA optimization. Further analysis shows how the GA population can adapt to express various degrees of generalization. The results show how GAs can automatically and simultaneously discover effective groups of cooperative computational structures.

275 citations
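The emergent, sharing-like effect of the antigen-antibody matching procedure can be sketched in a few lines. The following toy version is an assumption-laden illustration (sample size, round count, and the bit-string match score are all invented for the example): each round, one antigen is drawn and only the best-matching antibody in a random sample is rewarded, so antibodies covering different antigens coexist instead of one type taking over.

```python
import random

def implicit_sharing_scores(antibodies, antigens, match, rounds=200):
    """Implicit fitness sharing: reward only the sample's best matcher
    for a randomly drawn antigen, so niches for distinct antigens
    are sustained without an explicit sharing function."""
    scores = {ab: 0.0 for ab in antibodies}
    for _ in range(rounds):
        ag = random.choice(antigens)
        sample = random.sample(antibodies, min(2, len(antibodies)))
        winner = max(sample, key=lambda ab: match(ab, ag))
        scores[winner] += match(winner, ag)
    return scores

# Bit-string matching score = number of positions that agree
match = lambda ab, ag: sum(a == b for a, b in zip(ab, ag))
random.seed(1)
scores = implicit_sharing_scores(["0000", "1111"], ["0001", "1110"], match)
```

Both antibody types accumulate reward here, since each wins the rounds for the antigen it covers; under a winner-take-all score summed over all antigens, one type would dominate.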


Journal ArticleDOI
TL;DR: Simulations indicate three distinct patterns of behavior in which mutual cooperation is inevitable, improbable, or apparently random; the ultimate behavior can be reliably predicted by examining the payoff matrix that defines the reward for alternative joint behaviors.
Abstract: Evolutionary programming experiments are conducted to investigate the conditions that promote the evolution of cooperative behavior in the iterated prisoner's dilemma. A population of logical stimulus-response devices is maintained over successive generations with selection based on individual fitness. The reward for selfish behavior is varied across a series of trials. Simulations indicate three distinct patterns of behaviors in which mutual cooperation is inevitable, improbable, or apparently random. The ultimate behavior can be reliably predicted by examining the payoff matrix that defines the reward for alternative joint behaviors.

248 citations
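The payoff-matrix conditions that define the dilemma can be stated as a short predicate. This is a standard formulation, not code from the paper, and the example payoff values below are illustrative: T is the temptation to defect, R the reward for mutual cooperation, P the punishment for mutual defection, and S the sucker's payoff.

```python
def is_prisoners_dilemma(T, R, P, S):
    """Standard one-shot prisoner's dilemma conditions:
    T > R > P > S, plus 2R > T + S so that alternating
    exploitation cannot beat sustained mutual cooperation."""
    return T > R > P > S and 2 * R > T + S

# Axelrod's canonical payoffs form a dilemma; raising the selfish
# reward T far enough changes the character of the game.
canonical = is_prisoners_dilemma(T=5, R=3, P=1, S=0)
degenerate = is_prisoners_dilemma(T=7, R=3, P=1, S=0)
```

With T = 7 the condition 2R > T + S fails, which is the kind of payoff variation the trials in the abstract sweep across.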


Journal ArticleDOI
TL;DR: It is shown how the response to selection equation and the concept of heritability can be applied to predict the behavior of the BGA and it is shown that recombination and mutation are complementary search operators.
Abstract: The breeder genetic algorithm (BGA) models artificial selection as performed by human breeders. The science of breeding is based on advanced statistical methods. In this paper a connection between genetic algorithm theory and the science of breeding is made. We show how the response to selection equation and the concept of heritability can be applied to predict the behavior of the BGA. Selection, recombination, and mutation are analyzed within this framework. It is shown that recombination and mutation are complementary search operators. The theoretical results are obtained under the assumption of additive gene effects. For general fitness landscapes, regression techniques for estimating the heritability are used to analyze and control the BGA. The method of decomposing the genetic variance into an additive and a nonadditive part connects the case of additive fitness functions with the general case.

241 citations
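The response to selection equation can be illustrated numerically. The sketch below assumes truncation selection and the textbook breeder's equation R = h²S; the population values and heritability are made up for the example.

```python
def response_to_selection(fitnesses, heritability, truncation_fraction=0.5):
    """Breeder's equation R = h^2 * S: S, the selection differential,
    is the mean fitness of the selected parents minus the population
    mean; the predicted gain of the next generation's mean is h^2 * S."""
    ranked = sorted(fitnesses, reverse=True)
    k = max(1, int(len(ranked) * truncation_fraction))
    pop_mean = sum(ranked) / len(ranked)
    selected_mean = sum(ranked[:k]) / k
    return heritability * (selected_mean - pop_mean)

# 50% truncation on a toy population with h^2 = 0.5:
# S = 6.5 - 4.5 = 2, so the predicted response is 1.0.
R = response_to_selection([1, 2, 3, 4, 5, 6, 7, 8], heritability=0.5)
```

Under the additive-gene-effects assumption mentioned in the abstract, h² can be taken as a constant; for general fitness landscapes it must be estimated, e.g. by regression.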


Journal ArticleDOI
TL;DR: This study particularly focuses on the addition of learning to the development process and the evolution of grammar trees, and suggests that merely using learning to change the fitness landscape can be as effective as Lamarckian strategies at improving search.
Abstract: A grammar tree is used to encode a cellular developmental process that can generate whole families of Boolean neural networks for computing parity and symmetry. The development process resembles biological cell division. A genetic algorithm is used to find a grammar tree that yields both architecture and weights specifying a particular neural network for solving specific Boolean functions. The current study particularly focuses on the addition of learning to the development process and the evolution of grammar trees. Three ways of adding learning to the development process are explored. Two of these exploit the Baldwin effect by changing the fitness landscape without using Lamarckian evolution. The third strategy is Lamarckian in nature. Results for these three modes of combining learning with genetic search are compared against genetic search without learning. Our results suggest that merely using learning to change the fitness landscape can be as effective as Lamarckian strategies at improving search.

217 citations


Journal ArticleDOI
TL;DR: A method for the determination of the progress rate and the probability of success for the Evolution Strategy (ES) is presented, based on the asymptotic behavior of the χ-distribution; it yields exact results in the case of infinite-dimensional parameter spaces.
Abstract: A method for the determination of the progress rate and the probability of success for the Evolution Strategy (ES) is presented. The new method is based on the asymptotic behavior of the χ-distribution and yields exact results in the case of infinite-dimensional parameter spaces. The technique is demonstrated for the (1,+ λ) ES using a spherical model including noisy quality functions. The results are used to discuss the convergence behavior of the ES.

173 citations
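The progress rate the paper treats analytically can also be estimated by simulation. The following Monte Carlo sketch is illustrative only (the dimension, offspring count, step size, and trial count are arbitrary choices, and it covers just the comma variant on the noise-free sphere): it measures the expected one-generation reduction of the parent's distance to the optimum.

```python
import math
import random

def progress_rate_estimate(n=100, lam=10, sigma=0.015, trials=500):
    """Monte Carlo estimate of the one-generation progress of a
    (1, lambda) ES on the sphere model: from a parent at distance
    R = 1 from the optimum, draw lambda mutated offspring and keep
    the best; progress is the expected reduction of that distance."""
    random.seed(0)
    origin = [0.0] * n
    total = 0.0
    for _ in range(trials):
        parent = [0.0] * n
        parent[0] = 1.0  # parent sits at distance R = 1 from the optimum
        best = min(
            math.dist(origin, [x + random.gauss(0, sigma) for x in parent])
            for _ in range(lam))
        total += 1.0 - best
    return total / trials

phi = progress_rate_estimate()
```

With these settings the estimate is small but positive; increasing sigma too far drives it negative, which is the gain/loss trade-off the analytical theory makes precise.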


Journal ArticleDOI
TL;DR: The existence of a unique asymptotic probability distribution (stationary distribution) is proved for the Markov chain whenever the mutation probability is held at any constant nonzero value, and a Cramer's Rule representation of that distribution is developed to show that it possesses a zero mutation probability limit.
Abstract: This paper develops a theoretical framework for the simple genetic algorithm (combinations of the reproduction, mutation, and crossover operators) based on the asymptotic state behavior of a nonstationary Markov chain algorithm model. The methodology borrows heavily from that of simulated annealing. We prove the existence of a unique asymptotic probability distribution (stationary distribution) for the Markov chain when the mutation probability is used with any constant nonzero probability value. We develop a Cramer's Rule representation of the stationary distribution components for all nonzero mutation probability values and then extend the representation to show that the stationary distribution possesses a zero mutation probability limit. Finally, we present a strong ergodicity bound on the mutation probability sequence that ensures that the nonstationary algorithm (which results from varying mutation probability during algorithm execution) achieves the limit distribution asymptotically. Although the focus of this work is on a nonstationary algorithm in which mutation probability is reduced asymptotically to zero via a schedule (in a fashion analogous to simulated annealing), the stationary distribution results (existence, Cramer's Rule representation, and zero mutation probability limit) are directly applicable to conventional, simple genetic algorithm implementations as well.

168 citations
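The central stationary-distribution fact is easy to demonstrate on a toy chain. The sketch below is not the paper's GA construction; it just illustrates the underlying Markov-chain property: a strictly positive transition matrix (which a nonzero constant mutation probability guarantees for the simple GA) has a unique stationary distribution, computable here by plain power iteration.

```python
def stationary_distribution(P, iterations=1000):
    """Power iteration for the stationary distribution of a Markov
    chain with row-stochastic transition matrix P.  Strict positivity
    of P ensures the limit exists and is unique (Perron-Frobenius)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 2-state chain: every entry positive, so the limit is unique
# regardless of the starting distribution.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)
```

For this matrix the fixed point of pi = pi P is (5/6, 1/6), and the iteration converges to it geometrically.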


Journal ArticleDOI
TL;DR: It is argued that (for a particular problem) stronger evolution programs (in terms of the problem-specific knowledge incorporated in the system) should perform better than weaker ones.
Abstract: In this paper we present the concept of evolution programs and discuss a hierarchy of such programs for a particular problem. We argue that (for a particular problem) stronger evolution programs (in terms of the problem-specific knowledge incorporated in the system) should perform better than weaker ones. This hypothesis is based on a number of experiments and a simple intuition that problem-specific knowledge enhances an algorithm's performance; at the same time it narrows the applicability of an algorithm. Trade-offs between the effort of finding an effective representation for general-purpose evolution programs and the effort of developing more specialized systems are also discussed.

Journal ArticleDOI
TL;DR: In this article, a new genetic algorithm for channel routing in the physical design process of VLSI circuits is presented, which is based on a problem-specific representation scheme and problem-specific genetic operators.
Abstract: A new genetic algorithm for channel routing in the physical design process of VLSI circuits is presented. The algorithm is based on a problem-specific representation scheme and problem-specific genetic operators. The genetic encoding and our genetic operators are described in detail. The performance of the algorithm is tested on different benchmarks, and it is shown that the results obtained using the proposed algorithm are either qualitatively similar to or better than the best published results.

Journal ArticleDOI
TL;DR: This paper introduces the following original features of ALECSYS, a parallel version of a standard learning classifier system (CS), and presents simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source.
Abstract: It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, difficulty in regulating the interplay between the reward system and the background genetic algorithm (GA), rule-chain instability, and default-hierarchy instability, among others. ALECSYS is a parallel version of a standard learning classifier system (CS) and, as such, suffers from these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec is a new genetic operator used to specialize potentially useful classifiers. Energy is a quantity introduced to measure global convergence, so that the genetic algorithm is applied only when the system is close to a steady state. Dynamic adjustment of the classifier set cardinality speeds up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source.

Journal ArticleDOI
TL;DR: This work proposes an alternative approach for genetic algorithms applied to hard combinatoric search which it calls Evolutionary Divide and Conquer (EDAC), which has potential for any search problem in which knowledge of good solutions for subproblems can be exploited to improve the solution of the problem itself.
Abstract: Experiments with genetic algorithms using permutation operators applied to the traveling salesman problem (TSP) tend to suggest that these algorithms fail in two respects when applied to very large problems: they scale rather poorly as the number of cities n increases, and the solution quality degrades rapidly. We propose an alternative approach for genetic algorithms applied to hard combinatoric search which we call Evolutionary Divide and Conquer (EDAC). This method has potential for any search problem in which knowledge of good solutions for subproblems can be exploited to improve the solution of the problem itself. The idea is to use the genetic algorithm to explore the space of problem subdivisions rather than the space of solutions themselves. We give some preliminary results of this method applied to the geometric TSP.

Journal ArticleDOI
TL;DR: Analytical investigations of simulated annealing and single-trial versions of evolution strategies lead to a cross-fertilization of both approaches, resulting in new theoretical results, new parallel population-based algorithms, and a better understanding of the interrelationships.
Abstract: Simulated annealing and single-trial versions of evolution strategies possess a close relationship when they are designed for optimization over continuous variables. Analytical investigations of their differences and similarities lead to a cross-fertilization of both approaches, resulting in new theoretical results, new parallel population-based algorithms, and a better understanding of the interrelationships.

Journal ArticleDOI
TL;DR: The initial implementation, which chooses stimulus-response parameters using a parallel genetic algorithm, succeeds in finding good, novel solutions for a test suite of SC problems involving unbranched 2-D linkages.
Abstract: Motion-synthesis problems arise in the creation of physically realistic animations involving autonomous characters. Typically, characters are required to perform goal tasks, subject to physical law and other constraints on their motion. Witkin and Kass (1988) dubbed this class of problems “Spacetime Constraints” (SC) and presented results for specific problems involving an articulated figure. Their approach was based on a procedure for the local optimization of an initial approximate trajectory supplied by the user. Unfortunately, SC problems are typically multimodal and discontinuous, and the number of decision alternatives available at each time step can be exponential in the number of degrees of freedom in the system. Thus, constructing even coarse trajectories for subsequent optimization can be difficult. We present an algorithm that constructs such trajectories de novo, without directive input from the user. Rather than use a time-series representation, which might be appropriate for local optimization, our algorithm uses a stimulus-response model. Locomotive skills appropriate for the given articulated figure are acquired through repeated testing of (simulated) reality. Our initial implementation, which chooses stimulus-response parameters using a parallel genetic algorithm, succeeds in finding good, novel solutions for a test suite of SC problems involving unbranched 2-D linkages.