
Showing papers in "Evolutionary Computation in 1997"


Journal ArticleDOI
TL;DR: This work uses the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution: competitive fitness sharing changes the way fitness is measured, shared sampling provides a method for selecting a strong, diverse set of parasites, and the hall of fame encourages arms races by saving good individuals from prior generations.
Abstract: We consider “competitive coevolution,” in which fitness is based on direct competition among individuals selected from two independently evolving populations of “hosts” and “parasites.” Competitive coevolution can lead to an “arms race,” in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. “Competitive fitness sharing” changes the way fitness is measured; “shared sampling” provides a method for selecting a strong, diverse set of parasites; and the “hall of fame” encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift.
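The core idea of competitive fitness sharing is that beating a parasite few other hosts can beat is worth more than a win everyone achieves. A minimal sketch, assuming a precomputed defeats matrix (the matrix and function name are illustrative, not the authors' code):

```python
def shared_fitness(defeats):
    """Competitive fitness sharing (sketch): host i earns 1/N_j for each
    parasite j it defeats, where N_j is the number of hosts that defeat j.
    defeats[i][j] is True if host i beats parasite j (illustrative)."""
    n_hosts, n_parasites = len(defeats), len(defeats[0])
    # N_j: how many hosts defeat parasite j; rare wins are worth more.
    beaten_by = [sum(defeats[i][j] for i in range(n_hosts))
                 for j in range(n_parasites)]
    return [sum(1.0 / beaten_by[j] for j in range(n_parasites)
                if defeats[i][j])
            for i in range(n_hosts)]
```

With three hosts and three parasites, a host that uniquely beats a hard parasite outscores one that only shares a common win, which is the diversity pressure the technique is after.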

536 citations


Journal ArticleDOI
TL;DR: In this paper, an exact response to selection (RS) equation is derived for proportionate selection given an infinite population in linkage equilibrium, where the genotype frequencies are the product of the univariate marginal frequencies.
Abstract: The Breeder Genetic Algorithm (BGA) was designed according to the theories and methods used in the science of livestock breeding. The prediction of a breeding experiment is based on the response to selection (RS) equation. This equation relates the change in a population's fitness to the standard deviation of its fitness, as well as to the parameters selection intensity and realized heritability. In this paper the exact RS equation is derived for proportionate selection given an infinite population in linkage equilibrium. In linkage equilibrium the genotype frequencies are the product of the univariate marginal frequencies. The equation contains Fisher's fundamental theorem of natural selection as an approximation. The theorem shows that the response is approximately equal to the quotient of a quantity called additive genetic variance, VA, and the average fitness. We compare Mendelian two-parent recombination with gene-pool recombination, which belongs to a special class of genetic algorithms that we call univariate marginal distribution (UMD) algorithms. UMD algorithms keep the genotypes in linkage equilibrium. For UMD algorithms, an exact RS equation is proven that can be used for long-term prediction. Empirical and theoretical evidence is provided that indicates that Mendelian two-parent recombination is also mainly exploiting the additive genetic variance. We compute an exact RS equation for binary tournament selection. It shows that the two classical methods for estimating realized heritability---the regression heritability and the heritability in the narrow sense---may give poor estimates. Furthermore, realized heritability for binary tournament selection can be very different from that of proportionate selection. The paper ends with a short survey about methods that extend standard genetic algorithms and UMD algorithms by detecting interacting variables in nonlinear fitness functions and using this information to sample new points.
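For proportionate selection the selection differential has a closed form: an individual is chosen with probability proportional to its fitness, so the expected parent mean is E[f²]/E[f], i.e., the current mean plus Var(f)/mean(f) — the quotient of variance and average fitness that appears in Fisher's theorem for the fully additive case. A small numerical check of that identity (illustrative, not the paper's derivation):

```python
def expected_mean_after_proportionate_selection(fitnesses):
    # Under proportionate selection, individual i is sampled with
    # probability f_i / sum(f), so the expected parent fitness is
    # E[f^2] / E[f] = mean + variance/mean.
    total = sum(fitnesses)
    return sum(f * f for f in fitnesses) / total
```

For fitnesses [1, 2, 3] the mean is 2 and the population variance is 2/3, so the expected parent mean is 2 + (2/3)/2 = 7/3.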

521 citations


Journal ArticleDOI
TL;DR: The symbiotic adaptive neuroevolution (SANE) system coevolves a population of neurons that cooperate to form a functioning neural network, resulting in a robust encoding of control behavior.
Abstract: This article demonstrates the advantages of a cooperative, coevolutionary search in difficult control problems. The symbiotic adaptive neuroevolution (SANE) system coevolves a population of neurons that cooperate to form a functioning neural network. In this process, neurons assume different but overlapping roles, resulting in a robust encoding of control behavior. SANE is shown to be more efficient and more adaptive and to maintain higher levels of diversity than the more common network-based population approaches. Further empirical studies illustrate the emergent neuron specializations and the different roles the neurons assume in the population.
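SANE's cooperative credit assignment can be sketched as follows: neurons are repeatedly drawn into random candidate networks, and each neuron's fitness is the average score of the networks it joined. The names and the team-sampling scheme here are a simplification, not the SANE implementation:

```python
import random

def sane_neuron_fitness(num_neurons, score_network, net_size, trials, seed=0):
    """Sketch of SANE-style credit assignment: a neuron's fitness is the
    average score of the random networks it participated in."""
    rng = random.Random(seed)
    totals = [0.0] * num_neurons
    counts = [0] * num_neurons
    for _ in range(trials):
        team = rng.sample(range(num_neurons), net_size)  # one candidate net
        score = score_network(team)
        for i in team:                 # every member shares the net's score
            totals[i] += score
            counts[i] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]
```

Because fitness flows through many overlapping teams, a neuron is rewarded for filling a role that combines well with others rather than for solving the task alone.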

326 citations


Journal ArticleDOI
TL;DR: This work compares PIPE to GP on a function regression problem and the 6-bit parity problem, and uses it to solve tasks in partially observable mazes, where the best programs have minimal runtime.
Abstract: Probabilistic incremental program evolution (PIPE) is a novel technique for automatic program synthesis. We combine probability vector coding of program instructions, population-based incremental learning, and tree-coded programs like those used in some variants of genetic programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. Each iteration, it uses the best program to refine the distribution. Thus, it stochastically generates better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime.
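PIPE's refinement step moves probability mass toward the best program of the current population. A sketch of that update, simplified to a fixed-length linear program rather than PIPE's probabilistic prototype tree (function name and learning rate are illustrative):

```python
def refine_distribution(probs, best_program, lr=0.2):
    """Shift each position's instruction distribution toward the best
    program of the generation, keeping each distribution normalized."""
    refined = []
    for dist, instr in zip(probs, best_program):
        new = {k: (1.0 - lr) * p for k, p in dist.items()}
        new[instr] += lr               # mass moves to the winning instruction
        refined.append(new)
    return refined
```

Iterating sample-then-refine makes better programs progressively more likely, which is the population-based incremental learning component the abstract describes.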

242 citations


Journal ArticleDOI
TL;DR: This study focuses on the effects of different fitness evaluation schemes on the types of genotypes and phenotypes that evolve and compares fitness evaluation based on a large static set of problems and fitness evaluation based on small coevolving sets of problems.
Abstract: Most evolutionary optimization models incorporate a fitness evaluation that is based on a predefined static set of test cases or problems. In the natural evolutionary process, selection is of course not based on a static fitness evaluation. Organisms do not have to combat every existing disease during their lifespan; organisms of one species may live in different or changing environments; different species coevolve. This leads to the question of how information is integrated over many generations. This study focuses on the effects of different fitness evaluation schemes on the types of genotypes and phenotypes that evolve. The evolutionary target is a simple numerical function. The genetic representation is in the form of a program (i.e., a functional representation, as in genetic programming). Many different programs can code for the same numerical function. In other words, there is a many-to-one mapping between “genotypes” (the programs) and “phenotypes”. We compare fitness evaluation based on a large static set of problems and fitness evaluation based on small coevolving sets of problems. In the latter model very little information is presented to the evolving programs regarding the evolutionary target per evolutionary time step. In other words, the fitness evaluation is very sparse. Nevertheless the model produces correct solutions to the complete evolutionary target in about half of the simulations. The complete evaluation model, on the other hand, does not find correct solutions to the target in any of the simulations. More importantly, we find that sparsely evaluated programs generalize better than completely evaluated programs when they are evaluated on a much denser set of problems. In addition, the two evaluation schemes lead to programs that differ with respect to mutational stability; sparsely evaluated programs are less stable than completely evaluated programs.

140 citations


Journal ArticleDOI
TL;DR: An extension of evolution strategies to multiparent recombination involving a variable number of parents to create an offspring individual is proposed and is experimentally evaluated on a test suite of functions differing in their modality and separability and the regular/irregular arrangement of their local optima.
Abstract: An extension of evolution strategies to multiparent recombination involving a variable number ρ of parents to create an offspring individual is proposed. The extension is experimentally evaluated on a test suite of functions differing in their modality and separability and the regular/irregular arrangement of their local optima. Multiparent diagonal crossover and uniform scanning crossover and a multiparent version of intermediary recombination are considered in the experiments. The performance of the algorithm is observed to depend on the particular combination of recombination operator and objective function. In most of the cases a significant increase in performance is observed as the number of parents increases. However, there might also be no significant impact of recombination at all, and for one of the unimodal objective functions, the performance is observed to deteriorate over the course of evolution for certain choices of the recombination operator and the number of parents. Additional experiments with a skewed initialization of the population clarify that intermediary recombination does not cause a search bias toward the origin of the coordinate system in the case of domains of variables that are symmetric around zero.
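Diagonal crossover generalizes one-point crossover to ρ parents: ρ−1 cut points split the genome into ρ segments, and the child takes segment i from parent i. A minimal sketch on string genomes (in the evolution strategy the cut points are drawn at random; here they are passed in so the result is deterministic):

```python
def diagonal_crossover(parents, cuts):
    """Multiparent diagonal crossover: with rho parents and rho-1 sorted
    cut points, the offspring takes segment i from parent i."""
    bounds = [0] + list(cuts) + [len(parents[0])]
    return ''.join(parent[bounds[i]:bounds[i + 1]]
                   for i, parent in enumerate(parents))
```

With ρ = 2 this reduces to ordinary one-point crossover, which is why the operator is a clean axis along which to vary the number of parents.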

136 citations


Journal ArticleDOI
TL;DR: This paper examines the issue of using partial Lamarckianism (i.e., the updating of the genetic representation for only a percentage of the individuals), as compared to pure Lamarckian and pure Baldwinian learning in hybrid GAs.
Abstract: Genetic algorithms (GAs) are very efficient at exploring the entire search space; however, they are relatively poor at finding the precise local optimal solution in the region in which the algorithm converges. Hybrid GAs are the combination of improvement procedures, which are good at finding local optima, and GAs. There are two basic strategies for using hybrid GAs. In the first, Lamarckian learning, the genetic representation is updated to match the solution found by the improvement procedure. In the second, Baldwinian learning, improvement procedures are used to change the fitness landscape, but the solution that is found is not encoded back into the genetic string. This paper examines the issue of using partial Lamarckianism (i.e., the updating of the genetic representation for only a percentage of the individuals), as compared to pure Lamarckian and pure Baldwinian learning in hybrid GAs. Multiple instances of five bounded nonlinear problems, the location-allocation problem, and the cell formation problem were used as test problems in an empirical investigation. Neither a pure Lamarckian nor a pure Baldwinian search strategy was found to consistently lead to quicker convergence of the GA to the best known solution for the series of test problems. Based on a minimax criterion (i.e., minimizing the worst case performance across all test problem instances), the 20% and 40% partial Lamarckianism search strategies yielded the best mixture of solution quality and computational efficiency.
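The Lamarckian/Baldwinian distinction fits in one evaluation routine: the improvement procedure always shapes the reported fitness (Baldwinian), and with some probability the improved solution is also written back into the genome (Lamarckian). A sketch with illustrative names; `p_lamarck=0.2` would correspond to the 20% partial Lamarckianism strategy:

```python
import random

def hybrid_evaluate(genome, local_search, fitness, p_lamarck, rng=random):
    """One hybrid-GA evaluation (illustrative sketch).
    p_lamarck=0.0 is pure Baldwinian, 1.0 is pure Lamarckian."""
    improved = local_search(genome)
    f = fitness(improved)              # fitness always reflects the improvement
    if rng.random() < p_lamarck:       # write-back for a fraction of individuals
        genome = improved
    return genome, f
```

Either way the fitness landscape is smoothed by the local search; only the write-back probability decides how much of the improvement is inherited.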

110 citations


Journal ArticleDOI
TL;DR: A new type of genetic algorithm (GA), the forking GA (fGA), which divides the whole search space into subspaces, depending on the convergence status of the population and the solutions obtained so far, is proposed.
Abstract: In this article, we propose a new type of genetic algorithm (GA), the forking GA (fGA), which divides the whole search space into subspaces, depending on the convergence status of the population and the solutions obtained so far. The fGA is intended to deal with multimodal problems that are difficult to solve using conventional GAs. We use a multi-population scheme that includes one parent population that explores one subspace and one or more child populations exploiting the other subspace. We consider two types of fGAs, depending on the method used to divide the search space. One is the genotypic fGA (g-fGA), which defines the search subspace for each subpopulation, depending on the salient schema within the genotypic search space. The other is the phenotypic fGA (p-fGA), which defines a search subspace by a neighborhood hypercube around the current best individual in the phenotypic feature space. Empirical results on complex function optimization problems show that both the g-fGA and the p-fGA perform well compared to conventional GAs. Two additional utilities of the p-fGA are also studied briefly.

105 citations


Journal ArticleDOI
TL;DR: A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle and is successfully applied to the induction of higher order neural trees.
Abstract: This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest.

99 citations


Journal ArticleDOI
TL;DR: It is concluded that the algebraic approach to fitness landscape analysis can be extended to recombination spaces and provides an effective way to analyze the relative hardness of a landscape for a given recombination operator.
Abstract: A new mathematical representation is proposed for the configuration space structure induced by recombination, which we call “P-structure.” It consists of a mapping of pairs of objects to the power set of all objects in the search space. The mapping assigns to each pair of parental “genotypes” the set of all recombinant genotypes obtainable from the parental ones. It is shown that this construction allows a Fourier decomposition of fitness landscapes into a superposition of “elementary landscapes.” This decomposition is analogous to the Fourier decomposition of fitness landscapes on mutation spaces. The elementary landscapes are obtained as eigenfunctions of a Laplacian operator defined for P-structures. For binary string recombination, the elementary landscapes are exactly the p-spin functions (Walsh functions), that is, the same as the elementary landscapes of the string point mutation spaces (i.e., the hypercube). This supports the notion of a strong homomorphism between string mutation and recombination spaces. However, the effective nearest neighbor correlations on these elementary landscapes differ between mutation and recombination and among different recombination operators. On average, the nearest neighbor correlation is higher for one-point recombination than for uniform recombination. For one-point recombination, the correlations are higher for elementary landscapes with fewer interacting sites as well as for sites that have closer linkage, confirming the qualitative predictions of the Schema Theorem. We conclude that the algebraic approach to fitness landscape analysis can be extended to recombination spaces and provides an effective way to analyze the relative hardness of a landscape for a given recombination operator.
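For binary strings the elementary landscapes named above are the Walsh (p-spin) functions: on a string x, the function indexed by a mask S is +1 when x has even parity on the positions in S and −1 otherwise. A minimal implementation on integer-encoded bit strings:

```python
def walsh(mask, x):
    """Walsh (p-spin) basis function: +1 if x has an even number of 1-bits
    on the positions selected by mask, else -1."""
    return 1 if bin(mask & x).count('1') % 2 == 0 else -1
```

The mask's popcount is the number of interacting sites of that elementary landscape, and distinct masks give mutually orthogonal functions over the hypercube — the decomposition the abstract refers to.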

89 citations


Journal ArticleDOI
TL;DR: The density of states is investigated, which describes the number of solutions with a certain fitness value in the stationary regime, and it is found that the network problem belongs to a class of optimization problems in which more effort in optimization certainly yields better solutions.
Abstract: A road network usually has to fulfill two requirements: (i) it should as far as possible provide direct connections between nodes to avoid large detours; and (ii) the costs for road construction and maintenance, which are assumed proportional to the total length of the roads, should be low. The optimal solution is a compromise between these contradictory demands, which in our model can be weighted by a parameter. The road optimization problem belongs to the class of frustrated optimization problems. In this paper, a special class of evolutionary strategies, such as the Boltzmann and Darwin and mixed strategies, are applied to find differently optimized solutions (graphs of varying density) for the road network, depending on the degree of frustration. We show that the optimization process occurs on two different time scales. In the asymptotic limit, a fixed relation between the mean connection distance (detour) and the total length (costs) of the network exists that defines a range of possible compromises. Furthermore, we investigate the density of states, which describes the number of solutions with a certain fitness value in the stationary regime. We find that the network problem belongs to a class of optimization problems in which more effort in optimization certainly yields better solutions. An analytical approximation for the relation between effort and improvement is derived.

Journal ArticleDOI
TL;DR: A flexible system called LOGENPRO (The LOgic grammar-based GENetic PROgramming system) combines techniques from GP and ILP, applying logic grammars to control the evolution of programs in various programming languages and to represent context-sensitive information and domain-dependent knowledge.
Abstract: Program induction generates a computer program that can produce the desired behavior for a given set of situations. Two of the approaches in program induction are inductive logic programming (ILP) and genetic programming (GP). Since their formalisms are so different, these two approaches cannot be integrated easily, although they share many common goals and functionalities. A unification will greatly enhance their problem-solving power. Moreover, they are restricted in the computer languages in which programs can be induced. In this paper, we present a flexible system called LOGENPRO (The LOgic grammar-based GENetic PROgramming system) that uses some of the techniques of GP and ILP. It is based on a formalism of logic grammars. The system applies logic grammars to control the evolution of programs in various programming languages and represent context-sensitive information and domain-dependent knowledge. Experiments have been performed to demonstrate that LOGENPRO can emulate GP and GP with automatically defined functions (ADFs). Moreover, LOGENPRO can employ knowledge such as argument types in a unified framework. The experiments show that LOGENPRO has superior performance to that of GP and GP with ADFs when more domain-dependent knowledge is available. We have applied LOGENPRO to evolve general recursive functions for the even-n-parity problem from noisy training examples. A number of experiments have been performed to determine the impact of domain-specific knowledge and noise in training examples on the speed of learning.

Journal ArticleDOI
TL;DR: A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided, with Boolean AND and EXCLUSIVE-OR operators replaced by multiplication and addition over rings of integers.
Abstract: A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparison of results to the binary case are provided.

Journal ArticleDOI
TL;DR: A novel genetic algorithm (GA) using minimal representation size cluster (MRSC) analysis is designed and implemented for solving multimodal function optimization problems and results in a highly parallel algorithm for finding multiple local minima.
Abstract: A novel genetic algorithm (GA) using minimal representation size cluster (MRSC) analysis is designed and implemented for solving multimodal function optimization problems. The problem of multimodal function optimization is framed within a hypothesize-and-test paradigm using minimal representation size (minimal complexity) for species formation and a GA. A multiple-population GA is developed to identify different species. The number of populations, thus the number of different species, is determined by the minimal representation size criterion. Therefore, the proposed algorithm reveals the unknown structure of the multimodal function when no a priori knowledge about the function is available. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The proposed scheme results in a highly parallel algorithm for finding multiple local minima. In this paper, a path-planning algorithm is also developed based on the MRSC-GA algorithm. The algorithm utilizes MRSC-GA for planning paths for mobile robots, piano-mover problems, and N-link manipulators. MRSC-GA is used for generating multipaths to provide alternative solutions to the path-planning problem. The generation of alternative solutions is especially important for planning paths in dynamic environments. A novel iterative multiresolution path representation is used as a basis for the GA coding. The effectiveness of the algorithm is demonstrated on a number of two-dimensional path-planning problems.

Journal ArticleDOI
TL;DR: A new representation combining redundancy and implicit fitness constraints is introduced that performs better than a simple genetic algorithm and a structured GA in experiments and provides the necessary flexibility to represent unstructured problem domains that do not have the explicit constraints required by fixed representations.
Abstract: A new representation combining redundancy and implicit fitness constraints is introduced that performs better than a simple genetic algorithm (GA) and a structured GA in experiments. The implicit redundant representation (IRR) consists of a string that is over-specified, allowing for sections of the string to remain inactive during function evaluation. The representation does not require the user to prespecify the number of parameters to evaluate or the location of these parameters within the string. This information is obtained implicitly by the fitness function during the GA operations. The good performance of the IRR can be attributed to several factors: less disruption of existing fit members due to the increased probability of crossovers and mutation affecting only redundant material; discovery of fit members through the conversion of redundant material into essential information; and the ability to enlarge or reduce the search space dynamically by varying the number of variables evaluated by the fitness function. The IRR GA provides a more biologically parallel representation that maintains a diverse population throughout the evolution process. In addition, the IRR provides the necessary flexibility to represent unstructured problem domains that do not have the explicit constraints required by fixed representations.
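The implicit redundant representation can be sketched as a decoder that scans an over-specified bit string for start markers: the bits following each marker are read as one parameter, and everything in between stays inactive. The marker pattern and gene length here are hypothetical choices, not the paper's encoding:

```python
def decode_irr(bits, marker=(1, 1, 0), param_len=4):
    """Decode an over-specified IRR-style string (sketch). Bits after each
    start marker form one parameter; the rest is redundant material."""
    params, i = [], 0
    gene_len = len(marker) + param_len
    while i + gene_len <= len(bits):
        if tuple(bits[i:i + len(marker)]) == marker:
            gene = bits[i + len(marker):i + gene_len]
            params.append(int(''.join(map(str, gene)), 2))
            i += gene_len              # genes do not overlap
        else:
            i += 1                     # inactive bit, keep scanning
    return params
```

Because the number of decoded parameters depends on how many markers the string happens to contain, mutation and crossover can grow or shrink the effective search space without the user fixing it in advance.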

Journal ArticleDOI
TL;DR: The results of the experiments show an overall improvement on the average performance of CAGP over GP alone and a significant reduction of the complexity of the produced solution.
Abstract: Traditional software engineering dictates the use of modular and structured programming and top-down stepwise refinement techniques that reduce the amount of variability arising in the development process by establishing standard procedures to be followed while writing software. This focusing leads to reduced variability in the resulting products, due to the use of standardized constructs. Genetic programming (GP) performs heuristic search in the space of programs. Programs produced through the GP paradigm emerge as the result of simulated evolution and are built through a bottom-up process, incrementally augmenting their functionality until a satisfactory level of performance is reached. Can we automatically extract knowledge from the GP programming process that can be useful to focus the search and reduce product variability, thus leading to a more effective use of the available resources? An answer to this question is investigated with the aid of cultural algorithms. A new system, cultural algorithms with genetic programming (CAGP), is presented. The system has two levels. The first is the pool of genetic programs (population level), and the second is a knowledge repository (belief set) that is built during the GP run and is used to guide the search process. The microevolution within the population brings about potentially meaningful characteristics of the programs for the achievement of the given task, such as properties exhibited by the best performers in the population. CAGP extracts these features and represents them as the set of the current beliefs. Beliefs correspond to constraints that all the genetic operators and programs must follow. Interaction between the two levels occurs in one direction through the extraction process and, in the other, through the modulation of an individual's program parameters according to which, and how many, of the constraints it follows. 
CAGP is applied to solve an instance of the symbolic regression problem, in which a function of one variable needs to be discovered. The results of the experiments show an overall improvement on the average performance of CAGP over GP alone and a significant reduction of the complexity of the produced solution. Moreover, the execution time required by CAGP is comparable with the time required by GP alone.

Journal ArticleDOI
TL;DR: Evolving programs concerns the efficient evolution of behavior that accomplishes a desired task and is directed by a program; the evolutionary induction of executable structures has been investigated in diverse ways employing distinct representations.
Abstract: Given accessible computational models of evolution, it is a short leap to the application of evolving programs. At its essence, evolving programs concerns the efficient evolution of behavior. This same question compels the currently thriving artificial life community. Unlike artificial life, however, the practical nature of evolving computer programs forces the evolutionary computation community to attend to the distinct issues involved in the induction of behavior that accomplishes a desired task and which is directed by a program. When evolving computer programs an issue of primary importance is the designation of an appropriate evolvable representation for the behavior. The general requirements of being sufficient to model the behavior and being amenable to evolutionary manipulation imply that an appropriate representation must accommodate the specific constraints of evolving executable structures: Executability must be maintained despite random variation, and variable length expression must be permitted. Popular programming languages permit a wealth of syntactic expressions in order to articulate the intended semantics in a fashion suitable for humans. While necessary for human programmers, extraneous syntax is problematic when evolving programs. Structures constrained by many syntactic rules often become unexecutable after unconstrained evolutionary manipulation. The solution has been to use structures that have minimal syntactic requirements or to employ specialized operators that respect the interpretation of an individual as an executable structure. In addition, because one does not know in advance how long the desired program should be, rather than stipulate a fixed length for an evolved program, dynamic length representations have provided often necessary flexibility. The evolutionary induction of executable structures has been investigated in diverse ways employing distinct representations. 
Friedberg (1958) and Friedberg, Dunham, and North (1959), working within the constraints of simple computers, evolved sequences of machine language instructions that performed modest computations. Fogel, Owens, and Walsh (1966), interested in the evolution of artificial intelligence, proposed evolving finite state machines as models of intelligent behavior by applying representation specific mutations without recombination in their early definition of evolutionary programming. Evolutionary programming has blossomed more recently into a much broader collection of methods centered

Journal ArticleDOI
TL;DR: This paper reviews the developments in evolvable hardware systems presented at the First International Conference on Evolvable Systems (ICES 96) and splits them into three broad groups according to whether they involve evolving a fit solution to a problem as a member of a population of competing candidates, evolving solutions that can individually learn from and adapt to their environments, or the embryonic growth of solutions.
Abstract: This paper reviews the developments in evolvable hardware systems presented at the First International Conference on Evolvable Systems (ICES 96). The main body of the review gives an overview of the 34 papers presented orally, splitting them into three broad groups according to whether they involve (1) evolving a fit solution to a problem as a member of a population of competing candidates, (2) evolving solutions that can individually learn from and adapt to their environments, or (3) the embryonic growth of solutions. We also review the discussion sessions of the conference and give pointers to related upcoming events.