
Showing papers on "Genetic representation published in 1995"


Proceedings ArticleDOI
05 Jul 1995
TL;DR: Culling is near optimal for this problem, highly noise tolerant, and the best known approach in some regimes; new large deviation bounds on this submartingale enable us to determine the running time of the algorithm.
Abstract: We analyze the performance of a Genetic Type Algorithm we call Culling and a variety of other algorithms on a problem we refer to as ASP. Culling is near optimal for this problem, highly noise tolerant, and the best known approach in some regimes. We show that the problem of learning the Ising perceptron is reducible to noisy ASP. These results provide an example of a rigorous analysis of GA's and give insight into when and how GA's can beat competing methods. To analyze the genetic algorithm, we view it as a special type of submartingale. We prove some new large deviation bounds on this submartingale which enable us to determine the running time of the algorithm.

4,520 citations


Journal ArticleDOI
TL;DR: In this paper, a tutorial on using genetic algorithms to optimize antenna and scattering patterns is presented, and three examples demonstrate how to optimize antennas and backscattering radar-cross-section patterns.
Abstract: This article is a tutorial on using genetic algorithms to optimize antenna and scattering patterns. Genetic algorithms are "global" numerical-optimization methods, patterned after the natural processes of genetic recombination and evolution. The algorithms encode each parameter into binary sequences, called a gene, and a set of genes is a chromosome. These chromosomes undergo natural selection, mating, and mutation, to arrive at the final optimal solution. After providing a detailed explanation of how a genetic algorithm works, and a listing of a MATLAB code, the article presents three examples. These examples demonstrate how to optimize antenna patterns and backscattering radar-cross-section patterns. Finally, additional details about algorithm design are given.
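The binary parameter encoding the abstract describes can be sketched as follows; the function names, bit width, and parameter ranges are illustrative and not taken from the article's MATLAB listing:

```python
def encode(value, lo, hi, n_bits):
    """Quantise a real parameter in [lo, hi] into an n_bits binary gene."""
    level = round((value - lo) / (hi - lo) * (2 ** n_bits - 1))
    return format(level, f"0{n_bits}b")

def decode(gene, lo, hi):
    """Map a binary gene back into the parameter range [lo, hi]."""
    return lo + int(gene, 2) / (2 ** len(gene) - 1) * (hi - lo)

# A chromosome is the concatenation of one gene per parameter,
# e.g. an element spacing and a phase, each quantised to 8 bits.
chromosome = encode(0.5, 0.0, 1.0, 8) + encode(90.0, 0.0, 360.0, 8)
```

Quantisation error shrinks as the bit width grows, at the cost of a longer chromosome.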

831 citations


Proceedings ArticleDOI
29 Nov 1995
TL;DR: A framework of genetic algorithms to search for Pareto optimal solutions (i.e., non-dominated solutions) of multi-objective optimization problems; the elite preserve strategy in this paper uses multiple elite solutions instead of a single elite solution.
Abstract: In this paper, we propose a framework of genetic algorithms to search for Pareto optimal solutions (i.e., non-dominated solutions) of multi-objective optimization problems. Our approach differs from single-objective genetic algorithms in its selection procedure and elite preserve strategy. The selection procedure in our genetic algorithms selects individuals for a crossover operation based on a weighted sum of multiple objective functions. The characteristic feature of the selection procedure is that the weights attached to the multiple objective functions are not constant but randomly specified for each selection. The elite preserve strategy in our genetic algorithms uses multiple elite solutions instead of a single elite solution. That is, a certain number of individuals are selected from a tentative set of Pareto optimal solutions and inherited to the next generation as elite individuals.
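The randomly weighted selection procedure described above can be sketched as follows; the tournament of size 3 and maximisation are assumptions, since the paper only specifies that the weights are redrawn for each selection:

```python
import random

def random_weight_fitness(objectives):
    """Scalarise multiple objective values with weights drawn anew
    for each selection, as in the paper's selection procedure."""
    weights = [random.random() for _ in objectives]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalise to sum to 1
    return sum(w * f for w, f in zip(weights, objectives))

def select_parent(population, objective_fn):
    """Pick the candidate with the best randomly weighted sum from a
    small tournament (tournament selection is an assumption here)."""
    candidates = random.sample(population, k=3)
    return max(candidates, key=lambda x: random_weight_fitness(objective_fn(x)))
```

Because every selection uses a different weight vector, the search pressure sweeps across the Pareto front instead of converging on one compromise solution.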

440 citations


Journal ArticleDOI
TL;DR: The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis and an adaptive learning method is presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy.
Abstract: Genetic programming is distinguished from other evolutionary algorithms in that it uses tree representations of variable size instead of linear strings of fixed length. The flexible representation scheme is very important because it allows the underlying structure of the data to be discovered automatically. One primary difficulty, however, is that the solutions may grow too big without any improvement of their generalization ability. In this article we investigate the fundamental relationship between the performance and complexity of the evolved structures. The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis. We consider genetic programming as a statistical inference problem and apply the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms. An adaptive learning method is then presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy. The effectiveness of this approach is empirically shown on the induction of sigma-pi neural networks for solving a real-world medical diagnosis problem as well as benchmark tasks.

252 citations


Journal Article
TL;DR: An adaptive mechanism for controlling the use of crossover in an EA is described and an improvement to the adaptive mechanism is presented, which can also be used to enhance performance in a non-adaptive EA.
Abstract: One of the issues in evolutionary algorithms (EAs) is the relative importance of two search operators: mutation and crossover. Genetic algorithms (GAs) and genetic programming (GP) stress the role of crossover, while evolutionary programming (EP) and evolution strategies (ESs) stress the role of mutation. The existence of many different forms of crossover further complicates the issue. Despite theoretical analysis, it appears difficult to decide a priori which form of crossover to use, or even if crossover should be used at all. One possible solution to this difficulty is to have the EA be self-adaptive, i.e., to have the EA dynamically modify which forms of crossover to use and how often to use them, as it solves a problem. This paper describes an adaptive mechanism for controlling the use of crossover in an EA and explores the behavior of this mechanism in a number of different situations. An improvement to the adaptive mechanism is then presented. Surprisingly this improvement can also be used to enhance performance in a non-adaptive EA.

242 citations


Journal ArticleDOI
TL;DR: In this article, a general framework for the use of coevolution to boost the performance of genetic search is proposed, which combines co-evolution with yet another biologically inspired technique, called lifetime fitness evaluation (LTFE).
Abstract: This article proposes a general framework for the use of coevolution to boost the performance of genetic search. It combines coevolution with yet another biologically inspired technique, called lifetime fitness evaluation (LTFE). Two unrelated problems-neural net learning and constraint satisfaction-are used to illustrate the approach. Both problems use predator-prey interactions to boost the search. In contrast with traditional "single population" genetic algorithms (GAs), two populations constantly interact and coevolve. However, the same algorithm can also be used with different types of coevolutionary interactions. As an example, the symbiotic coevolution of solutions and genetic representations is shown to provide an elegant solution to the problem of finding a suitable genetic representation. The approach presented here greatly profits from the partial and continuous nature of LTFE. Noise tolerance is one advantage. Even more important, LTFE is ideally suited to deal with coupled fitness landscapes typical for coevolution.

209 citations



Proceedings ArticleDOI
26 Feb 1995
TL;DR: An application to a GA-easy problem shows that greater efficiency can be obtained by evaluating only a small portion of the population, and a real-world search problem confirms these results.
Abstract: GAs have proven effective on a broad range of search problems. However, when each population member’s fitness evaluation is computationally expensive, the prospect of evaluating an entire population can prohibit use of the GA. This paper examines a GA that overcomes this difficulty by evaluating only a portion of the population. The remainder of the population has its fitness assigned by inheritance. Theoretical arguments justify this approach. An application to a GA-easy problem shows that greater efficiency can be obtained by evaluating only a small portion of the population. A real-world search problem confirms these results. Implications and future directions are discussed.
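The inheritance scheme the abstract describes might look like the following sketch; the evaluation fraction and the parent-averaging rule are assumptions, since the abstract does not fix them:

```python
import random

def evaluate_with_inheritance(population, fitness_fn, parent_fitness,
                              eval_fraction=0.5):
    """Assign fitness to each individual: a fraction gets a true
    (expensive) evaluation, the rest inherit the average of their
    parents' fitnesses. The fraction and the averaging rule are
    assumptions; the paper's point is only that most evaluations
    can be replaced by inherited estimates."""
    fitness = {}
    for individual in population:
        if random.random() < eval_fraction:
            fitness[individual] = fitness_fn(individual)  # true evaluation
        else:
            pa, pb = parent_fitness[individual]           # parents' fitness values
            fitness[individual] = (pa + pb) / 2           # inherited estimate
    return fitness
```

Lowering `eval_fraction` trades evaluation cost against the accuracy of the fitness signal driving selection.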

187 citations


Proceedings ArticleDOI
John R. Koza
07 Nov 1995
TL;DR: This paper provides an introduction to genetic algorithms and genetic programming and lists sources of additional information, including books and conferences as well as e-mail lists and software that is available over the Internet.
Abstract: This paper provides an introduction to genetic algorithms and genetic programming and lists sources of additional information, including books and conferences as well as e-mail lists and software that is available over the Internet. 1. GENETIC ALGORITHMS John Holland's pioneering book Adaptation in Natural and Artificial Systems (1975, 1992) showed how the evolutionary process can be applied to solve a wide variety of problems using a highly parallel technique that is now called the genetic algorithm. The genetic algorithm (GA) transforms a population (set) of individual objects, each with an associated fitness value, into a new generation of the population using the Darwinian principle of reproduction and survival of the fittest and analogs of naturally occurring genetic operations such as crossover (sexual recombination) and mutation. Each individual in the population represents a possible solution to a given problem. The genetic algorithm attempts to find a very good (or best) solution to the problem by genetically breeding the population of individuals over a series of generations. Before applying the genetic algorithm to the problem, the user designs an artificial chromosome of a certain fixed size and then defines a mapping (encoding) between the points in the search space of the problem and instances of the artificial chromosome. For example, in applying the genetic algorithm to a multidimensional optimization problem (where the goal is to find the global optimum of an unknown multidimensional function), the artificial chromosome may be a linear character string (modeled directly after the linear string of information found in DNA). A specific location (a gene) along this artificial chromosome is associated with each of the variables of the problem. Character(s) appearing at a particular location along the chromosome denote the value of a particular variable (i.e., the gene value or allele). 
Each individual in the population has a fitness value (which, for a multidimensional optimization problem, is the value of the unknown function). The genetic algorithm then manipulates a population of such artificial chromosomes (usually starting from a randomly-created initial population of strings) using the operations of reproduction, crossover, and mutation. Individuals are probabilistically selected to participate in these genetic operations based on their fitness. The goal of the genetic algorithm in a multidimensional optimization problem is to find an artificial chromosome which, when decoded and mapped back into the search space of the problem, corresponds to a globally optimum (or near-optimum) point in the original search space of the problem. In preparing to use the conventional genetic algorithm operating on fixed-length character strings to solve a problem, the user must (1) determine the representation scheme, (2) determine the fitness measure, (3) determine the parameters and variables for controlling the algorithm, and (4) determine a way of designating the result and a criterion for terminating a run. In the conventional genetic algorithm, the individuals in the population are usually fixed-length character strings patterned after chromosome strings. Thus, specification of the representation scheme in the conventional genetic algorithm starts with a selection of the string length L and the alphabet size K. Often the alphabet is binary, so K equals 2. The most important part of the representation scheme is the mapping that expresses each possible point in the search space of the problem as a fixed-length character string (i.e., as a chromosome) and each chromosome as a point in the search space of the problem. Selecting a representation scheme that facilitates solution of the problem by the genetic algorithm often requires considerable insight into the problem and good judgment. The evolutionary process is driven by the fitness measure. 
The fitness measure assigns a fitness value to each possible fixed-length character string in the population. The primary parameters for controlling the genetic algorithm are the population size, M, and the maximum number of generations to be run, G. Populations can consist of hundreds, thousands, tens of thousands or more individuals. There can be dozens, hundreds, thousands, or more generations in a run of the genetic algorithm. Each run of the genetic algorithm requires specification of a termination criterion for deciding when to terminate a run and a method of result designation. One frequently used method of result designation for a run of the genetic algorithm is to designate the best individual obtained in any generation of the population during the run (i.e., the best-so-far individual) as the result of the run. Once the four preparatory steps for setting up the genetic algorithm have been completed, the genetic algorithm can be run. The evolutionary process described above indicates how a globally optimum combination of alleles (gene values) within a fixed-size chromosome can be evolved. The three steps in executing the genetic algorithm operating on fixed-length character strings are as follows: (1) Randomly create an initial population of individual fixed-length character strings. (2) Iteratively perform the following substeps on the population of strings until the termination criterion has been satisfied: (A) Assign a fitness value to each individual in the population using the fitness measure. (B) Create a new population of strings by applying the following three genetic operations. The genetic operations are applied to individual string(s) in the population chosen with a probability based on fitness. (i) Reproduce an existing individual string by copying it into the new population. 
(ii) Create two new strings from two existing strings by genetically recombining substrings using the crossover operation (described below) at a randomly chosen crossover point. (iii) Create a new string from an existing string by randomly mutating the character at one randomly chosen position in the string. (3) The string that is identified by the method of result designation (e.g., the best-so-far individual) is designated as the result of the genetic algorithm for the run. This result may represent a solution (or an approximate solution) to the problem. The genetic operation of reproduction is based on the Darwinian principle of reproduction and survival of the fittest. In the reproduction operation, an individual is probabilistically selected from the population based on its fitness (with reselection allowed) and then the individual is copied, without change, into the next generation of the population. The selection is done in such a way that the better an individual's fitness, the more likely it is to be selected. An important aspect of this probabilistic selection is that every individual, however poor its fitness, has some probability of selection. The genetic operation of crossover (sexual recombination) allows new individuals (i.e., new points in the search space) to be created and tested. The operation of crossover starts with two parents independently selected probabilistically from the population based on their fitness (with reselection allowed). As before, the selection is done in such a way that the better an individual's fitness, the more likely it is to be selected. The crossover operation produces two offspring. Each offspring contains some genetic material from each of its parents. Suppose that the crossover operation is to be applied to the two parental strings 10110 and 01101 of length L = 5 over an alphabet of size K = 2. The crossover operation begins by randomly selecting a number between 1 and L-1 using a uniform probability distribution. 
Suppose that the third interstitial location is selected. This location becomes the crossover point. Each parent is then split at this crossover point into a crossover fragment and a remainder. The crossover operation then recombines remainder 1 (i.e., 10) with crossover fragment 2 (i.e., 011) to create offspring 2 (i.e., 01110). The crossover operation similarly recombines remainder 2 (i.e., 01) with crossover fragment 1 (i.e., 101) to create offspring 1 (i.e., 10101). The operation of mutation allows new individuals to be created. It begins by selecting an individual from the population based on its fitness (with reselection allowed). A point along the string is selected at random and the character at that point is randomly changed. The altered individual is then copied into the next generation of the population. Mutation is used very sparingly in genetic algorithm work. The genetic algorithm works in a domain-independent way on the fixed-length character strings in the population. The genetic algorithm searches the space of possible character strings in an attempt to find high-fitness strings. The fitness landscape may be very rugged and nonlinear. To guide this search, the genetic algorithm uses only the numerical fitness values associated with the explicitly tested strings in the population. Regardless of the particular problem domain, the genetic algorithm carries out its search by performing the same disarmingly simple operations of copying, recombining, and occasionally randomly mutating the strings. In practice, the genetic algorithm is surprisingly rapid in effectively searching complex, highly nonlinear, multidimensional search spaces. This is all the more surprising because the genetic algorithm does not know anything about the problem domain or the internal workings of the fitness measure being used. 
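The crossover and mutation operations walked through above can be sketched as follows; the worked example with parents 10110 and 01101 and a crossover point after the third position is reproduced from the text:

```python
import random

def one_point_crossover(p1, p2, point):
    """Split both parents at the crossover point and swap the tails:
    each offspring keeps one parent's fragment and receives the
    other parent's remainder."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(s, pos=None):
    """Flip the bit at one randomly chosen position (binary alphabet)."""
    if pos is None:
        pos = random.randrange(len(s))
    flipped = "1" if s[pos] == "0" else "0"
    return s[:pos] + flipped + s[pos + 1:]

# The text's worked example: parents 10110 and 01101, crossover
# point after the third position (L = 5, so the point is in 1..4).
offspring1, offspring2 = one_point_crossover("10110", "01101", 3)
# offspring1 == "10101", offspring2 == "01110", as in the text
```

Both operators preserve the fixed string length, which is what lets the conventional GA apply them uniformly across the population.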
1.1 Sources of Additional Information David Goldberg's Genetic Algorithms in Search, Optimization, and Machine Learning (1989) is the leading textbook and best single source of additional information about the field of genetic algorithms. Additional information on genetic algorithms can be found in Davis (1987, 1991), Michalewicz (1992), and Buckles and Petry (1992). The proceedings of th

179 citations


Proceedings ArticleDOI
29 Nov 1995
TL;DR: This work implements Gaussian mutation operators for Genetic Algorithms used to optimise numeric functions and shows it is superior to bit-flip mutation for most of the test functions.
Abstract: By considering the function variables rather than the binary bits as genes, new mutation operators can be devised for GAs used to optimise numeric functions. We implement Gaussian mutation operators for Genetic Algorithms used to optimise numeric functions and show it is superior to bit-flip mutation for most of the test functions. Gaussian mutation is a fundamental operator of both Evolutionary Strategies (ES) and Evolutionary Programming (EP). We also implement self-adaptive Gaussian mutation (also used in Evolutionary Strategies and Evolutionary Programming), which allows the GA to vary the mutation strength during the run; this gives further improvement on some of the functions. The performance of our GA using a simple implementation of self-adaptive Gaussian mutation is now comparable to ESs. This shows the importance of mutation and the importance of using appropriate mutation operators.
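A minimal sketch of ES-style self-adaptive Gaussian mutation as the abstract describes it; the log-normal step-size update and the learning rate tau = 1/sqrt(n) are standard choices from the ES literature, not values taken from this paper:

```python
import math
import random

def self_adaptive_gaussian_mutation(x, sigmas, tau=None):
    """Mutate a real-valued genome with per-gene step sizes: first
    perturb each step size log-normally, then add Gaussian noise
    scaled by the new step size, so the mutation strength itself
    evolves during the run."""
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(n)  # a common default, not from the paper
    new_sigmas = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigmas]
    new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigmas)]
    return new_x, new_sigmas
```

The step sizes are carried on the genome alongside the variables, so selection implicitly favours individuals whose mutation strengths suit the local landscape.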

163 citations


Journal ArticleDOI
TL;DR: This paper describes the application of genetic algorithms to nonlinear constrained mixed discrete-integer optimization problems with optimal sets of parameters furnished by a meta-genetic algorithm.
Abstract: This paper describes the application of genetic algorithms to nonlinear constrained mixed discrete-integer optimization problems with optimal sets of parameters furnished by a meta-genetic algorithm. Genetic algorithms are combinatorial in nature, and therefore are computationally suitable for treating discrete and integer design variables. Careful attention has been paid to modify the genetic algorithms to promote computational efficiency. Some numerical experiments were performed so as to determine the appropriate range of genetic parameter values. Then the meta-genetic algorithm was employed to optimize these parameters to locate the best solution. Three examples are given to demonstrate the effectiveness of the methodology developed in this paper. Four crossover operators have been compared and the results show that a four-point crossover operator performs best.

Book ChapterDOI
01 Jan 1995
TL;DR: This hypothesis is tested and supported through studies of four different representations for the travelling sales-rep problem in the context of both formal representation-independent genetic algorithms and corresponding memetic algorithms.
Abstract: Representation is widely recognised as a key determinant of performance in evolutionary computation. The development of families of representation-independent operators allows the formulation of formal representation-independent evolutionary algorithms. These formal algorithms can be instantiated for particular search problems by selecting a suitable representation. The performance of different representations, in the context of any given formal representation-independent algorithm, can then be measured. Simple analyses suggest that fitness variance of formae (generalised schemata) for the chosen representation might act as a performance predictor for evolutionary algorithms. This hypothesis is tested and supported through studies of four different representations for the travelling sales-rep problem (TSP) in the context of both formal representation-independent genetic algorithms and corresponding memetic algorithms.

Book
01 Dec 1995
TL;DR: This book covers theoretical to practical applications of genetic algorithms, and the disk contains complete program details of each genetic algorithm discussed in the text.
Abstract: A genetic algorithm is an algorithm that the computer evaluates, alters slightly and then re-evaluates to see how the change affected the outcome. Genetic algorithms are useful for artificial intelligence, theoretical modeling and prediction programs. This book covers theoretical to practical applications of this exciting field. The disk contains complete program details of each genetic algorithm discussed in the text.


Journal ArticleDOI
TL;DR: Two examples show that evolutionary programming provides a feasible method for addressing such control problems as controlling unstable nonlinear systems with neural networks.
Abstract: Controlling unstable nonlinear systems with neural networks can be problematic. Two examples show that evolutionary programming provides a feasible method for addressing such control problems.

Journal ArticleDOI
01 Jun 1995
TL;DR: The experiments indicate that evolutionary programming outperforms the genetic algorithm and potential difficulties in the design of suitable penalty functions for constrained optimization problems are indicated.
Abstract: Evolutionary programming and genetic algorithms are compared on two constrained optimization problems. The constrained problems are redesigned as related unconstrained problems by the application o...

Book ChapterDOI
29 Aug 1995
TL;DR: A problem-specific chromosome representation and knowledge-augmented genetic operators have been developed; these operators ‘intelligently’ avoid building illegal timetables.
Abstract: In this paper we describe a heavily constrained university timetabling problem, and our genetic algorithm based approach to solve it. A problem-specific chromosome representation and knowledge-augmented genetic operators have been developed; these operators ‘intelligently’ avoid building illegal timetables. The prototype timetabling system which is presented has been implemented in C and PROLOG, and includes an interactive graphical user interface. Tests with real data from our university were performed and yield promising results.

Book ChapterDOI
01 Jan 1995

Journal ArticleDOI
TL;DR: The GA-P performs symbolic regression by combining the traditional genetic algorithms function optimization strength with the genetic-programming paradigm to evolve complex mathematical expressions capable of handling numeric and symbolic data.
Abstract: The GA-P performs symbolic regression by combining the traditional genetic algorithms function optimization strength with the genetic-programming paradigm to evolve complex mathematical expressions capable of handling numeric and symbolic data. This technique should provide new insights into poorly understood data relationships.

01 Jan 1995
TL;DR: This report describes the parallel implementation of genetic programming in the C programming language using a PC 486 type computer (running Windows) acting as a host and a network of transputers acting as processing nodes.
Abstract: This report describes the parallel implementation of genetic programming in the C programming language using a PC 486 type computer (running Windows) acting as a host and a network of transputers acting as processing nodes. Using this approach, researchers of genetic algorithms and genetic programming can acquire computing power that is intermediate between the power of currently available workstations and that of supercomputers at a cost that is intermediate between the two. A comparison is made of the computational effort required to solve the problem of symbolic regression of the Boolean even-5-parity function with different migration rates. Genetic programming required the least computational effort with an 8% migration rate. Moreover, this computational effort was less than that required for solving the problem with a serial computer and a panmictic population of the same size. That is, apart from the nearly linear speed-up in executing a fixed amount of code inherent in the parallel implementation of genetic programming, parallelization delivered more than linear speed-up in solving the problem using genetic programming.
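One migration step of an island model like the one described can be sketched as follows; the ring topology and best-replaces-worst policy are assumptions rather than details from the report, though the 8% rate matches its best-performing setting:

```python
def migrate(islands, rate=0.08):
    """One migration step in a ring-topology island model: each island
    sends copies of its best `rate` fraction to the next island,
    replacing that island's worst individuals. Individuals are dicts
    carrying a precomputed "fitness" value."""
    n_migrants = max(1, int(len(islands[0]) * rate))
    migrants = []
    for pop in islands:
        pop.sort(key=lambda ind: ind["fitness"], reverse=True)  # best first
        migrants.append(pop[:n_migrants])
    for i, pop in enumerate(islands):
        incoming = migrants[(i - 1) % len(islands)]  # from the previous island
        pop[-n_migrants:] = [dict(m) for m in incoming]  # replace the worst
    return islands
```

Too little migration isolates the islands; too much collapses them back into a single panmictic population, which is why an intermediate rate like 8% can beat both extremes.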

Proceedings ArticleDOI
12 Sep 1995
TL;DR: A tree structured genetic algorithm is described, used to generate nonlinear models from process input output data and three examples are utilised to demonstrate the applicability of the technique within the domain of process engineering.
Abstract: A tree structured genetic algorithm is described. The algorithm is used to generate nonlinear models from process input output data. Three examples are utilised to demonstrate the applicability of the technique within the domain of process engineering.

Proceedings Article
15 Jul 1995
TL;DR: This paper considers further the question of whether the epistasis metric actually gives a good prediction of the ease or difficulty of solution of a given problem by a GA, and introduces the concept of alias sets.
Abstract: In an earlier paper we examined the relationship between genetic algorithms (GAs) and traditional methods of experimental design. This was motivated by an investigation into the problems caused by epistasis in the implementation and application of GAs to optimization problems. We showed how this viewpoint enables us to gain further insights into the determination of epistatic effects, and into the value of different forms of encoding a problem for a GA solution. We also demonstrated the equivalence of this approach to Walsh transform analysis. In this paper we consider further the question of whether the epistasis metric actually gives a good prediction of the ease or difficulty of solution of a given problem by a GA. Our original analysis assumed, as does the rest of the related literature, knowledge of the complete solution space. In practice, we only ever sample a fraction of all possible solutions, and this raises significant questions which are the subject of the second part of this paper. In order to analyse these questions, we introduce the concept of alias sets, and conclude by discussing some implications for the traditional understanding of how GAs work.

Book ChapterDOI
04 Jun 1995
TL;DR: It is claimed that the concept of “representations” is particularly useful to understand the evolution of complex adaptation and the origin of the modular design of higher organisms.
Abstract: In this paper the implications of the theory of evolutionary computation for evolutionary biology are explored. It is claimed that the concept of “representations” is particularly useful to understand the evolution of complex adaptation and the origin of the modular design of higher organisms. Modularity improves the adaptability of complex adaptive systems, but arises most likely as a side effect of adaptive evolution rather than being an adaptation itself.

Journal ArticleDOI
TL;DR: An elitist simple genetic algorithm, the CHC algorithm and Genitor are compared using new test problems that are not readily solved using simple local search methods and a hybrid algorithm is examined that combines local and genetic search.
Abstract: Genetic algorithms have attracted a good deal of interest in the heuristic search community. Yet there are several different types of genetic algorithms with varying performance and search characteristics. In this article we look at three genetic algorithms: an elitist simple genetic algorithm, the CHC algorithm and Genitor. One problem in comparing algorithms is that most test problems in the genetic algorithm literature can be solved using simple local search methods. In this article, the three algorithms are compared using new test problems that are not readily solved using simple local search methods. We then compare a local search method to genetic algorithms for geometric matching and examine a hybrid algorithm that combines local and genetic search. The geometric matching problem matches a model (e.g., a line drawing) to a subset of lines contained in a field of line fragments. Local search is currently the best known method for solving general geometric matching problems.


Journal ArticleDOI
TL;DR: There are benefits to be gained by going beyond a perspective constrained too tightly by the connotations of the term “genetic” and it is shown that the scatter search framework directly leads to processes for combining solutions that exhibit special properties for exploiting combinatorial optimization problems.
Abstract: Scatter search and genetic algorithms have originated from somewhat different traditions and perspectives, yet exhibit features that are strongly complementary. Links between the approaches have increased in recent years as variants of genetic algorithms have been introduced that embody themes in closer harmony with those of scatter search. Some researchers are now beginning to take advantage of these connections by identifying additional ways to incorporate elements of scatter search into genetic algorithm approaches. There remain aspects of the scatter approach that have not been exploited in conjunction with genetic algorithms, yet that provide ways to achieve goals that are basic to the genetic algorithm design. Part of the gap in implementing hybrids of these procedures may derive from relying too literally on the genetic metaphor, which in its narrower interpretation does not readily accommodate the strategic elements underlying scatter search. The theme of this paper is to show there are benefits to be gained by going beyond a perspective constrained too tightly by the connotations of the term “genetic”. We show that the scatter search framework directly leads to processes for combining solutions that exhibit special properties for exploiting combinatorial optimization problems. In the setting of zero-one integer programming, we identify a mapping that gives new ways to create combined solutions, producing constructions called star-paths for exploring the zero-one solution space. Star-path trajectories have the special property of lying within regions assured to include optimal solutions. They also can be exploited in association with both cutting plane and extreme point solution approaches. These outcomes motivate a deeper look into current conceptions of appropriate ways to combine solutions, and disclose there are more powerful methods to derive information from these combinations than those traditionally applied.

Proceedings Article
20 Aug 1995
TL;DR: A new algorithm called SIAO1 for learning first order logic rules with genetic algorithms, using the covering principle developed in AQ, where seed examples are generalized into rules using, however, a genetic search, as initially introduced in the SIA algorithm for attribute-based representation.
Abstract: This paper introduces a new algorithm called SIAO1 for learning first order logic rules with genetic algorithms. SIAO1 uses the covering principle developed in AQ, where seed examples are generalized into rules using, however, a genetic search, as initially introduced in the SIA algorithm for attribute-based representation. The genetic algorithm uses a high-level representation for learning rules in first order logic and may deal with numerical data as well as background knowledge such as hierarchies over the predicates or tree-structured values. The genetic operators may, for instance, change a predicate into a more general one according to background knowledge, or change a constant into a variable. The evaluation function may take into account user preference biases.
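One of the generalization operators mentioned above, replacing a constant by a variable, can be sketched concretely. The rule encoding below (a list of predicate/argument tuples, with variables marked by a `?` prefix) is invented for illustration; SIAO1's actual representation differs.

```python
import random

def generalize_constant(rule, rng=random.Random(0)):
    """Replace every occurrence of one randomly chosen constant in a
    first-order rule by a fresh variable (a generalization step).

    A rule is a list of (predicate, args) tuples; argument names that
    start with '?' are treated as variables, everything else as constants.
    """
    constants = sorted({a for _, args in rule for a in args
                        if not a.startswith('?')})
    if not constants:
        return rule  # nothing left to generalize
    chosen = rng.choice(constants)
    fresh_var = '?V' + str(len(constants))
    return [(pred, tuple(fresh_var if a == chosen else a for a in args))
            for pred, args in rule]

rule = [('parent', ('tom', 'bob')), ('male', ('tom',))]
print(generalize_constant(rule))
```

Replacing all occurrences of the same constant by the same variable keeps the rule's co-reference structure intact, so e.g. `parent(tom, bob) ∧ male(tom)` generalizes to `parent(?V2, bob) ∧ male(?V2)` rather than to two unrelated variables.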

Proceedings ArticleDOI
29 Nov 1995
TL;DR: It is argued that the appropriateness of particular variation operators depends on the level of abstraction of the simulation, and that including specific random variation operators simply because they have a similar form as genetic operators that occur in nature does not, in general, lead to greater fidelity in simulation.
Abstract: Evolutionary computation can be conducted at various levels of abstraction (e.g., genes, individuals, species). Recent claims have been made that simulated evolution can be made more biologically accurate by applying specific genetic operators that mimic low-level transformations to DNA. This paper argues instead that the appropriateness of particular variation operators depends on the level of abstraction of the simulation. Further, including specific random variation operators simply because they have a similar form as genetic operators that occur in nature does not, in general, lead to greater fidelity in simulation.

Proceedings ArticleDOI
29 Nov 1995
TL;DR: By combining a hierarchical crossover operator with two traditional single-point search algorithms (simulated annealing and stochastic iterated hill climbing), this work has solved some problems by processing fewer candidate solutions and with a greater probability of success than genetic programming.
Abstract: Addresses the problem of program discovery as defined by genetic programming. By combining a hierarchical crossover operator with two traditional single-point search algorithms (simulated annealing and stochastic iterated hill climbing), we have solved some problems by processing fewer candidate solutions and with a greater probability of success than genetic programming. We have also enhanced genetic programming by hybridizing it with the simple idea of hill climbing from a few individuals, at a fixed interval of generations.
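The hybrid idea of driving a single-point search with a crossover operator can be shown in miniature. The paper combines a hierarchical crossover with search over program trees; the sketch below substitutes bitstrings and the OneMax fitness (count of 1-bits) purely for illustration.

```python
import random

def crossover_hill_climb(n=20, iters=300, rng=random.Random(1)):
    """Single-point search where each move is a crossover with a random
    mate, accepted only if it does not worsen fitness (hill climbing).

    Returns (initial_fitness, final_fitness) on the OneMax problem.
    """
    fitness = sum  # OneMax: number of 1-bits
    current = [rng.randint(0, 1) for _ in range(n)]
    init = fitness(current)
    for _ in range(iters):
        mate = [rng.randint(0, 1) for _ in range(n)]
        point = rng.randrange(1, n)
        child = current[:point] + mate[point:]  # single-point crossover
        if fitness(child) >= fitness(current):  # non-worsening moves only
            current = child
    return init, fitness(current)
```

Because only non-worsening children are accepted, fitness is monotone non-decreasing, which is the hill-climbing half of the hybrid; replacing the acceptance rule with a temperature-dependent one would give the simulated-annealing variant.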

Book ChapterDOI
04 Sep 1995
TL;DR: This paper offers an introduction to evolutionary programming, and indicates its relationship to other methods of evolutionary computation, specifically genetic algorithms and evolution strategies.
Abstract: Evolutionary programming is a method for simulating evolution that has been investigated for over 30 years. This paper offers an introduction to evolutionary programming, and indicates its relationship to other methods of evolutionary computation, specifically genetic algorithms and evolution strategies. The original efforts that evolved finite state machines for predicting arbitrary time series, as well as specific recent efforts in combinatorial and continuous optimization are reviewed. Some areas of current investigation are mentioned, including empirical assessment of the optimization performance of the technique and extensions of the method to include mechanisms to self-adapt to the error surface being searched.
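The self-adaptation mentioned above, mutating the strategy parameters along with the solution, can be sketched for continuous optimization. This is a compact, simplified variant (log-normal step-size update, (mu + mu) truncation selection instead of EP's usual tournament selection) applied to minimizing the sphere function; all parameter values are illustrative.

```python
import math
import random

def ep_minimize(dim=3, pop=20, gens=100, rng=random.Random(2)):
    """Evolutionary programming sketch with self-adaptive Gaussian mutation.

    Each individual carries its own mutation step sizes, which are
    perturbed log-normally before being used to mutate the solution.
    Returns (initial_best_fitness, final_best_fitness).
    """
    def sphere(x):
        return sum(v * v for v in x)

    tau = 1.0 / math.sqrt(2.0 * math.sqrt(dim))  # common learning rate
    # Population: (solution vector, per-coordinate step sizes)
    P = [([rng.uniform(-5, 5) for _ in range(dim)], [1.0] * dim)
         for _ in range(pop)]
    init_best = min(sphere(x) for x, _ in P)
    for _ in range(gens):
        offspring = []
        for x, s in P:
            # Self-adapt step sizes, then mutate the solution with them.
            s2 = [si * math.exp(tau * rng.gauss(0, 1)) for si in s]
            x2 = [xi + s2i * rng.gauss(0, 1) for xi, s2i in zip(x, s2)]
            offspring.append((x2, s2))
        # Elitist (mu + mu) truncation selection on fitness.
        P = sorted(P + offspring, key=lambda ind: sphere(ind[0]))[:pop]
    return init_best, sphere(P[0][0])
```

Since selection keeps the best of parents and offspring, the best fitness never worsens; the self-adapted step sizes let the search shrink its mutations as it approaches the optimum without any externally scheduled annealing.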