
Showing papers in "Evolutionary Programming in 1997"


Book ChapterDOI
TL;DR: It is shown empirically that the new evolution strategy based on Cauchy mutation outperforms the classical evolution strategy on most of the 23 benchmark problems tested in this paper.
Abstract: Evolution strategies are a class of general optimisation algorithms which are applicable to functions that are multimodal, non-differentiable, or even discontinuous. Although recombination operators have been introduced into evolution strategies, their primary search operator is still mutation. Classical evolution strategies rely on Gaussian mutations. A new mutation operator based on the Cauchy distribution is proposed in this paper. It is shown empirically that the new evolution strategy based on Cauchy mutation outperforms the classical evolution strategy on most of the 23 benchmark problems tested in this paper. These results, along with those obtained by fast evolutionary programming
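The contrast between the two mutation operators can be sketched in a few lines; this is a hedged illustration with fixed step sizes (no self-adaptation or recombination), not the paper's exact algorithm, and the function names and parameters are illustrative:

```python
import math
import random

def gaussian_mutation(x, sigma=1.0):
    """Classical ES mutation: add an N(0, sigma^2) deviate to each coordinate."""
    return [xi + random.gauss(0.0, sigma) for xi in x]

def cauchy_mutation(x, t=1.0):
    """Cauchy mutation: the heavy tails produce occasional long jumps,
    which is what helps escape local optima on multimodal functions."""
    # Standard Cauchy deviate via the inverse CDF: t * tan(pi * (u - 0.5)).
    return [xi + t * math.tan(math.pi * (random.random() - 0.5)) for xi in x]
```

The practical difference is the tail behaviour: the Gaussian step is almost always within a few sigma, while the Cauchy step occasionally jumps far from the parent.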

573 citations


Book ChapterDOI
TL;DR: This paper compares the on-line extrema tracking performance of an evolutionary program without self-adaptation against an evolutionary program using a self-adaptive Gaussian update rule, over a number of dynamics applied to a simple static function.
Abstract: Typical applications of evolutionary optimization involve the off-line approximation of extrema of static multi-modal functions. Methods which use a variety of techniques to self-adapt mutation parameters have been shown to be more successful than methods which do not use self-adaptation. For dynamic functions, the interest is not to obtain the extrema but to follow it as closely as possible. This paper compares the on-line extrema tracking performance of an evolutionary program without self-adaptation against an evolutionary program using a self-adaptive Gaussian update rule over a number of dynamics applied to a simple static function. The experiments demonstrate that for some dynamic functions, self-adaptation is effective while for others it is detrimental.
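A common lognormal self-adaptive update of per-coordinate step sizes (a textbook EP/ES-style rule; the abstract does not give the paper's exact update, so this is a sketch under that assumption) looks like:

```python
import math
import random

def self_adaptive_mutate(x, sigmas, tau=None):
    """One self-adaptive mutation step: each coordinate carries its own
    step size sigma, which is itself mutated (lognormally) before being
    used to perturb the coordinate."""
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))  # a conventional default
    global_factor = random.gauss(0.0, 1.0)  # shared across coordinates
    new_sigmas = [s * math.exp(tau * (global_factor + random.gauss(0.0, 1.0)))
                  for s in sigmas]
    new_x = [xi + s * random.gauss(0.0, 1.0) for xi, s in zip(x, new_sigmas)]
    return new_x, new_sigmas
```

Because the step sizes evolve with the solutions, they can shrink as the population converges; on dynamic functions this is exactly what can make self-adaptation detrimental when the optimum suddenly moves.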

169 citations


Book ChapterDOI
TL;DR: Family Competition Evolution Strategy is compared with other evolutionary algorithms on various benchmark problems and the results indicate that FCES is a powerful optimization technique.
Abstract: This paper applies family competition to evolution strategies to solve constrained optimization problems. The family competition of the Family Competition Evolution Strategy (FCES) can be viewed as a local competition among the children generated from the same parent, while the selection is a global competition among all of the members in the population. According to our experimental results, the self-adaptation of strategy parameters with deterministic elitist selection may trap ESs in local optima when they are applied to heavily constrained optimization problems. By controlling strategy parameters with a non-self-adaptive rule, FCES can reduce the computation time of self-adaptive Gaussian mutation, diminish the complexity of selection from (m+1) to (m+m), and avoid premature convergence. Therefore, FCES is capable of obtaining better performance while saving computation time. In this paper, FCES is compared with other evolutionary algorithms on various benchmark problems, and the results indicate that FCES is a powerful optimization technique.

101 citations


Book ChapterDOI
TL;DR: Using the Giffler and Thompson algorithm, two new operators are created, THX crossover and mutation, which better transmit temporal relationships in the schedule, and appear to integrate successfully the advantages of coarse-grain and fine-grain GAs.
Abstract: This paper describes a GA for job shop scheduling problems. Using the Giffler and Thompson algorithm, we created two new operators, THX crossover and mutation, which better transmit temporal relationships in the schedule. The approach produced excellent results on standard benchmark job shop scheduling problems. We further tested many models and scales of parallel GAs in the context of job shop scheduling problems. In our experiments, the hybrid model consisting of coarse-grain GAs connected in a fine-grain-GA-style topology performed best, appearing to integrate successfully the advantages of coarse-grain and fine-grain GAs.

63 citations


Book ChapterDOI
TL;DR: The benefits of using polyploidy in Genetic Algorithms are explored by using a local chromosome to reflect dominance in diploid and tetraploid organisms, with and without evolving crossover points, added to provide linkage between chromosomes and the dominance control vector.
Abstract: By memorizing alleles that have been successful in the past, polyploidy has been found to be beneficial for adapting to changing environments. This paper explores the benefits of using polyploidy in Genetic Algorithms. Polyploidy is provided in our approach by using a local chromosome to reflect dominance in diploid and tetraploid organisms, with and without evolving crossover points, added to provide linkage between chromosomes and the dominance control vector. We compare our polyploid approach to a haploid implementation for a benchmark that involves a 0/1 knapsack problem with time varying weight constraints.

57 citations


Book ChapterDOI
TL;DR: Based on the theoretical analysis, an improved FEP (IFEP) is proposed, which combines the advantages of both Cauchy and Gaussian mutations in EP and performs better than both FEP and CEP for a number of benchmark problems.
Abstract: Evolutionary algorithms (EAs) can be regarded as algorithms based on neighbourhood search, where different search operators (such as crossover and mutation) determine different neighbourhoods and step sizes. This paper analyses the efficiency of various mutations in evolutionary programming (EP) by examining their neighbourhood and step sizes. It shows analytically when and why Cauchy mutation-based fast EP (FEP) [1, 2] is better than Gaussian mutation-based classical EP (CEP). It also studies the relationship between the optimality of the solution and the time used to find the solution. Based on the theoretical analysis, an improved FEP (IFEP) is proposed, which combines the advantages of both Cauchy and Gaussian mutations in EP. Although IFEP is very simple and requires no extra parameters, it performs better than both FEP and CEP for a number of benchmark problems.
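The IFEP idea described in the abstract (each parent produces one Gaussian and one Cauchy offspring, and the fitter one survives) can be sketched as follows; step-size handling is simplified here to a fixed `sigma`, whereas the paper's EP self-adapts it:

```python
import math
import random

def ifep_offspring(parent, fitness, sigma=1.0):
    """Create one Gaussian and one Cauchy offspring from the same parent
    and keep the fitter one (lower fitness value, i.e. minimization)."""
    gauss_child = [xi + random.gauss(0.0, sigma) for xi in parent]
    # Cauchy deviate via the inverse CDF: sigma * tan(pi * (u - 0.5)).
    cauchy_child = [xi + sigma * math.tan(math.pi * (random.random() - 0.5))
                    for xi in parent]
    return min(gauss_child, cauchy_child, key=fitness)
```

This is why IFEP needs no extra parameters: the choice between the two mutations is made per generation by selection itself, not by a tuned probability.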

49 citations


Book ChapterDOI
TL;DR: This paper addresses the general time-route assignment problem: One must find “optimal” route and slot allocation for each aircraft in a way that significantly reduces the peak of workload in the most congested sectors and in themost congested airports, during one day of traffic.
Abstract: This paper addresses the general time-route assignment problem: one considers an air transportation network and a fleet of aircraft with their associated routes and slots of departure. For each flight, a set of alternative routes and a set of possible slots of departure are defined. One must find an “optimal” route and slot allocation for each aircraft in a way that significantly reduces the peak of workload in the most congested sectors and in the most congested airports during one day of traffic.

35 citations


Book ChapterDOI
TL;DR: This work addresses the problem of the performance overheads of evolving a large number of data structures and demonstrates how the evolutionary process can be achieved with much greater efficiency through the use of a formally-based representation and strong typing.
Abstract: Genetic Programming is increasing in popularity as the basis for a wide range of learning algorithms. However, the technique has to date only been successfully applied to modest tasks because of the performance overheads of evolving a large number of data structures, many of which do not correspond to a valid program. We address this problem directly and demonstrate how the evolutionary process can be achieved with much greater efficiency through the use of a formally-based representation and strong typing. We report initial experimental results which demonstrate that our technique exhibits significantly better performance than previous work.

33 citations


Book ChapterDOI
TL;DR: This paper models speculators with genetic programming in a production economy and approximates the consequences of “speculating about the speculations of others”, including the price volatility and the resulting welfare loss.
Abstract: In spirit of the earlier works done by Arthur (1992) and Palmer et al. (1993), this paper models speculators with genetic programming (GP) in a production economy (Muthian Economy). Through genetic programming, we approximate the consequences of “speculating about the speculations of others”, including the price volatility and the resulting welfare loss. Some of the patterns observed in our simulations are consistent with findings in experimental markets with human subjects. For example, we show that GP-based speculators can be noisy by nature. However, when appropriate financial regulations are imposed, GP-based speculators can also be more informative than noisy.

30 citations


Book ChapterDOI
TL;DR: A genetic algorithm augmented with a long-term memory is used to design control strategies for a simulated robot, a mobile vehicle operating in a two-dimensional environment; the algorithm quickly combines the basic behaviors and finds control strategies that perform well in the more complex environment.
Abstract: We use a genetic algorithm augmented with a long term memory to design control strategies for a simulated robot, a mobile vehicle operating in a two-dimensional environment. The simulated robot has five touch sensors, two sound sensors, and two motors that drive locomotive tank tracks. A genetic algorithm trains the robot in several specially-designed simulation environments for evolving basic behaviors such as food approach, obstacle avoidance, and wall following. Control strategies for a more complex environment are then designed by selecting solutions from the stored strategies evolved for basic behaviors, ranking them according to their performance in the new complex environment and introducing them into a genetic algorithm's initial population. This augmented memory-based genetic algorithm quickly combines the basic behaviors and finds control strategies for performing well in the more complex environment.

29 citations


Book ChapterDOI
TL;DR: GPmuse is software which explores one connection between computation and creativity using a symbiosis-inspired genetic programming paradigm in which distinct agents collaborate to produce 16th-century counterpoint.
Abstract: GPmuse is software which explores one connection between computation and creativity using a symbiosis-inspired genetic programming paradigm in which distinct agents collaborate to produce 16th-century counterpoint.

Book ChapterDOI
TL;DR: The Inductive Learning-based control of both crossover and mutation developed in the bitstring framework is transposed to the control of mutation step-size in evolutionary parameter optimization, and the resulting algorithm is experimentally compared to the self-adaptive step-size of Evolution Strategies.
Abstract: The problem of setting the mutation step-size for real-coded evolutionary algorithms has received different answers: exogenous rules like the 1/5 rule, or endogenous factors like the self-adaptation of the step-size in the Gaussian mutation of modern Evolution Strategies. On the other hand, in the bitstring framework, the control of both crossover and mutation by means of Inductive Learning has proven beneficial to evolution, mostly by recognizing — and forbidding — past errors (i.e. crossovers or mutations leading to offspring that will not survive the next selection step). This Inductive Learning-based control is transposed to the control of mutation step-size in evolutionary parameter optimization, and the resulting algorithm is experimentally compared to the self-adaptive step-size of Evolution Strategies.
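The exogenous 1/5 rule the abstract mentions can be stated in a few lines (0.85 is a commonly quoted shrink factor, not a value taken from the paper):

```python
def one_fifth_rule(sigma, success_rate, factor=0.85):
    """Rechenberg's 1/5 success rule: grow the step size when more than
    one fifth of recent mutations improved the parent, shrink it when
    fewer than one fifth did."""
    if success_rate > 0.2:
        return sigma / factor   # successes are frequent: search wider
    if success_rate < 0.2:
        return sigma * factor   # successes are rare: search more locally
    return sigma
```

The success rate is typically measured over a window of the last few generations before each adjustment.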

Book ChapterDOI
TL;DR: The results indicate that the use of self-adaptation can yield statistically significantly improved solutions over using no self-adaptation at all, although this improvement comes at the expense of greater computational effort.
Abstract: Self-adaptation is becoming a standard method for optimizing mutational parameters within evolutionary programming. The majority of these efforts have been applied to continuous optimization problems. This paper offers a preliminary investigation into the use of self-adaptation for discrete optimization using the traveling salesman problem. Two self-adaptive approaches are analyzed. The results indicate that the use of self-adaptation can yield statistically significantly improved solutions over the failure to use any self-adaptation at all. This improvement comes at the expense of greater computational effort.

Book ChapterDOI
TL;DR: This work presents examples of economic models which have been implemented in Swarm and discusses the advantages of the Object-Oriented simulation approach.
Abstract: Swarm is a library of software components developed by the Santa Fe Institute which allow researchers to construct discrete event simulations of complex systems with heterogeneous elements or agents. These libraries provide reusable objects for analyzing, displaying and controlling simulation experiments. Swarm is not based on any assumptions about the system that is being simulated, and is currently being used by a wide variety of researchers in the social and natural sciences. We present examples of economic models which have been implemented in Swarm and discuss the advantages of the Object-Oriented simulation approach.

Book ChapterDOI
TL;DR: This paper demonstrates that a design for a low-distortion high-gain 96 decibel (64,860-to-1) operational amplifier (including both circuit topology and component sizing) can be evolved using genetic programming.
Abstract: This paper demonstrates that a design for a low-distortion high-gain 96 decibel (64,860-to-1) operational amplifier (including both circuit topology and component sizing) can be evolved using genetic programming.

Book ChapterDOI
TL;DR: This study explores the use of both the Gaussian and the Cauchy operators along with a self-adaptive mechanism to select the appropriate operator for each individual in the population.
Abstract: Classical evolutionary programming uses Gaussian mutation as the primary search operator. Recent studies have shown that using a Cauchy random variable as the primary operator leads to faster convergence for certain function optimization problems. In this study we explore the use of both the Gaussian and the Cauchy operators along with a self-adaptive mechanism to select the appropriate operator for each individual in the population. Empirical studies of the dual-operator evolutionary programming are conducted using a limited set of test function optimization problems.

Book ChapterDOI
TL;DR: Fundamental theoretical questions about the utility of genetic algorithms are raised, indicating that genetic algorithms yield worse performance than any other (deterministic) optimization algorithm.
Abstract: Genetic algorithms are believed by some to be very efficient optimization and adaptation tools. So far, the efficacy of genetic algorithms has been described by empirical results, and yet theoretical approaches are far behind. This paper aims at raising fundamental theoretical questions about the utility of genetic algorithms. These questions originate from various existing theories and the no-free-lunch theorem, a theory that compares all possible optimization procedures with respect to an equal distribution of all possible objective functions. While these questions are open at least in part, they all indicate that genetic algorithms yield worse performance than any other (deterministic) optimization algorithm. Consequently, future research should answer the question of whether the real world (or another application domain) imposes a non-equal distribution for which genetic algorithms yield advantageous performance, or whether genetic algorithms should apply operators in a deterministic fashion.

Book ChapterDOI
TL;DR: Two forms of headless chicken crossover are investigated for manipulating parse trees and it is argued that these experiments support the hypothesis that the building block hypothesis is not descriptive of the operation of subtree crossover.
Abstract: In genetic programming, crossover swaps randomly selected subtrees between parents. Recent work in genetic algorithms ([13]) demonstrates that when one of the parents selected for crossover is replaced with a randomly generated parent, the algorithm performs as well or better than crossover for some problems. [13] termed this form of macromutation headless chicken crossover. The following paper investigates two forms of headless chicken crossover for manipulating parse trees and shows that both types of macromutation perform as well or better than standard subtree crossover. It is argued that these experiments support the hypothesis that the building block hypothesis is not descriptive of the operation of subtree crossover.
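Headless chicken crossover is ordinary subtree crossover with one parent replaced by a freshly generated random tree, so any benefit must come from macromutation rather than recombination. A minimal sketch on nested-list parse trees (the representation, function set, and depth limit here are illustrative, not from the paper):

```python
import copy
import random

def random_tree(depth, funcs=('+', '*'), terms=('x', '1')):
    """Grow a random parse tree (nested lists) up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(terms)
    return [random.choice(funcs),
            random_tree(depth - 1, funcs, terms),
            random_tree(depth - 1, funcs, terms)]

def subtrees(tree):
    """Enumerate (holder_list, index) slots for every subtree position."""
    slots = []
    if isinstance(tree, list):
        for i in range(1, len(tree)):
            slots.append((tree, i))
            slots.extend(subtrees(tree[i]))
    return slots

def headless_chicken_crossover(parent, depth=3):
    """Subtree crossover where the second parent is freshly random."""
    child = copy.deepcopy(parent)
    slots = subtrees(child)
    if not slots:                      # parent is a bare terminal
        return random_tree(depth)
    holder, i = random.choice(slots)
    holder[i] = random_tree(depth)     # splice in a random subtree
    return child
```

If this operator matches standard crossover on a problem, the result suggests that swapping subtrees between fit parents is not contributing building blocks there.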

Book ChapterDOI
TL;DR: This paper presents a description of an evolutionary artificial neural network algorithm, EPNet and its extension taking advantage of a High Performance Computing Environment.
Abstract: This paper presents a description of an evolutionary artificial neural network algorithm, EPNet, and its extension taking advantage of a High Performance Computing environment. PEPNet, Parallel EPNet, implements four forms of parallelism, two of which are described in this paper. Experimental studies have shown promising results with better time and prediction performance.

Book ChapterDOI
TL;DR: The Cultural Algorithms with Evolutionary Programming were effective in learning all of these plays within several hundred generations each, and defensive plays with active protagonists were easier to learn than those with passive protagonists.
Abstract: Cultural Algorithms have previously been used as a framework in which to evolve cooperative behavior within groups. Here they provide a framework within which to develop multiagent cooperation among a group of soccer players. The current system is used to learn several types of plays: offensive and defensive. In addition, plays can be learned without opposition, with passive opposition, or with active opposition. The Cultural Algorithms with Evolutionary Programming were effective in learning all of these plays within several hundred generations each. In general, defensive plays were harder to learn than offensive ones. However, defensive plays with active protagonists were easier to learn than those with passive protagonists. This may be due to the fact that active protagonists provide additional information for the team members to use in formulating their plays. In addition, successful learning involved a coordination of individual adjustments among participating agents. A description of these adjustments in terms of the belief space for these agents is given.

Book ChapterDOI
TL;DR: An analysis of the dynamic behavior of Evolution Strategies applied to Traveling Salesman Problems is presented and a stochastic model of the optimization process is introduced.
Abstract: An analysis of the dynamic behavior of Evolution Strategies applied to Traveling Salesman Problems is presented. For a special class of Traveling Salesman Problems a stochastic model of the optimization process is introduced. Based on this model different features determining the optimization process of Evolution Strategies are analyzed.

Book ChapterDOI
Hyun Myung1, Jong-Hwan Kim1
TL;DR: Computer simulation results indicate that Evolian outperforms, or at least matches, other evolutionary computation-based methods on multivariable, heavily constrained function optimization problems.
Abstract: In this paper, an evolutionary optimization method, Evolian, is proposed for the general constrained optimization problem. It incorporates (1) a multi-phase optimization process and (2) constraint scaling techniques to resolve the problem of ill-conditioning. In each phase of Evolian, typical evolutionary programming (EP) is performed using an augmented Lagrangian objective function with a fixed penalty parameter. If there is no improvement in the best objective value during one phase, another phase of Evolian is performed after scaling the constraints and then updating the Lagrange multipliers and the penalty parameter. This procedure is repeated until a satisfactory solution is obtained. Computer simulation results indicate that Evolian outperforms, or at least performs comparably to, other evolutionary computation-based methods on multivariable, heavily constrained function optimization problems.
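A textbook augmented Lagrangian for equality constraints h_i(x) = 0, of the general kind the abstract refers to (Evolian's exact formulation, scaling, and update schedule are not given here), can be written as:

```python
def augmented_lagrangian(f, h_list, lambdas, mu):
    """Build the penalized objective f(x) + sum_i lam_i*h_i(x)
    + (mu/2)*sum_i h_i(x)^2, which EP then minimizes with mu fixed
    within each phase."""
    def phi(x):
        total = f(x)
        for h, lam in zip(h_list, lambdas):
            hx = h(x)
            total += lam * hx + 0.5 * mu * hx * hx
        return total
    return phi
```

Between phases, the multipliers are typically updated as lam_i ← lam_i + mu*h_i(x_best), and the penalty parameter may be increased, so constraint violations are punished progressively harder.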

Book ChapterDOI
TL;DR: Experimental results show that ‘gradient friendly’ error surfaces, corresponding to favorable realizations when using gradient-based techniques, are not necessarily ‘EP friendly’, and vice versa.
Abstract: Evolutionary programming (EP) has been used for the adaptation (optimization) of IIR filters. In a previous study [1], the rate of optimization using EP was shown to be dependent on the structure of the filter used during realization. Furthermore, this dependency changes with the filter order. In this paper, the reasons for such a dependence are investigated. Gradient-based algorithms are also affected by the filter realization, which determines the nature of the mean squared error surface. EP is robust to the presence of local minima and while ensuring the stability of the generated solution offers provable global convergence in the limit. The error surfaces, as seen by EP, while modeling these IIR filters in various realizations, namely, direct, cascade, parallel, and lattice form are analyzed. Experimental results show that ‘gradient friendly’ error surfaces, corresponding to favorable realizations when using gradient based techniques, are not necessarily ‘EP friendly’ and vice versa.

Book ChapterDOI
TL;DR: A mutation-rate strategy that is variable between individuals within a given generation based on the individual's relative performance for the purpose of function optimization is proposed.
Abstract: In Neo-Darwinism, mutation can be considered to be unaffected by selection pressure. This is the metaphor generally used by the genetic algorithm for its treatment of the mutation operation, which is usually regarded as a background operator. This metaphor, however, does not take into account the fact that mutation has been shown to be affected by external events. In this paper, we propose a mutation-rate strategy that is variable between individuals within a given generation based on the individual's relative performance for the purpose of function optimization.

Book ChapterDOI
TL;DR: Evolutionary programming can be used to optimize the behavior of simulated forces which learn tactical courses of action adaptively, and can operate at any specified level of intelligence.
Abstract: Attempts to optimize simulated behaviors have typically relied on heuristics. A static set of if-then-else rules is derived and applied to the problem at hand. This approach, while mimicking the previously discovered decisions of humans, does not allow for true, dynamic learning. In contrast, evolutionary programming can be used to optimize the behavior of simulated forces which learn tactical courses of action adaptively. Actions of Computer-Generated Forces are created on-the-fly by iterative evolution through the state space topography. Tactical plans, in the form of a temporally linked set of task frames, are evolved independently for each entity in the simulation. Prospective courses of action at each time step in the scenario are scored with respect to the assigned mission (expressed as a Valuated State Space and normalizing function). Evolutionary updates of the plans incorporate dynamic changes in the developing situation and the sensed environment. This method can operate at any specified level of intelligence.

Book ChapterDOI
TL;DR: A “softened” cellular automaton model is described that illustrates how different grouping responses can be evolved in cases simple enough to examine the entire test set.
Abstract: The capabilities built into a processing network control the manner in which it generalizes from a training set and therefore how it groups environmental patterns. These capabilities are developed through learning, ultimately evolutionary learning, and therefore have an objective basis in so far as the grouping tendencies afford a selective advantage. But for this development to occur it is necessary that the processing network in fact be able to evolve grouping tendencies that reflect selective pressures. The extent to which this is possible depends on how wide a variety of grouping dynamics the processing network can support (its dynamic richness) and on whether its structure-function gradualism (evolutionary friendliness) is sufficient to provide access to these grouping responses through a variation-selection process. We describe a “softened” cellular automaton model that illustrates how different grouping responses can be evolved in cases simple enough to examine the entire test set.

Book ChapterDOI
TL;DR: These networks are described in terms of multiple criteria spanning trees computed by Cultural Algorithms with an evolutionary programming shell to explain changes in site location decision-making over time in the Valley of Oaxaca.
Abstract: In this paper Cultural Algorithms are used to generate networks of sites within a historical database of sites for the Valley of Oaxaca. These networks are described in terms of multiple criteria spanning trees computed by Cultural Algorithms with an evolutionary programming shell. The results are used to explain changes in site location decision-making over time in the valley.

Book ChapterDOI
TL;DR: The results demonstrate that the evolution strategy dramatically accelerates the development process, which is of great practical relevance, since the fitness evaluation of each controller takes approximately one minute on a physical robot.
Abstract: This paper presents the application of the evolution strategy to the evolution of different controllers for autonomous agents. Autonomous agents are embodied systems that behave in the real world without any human control. Most of the pertinent research has employed genetic algorithms. Epistatic interaction between the parameters of the fitness function is a well-known problem, since it drastically slows down genetic algorithms. The evolution strategy, however, is rotationally invariant, because it applies Gaussian mutations with a probability pm=1 to all parameters per offspring. This paper investigates the scaling behavior of the evolution strategy when evolving different neuronal control architectures for autonomous agents. The results demonstrate that the evolution strategy dramatically accelerates the development process, which is of great practical relevance, since the fitness evaluation of each controller takes approximately one minute on a physical robot.

Book ChapterDOI
TL;DR: An evolution of the Hopfield model of associative memory is presented using evolutionary programming as a real-valued parameter optimization, and it is shown that a network with random synaptic weights eventually evolves to store some number of patterns as fixed points.
Abstract: We apply evolutionary computation to the Hopfield model of associative memory. Although there has been much research applying evolutionary techniques to layered neural networks, applications to Hopfield neural networks remain few so far. Previously we reported that a genetic algorithm using discrete encoding chromosomes evolves the Hebb-rule associative memory to enhance its storage capacity. We also reported that the genetic algorithm evolves a network with random synaptic weights eventually to store some number of patterns as fixed points. In this paper we present an evolution of the Hopfield model of associative memory using evolutionary programming as a real-valued parameter optimization.

Book ChapterDOI
TL;DR: In this paper, an agent-based computational model for studying the formation and evolution of trade networks in decentralized market economies is presented, populated by heterogeneous endogenously interacting traders with internalized data and modes of behavior.
Abstract: This paper describes an agent-based computational model for studying the formation and evolution of trade networks in decentralized market economies. A virtual economic world is constructed, populated by heterogeneous endogenously interacting traders with internalized data and modes of behavior. This virtual world can be used to study the formation and evolution of trade networks under alternatively specified market structures at three different levels of analysis: individual trade behavior; trade interaction patterns; and social welfare.