
Showing papers in "Evolutionary Computation in 2001"


Journal ArticleDOI
TL;DR: This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation - and reveals local and global search properties of the evolution strategy with and without covariance matrix adaptation.
Abstract: This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions, a speed-up factor of three to ten can be expected.

3,752 citations
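The two ideas the abstract names - derandomized adaptation of the full covariance matrix and cumulation via an evolution path - can be illustrated with a stripped-down rank-one update. The sketch below is a minimal illustration, not the paper's complete CMA-ES: the function name and the learning rates cc and c1 are assumptions chosen for readability, and normalization terms that depend on the parent number are omitted.

```python
import numpy as np

def rank_one_cma_step(mean, C, p_c, selected_steps, sigma, cc=0.1, c1=0.05):
    """One illustrative rank-one covariance update driven by an
    evolution path (cumulation). selected_steps holds the mutation
    steps (already divided by sigma) of the selected offspring."""
    mean_step = selected_steps.mean(axis=0)

    # Cumulation: the evolution path is an exponentially fading record of
    # successive mean steps, so consistently chosen directions add up.
    p_c = (1.0 - cc) * p_c + np.sqrt(cc * (2.0 - cc)) * mean_step

    # Rank-one update: increase variance along the evolution path,
    # i.e., favor previously selected mutation directions.
    C = (1.0 - c1) * C + c1 * np.outer(p_c, p_c)

    # Move the distribution mean along the selected steps.
    mean = mean + sigma * mean_step
    return mean, C, p_c

# New candidates would then be sampled as
#   x = mean + sigma * np.random.multivariate_normal(np.zeros(len(mean)), C)
```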


Journal ArticleDOI
TL;DR: The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.
Abstract: Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.

400 citations
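The SBX operator referred to above has a standard closed form; the sketch below shows it for a single real-valued gene, assuming an unbounded variable and a distribution index eta (the function name and default value are illustrative, not taken from the paper).

```python
import random

def sbx_pair(x1, x2, eta=2.0):
    """Simulated binary crossover (SBX) for one real variable.
    Larger eta keeps the children closer to the parents."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2
```

The self-adaptive flavor discussed in the abstract comes from the spread of the children scaling with the distance between the parents: widely separated parents produce widely spread children.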


Journal ArticleDOI
TL;DR: The results show that evolutionary adaptive controllers solve the task much faster and better than evolutionary standard fixed-weight controllers, that the method scales up well to large architectures, and that the adaptive controllers can adapt to environmental changes that involve new sensory characteristics and new spatial relationships.
Abstract: This paper is concerned with adaptation capabilities of evolved neural controllers. We propose to evolve mechanisms for parameter self-organization instead of evolving the parameters themselves. The method consists of encoding a set of local adaptation rules that synapses follow while the robot freely moves in the environment. In the experiments presented here, the performance of the robot is measured in environments that are different in significant ways from those used during evolution. The results show that evolutionary adaptive controllers solve the task much faster and better than evolutionary standard fixed-weight controllers, that the method scales up well to large architectures, and that evolutionary adaptive controllers can adapt to environmental changes that involve new sensory characteristics (including transfer from simulation to reality and across different robotic platforms) and new spatial relationships.

109 citations
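The abstract describes synapses that follow genetically encoded local adaptation rules while the robot behaves. As a rough illustration only, the sketch below shows a single self-limiting Hebbian update of one synapse; the paper actually encodes a choice among several rule types plus a learning rate per synapse, none of which are reproduced here, and the function name and constants are assumptions.

```python
def hebbian_step(w, pre, post, eta=0.3, w_max=1.0):
    """One local adaptation step for a single synapse: a plain
    Hebbian rule with self-limiting growth so the weight stays
    bounded. pre/post are the pre- and postsynaptic activations."""
    dw = eta * pre * post * (w_max - abs(w))
    w = w + dw
    return max(-w_max, min(w_max, w))
```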


Journal ArticleDOI
TL;DR: The self-adaptive characteristics such as translation, enlargement, focusing, and directing of the distribution of children generated by the ES and the RCGA are examined through experiments.
Abstract: This paper discusses the self-adaptive mechanisms of evolution strategies (ES) and real-coded genetic algorithms (RCGA) for optimization in continuous search spaces. For multi-membered evolution strategies, a self-adaptive mechanism of mutation parameters has been proposed by Schwefel. It introduces parameters such as standard deviations of the normal distribution for mutation into the genetic code and lets them evolve by selection along with the decision variables. In the RCGA, crossover or recombination is used mainly for search. It utilizes information on several individuals to generate novel search points, and therefore, it can generate offspring adaptively according to the distribution of parents without any adaptive parameters. The present paper discusses characteristics of these two self-adaptive mechanisms through numerical experiments. The self-adaptive characteristics such as translation, enlargement, focusing, and directing of the distribution of children generated by the ES and the RCGA are examined through experiments.

97 citations
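Schwefel's self-adaptive mutation mentioned in the abstract mutates the strategy parameters (step sizes) log-normally and then uses them to perturb the decision variables. The sketch below is a minimal version with the commonly recommended learning rates; the function name and defaults are illustrative, not the paper's exact setup.

```python
import math
import random

def self_adaptive_mutation(x, sigma):
    """Mutate one ES individual: log-normal update of the per-variable
    step sizes, then Gaussian perturbation of the decision variables."""
    n = len(x)
    tau_global = 1.0 / math.sqrt(2.0 * n)             # shared factor
    tau_local = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # per-variable factor
    shared = tau_global * random.gauss(0.0, 1.0)
    new_sigma = [s * math.exp(shared + tau_local * random.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = [xi + si * random.gauss(0.0, 1.0)
             for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma
```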


Journal ArticleDOI
TL;DR: A new approach called the Constructive Genetic Algorithm (CGA), which allows for schemata evaluation and the provision of other new features to the GA, is introduced and applied to two clustering problems in graphs.
Abstract: Genetic algorithms (GAs) have recently been accepted as powerful approaches to solving optimization problems. It is also well-accepted that building block construction (schemata formation and conservation) has a positive influence on GA behavior. Schemata are usually indirectly evaluated through a derived structure. We introduce a new approach called the Constructive Genetic Algorithm (CGA), which allows for schemata evaluation and the provision of other new features to the GA. Problems are modeled as bi-objective optimization problems that consider the evaluation of two fitness functions. This double fitness process, called fg-fitness, evaluates schemata and structures on a common basis. Evolution is conducted considering an adaptive rejection threshold that contemplates both objectives and attributes a rank to each individual in the population. The population is dynamic in size and composed of schemata and structures. Recombination preserves good schemata, and mutation is applied to structures to get population diversification. The CGA is applied to two clustering problems in graphs. The representation of schemata and structures uses a binary digit alphabet and is based on assignment (greedy) heuristics that provide a clearly distinguished representation for the problems. The clustering problems studied are the classical p-median and the capacitated p-median. Good results are shown for problem instances taken from the literature.

95 citations
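For readers unfamiliar with the clustering problems mentioned, the classical p-median objective assigns every client to its nearest chosen median and sums those distances (the capacitated variant additionally limits how much demand a median may serve). A minimal evaluation of the uncapacitated objective, with a made-up instance purely for illustration:

```python
def p_median_cost(dist, medians):
    """dist[i][j]: distance from client i to candidate site j.
    medians: indices of the p chosen sites."""
    return sum(min(row[j] for j in medians) for row in dist)

# Tiny illustrative instance: 4 clients, 3 candidate sites, p = 2.
dist = [
    [1.0, 4.0, 6.0],
    [2.0, 3.0, 5.0],
    [5.0, 1.0, 2.0],
    [6.0, 2.0, 1.0],
]
print(p_median_cost(dist, medians={0, 2}))  # -> 1.0 + 2.0 + 2.0 + 1.0 = 6.0
```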


Journal ArticleDOI
TL;DR: The adaptation of evolutionary algorithms (EAs) to the structural optimization of chemical engineering plants is described, using rigorous process simulation combined with realistic costing procedures to calculate target function values.
Abstract: This paper describes the adaptation of evolutionary algorithms (EAs) to the structural optimization of chemical engineering plants, using rigorous process simulation combined with realistic costing procedures to calculate target function values. To represent chemical engineering plants, a network representation with typed vertices and variable structure will be introduced. For this representation, we introduce a technique for creating problem-specific search operators and applying them in stochastic optimization procedures. The applicability of the approach is demonstrated by a reference example. The design of the algorithms is oriented toward the systematic framework of metric-based evolutionary algorithms (MBEAs). MBEAs are a special class of evolutionary algorithms, fulfilling certain guidelines for the design of search operators, whose benefits have been proven in theory and practice. MBEAs rely upon a suitable definition of a metric on the search space. The definition of a metric for the graph representation will be one of the main issues discussed in this paper. Although this article deals with the problem domain of chemical plant optimization, the algorithmic design can be easily transferred to similar network optimization problems. A useful distance measure for variable dimensionality search spaces is suggested.

58 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of reliably setting genetic algorithm parameters for consistent labelling problems by proposing a robust empirical framework based on the analysis of factorial experiments; the refined models derived are shown to be robust under extrapolation to up to triple the problem size.
Abstract: This paper addresses the problem of reliably setting genetic algorithm parameters for consistent labelling problems. Genetic algorithm parameters are notoriously difficult to determine. This paper proposes a robust empirical framework, based on the analysis of factorial experiments. The use of a Graeco-Latin square permits an initial study of a wide range of parameter settings. This is followed by fully crossed factorial experiments with narrower ranges, which allow detailed analysis by logistic regression. The empirical models derived can be used to determine optimal algorithm parameters and to shed light on interactions between the parameters and their relative importance. Refined models are produced, which are shown to be robust under extrapolation to up to triple the problem size.

50 citations
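The empirical framework the abstract describes - a fully crossed factorial design over GA parameter levels followed by logistic regression on success/failure outcomes - can be sketched as follows. Everything here (parameter names, levels, and the placeholder run_ga) is hypothetical; the point is only the shape of the analysis, not the paper's actual design.

```python
import random
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical parameter levels for a fully crossed factorial design.
pop_sizes = [50, 100, 200]
crossover_rates = [0.6, 0.8, 1.0]
mutation_rates = [0.001, 0.01, 0.1]
design = list(product(pop_sizes, crossover_rates, mutation_rates))

def run_ga(pop, pc, pm):
    """Placeholder: run the GA on a labelling instance and report
    1 (solved within budget) or 0 (not solved). Replace with the real GA."""
    return random.randint(0, 1)

X = np.array(design, dtype=float)
y = np.array([run_ga(*cell) for cell in design])

# Logistic regression models the probability of success as a function of the
# parameter settings; the coefficients hint at each parameter's importance.
model = LogisticRegression().fit(X, y)
print(dict(zip(["pop", "pc", "pm"], model.coef_[0])))
```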


Journal ArticleDOI
TL;DR: This paper demonstrates polynomial-time computability of a distributed representation of the evolutionary fitness function in gene expression by proposing a class of efficient algorithms, and presents experimental results supporting the theoretical performance of the proposed algorithms.
Abstract: The gene expression process in nature produces different proteins in different cells from different portions of the DNA. Since proteins control almost every important activity in a living organism, at an abstract level, gene expression can be viewed as a process that evaluates the merit or "fitness" of the DNA. This distributed evaluation of the DNA would not be possible without a decomposed representation of the fitness function defined over the DNAs. This paper argues that, unless the living body was provided with such a representation, we have every reason to believe that it must have an efficient mechanism to construct this distributed representation. This paper demonstrates polynomial-time computability of such a representation by proposing a class of efficient algorithms. The main contribution of this paper is two-fold. On the algorithmic side, it offers a way to scale up evolutionary search by detecting the underlying structure of the search space. On the biological side, it proves that the distributed representation of the evolutionary fitness function in gene expression can be computed in polynomial-time. It advances our understanding about the representation construction in gene expression from the perspective of computing. It also presents experimental results supporting the theoretical performance of the proposed algorithms.

46 citations


Journal ArticleDOI
TL;DR: It is shown that EPSAs can be cast as stochastic pattern search methods, and this observation is used to prove that EPSAs have a probabilistic, weak stationary point convergence theory.
Abstract: We present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. We show that EPSAs can be cast as stochastic pattern search methods, and we use this observation to prove that EPSAs have a probabilistic, weak stationary point convergence theory. This convergence theory is distinguished by the fact that the analysis does not approximate the stochastic process of EPSAs, and hence it exactly characterizes their convergence properties.

44 citations


Journal ArticleDOI
TL;DR: A modified version of the 1/5-success rule for self-adaptation in evolution strategies (ES) is proposed, and preliminary tests indicate an ES with the modified self-adaptation method compares favorably to both a non-adapted ES and a 1/5-success rule adapted ES.
Abstract: Evolutionary programs are capable of finding good solutions to difficult optimization problems. Previous analysis of their convergence properties has normally assumed the strategy parameters are kept constant, although in practice these parameters are dynamically altered. In this paper, we propose a modified version of the 1/5-success rule for self-adaptation in evolution strategies (ES). Formal proofs of the long-term behavior produced by our self-adaptation method are included. Both elitist and non-elitist ES variants are analyzed. Preliminary tests indicate an ES with our modified self-adaptation method compares favorably to both a non-adapted ES and a 1/5-success rule adapted ES.

35 citations
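For reference, the textbook 1/5-success rule that the paper modifies adjusts the mutation step size from the observed success rate. The sketch below shows only that baseline rule (the constant 0.85 is a commonly cited choice; the paper's modified variant is not reproduced here).

```python
def one_fifth_rule(sigma, successes, trials, c=0.85):
    """Classic 1/5-success rule: expand the step size if more than one
    fifth of recent mutations improved the parent, contract it otherwise."""
    rate = successes / trials
    if rate > 0.2:
        return sigma / c   # too many successes: search more broadly
    if rate < 0.2:
        return sigma * c   # too few successes: search more locally
    return sigma
```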


Journal ArticleDOI
TL;DR: In this article, a hybrid approach incorporating a GA is presented, where the role of the GA is to derive a small selection of good shifts to seed a greedy schedule construction heuristic.
Abstract: Public transport driver scheduling problems are well known to be NP-hard. Although some mathematically based methods are being used in the transport industry, there is room for improvement. A hybrid approach incorporating a genetic algorithm (GA) is presented. The role of the GA is to derive a small selection of good shifts to seed a greedy schedule construction heuristic. A group of shifts called a relief chain is identified and recorded. The relief chain is then inherited by the offspring and used by the GA for schedule construction. The new approach has been tested using real-life data sets, some of which represent very large problem instances. The results are generally better than those compiled by experienced schedulers and are comparable to solutions found by integer linear programming (ILP). In some cases, solutions were obtained when the ILP failed within practical computational limits.

Journal ArticleDOI
TL;DR: It is shown that the problem of learning the Ising perceptron is reducible to a noisy version of ASP, and an algorithm the authors call Explicitly Parallel Search, which achieves the implicit parallelism that GAs fail to attain, is described.
Abstract: We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

Journal ArticleDOI
TL;DR: This paper formalizes recombinations with the probability density functions of stochastic variables represented as the parameters and describes the change of the probability density functions of chromosomes before and after recombination.
Abstract: This paper concerns recombinations which produce offspring from two parents. We assume an infinite population and regard recombinations as transformations of stochastic variables represented as chromosomes. We then formalize recombinations with the probability density functions of stochastic variables represented as the parameters and describe the change of the probability density functions of chromosomes before and after recombination. Our formalization includes various proposed recombinations, such as multi-point, uniform, and linear crossover, as well as BLX-α. We also derive certain properties of the operators, such as diversification and decorrelation.
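Of the recombinations covered by the formalization, BLX-α is the easiest to state explicitly: the child gene is drawn uniformly from the parents' interval extended by a fraction α on each side. A minimal sketch (the function name and default α are illustrative):

```python
import random

def blx_alpha(x1, x2, alpha=0.5):
    """BLX-alpha for one real gene: sample uniformly from the parents'
    interval widened by alpha times its length on each side."""
    lo, hi = min(x1, x2), max(x1, x2)
    d = hi - lo
    return random.uniform(lo - alpha * d, hi + alpha * d)
```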

Journal ArticleDOI
TL;DR: Under the new paradigm, the convergence of several mutation-adaptive algorithms is analyzed: a binary genetic algorithm, the 1/5 success rule evolution strategy, and a continuous and a dynamic (1+1) evolutionary algorithm.
Abstract: Adaptive evolutionary algorithms require a more sophisticated modeling than their static-parameter counterparts. Taking into account the current population is not enough when implementing parameter-adaptation rules based on success rates (evolution strategies) or on premature convergence (genetic algorithms). Instead of Markov chains, we use random systems with complete connections – accounting for a complete, rather than recent, history of the algorithm's evolution. Under the new paradigm, we analyze the convergence of several mutation-adaptive algorithms: a binary genetic algorithm, the 1/5 success rule evolution strategy, and a continuous and a dynamic (1+1) evolutionary algorithm.

Journal Article
TL;DR: It is shown that high market efficiency is generally attained and that market microstructure is strongly predictive for the relative market power of buyers and sellers, independently of the values set for the reinforcement learning parameters.
Abstract: This study reports experimental market power and efficiency outcomes for a computational wholesale electricity market operating in the short run under systematically varied concentration and capacity conditions. The pricing of electricity is determined by means of a clearinghouse double auction with discriminatory midpoint pricing. Buyers and sellers use a modified Roth-Erev individual reinforcement learning algorithm (1995) to determine their price and quantity offers in each auction round. It is shown that high market efficiency is generally attained and that market microstructure is strongly predictive for the relative market power of buyers and sellers, independently of the values set for the reinforcement learning parameters. Results are briefly compared against results from an earlier study in which buyers and sellers instead engage in social mimicry learning via genetic algorithms.
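The Roth-Erev learning the traders use keeps a propensity for each possible offer, reinforces the offer just played with the reward received, spreads a small "experimentation" share onto the other offers, and lets old propensities decay. The sketch below is the basic 1995 rule, not the modified variant used in the study; parameter names and defaults are assumptions.

```python
def roth_erev_update(propensities, chosen, reward,
                     recency=0.1, experimentation=0.2):
    """Basic Roth-Erev propensity update for one auction round."""
    n = len(propensities)
    updated = []
    for j, q in enumerate(propensities):
        if j == chosen:
            gain = reward * (1.0 - experimentation)
        else:
            gain = reward * experimentation / (n - 1)
        updated.append((1.0 - recency) * q + gain)
    return updated

def choice_probabilities(propensities):
    """Offers are chosen with probability proportional to propensity."""
    total = sum(propensities)
    return [q / total for q in propensities]
```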

Journal ArticleDOI
TL;DR: A form invariance is established for the more general model of Vose et al. and the attendant machinery is used as a springboard for an interpretation and discussion of implicit parallelism.
Abstract: Holland's schema theorem (an inequality) may be viewed as an attempt to understand genetic search in terms of a coarse graining of the state space. Stephens and Waelbroeck developed that perspective, sharpening the schema theorem to an equality. Of particular interest is a "form invariance" of their equations; the form is unchanged by the degree of coarse graining. This paper establishes a similar form invariance for the more general model of Vose et al. and uses the attendant machinery as a springboard for an interpretation and discussion of implicit parallelism.

Journal ArticleDOI
TL;DR: Empirical results show that the proposed approach (EVOCK) is able to evolve heuristics that improve PRODIGY4.0 performance in two planning domains (the blocks world and the logistics domain).
Abstract: Declarative problem solving, such as planning, poses interesting challenges for Genetic Programming (GP). There have been recent attempts to apply GP to planning that fit two approaches: (a) using GP to search in plan space or (b) to evolve a planner. In this article, we propose to evolve only the heuristics to make a particular planner more efficient. This approach is more feasible than (b) because it does not have to build a planner from scratch but can take advantage of already existing planning systems. It is also more efficient than (a) because once the heuristics have been evolved, they can be used to solve a whole class of different planning problems in a planning domain, instead of running GP for every new planning problem. Empirical results show that our approach (EVOCK) is able to evolve heuristics in two planning domains (the blocks world and the logistics domain) that improve PRODIGY4.0 performance. Additionally, we experiment with a new genetic operator - Instance-Based Crossover - that is able to use traces of the base planner as raw genetic material to be injected into the evolving population.

Journal ArticleDOI
TL;DR: An extension to the standard genetic algorithm, which is based on concepts of genetic engineering, provides some computational advantages as well as a tool for automatic generation of hierarchical genetic representations specifically tailored to suit certain classes of problems.
Abstract: We present an extension to the standard genetic algorithm (GA), which is based on concepts of genetic engineering. The motivation is to discover useful and harmful genetic materials and then execute an evolutionary process in such a way that the population becomes increasingly composed of useful genetic material and increasingly free of the harmful genetic material. Compared to the standard GA, it provides some computational advantages as well as a tool for automatic generation of hierarchical genetic representations specifically tailored to suit certain classes of problems.

Journal ArticleDOI
TL;DR: Today, it is widely accepted in the evolutionary computation community that the principle of self-adaptation of strategy parameters, as proposed by Schwefel (1992), is one of the most sophisticated methods to tackle the problem of adjusting the control parameters of an evolutionary algorithm during the course of the optimization process.
Abstract: Today, it is widely accepted in the evolutionary computation community that the principle of self-adaptation of strategy parameters, as proposed by Schwefel (1992), is one of the most sophisticated methods to tackle the problem of adjusting the control parameters (e.g., mutation rates or mutation step sizes) of an evolutionary algorithm during the course of the optimization process. Essentially, the distinguishing feature of self-adaptive parameter control mechanisms is that the control parameters (also called strategy parameters) are evolved by the evolutionary algorithm, rather than exogenously defined or modified according to some fixed schedule. Following classifications offered by Angeline (1995) and Hinterding et al. (1997), the existing approaches for strategy parameter control (as opposed to static parameter settings, i.e., using no control at all) in evolutionary algorithms can be classified as follows:

Journal ArticleDOI
TL;DR: This work employs the χ2 measure as a tool for the analysis of the stochastic properties of the sampling, and introduces a new sampling algorithm with adjustable accuracy, employing two-level test designs to further reveal the intrinsic correlation structures of well-known sampling algorithms.
Abstract: Viewing the selection process in a genetic algorithm as a two-step procedure consisting of the assignment of selection probabilities and the sampling according to this distribution, we employ the χ2 measure as a tool for the analysis of the stochastic properties of the sampling. We are thereby able to compare different selection schemes even in the case that their probability distributions coincide. Introducing a new sampling algorithm with adjustable accuracy and employing two-level test designs enables us to further reveal the intrinsic correlation structures of well-known sampling algorithms. Our methods apply well to integral methods like tournament selection and can be automated.
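The χ2 measure in question compares how often each individual is actually drawn by a sampling scheme with the expected count implied by its selection probability. The sketch below illustrates the idea for simple roulette-wheel sampling; it is not the paper's two-level test design, and all names are illustrative.

```python
import random

def roulette_counts(probs, n):
    """Draw n parents with replacement according to probs and count
    how often each index is selected."""
    counts = [0] * len(probs)
    for _ in range(n):
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                counts[i] += 1
                break
    return counts

def chi_square(counts, probs):
    """Chi-square statistic between observed counts and expected n*p_i;
    smaller values indicate lower sampling variance."""
    n = sum(counts)
    return sum((c - n * p) ** 2 / (n * p)
               for c, p in zip(counts, probs) if p > 0)

probs = [0.4, 0.3, 0.2, 0.1]
print(chi_square(roulette_counts(probs, 1000), probs))
```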

Journal ArticleDOI
TL;DR: An abstract normed vector space where the genetic operators are elements is used to define the disturbance of the generational operator G as the distance between the crossover and mutation operator (combined) and the identity.
Abstract: We define an abstract normed vector space where the genetic operators are elements. This is used to define the disturbance of the generational operator G as the distance between the crossover and mutation operator (combined) and the identity. This quantity appears in a bound on the variance of fixed-point populations, and in a bound on the force ||v - G(v)|| that applies to the optimal population v. When analyzed for the case of fixed-length binary strings, a connection is shown between these measures and the size of the search space. Guides for parameter settings are given, if population convergence is required as the string length tends to infinity.

Journal ArticleDOI
TL;DR: A robust evolutionary approach, called the Family Competition Evolutionary Algorithm (FCEA), is described for the synthesis of optical thin-film designs by integrating decreasing mutations and self-adaptive mutations.
Abstract: A robust evolutionary approach, called the Family Competition Evolutionary Algorithm (FCEA), is described for the synthesis of optical thin-film designs. Based on family competition and adaptive rules, the proposed approach consists of global and local strategies by integrating decreasing mutations and self-adaptive mutations. The method is applied to three different optical coating designs with complex spectral quantities. Numerical results indicate that the proposed approach performs very robustly and is very competitive with other approaches.

Journal ArticleDOI
TL;DR: This paper proposes a new genetic algorithm-based approach that can find a good next move by reserving the board evaluation values of new offspring in a partial game-search tree, and shows that solution accuracy and search speed are greatly improved by this algorithm.
Abstract: In this paper, we consider the problem of finding good next moves in two-player games. Traditional search algorithms, such as minimax and α-β pruning, suffer great temporal and spatial expansion when exploring deeply into search trees to find better next moves. The evolution of genetic algorithms with the ability to find global or near global optima in limited time seems promising, but they are inept at finding compound optima, such as the minimax in a game-search tree. We thus propose a new genetic algorithm-based approach that can find a good next move by reserving the board evaluation values of new offspring in a partial game-search tree. Experiments show that solution accuracy and search speed are greatly improved by our algorithm.

Journal ArticleDOI
TL;DR: This work investigates classifier systems' reward schemes by way of an example that highlights the interaction of local reward schemes and recombination, and contrasts averaging schemes with maximizing schemes.
Abstract: We investigate classifier systems' reward schemes by way of an example that highlights the interaction of local reward schemes and recombination. We contrast averaging schemes and maximizing schemes. Our example illustrates a sense in which certain recombination operators mesh more gracefully with averaging schemes than with maximizing schemes.