
Showing papers in "Evolutionary Computation in 2007"


Journal ArticleDOI
TL;DR: A single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule is introduced, and a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained, subject to multi-objective selection.
Abstract: The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMA-ES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.
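The success-rule based step size control mentioned above can be illustrated with a minimal (1+1)-ES sketch. All names and constants below are assumptions for illustration; the elitist CMA-ES described in the paper additionally adapts a full covariance matrix, which is omitted here.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, c=0.85):
    """Minimal (1+1)-ES with plus-selection and a success-rule step size.
    The step size grows after a successful offspring and shrinks after a
    failure, chosen so that it is stationary at a 1/5 success rate.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * np.random.randn(len(x))   # isotropic Gaussian offspring
        fy = f(y)
        if fy <= fx:                               # plus-selection: keep the better of parent/offspring
            x, fx = y, fy
            sigma /= c                             # success: enlarge the step size
        else:
            sigma *= c ** 0.25                     # failure: shrink the step size
    return x, fx

# Example: minimize the 10-dimensional sphere function.
best_x, best_f = one_plus_one_es(lambda z: float(np.dot(z, z)), np.ones(10))
```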

767 citations


Journal ArticleDOI
TL;DR: An extended algorithm, GNP with Reinforcement Learning (GNPRL), is proposed which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments.
Abstract: This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes which execute simple judgment/processing, and these nodes are connected by directed links to each other. 2) The graph structure enables GNP to re-use nodes, thus the structure can be very compact. 3) The node transition of GNP is executed according to its node connections without any terminal nodes, thus the past history of the node transition affects the current node to be used and this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, “GNP with Reinforcement Learning (GNPRL)” which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior to evaluate its effectiveness. Tileworld was used as the simulation environment. The results show some advantages for GNP over conventional methods.
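The node-transition mechanism described above (judgment nodes that branch on the environment, processing nodes that act, and no terminal nodes) can be sketched roughly as follows; the node names, environment interface, and step budget are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of GNP-style execution on a tiny hand-written graph.
judgment_nodes = {
    # A judgment node inspects the environment and chooses the next node.
    "see_obstacle": lambda env: "turn" if env["obstacle_ahead"] else "move",
}

processing_nodes = {
    # A processing node performs an action and hands control to a fixed successor.
    "move": (lambda env: env.update(position=env["position"] + 1), "see_obstacle"),
    "turn": (lambda env: env.update(obstacle_ahead=False), "see_obstacle"),
}

def run_gnp(env, start="see_obstacle", steps=10):
    node = start
    for _ in range(steps):                       # no terminal nodes: run until the budget is spent
        if node in judgment_nodes:
            node = judgment_nodes[node](env)     # branch according to the judgment result
        else:
            action, successor = processing_nodes[node]
            action(env)
            node = successor
    return env

print(run_gnp({"position": 0, "obstacle_ahead": True}))
```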

329 citations


Journal ArticleDOI
TL;DR: This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS), and tries to identify the fundamental characteristics of this family of algorithms.
Abstract: This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then speculated based on the discussion.
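As a rough illustration of the negative selection idea being reviewed, the sketch below uses one assumed set of design choices (a real-valued representation, Euclidean affinity, and a fixed matching threshold); the variations discussed in the paper differ precisely in such choices.

```python
import numpy as np

def train_detectors(self_samples, n_detectors=100, self_radius=0.1, dim=2, seed=0):
    """Toy real-valued negative selection: keep randomly generated detectors
    that do not match (lie within self_radius of) any self sample.
    """
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.random(dim)                                        # candidate detector
        if np.min(np.linalg.norm(self_samples - d, axis=1)) > self_radius:
            detectors.append(d)                                    # survives the censoring phase
    return np.array(detectors)

def is_anomalous(x, detectors, match_radius=0.1):
    """A sample is flagged as non-self if any detector matches it."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= match_radius)

# Example: the self region is a small cluster near the origin of the unit square.
self_data = np.random.default_rng(1).random((50, 2)) * 0.3
detectors = train_detectors(self_data)
print(is_anomalous(np.array([0.9, 0.9]), detectors))   # flagged if some detector covers this point
```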

201 citations


Journal ArticleDOI
TL;DR: The proposed approach tries to overcome the main limitation of ε-dominance: the loss of several nondominated solutions from the hypergrid adopted in the archive because of the way in which solutions are selected within each box.
Abstract: Efficiency has become one of the main concerns in evolutionary multiobjective optimization during recent years. One of the possible alternatives to achieve a faster convergence is to use a relaxed form of Pareto dominance that allows us to regulate the granularity of the approximation of the Pareto front that we wish to achieve. One such relaxed form of Pareto dominance that has become popular in the last few years is ε-dominance, which has been mainly used as an archiving strategy in some multiobjective evolutionary algorithms. Despite its advantages, ε-dominance has some limitations. In this paper, we propose a mechanism that can be seen as a variant of ε-dominance, which we call Pareto-adaptive ε-dominance (paε-dominance). Our proposed approach tries to overcome the main limitation of ε-dominance: the loss of several nondominated solutions from the hypergrid adopted in the archive because of the way in which solutions are selected within each box.
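For context, here is a hedged sketch of the plain, fixed-grid ε-dominance archive that paε-dominance improves upon; the box-handling rules below are a simplified assumption (in particular, ties within a box are simply rejected), not the paper's adaptive scheme.

```python
import numpy as np

def eps_box(objectives, eps):
    """Box index of a solution under additive epsilon-dominance (minimization)."""
    return tuple(np.floor(np.asarray(objectives) / eps).astype(int))

def dominates(a, b):
    """Standard Pareto dominance for minimization."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate, eps=0.05):
    """Keep at most one solution per box: reject a candidate that is dominated
    or whose box is already occupied by a non-dominated entry, otherwise
    remove the entries it dominates and insert it. This per-box filtering is
    exactly where nondominated solutions can be lost, the limitation that
    paε-dominance addresses.
    """
    box = eps_box(candidate, eps)
    for b, sol in list(archive.items()):
        if dominates(sol, candidate) or (b == box and not dominates(candidate, sol)):
            return archive                          # rejected
        if dominates(candidate, sol):
            del archive[b]                          # candidate displaces a dominated entry
    archive[box] = np.asarray(candidate, dtype=float)
    return archive
```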

144 citations


Journal ArticleDOI
TL;DR: It is shown that the types of generalizations evolved by XCSF can be influenced by the input range and that, while all three approaches significantly improve XCSF, the least squares approaches appear to be the best performing and most robust.
Abstract: We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
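The two weight-update rules at the heart of this comparison can be sketched as follows; the feature construction (a constant term x0 prepended to the input) and the learning-rate values are assumptions for illustration, and the rest of XCSF's classifier machinery is omitted.

```python
import numpy as np

def widrow_hoff_update(w, x, target, eta=0.2, x0=1.0):
    """Normalized Widrow-Hoff (LMS) step for a classifier's linear prediction.
    Convergence can be slow when the eigenvalue spread of the input
    autocorrelation matrix is large, which is the issue analyzed in the paper.
    """
    phi = np.concatenate(([x0], np.atleast_1d(x)))
    error = target - float(w @ phi)
    return w + eta * error * phi / float(phi @ phi)

def rls_update(w, P, x, target, lam=1.0, x0=1.0):
    """Recursive least squares step (one of the proposed alternatives): keeps a
    per-classifier inverse correlation matrix P, so the weights converge in far
    fewer updates than with plain Widrow-Hoff.
    """
    phi = np.concatenate(([x0], np.atleast_1d(x)))
    k = P @ phi / (lam + float(phi @ P @ phi))        # gain vector
    w = w + k * (target - float(w @ phi))
    P = (P - np.outer(k, phi @ P)) / lam              # update the inverse correlation matrix
    return w, P
```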

64 citations


Journal ArticleDOI
TL;DR: This paper rigorously defines difficulty measures in black-box optimization and proposes a classification; it is proven that predictive versions that run in polynomial time in general do not exist unless certain complexity-theoretical assumptions are wrong.
Abstract: Various methods have been defined to measure the hardness of a fitness function for evolutionary algorithms and other black-box heuristics. Examples include fitness landscape analysis, epistasis, fitness-distance correlations etc., all of which are relatively easy to describe. However, they do not always correctly specify the hardness of the function. Some measures are easy to implement, others are more intuitive and hard to formalize. This paper rigorously defines difficulty measures in black-box optimization and proposes a classification. Different types of realizations of such measures are studied, namely exact and approximate ones. For both types of realizations, it is proven that predictive versions that run in polynomial time in general do not exist unless certain complexity-theoretical assumptions are wrong.
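One of the measures listed above, fitness-distance correlation, is easy to state concretely; the sketch below computes it for bit strings with a known optimum (the sampling scheme and names are assumptions). The paper's point is that no such measure, whether realized exactly or approximately, can serve as a general polynomial-time predictor of hardness.

```python
import numpy as np

def fitness_distance_correlation(samples, fitnesses, optimum):
    """Correlation between fitness and Hamming distance to a known optimum."""
    samples = np.asarray(samples)
    distances = np.sum(samples != np.asarray(optimum), axis=1)   # Hamming distances
    return float(np.corrcoef(np.asarray(fitnesses), distances)[0, 1])

# Example on OneMax: fitness is the number of ones, the optimum is the all-ones string.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(200, 20))
fdc = fitness_distance_correlation(pop, pop.sum(axis=1), np.ones(20, dtype=int))
print(fdc)   # -1.0: fitness and distance to the optimum are perfectly anti-correlated
```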

63 citations


Journal ArticleDOI
TL;DR: The work demonstrates how the approximation of a monotonic algorithm can lead to algorithms that are sufficiently reliable in practice while offering better efficiency, and how archive sizes may be limited while still providing approximate reliability.
Abstract: Coevolution has already produced promising results, but its dynamic evaluation can lead to a variety of problems that prevent most algorithms from progressing monotonically. An important open question therefore is how progress towards a chosen solution concept can be achieved. A general solution concept for coevolution is obtained by viewing opponents or tests as objectives. In this setup, known as Pareto-coevolution, the desired solution is the Pareto-optimal set. We present an archive that guarantees monotonicity for this solution concept. The algorithm is called the Incremental Pareto-Coevolution Archive (IPCA), and is based on Evolutionary Multi-Objective Optimization (EMOO). By virtue of its monotonicity, IPCA avoids regress even when combined with a highly explorative generator. This capacity is demonstrated on a challenging test problem requiring both exploration and reliability. IPCA maintains a highly specific selection of tests, but the size of the test archive nonetheless grows unboundedly. We therefore furthermore investigate how archive sizes may be limited while still providing approximate reliability. The LAyered Pareto-Coevolution Archive (LAPCA) maintains a limited number of layers of candidate solutions and tests, and thereby permits a trade-off between archive size and reliability. The algorithm is compared in experiments, and found to be more efficient than IPCA. The work demonstrates how the approximation of a monotonic algorithm can lead to algorithms that are sufficiently reliable in practice while offering better efficiency.
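The "tests as objectives" view can be made concrete with a hedged sketch of the Pareto-selection core; the function names and the play(candidate, test) outcome interface are assumptions, and IPCA's test-archive bookkeeping and LAPCA's layering (which provide the monotonicity and size control discussed above) are omitted.

```python
import numpy as np

def outcome_vector(candidate, tests, play):
    """In Pareto-coevolution every archived test acts as an objective: a
    candidate is scored by its vector of outcomes against all tests."""
    return np.array([play(candidate, t) for t in tests])

def dominates(a, b):
    return bool(np.all(a >= b) and np.any(a > b))      # outcomes are maximized

def update_candidate_archive(archive, newcomer, tests, play):
    """Keep a nondominated set of candidates with respect to the test archive."""
    new_vec = outcome_vector(newcomer, tests, play)
    vecs = [outcome_vector(c, tests, play) for c in archive]
    if any(dominates(v, new_vec) or np.array_equal(v, new_vec) for v in vecs):
        return archive                                  # the newcomer adds nothing new
    return [c for c, v in zip(archive, vecs) if not dominates(new_vec, v)] + [newcomer]
```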

62 citations


Journal ArticleDOI
TL;DR: This work investigates the effect of an asymmetric mutation operator in evolutionary algorithms with respect to the runtime behavior and presents a lower bound for the general case which shows that the asymmetric operator speeds up computation by at least a linear factor.
Abstract: Successful applications of evolutionary algorithms show that certain variation operators can lead to good solutions much faster than other ones. We examine this behavior observed in practice from a theoretical point of view and investigate the effect of an asymmetric mutation operator in evolutionary algorithms with respect to the runtime behavior. Considering the Eulerian cycle problem we present runtime bounds for evolutionary algorithms using an asymmetric operator which are much smaller than the best upper bounds for a more general one. In our analysis it turns out that a plateau which both algorithms have to cope with changes its structure in a way that allows the algorithm to obtain an improvement much faster. In addition, we present a lower bound for the general case which shows that the asymmetric operator speeds up computation by at least a linear factor.

50 citations


Journal ArticleDOI
TL;DR: This work proposes a new geometric crossover for graph partitioning based on a labeling-independent distance that filters out the redundancy of the encoding and combines it with the labeling- independent crossover to obtain a much superior geometric crossover inheriting both advantages.
Abstract: Geometric crossover is a representation-independent generalization of the traditional crossover defined using the distance of the solution space. By choosing a distance firmly rooted in the syntax of the solution representation as a basis for geometric crossover, one can design new crossovers for any representation. Using a distance tailored to the problem at hand, the formal definition of geometric crossover allows us to design new problem-specific crossovers that embed problem-knowledge in the search. The standard encoding for multiway graph partitioning is highly redundant: each solution has a number of representations, one for each way of labeling the represented partition. Traditional crossover does not perform well on redundant encodings. We propose a new geometric crossover for graph partitioning based on a labeling-independent distance that filters out the redundancy of the encoding. A correlation analysis of the fitness landscape based on this distance shows that it is well suited to graph partitioning. A second difficulty with designing a crossover for multiway graph partitioning is that of feasibility: in general recombining feasible partitions does not lead to feasible offspring partitions. We design a new geometric crossover for permutations with repetitions that naturally suits partition problems and we test it on the graph partitioning problem. We then combine it with the labeling-independent crossover and obtain a much superior geometric crossover inheriting both advantages.
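The labeling-independent distance underlying this crossover can be illustrated naively by minimizing the Hamming distance over all relabelings of one partition; this brute-force version is only workable for a small number of groups and is an assumed illustration of the idea rather than the paper's construction.

```python
from itertools import permutations

def labeling_independent_distance(p1, p2, k):
    """Distance between two k-way partitions (given as label vectors) that
    ignores how the groups are named: the minimum Hamming distance between p1
    and any relabeling of p2."""
    best = len(p1)
    for perm in permutations(range(k)):
        relabeled = [perm[label] for label in p2]
        best = min(best, sum(a != b for a, b in zip(p1, relabeled)))
    return best

# Two 3-way partitions of six nodes that are identical up to renaming the groups:
assert labeling_independent_distance([0, 0, 1, 1, 2, 2], [2, 2, 0, 0, 1, 1], 3) == 0
```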

46 citations


Journal ArticleDOI
TL;DR: It is shown that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of random search under certain conditions.
Abstract: It has been empirically established that multiobjective evolutionary algorithms do not scale well with the number of conflicting objectives. This paper shows that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of random search under certain conditions. The number of objectives must be very moderate and the framework must satisfy the following assumptions: the objectives are conflicting, and lower-bounding the computational cost by the number of comparisons is a good model. Our conclusions are: (i) the number of conflicting objectives is relevant; (ii) comparison against random search is a relevant criterion for multi-objective optimization; and (iii) optimization with more than three conflicting objectives is very hard. Furthermore, we provide some insight into crossover operators.

41 citations


Journal ArticleDOI
TL;DR: The use of mathematical models to characterise the selection pressures arising in a selection-only environment is explored and the practical relevance of these indicators as predictors for algorithms' relative performance in terms of optimisation time and reliability is examined.
Abstract: Steady State models of Evolutionary Algorithms are widely used, yet surprisingly little attention has been paid to the effects arising from different replacement strategies. This paper explores the use of mathematical models to characterise the selection pressures arising in a selection-only environment. The first part brings together models for the behaviour of seven different replacement mechanisms and provides expressions for various proposed indicators of Evolutionary Algorithm behaviour. Some of these have been derived elsewhere, and are included for completeness, but the majority are new to this paper. These theoretical indicators are used to compare the behaviour of the different strategies. The second part of this paper examines the practical relevance of these indicators as predictors for algorithms' relative performance in terms of optimisation time and reliability. It is not the intention of this paper to come up with a “one size fits all” recommendation for choice of replacement strategy. Although some strategies may have little to recommend them, the relative ranking of others is shown to depend on the intended use of the algorithm to be implemented, as reflected in the choice of performance metrics.
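The selection-only setting analyzed in the first part can be simulated directly. The sketch below measures convergence time and reliability for two illustrative replacement strategies; the uniform parent selection, population size, and the two strategies shown are generic assumptions rather than the paper's specific seven mechanisms.

```python
import random

def time_to_fixation(fitnesses, replacement, seed=0):
    """Selection-only steady-state model: repeatedly clone a uniformly chosen
    member and insert the clone via the given replacement strategy until the
    population holds a single fitness value. Returns the number of steps and
    whether the initially best value won (a crude takeover-time/reliability
    indicator in the spirit of those discussed in the paper).
    """
    rng = random.Random(seed)
    pop = list(fitnesses)
    best = max(pop)
    steps = 0
    while len(set(pop)) > 1:
        clone = pop[rng.randrange(len(pop))]            # uniform random parent selection
        replacement(pop, clone, rng)
        steps += 1
    return steps, pop[0] == best

def replace_worst(pop, clone, rng):
    pop[pop.index(min(pop))] = clone                    # delete-worst replacement

def replace_random(pop, clone, rng):
    pop[rng.randrange(len(pop))] = clone                # delete-random replacement

initial = list(range(20))
print(time_to_fixation(initial, replace_worst))         # never deletes the current best
print(time_to_fixation(initial, replace_random))        # pure drift: slower on average, may lose the best
```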

Journal ArticleDOI
TL;DR: A new bio-inspired algorithm (FClust) that dynamically creates and visualizes groups of data and helps the domain expert to understand the underlying structure of the data set.
Abstract: This paper presents a new bio-inspired algorithm (FClust) that dynamically creates and visualizes groups of data. This algorithm uses the concept of a flock of agents that move together in a complex manner following simple local rules. Each agent represents one data item. The agents move together in a 2D environment with the aim of creating homogeneous groups of data. These groups are visualized in real time and help the domain expert to understand the underlying structure of the data set, such as a realistic number of classes, clusters of similar data, or isolated data. We also present several extensions of this algorithm, which reduce its computational cost and make use of a 3D display. This algorithm is then tested on artificial and real-world data, and a heuristic algorithm is used to evaluate the relevance of the obtained partitioning.
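A hedged sketch of the flocking principle follows: each agent carries one data item, is attracted to nearby agents carrying similar data, and is pushed away from dissimilar ones, so that similar items end up in the same spatial group. The update rule, parameters, and similarity interface below are assumptions; the actual FClust rules differ in their details.

```python
import numpy as np

def flock_step(pos, vel, data, similarity, radius=0.2, dt=0.1, max_speed=0.05):
    """One movement update for n agents with positions pos (n x 2) and
    velocities vel (n x 2); similarity(a, b) is assumed to return a value in
    [0, 1] for two data items."""
    force = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos - pos[i]                              # vectors toward the other agents
        dist = np.linalg.norm(diff, axis=1)
        neigh = (dist > 0.0) & (dist < radius)           # only local neighbours influence agent i
        if not np.any(neigh):
            continue
        sim = np.array([similarity(data[i], data[j]) for j in np.where(neigh)[0]])
        weight = 2.0 * sim - 1.0                         # similar data attract, dissimilar data repel
        force[i] = np.sum(weight[:, None] * diff[neigh], axis=0)
    vel = vel + dt * force
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel * np.minimum(1.0, max_speed / np.maximum(speed, 1e-12))   # clamp the speed
    return pos + vel, vel
```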

Journal ArticleDOI
TL;DR: The two main results are that comparison-based algorithms are the best algorithms for some robustness criteria and that introducing randomness in the choice of offspring improves the anytime behavior of the algorithm.
Abstract: Randomized search heuristics (e.g., evolutionary algorithms, simulated annealing, etc.) are very appealing to practitioners: they are easy to implement and usually provide good performance. The theoretical analysis of these algorithms usually focuses on convergence rates. This paper presents a mathematical study of randomized search heuristics which use a comparison-based selection mechanism. The two main results are that comparison-based algorithms are the best algorithms for some robustness criteria and that introducing randomness in the choice of offspring improves the anytime behavior of the algorithm. An original Estimation of Distribution Algorithm combining both results is proposed and successfully tested in experiments.

Journal ArticleDOI
TL;DR: This paper derives a new epistasis measure called entropic epistasis from Shannon's information theory, and provides experimental results verifying the measure and showing how it can be used for designing efficient evolutionary algorithms.
Abstract: In optimization problems, the contribution of a variable to fitness often depends on the states of other variables. This phenomenon is referred to as epistasis or linkage. In this paper, we show that a new theory of epistasis can be established on the basis of Shannon's information theory. From this, we derive a new epistasis measure called entropic epistasis and some theoretical results. We also provide experimental results verifying the measure and showing how it can be used for designing efficient evolutionary algorithms.
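One plausible way to compute such an information-theoretic measure from samples is sketched below: the extra information a pair of variables carries about fitness beyond their individual contributions, normalized by the fitness entropy. The exact definition and normalization used in the paper may differ, and fitness values are assumed to be discrete (e.g., binned).

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a sequence of discrete, hashable values."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def pairwise_entropic_epistasis(samples, fitnesses, i, j):
    """How much more the pair (i, j) tells us about fitness than the two
    variables do separately, scaled by the fitness entropy."""
    xi = [s[i] for s in samples]
    xj = [s[j] for s in samples]
    pair = list(zip(xi, xj))
    gain = (mutual_information(pair, fitnesses)
            - mutual_information(xi, fitnesses)
            - mutual_information(xj, fitnesses))
    return gain / max(entropy(fitnesses), 1e-12)
```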

Journal ArticleDOI
TL;DR: It is shown that the biologically-inspired model of genotype editing can be used to both facilitate understanding of the evolutionary role of RNA regulation based on genotype editing in biology, and advance the current state of research in Evolutionary Computation.
Abstract: Evolutionary algorithms rarely deal with ontogenetic, non-inherited alteration of genetic information because they are based on a direct genotype-phenotype mapping. In contrast, several processes have been discovered in nature which alter genetic information encoded in DNA before it is translated into amino-acid chains. Ontogenetically altered genetic information is not inherited but extensively used in regulation and development of phenotypes, giving organisms the ability to, in a sense, re-program their genotypes according to environmental cues. An example of post-transcriptional alteration of gene-encoding sequences is the process of RNA Editing. Here we introduce a novel Agent-based model of genotype editing and a computational study of its evolutionary performance in static and dynamic environments. This model builds on our previous Genetic Algorithm with Editing, but presents a fundamentally novel architecture in which coding and non-coding genetic components are allowed to co-evolve. Our goals are: (1) to study the role of RNA Editing regulation in the evolutionary process, (2) to understand how genotype editing leads to a different, and novel, evolutionary search algorithm, and (3) to identify the conditions under which genotype editing improves the optimization performance of traditional evolutionary algorithms. We show that genotype editing allows evolving agents to perform better in several classes of fitness functions, both in static and dynamic environments. We also present evidence that the indirect genotype/phenotype mapping resulting from genotype editing leads to a better exploration/exploitation compromise of the search process. Therefore, we show that our biologically-inspired model of genotype editing can be used to both facilitate understanding of the evolutionary role of RNA regulation based on genotype editing in biology, and advance the current state of research in Evolutionary Computation.

Journal ArticleDOI
TL;DR: It is shown empirically how fitness databases can improve the performance of GP and how mapping graphs to a canonical form can increase these improvements by saving considerable evaluation time.
Abstract: In this paper we describe the genetic programming system GGP operating on graphs and introduce the notion of graph isomorphisms to explain how they influence the dynamics of GP. It is shown empirically how fitness databases can improve the performance of GP and how mapping graphs to a canonical form can increase these improvements by saving considerable evaluation time.
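The fitness-database idea combined with a canonical form can be sketched with a cache keyed by an exact (but exponential-time) canonical labeling; GGP would need a practical canonical form, so the helper below is only an assumed stand-in that makes the caching mechanism concrete.

```python
from itertools import permutations

def canonical_form(adj):
    """Exact canonical form of a small graph given as an adjacency matrix: the
    lexicographically smallest matrix over all vertex relabelings. Isomorphic
    graphs map to the same key; brute force, so only for tiny graphs."""
    n = len(adj)
    best = None
    for perm in permutations(range(n)):
        relabeled = tuple(tuple(adj[perm[i]][perm[j]] for j in range(n)) for i in range(n))
        if best is None or relabeled < best:
            best = relabeled
    return best

fitness_cache = {}

def cached_fitness(adj, evaluate):
    """Evaluate a graph individual only if no isomorphic graph was seen before."""
    key = canonical_form(adj)
    if key not in fitness_cache:
        fitness_cache[key] = evaluate(adj)        # the expensive evaluation happens once per class
    return fitness_cache[key]
```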

Journal ArticleDOI
TL;DR: It is speculated that during stable phases, crossover's operation on the persistently heterogeneous gene pool enhances the survival of useful building blocks, thus sustaining long-range temporal correlations in the evolving population, and empirical support for this conjecture is found in the extended tails of probability distribution functions for stable phase lifetimes.
Abstract: We examine the role played by crossover in a series of genetic algorithm-based evolutionary simulations of the iterated prisoner's dilemma. The simulations are characterized by extended periods of stability, during which evolutionarily meta-stable strategies remain more or less fixed in the population, interrupted by transient, unstable episodes triggered by the appearance of adaptively targeted predators. This leads to a global evolutionary pattern whereby the population shifts from one of a few evolutionarily metastable strategies to another to evade emerging predator strategies. While crossover is not particularly helpful in producing better average scores, it markedly enhances overall evolutionary stability. We show that crossover achieves this by (1) impeding the appearance and spread of targeted predator strategies during stable phases, and (2) greatly reducing the duration of unstable epochs, presumably by efficient recombination of building blocks to rediscover prior metastable strategies. We also speculate that during stable phases, crossover's operation on the persistently heterogeneous gene pool enhances the survival of useful building blocks, thus sustaining long-range temporal correlations in the evolving population. Empirical support for this conjecture is found in the extended tails of probability distribution functions for stable phase lifetimes.

Journal ArticleDOI
TL;DR: A covariant form for the dynamics of a canonical GA of arbitrary cardinality is presented, showing how each genetic operator can be uniquely represented by a mathematical object, a tensor, that transforms simply under a general linear coordinate transformation.
Abstract: We present a covariant form for the dynamics of a canonical GA of arbitrary cardinality, showing how each genetic operator can be uniquely represented by a mathematical object, a tensor, that transforms simply under a general linear coordinate transformation. For mutation and recombination these tensors can be written as tensor products of the analogous tensors for one-bit strings, thus giving a greatly simplified formulation of the dynamics. We analyze the three most well known coordinate systems (string, Walsh, and Building Block), discussing their relative advantages and disadvantages with respect to the different operators, showing how one may transform from one to the other, and that the associated coordinate transformation matrices can be written as a tensor product of the corresponding one-bit matrices. We also show that in the Building Block basis the dynamical equations for all Building Blocks can be generated from the equation for the most fine-grained block (string) by a certain projection ("zapping").
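A hedged sketch of the structure being described, in generic notation (index placement and normalization are assumptions, not the paper's exact conventions): the expected genotype frequencies P_i evolve quadratically under a transmission tensor T, while the mutation and recombination tensors factorize over the ℓ loci into one-bit tensors.

```latex
% Assumed notation; the paper's conventions may differ in detail.
P_i' \;=\; \sum_{j,k} T_i{}^{jk}\, P_j \, P_k ,
\qquad
M \;=\; \bigotimes_{a=1}^{\ell} M^{(a)} ,
\qquad
R \;=\; \bigotimes_{a=1}^{\ell} R^{(a)}
```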