
Showing papers in "IEEE Transactions on Evolutionary Computation in 2007"


Journal ArticleDOI
TL;DR: Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems.
Abstract: Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which gives MOEA/D lower computational complexity at each generation than MOGLS and the nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately scaled objectives, and MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D to work with a small population, as well as its scalability and sensitivity, have also been experimentally investigated in this paper.
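The scalarizing step that MOEA/D builds on can be sketched with the weighted Tchebycheff approach, one of the simple decomposition methods the abstract refers to. The function name and toy numbers below are illustrative, not taken from the paper:

```python
def tchebycheff(f, weights, z_star):
    """Weighted Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.

    f       -- objective vector of a candidate solution (minimization)
    weights -- weight vector defining one scalar subproblem
    z_star  -- ideal point (best value seen so far for each objective)
    """
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# Each weight vector defines one subproblem; neighboring subproblems have
# similar weights, which is why solutions can be shared between neighbors.
z = [0.0, 0.0]
sol_a, sol_b = [0.2, 0.8], [0.7, 0.3]
g_a = tchebycheff(sol_a, [0.9, 0.1], z)  # subproblem emphasizing objective 1
g_b = tchebycheff(sol_b, [0.9, 0.1], z)
print(g_a, g_b)  # sol_a scores better (lower) on this particular subproblem
```

Minimizing this scalar value for many spread-out weight vectors yields an approximation of the whole Pareto front.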

6,657 citations


Journal ArticleDOI
TL;DR: The mechanism of Intelligent Adaptive Curiosity is presented, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress, thus permitting autonomous mental development.
Abstract: Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.

1,134 citations


Journal ArticleDOI
TL;DR: The framework of multiobjective optimization is used to tackle the unsupervised learning problem, data clustering, following a formulation first proposed in the statistics literature and an evolutionary approach to the problem is developed.
Abstract: The framework of multiobjective optimization is used to tackle the unsupervised learning problem, data clustering, following a formulation first proposed in the statistics literature. The conceptual advantages of the multiobjective formulation are discussed and an evolutionary approach to the problem is developed. The resulting algorithm, multiobjective clustering with automatic k-determination, is compared with a number of well-established single-objective clustering algorithms, a modern ensemble technique, and two methods of model selection. The experiments demonstrate that the conceptual advantages of multiobjective clustering translate into practical and scalable performance benefits.

657 citations


Journal ArticleDOI
TL;DR: This paper provides an overview of previous ant-based approaches to the classification task and compares them with state-of-the-art classification techniques, such as C4.5, RIPPER, and support vector machines in a benchmark study, and proposes a new AntMiner+.
Abstract: Ant colony optimization (ACO) can be applied to the data mining field to extract rule-based classifiers. The aim of this paper is twofold. On the one hand, we provide an overview of previous ant-based approaches to the classification task and compare them with state-of-the-art classification techniques, such as C4.5, RIPPER, and support vector machines in a benchmark study. On the other hand, a new ant-based classification technique is proposed, named AntMiner+. The key differences between the proposed AntMiner+ and previous AntMiner versions are the usage of the better performing MAX-MIN ant system, a clearly defined and augmented environment for the ants to walk through, with the inclusion of the class variable to handle multiclass problems, and the ability to include interval rules in the rule list. Furthermore, the commonly encountered problem in ACO of setting system parameters is dealt with in an automated, dynamic manner. Our benchmarking experiments show an AntMiner+ accuracy that is superior to that obtained by the other AntMiner versions, and competitive or better than the results achieved by the compared classification techniques.

427 citations


Journal ArticleDOI
TL;DR: A broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems.
Abstract: This survey presents an overview of the autonomous development of mental capabilities in computational agents. It does so based on a characterization of cognitive systems as systems which exhibit adaptive, anticipatory, and purposive goal-directed behavior. We present a broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems. We then review several cognitive architectures drawn from these paradigms. In each of these areas, we highlight the implications and attendant problems of adopting a developmental approach, both from phylogenetic and ontogenetic points of view. We conclude with a summary of the key architectural features that systems capable of autonomous development of mental capabilities should exhibit.

423 citations


Journal ArticleDOI
TL;DR: This study explores the utility of multiobjective evolutionary algorithms (using standard Pareto ranking and diversity-promoting selection mechanisms) for solving optimization tasks with many conflicting objectives.
Abstract: This study explores the utility of multiobjective evolutionary algorithms (using standard Pareto ranking and diversity-promoting selection mechanisms) for solving optimization tasks with many conflicting objectives. Optimizer behavior is assessed for a grid of mutation and recombination operator configurations. Performance maps are obtained for the dual aims of proximity to, and distribution across, the optimal tradeoff surface. Performance sweet-spots for both variation operators are observed to contract as the number of objectives is increased. Classical settings for recombination are shown to be suitable for small numbers of objectives but correspond to very poor performance for higher numbers of objectives, even when large population sizes are used. Explanations for this behavior are offered via the concepts of dominance resistance and active diversity promotion.

415 citations


Journal ArticleDOI
TL;DR: A ranking procedure that exploits the definition of preference ordering (PO) is proposed, along with two strategies that make different use of the conditions of efficiency provided, and it is compared with a more traditional Pareto dominance-based ranking scheme within the framework of NSGA-II.
Abstract: It may be generalized that all evolutionary algorithms (EAs) draw their strength from two sources: exploration and exploitation. Surprisingly, within the context of multiobjective (MO) optimization, the impact of fitness assignment on the exploration-exploitation balance has drawn little attention. The vast majority of multiobjective evolutionary algorithms (MOEAs) presented to date resort to Pareto dominance classification as a fitness assignment methodology. However, the proportion of Pareto optimal elements in a set P grows with the dimensionality of P. Therefore, when the number of objectives of a multiobjective problem (MOP) is large, Pareto dominance-based ranking procedures become ineffective in sorting out the quality of solutions. This paper investigates the potential of using a preference order-based approach as an optimality criterion in the ranking stage of MOEAs. A ranking procedure that exploits the definition of preference ordering (PO) is proposed, along with two strategies that make different use of the conditions of efficiency provided, and it is compared with a more traditional Pareto dominance-based ranking scheme within the framework of NSGA-II. A series of extensive experiments is performed on seven widely applied test functions, namely, DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ5, DTLZ6, and DTLZ7, for up to eight objectives. The results are analyzed through a suite of five performance metrics and indicate that the ranking procedure based on PO enables NSGA-II to achieve better scalability properties compared with the standard ranking scheme and suggest that the proposed methodology could be successfully extended to other MOEAs.
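The scalability problem described above, that the fraction of mutually nondominated points grows with the number of objectives, is easy to demonstrate empirically. The sketch below uses standard Pareto-dominance definitions; the function names and sample sizes are illustrative, not from the paper:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(points):
    """Fraction of points not dominated by any other point in the set."""
    front = [p for p in points if not any(dominates(q, p) for q in points)]
    return len(front) / len(points)

random.seed(0)
for m in (2, 5, 8):  # number of objectives
    pts = [[random.random() for _ in range(m)] for _ in range(200)]
    print(m, nondominated_fraction(pts))
# The nondominated fraction climbs toward 1 as m grows, which is why pure
# Pareto ranking loses selection pressure on many-objective problems.
```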

273 citations


Journal ArticleDOI
TL;DR: The use of fuzzy logic to adaptively adjust the values of px and pm in GA is presented and the effectiveness of the fuzzy-controlled crossover and mutation probabilities is demonstrated by optimizing eight multidimensional mathematical functions.
Abstract: Research into adjusting the probabilities of crossover px and mutation pm in genetic algorithms (GAs) is one of the most significant and promising areas in evolutionary computation. px and pm greatly determine whether the algorithm will find a near-optimum solution or whether it will find a solution efficiently. Instead of using fixed values of px and pm, this paper presents the use of fuzzy logic to adaptively adjust the values of px and pm in GAs. By applying the K-means algorithm, the distribution of the population in the search space is clustered in each generation. A fuzzy system is used to adjust the values of px and pm. It is based on considering the relative size of the cluster containing the best chromosome and the one containing the worst chromosome. The proposed method has been applied to optimize a buck regulator that requires satisfying several static and dynamic operational requirements. The optimized circuit component values, the regulator's performance, and the convergence rate in the training are favorably compared with the GA using fixed values of px and pm. The effectiveness of the fuzzy-controlled crossover and mutation probabilities is also demonstrated by optimizing eight multidimensional mathematical functions.
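The adaptation idea can be illustrated with a deliberately simplified stand-in: map the relative size of the best chromosome's cluster to px and pm by linear interpolation. This is only a crude proxy for the paper's fuzzy rule base; the function name, ranges, and mapping are all our assumptions:

```python
def adapt_rates(best_cluster_size, worst_cluster_size,
                px_range=(0.5, 0.9), pm_range=(0.01, 0.1)):
    """Toy adaptation of crossover/mutation rates from cluster sizes.

    If the cluster holding the best chromosome dominates the one holding the
    worst, the population may be converging, so lower px and raise pm to
    reinject diversity; otherwise keep crossover-driven exploration high.
    Simplified stand-in for the paper's fuzzy system, not its actual rules.
    """
    ratio = best_cluster_size / (best_cluster_size + worst_cluster_size)
    px = px_range[1] - ratio * (px_range[1] - px_range[0])
    pm = pm_range[0] + ratio * (pm_range[1] - pm_range[0])
    return px, pm

print(adapt_rates(10, 40))  # scattered best cluster -> high px, low pm
print(adapt_rates(80, 5))   # dominant best cluster -> lower px, higher pm
```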

270 citations


Journal ArticleDOI
TL;DR: The proposed immune algorithm inspired by the clonal selection principle for the protein structure prediction problem (PSP) employs two special mutation operators, hypermutation and hypermacromutation to allow effective searching, and an aging mechanism which is a new immune inspired operator that is devised to enforce diversity in the population during evolution.
Abstract: We present an immune algorithm (IA) inspired by the clonal selection principle, which has been designed for the protein structure prediction problem (PSP). The proposed IA employs two special mutation operators, hypermutation and hypermacromutation, to allow effective searching, and an aging mechanism, a new immune-inspired operator devised to enforce diversity in the population during evolution. When cast as an optimization problem, the PSP can be seen as discovering a protein conformation with minimal energy. The proposed IA was tested on well-known PSP lattice models, the HP model in two-dimensional and three-dimensional square lattices, and the functional model protein, which is a more realistic biological model. Our experimental results demonstrate that the proposed IA is very competitive with the existing state-of-the-art algorithms for the PSP on lattice models.
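The objective being minimized in the HP lattice model is simple to state: every pair of hydrophobic (H) residues that are adjacent on the lattice but not consecutive in the chain contributes -1 to the energy. A minimal 2D sketch (our own helper, not the paper's code):

```python
def hp_energy(sequence, coords):
    """Energy of an HP-lattice conformation on the 2D square lattice:
    -1 for every pair of nonconsecutive H residues in lattice contact.

    sequence -- string over {'H', 'P'}
    coords   -- list of (x, y) lattice positions, one per residue
    """
    energy = 0
    n = len(sequence)
    for i in range(n):
        for j in range(i + 2, n):  # skip chain neighbors (i, i+1)
            if sequence[i] == sequence[j] == 'H':
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:   # Manhattan distance 1 = lattice contact
                    energy -= 1
    return energy

# A 4-residue chain folded into a unit square: residues 0 and 3 become
# lattice neighbors, giving one H-H contact.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```

The search problem the IA tackles is finding a self-avoiding walk minimizing this energy.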

220 citations


Journal ArticleDOI
TL;DR: Three noise-handling features are proposed based upon the analysis of empirical results, including an experiential learning directed perturbation operator that adapts the magnitude and direction of variation according to past experiences for fast convergence and a possibilistic archiving model based on the concept of possibility and necessity measures to deal with problem of uncertainties.
Abstract: In addition to satisfying several competing objectives, many real-world applications are also characterized by a certain degree of noise, manifesting itself in the form of signal distortion or uncertain information. In this paper, extensive studies are carried out to examine the impact of noisy environments in evolutionary multiobjective optimization. Three noise-handling features are then proposed based upon the analysis of empirical results, including an experiential learning directed perturbation operator that adapts the magnitude and direction of variation according to past experiences for fast convergence, a gene adaptation selection strategy that helps the evolutionary search in escaping from local optima or premature convergence, and a possibilistic archiving model based on the concept of possibility and necessity measures to deal with the problem of uncertainties. In addition, the performances of various multiobjective evolutionary algorithms in noisy environments, as well as the robustness and effectiveness of the proposed features, are examined based upon five benchmark problems characterized by different difficulties in local optimality, nonuniformity, discontinuity, and nonconvexity.

176 citations


Journal ArticleDOI
TL;DR: The feasibility of using artificial neural networks in combination with genetic algorithms to optimize the diesel engine settings to find settings that complied with the increasingly stringent emission regulations while also maintaining, or even reducing the fuel consumption is studied.
Abstract: Diesel engines are fuel efficient, which benefits the reduction of CO2 released to the atmosphere compared with gasoline engines, but they still have a negative environmental impact related to their emissions. As new degrees of freedom are created, due to advances in technology, the complicated processes of emission formation are difficult to assess. This paper studies the feasibility of using artificial neural networks (ANNs) in combination with genetic algorithms (GAs) to optimize the diesel engine settings. The objective of the optimization was to find settings that complied with the increasingly stringent emission regulations while also maintaining, or even reducing, the fuel consumption. A large database of stationary engine tests, covering a wide range of experimental conditions, was used for this analysis. The ANNs were used as a simulation tool, receiving as inputs the engine operating parameters, and producing as outputs the resulting emission levels and fuel consumption. The ANN outputs were then used to evaluate the objective function of the optimization process, which was performed with a GA approach. The combination of ANN and GA for the optimization of two different engine operating conditions was analyzed and important reductions in emissions and fuel consumption were reached, while also keeping the computational times low.

Journal ArticleDOI
TL;DR: An extensive critical review of the current literature on AIS for data mining, focusing on the data mining tasks of classification and anomaly detection and several important lessons to be taken from the natural immune system are discussed.
Abstract: This paper advocates a problem-oriented approach for the design of artificial immune systems (AIS) for data mining. By problem-oriented approach we mean that, in real-world data mining applications the design of an AIS should take into account the characteristics of the data to be mined together with the application domain: the components of the AIS - such as its representation, affinity function, and immune process - should be tailored for the data and the application. This is in contrast with the majority of the literature, where a very generic AIS algorithm for data mining is developed and there is little or no concern in tailoring the components of the AIS for the data to be mined or the application domain. To support this problem-oriented approach, we provide an extensive critical review of the current literature on AIS for data mining, focusing on the data mining tasks of classification and anomaly detection. We discuss several important lessons to be taken from the natural immune system to design new AIS that are considerably more adaptive than current AIS. Finally, we conclude this paper with a summary of seven limitations of current AIS for data mining and ten suggested research directions.

Journal ArticleDOI
TL;DR: A new kind of genetic representation called analog genetic encoding (AGE) is described, aimed at the evolutionary synthesis and reverse engineering of circuits and networks such as analog electronic circuits, neural networks, and genetic regulatory networks.
Abstract: This paper describes a new kind of genetic representation called analog genetic encoding (AGE). The representation is aimed at the evolutionary synthesis and reverse engineering of circuits and networks such as analog electronic circuits, neural networks, and genetic regulatory networks. AGE permits the simultaneous evolution of the topology and sizing of the networks. The establishment of the links between the devices that form the network is based on an implicit definition of the interaction between different parts of the genome. This reduces the amount of information that must be carried by the genome, relative to a direct encoding of the links. The application of AGE is illustrated with examples of analog electronic circuit and neural network synthesis. The performance of the representation and the quality of the results obtained with AGE are compared with those produced by genetic programming.

Journal ArticleDOI
TL;DR: This study investigates the development of a new "dynamic" GP model that is specifically tailored for forecasting in nonstatic environments and highlights the DyFor GP's potential as an adaptive, nonlinear model for real-world forecasting applications.
Abstract: Several studies have applied genetic programming (GP) to the task of forecasting with favorable results. However, these studies, like those applying other techniques, have assumed a static environment, making them unsuitable for many real-world time series which are generated by varying processes. This study investigates the development of a new "dynamic" GP model that is specifically tailored for forecasting in nonstatic environments. This dynamic forecasting genetic program (DyFor GP) model incorporates features that allow it to adapt to changing environments automatically as well as retain knowledge learned from previously encountered environments. The DyFor GP model is tested for forecasting efficacy on both simulated and actual time series including the U.S. Gross Domestic Product and Consumer Price Index Inflation. Results show that the performance of the DyFor GP model improves upon that of benchmark models for all experiments. These findings highlight the DyFor GP's potential as an adaptive, nonlinear model for real-world forecasting applications and suggest further investigations.

Journal ArticleDOI
TL;DR: The level-set evolution is exploited in the design of a novel evolutionary algorithm for global optimization by successively evolving the level set of the objective function such that it becomes smaller and smaller until all of its points are optimal solutions.
Abstract: In this paper, the level-set evolution is exploited in the design of a novel evolutionary algorithm (EA) for global optimization. An application of Latin squares leads to a new and effective crossover operator. This crossover operator can generate a set of uniformly scattered offspring around their parents, has the ability to search locally, and can explore the search space efficiently. To compute a globally optimal solution, the level set of the objective function is successively evolved by crossover and mutation operators so that it gradually approaches the globally optimal solution set. As a result, the level set can be efficiently improved. Based on these skills, a new EA is developed to solve a global optimization problem by successively evolving the level set of the objective function such that it becomes smaller and smaller until all of its points are optimal solutions. Furthermore, we can prove that the proposed algorithm converges to a global optimizer with probability one. Numerical simulations are conducted for 20 standard test functions. The performance of the proposed algorithm is compared with that of eight EAs that have been published recently and the Monte Carlo implementation of the mean-value-level-set method. The results indicate that the proposed algorithm is effective and efficient.

Journal ArticleDOI
TL;DR: In this article, the authors use evolutionary computation to automatically find problems which demonstrate the strength and weaknesses of modern search heuristics, such as particle swarm optimization, differential evolution, and covariance matrix adaptation-evolution strategy (CMA-ES).
Abstract: We use evolutionary computation (EC) to automatically find problems which demonstrate the strength and weaknesses of modern search heuristics. In particular, we analyze particle swarm optimization (PSO), differential evolution (DE), and covariance matrix adaptation-evolution strategy (CMA-ES). Each evolutionary algorithm is contrasted with the others and with a robust nonstochastic gradient follower (i.e., a hill climber) based on Newton-Raphson. The evolved benchmark problems yield insights into the operation of PSOs, illustrate benefits and drawbacks of different population sizes, velocity limits, and constriction (friction) coefficients. The fitness landscapes made by genetic programming reveal new swarm phenomena, such as deception, thereby explaining how they work and allowing us to devise better extended particle swarm systems. The method could be applied to any type of optimizer.

Journal ArticleDOI
TL;DR: A mechanism of FS, i.e., dynamic fitness sharing, which allows an explicit, dynamic identification of the species discovered at each generation, their localization on the fitness landscape, the application of the sharing mechanism to each species separately, and a species elitist strategy is presented.
Abstract: The problem of locating all the optima within a multimodal fitness landscape has been widely addressed in evolutionary computation, and many solutions, based on a large variety of different techniques, have been proposed in the literature. Among them, fitness sharing (FS) is probably the best known and the most widely used. The main criticisms to FS concern both the lack of an explicit mechanism for identifying or providing any information about the location of the peaks in the fitness landscape, and the definition of species implicitly assumed by FS. We present a mechanism of FS, i.e., dynamic fitness sharing, which has been devised in order to overcome these limitations. The proposed method allows an explicit, dynamic identification of the species discovered at each generation, their localization on the fitness landscape, the application of the sharing mechanism to each species separately, and a species elitist strategy. The proposed method has been tested on a set of standard functions largely adopted in the literature to assess the performance of evolutionary algorithms on multimodal functions. Experimental results confirm that our method performs significantly better than FS and other methods proposed in the literature without requiring any further assumption on the fitness landscape than those assumed by the FS itself.
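The classic fitness sharing scheme that the paper improves upon divides each individual's raw fitness by a niche count, penalizing individuals in crowded regions of the landscape. The sketch below implements standard FS with the usual triangular sharing function (this is the baseline the paper criticizes, not its dynamic variant; names and parameters are ours):

```python
def shared_fitness(pop, raw_fitness, sigma_share=0.5, alpha=1.0):
    """Classic fitness sharing for 1-D individuals (maximization).

    Each individual's raw fitness is divided by its niche count
    m_i = sum_j sh(d(i, j)), where sh decays linearly to 0 at sigma_share.
    """
    def sh(d):
        return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

    shared = []
    for xi in pop:
        niche_count = sum(sh(abs(xi - xj)) for xj in pop)  # includes self (d=0 -> 1)
        shared.append(raw_fitness(xi) / niche_count)
    return shared

flat = lambda x: 1.0  # flat raw fitness: sharing alone reveals crowding
print(shared_fitness([0.0, 0.05, 0.1, 3.0], flat))
# The isolated individual at 3.0 keeps shared fitness 1.0; the three
# clustered near 0 are each penalized by a niche count greater than 1.
```

Note that standard FS never reports where the niches are; the dynamic FS of the paper makes that species identification explicit.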

Journal ArticleDOI
TL;DR: It is formally proved that the ACO internal state - commonly referred to as the pheromone - indeed depends on the scale of the problem at hand, but this does not affect the sequence of solutions produced by the three most widely adopted algorithms belonging to theACO family.
Abstract: Ant colony optimization (ACO) is a promising metaheuristic and a great amount of research has been devoted to its empirical and theoretical analysis. Recently, with the introduction of the hypercube framework, Blum and Dorigo have explicitly raised the issue of the invariance of ACO algorithms to transformation of units. They state (Blum and Dorigo, 2004) that the performance of ACO depends on the scale of the problem instance under analysis. In this paper, we show that the ACO internal state - commonly referred to as the pheromone - indeed depends on the scale of the problem at hand. Nonetheless, we formally prove that this does not affect the sequence of solutions produced by the three most widely adopted algorithms belonging to the ACO family: ant system, MAX-MIN ant system, and ant colony system. For these algorithms, the sequence of solutions does not depend on the scale of the problem instance under analysis. Moreover, we introduce three new ACO algorithms, the internal state of which is independent of the scale of the problem instance considered. These algorithms are obtained as minor variations of ant system, MAX-MIN ant system, and ant colony system. We formally show that these algorithms are functionally equivalent to their original counterparts. That is, for any given instance, these algorithms produce the same sequence of solutions as the original ones.
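The scale dependence of the pheromone can be seen directly in the standard ant-system deposit rule, delta_tau = Q / L for a tour of length L. A tiny sketch (helper name and numbers are illustrative):

```python
def as_deposit(tour_length, q=1.0):
    """Ant-system pheromone deposit for one tour: delta_tau = Q / L."""
    return q / tour_length

lengths = [10.0, 12.0, 15.0]
deposits = [as_deposit(L) for L in lengths]
scaled = [as_deposit(100.0 * L) for L in lengths]  # same instance, costs x100
print(deposits)  # roughly [0.1, 0.083..., 0.066...]
print(scaled)    # 100x smaller: the internal state depends on the scale
# Yet the *ordering* of deposits across tours is unchanged, which is in the
# spirit of the paper's result that the sequence of solutions produced by
# AS, MAX-MIN AS, and ACS does not depend on the scale of the instance.
```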

Journal ArticleDOI
TL;DR: An interactive evolutionary computation (EC) fitting method is proposed that applies interactive EC to hearing aid fitting and the method is evaluated using a hearing aid simulator with human subjects and shows significantly better results than either the conventional method or the unprocessed case in terms of both speech intelligibility and speech quality.
Abstract: An interactive evolutionary computation (EC) fitting method is proposed that applies interactive EC to hearing aid fitting and the method is evaluated using a hearing aid simulator with human subjects. The advantages of the method are that it can optimize a hearing aid based on how a user hears and that it realizes whatever+whenever+wherever (W3) fitting. Conventional fitting methods are based on the user's partially measured auditory characteristics, the fitting engineer's experience, and the user's linguistic explanation of his or her hearing. These conventional methods, therefore, suffer from the fundamental problem that no one can experience another person's hearing. However, as interactive EC fitting uses EC to optimize a hearing aid based on the user's evaluation of his or her hearing, this problem is addressed. Moreover, whereas conventional fitting methods must use pure tones and bandpass noise for measuring hearing characteristics, our proposed method has no such restrictions. Evaluating the proposed method using speech sources, we demonstrate that it shows significantly better results than either the conventional method or the unprocessed case in terms of both speech intelligibility and speech quality. We also evaluate our method using musical sources, unusable for evaluation by conventional methods, and demonstrate that its sound quality is preferable to the unprocessed case.

Journal ArticleDOI
TL;DR: This paper describes the applicability of the so-called "grouping genetic algorithm" to a well-known version of the university course timetabling problem and looks at how such an algorithm might be improved, through the introduction of a number of different fitness functions and the use of an additional stochastic local-search operator.
Abstract: This paper describes the applicability of the so-called "grouping genetic algorithm" to a well-known version of the university course timetabling problem. We note that there are, in fact, various scaling up issues surrounding this sort of algorithm and, in particular, see that it behaves in quite different ways with different sized problem instances. As a by-product of these investigations, we introduce a method for measuring population diversities and distances between individuals with the grouping representation. We also look at how such an algorithm might be improved: first, through the introduction of a number of different fitness functions and, second, through the use of an additional stochastic local-search operator (making in effect a grouping memetic algorithm). In many cases, we notice that the best results are actually returned when the grouping genetic operators are removed altogether, thus highlighting many of the issues that are raised in the study.

Journal ArticleDOI
TL;DR: This manuscript surveys computational modeling efforts by researchers in developmental psychology and finds many developmental features of typical and atypical perception, cognition, and language have been modeled using connectionist methods.
Abstract: This manuscript surveys computational modeling efforts by researchers in developmental psychology. Developmental psychology is ready to blossom into a modern science that focuses on causal mechanistic explanations of development rather than just describing and classifying behaviors. Computational modeling is the key to this process. However, to be effective, models must not only mimic observed data. They must also be transparent, grounded, and plausible to be accepted by the developmental psychology community. Connectionist models provide one such example. Many developmental features of typical and atypical perception, cognition, and language have been modeled using connectionist methods. Successful models are closely tied to the details of existing empirical studies and make concrete testable predictions. The success of such a project relies on the close collaboration of computational scientists with empirical psychologists.

Journal ArticleDOI
TL;DR: Results indicate that inclusion of a rule migration mechanism inspired by parallel genetic algorithms is an effective way to improve learning speed in comparison to equivalent single systems.
Abstract: This paper presents an investigation into exploiting the population-based nature of learning classifier systems (LCSs) for their use within highly parallel systems. In particular, the use of simple payoff and accuracy-based LCSs within the ensemble machine approach is examined. Results indicate that inclusion of a rule migration mechanism inspired by parallel genetic algorithms is an effective way to improve learning speed in comparison to equivalent single systems. Presentation of a mechanism which exploits the underlying niche-based generalization mechanism of accuracy-based systems is then shown to further improve their performance, particularly, as task complexity increases. This is not found to be the case for payoff-based systems. Finally, considerably better than linear speedup is demonstrated with the accuracy-based systems on a version of the well-known Boolean logic benchmark task used throughout.

Journal ArticleDOI
TL;DR: A novel method for learning complex concepts/hypotheses directly from raw training data that handles the complexity of the learning task by applying cooperative coevolution to decompose the problem automatically at the genotype level.
Abstract: In this paper, we present a novel method for learning complex concepts/hypotheses directly from raw training data. The task addressed here concerns data-driven synthesis of recognition procedures for real-world object recognition. The method uses linear genetic programming to encode potential solutions expressed in terms of elementary operations, and handles the complexity of the learning task by applying cooperative coevolution to decompose the problem automatically at the genotype level. The training coevolves feature extraction procedures, each being a sequence of elementary image processing and computer vision operations applied to input images. Extensive experimental results show that the approach attains competitive performance for three-dimensional object recognition in real synthetic aperture radar imagery.
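The cooperative-coevolutionary decomposition mentioned above is commonly implemented with the standard fitness-assignment scheme, in which a subcomponent is evaluated by assembling it with representatives of the other subpopulations. This generic sketch does not reproduce the paper's image-processing setup:

```python
def cc_fitness(subpop_index, individual, representatives, evaluate):
    # Cooperative coevolution: score one subcomponent (e.g., one feature
    # extraction procedure) by combining it with the current best
    # representative from every other subpopulation and evaluating the
    # assembled complete solution. 'evaluate' is any whole-solution
    # fitness function supplied by the application.
    team = list(representatives)
    team[subpop_index] = individual
    return evaluate(team)
```

For example, `cc_fitness(1, candidate, reps, evaluate)` scores `candidate` in the context of the other species' current representatives.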

Journal ArticleDOI
TL;DR: This paper describes Christiansen grammar evolution (CGE), a new evolutionary automatic programming algorithm that extends standard grammar evolution by replacing context-free grammars with Christiansen grammars.
Abstract: This paper describes Christiansen grammar evolution (CGE), a new evolutionary automatic programming algorithm that extends standard grammar evolution (GE) by replacing context-free grammars with Christiansen grammars. GE only takes syntactic restrictions into account to generate valid individuals. CGE adds semantics to ensure that individuals are both semantically and syntactically valid. It is empirically shown that our approach improves GE performance and even allows the solution of some problems that are difficult to tackle with GE.
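For context, the standard GE genotype-to-phenotype mapping that CGE extends can be sketched as follows. The toy grammar is an illustrative assumption, and the Christiansen-grammar step (rewriting the grammar as derivation proceeds to enforce semantics) is omitted:

```python
def ge_map(codons, grammar, start, max_steps=100):
    # Standard grammatical evolution mapping: each codon, taken modulo
    # the number of productions, selects the rule used to expand the
    # leftmost nonterminal. Codons wrap around if the derivation needs
    # more choices than the genome is long. CGE would additionally update
    # the grammar after each step to enforce semantic constraints.
    seq, idx = [start], 0
    for _ in range(max_steps):
        nts = [i for i, s in enumerate(seq) if s in grammar]
        if not nts:
            return "".join(seq)          # fully derived phenotype
        rules = grammar[seq[nts[0]]]
        choice = rules[codons[idx % len(codons)] % len(rules)]
        idx += 1
        seq[nts[0]:nts[0] + 1] = choice
    return None                          # derivation did not terminate

# Toy expression grammar (illustrative, not from the paper).
GRAMMAR = {"<e>": [["<e>", "+", "<e>"], ["x"], ["1"]]}
```

With `GRAMMAR`, the codon list `[0, 1, 2]` derives `<e>` into `x+1`: codon 0 picks the recursive rule, then codons 1 and 2 pick the terminals.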

Journal ArticleDOI
TL;DR: The role of penalty coefficients in EAs in terms of time complexity is analyzed and it is shown that in some examples, EAs benefit greatly from higher penalty coefficients, while in other examples,EAs benefit from lower penalty coefficients.
Abstract: Although there are many evolutionary algorithms (EAs) for solving constrained optimization problems, there are few rigorous theoretical analyses. This paper presents a time complexity analysis of EAs for solving constrained optimization. It is shown that when the penalty coefficient is chosen properly, direct comparison between pairs of solutions using a penalty fitness function is equivalent to comparison using the criteria "superiority of feasible point" or "superiority of objective function value." This paper analyzes the role of penalty coefficients in EAs in terms of time complexity. The results show that in some examples EAs benefit greatly from higher penalty coefficients, while in other examples EAs benefit from lower ones. This paper also investigates the runtime of EAs for solving the 0-1 knapsack problem, and the results indicate that the mean first hitting time ranges from polynomial to exponential when different penalty coefficients are used.
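The equivalence described in this abstract can be made concrete with a toy minimization example; the numbers are illustrative assumptions, not from the paper's analysis:

```python
def penalized(f_val, violation, C):
    # Static penalty value for minimization: f(x) + C * v(x), where
    # v(x) >= 0 measures total constraint violation (0 iff feasible).
    return f_val + C * violation

# Hypothetical pair of solutions: a is feasible but has the worse
# objective; b is infeasible but has the better objective.
a = (5.0, 0.0)   # (objective value, violation)
b = (1.0, 2.0)

# With a small coefficient the infeasible solution wins the pairwise
# comparison; with a sufficiently large coefficient the comparison
# reduces to "superiority of feasible point": feasible always wins.
assert penalized(*b, 0.1) < penalized(*a, 0.1)
assert penalized(*a, 100.0) < penalized(*b, 100.0)
```

The paper's point is that this choice of coefficient is not innocuous: it can change the expected first hitting time of the EA from polynomial to exponential.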

Journal ArticleDOI
TL;DR: A systematic method for incorporating the tradeoff wisdom inspired by the circuit domain knowledge in the formulation of the composite cost function is proposed and results show significant improvement in both the chosen design problems.
Abstract: Typical analog and radio frequency (RF) circuit sizing optimization problems are computationally hard and require the handling of several conflicting cost criteria. Many researchers have used sequential stochastic refinement methods to solve them, where the different cost criteria can either be combined into a single-objective function to find a unique solution, or they can be handled by multiobjective optimization methods to produce tradeoff solutions on the Pareto front. This paper presents a method for solving the problem by the former approach. We propose a systematic method for incorporating the tradeoff wisdom inspired by circuit domain knowledge in the formulation of the composite cost function. Key issues have been identified and the problem has been divided into two parts: a) normalization of objective functions and b) assignment of weights to objectives in the cost function. A nonlinear, parameterized normalization strategy has been proposed and shown to be better than traditional linear normalization functions. Further, the designers' problem-specific knowledge is assembled in the form of a partially ordered set, which is used to construct a hierarchical cost graph for the problem. The scalar cost function is calculated based on this graph. Adaptive mechanisms have been introduced to dynamically change the structure of the graph to improve the chances of reaching the near-optimal solution. A correlated double sampling offset-compensated switched capacitor analog integrator circuit and an RF low-noise amplifier in an industry-standard 0.18 μm CMOS technology have been chosen for experimental study. Optimization results have been shown for both the traditional and the proposed methods. The results show significant improvement in both the chosen design problems.
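A minimal sketch of a nonlinear, parameterized normalization and the resulting scalar cost is shown below. The logistic form and the flat weighted sum are illustrative assumptions: the paper's exact normalization function is not given here, and its weights come from a hierarchical cost graph, which this sketch does not reproduce:

```python
import math

def normalize(value, spec, sharpness=1.0):
    # Nonlinear, parameterized normalization (assumed logistic form):
    # map a cost criterion onto (0, 1) relative to its specification,
    # saturating once the spec is comfortably met or badly missed.
    return 1.0 / (1.0 + math.exp(-sharpness * (value - spec)))

def scalar_cost(values, specs, weights):
    # Composite single-objective cost for minimization: weighted sum of
    # normalized criteria (weights would encode designer preferences).
    return sum(w * normalize(v, s) for v, s, w in zip(values, specs, weights))
```

Compared with linear normalization, the saturation means that wildly missed specifications cannot drown out the remaining objectives in the sum.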

Journal ArticleDOI
TL;DR: It is argued that the standard evolutionary language game framework cannot explain the emergence of compositional codes-communication codes that preserve neighborhood relationships by mapping similar signals into similar meanings-even though use of those codes would result in a much higher payoff in the case that signals are noisy.
Abstract: Evolutionary language games have proved a useful tool to study the evolution of communication codes in communities of agents that interact among themselves by transmitting and interpreting a fixed repertoire of signals. Most studies have focused on the emergence of Saussurean codes (i.e., codes characterized by an arbitrary one-to-one correspondence between meanings and signals). In this contribution, we argue that the standard evolutionary language game framework cannot explain the emergence of compositional codes-communication codes that preserve neighborhood relationships by mapping similar signals into similar meanings-even though use of those codes would result in a much higher payoff in the case that signals are noisy. We introduce an alternative evolutionary setting in which the meanings are assimilated sequentially and show that the gradual building of the meaning-signal mapping leads to the emergence of mappings with the desired compositional property.
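The payoff in the standard evolutionary language game framework mentioned above is commonly computed from production and interpretation matrices. This small sketch uses that conventional formulation; the exact payoff in the paper may differ:

```python
def payoff(P, Q):
    # Expected communicative success: P[m][s] is the probability that a
    # speaker produces signal s for meaning m, and Q[s][m] is the
    # probability that a hearer interprets signal s as meaning m.
    return sum(P[m][s] * Q[s][m]
               for m in range(len(P)) for s in range(len(P[0])))

# A perfect Saussurean code (arbitrary one-to-one meaning-signal map)
# earns the maximum payoff: one unit per meaning.
identity = [[1.0, 0.0], [0.0, 1.0]]
assert payoff(identity, identity) == 2.0
```

The paper's argument concerns what this framework leaves out: under signal noise, codes mapping similar signals to similar meanings would score higher, yet selection in the standard setting does not find them.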

Journal ArticleDOI
TL;DR: Although both direct encoding and NetKeys are shown to give better results on the test problems used in this paper, the Dandelion code should be considered as a strong alternative, particularly for very large networks.
Abstract: There are many applications where it is necessary to find an optimal spanning tree. For several of these, recent research has suggested the use of genetic algorithms (GAs). Historically, the Prüfer code has been one of the most popular representations for spanning trees, due largely to the bijective mapping between genotype and phenotype. However, it has attracted much criticism for its low locality, a primary reason for its poor performance in GAs. Other representations such as direct encoding and network random keys have been shown to be far more effective. In 2001, an alternative called the Blob code was identified and adapted for use in GAs. It was shown to exhibit significantly higher locality than the Prüfer code. For a simple test problem, a GA using the Blob code was found to substantially outperform one using the Prüfer code. This paper suggests an alternative called the Dandelion code, which is more efficient and exhibits yet higher locality. Although both direct encoding and NetKeys are shown to give better results on the test problems used in this paper, the Dandelion code should be considered as a strong alternative, particularly for very large networks.
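For reference, the Prüfer code's bijective genotype-to-phenotype mapping can be decoded as follows (a straightforward O(n²) textbook version; the Blob and Dandelion decoders differ in detail but play the same role of turning an integer string into a tree):

```python
def prufer_to_tree(prufer):
    # Decode a Prüfer sequence of length n-2 into the edge list of a
    # labeled spanning tree on nodes 0..n-1.
    n = len(prufer) + 2
    degree = [1] * n
    for v in prufer:
        degree[v] += 1
    edges = []
    for v in prufer:
        # Attach the smallest-label remaining leaf to v.
        u = min(i for i in range(n) if degree[i] == 1)
        edges.append((u, v))
        degree[u] -= 1
        degree[v] -= 1
    # Exactly two nodes of degree 1 remain; join them.
    u, v = (i for i in range(n) if degree[i] == 1)
    edges.append((u, v))
    return edges
```

The low locality criticized above is visible in this decoder: changing a single codon can reroute every subsequent leaf attachment, producing a very different tree.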

Journal ArticleDOI
TL;DR: This paper describes the implementation of fault tolerant features that address error detection and recovery through dynamic routing, reconfiguration, and on-chip reprogramming in a novel application specific integrated circuit.
Abstract: Fault tolerance is a crucial operational aspect of biological systems, and the self-repair capabilities of complex organisms far exceed those of even the most advanced electronic devices. While many of the processes used by nature to achieve fault tolerance cannot easily be applied to silicon-based systems, in this paper we show that mechanisms loosely inspired by the operation of multicellular organisms can be transported to electronic systems to provide self-repair capabilities. Features such as dynamic routing, reconfiguration, and on-chip reprogramming can be invaluable for the realization of adaptive hardware systems and for the design of highly complex systems based on the kind of unreliable components that are likely to be introduced in the not-too-distant future. In this paper, we describe the implementation of fault tolerant features that address error detection and recovery through dynamic routing, reconfiguration, and on-chip reprogramming in a novel application specific integrated circuit. We take inspiration from three biological models: phylogenesis, ontogenesis, and epigenesis (hence the POE in POEtic). As in nature, our approach is based on a set of separate and complementary techniques that exploit the novel mechanisms provided by our device in the particular context of fault tolerance.

Journal ArticleDOI
TL;DR: The SVLC algorithm recurrently "glues" or synapses homogenous genetic subsequences together in such a way that common parental sequences are automatically preserved in the offspring with only the genetic differences being exchanged or removed, independent of the length of such differences.
Abstract: The synapsing variable-length crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche formation techniques to be used with them. Unlike other variable-length crossover techniques, which consider genomes to be rigid, inflexible arrays and select some or all of the crossover points at random, the SVLC algorithm considers genomes to be flexible and chooses nonrandom crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogeneous genetic subsequences together in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independent of the length of such differences. In a variable-length test problem, the SVLC algorithm compares favorably with current variable-length crossover techniques. The variable-length approach is further advocated by demonstrating how a variable-length genetic algorithm (GA) can obtain a high fitness solution in fewer iterations than a traditional fixed-length GA in a two-dimensional vector approximation task.
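The idea of preserving common parental sequences while exchanging only the differences can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: the alignment here uses difflib's matching blocks rather than the recurrent synapsing procedure:

```python
import random
from difflib import SequenceMatcher

def svlc_like_crossover(p1, p2, rng):
    # SVLC-flavored sketch: align two variable-length parents on their
    # common subsequences, preserve the shared ("synapsed") segments in
    # both offspring, and exchange each pair of differing segments at
    # random, independent of their lengths.
    c1, c2 = [], []
    i = j = 0
    for a, b, size in SequenceMatcher(None, p1, p2).get_matching_blocks():
        g1, g2 = p1[i:a], p2[j:b]        # differing (non-aligned) segments
        if rng.random() < 0.5:
            g1, g2 = g2, g1              # exchange the genetic differences
        common = p1[a:a + size]          # shared segment, kept in both
        c1 += g1 + common
        c2 += g2 + common
        i, j = a + size, b + size
    return c1, c2

# Parents that differ only in one internal segment: offspring either
# reproduce the parents or swap just that segment, flanks intact.
p1, p2 = list("abcXYdef"), list("abcZdef")
c1, c2 = svlc_like_crossover(p1, p2, random.Random(0))
```

Note how the offspring lengths follow the exchanged segments: crossing an 8-gene parent with a 7-gene parent yields children of lengths 8 and 7 or 7 and 8, never garbling the common flanks.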