
Showing papers in "IEEE Transactions on Evolutionary Computation in 2008"


Journal ArticleDOI
TL;DR: This paper discusses natural biogeography and its mathematics, shows how they can be used to solve optimization problems, and observes that BBO shares features with other biology-based optimization methods, such as GAs and particle swarm optimization (PSO).
Abstract: Biogeography is the study of the geographical distribution of biological organisms. Mathematical equations that govern the distribution of organisms were first discovered and developed during the 1960s. The mindset of the engineer is that we can learn from nature. This motivates the application of biogeography to optimization problems. Just as the mathematics of biological genetics inspired the development of genetic algorithms (GAs), and the mathematics of biological neurons inspired the development of artificial neural networks, this paper considers the mathematics of biogeography as the basis for the development of a new field: biogeography-based optimization (BBO). We discuss natural biogeography and its mathematics, and then discuss how it can be used to solve optimization problems. We see that BBO has features in common with other biology-based optimization methods, such as GAs and particle swarm optimization (PSO). This makes BBO applicable to many of the same types of problems that GAs and PSO are used for, namely, high-dimension problems with multiple local optima. However, BBO also has some features that are unique among biology-based optimization methods. We demonstrate the performance of BBO on a set of 14 standard benchmarks and compare it with seven other biology-based optimization algorithms. We also demonstrate BBO on a real-world sensor selection problem for aircraft engine health estimation.

3,418 citations
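
The migration mechanism is the part of BBO that differs most from GA crossover, so a compact sketch may help. Below is a minimal Python illustration of one migration-and-mutation step under the linear immigration/emigration model described above; the function name, rate model, and bounds are illustrative assumptions, not the paper's reference implementation.

```python
import random

def bbo_step(population, fitness, mutation_rate=0.01, bounds=(-5.0, 5.0)):
    """One migration-and-mutation step of a simplified BBO sketch.
    Fitter habitats get higher emigration rates (mu) and lower
    immigration rates (lam), here via the linear model."""
    n = len(population)
    order = sorted(range(n), key=lambda i: fitness[i])   # worst ... best
    rank = {h: r for r, h in enumerate(order)}
    lam = [1.0 - rank[i] / (n - 1) for i in range(n)]    # immigration rate
    mu = [rank[i] / (n - 1) for i in range(n)]           # emigration rate
    next_pop = []
    for i, habitat in enumerate(population):
        new_habitat = list(habitat)
        for d in range(len(habitat)):
            if random.random() < lam[i]:
                # Choose an emigrating habitat by roulette wheel on mu
                j = random.choices(range(n), weights=mu)[0]
                new_habitat[d] = population[j][d]
            if random.random() < mutation_rate:
                new_habitat[d] = random.uniform(*bounds)
        next_pop.append(new_habitat)
    return next_pop
```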


Journal ArticleDOI
TL;DR: This paper presents a detailed overview of the basic concepts of PSO and its variants, and provides a comprehensive survey of the power system applications that have benefited from the powerful nature of PSO as an optimization technique.
Abstract: Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. Also, it provides a comprehensive survey on the power system applications that have benefited from the powerful nature of PSO as an optimization technique. For each application, technical details that are required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions are also discussed.

2,147 citations
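
As context for the survey, the canonical global-best PSO update that the surveyed variants build on fits in a few lines. This is a generic textbook sketch (inertia weight w, acceleration constants c1 and c2), not any specific power-system variant from the paper.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49,
        lo=-5.0, hi=5.0):
    """Minimal global-best PSO sketch (inertia-weight variant, minimization)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity pulls toward personal and global bests
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

# Example: minimize the sphere function in 5 dimensions
best, val = pso(lambda x: sum(v * v for v in x), dim=5)
```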


Journal ArticleDOI
TL;DR: This paper presents a novel algorithm to accelerate differential evolution (DE) that employs opposition-based learning (OBL) for population initialization and for generation jumping; results confirm that the resulting ODE outperforms the original DE and FADE in terms of convergence speed and solution accuracy.
Abstract: Evolutionary algorithms (EAs) are well-known optimization approaches to deal with nonlinear and complex problems. However, these population-based algorithms are computationally expensive due to the slow nature of the evolutionary process. This paper presents a novel algorithm to accelerate differential evolution (DE). The proposed opposition-based DE (ODE) employs opposition-based learning (OBL) for population initialization and also for generation jumping. In this work, opposite numbers have been utilized to improve the convergence rate of DE. A comprehensive set of 58 complex benchmark functions including a wide range of dimensions is employed for experimental verification. The influence of dimensionality, population size, jumping rate, and various mutation strategies is also investigated. Additionally, the contribution of opposite numbers is empirically verified. We also provide a comparison of ODE to fuzzy adaptive DE (FADE). Experimental results confirm that the ODE outperforms the original DE and FADE in terms of convergence speed and solution accuracy.

1,419 citations
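
The opposite-point idea at the heart of OBL is simple enough to sketch. Assuming box constraints and minimization (names and signatures are illustrative), opposition-based initialization looks like this; generation jumping applies the same construction during the run, using the population's current per-dimension min and max as the bounds.

```python
import random

def opposite(x, lo, hi):
    """Opposite point, taken component-wise: x~_i = lo_i + hi_i - x_i."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def obl_init(f, pop_size, lo, hi):
    """Opposition-based initialization: evaluate each random point and
    its opposite, then keep the fitter half (minimization)."""
    pop = [[random.uniform(l, h) for l, h in zip(lo, hi)]
           for _ in range(pop_size)]
    candidates = pop + [opposite(x, lo, hi) for x in pop]
    candidates.sort(key=f)
    return candidates[:pop_size]
```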


Journal ArticleDOI
TL;DR: A simulated annealing based multiobjective optimization algorithm that incorporates the concept of an archive in order to provide a set of tradeoff solutions for the problem under consideration; it is found to be significantly superior on many-objective test problems.
Abstract: This paper describes a simulated annealing based multiobjective optimization algorithm that incorporates the concept of an archive in order to provide a set of tradeoff solutions for the problem under consideration. To determine the acceptance probability of a new solution vis-a-vis the current solution, an elaborate procedure is followed that takes into account the domination status of the new solution with the current solution, as well as those in the archive. A measure of the amount of domination between two solutions is also used for this purpose. A complexity analysis of the proposed algorithm is provided. An extensive comparative study of the proposed algorithm with two other existing and well-known multiobjective evolutionary algorithms (MOEAs) demonstrates the effectiveness of the former with respect to five existing performance measures and several test problems of varying degrees of difficulty. In particular, the proposed algorithm is found to be significantly superior for many-objective test problems (e.g., 4-, 5-, 10-, and 15-objective problems), while recent studies have indicated that Pareto ranking-based MOEAs perform poorly for such problems. In a part of the investigation, a comparison of the real-coded version of the proposed algorithm is conducted with a very recent multiobjective simulated annealing algorithm, where the performance of the former is found to be generally superior to that of the latter.

764 citations
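
A hedged sketch of the dominance-based acceptance test may clarify the mechanism. The exact domination-amount measure and acceptance formula in the paper are more elaborate; the sketch below only reproduces the qualitative behavior: the more a candidate is dominated by the current solution and the archive, the less likely it is accepted, with high temperature softening the test.

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_amount(a, b, ranges):
    """Amount of domination between two objective vectors: product of
    the normalized per-objective gaps where they differ (a hedged
    reading of the measure used in the paper)."""
    prod = 1.0
    for x, y, r in zip(a, b, ranges):
        if x != y:
            prod *= abs(x - y) / r
    return prod

def accept_dominated(current_f, new_f, archive, ranges, temp):
    """Acceptance test for a candidate dominated by the current solution:
    average its domination amount over all dominating points (current
    solution plus archive) and pass it through a Boltzmann-style rule."""
    dominating = [s for s in archive + [current_f] if dominates(s, new_f)]
    avg_dom = sum(domination_amount(s, new_f, ranges)
                  for s in dominating) / max(len(dominating), 1)
    return random.random() < 1.0 / (1.0 + math.exp(avg_dom / temp))
```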


Journal ArticleDOI
TL;DR: It is demonstrated that, compared with GDE3, RM-MEDA is not sensitive to algorithmic parameters, and has good scalability to the number of decision variables in the case of nonlinear variable linkages.
Abstract: Under mild conditions, it can be induced from the Karush-Kuhn-Tucker condition that the Pareto set, in the decision space, of a continuous multiobjective optimization problem is a piecewise continuous (m - 1)-D manifold, where m is the number of objectives. Based on this regularity property, we propose a regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA) for continuous multiobjective optimization problems with variable linkages. At each generation, the proposed algorithm models a promising area in the decision space by a probability distribution whose centroid is an (m - 1)-D piecewise continuous manifold. The local principal component analysis algorithm is used for building such a model. New trial solutions are sampled from the model thus built. A nondominated sorting-based selection is used for choosing solutions for the next generation. Systematic experiments have shown that, overall, RM-MEDA outperforms three other state-of-the-art algorithms, namely, GDE3, PCX-NSGA-II, and MIDEA, on a set of test instances with variable linkages. We have demonstrated that, compared with GDE3, RM-MEDA is not sensitive to algorithmic parameters and has good scalability to the number of decision variables in the case of nonlinear variable linkages. A few shortcomings of RM-MEDA have also been identified and discussed in this paper.

660 citations


Journal ArticleDOI
TL;DR: It is shown that the proposed new version of DE, with the adaptive LS, performs better than, or at least comparably to, the classic DE algorithm.
Abstract: We propose a crossover-based adaptive local search (LS) operation for enhancing the performance of the standard differential evolution (DE) algorithm. Incorporating LS heuristics is often very useful in designing an effective evolutionary algorithm for global optimization. However, determining a single LS length that can serve a wide range of problems is a critical issue. We present an LS technique that solves this problem by adaptively adjusting the length of the search, using a hill-climbing heuristic. The emphasis of this paper is to demonstrate how this LS scheme can improve the performance of DE. Experimenting with a wide range of benchmark functions, we show that the proposed new version of DE, with the adaptive LS, performs better than, or at least comparably to, the classic DE algorithm. Performance comparisons with other LS heuristics and with some other well-known evolutionary algorithms from the literature are also presented.

597 citations
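
The adaptive-length idea can be sketched loosely: rather than fixing the local search length in advance, keep searching while hill-climbing keeps paying off, up to a cap. The paper's crossover-based XLS operator is more elaborate; everything below (names, the one-point crossover, the cap) is an illustrative assumption.

```python
import random

def adaptive_local_search(f, start, pool, max_len=16):
    """Loose sketch of a crossover-based local search whose length is
    set adaptively by hill-climbing: stop when a trial fails to improve
    or when a maximum length is reached (minimization)."""
    current, current_f = list(start), f(start)
    for _ in range(max_len):
        partner = random.choice(pool)
        cut = random.randrange(1, len(current))
        child = current[:cut] + list(partner)[cut:]   # one-point crossover
        child_f = f(child)
        if child_f >= current_f:
            break                                     # hill-climbing stops
        current, current_f = child, child_f
    return current, current_f
```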


Journal ArticleDOI
TL;DR: A fuzzy clustering-based particle swarm (FCPSO) algorithm is proposed to solve the highly constrained EED problem involving conflicting objectives; it provided a satisfactory compromise solution in almost all trials, validating the efficacy and applicability of the proposed approach to real-world multiobjective optimization problems.
Abstract: Economic dispatch is a highly constrained optimization problem encompassing interaction among decision variables. Environmental concerns that arise due to the operation of fossil fuel fired electric generators transform the classical problem into multiobjective environmental/economic dispatch (EED). In this paper, a fuzzy clustering-based particle swarm (FCPSO) algorithm has been proposed to solve the highly constrained EED problem involving conflicting objectives. FCPSO uses an external repository to preserve nondominated particles found along the search process. The proposed fuzzy clustering technique manages the size of the repository within limits without destroying the characteristics of the Pareto front. A niching mechanism has been incorporated to direct the particles towards lesser explored regions of the Pareto front. To avoid entrapment in local optima and enhance the exploratory capability of the particles, a self-adaptive mutation operator has been proposed. In addition, the algorithm incorporates a fuzzy-based feedback mechanism and iteratively uses the information to determine the compromise solution. The algorithm's performance has been examined over the standard IEEE 30-bus six-generator test system, whereby it generated a uniformly distributed Pareto front whose optimality has been authenticated by benchmarking against the ε-constraint method. Results also revealed that the proposed approach obtained high-quality solutions and was able to provide a satisfactory compromise solution in almost all the trials, thereby validating the efficacy and applicability of the proposed approach to real-world multiobjective optimization problems.

296 citations


Journal ArticleDOI
TL;DR: The result is a hybrid metaheuristic algorithm called Archive-Based hYbrid Scatter Search (AbYSS), which follows the scatter search structure but uses mutation and crossover operators from evolutionary algorithms; AbYSS outperforms the other two algorithms with regard to the diversity of the solutions.
Abstract: We propose the use of a new algorithm to solve multiobjective optimization problems. Our proposal adapts the well-known scatter search template for single-objective optimization to the multiobjective domain. The result is a hybrid metaheuristic algorithm called Archive-Based hYbrid Scatter Search (AbYSS), which follows the scatter search structure but uses mutation and crossover operators from evolutionary algorithms. AbYSS incorporates typical concepts from the multiobjective field, such as Pareto dominance, density estimation, and an external archive to store the nondominated solutions. We evaluate AbYSS with a standard benchmark including both unconstrained and constrained problems, and it is compared with two state-of-the-art multiobjective optimizers, NSGA-II and SPEA2. The results obtained indicate that, according to the benchmark and parameter settings used, AbYSS outperforms the other two algorithms as regards the diversity of the solutions, and it obtains very competitive results according to the convergence to the true Pareto fronts and the hypervolume metric.

280 citations


Journal ArticleDOI
TL;DR: The empirical results suggest that the new adaptive tradeoff model (ATM) outperforms or performs similarly to other state-of-the-art techniques referred to in this paper in terms of the quality of the resulting solutions.
Abstract: In this paper, an adaptive tradeoff model (ATM) is proposed for constrained evolutionary optimization. In this model, three main issues are considered: (1) the evaluation of infeasible solutions when the population contains only infeasible individuals; (2) balancing feasible and infeasible solutions when the population consists of a combination of feasible and infeasible individuals; and (3) the selection of feasible solutions when the population is composed of feasible individuals only. These issues are addressed in this paper by designing different tradeoff schemes during different stages of a search process to obtain an appropriate tradeoff between the objective function and constraint violations. In addition, a simple evolution strategy (ES) is used as the search engine. By integrating ATM with ES, a generic constrained optimization evolutionary algorithm (ATMES) is derived. The new method is tested on 13 well-known benchmark test functions, and the empirical results suggest that it outperforms or performs similarly to other state-of-the-art techniques referred to in this paper in terms of the quality of the resulting solutions.

272 citations
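
The three-phase structure is easy to mirror in code. The sketch below is a loose reading of the model, with a deliberately simple phase-2 compromise; the actual tradeoff schemes in ATM are more carefully designed, and all names and weightings here are illustrative.

```python
def violation(g_values):
    """Total constraint violation for constraints of the form g(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def rank_population(pop, f, constraints):
    """Three-phase ranking in the spirit of ATM: the comparison rule
    depends on the population's feasibility composition (minimization)."""
    evals = [(x, f(x), violation(constraints(x))) for x in pop]
    feasible = [e for e in evals if e[2] == 0.0]
    if not feasible:
        # Phase 1: all infeasible -> drive the search toward feasibility
        evals.sort(key=lambda e: e[2])
    elif len(feasible) < len(evals):
        # Phase 2: mixed -> feasible first, then a weighted compromise
        # between objective and violation (ATM's actual scheme is richer)
        phi = len(feasible) / len(evals)
        evals.sort(key=lambda e: (e[2] > 0.0, (1 - phi) * e[1] + phi * e[2]))
    else:
        # Phase 3: all feasible -> plain objective comparison
        evals.sort(key=lambda e: e[1])
    return [e[0] for e in evals]
```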


Journal ArticleDOI
TL;DR: A dynamic environment generator that can systematically generate dynamic environments of different difficulty with respect to memory schemes is proposed; experiments show that the memory scheme is efficient for PBILs in dynamic environments and that different interactions exist between the memory scheme and the random immigrants and multipopulation schemes in different dynamic environments.
Abstract: In recent years, interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs) has grown due to its importance in real-world applications. Several approaches, such as the memory and multiple population schemes, have been developed for EAs to address dynamic problems. This paper investigates the application of the memory scheme for population-based incremental learning (PBIL) algorithms, a class of EAs, for DOPs. A PBIL-specific associative memory scheme, which stores best solutions as well as corresponding environmental information in the memory, is investigated to improve its adaptability in dynamic environments. In this paper, the interactions between the memory scheme and random immigrants, multipopulation, and restart schemes for PBILs in dynamic environments are investigated. In order to better test the performance of memory schemes for PBILs and other EAs in dynamic environments, this paper also proposes a dynamic environment generator that can systematically generate dynamic environments of different difficulty with respect to memory schemes. Using this generator, a series of dynamic environments are generated and experiments are carried out to compare the performance of the investigated algorithms. The experimental results show that the proposed memory scheme is efficient for PBILs in dynamic environments and also indicate that different interactions exist between the memory scheme and the random immigrants and multipopulation schemes for PBILs in different dynamic environments.

259 citations
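
For readers unfamiliar with PBIL, the core update that the memory scheme wraps around is tiny: sample a population from a probability vector, then pull the vector toward the best sample. The sketch below is the basic binary PBIL step on a maximization toy; the paper's associative memory additionally stores (best solution, probability vector) pairs and reactivates them when the environment changes.

```python
import random

def pbil_step(prob_vec, f, pop_size=50, lr=0.05, mut_p=0.02, mut_shift=0.05):
    """One generation of basic binary PBIL (maximization)."""
    pop = [[1 if random.random() < p else 0 for p in prob_vec]
           for _ in range(pop_size)]
    best = max(pop, key=f)
    # Pull the probability vector toward the best sample
    prob_vec = [(1 - lr) * p + lr * b for p, b in zip(prob_vec, best)]
    # Small random drift keeps the vector from locking in, which
    # matters in changing environments
    prob_vec = [(1 - mut_shift) * p + mut_shift * random.random()
                if random.random() < mut_p else p
                for p in prob_vec]
    return prob_vec

# Example: maximize the number of ones in a 20-bit string
pv = [0.5] * 20
for _ in range(100):
    pv = pbil_step(pv, sum)
```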


Journal ArticleDOI
TL;DR: An algorithm is presented that coevolves fitness predictors, optimized for the solution population, which reduce fitness evaluation cost and frequency while maintaining evolutionary progress; experiments demonstrate that fitness prediction can also reduce solution bloat and find solutions more reliably.
Abstract: We present an algorithm that coevolves fitness predictors, optimized for the solution population, which reduce fitness evaluation cost and frequency, while maintaining evolutionary progress. Fitness predictors differ from fitness models in that they may or may not represent the objective fitness, opening opportunities to adapt selection pressures and diversify solutions. The use of coevolution addresses three fundamental challenges faced in past fitness approximation research: 1) the model learning investment; 2) the level of approximation of the model; and 3) the loss of accuracy. We discuss applications of this approach and demonstrate its impact on the symbolic regression problem. We show that coevolved predictors scale favorably with problem complexity on a series of randomly generated test problems. Finally, we present additional empirical results that demonstrate that fitness prediction can also reduce solution bloat and find solutions more reliably.

Journal ArticleDOI
TL;DR: A generalization of the graph-based genetic programming technique known as Cartesian genetic programming (CGP) that utilizes automatic module acquisition, evolution, and reuse; the new modular method evolves solutions more quickly than the original nonmodular method, and the speedup is more pronounced on larger problems.
Abstract: This paper presents a generalization of the graph-based genetic programming (GP) technique known as Cartesian genetic programming (CGP). We have extended CGP by utilizing automatic module acquisition, evolution, and reuse. To benchmark the new technique, we have tested it on: various digital circuit problems, two symbolic regression problems, the lawnmower problem, and the hierarchical if-and-only-if problem. The results show the new modular method evolves solutions quicker than the original nonmodular method, and the speedup is more pronounced on larger problems. Also, the new modular method performs favorably when compared with other GP methods. Analysis of the evolved modules shows they often produce recognizable functions. Prospects for further improvements to the method are discussed.

Journal ArticleDOI
TL;DR: A multiobjective simulated annealer utilizing the relative dominance of a solution as the system energy for optimization, eliminating problems associated with composite objective functions is proposed and a method for choosing perturbation scalings promoting search both towards and across the Pareto front is proposed.
Abstract: Simulated annealing is a provably convergent optimizer for single-objective problems. Previously proposed multiobjective extensions have mostly taken the form of a single-objective simulated annealer optimizing a composite function of the objectives. We propose a multiobjective simulated annealer utilizing the relative dominance of a solution as the system energy for optimization, eliminating problems associated with composite objective functions. We also propose a method for choosing perturbation scalings promoting search both towards and across the Pareto front. We illustrate the simulated annealer's performance on a suite of standard test problems and provide comparisons with another multiobjective simulated annealer and the NSGA-II genetic algorithm. The new simulated annealer is shown to promote rapid convergence to the true Pareto front with a good coverage of solutions across it, comparing favorably with the other algorithms. An application of the simulated annealer to an industrial problem, the optimization of a code-division multiple access (CDMA) mobile telecommunications network's air interface, is presented; the simulated annealer is shown to generate nondominated solutions with an even and dense coverage that outperforms single-objective genetic algorithm optimizers.

Journal ArticleDOI
TL;DR: It is argued that EDAs are an efficient alternative for many instances of the protein structure prediction problem and are indeed appropriate for a theoretical analysis of search procedures in lattice models.
Abstract: Simplified lattice models have played an important role in protein structure prediction and protein folding problems. These models can be useful for an initial approximation of the protein structure, and for the investigation of the dynamics that govern the protein folding process. Estimation of distribution algorithms (EDAs) are efficient evolutionary algorithms that can learn and exploit the search space regularities in the form of probabilistic dependencies. This paper introduces the application of different variants of EDAs to the solution of the protein structure prediction problem in simplified models, and proposes their use as a simulation tool for the analysis of the protein folding process. We develop new ideas for the application of EDAs to the bidimensional and tridimensional (2-d and 3-d) simplified protein folding problems. This paper analyzes the rationale behind the application of EDAs to these problems, and elucidates the relationship between our proposal and other population-based approaches proposed for the protein folding problem. We argue that EDAs are an efficient alternative for many instances of the protein structure prediction problem and are indeed appropriate for a theoretical analysis of search procedures in lattice models. All the algorithms introduced are tested on a set of difficult 2-d and 3-d instances from lattice models. Some of the results obtained with EDAs are superior to the ones obtained with other well-known population-based optimization algorithms.

Journal ArticleDOI
TL;DR: A system for the registration of computed tomography and 3-D intraoperative ultrasound images is presented and it is demonstrated that precise registration is possible within a realistic range of initial misalignment.
Abstract: A system for the registration of computed tomography and 3-D intraoperative ultrasound images is presented. Three gradient-based methods and one evolutionary algorithm are compared with regard to their suitability for solving this image registration problem. The system has been developed for pedicle screw insertion during spinal surgery. With clinical preoperative and intraoperative data, it is demonstrated that precise registration is possible within a realistic range of initial misalignment. Significant differences can be observed between the optimization methods. The covariance matrix adaptation evolution strategy shows the best overall performance: only 4 of 12,000 registration trials with patient data failed to register correctly.

Journal ArticleDOI
TL;DR: Performance comparisons with other heuristic function approximation techniques show that XCSF yields competitive or even superior noise-robust performance; a novel closest classifier matching mechanism for the efficient compaction of XCSF's final problem solution is also introduced.
Abstract: An important strength of learning classifier systems (LCSs) lies in the combination of genetic optimization techniques with gradient-based approximation techniques. The chosen approximation technique develops locally optimal approximations, such as accurate classification estimates, Q-value predictions, or linear function approximations. The genetic optimization technique is designed to distribute these local approximations efficiently over the problem space. Together, the two components develop a distributed, locally optimized problem solution in the form of a population of expert rules, often called classifiers. In function approximation problems, the XCSF classifier system develops a problem solution in the form of overlapping, piecewise linear approximations. This paper shows that XCSF performance on function approximation problems additively benefits from: 1) improved representations; 2) improved genetic operators; and 3) improved approximation techniques. Additionally, this paper introduces a novel closest classifier matching mechanism for the efficient compaction of XCSF's final problem solution. The resulting compaction mechanism can boil the population size down by 90% on average, while decreasing prediction accuracy only marginally. Performance evaluations show that the additional mechanisms enable XCSF to reliably, accurately, and compactly approximate even seven-dimensional functions. Performance comparisons with other heuristic function approximation techniques show that XCSF yields competitive or even superior noise-robust performance.

Journal ArticleDOI
TL;DR: To overcome some problems related to the binary encoding schemes adopted in most cGAs, a new variant based on a real-valued solution coding is proposed, which achieves final solutions of the same quality as those found by binary cGAs, with a significantly reduced computational cost.
Abstract: Recent research on compact genetic algorithms (cGAs) has proposed a number of evolutionary search methods with reduced memory requirements. In cGAs, the evolution of populations is emulated by processing a probability vector with specific update rules. This paper considers the implementation of cGAs in microcontroller-based control platforms. In particular, to overcome some problems related to the binary encoding schemes adopted in most cGAs, this paper also proposes a new variant based on a real-valued solution coding. The presented variant achieves final solutions of the same quality as those found by binary cGAs, with a significantly reduced computational cost. The potential of the proposed approach is assessed by means of an extensive comparative study, which includes numerical results on benchmark functions, simulated and experimental microcontroller design problems.
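
The gist of a real-valued cGA can be sketched as follows: the virtual population is summarized by per-variable Gaussian statistics that are nudged toward each pairwise-tournament winner. The update rule below is a hedged reconstruction under a virtual population size Np; the paper's variant (including its truncated distributions and microcontroller-oriented details) differs in specifics.

```python
import random

def rcga(f, dim, iters=5000, virtual_pop=100):
    """Hedged sketch of a real-valued compact GA: per-variable Gaussian
    (mean, sigma) emulates the population; each step samples two
    competitors and updates the model toward the winner (minimization)."""
    mu = [0.0] * dim
    sigma = [1.0] * dim
    elite = None
    for _ in range(iters):
        a = [random.gauss(m, s) for m, s in zip(mu, sigma)]
        b = [random.gauss(m, s) for m, s in zip(mu, sigma)]
        winner, loser = (a, b) if f(a) < f(b) else (b, a)
        for d in range(dim):
            # Moment-style update of the virtual population's statistics
            new_mu = mu[d] + (winner[d] - loser[d]) / virtual_pop
            var = (sigma[d] ** 2 + mu[d] ** 2 - new_mu ** 2
                   + (winner[d] ** 2 - loser[d] ** 2) / virtual_pop)
            mu[d] = new_mu
            sigma[d] = max(var, 1e-12) ** 0.5
        if elite is None or f(winner) < f(elite):
            elite = winner
    return elite
```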

Journal ArticleDOI
TL;DR: It is shown that the complexity of the quantum genetic optimization algorithm (QGOA) is O(√N) in terms of the number of oracle calls in the selection procedure, which is confirmed by simulations of the algorithm.
Abstract: The complexity of the selection procedure of a genetic algorithm that requires reordering, if we restrict the class of the possible fitness functions to varying fitness functions, is O(N log N), where N is the size of the population. The quantum genetic optimization algorithm (QGOA) exploits the power of quantum computation in order to speed up genetic procedures. In QGOA, the classical fitness evaluation and selection procedures are replaced by a single quantum procedure. While the quantum and classical genetic algorithms use the same number of generations, the QGOA requires fewer operations to identify the high-fitness subpopulation at each generation. We show that the complexity of our QGOA is O(√N) in terms of the number of oracle calls in the selection procedure. Such theoretical results are confirmed by simulations of the algorithm.

Journal ArticleDOI
TL;DR: This paper provides a novel complementary design framework that is rooted in multiobjective optimization, genetic algorithms, and interactive user evaluation, and shows promising results in fitness convergence, design diversity, and user satisfaction metrics.
Abstract: This paper emphasizes the necessity of formally bringing qualitative and quantitative criteria of ergonomic design together, and provides a novel complementary design framework with this aim. Within this framework, different design criteria are viewed as optimization objectives, and design solutions are iteratively improved through the cooperative efforts of computer and user. The framework is rooted in multiobjective optimization, genetic algorithms, and interactive user evaluation. Three different algorithms based on the framework are developed, and tested with an ergonomic chair design problem. The parallel and multiobjective approaches show promising results in fitness convergence, design diversity, and user satisfaction metrics.

Journal ArticleDOI
TL;DR: The outcome from this test indicates that the JG paradigm is a very competitive scheme for multiobjective optimization and also a compatible evolutionary computing scheme when speed in convergence, diversity, and accuracy are simultaneously required.
Abstract: A new evolutionary computing algorithm on the basis of the ldquojumping genesrdquo (JG) phenomenon is proposed in this paper. It emulates the gene transposition in the genome that was discovered by Nobel Laureate, Barbara McClintock, in her work on the corn plants. The principle of JGs that is adopted for evolutionary computing is outlined. The procedures for executing the computational optimization are provided. A large number of constrained and unconstrained test functions have been utilized to verify this new scheme. Its performances on convergence and diversity have been statistically examined and comparisons with other evolutionary algorithms are carried out. It has been discovered that this new scheme is robust and able to provide outcomes quickly and accurately. A stringent measure of binary-indicator is also applied for algorithm classification. The outcome from this test indicates that the JG paradigm is a very competitive scheme for multiobjective optimization and also a compatible evolutionary computing scheme when speed in convergence, diversity, and accuracy are simultaneously required.

Journal ArticleDOI
TL;DR: An algorithm, IHSO, is described that quickly determines a solution's contribution, together with heuristics that reorder objectives to minimize the work required for IHSO to calculate a solution's contribution.
Abstract: When hypervolume is used as part of the selection or archiving process in a multiobjective evolutionary algorithm, it is necessary to determine which solutions contribute the least hypervolume to a front. Little focus has been placed on algorithms that quickly determine these solutions and there are no fast algorithms designed specifically for this purpose. We describe an algorithm, IHSO, that quickly determines a solution's contribution. Furthermore, we describe and analyze heuristics that reorder objectives to minimize the work required for IHSO to calculate a solution's contribution. Lastly, we describe and analyze search techniques that reduce the amount of work required for solutions other than the least contributing one. Combined, these techniques allow multiobjective evolutionary algorithms to calculate hypervolume inline in increasingly complex and large fronts in many objectives.
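
In two objectives, exclusive hypervolume contributions have a closed form that makes the underlying quantity concrete; IHSO's contribution is computing it efficiently in many objectives, where no such shortcut exists. A minimal 2-D illustration (minimization, with an illustrative reference point):

```python
def least_contributor_2d(front, ref):
    """Exclusive hypervolume contributions for a nondominated 2-D front
    (minimization). Each point's contribution is the rectangle bounded
    by its neighbors in the sorted order and the reference point."""
    pts = sorted(front)                # ascending in f1 => descending in f2
    contrib = []
    for i, (x, y) in enumerate(pts):
        right = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        above = pts[i - 1][1] if i > 0 else ref[1]
        contrib.append(((right - x) * (above - y), (x, y)))
    return min(contrib)

# Example: a nondominated 2-D front with reference point (1, 1)
front = [(0.1, 0.8), (0.3, 0.5), (0.6, 0.2)]
print(least_contributor_2d(front, ref=(1.0, 1.0)))   # -> (0.04, (0.1, 0.8))
```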

Journal ArticleDOI
TL;DR: This paper validates a previously designed and successfully tested evolutionary architecture for optimizing traffic light cycles on a real-world test case in Santa Cruz de Tenerife, Canary Islands, using the optimized traffic light cycle times in a simulated environment.
Abstract: In previous research, we designed and successfully tested an evolutionary optimization architecture for traffic light cycles. In this paper, we attempt to validate those results with a real-world test case. For a wide area of the city of Santa Cruz de Tenerife (Canary Islands), we have improved traffic behavior using our optimized traffic light cycle times in a simulated environment. Throughout this paper, we present some of the experiences, knowledge, and problems encountered.

Journal ArticleDOI
TL;DR: A fuzzy data-mining algorithm for extracting both association rules and membership functions from quantitative transaction data is proposed, along with a genetic algorithm (GA)-based framework for finding membership functions suitable for mining problems.
Abstract: Data mining is most commonly used in attempts to induce association rules from transaction data. Most previous studies focused on binary-valued transaction data. Transaction data in real-world applications, however, usually consist of quantitative values. This paper, thus, proposes a fuzzy data-mining algorithm for extracting both association rules and membership functions from quantitative transactions. A genetic algorithm (GA)-based framework for finding membership functions suitable for mining problems is proposed. The fitness of each set of membership functions is evaluated by the fuzzy-supports of the linguistic terms in the large 1-itemsets and by the suitability of the derived membership functions. The evaluation by the fuzzy supports of large 1-itemsets is much faster than that when considering all itemsets or interesting association rules. It can also help divide-and-conquer the derivation process of the membership functions for different items. The proposed GA framework, thus, maintains multiple populations, each for one item's membership functions. The final best sets of membership functions in all the populations are then gathered together to be used for mining fuzzy association rules. Experiments are conducted to analyze different fitness functions and different settings of supports and confidences. Experiments are also conducted to compare the proposed algorithm, the one with uniform fuzzy partition, and the existing one without divide-and-conquer, with results validating the performance of the proposed algorithm.
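
A small sketch of the fitness ingredient may help: the fuzzy support of a linguistic 1-itemset under a candidate membership function. The triangular shape, the averaging definition of fuzzy support, and the toy data below are illustrative assumptions, not the paper's exact formulation.

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_support(transactions, item, mf_params):
    """Fuzzy support of a linguistic 1-itemset: average membership of
    the item's quantity over all transactions (one common definition)."""
    total = sum(triangular(t.get(item, 0.0), *mf_params) for t in transactions)
    return total / len(transactions)

# Example: hypothetical linguistic term "medium amount of milk"
tx = [{"milk": 2}, {"milk": 5}, {"milk": 9}, {}]
print(fuzzy_support(tx, "milk", (1, 5, 9)))   # -> 0.3125
```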

Journal ArticleDOI
TL;DR: This paper summarizes past results on fingerprinting and introduces several new ones, including fingerprints for four new probe strategies that generalize previous work in which tit-for-tat is the sole probe strategy.
Abstract: Fingerprinting is a technique for generating a representation-independent functional signature for a game playing agent. Fingerprints can be used to compare agents across representations in an automatic fashion. The theory of fingerprints is developed for software agents that play the iterated prisoner's dilemma. Examples of the technique for computing fingerprints are given. This paper summarizes past results and introduces the following new results. Fingerprints of prisoner's dilemma strategies that are represented as finite-state machines must be rational functions. An example of a strategy that does not have a finite-state representation and which does not have a rational fingerprint function is given: the majority strategy. It is shown that the AllD- and AllC-based fingerprints can be derived from the tit-for-tat fingerprint by a simple substitution. Fingerprints for four new probe strategies are introduced, generalizing previous work in which tit-for-tat is the sole probe strategy. A trial comparison is made of evolved prisoner's dilemma strategies across three representations: finite-state machines, feedforward neural nets, and lookup tables. Fingerprinting demonstrates that all three representations sample the strategy space in a radically different manner, even though the neural net's and lookup table's parameters are alternate encodings of the same strategy space. This space of strategies is also a subset of those encoded by the finite-state representation. Shortcomings of the fingerprint technique are outlined, with illustrative examples, and possible paths to overcome these shortcomings are given.

Journal ArticleDOI
TL;DR: A particle swarm optimization (PSO)-based learning rate adjustment method is proposed for BSS, and a simple decision-making method is introduced for how the learning rate should be applied in the current time slot.
Abstract: Blind source separation (BSS) is a technique used to recover a set of source signals without prior information on the transformation matrix or the probability distributions of the source signals. In previous works on BSS, the choice of the learning rate would result in a competition between stability and speed of convergence. In this paper, a particle swarm optimization (PSO)-based learning rate adjustment method is proposed for BSS, and a simple decision-making method is introduced for how the learning rate should be applied in the current time slot. In the experiments, samples of four and ten source signals were mixed and separated and the results were compared with other related approaches. The proposed approach exhibits rapid convergence, and produces more efficient and more stable independent component analysis algorithms, than other related approaches.

Journal ArticleDOI
TL;DR: The proposed GA is shown to perform better than NSGA-II and SPEA-2 on standard benchmarks, as well as for the optimization of a genetic model for flowering time control in rice; tradeoffs between accuracy in gene activity levels and in the plant traits they influence suggest that data mining the Pareto front may be useful in bioinformatics.
Abstract: This paper describes genetic and hybrid approaches for multiobjective optimization using a numerical measure called fuzzy dominance. Fuzzy dominance is used when implementing tournament selection within the genetic algorithm (GA). In the hybrid version, it is also used to carry out a Nelder-Mead simplex-based local search. The proposed GA is shown to perform better than NSGA-II and SPEA-2 on standard benchmarks, as well as for the optimization of a genetic model for flowering time control in rice. Adding the local search achieves faster convergence, an important feature in computationally intensive optimization of gene networks. The hybrid version also compares well with ParEGO on a few other benchmarks. The proposed hybrid algorithm is then applied to estimate the parameters of an elaborate gene network model of flowering time control in Arabidopsis. Overall solution quality is quite good by biological standards. Tradeoffs are discussed between accuracy in gene activity levels versus in the plant traits that they influence. These tradeoffs suggest that data mining the Pareto front may be useful in bioinformatics.
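
A hedged sketch of a fuzzy dominance degree: each objective contributes a ramp membership of the objective gap, combined with a product t-norm, so that crisp Pareto dominance is approached as the gaps grow. The membership shape and the eps scale below are illustrative; the paper's exact definition may differ.

```python
def fuzzy_dominance_degree(fi, fj, eps=0.5):
    """Degree to which solution i fuzzily dominates solution j
    (minimization). Each objective contributes a ramp membership of
    the gap f_j - f_i, saturating at eps; memberships are combined
    with a product t-norm, so any objective where i is not better
    drives the degree to zero."""
    degree = 1.0
    for a, b in zip(fi, fj):
        degree *= min(1.0, max(0.0, (b - a) / eps))
    return degree

# Tournament selection can then prefer the candidate that is fuzzily
# dominated to the smaller total degree by the rest of the population.
print(fuzzy_dominance_degree((0.1, 0.2), (0.4, 0.6)))   # -> 0.48
```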

Journal ArticleDOI
TL;DR: Experimental results with two well-known microarray datasets indicate that the proposed method produces ensembles that are superior to individual classifiers, as well as to other ensembles optimized by random and greedy strategies.
Abstract: In general, the analysis of microarray data requires two steps: feature selection and classification. From a variety of feature selection methods and classifiers, it is difficult to find optimal ensembles composed of any feature-classifier pairs. This paper proposes a novel method based on the evolutionary algorithm (EA) to form sophisticated ensembles of features and classifiers that can be used to obtain high classification performance. In spite of the exponential number of possible ensembles of individual feature-classifier pairs, an EA can produce the best ensemble in a reasonable amount of time. The chromosome is encoded with real values to decide the weight for each feature-classifier pair in an ensemble. Experimental results with two well-known microarray datasets in terms of time and classification rate indicate that the proposed method produces ensembles that are superior to individual classifiers, as well as other ensembles optimized by random and greedy strategies.
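
The encoding is straightforward to sketch: a real-valued chromosome assigns one weight per feature-classifier pair, and fitness is the accuracy of the weighted vote. Everything below (names, the voting rule, the toy data) is illustrative rather than the paper's implementation.

```python
import random

def ensemble_predict(weights, pair_outputs):
    """Weighted majority vote over feature-classifier pairs.
    pair_outputs[i][s] is pair i's predicted class for sample s."""
    n_samples = len(pair_outputs[0])
    preds = []
    for s in range(n_samples):
        score = {}
        for w, outs in zip(weights, pair_outputs):
            score[outs[s]] = score.get(outs[s], 0.0) + w
        preds.append(max(score, key=score.get))
    return preds

def fitness(weights, pair_outputs, labels):
    """Fitness of a chromosome: classification accuracy of its ensemble."""
    preds = ensemble_predict(weights, pair_outputs)
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy example: 3 feature-classifier pairs, 4 samples, binary labels
outs = [[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]]
labels = [0, 1, 1, 0]
chromosome = [random.random() for _ in range(len(outs))]
print(fitness(chromosome, outs, labels))
```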

Journal ArticleDOI
TL;DR: This study proposes a new program evolution algorithm that employs a Bayesian network for generating new individuals and a special chromosome called the expanded parse tree, which significantly reduces the size of the conditional probability table (CPT).
Abstract: Genetic programming (GP) is a powerful optimization algorithm that has been applied to a variety of problems. This algorithm can, however, suffer from problems arising from the fact that a crossover, which is a main genetic operator in GP, randomly selects crossover points, and so building blocks may be destroyed by the action of this operator. In recent years, evolutionary algorithms based on probabilistic techniques have been proposed in order to overcome this problem. In the present study, we propose a new program evolution algorithm employing a Bayesian network for generating new individuals. It employs a special chromosome called the expanded parse tree, which significantly reduces the size of the conditional probability table (CPT). Prior prototype tree-based approaches have been faced with the problem of huge CPTs, which not only require significant memory resources, but also many samples in order to construct the Bayesian network. By applying the present approach to three distinct computational experiments, the effectiveness of this new approach for dealing with deceptive problems is demonstrated.

Journal ArticleDOI
TL;DR: This paper presents a theoretical framework for generalization, the first time that generalization is defined and analyzed rigorously in coevolutionary learning, and shows that a small sample of test strategies can be used to estimate the generalization performance.
Abstract: Coevolutionary learning involves a training process where training samples are instances of solutions that interact strategically to guide the evolutionary (learning) process. One main research issue is with the generalization performance, i.e., the search for solutions (e.g., input-output mappings) that best predict the required output for any new input that has not been seen during the evolutionary process. However, there is currently no such framework for determining the generalization performance in coevolutionary learning even though the notion of generalization is well-understood in machine learning. In this paper, we introduce a theoretical framework to address this research issue. We present the framework in terms of game-playing although our results are more general. Here, a strategy's generalization performance is its average performance against all test strategies. Given that the true value may not be determined by solving analytically a closed-form formula and is computationally prohibitive, we propose an estimation procedure that computes the average performance against a small sample of random test strategies instead. We perform a mathematical analysis to provide a statistical claim on the accuracy of our estimation procedure, which can be further improved by performing a second estimation on the variance of the random variable. For game-playing, it is well-known that one is more interested in the generalization performance against a biased and diverse sample of "good" test strategies. We introduce a simple approach to obtain such a test sample through the multiple partial enumerative search of the strategy space that does not require human expertise and is generally applicable to a wide range of domains. We illustrate the generalization framework on the coevolutionary learning of the iterated prisoner's dilemma (IPD) games. We investigate two definitions of generalization performance for the IPD game based on different performance criteria, e.g., in terms of the number of wins based on individual outcomes and in terms of average payoff. We show that a small sample of test strategies can be used to estimate the generalization performance. We also show that the generalization performance using a biased and diverse set of "good" test strategies is lower compared to the unbiased case for the IPD game. This is the first time that generalization is defined and analyzed rigorously in coevolutionary learning. The framework allows the evaluation of the generalization performance of any coevolutionary learning system quantitatively.
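
The estimation procedure reduces to a plain Monte Carlo average, which is easy to sketch. Names and the toy payoff below are illustrative; the paper additionally analyzes the estimator's accuracy and a variance-based refinement.

```python
import random
import statistics

def estimate_generalization(strategy_payoff, test_pool, sample_size=50):
    """Monte Carlo estimate of generalization performance: average
    performance against a random sample of test strategies, with a
    standard error to qualify the estimate."""
    sample = random.sample(test_pool, sample_size)
    scores = [strategy_payoff(t) for t in sample]
    mean = statistics.fmean(scores)
    stderr = statistics.stdev(scores) / sample_size ** 0.5
    return mean, stderr

# Toy example with hypothetical numeric "strategies" and payoff
pool = list(range(1000))
mean, se = estimate_generalization(lambda t: 1.0 / (1 + t % 7), pool)
print(mean, se)
```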

Journal ArticleDOI
TL;DR: This paper presents a first attempt at incorporating some of the basic structural properties of complex biological systems, which are believed to be necessary preconditions for system qualities such as robustness, into nature-inspired optimization algorithms.
Abstract: Over the last decade, significant progress has been made in understanding complex biological systems; however, there have been few attempts at incorporating this knowledge into nature-inspired optimization algorithms. In this paper, we present a first attempt at incorporating some of the basic structural properties of complex biological systems which are believed to be necessary preconditions for system qualities such as robustness. In particular, we focus on two important conditions missing in evolutionary algorithm populations: a self-organized definition of locality and interaction epistasis. We demonstrate that these two features, when combined, provide algorithm behaviors not observed in the canonical evolutionary algorithm (EA) or in EAs with structured populations such as the cellular genetic algorithm. The most noticeable change in algorithm behavior is an unprecedented capacity for sustainable coexistence of genetically distinct individuals within a single population. This capacity for sustained genetic diversity is not imposed on the population but instead emerges as a natural consequence of the dynamics of the system.