
Showing papers in "IEEE Transactions on Evolutionary Computation in 2015"


Journal ArticleDOI
TL;DR: A unified paradigm that combines dominance- and decomposition-based approaches for many-objective optimization is suggested; it shows highly competitive performance on all the constrained optimization problems considered.
Abstract: Achieving balance between convergence and diversity is a key issue in evolutionary multiobjective optimization. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges in many-objective optimization. This paper suggests a unified paradigm, which combines dominance- and decomposition-based approaches, for many-objective optimization. Our major purpose is to exploit the merits of both dominance- and decomposition-based approaches to balance the convergence and diversity of the evolutionary process. The performance of our proposed method is validated and compared with four state-of-the-art algorithms on a number of unconstrained benchmark problems with up to 15 objectives. Empirical results fully demonstrate the superiority of our proposed method on all considered test instances. In addition, we extend this method to solve constrained problems having a large number of objectives. Compared to two other recently proposed constrained optimizers, our proposed method shows highly competitive performance on all the constrained optimization problems.

900 citations
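
As a rough, hedged illustration of one building block that such dominance/decomposition hybrids rely on (not necessarily the paper's exact procedure), the sketch below associates each normalized objective vector with its closest reference direction and scores it with a penalty-based boundary intersection (PBI) value; a dominance check can then be applied within each niche. The function name, the penalty value, and the toy data are illustrative assumptions.

```python
import numpy as np

def associate_and_score(objs, weights, theta=5.0):
    """Assign each (normalized) objective vector to its closest reference
    direction and compute a PBI-style scalar value for it.

    objs    : (N, M) array of objective vectors (minimization, normalized)
    weights : (W, M) array of reference directions
    theta   : PBI penalty parameter (illustrative default)
    """
    w_norm = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    assignment, pbi = [], []
    for f in objs:
        # projection length onto every direction (convergence component d1)
        d1 = w_norm @ f
        # perpendicular distance to every direction (diversity component d2)
        d2 = np.linalg.norm(f[None, :] - d1[:, None] * w_norm, axis=1)
        k = int(np.argmin(d2))           # niche = direction with smallest d2
        assignment.append(k)
        pbi.append(d1[k] + theta * d2[k])
    return np.array(assignment), np.array(pbi)

# toy usage: 5 random 3-objective vectors, 3 reference directions
rng = np.random.default_rng(0)
objs = rng.random((5, 3))
weights = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
print(associate_and_score(objs, weights))
```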


Journal ArticleDOI
TL;DR: A knee point-driven EA for solving MaOPs is proposed, based on the observation that knee points are naturally the most preferred among nondominated solutions when no explicit user preferences are given; biasing selection toward them enhances convergence performance in many-objective optimization.
Abstract: Evolutionary algorithms (EAs) have been shown to be promising in solving many-objective optimization problems (MaOPs), where the performance of these algorithms heavily depends on whether solutions that can accelerate convergence toward the Pareto front while maintaining a high degree of diversity will be selected from a set of nondominated solutions. In this paper, we propose a knee point-driven EA to solve MaOPs. Our basic idea is that knee points are naturally most preferred among nondominated solutions if no explicit user preferences are given. A bias toward the knee points in the nondominated solutions in the current population is shown to be an approximation of a bias toward a large hypervolume, thereby enhancing the convergence performance in many-objective optimization. In addition, as at most one solution will be identified as a knee point inside the neighborhood of each solution in the nondominated front, no additional diversity maintenance mechanisms need to be introduced in the proposed algorithm, considerably reducing the computational complexity compared to many existing multiobjective EAs for many-objective optimization. Experimental results on 16 test problems demonstrate the competitiveness of the proposed algorithm in terms of both solution quality and computational efficiency.

624 citations
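
One common way to single out knee-like points on a nondominated front, sketched below for the bi-objective case, is to measure each solution's distance to the line through the two extreme solutions and keep the local maxima within a small neighborhood. This is a hedged illustration of the general knee-point idea rather than the paper's exact procedure; the neighborhood size and test data are assumptions.

```python
import numpy as np

def knee_points_2d(front, n_neighbors=3):
    """Pick knee-like points from a 2-D nondominated front (minimization).

    Returns indices of points whose distance to the line through the two
    extreme points is a local maximum among their n_neighbors closest points.
    """
    front = np.asarray(front, dtype=float)
    # extreme points: best on each objective
    a = front[np.argmin(front[:, 0])]
    b = front[np.argmin(front[:, 1])]
    ab = b - a
    # perpendicular distance of every point to the line through a and b
    cross = np.abs(ab[0] * (front[:, 1] - a[1]) - ab[1] * (front[:, 0] - a[0]))
    dist = cross / (np.linalg.norm(ab) + 1e-12)
    knees = []
    for i, p in enumerate(front):
        d = np.linalg.norm(front - p, axis=1)
        neigh = np.argsort(d)[: n_neighbors + 1]      # includes the point itself
        if i == neigh[np.argmax(dist[neigh])]:
            knees.append(i)
    return knees

front = np.array([[0.0, 1.0], [0.2, 0.55], [0.5, 0.5], [0.8, 0.2], [1.0, 0.0]])
print(knee_points_2d(front))   # index 1 sticks out as the knee
```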


Journal ArticleDOI
TL;DR: More than a decade after the first extensive overview on parameter control, this work revisits the field and presents a survey of the state-of-the-art.
Abstract: More than a decade after the first extensive overview on parameter control, we revisit the field and present a survey of the state-of-the-art. We briefly summarize the development of the field and discuss existing work related to each major parameter or component of an evolutionary algorithm. Based on this overview, we observe trends in the area, identify some (methodological) shortcomings, and give recommendations for future research.

428 citations


Journal ArticleDOI
TL;DR: The experimental results show that Two_Arch2 can cope with ManyOPs with satisfactory convergence, diversity, and complexity; a new Lp-norm-based (p < 1) diversity maintenance scheme for ManyOPs in Two_Arch2 is also proposed.
Abstract: Many-objective optimization problems (ManyOPs) refer, usually, to those multiobjective problems (MOPs) with more than three objectives. Their large numbers of objectives pose challenges to multiobjective evolutionary algorithms (MOEAs) in terms of convergence, diversity, and complexity. Most existing MOEAs can only perform well in one of those three aspects. In view of this, we aim to design a more balanced MOEA on ManyOPs in all three aspects at the same time. Among the existing MOEAs, the two-archive algorithm (Two_Arch) is a low-complexity algorithm with two archives focusing on convergence and diversity separately. Inspired by the idea of Two_Arch, we propose a significantly improved two-archive algorithm (i.e., Two_Arch2) for ManyOPs in this paper. In our Two_Arch2, we assign different selection principles (indicator-based and Pareto-based) to the two archives. In addition, we design a new $L_p$-norm-based ($p < 1$) diversity maintenance scheme for ManyOPs in Two_Arch2. In order to evaluate the performance of Two_Arch2 on ManyOPs, we have compared it with several MOEAs on a wide range of benchmark problems with different numbers of objectives. The experimental results show that Two_Arch2 can cope with ManyOPs (up to 20 objectives) with satisfactory convergence, diversity, and complexity.

406 citations
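
The Lp-norm-based (p < 1) distance mentioned above is simple to compute; the sketch below uses the textbook fractional distance with an illustrative p = 0.5 and contrasts it with the Euclidean distance typically used for crowding estimation, which discriminates poorly in high-dimensional objective spaces.

```python
import numpy as np

def lp_distance(a, b, p=0.5):
    """Fractional L_p distance (p < 1), often preferred over the Euclidean
    distance for crowding/diversity estimation in many-objective spaces."""
    return np.sum(np.abs(np.asarray(a) - np.asarray(b)) ** p) ** (1.0 / p)

a, b = np.zeros(10), np.ones(10)
print(lp_distance(a, b, p=0.5))   # 100.0
print(np.linalg.norm(a - b))      # ~3.162 (L2 for comparison)
```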


Journal ArticleDOI
TL;DR: In this paper, a novel, computationally efficient approach to nondominated sorting is proposed, termed efficient nondominated sort (ENS), where a solution to be assigned to a front needs to be compared only with those that have already been assigned to a front, thereby avoiding many unnecessary dominance comparisons.
Abstract: Evolutionary algorithms have been shown to be powerful for solving multiobjective optimization problems, in which nondominated sorting is a widely adopted technique in selection. This technique, however, can be computationally expensive, especially when the number of individuals in the population becomes large. This is mainly because in most existing nondominated sorting algorithms, a solution needs to be compared with all other solutions before it can be assigned to a front. In this paper we propose a novel, computationally efficient approach to nondominated sorting, termed efficient nondominated sort (ENS). In ENS, a solution to be assigned to a front needs to be compared only with those that have already been assigned to a front, thereby avoiding many unnecessary dominance comparisons. Based on this new approach, two nondominated sorting algorithms have been suggested. Both theoretical analysis and empirical results show that the ENS-based sorting algorithms are computationally more efficient than the state-of-the-art nondominated sorting methods.

378 citations
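
A minimal sketch of the ENS idea in its sequential-search flavor, under assumptions: sort the population lexicographically first, then compare each solution only against members of fronts that already exist. Function names and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def dominates(a, b):
    """True if a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def ens_ss(objs):
    """Efficient nondominated sort, sequential-search style sketch.

    objs: (N, M) array. Returns a list of fronts, each a list of indices.
    Solutions are processed in lexicographic order, so a solution only needs
    to be checked against members of fronts that already exist.
    """
    objs = np.asarray(objs, dtype=float)
    order = np.lexsort(objs.T[::-1])          # sort by f1, then f2, ...
    fronts = []                               # each front: list of original indices
    for i in order:
        for front in fronts:
            # a later solution (in lexicographic order) can never dominate
            # an earlier one, so only one direction has to be tested
            if not any(dominates(objs[j], objs[i]) for j in front):
                front.append(i)
                break
        else:
            fronts.append([i])
    return fronts

objs = np.array([[1, 4], [2, 3], [3, 2], [2, 4], [4, 4]])
print(ens_ss(objs))   # [[0, 1, 2], [3], [4]]
```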


Journal ArticleDOI
TL;DR: This paper introduces a decomposition-based evolutionary algorithm wherein uniformly distributed reference points are generated via systematic sampling, balance between convergence and diversity is maintained using two independent distance measures, and a simple preemptive distance comparison scheme is used for association.
Abstract: Decomposition-based evolutionary algorithms have been quite successful in solving optimization problems involving two and three objectives. Recently, there have been some attempts to exploit the strengths of decomposition-based approaches to deal with many-objective optimization problems. Performance of such approaches is largely dependent on three key factors: 1) means of reference point generation; 2) schemes to simultaneously deal with convergence and diversity; and 3) methods to associate solutions to reference directions. In this paper, we introduce a decomposition-based evolutionary algorithm wherein uniformly distributed reference points are generated via systematic sampling, balance between convergence and diversity is maintained using two independent distance measures, and a simple preemptive distance comparison scheme is used for association. In order to deal with constraints, an adaptive epsilon formulation is used. The performance of the algorithm is evaluated using standard benchmark problems, i.e., DTLZ1-DTLZ4 for 3, 5, 8, 10, and 15 objectives, WFG1-WFG9, the car side impact problem, the water resource management problem, and the constrained ten-objective general aviation aircraft design problem. Results on problems involving redundant objectives and disconnected Pareto fronts are also included in this paper to illustrate the capability of the algorithm. The study clearly highlights that the proposed algorithm is better than or at par with recent reference direction-based approaches for many-objective optimization.

317 citations


Journal ArticleDOI
TL;DR: It is shown that NSGA-II outperforms the other algorithms when objectives are highly correlated, and MOEA/D shows totally different search behavior depending on the choice of a scalarizing function and its parameter value.
Abstract: We examine the behavior of three classes of evolutionary multiobjective optimization (EMO) algorithms on many-objective knapsack problems. They are Pareto dominance-based, scalarizing function-based, and hypervolume-based algorithms. NSGA-II, MOEA/D, SMS-EMOA, and HypE are examined using knapsack problems with 2–10 objectives. Our test problems are generated by randomly specifying coefficients (i.e., profits) in objectives. We also generate other test problems by combining two objectives to create a dependent or correlated objective. Experimental results on randomly generated many-objective knapsack problems are consistent with well-known performance deterioration of Pareto dominance-based algorithms. That is, NSGA-II is outperformed by the other algorithms. However, it is also shown that NSGA-II outperforms the other algorithms when objectives are highly correlated. MOEA/D shows totally different search behavior depending on the choice of a scalarizing function and its parameter value. Some MOEA/D variants work very well only on two-objective problems while others work well on many-objective problems with 4–10 objectives. We also obtain other interesting observations such as the performance improvement by similar parent recombination and the necessity of diversity improvement for many-objective knapsack problems.

256 citations


Journal ArticleDOI
TL;DR: This paper proposes a new model-based method for representing and searching nondominated solutions that alleviates the requirement on solution diversity; in principle, as many solutions as needed can be generated.
Abstract: To approximate the Pareto front, most existing multiobjective evolutionary algorithms store the nondominated solutions found so far in the population or in an external archive during the search. Such algorithms often require a high degree of diversity of the stored solutions and only a limited number of solutions can be achieved. By contrast, model-based algorithms can alleviate the requirement on solution diversity and in principle, as many solutions as needed can be generated. This paper proposes a new model-based method for representing and searching nondominated solutions. The main idea is to construct Gaussian process-based inverse models that map all found nondominated solutions from the objective space to the decision space. These inverse models are then used to create offspring by sampling the objective space. To facilitate inverse modeling, the multivariate inverse function is decomposed into a group of univariate functions, where the number of inverse models is reduced using a random grouping technique. Extensive empirical simulations demonstrate that the proposed algorithm exhibits robust search performance on a variety of medium to high dimensional multiobjective optimization test problems. Additional nondominated solutions are generated a posteriori using the constructed models to increase the density of solutions in the preferred regions at a low computational cost.

248 citations
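
A heavily simplified sketch of the inverse-modeling idea under stated assumptions: fit one Gaussian process per decision variable mapping the objective vectors of the current nondominated set back to that variable, then perturb objective vectors and decode them into candidate solutions. It omits the random-grouping decomposition and uses scikit-learn's GaussianProcessRegressor as a stand-in for the paper's models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def build_inverse_models(objs, decs):
    """Fit one GP per decision variable: objective space -> that variable."""
    return [GaussianProcessRegressor().fit(objs, decs[:, j])
            for j in range(decs.shape[1])]

def sample_candidates(models, objs, n_samples=10, noise=0.05, rng=None):
    """Perturb objective vectors of the archive and map them back to
    decision space with the inverse models to create new candidates."""
    rng = np.random.default_rng(rng)
    base = objs[rng.integers(0, len(objs), size=n_samples)]
    targets = base + noise * rng.standard_normal(base.shape)
    return np.column_stack([m.predict(targets) for m in models])

# toy usage on a made-up nondominated set (5 points, 2 objectives, 3 variables)
rng = np.random.default_rng(1)
decs = rng.random((5, 3))
objs = np.column_stack([decs[:, 0], 1.0 - decs[:, 0] + 0.1 * decs[:, 1]])
models = build_inverse_models(objs, decs)
print(sample_candidates(models, objs, n_samples=3, rng=2).shape)  # (3, 3)
```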


Journal ArticleDOI
TL;DR: This paper proposes a novel variant of DE with an individual-dependent mechanism that includes an Individual-dependent parameter (IDP) setting and anindividual-dependent mutation (IDM) strategy that is extensively evaluated on a suite of the 28 latest benchmark functions developed for the 2013 Congress on Evolutionary Computation special session.
Abstract: Differential evolution (DE) is a well-known optimization algorithm that utilizes the difference of positions between individuals to perturb base vectors and thus generate new mutant individuals. However, the difference between the fitness values of individuals, which may be helpful to improve the performance of the algorithm, has not been used to tune parameters and choose mutation strategies. In this paper, we propose a novel variant of DE with an individual-dependent mechanism that includes an individual-dependent parameter (IDP) setting and an individual-dependent mutation (IDM) strategy. In the IDP setting, control parameters are set for individuals according to the differences in their fitness values. In the IDM strategy, four mutation operators with different searching characteristics are assigned to the superior and inferior individuals, respectively, at different stages of the evolution process. The performance of the proposed algorithm is then extensively evaluated on a suite of the 28 latest benchmark functions developed for the 2013 Congress on Evolutionary Computation special session. Experimental results demonstrate the algorithm’s outstanding performance.

226 citations
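
A hedged sketch of the individual-dependent parameter idea: derive each individual's F and CR from its fitness rank so that better individuals favor exploitation and worse ones favor exploration. The linear rank mapping and parameter ranges below are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def individual_dependent_parameters(fitness):
    """Assign per-individual DE parameters from fitness ranks (minimization).

    Better-ranked individuals get smaller F/CR (exploitation), worse-ranked
    individuals get larger F/CR (exploration). The linear rank mapping is an
    illustrative choice only.
    """
    fitness = np.asarray(fitness, dtype=float)
    ranks = np.argsort(np.argsort(fitness))          # 0 = best, N-1 = worst
    scale = (ranks + 1) / len(fitness)               # in (0, 1]
    F = 0.1 + 0.8 * scale
    CR = 0.1 + 0.8 * scale
    return F, CR

F, CR = individual_dependent_parameters([3.2, 0.5, 1.7, 4.1])
print(np.round(F, 2), np.round(CR, 2))
```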


Journal ArticleDOI
TL;DR: It is shown that the genetic improvement of programs (GIP) can scale by evolving increased performance in a widely used and highly complex 50,000-line system.
Abstract: We show that the genetic improvement of programs (GIP) can scale by evolving increased performance in a widely used and highly complex 50,000-line system. Genetic improvement of software for multiple objective exploration (GISMOE) found code that is 70 times faster (on average) and yet is at least as good functionally. Indeed, it even gives a small semantic gain.

209 citations


Journal ArticleDOI
TL;DR: A novel method, named parallel cell coordinate system (PCCS), is proposed to assess the evolutionary environment including density, rank, and diversity indicators based on the measurements of parallel cell distance, potential, and distribution entropy, respectively.
Abstract: Managing convergence and diversity is essential in the design of multiobjective particle swarm optimization (MOPSO) in search of an accurate and well distributed approximation of the true Pareto-optimal front. Largely due to its fast convergence, particle swarm optimization incurs a rapid loss of diversity during the evolutionary process. Many mechanisms have been proposed in existing MOPSOs in terms of leader selection, archive maintenance, and perturbation to tackle this deficiency. However, few MOPSOs are designed to dynamically adjust the balance in exploration and exploitation according to the feedback information detected from the evolutionary environment. In this paper, a novel method, named parallel cell coordinate system (PCCS), is proposed to assess the evolutionary environment including density, rank, and diversity indicators based on the measurements of parallel cell distance, potential, and distribution entropy, respectively. Based on PCCS, strategies proposed for selecting global best and personal best, maintaining archive, adjusting flight parameters, and perturbing stagnation are integrated into a self-adaptive MOPSO (pccsAMOPSO). The comparative experimental results show that the proposed pccsAMOPSO outperforms the other eight state-of-the-art competitors on ZDT and DTLZ test suites in terms of the chosen performance metrics. An additional experiment for density estimation in MOPSO illustrates that the performance of PCCS is superior to that of adaptive grid and crowding distance in terms of convergence and diversity.

Journal ArticleDOI
TL;DR: A visualization method that uses prosection (projection of a section) to visualize 4-D approximation sets is proposed that reproduces the shape, range, and distribution of vectors in the observed approximation sets well and can handle multiple large approximation sets while being robust and computationally inexpensive.
Abstract: In evolutionary multiobjective optimization, it is very important to be able to visualize approximations of the Pareto front (called approximation sets) that are found by multiobjective evolutionary algorithms. While scatter plots can be used for visualizing 2-D and 3-D approximation sets, more advanced approaches are needed to handle four or more objectives. This paper presents a comprehensive review of the existing visualization methods used in evolutionary multiobjective optimization, showing their outcomes on two novel 4-D benchmark approximation sets. In addition, a visualization method that uses prosection (projection of a section) to visualize 4-D approximation sets is proposed. The method reproduces the shape, range, and distribution of vectors in the observed approximation sets well and can handle multiple large approximation sets while being robust and computationally inexpensive. Even more importantly, for some vectors, the visualization with prosections preserves the Pareto dominance relation and relative closeness to reference points. The method is analyzed theoretically and demonstrated on several approximation sets.

Journal ArticleDOI
TL;DR: The proposed eigenvector-based crossover operator utilizes eigenvectors of the covariance matrix of individual solutions, which makes the crossover rotationally invariant, and can be applied to any crossover strategy with minimal changes.
Abstract: Differential evolution (DE) has been shown to be an effective methodology for solving optimization problems over continuous space. In this paper, we propose an eigenvector-based crossover operator. The proposed operator utilizes eigenvectors of the covariance matrix of individual solutions, which makes the crossover rotationally invariant. More specifically, the donor vectors during crossover are modified by projecting each donor vector onto the eigenvector basis that provides an alternative coordinate system. The proposed operator can be applied to any crossover strategy with minimal changes. The experimental results show that the proposed operator significantly improves DE performance on a set of 54 test functions in the CEC 2011, BBOB 2012, and CEC 2013 benchmark sets.
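
A rough, hedged sketch of a rotationally invariant crossover of this kind: compute the eigenvector basis of the population covariance matrix, express target and donor vectors in that basis, apply ordinary binomial crossover there, and rotate the trial vector back. Parameter values and data are illustrative.

```python
import numpy as np

def eigen_crossover(population, target, donor, CR=0.9, rng=None):
    """Binomial crossover performed in the eigenvector basis of the
    population covariance matrix, which makes the operator rotationally
    invariant. Returns the trial vector in the original coordinates."""
    rng = np.random.default_rng(rng)
    cov = np.cov(population, rowvar=False)           # (D, D) covariance
    _, B = np.linalg.eigh(cov)                       # columns of B: eigenvectors
    t, d = B.T @ target, B.T @ donor                 # rotate into eigen basis
    mask = rng.random(t.size) < CR
    mask[rng.integers(t.size)] = True                # at least one donor component
    trial = np.where(mask, d, t)
    return B @ trial                                 # rotate back

rng = np.random.default_rng(0)
pop = rng.standard_normal((20, 5))
print(eigen_crossover(pop, pop[0], pop[1], rng=1))
```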

Journal ArticleDOI
TL;DR: This paper proposes a hybrid multiobjective evolutionary algorithm integrating these two different strategies for combinatorial optimization problems with two or three objectives that outperforms other approaches.
Abstract: Domination-based sorting and decomposition are two basic strategies used in multiobjective evolutionary optimization. This paper proposes a hybrid multiobjective evolutionary algorithm integrating these two different strategies for combinatorial optimization problems with two or three objectives. The proposed algorithm works with an internal (working) population and an external archive. It uses a decomposition-based strategy for evolving its working population and uses a domination-based sorting for maintaining the external archive. Information extracted from the external archive is used to decide which search regions should be searched at each generation. In such a way, the domination-based sorting and the decomposition strategy can complement each other. In our experimental studies, the proposed algorithm is compared with a domination-based approach, a decomposition-based one, and one of its enhanced variants on two well-known multiobjective combinatorial optimization problems. Experimental results show that our proposed algorithm outperforms other approaches. The effects of the external archive in the proposed algorithm are also investigated and discussed.

Journal ArticleDOI
TL;DR: An effective and efficient successful-parent-selecting framework is proposed to improve the performance of differential evolution by providing an alternative for the selection of parents during mutation and crossover.
Abstract: An effective and efficient successful-parent-selecting framework is proposed to improve the performance of differential evolution (DE) by providing an alternative for the selection of parents during mutation and crossover. The proposed method adapts the selection of parents by storing successful solutions into an archive, and the parents are selected from the archive when a solution has not been updated for an unacceptably long time. The proposed framework provides more promising solutions to guide the evolution and effectively helps DE escape stagnation. The simulation results show that the proposed framework significantly improves the performance of two original DEs and six state-of-the-art algorithms on four real-world optimization problems and 30 benchmark functions.
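
A minimal sketch of the successful-parent-selecting idea, under assumptions: keep an archive of solutions that successfully replaced their parents, track a stagnation counter per individual, and draw parents from the archive once the counter exceeds a threshold. The class name, archive policy, and threshold are illustrative.

```python
import numpy as np

class SuccessfulParentArchive:
    """Archive of previously successful solutions used as an alternative
    parent pool when an individual stagnates for too long (sketch)."""

    def __init__(self, pop_size, max_size=50, stagnation_limit=20, rng=None):
        self.archive = []                     # successful solutions
        self.stagnation = np.zeros(pop_size, dtype=int)
        self.max_size = max_size
        self.limit = stagnation_limit
        self.rng = np.random.default_rng(rng)

    def report(self, i, improved, solution):
        """Call after the selection step of individual i each generation."""
        if improved:
            self.stagnation[i] = 0
            self.archive.append(np.array(solution))
            if len(self.archive) > self.max_size:
                self.archive.pop(self.rng.integers(len(self.archive)))
        else:
            self.stagnation[i] += 1

    def select_parent(self, i, population):
        """Return a parent for individual i: from the population normally,
        from the archive if i has stagnated beyond the limit."""
        if self.stagnation[i] > self.limit and self.archive:
            return self.archive[self.rng.integers(len(self.archive))]
        return population[self.rng.integers(len(population))]
```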

Journal ArticleDOI
TL;DR: A study on an evolutionary memetic computing paradigm that is capable of learning and evolving knowledge memes that traverse different but related problem domains, for greater search efficiency, is presented.
Abstract: In recent decades, a plethora of dedicated evolutionary algorithms (EAs) have been crafted to solve domain-specific complex problems more efficiently. Many advanced EAs have relied on the incorporation of domain-specific knowledge as inductive biases that are deemed to fit the problem of interest well. As such, the embedding of domain knowledge about the underlying problem within the search algorithms is becoming an established mode of enhancing evolutionary search performance. In this paper, we present a study on an evolutionary memetic computing paradigm that is capable of learning and evolving knowledge memes that traverse different but related problem domains, for greater search efficiency. Focusing on combinatorial optimization as the area of study, a realization of the proposed approach is investigated on two NP-hard problem domains (i.e., the capacitated vehicle routing problem and the capacitated arc routing problem). Empirical studies on well-established routing problems and their respective state-of-the-art optimization solvers are presented to study the potential benefits of leveraging knowledge memes that are learned from different but related problem domains on future evolutionary search.

Journal ArticleDOI
TL;DR: The experimental results indicate that the proposed NSLS is able to find a better spread of solutions and a better convergence to the true Pareto-optimal front compared to the other four algorithms.
Abstract: In this paper, a new multiobjective optimization framework based on nondominated sorting and local search (NSLS) is introduced. The NSLS is based on iterations. At each iteration, given a population $P$, a simple local search method is used to get a better population $P'$, and then nondominated sorting is adopted on $P \cup P'$ to obtain a new population for the next iteration. Furthermore, the farthest-candidate approach is combined with the nondominated sorting to choose the new population for improving the diversity. Additionally, another version of NSLS (NSLS-C) is used for comparison, which replaces the farthest-candidate method with the crowded comparison mechanism presented in the nondominated sorting genetic algorithm II (NSGA-II). The proposed method (NSLS) is compared with NSLS-C and three other classic algorithms: NSGA-II, MOEA/D-DE, and MODEA on a set of seventeen bi-objective and three tri-objective test problems. The experimental results indicate that the proposed NSLS is able to find a better spread of solutions and a better convergence to the true Pareto-optimal front compared to the other four algorithms. Furthermore, the sensitivity of NSLS is also experimentally investigated in this paper.

Journal ArticleDOI
TL;DR: Evidence is provided that lexicase selection maintains higher levels of population diversity than other selection methods, which may partially explain its utility as a parent selection algorithm in the context of uncompromising problems.
Abstract: We describe a broad class of problems, called “uncompromising problems,” which are characterized by the requirement that solutions must perform optimally on each of many test cases. Many of the problems that have long motivated genetic programming research, including the automation of many traditional programming tasks, are uncompromising. We describe and analyze the recently proposed “lexicase” parent selection algorithm and show that it can facilitate the solution of uncompromising problems by genetic programming. Unlike most traditional parent selection techniques, lexicase selection does not base selection on a fitness value that is aggregated over all test cases; rather, it considers test cases one at a time in random order. We present results comparing lexicase selection to more traditional parent selection methods, including standard tournament selection and implicit fitness sharing, on four uncompromising problems: 1) finding terms in finite algebras; 2) designing digital multipliers; 3) counting words in files; and 4) performing symbolic regression of the factorial function. We provide evidence that lexicase selection maintains higher levels of population diversity than other selection methods, which may partially explain its utility as a parent selection algorithm in the context of uncompromising problems.
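
Lexicase selection is compact enough to sketch directly from the description above: shuffle the test cases, then repeatedly keep only the candidates with the best error on the current case until one candidate (or a random survivor among ties) remains. The error matrix below is a made-up example.

```python
import random

def lexicase_select(errors, rng=random):
    """Select one parent index by lexicase selection.

    errors: list of per-individual error lists, errors[i][t] = error of
            individual i on test case t (lower is better).
    """
    candidates = list(range(len(errors)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for t in cases:
        best = min(errors[i][t] for i in candidates)
        candidates = [i for i in candidates if errors[i][t] == best]
        if len(candidates) == 1:
            return candidates[0]
    return rng.choice(candidates)       # ties on every case: pick randomly

errors = [[0, 3, 1],     # individual 0
          [0, 0, 2],     # individual 1
          [5, 0, 0]]     # individual 2
print(lexicase_select(errors))
```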

Journal ArticleDOI
TL;DR: An improved information-sharing mechanism among the individuals of an evolutionary algorithm for inducing efficient niching behavior is presented and how population diversity is preserved by modifying the basic perturbation (mutation) scheme through the use of random individuals selected probabilistically is shown.
Abstract: In practical situations, it is very often desirable to detect multiple optimally sustainable solutions of an optimization problem. The population-based evolutionary multimodal optimization algorithms can be very helpful in such cases. They detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations to aid the parallel localized convergence of population members around different basins of attraction. This paper presents an improved information-sharing mechanism among the individuals of an evolutionary algorithm for inducing efficient niching behavior. The mechanism can be integrated with stochastic real-parameter optimizers relying on differential perturbation of the individuals (candidate solutions) based on the population distribution. Various real-coded genetic algorithms (GAs), particle swarm optimization (PSO), and differential evolution (DE) fit the example of such algorithms. The main problem arising from differential perturbation is the unequal attraction toward the different basins of attraction that is detrimental to the objective of parallel convergence to multiple basins of attraction. We present our study through the DE algorithm owing to its highly random nature of mutation and show how population diversity is preserved by modifying the basic perturbation (mutation) scheme through the use of random individuals selected probabilistically. By integrating the proposed technique with the DE framework, we present three improved versions of well-known DE-based niching methods. Through an extensive experimental analysis, a statistically significant improvement in the overall performance has been observed upon integrating our technique with the DE-based niching methods.

Journal ArticleDOI
TL;DR: A many-objective evolutionary algorithm (MaOEA) based on directional diversity (DD) and favorable convergence (FC) and the enhancement of two selection schemes to facilitate both convergence and diversity is proposed.
Abstract: Multiobjective evolutionary algorithms have become prevalent and efficient approaches for solving multiobjective optimization problems. However, their performances deteriorate severely when handling many-objective optimization problems (MaOPs) due to the loss of selection pressure to drive the search toward the Pareto front and the ineffective design in diversity maintenance mechanism. This paper proposes a many-objective evolutionary algorithm (MaOEA) based on directional diversity (DD) and favorable convergence (FC). The main features are the enhancement of two selection schemes to facilitate both convergence and diversity. In the algorithm, a mating selection based on FC is applied to strengthen selection pressure while an environmental selection based on DD and FC is designed to balance diversity and convergence. The proposed algorithm is tested on 64 instances of 16 MaOPs with diverse characteristics and compared with seven state-of-the-art algorithms. Experimental results show that the proposed MaOEA performs competitively with respect to chosen state-of-the-art designs.

Journal ArticleDOI
TL;DR: A heuristic seeding mechanism is introduced to CGP that not only improves the quality of evolved circuits but also reduces the time of evolution; the efficiency of the proposed method is evaluated.
Abstract: In approximate computing, the requirement of perfect functional behavior can be relaxed because some applications are inherently error resilient. Approximate circuits, which fall into the approximate computing paradigm, are designed in such a way that they do not fully implement the logic behavior given by the specification and, hence, their accuracy can be exchanged for lower area, delay or power consumption. In order to automate the design process, we propose to evolve approximate digital circuits that show a minimal error for a supplied amount of resources. The design process, which is based on Cartesian genetic programming (CGP), can be repeated many times in order to obtain various tradeoffs between the accuracy and area. A heuristic seeding mechanism is introduced to CGP, which allows for improving not only the quality of evolved circuits, but also reducing the time of evolution. The efficiency of the proposed method is evaluated for the gate as well as the functional level evolution. In particular, approximate multipliers and median circuits that show very good parameters in comparison with other available implementations were constructed by means of the proposed method.

Journal ArticleDOI
TL;DR: Experimental results indicate that the new approach can improve the performance of single operator-based methods in the majority of the functions.
Abstract: It is well known that in evolutionary algorithms (EAs), different reproduction operators may be suitable for different problems or in different running stages. To improve the algorithm performance, the ensemble of multiple operators has become popular. Most ensemble techniques achieve this goal by choosing an operator according to a probability learned from the previous experience. In contrast to these ensemble techniques, in this paper we propose a cheap surrogate model-based multioperator search strategy for evolutionary optimization. In our approach, a set of candidate offspring solutions are generated by using the multiple offspring reproduction operators, and the best one according to the surrogate model is chosen as the offspring solution. Two major advantages of this approach are: 1) each operator can generate a solution for competition compared to the probability-based approaches and 2) the surrogate model building is relatively cheap compared to that in the surrogate-assisted EAs. The model is used to implement multioperator ensemble in two popular EAs, that is, differential evolution and particle swarm optimization. Thirty benchmark functions and the functions presented in the CEC 2013 are chosen as the test suite to evaluate our approach. Experimental results indicate that the new approach can improve the performance of single operator-based methods in the majority of the functions.
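
A hedged sketch of the cheap-surrogate pre-selection idea: each reproduction operator proposes one candidate offspring, a simple surrogate scores them, and only the best-scoring candidate is kept for real evaluation. The nearest-neighbor surrogate and the toy operators below are illustrative stand-ins for the paper's choices.

```python
import numpy as np

def nearest_neighbor_estimate(x, X_eval, y_eval):
    """Cheap surrogate: predicted fitness = fitness of the closest evaluated
    solution (illustrative stand-in for the paper's surrogate model)."""
    d = np.linalg.norm(X_eval - x, axis=1)
    return y_eval[np.argmin(d)]

def preselect_offspring(operators, parent_args, X_eval, y_eval):
    """Generate one candidate per reproduction operator and return the
    candidate that the surrogate ranks best (minimization)."""
    candidates = [op(*parent_args) for op in operators]
    scores = [nearest_neighbor_estimate(c, X_eval, y_eval) for c in candidates]
    return candidates[int(np.argmin(scores))]

# toy usage with two made-up operators on a sphere-like evaluation history
rng = np.random.default_rng(0)
X_eval = rng.uniform(-5, 5, size=(30, 2))
y_eval = np.sum(X_eval**2, axis=1)
op_small = lambda a, b: a + 0.1 * (b - a)
op_large = lambda a, b: a + 0.9 * (b - a) + rng.normal(0, 0.5, size=a.shape)
best = preselect_offspring([op_small, op_large], (X_eval[0], X_eval[1]), X_eval, y_eval)
print(best)
```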

Journal ArticleDOI
TL;DR: A gene expression programming algorithm is proposed to automatically generate, during the instance-solving process, the high-level heuristic of the hyper-heuristic framework, which generalizes well across all domains and achieves competitive, if not superior, results for several instances on all domains.
Abstract: Hyper-heuristic approaches aim to automate heuristic design in order to solve multiple problems instead of designing tailor-made methodologies for individual problems. Hyper-heuristics accomplish this through a high-level heuristic (heuristic selection mechanism and an acceptance criterion). This automates heuristic selection, deciding whether to accept or reject the returned solution. The fact that different problems, or even instances, have different landscape structures and complexity, the design of efficient high-level heuristics can have a dramatic impact on hyper-heuristic performance. In this paper, instead of using human knowledge to design the high-level heuristic, we propose a gene expression programming algorithm to automatically generate, during the instance-solving process, the high-level heuristic of the hyper-heuristic framework. The generated heuristic takes information (such as the quality of the generated solution and the improvement made) from the current problem state as input and decides which low-level heuristic should be selected and the acceptance or rejection of the resultant solution. The benefit of this framework is the ability to generate, for each instance, different high-level heuristics during the problem-solving process. Furthermore, in order to maintain solution diversity, we utilize a memory mechanism that contains a population of both high-quality and diverse solutions that is updated during the problem-solving process. The generality of the proposed hyper-heuristic is validated against six well-known combinatorial optimization problems, with very different landscapes, provided by the HyFlex software. Empirical results, comparing the proposed hyper-heuristic with state-of-the-art hyper-heuristics, conclude that the proposed hyper-heuristic generalizes well across all domains and achieves competitive, if not superior, results for several instances on all domains.

Journal ArticleDOI
TL;DR: This paper introduces a robust information content-based method for continuous fitness landscapes, which addresses these limitations and generates four measures related to landscape features, and demonstrates the practical relevance of the new measures by using them as class predictors in a machine learning model that classifies the benchmark functions into five groups.
Abstract: Data-driven analysis methods, such as the information content of a fitness sequence, characterize a discrete fitness landscape by quantifying its smoothness, ruggedness, or neutrality. However, enhancements to the information content method are required when dealing with continuous fitness landscapes. One typically employed adaptation is to sample the fitness landscape using random walks with variable step size. However, this adaptation has significant limitations: random walks may produce biased samples, and uncertainty is added because the distance between observations is not accounted for. In this paper, we introduce a robust information content-based method for continuous fitness landscapes, which addresses these limitations. Our method generates four measures related to the landscape features. Numerical simulations are used to evaluate the efficacy of the proposed method. We calculate the Pearson correlation coefficient between the new measures and other well-known exploratory landscape analysis measures. Significant differences on the measures between benchmark functions are subsequently identified. We then demonstrate the practical relevance of the new measures using them as class predictors on a machine learning model, which classifies the benchmark functions into five groups. Classification accuracy greater than 90% was obtained, with computational costs bounded between 1% and 10% of the maximum function evaluation budget. The results demonstrate that our method provides relevant information, at a low cost in terms of function evaluations.
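
For context, the classic information content measure that this line of work builds on converts a fitness sequence from a walk into a symbol string with sensitivity eps and takes the entropy of consecutive unequal symbol pairs as a ruggedness indicator. The sketch below implements only that classic measure; the paper's four continuous-landscape measures extend it.

```python
import numpy as np

def information_content(fitness_walk, eps=0.0):
    """Classic information content H(eps) of a fitness sequence.

    Differences are coded as symbols in {-1, 0, 1} using sensitivity eps;
    H(eps) is the entropy (base 6) of consecutive pairs of unequal symbols.
    """
    diffs = np.diff(np.asarray(fitness_walk, dtype=float))
    symbols = np.where(diffs > eps, 1, np.where(diffs < -eps, -1, 0))
    pairs = [(a, b) for a, b in zip(symbols[:-1], symbols[1:]) if a != b]
    if not pairs:
        return 0.0
    _, counts = np.unique(np.array(pairs), axis=0, return_counts=True)
    probs = counts / len(pairs)
    return float(-np.sum(probs * np.log(probs) / np.log(6)))

walk = np.cumsum(np.random.default_rng(0).standard_normal(200))
print(information_content(walk, eps=0.5))
```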

Journal ArticleDOI
TL;DR: It is indicated that semantic backpropagation helps evolution to identify the desired intermediate computation states and makes the search process more efficient.
Abstract: In genetic programming, a search algorithm is expected to produce a program that achieves the desired final computation state (desired output). To reach that state, an executing program needs to traverse certain intermediate computation states. An evolutionary search process is expected to autonomously discover such states. This can be difficult for nontrivial tasks that require long programs to be solved. The semantic backpropagation algorithm proposed in this paper heuristically inverts the execution of evolving programs to determine the desired intermediate computation states. Two search operators, random desired operator and approximately geometric semantic crossover, use the intermediate states determined by semantic backpropagation to define subtasks of the original programming task, which are then solved using an exhaustive search. The operators outperform the standard genetic search operators and other semantic-aware operators when compared on a suite of symbolic regression and Boolean benchmarks. This result and additional analysis conducted in this paper indicate that semantic backpropagation helps evolution to identify the desired intermediate computation states and makes the search process more efficient.
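
As a hedged toy illustration of the inversion step behind semantic backpropagation: given the desired output of an arithmetic node and the actual value of one child, invert the node's operation to obtain the desired value of the other child. Real implementations propagate vectors of desired values per fitness case and handle ambiguous or non-invertible cases; the operator set here is an assumption.

```python
def invert_node(op, desired, other_child_value):
    """Desired value for one child of an arithmetic node, given the node's
    desired output and the other child's actual value (toy inversion)."""
    if op == "add":            # a + b = desired  ->  a = desired - b
        return desired - other_child_value
    if op == "sub_left":       # a - b = desired  ->  a = desired + b
        return desired + other_child_value
    if op == "mul":            # a * b = desired  ->  a = desired / b
        if other_child_value == 0:
            return None        # not invertible: any 'a' works iff desired == 0
        return desired / other_child_value
    raise ValueError(f"no inversion rule for {op!r}")

# propagate a desired output of 10 through (x + 3), assuming the right child is 3
print(invert_node("add", 10, 3))   # desired value for the left subtree: 7
```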

Journal ArticleDOI
TL;DR: An algorithm for many-objective optimization problems is developed that works more quickly than existing ones while offering competitive performance, together with a new form of elitism that restricts the number of higher-ranked solutions selected into the next population.
Abstract: In this paper we have developed an algorithm for many-objective optimization problems, which will work more quickly than existing ones, while offering competitive performance. The algorithm periodically reorders the objectives based on their conflict status and selects a subset of conflicting objectives for further processing. We have taken differential evolution multiobjective optimization (DEMO) as the underlying metaheuristic evolutionary algorithm, and implemented the technique of selecting a subset of conflicting objectives using a correlation-based ordering of objectives. The resultant method is called $\alpha$-DEMO, where $\alpha$ is a parameter determining the number of conflicting objectives to be selected. We have also proposed a new form of elitism so as to restrict the number of higher-ranked solutions that are selected in the next population. The $\alpha$-DEMO with the revised elitism is referred to as $\alpha$-DEMO-revised. Extensive results on the five DTLZ functions show that the number of objective computations required in the proposed algorithm is much less compared to the existing algorithms, while the convergence measures are competitive or often better. Statistical significance testing is also performed. A real-life application on structural optimization of a factory shed truss is demonstrated.
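
A hedged sketch of correlation-based selection of a conflicting objective subset: compute the correlation matrix of objective values over the current population and keep the $\alpha$ objectives that are, on average, least correlated with the rest. The exact selection rule in $\alpha$-DEMO may differ; this is an illustrative simplification.

```python
import numpy as np

def select_conflicting_objectives(obj_values, alpha):
    """Return indices of the `alpha` objectives that are, on average, least
    correlated with the others over the current population (sketch)."""
    corr = np.corrcoef(obj_values, rowvar=False)        # (M, M)
    np.fill_diagonal(corr, 0.0)
    avg_corr = corr.sum(axis=1) / (corr.shape[0] - 1)   # mean corr with others
    return np.argsort(avg_corr)[:alpha]                 # lowest = most conflicting

rng = np.random.default_rng(0)
F = rng.random((50, 5))
F[:, 3] = F[:, 0] + 0.01 * rng.random(50)               # objective 3 ~ objective 0
print(select_conflicting_objectives(F, alpha=3))
```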

Journal ArticleDOI
TL;DR: This paper presents a simple and generic transformation technique based on multiobjective optimization for nonlinear equation systems that outperforms another state-of-the-art multiobjective optimization based transformation technique and four single-objective optimization based approaches on a set of test instances.
Abstract: Nonlinear equation systems may have multiple optimal solutions. The main task of solving nonlinear equation systems is to simultaneously locate these optimal solutions in a single run. When solving nonlinear equation systems by evolutionary algorithms, usually a nonlinear equation system should be transformed into a kind of optimization problem. At present, various transformation techniques have been proposed. This paper presents a simple and generic transformation technique based on multiobjective optimization for nonlinear equation systems. Unlike the previous work, our transformation technique transforms a nonlinear equation system into a biobjective optimization problem that can be decomposed into two parts. The advantages of our transformation technique are twofold: 1) all the optimal solutions of a nonlinear equation system are the Pareto optimal solutions of the transformed problem, which are mapped into diverse points in the objective space, and 2) multiobjective evolutionary algorithms can be directly applied to handle the transformed problem. In order to verify the effectiveness of our transformation technique, it has been integrated with nondominated sorting genetic algorithm II to solve nonlinear equation systems. The experimental results have demonstrated that, overall, our transformation technique outperforms another state-of-the-art multiobjective optimization based transformation technique and four single-objective optimization based approaches on a set of test instances. The influence of the types of Pareto front on the performance of our transformation technique has been investigated empirically. Moreover, the limitation of our transformation technique has also been identified and discussed in this paper.

Journal ArticleDOI
TL;DR: In this paper, the set-based PSO, an existing discrete PSO (DPSO) method, is extended to covering array generation, and two auxiliary strategies (particle reinitialization and additional evaluation of gbest) are proposed to improve performance, yielding a novel DPSO for covering array generation.
Abstract: Software behavior depends on many factors. Combinatorial testing (CT) aims to generate small sets of test cases to uncover defects caused by those factors and their interactions. Covering array generation, a discrete optimization problem, is the most popular research area in the field of CT. Particle swarm optimization (PSO), an evolutionary search-based heuristic technique, has succeeded in generating covering arrays that are competitive in size. However, current PSO methods for covering array generation simply round the particle's position to an integer to handle the discrete search space. Moreover, no guidelines are available to effectively set PSO's parameters for this problem. In this paper, we extend the set-based PSO, an existing discrete PSO (DPSO) method, to covering array generation. Two auxiliary strategies (particle reinitialization and additional evaluation of gbest) are proposed to improve performance, and thus a novel DPSO for covering array generation is developed. Guidelines for parameter settings both for conventional PSO (CPSO) and for DPSO are developed systematically here. Discrete extensions of four existing PSO variants are developed, in order to further investigate the effectiveness of DPSO for covering array generation. Experiments show that CPSO can produce better results using the guidelines for parameter settings, and that DPSO can generate smaller covering arrays than CPSO and other existing evolutionary algorithms. DPSO is a promising improvement on PSO for covering array generation.

Journal ArticleDOI
TL;DR: An interactive multiobjective evolutionary algorithm that attempts to learn a value function capturing the users' true preferences, and empirically compare different ways to identify the value function that seems to be the most representative with respect to the given preference information.
Abstract: This paper proposes an interactive multiobjective evolutionary algorithm (MOEA) that attempts to learn a value function capturing the users’ true preferences. At regular intervals, the user is asked to rank a single pair of solutions. This information is used to update the algorithm’s internal value function model, and the model is used in subsequent generations to rank solutions incomparable according to dominance. This speeds up evolution toward the region of the Pareto front that is most desirable to the user. We take into account the most general additive value function as a preference model and we empirically compare different ways to identify the value function that seems to be the most representative with respect to the given preference information, different types of user preferences, and different ways to use the learned value function in the MOEA. Results on a number of different scenarios suggest that the proposed algorithm works well over a range of benchmark problems and types of user preferences.

Journal ArticleDOI
TL;DR: The state-of-the-art applications of different meta-heuristic algorithms in engine management systems are reviewed including evolutionary algorithms, evolution strategy, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithm, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune system.
Abstract: Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, flock behaviors of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. There are different optimization problems arising in different areas of car engine management including calibration, control system, fault diagnosis, and modeling. In this paper we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms in engine calibration, optimizing engine control systems, engine fault diagnosis, and optimizing different parts of engines and modeling. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategy, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithm, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune system.