
Showing papers in "IEEE Transactions on Evolutionary Computation in 2014"


Journal ArticleDOI
TL;DR: A reference-point-based many-objective evolutionary algorithm is suggested that emphasizes population members that are nondominated yet close to a set of supplied reference points, and it is found to produce satisfactory results on all problems considered in this paper.
Abstract: Having developed multiobjective optimization algorithms using evolutionary optimization methods and demonstrated their niche on various practical problems involving mostly two and three objectives, there is now a growing need for developing evolutionary multiobjective optimization (EMO) algorithms for handling many-objective (having four or more objectives) optimization problems. In this paper, we recognize a few recent efforts and discuss a number of viable directions for developing a potential EMO algorithm for solving many-objective optimization problems. Thereafter, we suggest a reference-point-based many-objective evolutionary algorithm following NSGA-II framework (we call it NSGA-III) that emphasizes population members that are nondominated, yet close to a set of supplied reference points. The proposed NSGA-III is applied to a number of many-objective test problems with three to 15 objectives and compared with two versions of a recently suggested EMO algorithm (MOEA/D). While each of the two MOEA/D methods works well on different classes of problems, the proposed NSGA-III is found to produce satisfactory results on all problems considered in this paper. This paper presents results on unconstrained problems, and the sequel paper considers constrained and other specialties in handling many-objective optimization problems.

3,906 citations
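The structured reference points that NSGA-III relies on are typically generated with the Das-Dennis simplex-lattice construction. Below is a minimal Python sketch of that construction (not the authors' code; the function name and parameters are illustrative):

```python
from itertools import combinations

def das_dennis_points(n_obj, divisions):
    """Generate uniformly spaced reference points on the unit simplex
    via the stars-and-bars construction (Das-Dennis lattice)."""
    points = []
    for bars in combinations(range(divisions + n_obj - 1), n_obj - 1):
        coords, prev = [], -1
        for b in bars:
            coords.append(b - prev - 1)  # gap before this bar
            prev = b
        coords.append(divisions + n_obj - 2 - prev)  # remaining gap
        points.append(tuple(c / divisions for c in coords))
    return points

# 3 objectives with 4 divisions gives C(6, 2) = 15 points, each summing to 1.
pts = das_dennis_points(3, 4)
```

During environmental selection, NSGA-III associates each population member with its nearest reference direction and preserves members from underrepresented directions.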


Journal ArticleDOI
TL;DR: This paper extends NSGA-III to solve generic constrained many-objective optimization problems and suggests three types of constrained test problems that are scalable to any number of objectives and provide different types of challenges to a many-objective optimizer.
Abstract: In the precursor paper, a many-objective optimization method (NSGA-III), based on the NSGA-II framework, was suggested and applied to a number of unconstrained test and practical problems with box constraints alone. In this paper, we extend NSGA-III to solve generic constrained many-objective optimization problems. In the process, we also suggest three types of constrained test problems that are scalable to any number of objectives and provide different types of challenges to a many-objective optimizer. A previously suggested MOEA/D algorithm is also extended to solve constrained problems. Results using constrained NSGA-III and constrained MOEA/D show an edge of the former, particularly in solving problems with a large number of objectives. Furthermore, the NSGA-III algorithm is made adaptive in updating and including new reference points on the fly. The resulting adaptive NSGA-III is shown to provide a denser representation of the Pareto-optimal front, compared to the original NSGA-III with an identical computational effort. This, and the original NSGA-III paper, together suggest and amply test a viable evolutionary many-objective optimization algorithm for handling constrained and unconstrained problems. These studies should encourage researchers to use and pay further attention to evolutionary many-objective optimization.

1,247 citations


Journal ArticleDOI
TL;DR: This letter argues that population diversity is more important than convergence in multiobjective evolutionary algorithms for dealing with some MOPs and proposes MOEA/D-M2M, a new version of the multiobjective evolutionary algorithm based on decomposition.
Abstract: This letter suggests an approach for decomposing a multiobjective optimization problem (MOP) into a set of simple multiobjective optimization subproblems. Using this approach, it proposes MOEA/D-M2M, a new version of the multiobjective evolutionary algorithm based on decomposition. This proposed algorithm solves these subproblems in a collaborative way. Each subproblem has its own population and receives computational effort at each generation. In such a way, population diversity can be maintained, which is critical for solving some MOPs. Experimental studies have been conducted to compare MOEA/D-M2M with classic MOEA/D and NSGA-II. This letter argues that population diversity is more important than convergence in multiobjective evolutionary algorithms for dealing with some MOPs. It also explains why MOEA/D-M2M performs better.

612 citations


Journal ArticleDOI
TL;DR: An automatic decomposition strategy called differential grouping is proposed that can uncover the underlying interaction structure of the decision variables and form subcomponents such that the interdependence between them is kept to a minimum; this near-optimal decomposition is shown to greatly improve solution quality on large-scale global optimization problems.
Abstract: Cooperative co-evolution has been introduced into evolutionary algorithms with the aim of solving increasingly complex optimization problems through a divide-and-conquer paradigm. In theory, the idea of co-adapted subcomponents is desirable for solving large-scale optimization problems. However, in practice, without prior knowledge about the problem, it is not clear how the problem should be decomposed. In this paper, we propose an automatic decomposition strategy called differential grouping that can uncover the underlying interaction structure of the decision variables and form subcomponents such that the interdependence between them is kept to a minimum. We show mathematically how such a decomposition strategy can be derived from a definition of partial separability. The empirical studies show that such near-optimal decomposition can greatly improve the solution quality on large-scale global optimization problems. Finally, we show how such an automated decomposition allows for a better approximation of the contribution of various subcomponents, leading to a more efficient assignment of the computational budget to various subcomponents.

573 citations
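The core of differential grouping is a pairwise interaction test: two variables interact if the fitness change caused by perturbing one depends on the value of the other. A simplified sketch follows, assuming box bounds and a user-chosen threshold `eps` (the paper derives the test and the threshold more carefully from its definition of partial separability):

```python
def interacts(f, i, j, n, lb=0.0, ub=1.0, eps=1e-6):
    """Detect interaction between decision variables i and j of an
    n-variable function f: compare the change in f caused by
    perturbing x_i before and after x_j is moved; a nonzero
    difference indicates non-separability."""
    base = [lb] * n
    d = (ub - lb) / 2
    x1 = list(base); x1[i] += d
    delta1 = f(x1) - f(base)            # effect of x_i with x_j = lb
    x2 = list(base); x2[j] += d
    x3 = list(x2);   x3[i] += d
    delta2 = f(x3) - f(x2)              # effect of x_i after moving x_j
    return abs(delta1 - delta2) > eps
```

Running the test over all variable pairs yields groups of mutually interacting variables, which then become the subcomponents of the cooperative co-evolution framework.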


Journal ArticleDOI
TL;DR: The application of SDE in three popular Pareto-based algorithms demonstrates its usefulness in handling many-objective problems and an extensive comparison with five state-of-the-art EMO algorithms reveals its competitiveness in balancing convergence and diversity of solutions.
Abstract: It is commonly accepted that Pareto-based evolutionary multiobjective optimization (EMO) algorithms encounter difficulties in dealing with many-objective problems. In these algorithms, the ineffectiveness of the Pareto dominance relation for a high-dimensional space leads diversity maintenance mechanisms to play the leading role during the evolutionary process, while the preference of diversity maintenance mechanisms for individuals in sparse regions results in the final solutions distributed widely over the objective space but distant from the desired Pareto front. Intuitively, there are two ways to address this problem: 1) modifying the Pareto dominance relation and 2) modifying the diversity maintenance mechanism in the algorithm. In this paper, we focus on the latter and propose a shift-based density estimation (SDE) strategy. The aim of our study is to develop a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization. In contrast to traditional density estimation that only involves the distribution of individuals in the population, SDE covers both the distribution and convergence information of individuals. The application of SDE in three popular Pareto-based algorithms demonstrates its usefulness in handling many-objective problems. Moreover, an extensive comparison with five state-of-the-art EMO algorithms reveals its competitiveness in balancing convergence and diversity of solutions. These findings not only show that SDE is a good alternative to tackle many-objective problems, but also present a general extension of Pareto-based algorithms in many-objective optimization.

466 citations
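The shift-based idea can be sketched compactly: before measuring how crowded an individual p is, each neighbor's objective values that are better than p's are shifted up to p's values, so poorly converged individuals appear crowded and are penalized. A minimal illustration, using nearest-neighbor distance as the density estimator (the paper applies the shift inside each host algorithm's own estimator):

```python
import math

def sde_distance(p, others):
    """Shift-based density estimation (SDE) sketch: before measuring
    the distance from individual p to each neighbor q, shift every
    objective of q that is better (smaller, under minimization) than
    p's up to p's value, then use the nearest shifted neighbor."""
    dists = []
    for q in others:
        shifted = [max(qk, pk) for qk, pk in zip(q, p)]
        dists.append(math.dist(p, shifted))
    return min(dists)  # larger value = less crowded under SDE
```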


Journal ArticleDOI
TL;DR: This two-part paper surveys different multiobjective evolutionary algorithms for clustering, association rule mining, and several other data mining tasks, and provides a general discussion of the scope for future research in this domain.
Abstract: The aim of any data mining technique is to build an efficient predictive or descriptive model of a large amount of data. Applications of evolutionary algorithms have been found to be particularly useful for automatic processing of large quantities of raw noisy data for optimal parameter setting and to discover significant and meaningful information. Many real-life data mining problems involve multiple conflicting measures of performance, or objectives, which need to be optimized simultaneously. Under this context, multiobjective evolutionary algorithms have gradually been finding more and more applications in the domain of data mining since the beginning of the last decade. In this two-part paper, we present a comprehensive survey of the recent developments of multiobjective evolutionary algorithms for data mining problems. In this paper, Part I, some basic concepts related to multiobjective optimization and data mining are provided. Subsequently, various multiobjective evolutionary approaches for two major data mining tasks, namely feature selection and classification, are surveyed. In Part II of this paper, we survey different multiobjective evolutionary algorithms for clustering, association rule mining, and several other data mining tasks, and provide a general discussion of the scope for future research in this domain.

406 citations


Journal ArticleDOI
TL;DR: A new framework is developed and used in GPEME, which carefully coordinates the surrogate modeling and the evolutionary search, so that the search can focus on a small promising area and is supported by the constructed surrogate model.
Abstract: Surrogate model assisted evolutionary algorithms (SAEAs) have recently attracted much attention due to the growing need for computationally expensive optimization in many real-world applications. Most current SAEAs, however, focus on small-scale problems. SAEAs for medium-scale problems (i.e., 20-50 decision variables) have not yet been well studied. In this paper, a Gaussian process surrogate model assisted evolutionary algorithm for medium-scale computationally expensive optimization problems (GPEME) is proposed and investigated. Its major components are a surrogate model-aware search mechanism for expensive optimization problems when a high-quality surrogate model is difficult to build and dimension reduction techniques for tackling the “curse of dimensionality.” A new framework is developed and used in GPEME, which carefully coordinates the surrogate modeling and the evolutionary search, so that the search can focus on a small promising area and is supported by the constructed surrogate model. Sammon mapping is introduced to transform the decision variables from tens of dimensions to a few dimensions, in order to take advantage of Gaussian process surrogate modeling in a low-dimensional space. Empirical studies on benchmark problems with 20, 30, and 50 variables and a real-world power amplifier design automation problem with 17 variables show the high efficiency and effectiveness of GPEME. Compared to three state-of-the-art SAEAs, better or similar solutions can be obtained with 12% to 50% exact function evaluations.

369 citations


Journal ArticleDOI
TL;DR: This paper proposes a bandit-based AOS method, fitness-rate-rank-based multiarmed bandit (FRRMAB), which uses a sliding window to record the recent fitness improvement rates achieved by the operators, while employing a decaying mechanism to increase the selection probability of the best operator.
Abstract: Adaptive operator selection (AOS) is used to determine the application rates of different operators in an online manner based on their recent performances within an optimization process. This paper proposes a bandit-based AOS method, fitness-rate-rank-based multiarmed bandit (FRRMAB). In order to track the dynamics of the search process, it uses a sliding window to record the recent fitness improvement rates achieved by the operators, while employing a decaying mechanism to increase the selection probability of the best operator. Not much work has been done on AOS in multiobjective evolutionary computation since it is very difficult to measure the fitness improvements quantitatively in most Pareto-dominance-based multiobjective evolutionary algorithms. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Thus, it is natural and feasible to use AOS in MOEA/D. We investigate several important issues in using FRRMAB in MOEA/D. Our experimental results demonstrate that FRRMAB is robust and its operator selection is reasonable. Comparison experiments also indicate that FRRMAB can significantly improve the performance of MOEA/D.

343 citations
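A rough sketch of the bandit mechanism described above follows. The parameter names (window size, scaling factor C, decay factor D) and the exact credit-assignment details are illustrative assumptions; the paper's definitions differ in detail:

```python
import math
from collections import deque

class FRRMAB:
    """Sliding-window, rank-decayed credit assignment feeding a
    UCB-style operator choice (an illustrative sketch of FRRMAB)."""

    def __init__(self, n_ops, window=50, C=5.0, D=1.0):
        self.n_ops, self.C, self.D = n_ops, C, D
        self.window = deque(maxlen=window)  # recent (operator, FIR) pairs

    def update(self, op, fir):
        # fir: fitness improvement rate achieved by applying operator op
        self.window.append((op, fir))

    def select(self):
        reward = [0.0] * self.n_ops
        used = [0] * self.n_ops
        for op, fir in self.window:
            reward[op] += fir
            used[op] += 1
        # Rank operators by accumulated reward; decay credit by rank.
        ranked = sorted(range(self.n_ops), key=lambda o: -reward[o])
        decayed = [0.0] * self.n_ops
        for rank, op in enumerate(ranked):
            decayed[op] = (self.D ** rank) * reward[op]
        total = sum(decayed) or 1.0
        frr = [d / total for d in decayed]
        n = max(sum(used), 1)
        # UCB rule: exploit high credit, explore rarely used operators.
        return max(range(self.n_ops),
                   key=lambda o: frr[o]
                   + self.C * math.sqrt(2 * math.log(n) / (used[o] + 1e-9)))
```

In MOEA/D, the fitness improvement rate of an operator is readily measured on each scalar subproblem, which is why AOS fits the decomposition framework so naturally.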


Journal ArticleDOI
TL;DR: Based on the proposed discrete framework, a multiobjective discrete particle swarm optimization algorithm is proposed to solve the network clustering problem, and the decomposition mechanism is adopted.
Abstract: The field of complex network clustering has been very active in the past several years. In this paper, a discrete framework of the particle swarm optimization algorithm is proposed. Based on the proposed discrete framework, a multiobjective discrete particle swarm optimization algorithm is proposed to solve the network clustering problem. The decomposition mechanism is adopted. A problem-specific population initialization method based on label propagation and a turbulence operator are introduced. In the proposed method, two evaluation objectives termed kernel k-means and ratio cut are to be minimized. However, the two objectives can only be used to handle unsigned networks. In order to deal with signed networks, they have been extended to the signed version. The clustering performance of the proposed algorithm has been validated on signed networks and unsigned networks. Extensive experimental studies compared with ten state-of-the-art approaches prove that the proposed algorithm is effective and promising.

342 citations


Journal ArticleDOI
TL;DR: This paper advocates the use of a simple and effective stable matching (STM) model to coordinate the selection process in MOEA/D and demonstrates that user-preference information can be readily used in the proposed algorithm to find a region that decision makers are interested in.
Abstract: Multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a set of scalar optimization subproblems and optimizes them in a collaborative manner. Subproblems and solutions are two sets of agents that naturally exist in MOEA/D. The selection of promising solutions for subproblems can be regarded as a matching between subproblems and solutions. Stable matching, proposed in economics, can effectively resolve conflicts of interests among selfish agents in the market. In this paper, we advocate the use of a simple and effective stable matching (STM) model to coordinate the selection process in MOEA/D. In this model, subproblem agents can express their preferences over the solution agents, and vice versa. The stable outcome produced by the STM model matches each subproblem with one single solution, and it trades off convergence and diversity of the evolutionary search. Comprehensive experiments have shown the effectiveness and competitiveness of our MOEA/D algorithm with the STM model. We have also demonstrated that user-preference information can be readily used in our proposed algorithm to find a region that decision makers are interested in.

292 citations
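The matching step can be illustrated with the textbook Gale-Shapley procedure on which stable matching models are built. This is a generic sketch, not the authors' implementation; in MOEA/D the preference lists would be derived from aggregation-function values (subproblems ranking solutions) and distances to subproblem directions (solutions ranking subproblems):

```python
def stable_matching(sub_prefs, sol_prefs):
    """Gale-Shapley: subproblems propose to solutions in preference
    order; a solution keeps the proposer it ranks highest."""
    n = len(sub_prefs)
    next_prop = [0] * n                 # next choice each subproblem tries
    engaged = {}                        # solution -> subproblem holding it
    free = list(range(n))               # subproblems without a solution
    # rank[sol][sub] = position of sub in sol's preference list
    rank = [{s: r for r, s in enumerate(prefs)} for prefs in sol_prefs]
    while free:
        sub = free.pop()
        sol = sub_prefs[sub][next_prop[sub]]
        next_prop[sub] += 1
        if sol not in engaged:
            engaged[sol] = sub
        elif rank[sol][sub] < rank[sol][engaged[sol]]:
            free.append(engaged[sol])   # sol prefers the new proposer
            engaged[sol] = sub
        else:
            free.append(sub)            # rejected; try next preference
    return {sub: sol for sol, sub in engaged.items()}
```

The stable outcome assigns exactly one solution to each subproblem, which is how the model balances convergence (subproblem preferences) against diversity (solution preferences).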


Journal ArticleDOI
TL;DR: A new fitness evaluation mechanism to continuously differentiate individuals into different degrees of optimality beyond the classification of the original Pareto dominance is introduced, and the concept of fuzzy logic is adopted to define a fuzzy Pareto domination relation.
Abstract: Evolutionary algorithms have been effectively used to solve multiobjective optimization problems with a small number of objectives, two or three in general. However, when problems with many objectives are encountered, nearly all algorithms perform poorly due to loss of selection pressure in fitness evaluation solely based upon the Pareto optimality principle. In this paper, we introduce a new fitness evaluation mechanism to continuously differentiate individuals into different degrees of optimality beyond the classification of the original Pareto dominance. The concept of fuzzy logic is adopted to define a fuzzy Pareto domination relation. As a case study, the fuzzy concept is incorporated into the designs of NSGA-II and SPEA2. Experimental results show that the proposed methods exhibit better performance in both convergence and diversity than the original ones for solving many-objective optimization problems.
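One way to picture a fuzzy Pareto domination degree is a per-objective membership function combined across objectives. The sketch below is illustrative only; the membership function and the width parameter `eps` are assumptions, not the paper's exact definitions:

```python
def fuzzy_dominance(a, b, eps=0.1):
    """Degree (in [0, 1]) to which solution a fuzzily dominates b
    under minimization: full credit per objective if a beats b by at
    least eps, zero credit if it loses by eps or more, linear in
    between; per-objective degrees combine multiplicatively."""
    degree = 1.0
    for ai, bi in zip(a, b):
        diff = bi - ai  # positive: a is better on this objective
        degree *= min(1.0, max(0.0, (diff + eps) / (2 * eps)))
    return degree
```

Because the degree varies continuously, individuals that are mutually nondominated under crisp Pareto dominance can still be ranked, restoring selection pressure in high-dimensional objective spaces.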

Journal ArticleDOI
TL;DR: An improved differential evolution algorithm with a real-coded matrix representation for each individual of the population, a two-step method for generating the initial population, and a new mutation strategy are proposed to solve the SCC scheduling problem.
Abstract: This paper studies a challenging problem of dynamic scheduling in steelmaking-continuous casting (SCC) production. The problem is to re-optimize the assignment, sequencing, and timetable of a set of existing and new jobs among various production stages for the new environment when unforeseen changes occur in the production system. We model the problem considering the constraints of the practical technological requirements and the dynamic nature. To solve the SCC scheduling problem, we propose an improved differential evolution (DE) algorithm with a real-coded matrix representation for each individual of the population, a two-step method for generating the initial population, and a new mutation strategy. To further improve the efficiency and effectiveness of the solution process for dynamic use, an incremental mechanism is proposed to generate a new initial population for the DE whenever a real-time event arises, based on the final population in the last DE solution process. Computational experiments on randomly generated instances and the practical production data show that the proposed improved algorithm can obtain better solutions compared to other algorithms.

Journal ArticleDOI
TL;DR: A DE algorithm is proposed that uses a new mechanism to dynamically select the best performing combinations of parameters for a problem during the course of a single run and shows better performance over the state-of-the-art algorithms.
Abstract: Over the last few decades, a number of differential evolution (DE) algorithms have been proposed with excellent performance on mathematical benchmarks. However, like any other optimization algorithm, the success of DE is highly dependent on the search operators and control parameters that are often decided a priori. The selection of the parameter values is itself a combinatorial optimization problem. Although a considerable number of investigations have been conducted with regards to parameter selection, it is known to be a tedious task. In this paper, a DE algorithm is proposed that uses a new mechanism to dynamically select the best performing combinations of parameters (amplification factor, crossover rate, and the population size) for a problem during the course of a single run. The performance of the algorithm is judged by solving three well known sets of optimization test problems (two constrained and one unconstrained). The results demonstrate that the proposed algorithm not only saves the computational time, but also shows better performance over the state-of-the-art algorithms. The proposed mechanism can easily be applied to other population-based algorithms.

Journal ArticleDOI
TL;DR: Four new multi-objective genetic programming-based hyperheuristic (MO-GPHH) methods for automatic design of scheduling policies, including dispatching rules and due-date assignment rules in job shop environments are developed.
Abstract: A scheduling policy strongly influences the performance of a manufacturing system. However, the design of an effective scheduling policy is complicated and time consuming due to the complexity of each scheduling decision, as well as the interactions among these decisions. This paper develops four new multi-objective genetic programming-based hyperheuristic (MO-GPHH) methods for automatic design of scheduling policies, including dispatching rules and due-date assignment rules in job shop environments. In addition to using three existing search strategies, nondominated sorting genetic algorithm II, strength Pareto evolutionary algorithm 2, and harmonic distance-based multi-objective evolutionary algorithm, to develop new MO-GPHH methods, a new approach called diversified multi-objective cooperative evolution (DMOCC) is also proposed. The novelty of these MO-GPHH methods is that they are able to handle multiple scheduling decisions simultaneously. The experimental results show that the evolved Pareto fronts represent effective scheduling policies that can dominate scheduling policies from combinations of existing dispatching rules with dynamic/regression-based due-date assignment rules. The evolved scheduling policies also show dominating performance on unseen simulation scenarios with different shop settings. In addition, the uniformity of the scheduling policies obtained from the proposed method of DMOCC is better than those evolved by other evolutionary approaches.

Journal ArticleDOI
TL;DR: A recently developed discrete firefly algorithm is extended to solve hybrid flowshop scheduling problems with two objectives and shows that the proposed algorithm outperforms many other metaheuristics in the literature.
Abstract: Hybrid flowshop scheduling problems include the generalization of flowshops with parallel machines in some stages. Hybrid flowshop scheduling problems are known to be NP-hard. Hence, researchers have proposed many heuristics and metaheuristic algorithms to tackle such challenging tasks. In this letter, a recently developed discrete firefly algorithm is extended to solve hybrid flowshop scheduling problems with two objectives. Makespan and mean flow time are the objective functions considered. Computational experiments are carried out to evaluate the performance of the proposed algorithm. The results show that the proposed algorithm outperforms many other metaheuristics in the literature.

Journal ArticleDOI
TL;DR: An ant colony optimization (ACO) algorithm that extends the ACOR algorithm for continuous optimization to tackle mixed-variable optimization problems, and a novel procedure to generate artificial, mixed-variable benchmark functions that is used to automatically tune ACOMV's parameters.
Abstract: In this paper, we introduce ACOMV: an ant colony optimization (ACO) algorithm that extends the ACOR algorithm for continuous optimization to tackle mixed-variable optimization problems. In ACOMV, the decision variables of an optimization problem can be explicitly declared as continuous, ordinal, or categorical, which allows the algorithm to treat them adequately. ACOMV includes three solution generation mechanisms: a continuous optimization mechanism (ACOR), a continuous relaxation mechanism (ACOMV-o) for ordinal variables, and a categorical optimization mechanism (ACOMV-c) for categorical variables. Together, these mechanisms allow ACOMV to tackle mixed-variable optimization problems. We also define a novel procedure to generate artificial, mixed-variable benchmark functions, and we use it to automatically tune ACOMV's parameters. The tuned ACOMV is tested on various real-world continuous and mixed-variable engineering optimization problems. Comparisons with results from the literature demonstrate the effectiveness and robustness of ACOMV on mixed-variable optimization problems.

Journal ArticleDOI
TL;DR: The proposed method is much better than a traditional EP, a surrogate-assisted penalty-based EP, stochastic ranking evolution strategy, scatter search, and CMODE, and it is competitive with ConstrLMSRBF on the problems used.
Abstract: This paper develops a surrogate-assisted evolutionary programming (EP) algorithm for constrained expensive black-box optimization that can be used for high-dimensional problems with many black-box inequality constraints. The proposed method does not use a penalty function and it builds surrogates for the objective and constraint functions. Each parent generates a large number of trial offspring in each generation. Then, the surrogate functions are used to identify the trial offspring that are predicted to be feasible with the best predicted objective function values or those with the minimum number of predicted constraint violations. The objective and constraint functions are then evaluated only on the most promising trial offspring from each parent, and the method proceeds in the same way as in a standard EP. In the numerical experiments, the type of surrogate used to model the objective and each of the constraint functions is a cubic radial basis function (RBF) augmented by a linear polynomial. The resulting RBF-assisted EP is applied to 18 benchmark problems and to an automotive problem with 124 decision variables and 68 black-box inequality constraints. The proposed method is much better than a traditional EP, a surrogate-assisted penalty-based EP, stochastic ranking evolution strategy, scatter search, and CMODE, and it is competitive with ConstrLMSRBF on the problems used.

Journal ArticleDOI
TL;DR: This paper introduces an ensemble method to compare MOEAs by combining a number of performance metrics using double elimination tournament selection and shows that the proposed metric ensemble can provide a more comprehensive comparison among variousMOEAs than what could be obtained from a single performance metric alone.
Abstract: Evolutionary algorithms have been successfully exploited to solve multiobjective optimization problems. In the literature, a heuristic approach is often taken. For a chosen benchmark problem with specific problem characteristics, the performance of multiobjective evolutionary algorithms (MOEAs) is evaluated via some heuristic chosen performance metrics. The conclusion is then drawn based on statistical findings given the preferable choices of performance metrics. The conclusion, if any, is often indecisive and reveals no insight pertaining to which specific problem characteristics the underlying MOEA could perform the best. In this paper, we introduce an ensemble method to compare MOEAs by combining a number of performance metrics using double elimination tournament selection. The double elimination design allows characteristically poor performance of a quality algorithm to still be able to win it all. Experimental results show that the proposed metric ensemble can provide a more comprehensive comparison among various MOEAs than what could be obtained from a single performance metric alone. The end result is a ranking order among all chosen MOEAs, but not quantifiable measures pertaining to the underlying MOEAs.

Journal ArticleDOI
TL;DR: Autonomous scaling is, for the first time, shown to be possible in learning classifier systems and improves effectiveness and reduces the number of training instances required in large problems, but requires more time due to its sequential build-up of knowledge.
Abstract: Evolutionary computation techniques have had limited capabilities in solving large-scale problems due to the large search space demanding large memory and much longer training times. In the work presented here, a genetic-programming-like rich encoding scheme has been constructed to identify building blocks of knowledge in a learning classifier system. The fitter building blocks from the learning system trained against smaller problems have been utilized in a higher complexity problem in the domain to achieve scalable learning. The proposed system has been examined and evaluated on four different Boolean problem domains: 1) multiplexer, 2) majority-on, 3) carry, and 4) even-parity problems. The major contribution of this paper is to successfully extract useful building blocks from smaller problems and reuse them to learn more complex large-scale problems in the domain, e.g., the 135-bit multiplexer problem, where the number of possible instances is 2^135 ≈ 4 × 10^40, is solved by reusing the extracted knowledge from the learned lower level solutions in the domain. Autonomous scaling is, for the first time, shown to be possible in learning classifier systems. It improves effectiveness and reduces the number of training instances required in large problems, but requires more time due to its sequential build-up of knowledge.

Journal ArticleDOI
Lin Li, Xin Yao, Rustam Stolkin, Maoguo Gong, Shan He
TL;DR: A new soft-thresholding evolutionary multiobjective algorithm (StEMO) is presented, which uses a soft-thresholding technique to incorporate two additional heuristics: one with greater chance to increase speed of convergence toward the PF, and another with higher probability to improve the spread of solutions along the PF, enabling an optimal solution to be found in the knee region.
Abstract: This paper addresses the problem of finding sparse solutions to linear systems. Although this problem involves two competing cost function terms (measurement error and a sparsity-inducing term), previous approaches combine these into a single cost term and solve the problem using conventional numerical optimization methods. In contrast, the main contribution of this paper is to use a multiobjective approach. The paper begins by investigating the sparse reconstruction problem, and presents data to show that knee regions do exist on the Pareto front (PF) for this problem and that optimal solutions can be found in these knee regions. Another contribution of the paper, a new soft-thresholding evolutionary multiobjective algorithm (StEMO), is then presented, which uses a soft-thresholding technique to incorporate two additional heuristics: one with greater chance to increase speed of convergence toward the PF, and another with higher probability to improve the spread of solutions along the PF, enabling an optimal solution to be found in the knee region. Experiments are presented, which show that StEMO significantly outperforms five other well known techniques that are commonly used for sparse reconstruction. Practical applications are also demonstrated to fundamental problems of recovering signals and images from noisy data.
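The soft-thresholding operation underlying StEMO's heuristics is the classical shrinkage operator from sparse reconstruction; a minimal sketch:

```python
def soft_threshold(x, t):
    """Soft-thresholding (shrinkage): move each entry of x toward zero
    by t, setting entries with magnitude below t exactly to zero."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]
```

Small entries vanish while large ones shrink, which is what makes the operator sparsity-inducing; how StEMO embeds it into its two heuristics is specific to the paper.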

Journal ArticleDOI
TL;DR: A hybrid approach consisting of the new estimation of distribution algorithm and a variable neighborhood search is proposed, and it is suggested that the successful performance of the introduced approach is due to the ability of the generalized Mallows estimation of distribution algorithm to discover promising regions in the search space.
Abstract: The aim of this paper is two-fold. First, we introduce a novel general estimation of distribution algorithm to deal with permutation-based optimization problems. The algorithm is based on the use of a probabilistic model for permutations called the generalized Mallows model. In order to prove the potential of the proposed algorithm, our second aim is to solve the permutation flowshop scheduling problem. A hybrid approach consisting of the new estimation of distribution algorithm and a variable neighborhood search is proposed. Conducted experiments demonstrate that the proposed algorithm is able to outperform the state-of-the-art approaches. Moreover, from the 220 benchmark instances tested, the proposed hybrid approach obtains new best known results in 152 cases. An in-depth study of the results suggests that the successful performance of the introduced approach is due to the ability of the generalized Mallows estimation of distribution algorithm to discover promising regions in the search space.

Journal ArticleDOI
TL;DR: A divide-and-conquer approach is proposed to solve the large-scale capacitated arc routing problem (LSCARP) more effectively, which adopts the cooperative coevolution framework to decompose it into smaller ones and solve them separately.
Abstract: In this paper, a divide-and-conquer approach is proposed to solve the large-scale capacitated arc routing problem (LSCARP) more effectively. Instead of considering the problem as a whole, the proposed approach adopts the cooperative coevolution (CC) framework to decompose it into smaller ones and solve them separately. An effective decomposition scheme called the route distance grouping (RDG) is developed to decompose the problem. Its merit is twofold. First, it employs the route information of the best-so-far solution, so that the quality of the decomposition is upper bounded by that of the best-so-far solution. Thus, it can keep improving the decomposition by updating the best-so-far solution during the search. Second, it defines a distance between routes, based on which the potentially better decompositions can be identified. Therefore, RDG is able to obtain promising decompositions and focus the search on the promising regions of the vast solution space. Experimental studies verified the efficacy of RDG on the instances with a large number of tasks and tight capacity constraints, where it managed to obtain significantly better results than its counterpart without decomposition in a much shorter time. Furthermore, the best-known solutions of the EGL-G LSCARP instances are much improved.
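The route distance driving RDG can be illustrated with a deliberately simplified stand-in (the paper's exact definition may differ; the route representation and `dist` callable here are assumptions): routes whose tasks lie close to each other receive a small distance and are grouped into the same subproblem.

```python
def route_distance(route_a, route_b, dist):
    """Average pairwise distance between the tasks of two routes;
    dist(a, b) returns the distance between two task locations."""
    total = sum(dist(a, b) for a in route_a for b in route_b)
    return total / (len(route_a) * len(route_b))

# Toy 1-D task locations with absolute-difference distance.
d = route_distance([0, 2], [4, 6], lambda a, b: abs(a - b))  # 4.0
```

Grouping nearby routes into one subcomponent keeps tasks that are likely to share a route in the same subproblem, which is what makes the decomposition promising.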

Journal ArticleDOI
TL;DR: An effective multiobjective particle swarm optimization method for population classification in fire evacuation operations is proposed, which simultaneously optimizes the precision and recall measures of the classification rules.
Abstract: In an emergency evacuation operation, accurate classification of the evacuee population can provide important information to support the responders in decision making, and therefore makes a great contribution to protecting the population from potential harm. However, real-world fire evacuation data are often noisy, incomplete, and inconsistent, and the response time for population classification is very limited. In this paper, we propose an effective multiobjective particle swarm optimization method for population classification in fire evacuation operations, which simultaneously optimizes the precision and recall measures of the classification rules. We design an effective approach for encoding classification rules, and use a comprehensive learning strategy for evolving particles and maintaining diversity of the swarm. Comparative experiments show that the proposed method performs better than some state-of-the-art methods for classification rule mining, especially on the real-world fire evacuation dataset. This paper also reports a successful application of our method in a real-world fire evacuation operation that recently occurred in China. The method can be easily extended to many other multiobjective rule mining problems.
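The two objectives being optimized can be computed for any candidate rule as follows (a generic sketch; the paper's rule encoding is richer than a bare predicate):

```python
def precision_recall(rule, records, target):
    """Evaluate one classification rule (a predicate over a record)
    against labeled (record, label) pairs for class `target`."""
    tp = sum(1 for rec, label in records if rule(rec) and label == target)
    fp = sum(1 for rec, label in records if rule(rec) and label != target)
    fn = sum(1 for rec, label in records if not rule(rec) and label == target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A rule can raise precision by firing less often, typically at the cost of recall, which is why the two measures form a genuine multiobjective tradeoff rather than a single score.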

Journal ArticleDOI
TL;DR: The algorithm is tested on the set of CEC09 problems, where the results show that multiobjective optimization based on joint model estimation is able to obtain considerably better fronts for some of the problems compared with the search based on conventional genetic operators in the state-of-the-art multi objective evolutionary algorithms.
Abstract: This paper proposes a new multiobjective estimation of distribution algorithm (EDA) based on joint probabilistic modeling of objectives and variables. This EDA uses the multidimensional Bayesian network as its probabilistic model. In this way, it can capture the dependencies between objectives and between variables and objectives, as well as the dependencies between variables that are learned in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better tradeoff solutions to the multiobjective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multiobjective problem. To apply the algorithm to many-objective problems, the algorithm includes four different ranking methods proposed in the literature for this purpose. The algorithm is first applied to the set of walking fish group problems, and its optimization performance is compared with a standard multiobjective evolutionary algorithm and another competitive multiobjective EDA. The experimental results show that on several of these problems, and for different objective space dimensions, the proposed algorithm performs significantly better, and on some others achieves comparable results, when compared with the other two algorithms. The algorithm is then tested on the set of CEC09 problems, where the results show that multiobjective optimization based on joint model estimation is able to obtain considerably better fronts for some of the problems compared with the search based on conventional genetic operators in the state-of-the-art multiobjective evolutionary algorithms.

Journal ArticleDOI
TL;DR: This paper proposes MOPNAR, a new multiobjective evolutionary algorithm, in order to mine a reduced set of positive and negative quantitative association rules with low computational cost; it maximizes three objectives (comprehensibility, interestingness, and performance) in order to obtain rules that are interesting, easy to understand, and provide good coverage of the dataset.
Abstract: Most of the algorithms for mining quantitative association rules focus on positive dependencies without paying particular attention to negative dependencies. The latter may be worth taking into account, however, as they relate the presence of certain items to the absence of others. The algorithms used to extract such rules usually consider only one evaluation criterion in measuring the quality of generated rules. Recently, some researchers have framed the process of extracting association rules as a multiobjective problem, allowing us to jointly optimize several measures that can present different degrees of trade-off depending on the dataset used. In this paper, we propose MOPNAR, a new multiobjective evolutionary algorithm, in order to mine a reduced set of positive and negative quantitative association rules with low computational cost. To accomplish this, our proposal extends a recent multiobjective evolutionary algorithm based on decomposition to perform an evolutionary learning of the intervals of the attributes and a condition selection for each rule, while introducing an external population and a restarting process to store all the nondominated rules found and to improve the diversity of the rule set obtained. Moreover, this proposal maximizes three objectives (comprehensibility, interestingness, and performance) in order to obtain rules that are interesting, easy to understand, and provide good coverage of the dataset. The effectiveness of the proposed approach is validated over several real-world datasets.
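The distinction between positive and negative rules can be made concrete with a minimal confidence computation over set-valued transactions (illustrative only; MOPNAR's actual objectives are comprehensibility, interestingness, and performance over quantitative intervals):

```python
def confidence(transactions, antecedent, consequent, negative=False):
    """Confidence of the rule A -> B over set-valued transactions,
    or of the negative rule A -> not-B when negative=True."""
    covered = [t for t in transactions if antecedent <= t]
    if not covered:
        return 0.0
    # (consequent <= t) != negative counts presence for positive rules
    # and absence for negative rules.
    hits = sum(1 for t in covered if (consequent <= t) != negative)
    return hits / len(covered)
```

The positive and negative confidences of the same item pair sum to one over the covered transactions, which is why mining only positive dependencies discards half of the available information.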

Journal ArticleDOI
TL;DR: This paper develops a two-step approach to evolving ensembles using genetic programming (GP) for unbalanced data and proposes a novel ensemble selection approach using GP to automatically find/choose the best individuals for the ensemble.
Abstract: Classification algorithms can suffer from performance degradation when the class distribution is unbalanced. This paper develops a two-step approach to evolving ensembles using genetic programming (GP) for unbalanced data. The first step uses multiobjective (MO) GP to evolve a Pareto-approximated front of GP classifiers to form the ensemble by trading-off the minority and the majority class against each other during learning. The MO component alleviates the reliance on sampling to artificially rebalance the data. The second step, which is the focus of this paper, proposes a novel ensemble selection approach using GP to automatically find/choose the best individuals for the ensemble. This new GP approach combines multiple Pareto-approximated front members into a single composite genetic program solution to represent the (optimized) ensemble. This ensemble representation has two main advantages/novelties over traditional genetic algorithm (GA) approaches. First, by limiting the depth of the composite solution trees, we use selection pressure during evolution to find small highly-cooperative groups of individuals for the ensemble. This means that ensemble sizes are not fixed a priori (as in GA), but vary depending on the strength of the base learners. Second, we compare different function set operators in the composite solution trees to explore new ways to aggregate the member outputs and thus, control how the ensemble computes its output. We show that the proposed GP approach evolves smaller, more diverse ensembles compared to an established ensemble selection algorithm, while still performing as well as, or better than, the established approach. The evolved GP ensembles also perform well compared to other bagging and boosting approaches, particularly on tasks with high levels of class imbalance.
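The idea that the root operator of the composite solution controls how member outputs are aggregated can be reduced to a sketch (the `aggregate` parameter here is a hypothetical stand-in for the function-set operator at the root of the composite tree):

```python
def ensemble_predict(members, x, aggregate=sum):
    """Combine real-valued member outputs with `aggregate`; the sign
    of the combined value gives the predicted class (+1 / -1)."""
    outputs = [m(x) for m in members]
    return 1 if aggregate(outputs) >= 0 else -1
```

Swapping `aggregate` (sum, min, max, ...) changes how the ensemble computes its output without touching the members themselves, which is the dimension the GP search explores alongside member selection.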

Journal ArticleDOI
TL;DR: This paper carries out a comparison of the fitness landscape for four classic optimization problems: Max-Sat, graph-coloring, traveling salesman, and quadratic assignment with a fairly comprehensive description of the properties, including the expected time to reach a local optimum.
Abstract: This paper carries out a comparison of the fitness landscape for four classic optimization problems: Max-Sat, graph-coloring, traveling salesman, and quadratic assignment. We have focused on two types of properties: local average properties of the landscape, and properties of the local optima. For the local optima we give a fairly comprehensive description of the properties, including the expected time to reach a local optimum, the number of local optima at different cost levels, the distance between optima, and the expected probability of reaching the optima. Principal component analysis is used to understand the correlations between the local optima. Most of the properties that we examine have not been studied previously, particularly those concerned with properties of the local optima. We compare and contrast the behavior of the four different problems. Although the problems are very different at the low level, many of the long-range properties exhibit a remarkable degree of similarity.
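Several of the measured properties (time to reach a local optimum, number and quality of optima) are defined relative to a simple local search; a best-improvement hill climber that also counts its steps can be sketched as follows (a generic sketch, not the paper's exact procedure):

```python
def hill_climb(x0, neighbors, cost):
    """Best-improvement local search: move to the cheapest neighbor
    until none improves; returns (local_optimum, steps_taken)."""
    x, steps = x0, 0
    while True:
        best = min(neighbors(x), key=cost, default=x)
        if cost(best) >= cost(x):
            return x, steps
        x, steps = best, steps + 1

# Toy 1-D landscape: integer line with cost (x - 3)^2, two neighbors each.
opt, steps = hill_climb(0, lambda x: [x - 1, x + 1],
                        lambda x: (x - 3) ** 2)  # (3, 3)
```

Running such a climber from many random starts yields empirical estimates of the number of local optima, the basin sizes, and the expected time to reach an optimum.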

Journal ArticleDOI
TL;DR: A stochastic version of extended resource investment project scheduling problems (ERIPSPs), in which the durations of tasks are a function of allocated resources, is presented, together with a robustness measure for the solutions of the resulting stochastic ERIPSPs (SERIPSPs) when uncertainties interact.
Abstract: Planning problems, such as mission capability planning in defense, can traditionally be modeled as a resource investment project scheduling problem (RIPSP) with unconstrained resources and cost. This formulation is too abstract in some real-world applications, where the durations of tasks depend on the allocated resources. In this paper, we first propose a new version of RIPSPs, namely extended RIPSPs (ERIPSPs), in which the durations of tasks are a function of allocated resources. Moreover, we introduce a resource proportion coefficient to manifest the contribution degree of various resources to activities. Since, in practice, the circumstances under which a plan will be executed are stochastic in nature, we present a stochastic version of ERIPSPs, namely stochastic extended RIPSPs (SERIPSPs). To solve SERIPSPs, we first use scenarios to capture the space of possibilities (i.e., the stochastic elements of the problem). We focus on three sources of uncertainty: duration perturbation, resource breakdown, and precedence alteration. We propose a robustness measure for the solutions of SERIPSPs when uncertainties interact. We then formulate an SERIPSP as a multiobjective optimization model with three optimization objectives: makespan, cost, and robustness. A knowledge-based multiobjective evolutionary algorithm (K-MOEA) is proposed to solve the problem. The mechanism of K-MOEA is simple and time efficient. The algorithm has two main characteristics. The first is that useful information (knowledge) contained in the obtained approximated nondominated solutions is extracted during the evolutionary process. The second is that the extracted knowledge is utilized by updating the population periodically to guide subsequent search. The approach is illustrated using a synthetic case study. Randomly generated benchmark instances are used to analyze the performance of the proposed K-MOEA. The experimental results illustrate the effectiveness of the proposed algorithm and its potential for solving SERIPSPs.

Journal ArticleDOI
TL;DR: The design of a new genetic algorithm (GA) is introduced to detect the locations of license plate (LP) symbols with encouraging results with 98.4% overall accuracy for two different datasets having variability in orientation, scaling, plate location, illumination, and complex background.
Abstract: In this research, the design of a new genetic algorithm (GA) is introduced to detect the locations of license plate (LP) symbols. An adaptive threshold method is applied to overcome the dynamic changes of illumination conditions when converting the image into binary. A connected component analysis technique (CCAT) is used to detect candidate objects inside the unknown image. A scale-invariant geometric relationship matrix is introduced to model the layout of symbols in any LP, which simplifies system adaptability when applied in different countries. Moreover, two new crossover operators, based on sorting, are introduced, which greatly improve the convergence speed of the system. Most of the CCAT problems, such as touching or broken bodies, are minimized by modifying the GA to perform partial matching until an acceptable fitness value is reached. The system is implemented using MATLAB, and a variety of image samples are used in experiments to verify the effectiveness of the proposed system. Encouraging results with 98.4% overall accuracy are reported for two different datasets having variability in orientation, scaling, plate location, illumination, and complex background. Examples of distorted plate images are successfully detected thanks to the method's independence of the shape, color, and location of the plate.
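The adaptive threshold step can be illustrated with a naive local-mean binarization (a common textbook scheme assumed here for illustration; the paper's exact method may differ): each pixel is compared against the mean of its own neighborhood rather than a single global threshold, so uneven illumination does not wash out the symbols.

```python
def adaptive_binarize(img, window=1, c=0):
    """Binarize a 2-D grayscale image (list of rows): a pixel becomes
    foreground (1) when it exceeds the mean of its local
    (2*window+1)^2 neighborhood minus offset c."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - window), min(h, i + window + 1))
                    for b in range(max(0, j - window), min(w, j + window + 1))]
            row.append(1 if img[i][j] > sum(vals) / len(vals) - c else 0)
        out.append(row)
    return out
```

On a uniform background every pixel sits at its local mean and stays background, while a bright symbol pixel stands out against its neighborhood regardless of the overall illumination level.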

Journal ArticleDOI
TL;DR: It is proved that longer memory strategies statistically outperform shorter memory strategies in the sense of evolutionary stability, and an example of a memory-two strategy is given to show how the theoretical study of evolutionary stability assists in developing novel strategies.
Abstract: The iterated prisoner's dilemma is an ideal model for the evolution of cooperation among payoff-maximizing individuals. It has attracted wide interest in the development of novel strategies since the success of tit-for-tat in Axelrod's iterated prisoner's dilemma competitions. Every strategy for iterated prisoner's dilemma utilizes a certain length of historical interactions with the opponent, which is regarded as the size of the memory, in making its choices. Intuitively, longer memory strategies must have an advantage over shorter memory strategies. In practice, however, most of the well known strategies are short memory strategies that utilize only the recent history of previous interactions. In this paper, the effect of the memory size of strategies on their evolutionary stability in both infinite length and indefinite length n-person iterated prisoner's dilemma is studied. Based on the concept of a counter strategy, we develop a theoretical methodology for evaluating the evolutionary stability of strategies and prove that longer memory strategies outperform shorter memory strategies statistically in the sense of evolutionary stability. We also give an example of a memory-two strategy to show how the theoretical study of evolutionary stability assists in developing novel strategies.
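For concreteness, a memory-one strategy conditions only on the opponent's previous move. With the standard PD payoffs (T=5, R=3, P=1, S=0), a short simulation (helper names are illustrative, not the paper's) shows how tit-for-tat fares against an unconditional defector:

```python
# Payoffs indexed by (my move, their move): (my payoff, their payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy_a, strategy_b, rounds):
    """Iterate the PD; each memory-one strategy maps the opponent's
    last move (None on the first round) to 'C' or 'D'.
    Returns the two players' total payoffs."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda last: 'C' if last in (None, 'C') else 'D'
always_defect = lambda last: 'D'
```

Over five rounds tit-for-tat loses only the first exchange and then mirrors defection (totals 4 vs. 9), while two tit-for-tat players sustain mutual cooperation (15 each); larger memories enlarge the space of such conditional responses, which is what the paper's stability analysis quantifies.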