
Showing papers on "Evolutionary computation" published in 2019


Journal ArticleDOI
TL;DR: A taxonomy of data-driven evolutionary optimization problems is provided, and the main challenges are discussed with respect to the nature and amount of data and the availability of new data during optimization.
Abstract: Most evolutionary optimization algorithms assume that the evaluation of the objective and constraint functions is straightforward. In solving many real-world optimization problems, however, such objective functions may not exist. Instead, computationally expensive numerical simulations or costly physical experiments must be performed for fitness evaluations. In more extreme cases, only historical data are available for performing optimization and no new data can be generated during optimization. Solving evolutionary optimization problems driven by data collected in simulations, physical experiments, production processes, or daily life are termed data-driven evolutionary optimization. In this paper, we provide a taxonomy of different data driven evolutionary optimization problems, discuss main challenges in data-driven evolutionary optimization with respect to the nature and amount of data, and the availability of new data during optimization. Real-world application examples are given to illustrate different model management strategies for different categories of data-driven optimization problems.
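The offline setting described above can be pictured with a minimal sketch: a surrogate model is fitted once to historical data, and a simple evolutionary loop then optimizes against surrogate predictions only. The surrogate choice, the toy objective, and all variable names below are illustrative assumptions, not the paper's model management strategies.

```python
# Minimal sketch of offline data-driven evolutionary optimization:
# a surrogate is fitted to historical (x, y) data and the EA evaluates
# candidates only on that surrogate (no new real evaluations).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Historical data collected before optimization starts.
X_hist = rng.uniform(-5, 5, size=(200, 10))
y_hist = np.sum(X_hist**2, axis=1)          # stand-in for the unknown objective

surrogate = RandomForestRegressor(n_estimators=100).fit(X_hist, y_hist)

# Simple (mu + lambda) evolutionary loop driven purely by surrogate predictions.
pop = rng.uniform(-5, 5, size=(30, 10))
for gen in range(50):
    offspring = pop + rng.normal(0, 0.5, size=pop.shape)   # Gaussian mutation
    union = np.vstack([pop, offspring])
    fitness = surrogate.predict(union)                      # surrogate, not real f
    pop = union[np.argsort(fitness)[:30]]                   # keep the best 30

print("best surrogate-predicted value:", surrogate.predict(pop[:1])[0])
```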

344 citations


Journal ArticleDOI
TL;DR: An IGD indicator-based evolutionary algorithm for solving many-objective optimization problems (MaOPs) is proposed and experimental results measured by the chosen performance metrics indicate that the proposed algorithm is very competitive in addressing MaOPs.
Abstract: Inverted generational distance (IGD) has been widely considered a reliable performance indicator to concurrently quantify the convergence and diversity of multiobjective and many-objective evolutionary algorithms. In this paper, an IGD indicator-based evolutionary algorithm for solving many-objective optimization problems (MaOPs) is proposed. Specifically, the IGD indicator is employed in each generation to select the solutions with favorable convergence and diversity. In addition, a computationally efficient dominance comparison method is designed to assign the rank values of solutions along with three newly proposed proximity distance assignments. Based on these two designs, the solutions are selected from a global view by a linear assignment mechanism that accounts for convergence and diversity simultaneously. To improve the accuracy of the sampled reference points for the calculation of the IGD indicator, we also propose an efficient decomposition-based nadir point estimation method for constructing the Utopian Pareto front (PF), which is regarded as the best approximate PF for real-world MaOPs at the early stage of the evolution. To evaluate the performance, a series of experiments is performed on the proposed algorithm against a group of selected state-of-the-art many-objective optimization algorithms over optimization problems with 8, 15, and 20 objectives. Experimental results measured by the chosen performance metrics indicate that the proposed algorithm is very competitive in addressing MaOPs.
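For reference, the IGD indicator itself is easy to state: the average distance from each sampled reference point on the Pareto front to its nearest obtained solution. The sketch below computes it on a toy bi-objective example; the reference-point sampling and the way IGD is embedded in environmental selection are simplified assumptions.

```python
# Minimal IGD computation: mean distance from each reference point on the
# (approximate) Pareto front to the closest obtained solution. Smaller is better.
import numpy as np

def igd(reference_points: np.ndarray, solutions: np.ndarray) -> float:
    # pairwise Euclidean distances, shape |Z| x |A|
    d = np.linalg.norm(reference_points[:, None, :] - solutions[None, :, :], axis=2)
    return d.min(axis=1).mean()

# toy 2-objective example: reference points sampled on the front f1 + f2 = 1
Z = np.stack([np.linspace(0, 1, 50), 1 - np.linspace(0, 1, 50)], axis=1)
A = np.array([[0.1, 0.95], [0.5, 0.55], [0.9, 0.15]])
print(round(igd(Z, A), 4))
```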

296 citations


Journal ArticleDOI
TL;DR: A parameter-free constraint handling technique, a two-archive evolutionary algorithm for constrained multiobjective optimization, is proposed; it maintains two collaborative archives simultaneously and uses a restricted mating selection mechanism that adaptively chooses appropriate mating parents from them according to their evolution status.
Abstract: When solving constrained multiobjective optimization problems, an important issue is how to balance convergence, diversity, and feasibility simultaneously. To address this issue, this paper proposes a parameter-free constraint handling technique, a two-archive evolutionary algorithm, for constrained multiobjective optimization. It maintains two collaborative archives simultaneously: one, denoted as the convergence-oriented archive (CA), is the driving force to push the population toward the Pareto front; the other one, denoted as the diversity-oriented archive (DA), mainly tends to maintain the population diversity. In particular, to complement the behavior of the CA and provide as much diversified information as possible, the DA aims at exploring areas under-exploited by the CA including the infeasible regions. To leverage the complementary effects of both archives, we develop a restricted mating selection mechanism that adaptively chooses appropriate mating parents from them according to their evolution status. Comprehensive experiments on a series of benchmark problems and a real-world case study fully demonstrate the competitiveness of our proposed algorithm, in comparison to five state-of-the-art constrained evolutionary multiobjective optimizers.

257 citations


Journal ArticleDOI
TL;DR: The experimental results show that the solution size obtained by the SaPSO algorithm is smaller than its EC counterparts on all datasets, and it performs better than its non-EC and EC counterparts in terms of classification accuracy not only on most training sets but also on most test sets.
Abstract: Many evolutionary computation (EC) methods have been used to solve feature selection problems and they perform well on most small-scale feature selection problems. However, as the dimensionality of feature selection problems increases, the solution space increases exponentially. Meanwhile, there are more irrelevant features than relevant features in datasets, which leads to many local optima in the huge solution space. Therefore, the existing EC methods still suffer from the problem of stagnation in local optima on large-scale feature selection problems. Furthermore, large-scale feature selection problems with different datasets may have different properties. Thus, it may be of low performance to solve different large-scale feature selection problems with an existing EC method that has only one candidate solution generation strategy (CSGS). In addition, it is time-consuming to find a suitable EC method and corresponding suitable parameter values for a given large-scale feature selection problem if we want to solve it effectively and efficiently. In this article, we propose a self-adaptive particle swarm optimization (SaPSO) algorithm for feature selection, particularly for large-scale feature selection. First, an encoding scheme for the feature selection problem is employed in the SaPSO. Second, three important issues related to self-adaptive algorithms are investigated. After that, the SaPSO algorithm with a typical self-adaptive mechanism is proposed. The experimental results on 12 datasets show that the solution size obtained by the SaPSO algorithm is smaller than its EC counterparts on all datasets. The SaPSO algorithm performs better than its non-EC and EC counterparts in terms of classification accuracy not only on most training sets but also on most test sets. Furthermore, as the dimensionality of the feature selection problem increases, the advantages of SaPSO become more prominent. This highlights that the SaPSO algorithm is suitable for solving feature selection problems, particularly large-scale feature selection problems.
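The threshold-based encoding commonly used in swarm-based feature selection can be sketched as follows. This is a generic illustration, not the SaPSO algorithm or its self-adaptive mechanism: the dataset, classifier, threshold of 0.5, and weighted fitness are all assumptions made only for the example.

```python
# Illustrative encoding for PSO-style feature selection: each particle is a
# real vector, dimension j selects feature j when its value exceeds 0.5, and
# fitness combines classification error and relative subset size (minimized).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fitness(position: np.ndarray, alpha: float = 0.99) -> float:
    mask = position > 0.5
    if not mask.any():
        return 1.0                                   # empty subsets are worst
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    # weighted sum of error rate and relative subset size
    return alpha * (1 - acc) + (1 - alpha) * mask.mean()

particle = np.random.default_rng(1).random(X.shape[1])
print("fitness of a random particle:", round(fitness(particle), 4))
```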

238 citations


Journal ArticleDOI
TL;DR: In this paper, an evolutionary attention-based LSTM trained with competitive random search is proposed for multivariate time series prediction, which can help to capture long-term dependencies and pay different degrees of attention to sub-window features within multiple time steps.
Abstract: Time series prediction with deep learning methods, especially the Long Short-Term Memory Neural Network (LSTM), has scored significant achievements in recent years. Despite the fact that LSTM can help to capture long-term dependencies, its ability to pay different degrees of attention to sub-window features within multiple time steps is insufficient. To address this issue, an evolutionary attention-based LSTM training with competitive random search is proposed for multivariate time series prediction. By transferring shared parameters, an evolutionary attention learning approach is introduced to LSTM. Thus, as in biological evolution, the pattern for importance-based attention sampling can be established during temporal relationship mining. To avoid being trapped in local optima, as traditional gradient-based methods often are, an evolutionary computation inspired competitive random search method is proposed, which effectively configures the parameters in the attention layer. Experimental results have illustrated that the proposed model can achieve competitive prediction performance compared with other baseline methods.

236 citations


Journal ArticleDOI
TL;DR: An EMT algorithm with explicit genetic transfer across tasks, namely EMT via autoencoding, is proposed, which allows the incorporation of multiple search mechanisms with different biases into the EMT paradigm.
Abstract: Evolutionary multitasking (EMT) is an emerging research topic in the field of evolutionary computation. In contrast to the traditional single-task evolutionary search, EMT conducts evolutionary search on multiple tasks simultaneously. It aims to improve convergence characteristics across multiple optimization problems at once by seamlessly transferring knowledge among them. Due to the efficacy of EMT, it has attracted lots of research attentions and several EMT algorithms have been proposed in the literature. However, existing EMT algorithms are usually based on a common mode of knowledge transfer in the form of implicit genetic transfer through chromosomal crossover. This mode cannot make use of multiple biases embedded in different evolutionary search operators, which could give better search performance when properly harnessed. Keeping this in mind, this paper proposes an EMT algorithm with explicit genetic transfer across tasks, namely EMT via autoencoding, which allows the incorporation of multiple search mechanisms with different biases in the EMT paradigm. To confirm the efficacy of the proposed EMT algorithm with explicit autoencoding, comprehensive empirical studies have been conducted on both the single- and multi-objective multitask optimization problems.
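A hedged sketch of what explicit solution transfer between tasks can look like: a linear mapping is learned in closed form (least squares) between two task populations and then used to map promising solutions from one task into the other's representation. This only illustrates the explicit-transfer idea; the paper's denoising-autoencoder construction is not reproduced here, and all names are illustrative.

```python
# Explicit transfer via a least-squares linear mapping between two task
# populations: solutions of task 1 are mapped into task 2's space before
# being injected into task 2's search.
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((50, 10))                  # population evolved on task 1 (50 x d1)
Q = rng.random((50, 10))                  # population evolved on task 2 (50 x d2)

# mapping M minimizing ||P @ M - Q||_F in closed form
M, *_ = np.linalg.lstsq(P, Q, rcond=None)
transferred = P[:5] @ M                   # map a few promising task-1 solutions to task 2
print(transferred.shape)
```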

224 citations


Journal ArticleDOI
TL;DR: This paper presents a method for reusing the valuable information available from previous individuals to guide later search by incorporating six different information feedback models into ten metaheuristic algorithms and demonstrates experimentally that the variants outperformed the basic algorithms significantly.
Abstract: In most metaheuristic algorithms, the updating process fails to make use of information available from individuals in previous iterations. If this useful information could be exploited fully and used in the later optimization process, the quality of the succeeding solutions would be improved significantly. This paper presents our method for reusing the valuable information available from previous individuals to guide later search. In our approach, previous useful information was fed back to the updating process. We proposed six information feedback models. In these models, individuals from previous iterations were selected in either a fixed or random manner. Their useful information was incorporated into the updating process. Accordingly, an individual at the current iteration was updated based on the basic algorithm plus some selected previous individuals by using a simple fitness weighting method. By incorporating six different information feedback models into ten metaheuristic algorithms, this approach provided a number of variants of the basic algorithms. We demonstrated experimentally that the variants outperformed the basic algorithms significantly on 14 standard test functions and 10 CEC 2011 real world problems, thereby, establishing the value of the information feedback models.
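The fitness-weighting step can be pictured with a minimal sketch: the individual produced by the basic algorithm is blended with a stored individual from an earlier iteration, with weights derived from their fitness values. The weighting formula and the minimization assumption below are illustrative; they do not reproduce the paper's six feedback models.

```python
# Hedged sketch of fitness-weighted information feedback: blend the new
# candidate with one individual from a previous iteration, giving the
# better (smaller) fitness the larger weight (minimization assumed).
import numpy as np

def feedback_update(u_new, x_prev, f_new, f_prev):
    w_new = f_prev / (f_new + f_prev)     # better fitness -> larger weight
    w_prev = f_new / (f_new + f_prev)
    return w_new * u_new + w_prev * x_prev

u = np.array([0.2, 0.4])      # individual produced by the basic metaheuristic
x_old = np.array([1.0, 1.2])  # individual stored from a previous iteration
print(feedback_update(u, x_old, f_new=0.5, f_prev=2.0))
```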

219 citations


Journal ArticleDOI
TL;DR: A surrogate-assisted many-objective evolutionary algorithm that uses an artificial neural network to predict the dominance relationship between candidate solutions and reference solutions instead of approximating the objective values separately is proposed.
Abstract: Surrogate-assisted evolutionary algorithms (SAEAs) have been developed mainly for solving expensive optimization problems where only a small number of real fitness evaluations are allowed. Most existing SAEAs are designed for solving low-dimensional single or multiobjective optimization problems, which are not well suited for many-objective optimization. This paper proposes a surrogate-assisted many-objective evolutionary algorithm that uses an artificial neural network to predict the dominance relationship between candidate solutions and reference solutions instead of approximating the objective values separately. The uncertainty information in prediction is taken into account together with the dominance relationship to select promising solutions to be evaluated using the real objective functions. Our simulation results demonstrate that the proposed algorithm outperforms the state-of-the-art evolutionary algorithms on a set of many-objective optimization test problems.
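To make the dominance-prediction idea concrete, the sketch below generates pairwise Pareto-dominance labels from known objective vectors and fits a small neural network classifier on them. The labeling scheme, pair encoding, and network size are assumptions made for illustration, and the paper's treatment of prediction uncertainty is omitted.

```python
# Train a classifier on dominance labels instead of regressing each objective.
# Labels: 1 if a dominates b, -1 if b dominates a, 0 if non-dominated (minimization).
import numpy as np
from sklearn.neural_network import MLPClassifier

def dominance(a: np.ndarray, b: np.ndarray) -> int:
    if np.all(a <= b) and np.any(a < b):
        return 1
    if np.all(b <= a) and np.any(b < a):
        return -1
    return 0

rng = np.random.default_rng(0)
objs = rng.random((100, 5))                      # 5-objective values of 100 solutions
pairs, labels = [], []
for i in range(len(objs)):
    for j in range(i + 1, len(objs)):
        pairs.append(np.concatenate([objs[i], objs[j]]))   # pair encoded by concatenation
        labels.append(dominance(objs[i], objs[j]))

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(pairs, labels)
print("training accuracy:", round(clf.score(pairs, labels), 3))
```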

209 citations


Journal ArticleDOI
TL;DR: A novel dominance relation is proposed to better balance convergence and diversity for evolutionary many-objective optimization, where only the best converged candidate solution is identified to be nondominated in each niche.
Abstract: Both convergence and diversity are crucial to evolutionary many-objective optimization, whereas most existing dominance relations show poor performance in balancing them, thus easily leading to a set of solutions concentrating on a small region of the Pareto fronts. In this paper, a novel dominance relation is proposed to better balance convergence and diversity for evolutionary many-objective optimization. In the proposed dominance relation, an adaptive niching technique is developed based on the angles between the candidate solutions, where only the best converged candidate solution is identified to be nondominated in each niche. Experimental results demonstrate that the proposed dominance relation outperforms existing dominance relations in balancing convergence and diversity. A modified NSGA-II is suggested based on the proposed dominance relation, which shows competitiveness against the state-of-the-art algorithms in solving many-objective optimization problems. The effectiveness of the proposed dominance relation is also verified on several other existing multi- and many-objective evolutionary algorithms.
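The angle-based niching behind such a dominance relation can be sketched roughly as follows: solutions whose objective vectors fall within a small angle of each other share a niche, and only the best-converged member survives as nondominated. The angle threshold and the convergence proxy (objective sum) below are illustrative stand-ins, not the paper's exact procedure.

```python
# Rough sketch of angle-based niching in objective space (minimization):
# keep only the best-converged solution within each angular niche.
import numpy as np

def angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def niche_filter(objs: np.ndarray, theta: float) -> list:
    survivors = []
    order = np.argsort(objs.sum(axis=1))          # convergence proxy: best first
    for i in order:
        if all(angle(objs[i], objs[j]) > theta for j in survivors):
            survivors.append(int(i))              # no better solution shares this niche
    return survivors

objs = np.array([[0.1, 0.8], [0.12, 0.82], [0.5, 0.45], [0.9, 0.1]])
print(niche_filter(objs, theta=0.1))
```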

200 citations


Journal ArticleDOI
TL;DR: A novel swarm intelligent algorithm, known as the fitness dependent optimizer (FDO), is proposed; it is inspired by the reproductive swarming of bees and their collective decision-making, and it is applied to real-world applications as evidence of its feasibility.
Abstract: In this paper, a novel swarm intelligent algorithm is proposed, known as the fitness dependent optimizer (FDO). The reproductive swarming process of bees and their collective decision-making inspired this algorithm; it has no algorithmic connection with the honey bee algorithm or the artificial bee colony algorithm. It is worth mentioning that the FDO is considered a particle swarm optimization (PSO)-based algorithm that updates the search agent position by adding velocity (pace). However, the FDO calculates velocity differently; it uses the problem fitness function value to produce weights, and these weights guide the search agents during both the exploration and exploitation phases. Throughout this paper, the FDO algorithm is presented, and the motivation behind the idea is explained. Moreover, the FDO is tested on a group of 19 classical benchmark test functions, and the results are compared with three well-known algorithms: PSO, the genetic algorithm (GA), and the dragonfly algorithm (DA); in addition, the FDO is tested on the IEEE Congress of Evolutionary Computation Benchmark Test Functions (CEC-C06, 2019 Competition) [1]. The results are compared with three modern algorithms: the dragonfly algorithm (DA), the whale optimization algorithm (WOA), and the salp swarm algorithm (SSA). The FDO results show better performance in most cases and comparable results in other cases. Furthermore, the results are statistically tested with the Wilcoxon rank-sum test to show the significance of the results. Likewise, the FDO stability in both the exploration and exploitation phases is verified using different standard measurements. Finally, the FDO is applied to real-world applications as evidence of its feasibility.
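As a loose, non-authoritative illustration of a fitness-derived weight guiding the pace of a search agent (this is not the published FDO update rule, only a sketch of the general idea named in the abstract, with all names and the blending formula assumed):

```python
# Fitness-derived weight blending exploitation (move toward the global best)
# and exploration (random walk); minimization assumed. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def pace(x, x_best, f_x, f_best):
    fw = abs(f_best / f_x) if f_x != 0 else 0.0     # fitness weight
    r = rng.uniform(-1, 1)
    return (x_best - x) * fw + x * r * (1 - fw)     # weight trades off the two terms

x, x_best = np.array([2.0, -1.0]), np.array([0.5, 0.2])
print(x + pace(x, x_best, f_x=4.0, f_best=0.3))
```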

184 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed G-MFEA works more efficiently for multitasking optimization and successfully accelerates the convergence of expensive optimization problems compared to single-task optimization.
Abstract: Conventional evolutionary algorithms (EAs) are not well suited for solving expensive optimization problems due to the fact that they often require a large number of fitness evaluations to obtain acceptable solutions. To alleviate the difficulty, this paper presents a multitasking evolutionary optimization framework for solving computationally expensive problems. In the framework, knowledge is transferred from a number of computationally cheap optimization problems to help the solution of the expensive problem on the basis of the recently proposed multifactorial EA (MFEA), leading to a faster convergence of the expensive problem. However, existing MFEAs do not work well in solving multitasking problems whose optimums do not lie in the same location or when the dimensions of the decision space are not the same. To address the above issues, the existing MFEA is generalized by proposing two strategies, one for decision variable translation and the other for decision variable shuffling, to facilitate knowledge transfer between optimization problems having different locations of the optimums and different numbers of decision variables. To assess the effectiveness of the generalized MFEA (G-MFEA), empirical studies have been conducted on eight multitasking instances and eight test problems for expensive optimization. The experimental results demonstrate that the proposed G-MFEA works more efficiently for multitasking optimization and successfully accelerates the convergence of expensive optimization problems compared to single-task optimization.

Journal ArticleDOI
TL;DR: The novel PaDE algorithm is verified on 58 benchmarks from two Congress on Evolutionary Computation (CEC) Competition test suites on real-parameter single-objective numerical optimization, and experimental results show that the proposed PaDE algorithm is competitive with the other state-of-the-art DE variants.
Abstract: Differential Evolution (DE) variants have been proven to be excellent algorithms in tackling real-parameter single-objective numerical optimization, having secured the front ranks of the relevant CEC competitions for many years. Nevertheless, some weaknesses still exist in some state-of-the-art DE variants, e.g., (1) improper control parameter adaptation schemes and (2) defects in a given mutation strategy, which may result in slow convergence and worse optimization performance. Therefore, in this paper, a novel Parameter adaptive DE (PaDE) is proposed to tackle the above-mentioned weaknesses. The PaDE algorithm has three advantages: (1) a grouping strategy with a novel adaptation scheme for Cr is proposed to tackle the improper adaptation schemes of Cr in some state-of-the-art DE variants; (2) a novel parabolic population size reduction scheme is proposed to tackle the weakness of the linear population size reduction scheme; (3) an enhanced time-stamp-based mutation strategy is proposed to tackle the weakness of a former mutation strategy. The novel PaDE algorithm is verified on 58 benchmarks from two Congress on Evolutionary Computation (CEC) Competition test suites on real-parameter single-objective numerical optimization, and experimental results show that the proposed PaDE algorithm is competitive with the other state-of-the-art DE variants.

Journal ArticleDOI
TL;DR: A set of CMOPs named DOC is constructed, making the first attempt to consider both decision and objective constraints simultaneously in the design of artificial CMOPs, and a simple and efficient two-phase framework, named ToP, is proposed to enhance current CMOEAs' performance on DOC.
Abstract: Constrained multiobjective optimization problems (CMOPs) are frequently encountered in real-world applications, which usually involve constraints in both the decision and objective spaces. However, current artificial CMOPs never consider constraints in the decision space (i.e., decision constraints) and constraints in the objective space (i.e., objective constraints) at the same time. As a result, they have a limited capability to simulate practical scenes. To remedy this issue, a set of CMOPs, named DOC, is constructed in this paper. It is the first attempt to consider both the decision and objective constraints simultaneously in the design of artificial CMOPs. Specifically, in DOC, various decision constraints (e.g., inequality constraints, equality constraints, linear constraints, and nonlinear constraints) are collected from real-world applications, thus making the feasible region in the decision space have different properties (e.g., nonlinear, extremely small, and multimodal). On the other hand, some simple and controllable objective constraints are devised to reduce the feasible region in the objective space and to make the Pareto front have diverse characteristics (e.g., continuous, discrete, mixed, and degenerate). As a whole, DOC poses a great challenge for a constrained multiobjective evolutionary algorithm (CMOEA) to obtain a set of well-distributed and well-converged feasible solutions. In order to enhance current CMOEAs’ performance on DOC, a simple and efficient two-phase framework, named ToP, is proposed in this paper. In ToP, the first phase is implemented to find the promising feasible area by transforming a CMOP into a constrained single-objective optimization problem. Then in the second phase, a specific CMOEA is executed to obtain the final solutions. ToP is applied to four state-of-the-art CMOEAs, and the experimental results suggest that it is quite effective.

Journal ArticleDOI
TL;DR: A framework is proposed that tracks the Pareto optimal set directly via problem reformulation to accelerate the computational efficiency of evolutionary algorithms on large-scale multiobjective optimization; it is compared with two state-of-the-art algorithms for large-scale multiobjective optimization.
Abstract: In this paper, we propose a framework to accelerate the computational efficiency of evolutionary algorithms on large-scale multiobjective optimization. The main idea is to track the Pareto optimal set (PS) directly via problem reformulation. To begin with, the algorithm obtains a set of reference directions in the decision space and associates them with a set of weight variables for locating the PS. Afterwards, the original large-scale multiobjective optimization problem is reformulated into a low-dimensional single-objective optimization problem. In the reformulated problem, the decision space is reconstructed by the weight variables and the objective space is reduced by an indicator function. Thanks to the low dimensionality of the weight variables and reduced objective space, a set of quasi-optimal solutions can be obtained efficiently. Finally, a multiobjective evolutionary algorithm is used to spread the quasi-optimal solutions over the approximate Pareto optimal front evenly. Experiments have been conducted on a variety of large-scale multiobjective problems with up to 5000 decision variables. Four different types of representative algorithms are embedded into the proposed framework and compared with their original versions, respectively. Furthermore, the proposed framework has been compared with two state-of-the-art algorithms for large-scale multiobjective optimization. The experimental results have demonstrated the significant improvement brought by the framework in terms of performance and computational efficiency in large-scale multiobjective optimization.
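A hedged sketch of the reformulation idea: rather than searching all decision variables, a handful of scalar weights are searched, each stretching a reference direction in decision space to reconstruct a full-length candidate. The indicator used below (a plain objective sum) is only a stand-in for the paper's indicator function, and all names and problem details are illustrative.

```python
# Reformulate a large-scale problem over a few weight variables: each weight
# stretches a reference direction in decision space to recover a full solution.
import numpy as np

n_vars, n_dirs = 5000, 4
rng = np.random.default_rng(0)
lower, upper = np.zeros(n_vars), np.ones(n_vars)
directions = rng.random((n_dirs, n_vars))          # reference directions in decision space

def objectives(x):                                  # toy bi-objective problem
    return np.array([x[0], 1 - x[0] + np.sum((x[1:] - 0.5) ** 2) / n_vars])

def reformulated(weights):                          # low-dimensional problem over weights
    vals = []
    for w, d in zip(weights, directions):
        x = np.clip(lower + w * d, lower, upper)    # recover a full-length candidate
        vals.append(objectives(x).sum())            # aggregation as an indicator stand-in
    return min(vals)

print(reformulated(rng.random(n_dirs)))
```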

Proceedings Article
Xiangxiang Chu, Bo Zhang, Hailong Ma, Xu Ruijun, Li Qingyuan
22 Jan 2019
TL;DR: This work handles super-resolution with a multi-objective approach, and proposes an elastic search tactic at both micro and macro level, based on a hybrid controller that profits from evolutionary computation and reinforcement learning.
Abstract: Deep convolutional neural networks demonstrate impressive results in the super-resolution domain. A series of studies concentrate on improving peak signal noise ratio (PSNR) by using much deeper layers, which are not friendly to constrained resources. Pursuing a trade-off between the restoration capacity and the simplicity of models is still non-trivial. Recent contributions are struggling to manually maximize this balance, while our work achieves the same goal automatically with neural architecture search. Specifically, we handle super-resolution with a multi-objective approach. We also propose an elastic search tactic at both micro and macro level, based on a hybrid controller that profits from evolutionary computation and reinforcement learning. Quantitative experiments help us to draw a conclusion that our generated models dominate most of the state-of-the-art methods with respect to the individual FLOPS.

Journal ArticleDOI
TL;DR: This paper provides a review on evolutionary machine learning techniques for major machine learning tasks such as classification, regression and clustering, and emerging topics including combinatorial optimisation, computer vision, deep learning, transfer learning, and ensemble learning.
Abstract: Artificial intelligence (AI) emphasises the creation of intelligent machines/systems that function like humans. AI has been applied to many real-world applications. Machine learning is a branch of ...

Journal ArticleDOI
TL;DR: This paper proposes a new constraint construction method to facilitate the systematic design of test problems and designs a new test suite consisting of 14 instances, which covers diverse characteristics extracted from real-world CMOPs and can be divided into four types.
Abstract: For solving constrained multiobjective optimization problems (CMOPs), many algorithms have been proposed in the evolutionary computation research community for the past two decades. Generally, the effectiveness of an algorithm for CMOPs is evaluated by artificial test problems. However, after a brief review of current artificial test problems, we have found that they are not well-designed and fail to reflect the characteristics of real-world applications (e.g., small feasibility ratio). Thus, in this paper, we first propose a new constraint construction method to facilitate the systematic design of test problems. Then, on the basis of this method, we design a new test suite consisting of 14 instances, which covers diverse characteristics extracted from real-world CMOPs and can be divided into four types. Considering that the comprehensive performance comparisons among the constraint-handling techniques (CHTs) remain scarce, we choose several representative CHTs and compare their performance on our test suite. The performance comparisons identify the strengths and weaknesses of different CHTs on different types of CMOPs and provide guidelines on how to select/design a CHT in a specific scenario.

Journal ArticleDOI
TL;DR: The feasibility rule and the ε-constrained method are combined elaborately for selection in this paper, and a restart scheme is proposed to help the population jump out of a local optimum in the infeasible region for some extremely complicated COPs.
Abstract: When solving constrained optimization problems (COPs) by evolutionary algorithms, the search algorithm plays a crucial role. In general, we expect that the search algorithm has the capability to balance not only diversity and convergence but also constraints and objective function during the evolution. For this purpose, this paper proposes a composite differential evolution (DE) for constrained optimization, which includes three different trial vector generation strategies with distinct advantages. In order to strike a balance between diversity and convergence, one of these three trial vector generation strategies is able to increase diversity, and the other two exhibit the property of convergence. In addition, to accomplish the tradeoff between constraints and objective function, one of the two trial vector generation strategies for convergence is guided by the individual with the least degree of constraint violation in the population, and the other is guided by the individual with the best objective function value in the population. After producing offspring by the proposed composite DE, the feasibility rule and the ε-constrained method are combined elaborately for selection in this paper. Moreover, a restart scheme is proposed to help the population jump out of a local optimum in the infeasible region for some extremely complicated COPs. By assembling the above techniques together, a constrained composite DE is proposed. The experiments on two sets of benchmark test functions with various features, i.e., 24 test functions from IEEE CEC2006 and 18 test functions with 10 dimensions and 30 dimensions from IEEE CEC2010, have demonstrated that the proposed method shows better or at least competitive performance against other state-of-the-art methods.
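The two selection rules named in the abstract can be stated compactly. The sketch below shows the classic feasibility rule and an ε-relaxed comparison for minimization, with phi denoting the overall constraint violation; how the two rules are actually interleaved in the proposed constrained composite DE is not reproduced here.

```python
# Feasibility rule and epsilon-relaxed comparison (minimization), as commonly
# defined in constraint handling; return True if solution a is preferred to b.
def better_feasibility_rule(f_a, phi_a, f_b, phi_b):
    if phi_a == 0 and phi_b == 0:
        return f_a < f_b          # both feasible: compare objectives
    if phi_a == 0 or phi_b == 0:
        return phi_a == 0         # feasible beats infeasible
    return phi_a < phi_b          # both infeasible: smaller violation wins

def better_epsilon(f_a, phi_a, f_b, phi_b, eps):
    if phi_a <= eps and phi_b <= eps:
        return f_a < f_b          # violations below eps are treated as feasible
    return better_feasibility_rule(f_a, phi_a, f_b, phi_b)

print(better_epsilon(1.0, 0.05, 2.0, 0.0, eps=0.1))   # True: both "feasible", smaller f wins
```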

Journal ArticleDOI
TL;DR: A computationally economical algorithm is proposed for evolving unsupervised deep neural networks to efficiently learn meaningful representations, which is very suitable in the current big data era where sufficient labeled data for training is often expensive to acquire.
Abstract: Deep learning (DL) aims at learning meaningful representations. A meaningful representation gives rise to significant performance improvement of associated machine learning (ML) tasks by replacing the raw data as the input. However, optimal architecture design and model parameter estimation in DL algorithms are widely considered to be intractable. Evolutionary algorithms are much preferable for complex and nonconvex problems due to their inherent characteristics of being gradient-free and insensitive to local optima. In this paper, we propose a computationally economical algorithm for evolving unsupervised deep neural networks to efficiently learn meaningful representations, which is very suitable in the current big data era where sufficient labeled data for training is often expensive to acquire. In the proposed algorithm, finding an appropriate architecture and the initialized parameter values for an ML task at hand is modeled by a computationally efficient gene encoding approach, which is employed to effectively model the task with a large number of parameters. In addition, a local search strategy is incorporated to facilitate the exploitation search for further improving the performance. Furthermore, a small proportion of labeled data is utilized during the evolutionary search to guarantee that the learned representations are meaningful. The performance of the proposed algorithm has been thoroughly investigated over classification tasks. Specifically, a classification error rate of 1.15% on MNIST is consistently reached by the proposed algorithm, which is considered a very promising result against state-of-the-art unsupervised DL algorithms.

Journal ArticleDOI
01 Nov 2019
TL;DR: The proposed approach is a hybrid model which merges the benefits of evolutionary computation, ensemble learning, and deep learning and can be employed in the banking system to evaluate the bank credits of the applicants and aid the bank managers in making correct decisions.
Abstract: In recent decades, credit scoring has become a very important analytical resource for researchers and financial institutions around the world. It helps to boost both profitability and risk control, since bank credits play a significant role in the banking industry. In this study, a novel approach based on a deep genetic cascade ensemble of different support vector machine (SVM) classifiers (called Deep Genetic Cascade Ensembles of Classifiers (DGCEC)) is applied to the Statlog Australian data. The proposed approach is a hybrid model which merges the benefits of: (a) evolutionary computation, (b) ensemble learning, and (c) deep learning. The proposed approach comprises a novel 16-layer genetic cascade ensemble of classifiers, having two types of SVM classifiers, normalization techniques, feature extraction methods, three types of kernel functions, parameter optimizations, and a stratified 10-fold cross-validation method. The general architecture of the proposed approach consists of ensemble learning, deep learning, layered learning, supervised training, feature (attribute) selection using a genetic algorithm, optimization of parameters for all classifiers using a genetic algorithm, and a new genetic layered training technique (for selection of classifiers). Our developed model achieved the highest prediction accuracy of 97.39%. Hence, our proposed approach can be employed in the banking system to evaluate the bank credits of the applicants and aid the bank managers in making correct decisions.

Journal ArticleDOI
TL;DR: A novel particle swarm optimization (PSO) algorithm is proposed in order to improve the accuracy of traditional clustering approaches with applications in analyzing real-time patient attendance data from an accident & emergency (A&E) department in a local U.K. hospital.
Abstract: In this paper, a novel particle swarm optimization (PSO) algorithm is proposed in order to improve the accuracy of traditional clustering approaches with applications in analyzing real-time patient attendance data from an accident & emergency (A&E) department in a local U.K. hospital. In the proposed randomly occurring distributedly delayed PSO (RODDPSO) algorithm, the evolutionary state is determined by evaluating the evolutionary factor in each iteration, based on whether the velocity updating model switches from one mode to another. With the purpose of reducing the possibility of getting trapped in the local optima and also expanding the search space, randomly occurring time-delays that reflect the history of previous personal best and global best particles are introduced in the velocity updating model in a distributed manner. Eight well-known benchmark functions are employed to evaluate the proposed RODDPSO algorithm which is shown via extensive comparisons to outperform some currently popular PSO algorithms. To further illustrate the application potential, the RODDPSO algorithm is successfully exploited in the patient clustering problem for data analysis with respect to a local A&E department in West London. Experiment results demonstrate that the RODDPSO-based clustering method is superior over two other well-known clustering algorithms.
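A hedged sketch of how randomly occurring delayed terms can enter a PSO velocity update: with a random delay, older personal and global bests also contribute to the velocity. The coefficients, the delay distribution, and the fixed intensities below are illustrative assumptions, not the RODDPSO model's evolutionary-state-dependent settings.

```python
# Velocity update with delayed personal/global best terms (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def velocity_update(v, x, pbest_hist, gbest_hist, w=0.72, c=1.49, m1=0.5, m2=0.5):
    r1, r2, r3, r4 = rng.random(4)
    tau = rng.integers(1, len(pbest_hist))          # random delay into the history
    v_new = (w * v
             + c * r1 * (pbest_hist[-1] - x)              # current personal best
             + c * r2 * (gbest_hist[-1] - x)              # current global best
             + m1 * c * r3 * (pbest_hist[-1 - tau] - x)   # delayed personal best
             + m2 * c * r4 * (gbest_hist[-1 - tau] - x))  # delayed global best
    return v_new

x, v = np.zeros(2), np.zeros(2)
pbest_hist = [np.array([1.0, 1.0]), np.array([0.8, 0.9]), np.array([0.5, 0.6])]
gbest_hist = [np.array([1.0, 0.5]), np.array([0.7, 0.4]), np.array([0.3, 0.2])]
print(velocity_update(v, x, pbest_hist, gbest_hist))
```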

Journal ArticleDOI
TL;DR: A comprehensive path planning model is proposed, which aggregates the length, energy consumption, and collision risk into the objective function and incorporates the steering window constraint; based on this model, a nature-inspired ant colony optimization algorithm is developed to search for the optimal path.
Abstract: Path planning is a critical issue to ensure the safety and reliability of the autonomous navigation system of the autonomous underwater vehicles (AUVs). Due to the nonlinearity and constraint issues, existing algorithms perform unsatisfactorily or even cannot find a feasible solution when facing large-scale problem spaces. This paper improves the path planning of AUVs in terms of both the path planning model and the optimization algorithm. The proposed model is comprehensive, which aggregates the length, energy consumption, and collision risk into the objective function and incorporates the steering window constraint. Based on the model, we develop a nature-inspired ant colony optimization algorithm to search the optimal path. Our algorithm is named alarm pheromone-assisted ant colony system (AP-ACS), since it incorporates the alarm pheromone in addition to the traditional guiding pheromone. The alarm pheromone alerts the ants to infeasible areas, which saves invalid search efforts and, thus, improves the search efficiency. Meanwhile, three heuristic measures are specifically designed to provide additional knowledge to the ants for path planning. In the experiments, different from the previous works that are tested on synthetic instances only, we implement an interface to retrieve the practical underwater environment data. AP-ACS and the compared algorithms are thus tested on several practical environments of different scales. The experimental results show that AP-ACS can effectively handle the constraints and outperforms the other algorithms in terms of accuracy, efficiency, and stability.
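The role of the alarm pheromone can be illustrated with a minimal transition-probability sketch: guiding pheromone and heuristic information attract ants, while alarm pheromone repels them from cells that previously led to infeasible paths. The combination rule and exponents below are assumptions, not AP-ACS's exact formula.

```python
# Two-pheromone transition probabilities: attraction from guiding pheromone and
# heuristic value, repulsion from alarm pheromone on previously infeasible cells.
import numpy as np

def transition_probs(guide, alarm, heuristic, alpha=1.0, beta=2.0, gamma=1.0):
    attractiveness = (guide ** alpha) * (heuristic ** beta) / (1.0 + alarm) ** gamma
    return attractiveness / attractiveness.sum()

guide = np.array([0.5, 1.2, 0.8])       # guiding pheromone on candidate cells
alarm = np.array([0.0, 3.0, 0.1])       # alarm pheromone: cell 1 led to dead ends
heuristic = np.array([1.0, 1.5, 0.9])   # e.g. inverse distance to the goal
print(transition_probs(guide, alarm, heuristic))
```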

Journal ArticleDOI
TL;DR: This overview considers the entire spectrum of algorithmic aspects and proposes a novel methodology that analyses the technical resemblances and differences in ECRL.
Abstract: A variety of Reinforcement Learning (RL) techniques blends with one or more techniques from Evolutionary Computation (EC) resulting in hybrid methods classified according to their goal, new focus, and their component methodologies. We denote this class of hybrid algorithmic techniques as the evolutionary computation versus reinforcement learning (ECRL) paradigm. This overview considers the entire spectrum of algorithmic aspects and proposes a novel methodology that analyses the technical resemblances and differences in ECRL. Our design analyses the motivation for each ECRL paradigm, the underlying natural models, the sub-component algorithmic techniques, as well as the properties of their ensemble.

Journal ArticleDOI
TL;DR: A clustering-based adaptive MOEA is proposed for solving MOPs with irregular Pareto fronts; it adaptively generates a set of cluster centers for guiding selection at each generation to maintain diversity and accelerate convergence.
Abstract: Existing multiobjective evolutionary algorithms (MOEAs) perform well on multiobjective optimization problems (MOPs) with regular Pareto fronts in which the Pareto optimal solutions distribute continuously over the objective space. When the Pareto front is discontinuous or degenerated, most existing algorithms cannot achieve good results. To remedy this issue, a clustering-based adaptive MOEA (CA-MOEA) is proposed in this paper for solving MOPs with irregular Pareto fronts. The main idea is to adaptively generate a set of cluster centers for guiding selection at each generation to maintain diversity and accelerate convergence. We investigate the performance of CA-MOEA on 18 widely used benchmark problems. Our results demonstrate the competitiveness of CA-MOEA for multiobjective optimization, especially for problems with irregular Pareto fronts. In addition, CA-MOEA is shown to perform well on the optimization of the stretching parameters in the carbon fiber formation process.
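A hedged sketch of cluster-center-guided selection: objective vectors are clustered each generation, and one representative per cluster (here the member nearest the ideal point) is retained, which helps preserve spread on irregular fronts. The clustering method and the representative rule are illustrative choices, not CA-MOEA's exact operators.

```python
# Cluster the objective vectors and keep one well-converged member per cluster.
import numpy as np
from sklearn.cluster import KMeans

def cluster_select(objs: np.ndarray, n_keep: int) -> np.ndarray:
    labels = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit_predict(objs)
    ideal = objs.min(axis=0)
    chosen = []
    for k in range(n_keep):
        members = np.where(labels == k)[0]
        dist = np.linalg.norm(objs[members] - ideal, axis=1)
        chosen.append(members[np.argmin(dist)])    # best-converged member of the cluster
    return np.array(chosen)

objs = np.random.default_rng(0).random((40, 3))
print(cluster_select(objs, n_keep=5))
```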

Journal ArticleDOI
TL;DR: Experimental results indicate that MOEA/D-CRA outperforms its peers on 61% of the test cases in terms of three metrics, thereby validating the effectiveness of the proposed CRA strategy in solving MOPs.
Abstract: Decomposition of a multiobjective optimization problem (MOP) into several simple multiobjective subproblems, named multiobjective evolutionary algorithm based on decomposition (MOEA/D)-M2M, is a new version of multiobjective optimization-based decomposition. However, it fails to consider different contributions from each subproblem but treats them equally instead. This paper proposes a collaborative resource allocation (CRA) strategy for MOEA/D-M2M, named MOEA/D-CRA. It allocates computational resources dynamically to subproblems based on their contributions. In addition, an external archive is utilized to obtain the collaborative information about contributions during a search process. Experimental results indicate that MOEA/D-CRA outperforms its peers on 61% of the test cases in terms of three metrics, thereby validating the effectiveness of the proposed CRA strategy in solving MOPs.

Journal ArticleDOI
TL;DR: A novel multimodal multiobjective evolutionary algorithm using two-archive and recombination strategies is proposed to solve multimodal multiobjective optimization problems, and its overall performance is shown to be significantly superior to that of the competing algorithms.
Abstract: There has been little research on solving multimodal multiobjective optimization problems, although they are commonly seen in real-world applications and are difficult for existing evolutionary optimizers. In this paper, we propose a novel multimodal multiobjective evolutionary algorithm using two-archive and recombination strategies. In the proposed algorithm, the properties of decision variables and the relationships among them are first analyzed to guide the evolutionary search. Then, a general framework using two archives, i.e., the convergence and the diversity archives, is adopted to cooperatively solve these problems. Moreover, the diversity archive simultaneously employs a clustering strategy to guarantee diversity in the objective space and a niche-based clearing strategy to promote the same in the decision space. At the end of the evolutionary process, solutions in the convergence and the diversity archives are recombined to obtain a large number of Pareto optimal solutions. In addition, a set of benchmark test functions and a performance metric are designed for multimodal multiobjective optimization. The proposed algorithm is empirically compared with two state-of-the-art evolutionary algorithms on these test functions. The comparative results demonstrate that the overall performance of the proposed algorithm is significantly superior to that of the competing algorithms.

Journal ArticleDOI
TL;DR: A learning-to-decompose (LTD) paradigm that adaptively sets the decomposition method by learning the characteristics of the Pareto front (PF) shapes is developed.
Abstract: The decomposition-based evolutionary multiobjective optimization (EMO) algorithm has become an increasingly popular choice for a posteriori multiobjective optimization. However, recent studies have shown that their performance strongly depends on the Pareto front (PF) shapes. This can be attributed to the decomposition method, of which the reference points and subproblem formulation settings are not well adaptable to various problem characteristics. In this paper, we develop a learning-to-decompose (LTD) paradigm that adaptively sets the decomposition method by learning the characteristics of the estimated PF. Specifically, it consists of two interdependent parts, i.e., a learning module and an optimization module. Given the current nondominated solutions from the optimization module, the learning module periodically learns an analytical model of the estimated PF. Thereafter, useful information is extracted from the learned model to set the decomposition method for the optimization module: 1) reference points compliant with the PF shape and 2) subproblem formulations whose contours and search directions are appropriate for the current status. Accordingly, the optimization module, which can be any decomposition-based EMO algorithm in principle, decomposes the multiobjective optimization problem into a number of subproblems and optimizes them simultaneously. To validate our proposed LTD paradigm, we integrate it with two decomposition-based EMO algorithms, and compare them with four state-of-the-art algorithms on a series of benchmark problems with various PF shapes.

Journal ArticleDOI
TL;DR: An adaptive knowledge reuse framework for surrogate-assisted multiobjective optimization of computationally expensive problems, based on the novel idea of multiproblem surrogates, which provides the capability to acquire and spontaneously transfer learned models across problems, facilitating efficient global optimization.
Abstract: In most real-world settings, designs are often gradually adapted and improved over time. Consequently, there exists knowledge from distinct (but possibly related) design exercises, which have either been previously completed or are currently in-progress, that may be leveraged to enhance the optimization performance of a particular target optimization task of interest. Further, it is observed that modern day design cycles are typically distributed in nature, and consist of multiple teams working on associated ideas in tandem. In such environments, vast amounts of related information can become available at various stages of the search process corresponding to some ongoing target optimization exercise. Successfully exploiting this knowledge is expected to be of significant value in many practical settings, where solving an optimization problem from scratch may be exorbitantly costly or time consuming. Accordingly, in this paper, we propose an adaptive knowledge reuse framework for surrogate-assisted multiobjective optimization of computationally expensive problems, based on the novel idea of multiproblem surrogates . This idea provides the capability to acquire and spontaneously transfer learned models across problems, facilitating efficient global optimization . The efficacy of our proposition is demonstrated on a series of synthetic benchmark functions, as well as two practical case studies.

Journal ArticleDOI
TL;DR: A novel indicator-based algorithm with an enhanced diversification mechanism is developed that outperforms eight state-of-the-art approaches on the examined problems in the third category and shows its advantage in the balance between diversification and convergence.
Abstract: The performance of traditional multiobjective evolutionary algorithms (MOEAs) often deteriorates rapidly as the number of decision variables increases. While some efforts were made to design new algorithms by adapting existing techniques to large-scale single-objective optimization to the MOEA context, the specific difficulties that may arise from large-scale multiobjective optimization have rarely been studied. In this paper, the exclusive challenges along with the increase of the number of variables of a multiobjective optimization problem (MOP) are examined empirically, and the popular benchmarks are categorized into three groups accordingly. Problems in the first category only require MOEAs to have stronger convergence, and can thus be mitigated using techniques employed in large-scale single-objective optimization. Problems that require MOEAs to have stronger diversification but ignore a correlation between position and distance functions are grouped as the second. The rest of the problems that pose a great challenge to the balance between diversification and convergence by considering a correlation between position and distance functions are grouped as the third. While existing large-scale MOEAs perform well on the problems in the first two categories, they suffer a significant loss when applied to those in the third category. To solve large-scale MOPs in this category, we have developed a novel indicator-based algorithm with an enhanced diversification mechanism. The proposed algorithm incorporates a new solution generator with an external archive, thus forcing the search toward different subregions of the Pareto front using a dual local search mechanism. The results obtained by applying the proposed algorithm to a wide variety of problems (108 instances in total) with up to 8192 variables demonstrate that it outperforms eight state-of-the-art approaches on the examined problems in the third category and show its advantage in the balance between diversification and convergence.

Journal ArticleDOI
TL;DR: A multiobjective multidepot vehicle routing problem with time windows is proposed, and a two-stage multiobjective evolutionary algorithm with a hybrid neighborhood structure for solution improvement is developed, which significantly outperforms two other representative algorithms.
Abstract: This paper proposes a multiobjective multidepot vehicle routing problem with time windows and designs some real-world test instances. It develops a two-stage multiobjective evolutionary algorithm (TS-MOEA) for dealing with the problem. Stage I of our proposed algorithm focuses on finding extreme solutions and forms a coarse Pareto front, while stage II extends the found extreme solutions to approximate the whole Pareto front. The two-stage strategy provides a new method to balance convergence and diversity. Moreover, a hybrid neighborhood structure is designed for solution improvement. Experimental results show that TS-MOEA significantly outperforms two other representative algorithms.