
Showing papers presented at "Congress on Evolutionary Computation in 2019"


Proceedings ArticleDOI
10 Jun 2019
TL;DR: This paper proposes a new algorithm based on the self-adaptive differential evolution algorithm jDE to tackle the 100-Digit Challenge, and provides the score for each function as required by the organizers of this challenge competition.
Abstract: Real parameter optimization problems are often very complex and computationally expensive. We can find such problems in engineering and scientific applications. In this paper, a new algorithm is proposed to tackle the 100-Digit Challenge. There are 10 functions representing 10 optimization problems, and the goal is to compute each function’s minimum value to 10 digits of accuracy. There is no limit on either time or the maximum number of function evaluations. The proposed algorithm is based on the self-adaptive differential evolution algorithm jDE. Our algorithm uses two populations and some other mechanisms when tackling the challenge. We provide the score for each function as required by the organizers of this challenge competition.
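As a rough illustration of the jDE self-adaptation rule that the proposed algorithm builds on (the abstract does not detail the two-population mechanism itself): each individual carries its own F and CR, which are occasionally resampled and kept only when the resulting trial vector wins selection. The sketch below is a minimal single-population jDE (DE/rand/1/bin) in Python; the parameter values, bound handling, and sphere test function are illustrative assumptions, not the authors' configuration.

import numpy as np

def jde(f, lower, upper, pop_size=50, max_evals=20_000, tau1=0.1, tau2=0.1, seed=0):
    # Minimal jDE: DE/rand/1/bin where every individual carries its own F and CR.
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    evals = pop_size
    while evals < max_evals:
        for i in range(pop_size):
            # jDE self-adaptation: occasionally resample the control parameters
            Fi = 0.1 + 0.9 * rng.random() if rng.random() < tau1 else F[i]
            CRi = rng.random() if rng.random() < tau2 else CR[i]
            # DE/rand/1 mutation with three distinct indices different from i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            v = np.clip(pop[r1] + Fi * (pop[r2] - pop[r3]), lower, upper)
            # binomial crossover (at least one component taken from the mutant)
            cross = rng.random(dim) < CRi
            cross[rng.integers(dim)] = True
            u = np.where(cross, v, pop[i])
            fu = f(u)
            evals += 1
            if fu <= fit[i]:                  # greedy selection
                pop[i], fit[i] = u, fu
                F[i], CR[i] = Fi, CRi         # successful parameters survive
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: minimise a 10-dimensional sphere function
sphere = lambda x: float(np.sum(x * x))
print(jde(sphere, np.full(10, -100.0), np.full(10, 100.0))[1])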

64 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: Experimental results show that the modified RDG method can greatly improve the search ability of an optimization algorithm via divide-and-conquer, and outperforms the original RDG, random decomposition, and other state-of-the-art methods.
Abstract: In this paper we use a divide-and-conquer approach to tackle large-scale optimization problems with overlapping components. Decomposition of an overlapping problem is challenging because its components depend on one another. Existing decomposition methods typically assign all the linked decision variables to one group, and thus cannot reduce the original problem size. To address this issue, we modify the Recursive Differential Grouping (RDG) method to decompose overlapping problems by breaking the linkage at variables shared by multiple components. To evaluate the efficacy of our method, we extend two existing overlapping benchmark problems, considering various levels of overlap. Experimental results show that our method can greatly improve the search ability of an optimization algorithm via divide-and-conquer, and outperforms RDG, random decomposition, and other state-of-the-art methods. We further evaluate our method using the CEC’2013 benchmark problems and show that it is very competitive when equipped with a component optimizer.

61 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: This paper carries out a correlation analysis of historical water usage data and the climatic factors that affect urban water consumption, and shows that prediction error is reduced by using the CNN-Bi-LSTM model.
Abstract: Water demand forecasting is the basis of intelligent urban water supply. Because water consumption changes nonlinearly, traditional prediction models suffer in accuracy and stability, and even small changes in temperature or holiday periods can lead to abnormal changes in urban water use. To address these problems, this study adopts a hybrid model combining a convolutional neural network (CNN) and a bidirectional long short-term memory (Bi-LSTM) network. Corresponding corrective models are established for special situations such as natural weather changes and holidays. The CNN extracts features from the water consumption and climate data, and these features are input into the Bi-LSTM network to predict urban water usage. This paper carries out a correlation analysis of historical water data and the climatic factors that affect urban water usage. The previous five days' water usage data and the daily maximum temperature were selected as the basis for the holiday correction model and the temperature correction model. Comparing the models before and after deviation correction shows that the prediction results are improved. The present work was compared with long short-term memory networks (LSTM), bidirectional LSTM (Bi-LSTM), CNN, sparse autoencoders (SAEs), and CNN-LSTM, showing that prediction error is reduced by using the CNN-Bi-LSTM model. Finally, under the same training period, the training time and convergence of the six models were analyzed. The training time of CNN-Bi-LSTM is less than that of LSTM, Bi-LSTM, CNN, and CNN-LSTM, but larger than that of SAEs. CNN-Bi-LSTM converged within 125 training iterations, fewer than the other five models.
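For readers unfamiliar with the model family, the sketch below shows one plausible CNN-Bi-LSTM arrangement in tf.keras, assuming a five-day input window with two features per day (daily usage and maximum temperature). The layer sizes, optimizer, and dummy data are illustrative assumptions, not the configuration used in the paper.

import numpy as np
from tensorflow.keras import layers, models

# Assumed window of the previous 5 days with 2 features per day
# (daily water usage and daily maximum temperature); sizes are illustrative.
TIMESTEPS, FEATURES = 5, 2

def build_cnn_bilstm():
    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        # 1-D convolution extracts local trend features from the input window
        layers.Conv1D(filters=32, kernel_size=2, padding="same", activation="relu"),
        # Bi-LSTM captures temporal dependencies in both directions
        layers.Bidirectional(layers.LSTM(32)),
        layers.Dense(1),  # next-day water demand
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_cnn_bilstm()
# Dummy arrays standing in for the historical water/temperature series
X = np.random.rand(256, TIMESTEPS, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))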

49 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: The proposed PSO-based CNN-LSTM method explores the optimal prediction structure and achieves nearly perfect performance for energy prediction, with the lowest mean square error (MSE) compared to conventional machine learning methods.
Abstract: Recently, there have been many attempts to predict residential energy consumption using artificial neural networks. The optimization of these neural networks depends on trial and error by an operator who lacks prior knowledge, and they are also influenced by the initial values of the gradient-based model and the size of the search space. In this paper, different kinds of hyperparameters are automatically determined by integrating particle swarm optimization (PSO) into a CNN-LSTM network for forecasting energy consumption. Our findings reveal that the proposed optimization strategy can be used as a promising alternative prediction method with high prediction accuracy and better generalization capability. PSO achieves effective global exploration by eliminating the crossover and mutation operations of genetic algorithms. To verify the usefulness of the proposed method, we use the household power consumption data in the UCI repository. The proposed PSO-based CNN-LSTM method explores the optimal prediction structure and achieves nearly perfect prediction performance for energy prediction. It also achieves the lowest mean square error (MSE) compared to conventional machine learning methods.
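The abstract describes PSO searching over CNN-LSTM hyperparameters; a minimal global-best PSO over a continuous hyperparameter vector looks roughly as below. The objective here is a synthetic stand-in for "train the network with these hyperparameters and return its validation MSE", and the hyperparameter ranges (CNN filters, LSTM units, log learning rate) are assumptions, not those of the paper.

import numpy as np

def pso_search(objective, lower, upper, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Plain global-best PSO over a continuous hyperparameter vector.
    rng = np.random.default_rng(seed)
    dim = len(lower)
    x = rng.uniform(lower, upper, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    g_f = float(pbest_f.min())
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g, g_f = pbest[np.argmin(pbest_f)].copy(), float(pbest_f.min())
    return g, g_f

def validation_mse(hp):
    # Placeholder for "build and train a CNN-LSTM with these hyperparameters and
    # return its validation MSE"; a smooth synthetic surrogate is used instead.
    filters, units, log_lr = hp
    return (filters - 48) ** 2 / 1e3 + (units - 64) ** 2 / 1e3 + (log_lr + 3) ** 2

# Assumed search space: (CNN filters, LSTM units, log10 learning rate)
best_hp, best_mse = pso_search(validation_mse,
                               np.array([8.0, 8.0, -5.0]),
                               np.array([128.0, 256.0, -1.0]))
print(best_hp, best_mse)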

41 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: The experimental results indicate that the evolved CNN-LSTM models are capable of dealing with complex nonlinear inventory forecasting problems.
Abstract: Inventory forecasting is a key component of effective inventory management. In this work, we utilise hybrid deep learning models for inventory forecasting. To handle the highly nonlinear and non-stationary characteristics of inventory data, the models employ Long Short-Term Memory (LSTM) to capture long temporal dependencies and a Convolutional Neural Network (CNN) to learn local trend features. However, designing an optimal CNN-LSTM network architecture and tuning its parameters can be challenging and would require consistent human supervision. To automate the search for optimal CNN-LSTM architectures, we implement three meta-heuristics: Particle Swarm Optimisation (PSO) and two Differential Evolution (DE) variants. Computational experiments on real-world inventory forecasting problems are conducted to evaluate the performance of the applied meta-heuristics in terms of the prediction accuracy of the evolved network architectures. Moreover, the evolved CNN-LSTM models are also compared to Seasonal Auto-Regressive Integrated Moving Average (SARIMA) models for inventory forecasting problems. The experimental results indicate that the evolved CNN-LSTM models are capable of dealing with complex nonlinear inventory forecasting problems.

39 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: Two well-known multi-objective optimisation frameworks, i.e. non-dominated sorting genetic algorithm II (NSGA-II) and strength Pareto evolutionary algorithm 2 (SPEA2), are incorporated into the genetic programming hyper-heuristic method to solve the multi-objective DFJSS problem.
Abstract: Dynamic flexible job shop scheduling (DFJSS) is a well-known combinatorial optimisation problem, which aims to handle machine assignment (routing) and operation sequencing (sequencing) simultaneously in a dynamic environment. Genetic programming, as a hyper-heuristic method, has been successfully applied to evolve the routing and sequencing rules for DFJSS and has achieved promising results. In actual production processes, it is necessary to strike a balance between several objectives instead of focusing on only one objective, yet no existing study has considered solving multi-objective DFJSS using genetic programming. In order to capture the multi-objective nature of job shop scheduling and provide different trade-offs between conflicting objectives, in this paper two well-known multi-objective optimisation frameworks, i.e. the non-dominated sorting genetic algorithm II (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2), are incorporated into the genetic programming hyper-heuristic method to solve the multi-objective DFJSS problem. Experimental results show that NSGA-II incorporated into the genetic programming hyper-heuristic performs better than the SPEA2-based GPHH, as well as the weighted sum approaches, in terms of both training performance and generalisation.

39 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: Two modified MMO algorithms are used to solve the MMO feature selection problems and simulation results show that these MMO algorithms can find more feature subsets than unimodal optimization algorithms.
Abstract: In feature selection, the number of selected features and the classification accuracy are two common objectives to be optimized. However, few studies pay attention to which features are selected. In many feature selection problems, different feature subsets with the same number of selected features can achieve similar classification accuracy. These are multimodal multiobjective optimization (MMO) problems in feature selection. In this paper, the MMO problems in feature selection are described in detail. Then, the significance and importance of finding these different feature subsets are discussed. Two modified MMO algorithms are used to solve the MMO feature selection problems. Simulation results show that these MMO algorithms can find more feature subsets than unimodal optimization algorithms.

37 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: The main objective of the proposed clustering method, called ASOSCA, is to automatically find the optimal number of centroids and their positions in order to minimize the CS index (Compact-Separated index).
Abstract: Automatic clustering based on hybrid metaheuristic algorithms has attracted the interest of scientists and engineers and has become a hot topic in data analysis applications such as image clustering, bioinformatics, image segmentation, and natural language processing, where determining the number and position of centroids is an NP-hard problem. This paper therefore presents an alternative automatic clustering algorithm based on a hybrid of the atom search optimization (ASO) and the sine-cosine algorithm (SCA). The main objective of the proposed clustering method, called ASOSCA, is to automatically find the optimal number of centroids and their positions in order to minimize the CS index (Compact-Separated index). To achieve this goal, ASOSCA uses SCA as a local search operator to improve the quality of ASO. The performance of the proposed hybrid method is compared with other metaheuristic methods, all tested on sixteen clustering datasets using different cluster validity indexes such as Dunn, Silhouette, Davies-Bouldin, and Calinski-Harabasz. The experimental results show that ASOSCA is clearly superior to other hybrid metaheuristics in terms of clustering measures.

36 citations


Proceedings ArticleDOI
Yanan Yu1, Anmin Zhu1, Zexuan Zhu1, Qiuzhen Lin1, Jian Yin1, Xiaoliang Ma1 
10 Jun 2019
TL;DR: This paper integrates differential evolution (DE) and opposition-based learning (OBL) into MFEA and hence proposes MFEA/DE-OBL, which dramatically improves the performance compared with the MFEA.
Abstract: Recently, multi-tasking optimization (MTO) has become a rising research topic in the field of evolutionary computation that has attracted increasing attention from academia. Compared with single-objective optimization (SOO) and multi-objective optimization (MOO), MTO can solve different optimization tasks simultaneously by utilizing inter-task similarities and complementarities. The classical multifactorial evolutionary algorithm (MFEA) transfers inter-task knowledge via its crossover operator. To broaden the search region and accelerate convergence, this paper integrates differential evolution (DE) and opposition-based learning (OBL) into MFEA and hence proposes MFEA/DE-OBL. The motivation for integrating DE and OBL is that they have different search neighborhoods and strong complementarity with the simulated binary crossover (SBX) used in MFEA. Furthermore, integrating DE and OBL can help MFEA jump out of local optima. The effectiveness and efficiency of integrating DE and OBL into MFEA are experimentally studied on a set of benchmark problems with different degrees of similarity. Experimental results demonstrate that the proposed MFEA/DE-OBL dramatically improves the performance compared with the MFEA.
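Opposition-based learning in its basic form is simple to state: for a point x in [a, b], its opposite is a + b - x, and the better of the two is kept. The sketch below shows OBL used for population initialisation, which is one common usage; how exactly the paper couples OBL and DE with MFEA is not specified in the abstract, so this is only a generic illustration.

import numpy as np

def opposition_based_init(f, lower, upper, pop_size=20, seed=0):
    # Basic opposition-based learning (OBL): evaluate a random population and its
    # opposite population, then keep the better half to broaden the initial search.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, (pop_size, len(lower)))
    opposite = lower + upper - pop              # the OBL "opposite" points
    union = np.vstack([pop, opposite])
    fitness = np.array([f(x) for x in union])
    keep = np.argsort(fitness)[:pop_size]       # retain the best pop_size individuals
    return union[keep], fitness[keep]

sphere = lambda x: float(np.sum(x ** 2))
pop, fit = opposition_based_init(sphere, np.full(5, -10.0), np.full(5, 10.0))
print(fit[:3])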

34 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: The shape of Pareto fronts (i.e., triangular or inverted triangular) has large effects on the performance of decomposition-based and hypervolume-based algorithms.
Abstract: The performance of evolutionary multi-objective and many-objective optimization algorithms is usually evaluated by computational experiments on a number of test problems. Thus, performance comparison results depend on the choice of test problems. For fair comparison, it is necessary to use a wide variety of test problems with various characteristics. However, most well-known and frequently-used scalable test problems have the same type of Pareto front, called a "regular" Pareto front: their shape is triangular. In this paper, we discuss the reality of this type of Pareto front. First, we show that a triangular Pareto front has some unrealistic properties as the Pareto front of a real-world multi-objective problem. Next, we examine the shape of the Pareto fronts of some other multi-objective test problems with independently generated objectives (i.e., with objectives that are not derived from a pre-specified shape of Pareto front). It is shown that the Pareto fronts of those test problems are inverted triangular (i.e., not regular). Then, we demonstrate that the shape of Pareto fronts (i.e., triangular or inverted triangular) has large effects on the performance of decomposition-based and hypervolume-based algorithms. Finally, we show the difficulties of hypervolume-based performance evaluation for many-objective problems with inverted triangular Pareto fronts.

31 citations


Proceedings ArticleDOI
10 Jun 2019
TL;DR: The proposed MTMSO algorithm is compared with a single-task conventional PSO and a popular EMTO algorithm on two test suites comprising 9 simple and 10 complex single-objective MTO problems, respectively, which demonstrates its superiority.
Abstract: Multi-task optimization (MTO) is a newly emerging research area in the field of optimization, which studies how to solve multiple optimization problems at the same time so that the processes of solving different but relevant problems can help each other via knowledge transfer to improve the overall performance of solving all problems. Evolutionary MTO (EMTO) employs evolutionary algorithms as the optimizer and treats candidate solutions that perform commonly well on multiple tasks as the transferable knowledge between these tasks. In this work, we propose a multitasking multi-swarm optimization (MTMSO) algorithm which extends the popular dynamic multi-swarm particle swarm optimization (DMS-PSO) algorithm to the multitasking scenario. In MTMSO, the whole swarm is randomly partitioned into multiple swarms (i.e., task groups), each being responsible for solving a specific task, and each swarm is further partitioned into multiple sub-swarms. Within each task group, optimization is performed as per the mechanism of DMS-PSO for solving a specific task. Cross-task knowledge transfer is realized via probabilistic crossover of the personal bests of particles from different task groups. Both the task groups and each group’s sub-swarms are periodically reformed to maintain search diversity. An adaptive local search process, featuring dynamic allocation of the computational resource for each task, is incorporated in the final stage of optimization to improve the quality of the best solution found for each task. The proposed MTMSO algorithm is compared with a single-task conventional PSO and a popular EMTO algorithm on two test suites comprising 9 simple and 10 complex single-objective MTO problems, respectively, which demonstrates its superiority.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: This paper investigates how different mutation strategies for knowledge transfer affect the performance of MFDE and proposes a new mutation strategy, DE/best/1+ρ, which is able to adjust its behavior along the search process.
Abstract: Differential evolution (DE) is a simple yet powerful evolutionary algorithm for solving continuous optimization problems. In recent decades, a plethora of DE variants have been proposed in the literature for enhanced optimization performance. However, most of these DE variants are designed to solve a single problem in a single run. Recently, a multifactorial DE (MFDE) has been proposed to conduct evolutionary search on multiple tasks simultaneously. Benefiting from the implicit knowledge transfer among different tasks, MFDE has demonstrated superior performance against single-task DE in terms of convergence speed and solution quality. In MFDE, knowledge transfer is realized via the mutation operation conducted on solutions with different skill factors. However, despite the many mutation strategies suggested in the literature, the current MFDE uses DE/rand/1 as the only strategy for knowledge transfer, and the impact of different mutation strategies on the performance of MFDE is still unexplored. Taking this cue, in this paper we embark on a study to investigate how different mutation strategies for knowledge transfer affect the performance of MFDE. In particular, besides DE/rand/1, four other commonly used mutation strategies are adapted for the purpose of multitask optimization. Further, towards effective mutation for knowledge transfer in MFDE, a new mutation strategy called DE/best/1+ρ, which is able to adjust its behavior along the search process, is proposed. Lastly, comprehensive empirical studies are conducted to investigate the performance of the existing and newly proposed mutation strategies on 9 single-objective multitasking benchmarks.
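The two baseline mutation strategies named in the abstract are standard DE operators; a minimal sketch is given below (the adaptive ρ component of the proposed DE/best/1+ρ is not described in the abstract and is therefore not reproduced).

import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.5):
    # DE/rand/1: v = x_r1 + F * (x_r2 - x_r3) with r1, r2, r3 distinct and != i
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, fitness, i, F=0.5):
    # DE/best/1: v = x_best + F * (x_r1 - x_r2)
    best = pop[np.argmin(fitness)]
    r1, r2 = rng.choice([j for j in range(len(pop)) if j != i], 2, replace=False)
    return best + F * (pop[r1] - pop[r2])

# Toy usage on a small random population
pop = rng.uniform(-5.0, 5.0, (10, 3))
fitness = np.sum(pop ** 2, axis=1)
print(de_rand_1(pop, 0))
print(de_best_1(pop, fitness, 0))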

Proceedings ArticleDOI
10 Jun 2019
TL;DR: This paper proposes a Q-Learning-based Particle Swarm Optimization (QLPSO) algorithm, which uses Reinforcement Learning (RL) to train the parameters of Particle Swarm Optimization (PSO), and reveals the competitive performance of QLPSO compared with other algorithms.
Abstract: Parameter control is critical to the performance of any evolutionary algorithm (EA). In this paper, we propose a Q-Learning-based Particle Swarm Optimization (QLPSO) algorithm, which uses Reinforcement Learning (RL) to train the parameters of the Particle Swarm Optimization (PSO) algorithm. The core of the QLPSO algorithm is a three-dimensional Q table consisting of a state plane and an action axis. The state plane includes the state of the particles in both the decision space and the objective space. The action axis controls the exploration and exploitation of particles by setting different parameters. The Q table helps particles select actions according to their states. The Q table is updated by a reward function designed according to the performance change of the particles and the number of iterations. The main difference between the QLPSO algorithms for single-objective and multi-objective optimization lies in the evaluation of solution performance: in single-objective optimization we only compare the fitness values of solutions, while in multi-objective optimization we need to consider the dominance relationship between solutions with the help of the Pareto front. The performance of QLPSO is tested on 6 single-objective and 5 multi-objective benchmark functions. The experimental results reveal the competitive performance of QLPSO compared with other algorithms.
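A toy version of Q-learning-based parameter control can be sketched as below: a tabular controller picks one of a few (w, c1, c2) settings per iteration and is rewarded when the swarm improves. The states, actions, and reward here are deliberately simplified placeholders; the paper's three-dimensional Q table over decision-space and objective-space states is not reproduced.

import numpy as np

class QParamController:
    # Toy tabular Q-learning controller that picks one of a few (w, c1, c2)
    # parameter settings for a PSO iteration; states and rewards are simplified.
    ACTIONS = [(0.9, 2.0, 1.0), (0.7, 1.5, 1.5), (0.4, 1.0, 2.0)]  # explorative ... exploitative

    def __init__(self, n_states=4, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = np.zeros((n_states, len(self.ACTIONS)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng(seed)

    def select(self, state):
        # epsilon-greedy action selection
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.ACTIONS)))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # one-step Q-learning update
        target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (target - self.q[state, action])

# Usage inside one iteration of a PSO loop (the optimiser itself is omitted):
ctrl = QParamController()
state = 0                                  # e.g. a discretised diversity/progress state
action = ctrl.select(state)
w, c1, c2 = QParamController.ACTIONS[action]
reward = 1.0                               # e.g. +1 if the swarm's best fitness improved
ctrl.update(state, action, reward, next_state=1)
print(ctrl.q)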

Proceedings ArticleDOI
10 Jun 2019
TL;DR: The classical stochastic local optimization algorithm Simulated Annealing is used to train a selection hyper-heuristic for solving JSSPs, and the results suggest that training with the highest number of instances leads to better and more stable hyper-heuristics.
Abstract: Job Shop Scheduling Problems (JSSPs) have become increasingly popular due to their application in supply chain systems. Several solution approaches have appeared in the literature. One of them is the use of low-level heuristics. These methods approximate a solution but only work well on some kinds of problems; hence, combining them may improve performance. In this paper, we use the classical stochastic local optimization algorithm Simulated Annealing to train a selection hyper-heuristic for solving JSSPs. To do so, we use an instance generator provided in the literature to create training sets with different numbers of instances: 20, 40, and 60. In addition, we select instances from the literature to create two test scenarios, one similar to the training instances and another with larger problems. Our results suggest that training with the highest number of instances leads to better and more stable hyper-heuristics. For example, in the first test scenario we achieved a reduction in the data range of over 60% and an improvement in the median performance of almost 30%. Moreover, under these conditions about 75% of the generated hyper-heuristics were able to perform as well as or better than the best heuristic, although less than 25% were able to outperform the synthetic Oracle. Because of the above, we strongly support the idea of using a selection hyper-heuristic model powered by Simulated Annealing for creating a high-level solver for Job Shop Scheduling Problems.
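A selection hyper-heuristic trained by Simulated Annealing can be sketched, at a very high level, as annealing over a sequence of low-level heuristic choices whose quality is measured on the training instances. The sequence representation, move operator, and toy evaluation function below are assumptions for illustration, not the paper's design.

import numpy as np

rng = np.random.default_rng(0)

def train_selection_hh(evaluate, n_heuristics, seq_len=20, T0=1.0, cooling=0.95, iters=500):
    # Simulated annealing over a fixed-length sequence of low-level heuristic
    # indices. evaluate(seq) should return the cost (e.g. average makespan over
    # the training instances) of the schedules produced by applying the sequence;
    # the neighbourhood move changes one position to another heuristic.
    current = rng.integers(n_heuristics, size=seq_len)
    current_cost = evaluate(current)
    best, best_cost = current.copy(), current_cost
    T = T0
    for _ in range(iters):
        neighbour = current.copy()
        neighbour[rng.integers(seq_len)] = rng.integers(n_heuristics)
        cost = evaluate(neighbour)
        if cost < current_cost or rng.random() < np.exp((current_cost - cost) / T):
            current, current_cost = neighbour, cost
        if current_cost < best_cost:
            best, best_cost = current.copy(), current_cost
        T *= cooling
    return best, best_cost

# Toy cost standing in for running the dispatching heuristics on JSSP instances
toy_eval = lambda seq: float(np.sum((seq - 1) ** 2))
print(train_selection_hh(toy_eval, n_heuristics=4))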

Proceedings ArticleDOI
10 Jun 2019
TL;DR: Normalized standard bit mutation offers a straightforward way to control its variance, and hence the degree of randomness involved, providing a simple way to interpolate between the random global search of EAs and their deterministic counterparts which sample from a fixed radius only.
Abstract: A key property underlying the success of evolutionary algorithms (EAs) is their global search behavior, which allows the algorithms to "jump" from a current state to other parts of the search space, thereby avoiding getting stuck in local optima. This property is obtained through a random choice of the radius at which offspring are sampled from previously evaluated solutions. It is well known that, thanks to this global search behavior, the probability that an EA using standard bit mutation finds a global optimum of an arbitrary function f : {0, 1}^n → ℝ tends to one as the number of function evaluations grows. This advantage over heuristics using a fixed search radius, however, comes at the cost of using non-optimal step sizes also in those regimes in which the optimal rate is stable for a long time. This downside results in significant performance losses for many standard benchmark problems. We introduce in this work a simple way to interpolate between the random global search of EAs and their deterministic counterparts which sample from a fixed radius only. To this end, we introduce normalized standard bit mutation, in which the binomial choice of the search radius is replaced by a normal distribution. Normalized standard bit mutation allows a straightforward way to control its variance, and hence the degree of randomness involved. We experiment with a self-adjusting choice of this variance, and demonstrate its effectiveness for the two classic benchmark problems LeadingOnes and OneMax. Our work thereby also touches a largely ignored question in discrete evolutionary computation: multi-dimensional parameter control.
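The mutation operator itself is easy to state: standard bit mutation flips each bit with probability p (so the number of flipped bits is binomially distributed), whereas normalized standard bit mutation draws the number of flipped bits from a normal distribution whose variance can be tuned directly. The sketch below follows that description; the self-adjusting variance scheme mentioned in the abstract is omitted, and the mean/sigma values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def standard_bit_mutation(x, p=None):
    # Flip each bit independently with probability p (default 1/n);
    # the mutation strength is then Binomial(n, p)-distributed.
    n = len(x)
    p = 1.0 / n if p is None else p
    flips = rng.random(n) < p
    return np.where(flips, 1 - x, x)

def normalized_bit_mutation(x, mean=1.0, sigma=1.0):
    # Normalized standard bit mutation: draw the mutation strength from a normal
    # distribution N(mean, sigma^2) instead of a binomial, round and clamp it to
    # [0, n], then flip exactly that many distinct bits. sigma directly controls
    # the randomness of the search radius (sigma -> 0 gives a fixed radius).
    n = len(x)
    k = int(np.clip(np.rint(rng.normal(mean, sigma)), 0, n))
    y = x.copy()
    if k > 0:
        idx = rng.choice(n, size=k, replace=False)
        y[idx] = 1 - y[idx]
    return y

x = rng.integers(0, 2, size=20)
print(standard_bit_mutation(x))
print(normalized_bit_mutation(x, mean=2.0, sigma=0.5))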

Proceedings ArticleDOI
10 Jun 2019
TL;DR: New experiments are introduced which specifically highlight the occurrence of “failed exploration” and its effects through selection, which can trap a metaheuristic in a less promising part of the search space, and new lines of research are proposed to reduce the effects of selection and failed exploration.
Abstract: The goal of exploration to produce diverse search points throughout the search space can be countered by the goal of selection to focus search around the fittest current solution(s). In the limit, if all exploratory search points are rejected by selection, then the behaviour of the metaheuristic will be equivalent to one which performs no exploration at all (e.g. hill climbing). The effects of selection on exploration are clearly important, but our review of the literature indicates limited coverage. To address this deficit, we introduce new experiments which can specifically highlight the occurrence of “failed exploration” and its effects through selection that can trap a metaheuristic in a less promising part of the search space. We subsequently propose new lines of research to reduce the effects of selection and failed exploration which we believe are distinctly different from traditional lines of research to increase (pre-selection) exploration.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A credit assignment approach for selecting the proper task to conduct knowledge transfer in explicit evolutionary many-tasking is proposed, based on the feedback from the transferred solutions across tasks, which is adaptively updated along the evolutionary search.
Abstract: Recently, evolutionary multi-tasking (EMT) has been proposed as a new evolutionary search paradigm that optimizes multiple problems simultaneously. Because knowledge transfer across optimization tasks occurs along the evolutionary search process, EMT has been demonstrated to outperform traditional single-task evolutionary search algorithms on many complex optimization problems, such as multimodal continuous optimization problems, NP-hard combinatorial optimization problems, and constrained optimization problems. Today, EMT has attracted a lot of attention, and many EMT algorithms have been proposed in the literature. The explicit EMT algorithm (EEMTA) is a recently proposed EMT algorithm. In contrast to most existing EMT algorithms, which employ a single population with a unified space and common search operators for solving multiple problems, EEMTA uses multiple populations that possess problem-specific solution representations and search mechanisms for the different problems in evolutionary multi-tasking, which can lead to enhanced optimization performance. However, the original EEMTA was proposed for solving only two tasks. As knowledge transfer from inappropriate tasks may have a negative effect on the evolutionary optimization process, an additional design for identifying task pairs for knowledge transfer is necessary in EEMTA when there are more than two tasks. To the best of our knowledge, no research effort has been conducted on this issue. Keeping this in mind, in this paper we present a preliminary study on task selection in EEMTA for many-task optimization. As task similarity may fail to capture the usefulness between tasks in evolutionary search, instead of using similarity measures for task selection, we propose a credit assignment approach for selecting the proper task to conduct knowledge transfer in explicit evolutionary many-tasking. The proposed approach is based on the feedback from the transferred solutions across tasks, which is adaptively updated along the evolutionary search. To confirm the efficacy of the proposed method, empirical studies on a many-task optimization problem, which consists of 7 commonly used optimization benchmarks, are presented and discussed.
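Credit-assignment-based source-task selection can be illustrated with a simple bandit-style scheme: each source task accumulates credit from the feedback of its transferred solutions and is then sampled with probability proportional to that credit. The decay, floor, and reward definition below are assumptions for illustration; the paper's exact credit update is not given in the abstract.

import numpy as np

rng = np.random.default_rng(0)

class CreditBasedTaskSelector:
    # Each candidate source task keeps a credit that is updated from the feedback
    # of its transferred solutions (e.g. +1 if a transferred solution survives in
    # the target population); source tasks are then sampled in proportion to credit.
    def __init__(self, n_tasks, decay=0.8, floor=0.1):
        self.credit = np.ones(n_tasks)
        self.decay, self.floor = decay, floor

    def pick_source(self, target):
        p = self.credit.copy()
        p[target] = 0.0                 # never transfer from a task to itself
        p = p / p.sum()
        return int(rng.choice(len(p), p=p))

    def feedback(self, source, reward):
        # exponentially decayed running credit, kept above a small floor so that
        # temporarily unhelpful source tasks can still be re-selected later
        self.credit[source] = max(self.floor, self.decay * self.credit[source] + reward)

sel = CreditBasedTaskSelector(n_tasks=4)
src = sel.pick_source(target=0)
sel.feedback(src, reward=1.0)           # e.g. a transferred solution helped the target task
print(src, sel.credit)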

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A novel definition of the two-level container allocation problem is provided and a hybrid approach using genetic programming hyper-heuristics combined with human-designed rules to solve the problem is developed.
Abstract: Container technology has become a new trend in both the software industry and cloud computing. Containers support the fast development of web applications and have the potential to reduce energy consumption in data centers. Containers are usually first allocated to virtual machines (VMs), and VMs are allocated to physical machines. Container allocation is a challenging task which involves a two-level allocation problem. Current research overly simplifies container allocation into a one-level allocation problem and uses simple rule-based approaches to solve it. As a result, resources are not allocated efficiently, which leads to high energy consumption. This paper provides a novel definition of the two-level container allocation problem. Then, we develop a hybrid approach using genetic programming hyper-heuristics combined with human-designed rules to solve the problem. The experiments show that our hybrid approach is able to reduce energy consumption significantly compared to solely using human-designed rules.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: This comprehensive study investigates the behaviour of existing GP transfer methods for evolving routing policies in UCARP, identifies the potential of existing methods, and concludes that UCARP needs stronger and more effective transfer learning methods.
Abstract: The Uncertain Capacitated Arc Routing Problem (UCARP) is a combinatorial optimization problem that has many important real-world applications. Genetic programming (GP) is a powerful machine learning technique that has been successfully used to automatically evolve routing policies for UCARP. Generalisation is an open issue in the field of UCARP, and in this direction an open challenge is the case of changes in the number of vehicles, which currently requires new training procedures to be initiated. Considering the expensive training cost of evolving routing policies for UCARP, a promising strategy is to learn and reuse knowledge from a previous problem-solving process to improve the effectiveness and efficiency of solving a new related problem, i.e. transfer learning. Since none of the existing GP transfer methods have been used as a hyper-heuristic in solving UCARP, we conduct a comprehensive study to investigate the behaviour of the existing GP transfer methods for evolving routing policies in UCARP and identify the potential of existing methods. The results suggest that the existing methods applying subtree transfer cannot scale well to environment changes and cannot be adapted for this purpose. However, applying GP transfer methods is a good option for creating better initial populations on the target domain, and though this effect does not last, we can obtain comparable results in the target domain in a much shorter time. Overall, we conclude that UCARP needs stronger and more effective transfer learning methods.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: The proposed scalable multimodal multiobjective test problem is the first scalable test problem with respect to all of these five parameters: the number of objectives, the number of decision variables, the number of equivalent Pareto optimal solution sets in the decision space, the number of local Pareto fronts, and the number of local Pareto optimal solution sets in the decision space.
Abstract: Recently, multimodal multiobjective optimization has started to attract a lot of attention. Its task is to find multiple Pareto optimal solution sets in the decision space, which are equivalent in the objective space. In some applications, it is important to find multiple global and local Pareto optimal solution sets in the decision space, which have similar quality in the objective space. In evolutionary computation, a wide variety of test problems with various characteristics are needed for fair comparison of different algorithms. However, we have only a small number of test problems for multimodal multiobjective optimization. In this paper, we propose a scalable multimodal multiobjective test problem with respect to the five parameters: (i) the number of objectives, (ii) the number of decision variables, (iii) the number of equivalent Pareto optimal solution sets in the decision space, (iv) the number of local Pareto fronts, and (v) the number of local Pareto optimal solution sets in the decision space for each local Pareto front. Our proposal is the first scalable test problem with respect to all of these five parameters.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: This paper proposes a new GP-based EDL method with convolution operators (COGP) for feature learning on binary and multi-class image classification, and demonstrates that COGP achieved significantly better performance in most comparisons with 11 competitive methods.
Abstract: Evolutionary deep learning (EDL), a hot topic in recent years, aims at using evolutionary computation (EC) techniques to address existing issues in deep learning. Most existing work focuses on employing EC methods for evolving hyper-parameters, deep structures or weights for neural networks (NNs). Genetic programming (GP), as an EC method, is able to achieve deep learning due to the characteristics of its representation. However, many current GP-based EDL methods are limited to binary image classification. This paper proposes a new GP-based EDL method with convolution operators (COGP) for feature learning on binary and multi-class image classification. A novel flexible program structure is developed to allow COGP to evolve solutions with deep or shallow structures. Associated with the program structure, a new function set and a new terminal set are developed in COGP. The experimental results on six image classification data sets of varying difficulty demonstrate that COGP achieved significantly better performance in most comparisons with 11 competitive methods. Visualisation of the best program further revealed the high interpretability of the solutions found by COGP.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: Later competition winners are not statistically better than algorithms from previous years in a general sense on every problem and dimensionality, according to a benchmark set used by the competition throughout these years.
Abstract: The Special Sessions and Competitions on Real-Parameter Single Objective Optimization are benchmarking competitions held every year since 2013 that are used to evaluate the performance of new optimization algorithms. One flaw of these competitions is that algorithms are compared only to other algorithms submitted in the same year, not with algorithms submitted in previous years of the competition, so it can make comparison between all algorithms troublesome. Almost every year uses different benchmark functions, so the results between the years are not directly comparable. As a result, the winner of the most recent competition might not necessarily be significantly better than the winners of previous years. In this article, we directly compare winners of every competition held from 2013 to 2018 and present the results of this comparison. We use a benchmark set that consists of all test functions used by the competition throughout these years. We compare them on benchmark functions grouped by dimension (10, 30, 50, 100) and by year (2013, 2014, 2015, 2017). This allows us on one hand to see which algorithms perform best at specific dimensions, while grouping by year shows effects of parameter tuning on the end results. We present the results of these comparisons and find that later competition winners are not statistically better than algorithms from previous years in a general sense on every problem and dimensionality.
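One common way to run such per-function pairwise comparisons (the abstract does not state which statistical test the authors used) is a rank-based test over repeated runs, e.g. the Mann-Whitney U test from SciPy, as sketched below with synthetic error samples.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def compare_per_function(errors_a, errors_b, alpha=0.05):
    # Per-function pairwise comparison of two algorithms over repeated runs.
    # errors_a / errors_b map function id -> array of final errors per run.
    # Returns (wins, ties, losses) for algorithm A at significance level alpha.
    wins = ties = losses = 0
    for fid in errors_a:
        _, p = mannwhitneyu(errors_a[fid], errors_b[fid], alternative="two-sided")
        if p >= alpha:
            ties += 1
        elif np.median(errors_a[fid]) < np.median(errors_b[fid]):
            wins += 1
        else:
            losses += 1
    return wins, ties, losses

# Synthetic stand-in for two winners' 51-run error samples on 3 benchmark functions
a = {f: rng.lognormal(0.0, 1.0, 51) for f in range(3)}
b = {f: rng.lognormal(0.2, 1.0, 51) for f in range(3)}
print(compare_per_function(a, b))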

Proceedings ArticleDOI
10 Jun 2019
TL;DR: This work proposes a simple and effective knowledge transfer strategy which utilizes the best solution found so far for one problem to assist in solving the other problems during the optimization process, based on random replacement.
Abstract: Evolutionary multi-task optimization (EMTO) studies how to simultaneously solve multiple optimization problems, so-called component problems, via evolutionary algorithms, and has drawn much attention in the field of evolutionary computation. Knowledge transfer across the multiple optimization problems being solved is the key to making EMTO outperform traditional optimization paradigms. In this work, we propose a simple and effective knowledge transfer strategy which utilizes the best solution found so far for one problem to assist in solving the other problems during the optimization process. This strategy is based on random replacement. It does not introduce extra computational cost in terms of objective function evaluations for solving each component problem, yet it helps to improve optimization effectiveness and efficiency compared to solving each component problem in a standalone way. This lightweight knowledge transfer strategy is implemented via differential evolution within a multi-population based EMTO paradigm, leading to a differential evolutionary multi-task optimization (DEMTO) algorithm. Experiments are conducted on the CEC’2017 competition test bed to compare the proposed DEMTO algorithm with five state-of-the-art EMTO algorithms, demonstrating the superiority of DEMTO.
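The random-replacement transfer itself can be written in a few lines: with some probability, one randomly chosen individual in a task's population is overwritten by the best-so-far solution of another task, so no extra objective evaluations are spent on the transfer. The transfer probability and the assumption of a shared search-space dimensionality below are illustrative, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

def random_replacement_transfer(populations, best_solutions, transfer_prob=0.1):
    # For each task, with probability transfer_prob, overwrite one randomly chosen
    # individual with the best-so-far solution of another (randomly chosen) task.
    # The transfer itself spends no extra objective-function evaluations.
    n_tasks = len(populations)
    for i in range(n_tasks):
        if n_tasks > 1 and rng.random() < transfer_prob:
            j = rng.choice([k for k in range(n_tasks) if k != i])
            victim = rng.integers(len(populations[i]))
            populations[i][victim] = best_solutions[j].copy()
    return populations

# Two toy tasks assumed to share a 5-dimensional search space
pops = [rng.random((20, 5)), rng.random((20, 5))]
bests = [pops[0][0].copy(), pops[1][0].copy()]
pops = random_replacement_transfer(pops, bests, transfer_prob=0.5)
print(pops[0][:2])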

Proceedings ArticleDOI
10 Jun 2019
TL;DR: This paper proposes a genetic algorithm that can, for a given image processing task, efficiently explore a defined space of potentially suitable CNN architectures and simultaneously optimise their hyperparameters and named this fast automatic optimisation model fast-CNN.
Abstract: Convolutional Neural Networks (CNNs) are currently the most prominent deep neural network models and have been used with great success for image classification and other applications. The performance of CNNs depends on their architecture and hyperparameter settings. Early CNN models like LeNet and AlexNet were manually designed by experienced researchers, and the empirical design and optimisation of a new CNN architecture require a lot of expertise and can be very time-consuming. In this paper, we propose a genetic algorithm that can, for a given image processing task, efficiently explore a defined space of potentially suitable CNN architectures and simultaneously optimise their hyperparameters. We named this fast automatic optimisation model fast-CNN and employed it to find competitive CNN architectures for image classification on CIFAR10. In a series of comparative simulation experiments we demonstrate that the network designed by fast-CNN achieved accuracy nearly as good as some of the best available network models while taking significantly less time to evolve. The trained fast-CNN network model also generalised well to CIFAR100.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: The ant colony optimization (ACO) metaheuristic is applied to coordinate the charging process of the EVs within the charging station by generating efficient schedules, and the results show that the application of ACO is highly effective and outperforms other approaches.
Abstract: In this work we consider the scheduling problem for charging a fleet of electric vehicles (EVs) within a station such that the total tardiness of the problem is minimized. The generation of a feasible and efficient schedule is a difficult task due to the physical and power constraints of the charging station, i.e., the maximum contracted power and the maximum power imbalance between the lines of the electric feeder. The ant colony optimization (ACO) metaheuristic is applied to coordinate the charging process of the EVs within the charging station by generating efficient schedules. The behaviour and performance of ACO is analyzed and compared against state-of-the-art approaches on a benchmark set inspired by real-world scenarios. The experimental results show that the application of ACO is highly effective and outperforms other approaches.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A new kind of CC algorithm called Soft Grouping Cooperative Co-evolution (SGCC) is proposed to tackle the problem of grouping decision variables by softly assigning variables to multiple groups, controlling each variable's degree of membership in the groups.
Abstract: Cooperative Co-evolution (CC) is a promising framework for scaling up conventional evolutionary algorithms for large scale global optimization (LSGO) problems. However, how to group decision variables remains a problem when there is no prior knowledge about the dependence relationships between variables. In this paper, a new kind of CC algorithm called Soft Grouping Cooperative Co-evolution (SGCC) is proposed to tackle this problem. Instead of explicitly dividing variables into multiple groups, the algorithm softly assigns variables to multiple groups by controlling the degree of membership of each variable in the groups. In this work, the degree of membership is controlled by a probability distribution function. The experimental investigation shows that soft grouping CC is better than explicit grouping CC on partially separable and non-separable problems.
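Soft grouping can be illustrated as drawing a fresh variable-to-group assignment each co-evolutionary cycle from per-variable membership probabilities, rather than fixing a hard decomposition. The uniform membership values in the sketch below are a stand-in; the probability distribution function actually used to control membership is not specified in the abstract.

import numpy as np

rng = np.random.default_rng(0)

def soft_grouping(n_vars, n_groups, membership):
    # Draw a grouping of decision variables where membership[v, g] is the
    # probability that variable v is optimised inside group g in this cycle.
    # Unlike a hard decomposition, the assignment can change between cycles.
    groups = [[] for _ in range(n_groups)]
    for v in range(n_vars):
        g = rng.choice(n_groups, p=membership[v])
        groups[g].append(v)
    return groups

# Illustrative degrees of membership for 8 variables over 3 groups (uniform here;
# the paper controls these values with a probability distribution function).
membership = np.full((8, 3), 1.0 / 3.0)
print(soft_grouping(8, 3, membership))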

Proceedings ArticleDOI
04 Jan 2019
TL;DR: In this article, a hierarchical genetic reinforcement learner is designed to evolve a swarm controller for an agent shepherding a boids-based swarm, with each model in the hierarchy learning incrementally through a multi-part reward function, and the hierarchy acts as a decision fusion function that combines the individual behaviours and skills learnt by each instruction to create a smart shepherd to control the swarm.
Abstract: The design of reward functions in reinforcement learning is a human skill that comes with experience. Unfortunately, there is no methodology in the literature that could guide a human in designing the reward function, or allow a human to transfer, in a systematic manner, the skills developed in designing reward functions to another human. In this paper, we use Systematic Instructional Design, an approach in human education, to engineer a machine education methodology for designing reward functions for reinforcement learning. We demonstrate the methodology in designing a hierarchical genetic reinforcement learner that adopts a neural network representation to evolve a swarm controller for an agent shepherding a boids-based swarm. The results reveal that the methodology is able to guide the design of hierarchical reinforcement learners, with each model in the hierarchy learning incrementally through a multi-part reward function. The hierarchy acts as a decision fusion function that combines the individual behaviours and skills learnt by each instruction to create a smart shepherd to control the swarm.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: A radial basis function (RBF) assisted optimization algorithm with a batch infill sampling criterion for solving EOPs (RBFBS for short), where the quality of the RBF model is adjusted by choosing a good shape parameter via solving a sub-expensive hyperparameter optimization problem.
Abstract: Surrogate-assisted optimization algorithms (SAOAs) are very promising for solving computationally expensive optimization problems (EOPs). Generally, the performance of a SAOA is determined by the quality of its surrogate model and the infill sampling criterion. In this paper, we propose a radial basis function (RBF) assisted optimization algorithm with a batch infill sampling criterion for solving EOPs (RBFBS for short). In RBFBS, the quality of the RBF model is adjusted by choosing a good shape parameter via solving a sub-expensive hyperparameter optimization problem. Moreover, a batch infill sampling criterion that includes a bi-objective-based sampling approach and a single-objective-based sampling approach is proposed to obtain a batch of samples for expensive evaluation. The experimental results on various benchmark problems show that RBFBS is very promising for expensive optimization.
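The role of the RBF shape parameter can be illustrated with SciPy's RBFInterpolator, whose epsilon argument plays that role for the Gaussian kernel. The hold-out grid search below is only a crude stand-in for the paper's sub-expensive hyperparameter optimization problem, and the sample data, kernel, and smoothing value are assumptions.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Stand-in for a computationally expensive objective (here: the sphere function).
    return np.sum(x ** 2, axis=-1)

# A small set of "expensively" evaluated samples in 2-D
X = rng.uniform(-5.0, 5.0, (30, 2))
y = expensive_objective(X)

# Crude stand-in for choosing the shape parameter: hold out the last 10 samples
# and pick the epsilon with the lowest hold-out error.
candidates = [0.5, 1.0, 2.0, 4.0]
errors = []
for eps in candidates:
    model = RBFInterpolator(X[:20], y[:20], kernel="gaussian", epsilon=eps, smoothing=1e-8)
    errors.append(float(np.mean((model(X[20:]) - y[20:]) ** 2)))
best_eps = candidates[int(np.argmin(errors))]

# Final surrogate fitted on all samples with the chosen shape parameter
surrogate = RBFInterpolator(X, y, kernel="gaussian", epsilon=best_eps, smoothing=1e-8)
print(best_eps, surrogate(np.array([[0.5, -0.5]])))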

Proceedings ArticleDOI
10 Jun 2019
TL;DR: The results show that the proposed representation and initialisation method can achieve promising accuracy compared to manually designed architectures, despite the simplicity of the random search approach and the reduced data set.
Abstract: Convolutional neural networks (CNNs) have demonstrated highly effective performance in image classification across a range of data sets. The best performance can only be obtained with CNNs when the appropriate architecture is chosen, which depends on both the volume and nature of the training data available. Many of the state-of-the-art architectures in the literature have been hand-crafted by human researchers, but this requires expertise in CNNs, domain knowledge, or trial-and-error experimentation, often using expensive resources. Recent work based on evolutionary deep learning has offered an alternative, in which evolutionary computation (EC) is applied to automatic architecture search. A key component in evolutionary deep learning is the chosen encoding strategy; however, previous approaches to CNN encoding in EC typically have restrictions in the architectures that can be represented. Here, we propose an encoding strategy based on a directed acyclic graph representation, and introduce an algorithm for random generation of CNN architectures using this encoding. In contrast to previous work, our proposed encoding method is more general, enabling representation of CNNs of arbitrary connectional structure and unbounded depth. We demonstrate its effectiveness using a random search, in which 200 randomly generated CNN architectures are evaluated. To improve the computational efficiency, the 200 CNNs are trained using only 10% of the CIFAR-10 training data; the three best-performing CNNs are then re-trained on the full training set. The results show that the proposed representation and initialisation method can achieve promising accuracy compared to manually designed architectures, despite the simplicity of the random search approach and the reduced data set. We intend that future work can improve on these results by applying evolutionary search using this encoding.

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A variant of the reference vector guided evolutionary algorithm is proposed by adjusting reference vectors according to the distribution of the solutions in the current population to make sure that most reference vectors are associated with solutions.
Abstract: For problems with irregular Pareto fronts, only part of the objective space is covered by optimal solutions. Most decomposition based evolutionary many-objective algorithms, however, predefine uniformly distributed weight or reference vectors, making them less suited for problems with irregular Pareto fronts, since many weight or reference vectors will be wasted. To address the above issue, this paper proposes a variant of the reference vector guided evolutionary algorithm by adjusting reference vectors according to the distribution of the solutions in the current population to make sure that most reference vectors are associated with solutions. A secondary selection criterion based on the dominance relationship is adopted in addition to the angle penalized distance based selection so that a sufficient number of solutions can survive and be passed to the next generation. Experiments on 12 irregular test problems with 60 instances show that the proposed algorithm is competitive compared to the state-of-the-art algorithms for solving problems with irregular Pareto fronts.
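A generic illustration of reference-vector adjustment (not the paper's exact rule) is to keep part of the uniform vector set and replace the rest with unit vectors derived from the normalised objective values of current solutions, so that most vectors point towards regions actually occupied by the population. The keep ratio and toy data below are assumptions.

import numpy as np

def adapt_reference_vectors(objectives, n_vectors, keep_ratio=0.5, seed=0):
    # Keep a fraction of uniformly generated reference vectors and replace the
    # rest with unit vectors derived from the normalised objective values of the
    # current population, so that most vectors point at occupied regions.
    rng = np.random.default_rng(seed)
    f = objectives - objectives.min(axis=0)                 # translate to the ideal point
    f = f / np.maximum(np.linalg.norm(f, axis=1, keepdims=True), 1e-12)
    n_keep = int(keep_ratio * n_vectors)
    uniform = rng.dirichlet(np.ones(objectives.shape[1]), size=n_keep)
    uniform = uniform / np.linalg.norm(uniform, axis=1, keepdims=True)
    n_pick = n_vectors - n_keep
    picked = f[rng.choice(len(f), size=n_pick, replace=len(f) < n_pick)]
    return np.vstack([uniform, picked])

# Toy population of 3-objective solutions standing in for an irregular front
objs = np.random.default_rng(1).random((50, 3))
print(adapt_reference_vectors(objs, n_vectors=10))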