
Showing papers presented at "Congress on Evolutionary Computation in 2014"


Proceedings ArticleDOI
06 Jul 2014
TL;DR: L-SHADE is proposed, which extends SHADE with Linear Population Size Reduction (LPSR), continually decreasing the population size according to a linear function; it is quite competitive with state-of-the-art evolutionary algorithms.
Abstract: SHADE is an adaptive DE which incorporates success-history based parameter adaptation and is one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on the CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.
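The LPSR schedule described above reduces to a one-line formula. The following is an illustrative sketch (function and parameter names are my own; the minimum population size of 4 is an assumed default):

```python
def lpsr_population_size(nfe, max_nfe, n_init, n_min=4):
    """Linear Population Size Reduction: linearly shrink the population
    from n_init down to n_min as the evaluation budget is consumed."""
    return round(((n_min - n_init) / max_nfe) * nfe + n_init)

# Example: start with 100 individuals, budget of 10000 evaluations.
sizes = [lpsr_population_size(nfe, 10000, 100) for nfe in (0, 5000, 10000)]
```

After each generation, the worst-ranked individuals would be deleted until the population matches the scheduled size.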

1,048 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: A systematic review of existing population initialization techniques is conducted, categorizing them from three exclusive perspectives: randomness, compositionality, and generality.
Abstract: Although various population initialization techniques have been employed in evolutionary algorithms (EAs), a comprehensive survey on this research topic is lacking. To fill this gap and attract more attention from EA researchers to this crucial yet less explored area, we conduct a systematic review of the existing population initialization techniques. Specifically, we categorize initialization techniques from three exclusive perspectives, i.e., randomness, compositionality and generality. Characteristics of the techniques belonging to each category are carefully analysed, leading to several sub-categories. We also discuss several open issues related to this research topic, which demand further in-depth investigation.
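As a concrete instance of the non-purely-random techniques such a survey covers, here is a minimal Latin hypercube sampling sketch, one widely used stratified initialization method (the interface and names are illustrative, not from the paper):

```python
import random

def latin_hypercube_init(pop_size, dim, low, high):
    """Latin hypercube initialization: each dimension is cut into
    pop_size equal strata, and each individual occupies a distinct
    stratum per dimension, giving better coverage than plain
    uniform sampling."""
    pop = [[0.0] * dim for _ in range(pop_size)]
    width = (high - low) / pop_size
    for d in range(dim):
        strata = list(range(pop_size))
        random.shuffle(strata)  # one distinct stratum per individual
        for i in range(pop_size):
            pop[i][d] = low + (strata[i] + random.random()) * width
    return pop

pop = latin_hypercube_init(10, 3, -5.0, 5.0)
```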

164 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: The Dynamic Search Fireworks Algorithm (dynFWA) is proposed, which uses a dynamic explosion amplitude for the firework at the currently best position; it significantly outperforms EFWA and achieves better performance than the latest SPSO version, SPSO2011.
Abstract: We propose an improved version of the recently developed Enhanced Fireworks Algorithm (EFWA) based on an adaptive dynamic local search mechanism. In EFWA, the explosion amplitude (i.e., the search area around the current location) of each firework is computed based on the quality of the firework's current location. This explosion amplitude is limited by a lower bound which decreases with the number of iterations, in order to prevent the explosion amplitude from becoming (close to) zero, and to enhance global search abilities at the beginning and local search abilities towards the later phase of the algorithm. As the explosion amplitude in EFWA depends solely on the fireworks' fitness and the current number of iterations, this procedure does not allow for an adaptive optimization process. To deal with these limitations, we propose the Dynamic Search Fireworks Algorithm (dynFWA), which uses a dynamic explosion amplitude for the firework at the currently best position. If the fitness of the best firework could be improved, the explosion amplitude will increase in order to speed up convergence. On the contrary, if the current position of the best firework could not be improved, the explosion amplitude will decrease in order to narrow the search area. In addition, we show that one of the EFWA operators can be removed in dynFWA without a loss in accuracy; this makes dynFWA computationally more efficient than EFWA. Experiments on 28 benchmark functions indicate that dynFWA significantly outperforms EFWA and achieves better performance than the latest SPSO version, SPSO2011.
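The dynamic amplitude rule described above reduces to a one-line update. A sketch, with illustrative amplification and reduction coefficients (ca = 1.2, cr = 0.9 are my assumptions, not necessarily the paper's values):

```python
def update_amplitude(amp, improved, ca=1.2, cr=0.9):
    """Explosion amplitude of the best firework: grow on improvement
    to speed up convergence, shrink otherwise to narrow the search."""
    return amp * ca if improved else amp * cr

# A run with no improvement steadily contracts the search area.
amp = 8.0
for _ in range(3):
    amp = update_amplitude(amp, improved=False)
```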

136 citations


Book ChapterDOI
06 Jul 2014
TL;DR: In this chapter, a new FWA variant called the Adaptive Fireworks Algorithm is proposed by replacing the explosion amplitude operator in FWA with an adaptive method.
Abstract: The explosion amplitude in FWA is a key factor influencing the performance of the fireworks algorithm, and it needs to be controlled precisely. In this chapter, a new FWA variant called the Adaptive Fireworks Algorithm is proposed by replacing the explosion amplitude operator in FWA with an adaptive method.

116 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: This paper reviews techniques which have combined evolutionary multi-objective optimization and multiple criteria decision making, including methods used to model the decision-maker's preferences and example algorithms for each category.
Abstract: For real-world problems, the task of decision-makers is to identify a solution that can satisfy a set of performance criteria, which are often in conflict with each other. Multi-objective evolutionary algorithms tend to focus on obtaining a family of solutions that represent the trade-offs between the criteria; however, ultimately a single solution must be selected. This need has driven a requirement to incorporate decision-maker preference models into such algorithms, a technique that is very common in the wider field of multiple criteria decision making. This paper reviews techniques which have combined evolutionary multi-objective optimization and multiple criteria decision making. Three classes of hybrid techniques are presented: a posteriori, a priori, and interactive, including methods used to model the decision-maker's preferences and example algorithms for each category. To encourage future research directions, a commentary on the remaining issues within this research area is also provided.

110 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: A new multi-modal evolutionary optimiser, the niching migratory multi-swarm optimiser (NMMSO), which dynamically manages many particle swarms, is shown to cope with a range of problem types and to produce results competitive with the state-of-the-art on the CEC 2013 multi-modal optimisation competition test problems.
Abstract: We present a new multi-modal evolutionary optimiser, the niching migratory multi-swarm optimiser (NMMSO), which dynamically manages many particle swarms. These sub-swarms are concerned with optimising separate local modes, and employ measures to allow swarm elements to migrate away from their parent swarm if they are identified as being in the vicinity of a separate peak, and to merge swarms together if they are identified as being concerned with the same peak. We employ coarse peak identification to facilitate the mode identification required. Swarm members are not constrained to particular sub-regions of the parameter space; however, members are initialised in the vicinity of a swarm's local mode estimate. NMMSO is shown to cope with a range of problem types, and to produce results competitive with the state-of-the-art on the CEC 2013 multi-modal optimisation competition test problems, providing new benchmark results in the field.

77 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: The benchmark suite of the 2014 Congress on Evolutionary Computation (CEC 2014) competition is used to test the performance of FWA-DM, a fireworks algorithm improved by a differential mutation operator.
Abstract: The idea of the fireworks algorithm (FWA) is inspired by fireworks exploding in the night sky. When a firework explodes, a shower of sparks appears around it; in this way, the adjacent area of the firework is searched. By controlling the amplitude of the explosion, the local search ability of FWA is guaranteed. The way the fireworks algorithm searches the surrounding area can be further improved by a differential mutation operator, forming an algorithm called FWA-DM. In this paper, the benchmark suite of the 2014 Congress on Evolutionary Computation (CEC) competition is used to test the performance of FWA-DM.
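The differential mutation used to augment FWA is the standard DE operator. A sketch of DE/rand/1 (which specific DE variant FWA-DM uses, and F = 0.5, are assumptions on my part):

```python
import random

def de_rand_1(pop, i, f=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), where r1, r2, r3
    are distinct population indices, all different from i."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[r1][d] + f * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

pop = [[float(j), float(-j)] for j in range(5)]
mutant = de_rand_1(pop, 0)
```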

72 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: 9 benchmark functions derived from the benchmark suite used for the 2009 IEEE Congress on Evolutionary Computation competition on bound-constrained and static MO optimization algorithms are introduced, and a variant of the multiobjective EA based on decomposition (MOEA/D) has been put forward and tested along with peer algorithms to evaluate the newly proposed benchmarks.
Abstract: The time-varying nature of the constraints, objectives and parameters that characterize several practical optimization problems has led to the field of dynamic optimization with Evolutionary Algorithms. In the recent past, very few researchers have concentrated their efforts on the study of Dynamic Multi-objective Optimization Problems (DMOPs), where the dynamicity is attributed to multiple objectives of conflicting nature. Considering the lack of a diverse and challenging set of benchmark functions, in this article we discuss some ways of designing DMOPs and propose some general techniques for introducing dynamicity in the Pareto Set and in the Pareto Front through shifting, shape variation, slope variation, phase variation, and several other types. We introduce 9 benchmark functions derived from the benchmark suite used for the 2009 IEEE Congress on Evolutionary Computation competition on bound-constrained and static MO optimization algorithms. Additionally, a variant of the multiobjective EA based on decomposition (MOEA/D) has been put forward and tested along with peer algorithms to evaluate the newly proposed benchmarks.

70 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: The Lion algorithm is a simulation model of the lion's unique characteristics such as territorial defense, territorial takeover, laggardness exploitation and pride; it is equivalent to differential evolution and better than the genetic algorithm when using a large-scale bilinear model.
Abstract: Nonlinear system identification, especially bilinear system identification, exploits global optimization algorithms to improve identification precision. This paper introduces a new optimization algorithm called the Lion algorithm to capture the system characteristics precisely. Our algorithm is a simulation model of the lion's unique characteristics such as territorial defense, territorial takeover, laggardness exploitation and pride. Experiments are conducted by identifying a nonlinear rational digital benchmark system using a standard bilinear model, and comparisons are made with the prominent genetic algorithm and differential evolution. Subsequently, the curse of dimensionality is also examined by defining a large-scale bilinear model, i.e. a bilinear system with 1023 bilinear kernel models, to identify the same digital benchmark system. The Lion algorithm dominates when using the standard bilinear model, whereas it is equivalent to differential evolution and better than the genetic algorithm when using the large-scale bilinear model.

68 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: An improved evolutionary algorithm for bilevel optimization is proposed, incorporating archiving and local search based on quadratic approximations for faster convergence of the algorithm.
Abstract: In this paper, we provide an improved evolutionary algorithm for bilevel optimization. It is an extension of a recently proposed Bilevel Evolutionary Algorithm based on Quadratic Approximations (BLEAQ). Bilevel optimization problems are known to be difficult and computationally demanding. The recently proposed BLEAQ approach has been able to bring down the computational expense significantly as compared to the contemporary approaches. The strategy proposed in this paper further improves the algorithm by incorporating archiving and local search. Archiving is used to store the feasible members produced during the course of the algorithm that provide a larger pool of members for better quadratic approximations of optimal lower level solutions. Frequent local searches at upper level supported by the quadratic approximations help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems, and comparison results against the contemporary approaches are also provided.

68 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: This paper proposes a method for combining multi-operator evolutionary algorithms (EAs), in which three EAs, each with multiple search operators, are used.
Abstract: This paper proposes a method for combining multi-operator evolutionary algorithms (EAs), in which three EAs, each with multiple search operators, are used. During the evolution process, the algorithm gradually emphasizes the best-performing multi-operator EA, as well as the best-performing search operator. The proposed algorithm is tested on the CEC2014 single-objective real-parameter competition benchmarks. The results show that the proposed algorithm has the ability to reach good solutions.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: An adaptive method, MLSoft, is proposed that uses widely-used reinforcement learning techniques such as the value function method and the softmax selection rule to adapt the subcomponent size during the optimization process; it is significantly better than an existing adaptive algorithm called MLCC on a set of large-scale fully-separable problems.
Abstract: In this paper we investigate the performance of cooperative co-evolutionary (CC) algorithms on large-scale fully-separable continuous optimization problems. We show that decomposition can have a significant impact on the performance of CC algorithms. The empirical results show that the subcomponent size should be chosen small enough to be within the capacity of the subcomponent optimizer. In practice, determining the optimal size is difficult; therefore, adaptive techniques are desired by practitioners. Here we propose an adaptive method, MLSoft, that uses widely-used techniques in reinforcement learning, such as the value function method and the softmax selection rule, to adapt the subcomponent size during the optimization process. The experimental results show that MLSoft is significantly better than an existing adaptive algorithm called MLCC on a set of large-scale fully-separable problems.
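The softmax selection rule mentioned above can be sketched directly: given a value estimate per candidate subcomponent size, an index is drawn with probability proportional to exp(value / τ). The temperature parameter and interface below are illustrative, not MLSoft's exact formulation:

```python
import math, random

def softmax_select(values, temperature=1.0):
    """Softmax (Boltzmann) selection: candidates with higher value
    estimates are chosen with exponentially higher probability."""
    exps = [math.exp(v / temperature) for v in values]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):  # roulette-wheel draw over probs
        acc += p
        if r <= acc:
            return i
    return len(values) - 1
```

In MLSoft's setting, `values` would hold the learned value estimates of the candidate subcomponent sizes, updated as the optimization proceeds.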

Proceedings ArticleDOI
06 Jul 2014
TL;DR: Two adaptations of Heuristics Miner, one of the most effective control-flow discovery algorithms, to the treatment of streams of event data are proposed and experimentally compared on both artificial and real streams.
Abstract: Process Mining represents an important research field that connects Business Process Modeling and Data Mining. One of the most prominent tasks of Process Mining is the discovery of a control-flow starting from event logs. This paper focuses on the important problem of control-flow discovery starting from a stream of event data. We propose to adapt Heuristics Miner, one of the most effective control-flow discovery algorithms, to the treatment of streams of event data. Two adaptations, based on Lossy Counting and Lossy Counting with Budget, as well as a sliding-window based version of Heuristics Miner, are proposed and experimentally compared on both artificial and real streams. Experimental results show the effectiveness of control-flow discovery algorithms for streams on artificial and real datasets.
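Lossy Counting, the sketch behind the first streaming adaptation, keeps approximate item frequencies in bounded memory by pruning, at each bucket boundary, items whose count plus error bound falls below the current bucket index. A minimal version over a generic item stream (in the Heuristics Miner setting the items would be, e.g., direct-follows activity pairs; this is an illustrative sketch, not the paper's implementation):

```python
def lossy_count(stream, bucket_width):
    """Approximate frequency counting with bounded memory.
    Guarantees: true_count - n/bucket_width <= count <= true_count."""
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = (n - 1) // bucket_width + 1
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1  # max undercount so far
        if n % bucket_width == 0:  # prune at bucket boundary
            for k in list(counts):
                if counts[k] + deltas[k] <= bucket:
                    del counts[k], deltas[k]
    return counts
```

Frequent items survive pruning, while rare ones are periodically discarded, which bounds memory regardless of stream length.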

Proceedings ArticleDOI
06 Jul 2014
TL;DR: An improved version of TAA is proposed, namely ITAA, which incorporates a ranking mechanism for updating CA that enables truncating CA when it overflows; a shifted density estimation technique is also embedded to replace the old ranking method in DA.
Abstract: Multi-Objective Evolutionary Algorithms have been deeply studied in the research community and widely used in real-world applications. However, the performance of traditional Pareto-based MOEAs, such as NSGA-II and SPEA2, may deteriorate when tackling Many-Objective Problems, which refer to problems with at least four objectives. The main cause for the degradation lies in the fact that the high proportion of non-dominated solutions severely weakens the differentiation ability of Pareto-dominance, which may lead to stagnation. The Two Archive Algorithm (TAA) uses two archives, namely the Convergence Archive (CA) and the Diversity Archive (DA), as non-dominated solution repositories, focusing on convergence and diversity respectively. However, as the objective dimension increases, the size of CA increases enormously, leaving little space for DA. Besides, the update rate of CA is quite low, which makes it difficult for TAA to make progress. Moreover, since TAA prefers DA members that are far away from CA, DA might drag the population backwards. In order to deal with these weaknesses, this paper proposes an improved version of TAA, namely ITAA. Compared to TAA, ITAA incorporates a ranking mechanism for updating CA which enables truncating CA when it overflows. Besides, a shifted density estimation technique is embedded to replace the old ranking method in DA. The efficiency of ITAA is demonstrated by experimental studies on benchmark problems with up to 20 objectives.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: Experimental results demonstrate the search ability of MVMO-SH for effectively tackling a variety of problems with different dimensions and mathematical properties.
Abstract: This paper reports on the performance of the hybrid variant of the Mean-Variance Mapping Optimization (MVMO-SH) when applied to the IEEE-CEC 2014 competition test suite on Single Objective Real-Parameter Numerical Optimization. MVMO-SH adopts a swarm intelligence scheme, where each particle is characterized by its own solution archive and mapping function. Besides, multi-parent crossover is incorporated into the offspring creation stage in order to force the particles with worst fitness to explore other sub-regions of the search space. In addition, MVMO-SH can be customized to perform with an embedded local search strategy. Experimental results demonstrate the search ability of MVMO-SH for effectively tackling a variety of problems with different dimensions and mathematical properties.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A thorough empirical investigation of the conditions placed on particle swarm optimization control parameters to ensure convergent behavior found that parameters near the edge of the theoretically derived region converge at a very slow rate, after an initial population explosion.
Abstract: This paper performs a thorough empirical investigation of the conditions placed on particle swarm optimization control parameters to ensure convergent behavior. At present there exists a large number of theoretically derived parameter regions that will ensure particle convergence; however, selecting which region to utilize in practice is not obvious. The empirical study is carried out over a region slightly larger than that needed to contain all the relevant theoretically derived regions. It was found that there is a very strong correlation between one of the theoretically derived regions and the empirical evidence. It was also found that parameters near the edge of the theoretically derived region converge at a very slow rate, after an initial population explosion. Particle convergence is so slow that, in practice, the edge parameter settings should not really be considered useful as convergent parameter settings.
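One such theoretically derived region can be written down explicitly. The check below uses Poli's stochastic stability condition, c1 + c2 < 24(1 − w²)/(7 − 5w) with |w| < 1, taken here as a representative region; which region the study singles out as matching the empirical evidence is not restated here, so treating this one as the reference is my assumption:

```python
def in_poli_region(w, c1, c2):
    """True if the PSO parameters (inertia w, acceleration c1, c2)
    satisfy Poli's stochastic stability condition:
    |w| < 1 and c1 + c2 < 24 (1 - w^2) / (7 - 5 w)."""
    if abs(w) >= 1.0:
        return False
    return (c1 + c2) < 24.0 * (1.0 - w * w) / (7.0 - 5.0 * w)
```

For instance, the commonly used constriction-derived setting w ≈ 0.7298, c1 = c2 ≈ 1.49618 lies inside the region, whereas w = 0.9 with c1 = c2 = 2.0 lies outside it.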

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A migration probability is designed to integrate the migration of BBO and the normal explosion operator of FWA, which can not only reduce the computational burden, but also achieve a better balance between solution diversification and intensification.
Abstract: This paper presents a hybrid biogeography-based optimization (BBO) and fireworks algorithm (FWA) for global optimization. The key idea is to introduce the migration operator of BBO into FWA, in order to enhance information sharing among the population, and thus improve solution diversity and avoid premature convergence. A migration probability is designed to integrate the migration of BBO and the normal explosion operator of FWA, which can not only reduce the computational burden, but also achieve a better balance between solution diversification and intensification. The Gaussian explosion of the enhanced FWA (EFWA) is retained to keep the high exploration ability of the algorithm. Experimental results on selected benchmark functions show that the hybrid BBO-FWA achieves a significant performance improvement in comparison with both BBO and EFWA.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: The optimal parameter configurations that lead to statistically superior performance on the CEC-2013 large-scale test problems are revealed; interestingly, they favour much larger population sizes than the most commonly employed parameter configuration, while agreeing with its other parameter settings.
Abstract: This work provides an in-depth investigation of the effects of population initialization on Differential Evolution (DE) for dealing with large-scale optimization problems. Firstly, we conduct a statistical parameter sensitivity analysis to study the effects of DE's control parameters on its performance in solving large-scale problems. This study reveals the optimal parameter configurations which can lead to statistically superior performance on the CEC-2013 large-scale test problems. Interestingly, the identified optimal configurations favour much larger population sizes than the most commonly employed parameter configuration, while agreeing with its other parameter settings. Based on one of the identified optimal configurations and the most commonly used configuration, which differ only in the population size, we investigate the influence of various population initialization techniques on DE's performance. This study indicates that initialization plays a more crucial role in DE with a smaller population size. However, this observation might be the result of insufficient convergence due to the use of a large population size under the limited computational budget, which deserves further investigation.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A new feature selection approach based on particle swarm optimisation and a local search that mimics the typical backward elimination feature selection method can be successfully used to select a significantly smaller number of features and simultaneously improve the classification performance over using all features.
Abstract: The advances in data collection increase the dimensionality of the data (i.e. the total number of features) in many fields, which poses a challenge to many existing feature selection approaches. This paper develops a new feature selection approach based on particle swarm optimisation (PSO) and a local search that mimics the typical backward elimination feature selection method. The proposed algorithm uses a wrapper-based fitness function, i.e. the classification error rate. The local search is performed only on the global best and uses a filter-based measure, which aims to take advantage of both filter and wrapper approaches. The proposed approach is tested and compared with three recent PSO-based feature selection algorithms and two typical traditional feature selection methods. Experiments on eight benchmark datasets show that the proposed algorithm can be successfully used to select a significantly smaller number of features and simultaneously improve the classification performance over using all features. The proposed approach outperforms the three PSO-based algorithms and the two traditional methods.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: This study proposes a partial opposition-based learning (POBL) schema that considers a set of partial opposite points (a partial opposite population) of an estimate; simulation results demonstrate the effectiveness and improvement of the resulting POBL-ADE compared with ADE.
Abstract: Opposition-based Learning (OBL) has been reported to increase the performance of various optimization approaches. Instead of investigating only the opposite point of a candidate as in OBL, this study proposes a partial opposition-based learning (POBL) schema that considers a set of partial opposite points (a partial opposite population) of an estimate. Furthermore, a POBL-based adaptive differential evolution algorithm (POBL-ADE) is proposed to improve the effectiveness of ADE. The proposed algorithm is evaluated on the CEC2014 test suite from the special session and competition on real-parameter single-objective optimization at IEEE CEC 2014. Simulation results over the benchmark functions demonstrate the effectiveness and improvement of POBL-ADE compared with ADE.
Opposition-based learning (OBL), originally introduced by Tizhoosh [1], tries to find a better candidate solution by simultaneously considering an estimate point and its corresponding opposite estimate. It has been proved that an opposite candidate solution can provide a higher chance of finding solutions that are closer to the global optimum [2][3]. The concept of OBL has been applied to improve the performance of meta-heuristic algorithms and machine learning algorithms [4], [5]. In [6], the convergence speed of an evolutionary algorithm is accelerated by replacing random initialization with opposition-based population initialization. Further, [2] mathematically and experimentally proves this advantage when there is no prior knowledge about the solution. The benefit of the opposite of a candidate solution over random solutions is also shown intuitively based on a Euclidean distance-to-optimal-solution proof in [7].
By considering opposite individuals in the population initialization stage and the generation jumping stage, OBL has recently been applied to accelerate various meta-heuristic algorithms, such as Differential Evolution (DE) [8]–[10], Particle Swarm Optimization (PSO) [11], [12], Biogeography-Based Optimization (BBO) [13]–[15], the teaching and learning algorithm [16], the gravitational search algorithm [17], Harmony Search (HS) [18], and Artificial Bee Colony (ABC) [19]. OBL has also been employed to accelerate machine learning algorithms, including reinforcement learning, backpropagation learning in neural networks, and the Estimation of Distribution Algorithm (EDA). Opposition-based reinforcement learning (ORL) was proposed by considering opposite states and opposite actions [20]–[22]; the results demonstrate that ORL outperforms standard reinforcement learning. Similarly, by considering opposite transfer functions and opposite weights, opposition-based neural networks were proposed to improve learning speed and accuracy [23]–[25]. Motivated by the idea of OBL, this study presents an improved OBL, namely partial opposition-based learning (POBL). Rather than only examining the opposite point of a candidate, POBL computes partial opposite points (or a partial opposite population) of an estimate. Further, a POBL-based adaptive DE algorithm is proposed to solve the numerical optimization problems of the "CEC2014 Special Session and Competition on real parameter single objective optimization" [26]. In the proposed approach, the Adaptive Differential Evolution (ADE) [27], which needs no parameters to be tuned, is improved by POBL during population initialization and generation jumping. Experimental simulations on the benchmark functions show that POBL-ADE obtains better performance on the majority of the test problems compared with basic ADE and OBL-ADE.
Section II provides an overview of ADE and opposition-based learning. Section III describes the proposed partial opposition-based learning and the POBL-based ADE. The experimental setting is described in Section IV. Section V reports the experimental results with discussions. Finally, Section VI concludes this study with comments toward future research directions.
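The classical opposition construction, and one plausible reading of a "partial" opposite (opposing only a random subset of dimensions; the paper's exact construction may differ), can be sketched as:

```python
import random

def opposite(x, low, high):
    """Classical OBL opposite point: x~_d = low_d + high_d - x_d."""
    return [low[d] + high[d] - x[d] for d in range(len(x))]

def partial_opposite(x, low, high, rate=0.5):
    """Hypothetical partial opposite: each dimension is opposed
    independently with probability `rate`, otherwise kept as-is.
    (Illustrative interpretation of POBL, not the paper's definition.)"""
    return [low[d] + high[d] - x[d] if random.random() < rate else x[d]
            for d in range(len(x))]
```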

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A new scheme that utilizes the centroid point of a population to calculate opposite individuals and is identified as centroid opposition-based differential evolution (CODE), which is comprehensively evaluated on well-known complex benchmark functions and compared with the performance of conventional DE, ODE, and some other state-of-the-art algorithms in terms of solution accuracy.
Abstract: The capabilities of evolutionary algorithms (EAs) in solving nonlinear and non-convex optimization problems are significant. Among the many types of methods, differential evolution (DE) is an effective population-based stochastic algorithm which has emerged as very competitive. Since its inception in 1995, many variants of DE have been introduced to improve the performance of the original algorithm. In this context, opposition-based differential evolution (ODE) established a novel concept in which each individual must compete with its opposite in terms of fitness value in order to make an entry into the next generation. The generation of opposite points is based on the population's current extreme points (i.e., maximum and minimum) in the search space; these extreme points are not proper representatives of the whole population, compared to the centroid point, which is inclusive of all individuals in the population. This paper develops a new scheme that utilizes the centroid point of a population to calculate opposite individuals. Accordingly, the classical scheme of an opposite point is modified. Incorporating this new scheme into ODE leads to an enhanced ODE identified as centroid opposition-based differential evolution (CODE). The performance of the CODE algorithm is comprehensively evaluated on well-known complex benchmark functions and compared with the performance of conventional DE, ODE, and some other state-of-the-art algorithms (such as SaDE, ADE, SDE, and jDE) in terms of solution accuracy. The results for CODE are promising.
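The centroid-based opposite point can be sketched as reflecting a candidate through the population centroid M, i.e. x~ = 2M − x. This is one natural formulation of the scheme described; boundary handling and the paper's exact details are omitted:

```python
def centroid_opposite(x, population):
    """Opposite of x reflected through the population centroid M:
    x~_d = 2 * M_d - x_d, where M is the mean of all individuals."""
    dim, n = len(x), len(population)
    centroid = [sum(ind[d] for ind in population) / n for d in range(dim)]
    return [2.0 * centroid[d] - x[d] for d in range(dim)]
```

Unlike the classical opposite, which uses only the current per-dimension extremes, the centroid reflects the contribution of every individual in the population.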

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A controlled restart in differential evolution (DE) is proposed, which is used in the solution of the benchmark problems defined for the CEC 2014 competition.
Abstract: A controlled restart in differential evolution (DE) is proposed. The conditions of restart are derived from the difference between the maximum and minimum values of the objective function and the estimated maximum distance among the points in the current population. The restart is applied in a competitive-adaptation variant of DE. This DE algorithm with the controlled restart is used to solve the benchmark problems defined for the CEC 2014 competition. The two control parameters of the restart are set intuitively. The population size, which is the only control parameter of the competitive-adaptation variant of DE, is set to values based on short preliminary experimentation.
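The two restart conditions can be sketched as follows (the thresholds and the cheap per-dimension distance estimate are illustrative; the paper derives its conditions via tuned control parameters):

```python
def should_restart(fitnesses, population, eps_f=1e-8, eps_x=1e-8):
    """Trigger a restart when both the objective spread and the
    estimated maximum distance among population points collapse."""
    f_spread = max(fitnesses) - min(fitnesses)
    dim = len(population[0])
    # upper-bound estimate of the max pairwise distance from
    # per-dimension ranges (cheaper than all pairwise distances)
    x_spread = sum(
        (max(ind[d] for ind in population) - min(ind[d] for ind in population)) ** 2
        for d in range(dim)
    ) ** 0.5
    return f_spread < eps_f and x_spread < eps_x
```

On restart, the population would be reinitialized while the best-so-far solution is kept aside.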

Proceedings ArticleDOI
06 Jul 2014
TL;DR: The proposed approach uses information related to the neighborhood adopted in MOEA/D in order to obtain solutions which minimize the objective functions within the allowed feasible region to solve the constrained test problems adopted in the comparative study.
Abstract: In spite of the popularity of the Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D), its use in Constrained Multi-objective Optimization Problems (CMOPs) has not been fully explored. In the last few years, there have been a few proposals to extend MOEA/D to the solution of CMOPs. However, most of these proposals have adopted selection mechanisms based on penalty functions. In this paper, we present a novel selection mechanism based on the well-known ε-constraint method. The proposed approach uses information related to the neighborhood adopted in MOEA/D in order to obtain solutions which minimize the objective functions within the allowed feasible region. Our preliminary results indicate that our approach is highly competitive with respect to a state-of-the-art MOEA which efficiently solves the constrained test problems adopted in our comparative study.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: The comparative study shows that R-MEAD2 outperforms the dominance-based method R-NSGA-II on many-objective problems and shows that a uniform random number generator is simple and able to generate evenly distributed points in a high dimensional space.
Abstract: Evolutionary algorithms that rely on dominance ranking often suffer from a low selection pressure problem when dealing with many-objective problems. Decomposition and user-preference based methods can help to alleviate this problem to a great extent. In this paper, a user-preference based evolutionary multi-objective algorithm is proposed that uses decomposition methods for solving many-objective problems. Decomposition techniques that are widely used in multi-objective evolutionary optimization require a set of evenly distributed weight vectors to generate a diverse set of solutions on the Pareto-optimal front. The newly proposed algorithm, R-MEAD2, improves the scalability of its previous version, R-MEAD, which uses a simplex-lattice design method for generating weight vectors; this makes the population size dependent on the dimensionality of the objective space. R-MEAD2 uses a uniform random number generator to remove the coupling between the dimension and the population size. This paper shows that a uniform random number generator is simple and able to generate evenly distributed points in a high-dimensional space. Our comparative study shows that R-MEAD2 outperforms the dominance-based method R-NSGA-II on many-objective problems.
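The decoupling idea can be sketched in a few lines: draw uniform random numbers and normalize each row onto the weight simplex, so any population size works for any number of objectives. This is an illustration of the general approach, not necessarily the authors' exact sampling procedure.

```python
import numpy as np

def random_weight_vectors(n, m, seed=None):
    """Generate n weight vectors for an m-objective problem by drawing
    uniform random numbers and normalizing each vector to sum to one.
    Unlike a simplex-lattice design, n is independent of m."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(size=(n, m))
    return w / w.sum(axis=1, keepdims=True)
```

By contrast, a simplex-lattice design with parameter H produces exactly C(H+m-1, m-1) vectors, which grows combinatorially with the number of objectives m.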

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A definition of population diversity in BSO algorithm to measure the change of solutions' distribution is proposed in this paper and the experimental results show that the performance of the BSO is improved by these two strategies.
Abstract: Swarm intelligence algorithms suffer from premature convergence, which happens partially due to the solutions getting clustered together and not diverging again. Brain storm optimization (BSO), a young and promising algorithm in swarm intelligence, is based on the collective behavior of human beings, that is, the brainstorming process. Premature convergence also happens in the BSO algorithm: the solutions get clustered after a few iterations, which indicates that the population diversity decreases quickly during the search. A definition of population diversity in the BSO algorithm to measure the change of the solutions' distribution is proposed in this paper. The algorithm's exploration and exploitation ability can be measured based on the change of population diversity. Two kinds of partial re-initialization strategies are utilized to improve the population diversity in the BSO algorithm. The experimental results show that the performance of BSO is improved by these two strategies.
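One common way to define population diversity, usable as a stand-in for the paper's measure (whose exact definition is not given in the abstract), is the mean distance of the solutions to the population centroid:

```python
import numpy as np

def population_diversity(population):
    """Mean Euclidean distance of solutions to the population centroid.
    A value near zero indicates the population has clustered, which is
    the symptom of premature convergence described above."""
    centroid = population.mean(axis=0)
    return np.mean(np.linalg.norm(population - centroid, axis=1))
```

Tracking this quantity over iterations shows when the population has collapsed and a partial re-initialization of some individuals would be worthwhile.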

Proceedings ArticleDOI
06 Jul 2014
TL;DR: This paper derives the runtime of selection hyper-heuristics with a number of the most commonly used learning mechanisms, not only on a classical example problem but also on a general model of fitness landscapes, which helps in understanding the behaviour of hyper-heuristics.
Abstract: The term selection hyper-heuristics refers to randomised search techniques used to solve computational problems by choosing and executing heuristics from a set of pre-defined low-level heuristic components. Selection hyper-heuristics have been successfully employed in many problem domains. Nevertheless, a theoretical foundation of these heuristics is largely missing. Gaining insight into the behaviour of selection hyper-heuristics is challenging due to the complexity and random design of these heuristics. This paper is one of the initial studies to rigorously analyse the runtime of selection hyper-heuristics with a number of the most commonly used learning mechanisms, namely simple random, random gradient, greedy, and permutation. We derive the runtime of selection hyper-heuristics with these learning mechanisms not only on a classical example problem, but also on a general model of fitness landscapes. This in turn helps in understanding the behaviour of hyper-heuristics. Our results show that all the considered selection hyper-heuristics have roughly the same performance, which suggests that the learning mechanisms do not necessarily improve the performance of hyper-heuristics. A new learning mechanism that improves the performance of hyper-heuristics on our example problem is presented.
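The simplest of the learning mechanisms listed above, simple random, can be sketched as follows: at each step a low-level heuristic is picked uniformly at random and the move is accepted only if it does not worsen the fitness. This is a generic illustration, not the paper's formal model.

```python
import random

def simple_random_hh(solution, fitness, heuristics, steps, rng=random):
    """Simple-random selection hyper-heuristic (maximization): pick a
    low-level heuristic uniformly at random each step and keep the
    candidate only if its fitness is at least as good."""
    best, best_f = solution, fitness(solution)
    for _ in range(steps):
        h = rng.choice(heuristics)       # no learning: uniform choice
        cand = h(best)
        cand_f = fitness(cand)
        if cand_f >= best_f:             # non-worsening acceptance
            best, best_f = cand, cand_f
    return best, best_f
```

The other mechanisms differ only in the `rng.choice` line: random gradient re-applies the last successful heuristic, greedy tries all heuristics and takes the best move, and permutation cycles through a fixed ordering.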

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A decomposition method based on High Dimensional Model Representation which extracts separable and nonseparable subcomponents for Cooperative Co-evolutionary algorithms and is promisingly efficient to solve large-scale optimization problems.
Abstract: Cooperative co-evolutionary algorithms are effective approaches to solve large-scale optimization problems. The crucial challenge in these methods is the design of a decomposition method that is able to detect interactions among variables. In this paper, we propose a decomposition method based on High Dimensional Model Representation (HDMR) which extracts separable and nonseparable subcomponents for cooperative co-evolutionary algorithms. The entire decomposition procedure is conducted before the optimization is applied. The experimental results for D=1000 on twenty CEC-2010 benchmark functions show that the proposed method is promisingly efficient at solving large-scale optimization problems. The proposed approach is compared with two other methods and discussed in detail.
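A generic finite-difference interaction test, common to several decomposition methods for cooperative co-evolution, illustrates what "detecting interactions among variables" means; the paper's HDMR-based procedure differs in its details, so treat this only as a sketch of the underlying idea:

```python
def interacts(f, i, j, x, delta=1.0, eps=1e-9):
    """Test whether variables i and j of f interact (are nonseparable):
    if the effect of perturbing x[i] changes when x[j] is also
    perturbed, the pair cannot be optimized independently."""
    def shift(v, k, d):
        w = list(v)
        w[k] += d
        return w
    d1 = f(shift(x, i, delta)) - f(x)                              # effect of x[i] alone
    d2 = f(shift(shift(x, j, delta), i, delta)) - f(shift(x, j, delta))  # same, with x[j] shifted
    return abs(d1 - d2) > eps
```

Pairs flagged as interacting are grouped into the same subcomponent; fully separable variables each get their own one-dimensional subproblem.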

Proceedings ArticleDOI
06 Jul 2014
TL;DR: A new replacement scheme named global replacement is proposed that can improve the performance of MOEA/D, and trade-offs between convergence and diversity can be easily controlled in this replacement strategy.
Abstract: This paper studies the replacement schemes in MOEA/D and proposes a new scheme named global replacement, which can improve the performance of MOEA/D. Moreover, trade-offs between convergence and diversity can be easily controlled with this replacement strategy. The paper also shows that different problems need different trade-offs between convergence and diversity. We test MOEA/D with the global replacement on three sets of benchmark problems to demonstrate its effectiveness.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: This paper characterises the landscapes of two commonly used benchmark suites, and uses these landscape characteristics to obtain a high level view of the current state of benchmark functions.
Abstract: New and existing optimisation algorithms are often compared by evaluating their performance on a benchmark suite. This set of functions aims to evaluate the algorithm across a range of problems and serves as a baseline measurement of how the algorithm may perform on real-world problems. It is important that the functions serve as good representatives of commonly occurring problems. In order to select functions that will make up the benchmark suite, the characteristics and relationships among the functions must be known. This paper characterises the landscapes of two commonly used benchmark suites, and uses these landscape characteristics to obtain a high-level view of the current state of benchmark functions. This is done by using a self-organising feature map to cluster and analyse functions based on landscape characteristics. It is found that while there are numerous functions that cover a wide range of characteristics, there are characteristics that are underrepresented, or not covered at all. Furthermore, it is discovered that common benchmark suites are composed of functions which are highly similar according to the measured characteristics.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: This work is the first to propose a classical multi-objective formalisation where both objectives are equally important, and enables software engineers to select not just one solution but instead to select from an array of test suite possibilities the one that best matches the economical and technological constraints of their testing context.
Abstract: Software Product Lines (SPLs) are families of related software products, each with its own set of feature combinations. Their commonly large number of products poses a unique set of challenges for software testing, as it might not be technologically or economically feasible to test all of them individually. SPL pairwise testing aims at selecting a set of products to test such that all possible combinations of two features are covered by at least one selected product. Most approaches for SPL pairwise testing have focused on achieving full coverage of all pairwise feature combinations with the minimum number of products to test. Though useful in many contexts, this single-objective perspective does not reflect the prevailing scenario, where software engineers do face trade-offs between the objectives of maximizing the coverage and minimizing the number of products to test. In contrast, and to address this need, our work is the first to propose a classical multi-objective formalisation where both objectives are equally important. In this paper, we study the application to SPL pairwise testing of four classical multi-objective evolutionary algorithms. We developed three seeding strategies - techniques that leverage problem domain knowledge - and measured their performance impact on a large and diverse corpus of case studies using two well-known multi-objective quality measures. Our study identifies the performance differences among the algorithms and corroborates that the more domain knowledge leveraged, the better the search results. Our findings enable software engineers to select not just one solution (as in the case of single-objective techniques) but instead to select from an array of test suite possibilities the one that best matches the economical and technological constraints of their testing context.