
Showing papers in "IEEE Transactions on Evolutionary Computation in 2011"


Journal ArticleDOI
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.
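The mutation-crossover-selection loop summarized above is simple enough to sketch. The following minimal Python sketch of the classic DE/rand/1/bin scheme is illustrative only; the objective function, bounds, and parameter values are arbitrary choices, not taken from the paper.

```python
import numpy as np

def sphere(x):
    # Illustrative objective; any real-valued function of a vector works here.
    return float(np.sum(x ** 2))

def de_rand_1_bin(fobj, dim=10, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fitness = np.array([fobj(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: perturb a random member with the scaled difference of two others.
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # Binomial crossover between the target vector and the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: the trial replaces the target only if it is no worse.
            f_trial = fobj(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

if __name__ == "__main__":
    x_best, f_best = de_rand_1_bin(sphere)
    print(x_best, f_best)
```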

4,321 citations


Journal ArticleDOI
TL;DR: A novel method, called composite DE (CoDE), has been proposed, which uses three trial vector generation strategies and three control parameter settings and randomly combines them to generate trial vectors.
Abstract: Trial vector generation strategies and control parameters have a significant influence on the performance of differential evolution (DE). This paper studies whether the performance of DE can be improved by combining several effective trial vector generation strategies with some suitable control parameter settings. A novel method, called composite DE (CoDE), has been proposed in this paper. This method uses three trial vector generation strategies and three control parameter settings. It randomly combines them to generate trial vectors. CoDE has been tested on all the CEC2005 contest test instances. Experimental results show that CoDE is very competitive.
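A condensed sketch of the CoDE idea follows: each target vector produces one trial per generation strategy, with the control parameters for each trial drawn at random from a small pool, and the best of the resulting trials competes with the target. The strategies and parameter pool below follow the common description of CoDE but should be read as an illustration, not the paper's exact configuration.

```python
import numpy as np

PARAM_POOL = [(1.0, 0.1), (1.0, 0.9), (0.8, 0.2)]  # candidate (F, CR) settings

def distinct_indices(n, exclude, k, rng):
    # k distinct population indices, never equal to the target index.
    return rng.choice([j for j in range(n) if j != exclude], k, replace=False)

def bin_crossover(target, mutant, CR, rng):
    mask = rng.random(target.size) < CR
    mask[rng.integers(target.size)] = True  # keep at least one mutant component
    return np.where(mask, mutant, target)

def code_trials(pop, i, rng):
    """Generate three trial vectors for target i, one per strategy."""
    n = len(pop)
    trials = []
    # Strategy 1: rand/1/bin with a randomly chosen (F, CR) pair.
    F, CR = PARAM_POOL[rng.integers(len(PARAM_POOL))]
    r1, r2, r3 = distinct_indices(n, i, 3, rng)
    trials.append(bin_crossover(pop[i], pop[r1] + F * (pop[r2] - pop[r3]), CR, rng))
    # Strategy 2: rand/2/bin with another randomly chosen (F, CR) pair.
    F, CR = PARAM_POOL[rng.integers(len(PARAM_POOL))]
    r1, r2, r3, r4, r5 = distinct_indices(n, i, 5, rng)
    mutant = pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    trials.append(bin_crossover(pop[i], mutant, CR, rng))
    # Strategy 3: current-to-rand/1 (applied without crossover).
    F, _ = PARAM_POOL[rng.integers(len(PARAM_POOL))]
    r1, r2, r3 = distinct_indices(n, i, 3, rng)
    k = rng.random()
    trials.append(pop[i] + k * (pop[r1] - pop[i]) + F * (pop[r2] - pop[r3]))
    return trials
```

In the surrounding DE loop, the best of the three trials (by objective value) would then replace the target vector if it is no worse.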

1,207 citations


Journal ArticleDOI
TL;DR: Compared with other PSO algorithms, the comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
Abstract: Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. This new learning strategy and the new algorithms are tested on a set of 16 benchmark functions, and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
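The orthogonal experimental design (OED) step that builds the guidance exemplar can be illustrated on a small scale. The sketch below hard-codes a standard L8(2^7) orthogonal array, so it only covers problems with up to seven dimensions; OLPSO's construction generalizes this, and all names here are illustrative.

```python
import numpy as np

# Standard L8(2^7) orthogonal array, levels coded 0/1.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def orthogonal_exemplar(pbest, nbest, fobj):
    """Combine pbest and nbest dimension-wise via the L8 array (minimization)."""
    d = pbest.size
    assert d <= 7, "this toy sketch only covers up to 7 dimensions"
    rows = L8[:, :d]
    # Evaluate the eight combinations prescribed by the orthogonal array:
    # level 0 takes the dimension from pbest, level 1 from nbest.
    candidates = np.where(rows == 0, pbest, nbest)
    fitness = np.array([fobj(c) for c in candidates])
    # Factor analysis: for each dimension, keep the level with the better mean fitness.
    predicted = np.empty(d)
    for j in range(d):
        mean0 = fitness[rows[:, j] == 0].mean()
        mean1 = fitness[rows[:, j] == 1].mean()
        predicted[j] = pbest[j] if mean0 <= mean1 else nbest[j]
    # The exemplar is the better of the predicted combination and the best tested one.
    f_pred = fobj(predicted)
    best_tested = int(np.argmin(fitness))
    return predicted if f_pred <= fitness[best_tested] else candidates[best_tested]
```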

633 citations


Journal ArticleDOI
TL;DR: A comprehensive multi-facet survey of recent research in memetic computation is presented, covering simple hybrids, adaptive hybrids, and the memetic automaton.
Abstract: Memetic computation is a paradigm that uses the notion of meme(s) as units of information encoded in computational representations for the purpose of problem-solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks and operational algorithms including simple hybrids, adaptive hybrids and memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.

485 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel framework based on the proximity characteristics among the individual solutions as they evolve, incorporating information from neighboring individuals in an attempt to efficiently guide the evolution of the population toward the global optimum.
Abstract: Differential evolution is a very popular optimization algorithm and considerable research has been devoted to the development of efficient search operators. Motivated by the different manner in which various search operators behave, we propose a novel framework based on the proximity characteristics among the individual solutions as they evolve. Our framework incorporates information from neighboring individuals in an attempt to efficiently guide the evolution of the population toward the global optimum, without sacrificing the search capabilities of the algorithm. More specifically, the random selection of parents during mutation is modified by assigning to each individual a probability of selection that is inversely proportional to its distance from the mutated individual. The proposed framework can be applied to any mutation strategy with minimal changes. In this paper, we incorporate this framework in the original differential evolution algorithm, as well as other recently proposed differential evolution variants. Through an extensive experimental study, we show that the proposed framework results in enhanced performance for the majority of the benchmark problems studied.
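The proximity-based parent selection described above amounts to replacing uniform random parent selection with a distance-weighted one. A minimal sketch, with illustrative names:

```python
import numpy as np

def proximity_select(pop, i, n_parents, rng):
    """Select n_parents distinct parents for target i, biased toward nearby individuals."""
    dist = np.linalg.norm(pop - pop[i], axis=1)
    dist[i] = np.inf                   # the target never selects itself
    weights = 1.0 / (dist + 1e-12)     # inverse-distance affinity
    weights[i] = 0.0
    probs = weights / weights.sum()
    return rng.choice(len(pop), size=n_parents, replace=False, p=probs)

# Drop-in use inside a DE/rand/1 mutation step:
#   r1, r2, r3 = proximity_select(pop, i, 3, rng)
#   mutant = pop[r1] + F * (pop[r2] - pop[r3])
```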

303 citations


Journal ArticleDOI
TL;DR: In this paper, two diversity management mechanisms are introduced and it is found that the inclusion of one of the mechanisms improves the performance of a well-established MOEA in many-objective optimization problems, in terms of both convergence and diversity.
Abstract: In evolutionary multiobjective optimization, the task of the optimizer is to obtain an accurate and useful approximation of the true Pareto-optimal front. Proximity to the front and diversity of solutions within the approximation set are important requirements. Most established multiobjective evolutionary algorithms (MOEAs) have mechanisms that address these requirements. However, in many-objective optimization, where the number of objectives is greater than 2 or 3, it has been found that these two requirements can conflict with one another, introducing problems such as dominance resistance and speciation. In this paper, two diversity management mechanisms are introduced to investigate their impact on overall solution convergence. They are introduced separately, and in combination, and tested on a set of test functions with an increasing number of objectives (6-20). It is found that the inclusion of one of the mechanisms improves the performance of a well-established MOEA in many-objective optimization problems, in terms of both convergence and diversity. The relevance of this for many-objective MOEAs is discussed.

240 citations


Journal ArticleDOI
TL;DR: A novel algorithm, Pareto corner search evolutionary algorithm (PCSEA), is introduced in this paper, which searches for the corners of the Pareto front instead of searching for the complete Pareto front to identify the relevant objectives.
Abstract: Many-objective optimization refers to optimization problems containing a large number of objectives, typically more than four. Non-dominance is an inadequate strategy for convergence to the Pareto front for such problems, as almost all solutions in the population become non-dominated, resulting in loss of convergence pressure. However, for some problems, it may be possible to generate the Pareto front using only a few of the objectives, rendering the rest of the objectives redundant. Such problems may be reducible to a manageable number of relevant objectives, which can be optimized using conventional multiobjective evolutionary algorithms (MOEAs). For dimensionality reduction, most proposals in the literature rely on analysis of a representative set of solutions obtained by running a conventional MOEA for a large number of generations, which is computationally overbearing. A novel algorithm, Pareto corner search evolutionary algorithm (PCSEA), is introduced in this paper, which searches for the corners of the Pareto front instead of searching for the complete Pareto front. The solutions obtained using PCSEA are then used for dimensionality reduction to identify the relevant objectives. The potential of the proposed approach is demonstrated by studying its performance on a set of benchmark test problems and two engineering examples. While the preliminary results obtained using PCSEA are promising, there are a number of areas that need further investigation. This paper provides a number of useful insights into dimensionality reduction and, in particular, highlights some of the roadblocks that need to be cleared for future development of algorithms attempting to use few selected solutions for identifying relevant objectives.
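The notion of a Pareto-front "corner" can be illustrated with a simple ranking: solutions that are extreme with respect to a single objective, or with respect to the sum of all objectives but one, approximate the corners. The helper below is a simplified stand-in for PCSEA's corner-oriented selection, not the paper's exact procedure.

```python
import numpy as np

def corner_candidates(objs, per_corner=5):
    """objs: (n_points, n_objectives) matrix of objective values (minimization).

    Returns indices of points nearest each 'corner': the best per_corner points
    for each individual objective f_k, and for each sum of the remaining
    objectives (all-but-f_k)."""
    n, m = objs.shape
    picked = set()
    for k in range(m):
        picked.update(np.argsort(objs[:, k])[:per_corner])    # extreme in f_k alone
        rest = objs.sum(axis=1) - objs[:, k]
        picked.update(np.argsort(rest)[:per_corner])          # extreme in all-but-f_k
    return sorted(picked)
```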

228 citations


Journal ArticleDOI
TL;DR: The proposed compact differential evolution algorithm cDE outperforms other modern compact algorithms and displays a competitive performance with respect to state-of-the-art population-based algorithms employing a DE logic.
Abstract: This paper proposes the compact differential evolution (cDE) algorithm. cDE, like other compact evolutionary algorithms, does not process a population of solutions but a statistical description of it, which evolves in a manner similar to that of other evolutionary algorithms. In addition, cDE employs the mutation and crossover typical of differential evolution (DE), thus reproducing its search logic. Unlike other compact evolutionary algorithms, in cDE, the survivor selection scheme of DE can be straightforwardly encoded. One important feature of the proposed cDE algorithm is the capability of efficiently performing an optimization process despite a limited memory requirement. This makes the cDE algorithm suitable for hardware contexts characterized by small computational power, such as micro-controllers and commercial robots. In addition, by its nature cDE uses an implicit randomization of the offspring generation which corrects and improves the DE search logic. An extensive numerical setup has been implemented in order to prove the viability of cDE and test its performance with respect to other modern compact evolutionary algorithms and state-of-the-art population-based DE algorithms. Test results show that cDE regularly outperforms its corresponding population-based DE variant. Experiments have been repeated for four different mutation schemes. In addition, cDE outperforms other modern compact algorithms and displays a competitive performance with respect to state-of-the-art population-based algorithms employing a DE logic. Finally, cDE is applied to a challenging experimental case study regarding the on-line training of a nonlinear neural-network-based controller for a precise positioning system subject to changes of payload. The main peculiarity of this control application is that the control software is not implemented on a computer connected to the control system but directly on the micro-controller. Both numerical results on the test functions and experimental results on the real-world problem are very promising and suggest that cDE and future developments can be an efficient option for optimization in hardware environments characterized by limited memory.
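A compressed sketch of the compact DE loop follows: the population is replaced by a per-variable probabilistic model (a mean and a spread), virtual parents are sampled from it, a DE/rand/1/bin trial is built, and the model is nudged toward whichever of trial and elite wins. Truncating the Gaussian to the search bounds and other refinements from the paper are omitted; constants and names are illustrative.

```python
import numpy as np

def compact_de(fobj, dim=10, F=0.5, CR=0.9, virtual_pop=50, evals=5000, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)                 # probability-vector means
    sigma = np.full(dim, 3.0)          # probability-vector spreads
    elite = rng.normal(mu, sigma)
    f_elite = fobj(elite)
    for _ in range(evals):
        # Sample three virtual parents instead of picking them from a stored population.
        xr1, xr2, xr3 = (rng.normal(mu, sigma) for _ in range(3))
        mutant = xr1 + F * (xr2 - xr3)
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True
        trial = np.where(mask, mutant, elite)
        f_trial = fobj(trial)
        winner, loser = (trial, elite) if f_trial <= f_elite else (elite, trial)
        if f_trial <= f_elite:
            elite, f_elite = trial, f_trial
        # Nudge the model toward the winner (compact-GA style update of mean and spread).
        new_mu = mu + (winner - loser) / virtual_pop
        new_sigma2 = sigma ** 2 + mu ** 2 - new_mu ** 2 + (winner ** 2 - loser ** 2) / virtual_pop
        mu, sigma = new_mu, np.sqrt(np.maximum(new_sigma2, 1e-8))
    return elite, f_elite
```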

218 citations


Journal ArticleDOI
TL;DR: A new memetic algorithm (MA) called decomposition-based MA with extended neighborhood search (D-MAENS) is proposed, which combines the advanced features from both the MAENS approach for single-objective CARP and multiobjective evolutionary optimization.
Abstract: The capacitated arc routing problem (CARP) is a challenging combinatorial optimization problem with many real-world applications, e.g., salting route optimization and fleet management. There have been many attempts at solving CARP using heuristic and meta-heuristic approaches, including evolutionary algorithms. However, almost all such attempts formulate CARP as a single-objective problem although it usually has more than one objective, especially considering its real-world applications. This paper studies multiobjective CARP (MO-CARP). A new memetic algorithm (MA) called decomposition-based MA with extended neighborhood search (D-MAENS) is proposed. The new algorithm combines the advanced features from both the MAENS approach for single-objective CARP and multiobjective evolutionary optimization. Our experimental studies have shown that such a combination significantly outperforms an off-the-shelf multiobjective evolutionary algorithm, namely the nondominated sorting genetic algorithm II, and the state-of-the-art multiobjective algorithm for MO-CARP (LMOGA). Our work has also shown that a specifically designed multiobjective algorithm combining its single-objective version and multiobjective features may lead to competitive multiobjective algorithms for multiobjective combinatorial optimization problems.

190 citations


Journal ArticleDOI
TL;DR: This paper presents the first comprehensive study showing that phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases, and suggests a path forward that combines indirect encodings with a separate process of refinement.
Abstract: This paper investigates how an evolutionary algorithm with an indirect encoding exploits the property of phenotypic regularity, an important design principle found in natural organisms and engineered designs. We present the first comprehensive study showing that such phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases. Such an ability to produce regular solutions that can exploit the regularity of problems is an important prerequisite if evolutionary algorithms are to scale to high-dimensional real-world problems, which typically contain many regularities, both known and unrecognized. The indirect encoding in this case study is HyperNEAT, which evolves artificial neural networks (ANNs) in a manner inspired by concepts from biological development. We demonstrate that, in contrast to two direct encoding controls, HyperNEAT produces both regular behaviors and regular ANNs, which enables HyperNEAT to significantly outperform the direct encodings as regularity increases in three problem domains. We also show that the types of regularities HyperNEAT produces can be biased, allowing domain knowledge and preferences to be injected into the search. Finally, we examine the downside of a bias toward regularity. Even when a solution is mainly regular, some irregularity may be needed to perfect its functionality. This insight is illustrated by a new algorithm called HybrID that hybridizes indirect and direct encodings, which matched HyperNEAT's performance on regular problems yet outperformed it on problems with some irregularity. HybrID's ability to improve upon the performance of HyperNEAT raises the question of whether indirect encodings may ultimately excel not as stand-alone algorithms, but by being hybridized with a further process of refinement, wherein the indirect encoding produces patterns that exploit problem regularity and the refining process modifies that pattern to capture irregularities. This paper thus paints a more complete picture of indirect encodings than prior studies because it analyzes the impact of the continuum between irregularity and regularity on the performance of such encodings, and ultimately suggests a path forward that combines indirect encodings with a separate process of refinement.

151 citations


Journal ArticleDOI
TL;DR: A new heterogeneous decentralized DE algorithm combining the two studied operators in the best performing studied population structure has been designed and evaluated and is shown to improve the previously obtained results, and outperform the compared state-of-the-art DEs.
Abstract: Differential evolution (DE) algorithms constitute an efficient type of evolutionary algorithm (EA) for the global optimization domain. Although it is well known that the population structure has a major influence on the behavior of EAs, there are few works studying its effect in DE algorithms. In this paper, we propose and analyze several DE variants using different panmictic and decentralized population schemes. As with other EAs, we demonstrate that the population scheme has a marked influence on the behavior of DE algorithms too. Additionally, a new operator for generating the mutant vector is proposed and compared against a classical one on all the proposed population models. After that, a new heterogeneous decentralized DE algorithm combining the two studied operators in the best performing studied population structure has been designed and evaluated. In total, 13 new DE algorithms are presented and evaluated in this paper. Summarizing our results, all the studied algorithms are highly competitive compared to the state-of-the-art DE algorithms taken from the literature for most considered problems, and the best ones implement a decentralized population. With respect to the population structure, the proposed decentralized versions clearly provide a better performance compared to the panmictic ones. The new mutation operator demonstrates a faster convergence on most of the studied problems versus a classical operator taken from the DE literature. Finally, the new heterogeneous decentralized DE is shown to improve the previously obtained results, and outperform the compared state-of-the-art DEs.
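The difference between the panmictic and decentralized population schemes studied here can be reduced to where a mutation's parents may come from. A small sketch, with a ring (cellular) neighborhood as the decentralized example and illustrative names:

```python
import numpy as np

def panmictic_parents(pop_size, i, k, rng):
    # Panmictic scheme: parents may come from anywhere in the population.
    return rng.choice([j for j in range(pop_size) if j != i], k, replace=False)

def ring_parents(pop_size, i, k, rng, radius=2):
    # Decentralized (ring) scheme: parents come from a local neighborhood of the target.
    neigh = [(i + off) % pop_size for off in range(-radius, radius + 1) if off != 0]
    return rng.choice(neigh, k, replace=False)

# Either helper can feed the usual DE/rand/1 mutation:
#   r1, r2, r3 = ring_parents(len(pop), i, 3, rng)
#   mutant = pop[r1] + F * (pop[r2] - pop[r3])
```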

Journal ArticleDOI
TL;DR: The scalability in the number of objectives observed in the literature is addressed and the challenges for the treatment of many-objective problems for evolution strategies are extracted and used to explain recent advances in this field.
Abstract: In this paper, we study the influence of the number of objectives of a continuous multiobjective optimization problem on its hardness for evolution strategies which is of particular interest for many-objective optimization problems. To be more precise, we measure the hardness in terms of the evolution (or convergence) of the population toward the set of interest, the Pareto set. Previous related studies consider mainly the number of nondominated individuals within a population which greatly improved the understanding of the problem and has led to possible remedies. However, in certain cases this ansatz is not sophisticated enough to understand all phenomena, and can even be misleading. In this paper, we suggest alternatively to consider the probability to improve the situation of the population which can, to a certain extent, be measured by the sizes of the descent cones. As an example, we make some qualitative considerations on a general class of uni-modal test problems and conjecture that these problems get harder by adding an objective, but that this difference is practically not significant, and we support this by some empirical studies. Further, we address the scalability in the number of objectives observed in the literature. That is, we try to extract the challenges for the treatment of many-objective problems for evolution strategies based on our observations and use them to explain recent advances in this field.

Journal ArticleDOI
TL;DR: This paper proposes a new approach that applies GP to improve existing software by optimizing its non-functional properties such as execution time, memory usage, or power consumption and discusses how best to combine and extend the existing evolutionary methods of GP, multiobjective optimization, and coevolution in order to improved existing software.
Abstract: Most applications of genetic programming (GP) involve the creation of an entirely new function, program or expression to solve a specific problem. In this paper, we propose a new approach that applies GP to improve existing software by optimizing its non-functional properties such as execution time, memory usage, or power consumption. In general, satisfying non-functional requirements is a difficult task and often achieved in part by optimizing compilers. However, modern compilers are in general not always able to produce semantically equivalent alternatives that optimize non-functional properties, even if such alternatives are known to exist: this is usually due to the limited local nature of such optimizations. In this paper, we discuss how best to combine and extend the existing evolutionary methods of GP, multiobjective optimization, and coevolution in order to improve existing software. Given as input the implementation of a function, we attempt to evolve a semantically equivalent version, in this case optimized to reduce execution time subject to a given probability distribution of inputs. We demonstrate that our framework is able to produce non-obvious optimizations that compilers are not yet able to generate on eight example functions. We employ a coevolved population of test cases to encourage the preservation of the function's semantics. We exploit the original program both through seeding of the population in order to focus the search, and as an oracle for testing purposes. As well as discussing the issues that arise when attempting to improve software, we employ rigorous experimental method to provide interesting and practical insights to suggest how to address these issues.

Journal ArticleDOI
TL;DR: This analysis constitutes so far the most realistic attempt to better understand and approach the real PSO dynamics from a stochastic point of view.
Abstract: Particle swarm optimization (PSO) can be interpreted physically as a particular discretization of a stochastic damped mass-spring system. Knowledge of this analogy has been crucial to derive the PSO continuous model and to introduce different PSO family members including the generalized PSO (GPSO) algorithm, which is the generalization of PSO for any time discretization step. In this paper, we present the stochastic analysis of the linear continuous and generalized PSO models for the case of a stochastic center of attraction. Analysis of the GPSO second order trajectories is performed and clarifies the roles of the PSO parameters and that of the cost function through the algorithm execution: while the PSO parameters mainly control the eigenvalues of the dynamical systems involved, the mean trajectory of the center of attraction and its covariance functions with the trajectories and their derivatives (or the trajectories in the near past) act as forcing terms to update first and second order trajectories. The similarity between the oscillation center dynamics observed for different kinds of benchmark functions might explain the PSO success for a broad range of optimization problems. Finally, a comparison between real simulations and the linear continuous PSO and GPSO models is shown. As expected, the GPSO tends to the continuous PSO when time step approaches zero. Both models account fairly well for the dynamics (first and second order moments) observed in real runs. This analysis constitutes so far the most realistic attempt to better understand and approach the real PSO dynamics from a stochastic point of view.

Journal ArticleDOI
TL;DR: In this paper, the authors define a discrete dynamical system that governs the evolution of a population of agents and derive a variant of differential evolution (DE) from the system, which has fixed points toward which it converges with probability one for an infinite number of generations.
Abstract: In this paper, we define a discrete dynamical system that governs the evolution of a population of agents. From the dynamical system, a variant of differential evolution (DE) is derived. It is then demonstrated that, under some assumptions on the differential mutation strategy and on the local structure of the objective function, the proposed dynamical system has fixed points toward which it converges with probability one for an infinite number of generations. This property is used to derive an algorithm that performs better than standard DE on some space trajectory optimization problems. The novel algorithm is then extended with a guided restart procedure that further increases the performance, reducing the probability of stagnation in deceptive local minima.

Journal ArticleDOI
TL;DR: Analysis of the adaptive operators illustrates that the key benefit of ACROMUSE is the synergy of the operators working together to achieve an effective balance between exploration and exploitation.
Abstract: This paper presents ACROMUSE, a novel genetic algorithm (GA) which adapts crossover, mutation, and selection parameters. ACROMUSE's objective is to create and maintain a diverse population of highly-fit (healthy) individuals, capable of adapting quickly to fitness landscape change and well-suited to the efficient optimization of multimodal fitness landscapes. A new methodology is introduced for determining standard population diversity (SPD) and an original measure of healthy population diversity (HPD) is proposed. The SPD measure is employed to adapt crossover and mutation, while selection pressure is controlled by adapting tournament size according to HPD. In addition to selection pressure control, ACROMUSE tournament selection selects individuals according to healthy diversity contribution rather than fitness. This proposed selection mechanism simultaneously promotes diversity and fitness within the population. The performance of ACROMUSE is evaluated using various multimodal benchmark functions. Statistically significant results are presented comparing ACROMUSE's fitness and diversity performance to that of several other GAs. By maintaining a diverse population of healthy individuals, ACROMUSE responds to fitness landscape change by restoring better fitness scores faster than other GAs. Analysis of the adaptive operators illustrates that the key benefit of ACROMUSE is the synergy of the operators working together to achieve an effective balance between exploration and exploitation.
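The adaptation loop can be sketched as follows, with the strong caveat that the diversity measures and scalings below are assumptions made for illustration: SPD is taken here as the average gene-wise spread and HPD as a fitness-weighted spread, whereas the paper defines its own SPD and HPD measures and mappings.

```python
import numpy as np

def spd(pop):
    """Assumed stand-in for standard population diversity: mean gene-wise std."""
    return float(np.mean(np.std(pop, axis=0)))

def hpd(pop, fitness):
    """Assumed stand-in for healthy population diversity: fitness-weighted spread."""
    w = fitness - fitness.min()
    w = w / w.sum() if w.sum() > 0 else np.full(len(pop), 1.0 / len(pop))
    centroid = np.average(pop, axis=0, weights=w)
    return float(np.average(np.linalg.norm(pop - centroid, axis=1), weights=w))

def adapt_parameters(pop, fitness, spd_max, hpd_max,
                     pc_range=(0.6, 0.95), pm_range=(0.001, 0.1), ts_range=(2, 7)):
    """Low diversity -> more mutation, less crossover, weaker selection pressure."""
    s = min(spd(pop) / spd_max, 1.0)
    h = min(hpd(pop, fitness) / hpd_max, 1.0)
    pc = pc_range[0] + s * (pc_range[1] - pc_range[0])      # crossover rate
    pm = pm_range[1] - s * (pm_range[1] - pm_range[0])      # mutation rate
    tournament_size = int(round(ts_range[0] + h * (ts_range[1] - ts_range[0])))
    return pc, pm, tournament_size
```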

Journal ArticleDOI
TL;DR: The experimental results suggest that the cooperation of CLPSO and AdpISPO in the framework of memetic algorithm is capable of searching the ARV codebook space efficiently.
Abstract: With the rapid development of high-throughput DNA sequencing technologies, the amount of DNA sequence data is accumulating exponentially. The huge influx of data creates new challenges for storage and transmission. This paper proposes a novel adaptive particle swarm optimization-based memetic algorithm (POMA) for DNA sequence compression. POMA is a synergy of comprehensive learning particle swarm optimization (CLPSO) and an adaptive intelligent single particle optimizer (AdpISPO)-based local search. It takes advantage of both CLPSO and AdpISPO to optimize the design of approximate repeat vector (ARV) codebook for DNA sequence compression. ARV is first introduced in this paper to represent the repeated fragments across multiple sequences in direct, mirror, pairing, and inverted patterns. In POMA, candidate ARV codebooks are encoded as particles and the optimal solution, which covers the most approximate repeated fragments with the fewest base variations, is identified through the exploration and exploitation of POMA. In each iteration of POMA, the leader particles in the swarm are selected based on weighted fitness values and each leader particle is fine-tuned with an AdpISPO-based local search, so that the convergence of the search in local region is accelerated. A detailed comparison study between POMA and the counterpart algorithms is performed on 29 (23 basic and 6 composite) benchmark functions and 11 real DNA sequences. POMA is observed to obtain better or competitive performance with a limited number of function evaluations. POMA also attains lower bits-per-base than other state-of-the-art DNA-specific algorithms on DNA sequence data. The experimental results suggest that the cooperation of CLPSO and AdpISPO in the framework of memetic algorithm is capable of searching the ARV codebook space efficiently.

Journal ArticleDOI
TL;DR: It is argued that the proposed model offers a new perspective into the problem difficulty of combinatorial optimization problems and may inspire the design of more effective search heuristics.
Abstract: In previous work, we have introduced a network based model that abstracts many details of the underlying landscape and compresses the landscape information into a weighted, oriented graph which we call the local optima network. The vertices of this graph are the local optima of the given fitness landscape, while the arcs are transition probabilities between local optima basins. Here, we extend this formalism to neutral fitness landscapes, which are common in difficult combinatorial search spaces. The study is based on two neutral variants of the well-known NK family of landscapes (where N stands for the chromosome length, and K for the number of gene epistatic interactions within the chromosome). By using these two NK variants, probabilistic (NKp), and quantified NK (NKq), in which the amount of neutrality can be tuned by a parameter, we show that our new definitions of the optima networks and the associated basins are consistent with the previous definitions for the non-neutral case. Moreover, our empirical study and statistical analysis show that the features of neutral landscapes interpolate smoothly between landscapes with maximum neutrality and non-neutral ones. We found some unknown structural differences between the two studied families of neutral landscapes. But overall, the network features studied confirmed that neutrality, in landscapes with percolating neutral networks, may enhance heuristic search. Our current methodology requires the exhaustive enumeration of the underlying search space. Therefore, sampling techniques should be developed before this analysis can have practical implications. We argue, however, that the proposed model offers a new perspective into the problem difficulty of combinatorial optimization problems and may inspire the design of more effective search heuristics.

Journal ArticleDOI
TL;DR: Fertile Darwinian Bytecode Harvester (FINCH), a methodology for evolving Java bytecode, enabling the evolution of extant, unrestricted Java programs, or programs in other languages that compile to Java bytecode, is described.
Abstract: We describe Fertile Darwinian Bytecode Harvester (FINCH), a methodology for evolving Java bytecode, enabling the evolution of extant, unrestricted Java programs, or programs in other languages that compile to Java bytecode. Our approach is based upon the notion of compatible crossover, which produces correct programs by performing operand stack-based, local variables-based, and control flow-based compatibility checks on source and destination bytecode sections. This is in contrast to existing work that uses restricted subsets of the Java bytecode instruction set as a representation language for individuals in genetic programming. We demonstrate FINCH's unqualified success at solving a host of problems, including simple and complex regression, trail navigation, image classification, array sum, and tic-tac-toe. FINCH exploits the richness of the Java virtual machine architecture and type system, ultimately evolving human-readable solutions in the form of Java programs. The ability to evolve Java programs will hopefully lead to a valuable new tool in the software engineer's toolkit.

Journal ArticleDOI
TL;DR: A highly effective multilevel memetic algorithm, which integrates a new multiparent crossover operator and a powerful perturbation-based tabu search algorithm that performs far better than any of the existing graph partitioning algorithms in terms of solution quality.
Abstract: Graph partitioning is one of the most studied NP-complete problems. Given a graph G=(V, E), the task is to partition the vertex set V into k disjoint subsets of about the same size, such that the number of edges with endpoints in different subsets is minimized. In this paper, we present a highly effective multilevel memetic algorithm, which integrates a new multiparent crossover operator and a powerful perturbation-based tabu search algorithm. The proposed crossover operator tends to preserve the backbone with respect to a certain number of parent individuals, i.e., the grouping of vertices which is common to all parent individuals. Extensive experimental studies on numerous benchmark instances from the graph partitioning archive show that the proposed approach, within a time limit ranging from several minutes to several hours, performs far better than any of the existing graph partitioning algorithms in terms of solution quality.
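The backbone-preserving intent of the multiparent crossover can be sketched simply: vertex assignments on which all parents agree are inherited, and the remaining vertices are reassigned so the k parts stay balanced. This is a simplified illustration, not the paper's exact operator.

```python
import random
from collections import Counter

def backbone_crossover(parents, k, rng=random):
    """parents: list of partitions, each a list mapping vertex -> part in [0, k)."""
    n = len(parents[0])
    child = [None] * n
    # Preserve the backbone: vertex assignments common to every parent.
    for v in range(n):
        parts = {p[v] for p in parents}
        if len(parts) == 1:
            child[v] = parts.pop()
    # Assign the remaining vertices to the currently lightest parts to keep
    # the k subsets of about the same size.
    load = Counter(c for c in child if c is not None)
    free = [v for v in range(n) if child[v] is None]
    rng.shuffle(free)
    for v in free:
        target = min(range(k), key=lambda part: load[part])
        child[v] = target
        load[target] += 1
    return child
```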

Journal ArticleDOI
TL;DR: Experimental results show that the proposed hybrid operators perform better than pure gradient-based operators in either attaining a broad distribution or maintaining the diversity of the obtained non-dominated solutions.
Abstract: The production planning optimization for mineral processing is important for non-renewable raw mineral resource utilization. This paper presents a nonlinear multiobjective programming model for mineral processing production planning (MPPP) for optimizing five production indices, including the iron concentrate output, the concentrate grade, the concentration ratio, the metal recovery, and the production cost. A gradient-based hybrid operator is proposed in two evolutionary algorithms named the gradient-based NSGA-II (G-NSGA-II) and the gradient-based SPEA2 (G-SPEA2) for MPPP optimization. The gradient-based operator of the proposed hybrid operator is normalized as a strictly convex cone combination of the negative gradient direction of each objective, and is used to move each selected point along some descent direction of the objective functions toward the Pareto front, so as to reduce the number of invalid trials of crossover and mutation. Two theorems are established to reveal a descent direction for the improvement of all objective functions. Experiments on standard test problems, namely ZDT 1-3, CONSTR, SRN, and TNK, have demonstrated that the proposed algorithms can improve the chance of minimizing all objectives compared to pure evolutionary algorithms in solving multiobjective optimization problems with differentiable objective functions under a short running time limitation. Computational experiments on an MPPP application case have indicated that the proposed algorithms can achieve better production indices than those of NSGA-II, T-NSGA-FD, T-NSGA-SP, and SPEA2 in the case of a small number of generations. These experimental results also show that the proposed hybrid operators perform better than pure gradient-based operators in either attaining a broad distribution or maintaining the diversity of the obtained non-dominated solutions.
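The gradient-based move can be sketched as pushing a selected point a short step along a normalized convex-cone combination of the negative objective gradients. Equal combination weights and forward-difference gradients are used below purely for illustration; the paper's theorems characterize weights that yield a descent direction for all objectives.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # Forward-difference gradient estimate of a scalar objective f at x.
    g = np.zeros_like(x)
    fx = f(x)
    for j in range(x.size):
        xh = x.copy()
        xh[j] += h
        g[j] = (f(xh) - fx) / h
    return g

def gradient_move(objectives, x, step=0.05, weights=None):
    """Move x along a normalized convex combination of negative objective gradients."""
    x = np.asarray(x, dtype=float)
    m = len(objectives)
    weights = np.full(m, 1.0 / m) if weights is None else np.asarray(weights)
    direction = -sum(w * numerical_gradient(f, x) for w, f in zip(weights, objectives))
    norm = np.linalg.norm(direction)
    return x if norm == 0 else x + step * direction / norm
```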

Journal ArticleDOI
TL;DR: The improvement obtained from the diversity induced by differences between the individuals sent and received and the resident population in an island model is investigated by comparing different migration policies, including the proposed multikulti methods, which outperform the usual policy of sending the best or a random individual.
Abstract: The natural mate-selection behavior of preferring individuals which are somewhat (but not too much) different has been shown to increase the resistance to infection of the resulting offspring, and thus their fitness. Inspired by these results, we have investigated the improvement obtained from the diversity induced by differences between the individuals sent and received and the resident population in an island model, by comparing different migration policies, including our proposed multikulti methods, which choose the individuals that are going to be sent to other nodes based on the principle of multiculturality: the individual sent should be different enough from the target population, which is represented through a proxy string (computed in several possible ways) in the emitting population. We have checked a set of policies following these principles on two discrete optimization problems of diverse difficulty for different sizes and numbers of nodes, and found that, on average and in the median, multikulti policies outperform the usual policy of sending the best or a random individual; however, the size of this advantage changes with the number of nodes involved and the difficulty of the problem, tending to be greater as the number of nodes increases. The success of this kind of policy is explained via the measurement of entropy as a representation of population diversity for the policies tested.
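For a binary representation, the multikulti selection rule sketched below summarizes the target island by a proxy string (here, the bit-wise consensus of its population, one of several possible proxies mentioned above) and sends the individual of the emitting island that differs most from it. Names are illustrative.

```python
import numpy as np

def consensus_proxy(target_pop):
    """Bit-wise majority vote over the target island's population (arrays of 0/1)."""
    return (np.mean(target_pop, axis=0) >= 0.5).astype(int)

def multikulti_emigrant(emitting_pop, target_pop):
    proxy = consensus_proxy(target_pop)
    hamming = np.array([np.sum(ind != proxy) for ind in emitting_pop])
    return emitting_pop[int(np.argmax(hamming))]   # the most "foreign" individual
```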

Journal ArticleDOI
TL;DR: A novel evolutionary algorithm that enhances its performance by utilizing the entire previous search history, namely history driven evolutionary algorithm (HdEA), employs a binary space partitioning tree structure to memorize the positions and the fitness values of the evaluated solutions.
Abstract: In this paper, we report a novel evolutionary algorithm that enhances its performance by utilizing the entire previous search history. The proposed algorithm, namely the history driven evolutionary algorithm (HdEA), employs a binary space partitioning tree structure to memorize the positions and the fitness values of the evaluated solutions. Benefiting from the space partitioning scheme, a fast fitness function approximation using the archive is obtained. The approximation is used to improve the mutation strategy in HdEA. The resultant mutation operator is parameter-less, anisotropic, and adaptive. Moreover, the mutation operator naturally avoids the generation of out-of-bound solutions. The performance of HdEA is tested on 34 benchmark functions with dimensions ranging from 2 to 40. We also provide a performance comparison of HdEA with eight benchmark evolutionary algorithms, including a real-coded genetic algorithm, differential evolution, two improved differential evolution variants, the covariance matrix adaptation evolution strategy, two improved particle swarm optimization algorithms, and an estimation of distribution algorithm. The experimental results show that HdEA outperforms the other algorithms for multimodal function optimization.
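The history archive idea can be sketched with a small binary space partitioning tree: every evaluated solution is stored, and the fitness of an unseen point is approximated from the stored points in the leaf region it falls into. The median split on a cycling dimension and the leaf-mean surrogate below are simplifications chosen for illustration, not HdEA's exact scheme.

```python
import numpy as np

class BSPNode:
    def __init__(self, points, fitnesses, depth=0, leaf_size=8):
        self.split_dim = self.split_val = None
        self.left = self.right = None
        self.points, self.fitnesses = points, fitnesses
        if len(points) > leaf_size:
            self.split_dim = depth % points.shape[1]
            self.split_val = float(np.median(points[:, self.split_dim]))
            mask = points[:, self.split_dim] <= self.split_val
            if mask.any() and (~mask).any():
                self.left = BSPNode(points[mask], fitnesses[mask], depth + 1, leaf_size)
                self.right = BSPNode(points[~mask], fitnesses[~mask], depth + 1, leaf_size)

    def leaf_for(self, x):
        if self.left is None:
            return self
        child = self.left if x[self.split_dim] <= self.split_val else self.right
        return child.leaf_for(x)

class HistoryArchive:
    """Stores all evaluated (solution, fitness) pairs and answers approximate queries."""
    def __init__(self, leaf_size=8, rebuild_every=256):
        self.X, self.F = [], []
        self.leaf_size, self.rebuild_every, self.tree = leaf_size, rebuild_every, None

    def add(self, x, f):
        self.X.append(np.asarray(x, dtype=float))
        self.F.append(float(f))
        if len(self.X) % self.rebuild_every == 0:    # periodic rebuild, for simplicity
            self.tree = BSPNode(np.array(self.X), np.array(self.F), leaf_size=self.leaf_size)

    def approx_fitness(self, x):
        if self.tree is None:
            return None
        leaf = self.tree.leaf_for(np.asarray(x, dtype=float))
        return float(np.mean(leaf.fitnesses))        # leaf mean as a cheap surrogate
```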

Journal ArticleDOI
TL;DR: MOJITO is a system that performs structural synthesis of analog circuits, returning designs that are trustworthy by construction, and generalizes to other problem domains which have accumulated structural domain knowledge, such as robotic structures, car assemblies, and modeling biological systems.
Abstract: This paper presents MOJITO, a system that performs structural synthesis of analog circuits, returning designs that are trustworthy by construction. The search space is defined by a set of expert-specified, trusted, hierarchically-organized analog building blocks, which are organized as a parameterized context-free grammar. The search algorithm is a multiobjective evolutionary algorithm that uses an age-layered population structure to balance exploration versus exploitation. It is validated with experiments to search across more than 100 000 different one-stage and two-stage opamp topologies, returning human-competitive results. The runtime is orders of magnitude faster than open-ended systems, and unlike the other evolutionary algorithm approaches, the resulting circuits are trustworthy by construction. The approach generalizes to other problem domains which have accumulated structural domain knowledge, such as robotic structures, car assemblies, and modeling biological systems.

Journal ArticleDOI
TL;DR: This paper shows how game strategies can be coupled to multiobjective evolutionary algorithms and robust design techniques to produce a set of high quality solutions.
Abstract: A number of game strategies have been developed in past decades and used in the fields of economics, engineering, computer science, and biology due to their efficiency in solving design optimization problems. In addition, research in multiobjective and multidisciplinary design optimization has focused on developing a robust and efficient optimization method so it can produce a set of high quality solutions with less computational time. In this paper, two optimization techniques are considered; the first optimization method uses multifidelity hierarchical Pareto-optimality. The second optimization method uses the combination of game strategies Nash-equilibrium and Pareto-optimality. This paper shows how game strategies can be coupled to multiobjective evolutionary algorithms and robust design techniques to produce a set of high quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid and non-Hybrid-Game strategies are demonstrated.

Journal ArticleDOI
TL;DR: The proposed method has always reached the correct ranking with fewer samples and, in the case of non-Gaussian PDFs, the proposed methodology has worked well, while the other methods have not even been able to detect some PDF differences.
Abstract: This paper presents a statistics-based comparison methodology for performing evolutionary algorithm comparison under multiple merit criteria. The analysis of each criterion is based on the progressive construction of a ranking of the algorithms under analysis, with the determination of significance levels for each ranking step. The multicriteria analysis is based on the aggregation of the different criteria rankings via a non-dominance analysis which indicates the algorithms which constitute the efficient set. In order to avoid correlation effects, a principal component analysis pre-processing is performed. Bootstrapping techniques allow the evaluation of merit criteria data with arbitrary probability distribution functions. The algorithm ranking in each criterion is built progressively, using either ANOVA or first-order stochastic dominance. The resulting ranking is checked using a permutation test which detects possible inconsistencies in the ranking, leading to the execution of more algorithm runs which refine the ranking confidence. As a by-product, the permutation test also delivers p-values for the ordering between each two algorithms which have adjacent rank positions. A comparison of the proposed method with other methodologies has been performed using reference probability distribution functions (PDFs). The proposed methodology has always reached the correct ranking with fewer samples and, in the case of non-Gaussian PDFs, the proposed methodology has worked well, while the other methods have not even been able to detect some PDF differences. The application of the proposed method is illustrated on benchmark problems.

Journal ArticleDOI
TL;DR: This paper aims to provide a theoretical model that can depict the collaboration between global search and local search in memetic computation on a broad class of objective functions and includes the subthreshold seeker, taken as a representative archetype of memetic algorithms.
Abstract: The synergy between exploration and exploitation has been a prominent issue in optimization. The rise of memetic algorithms, a category of optimization techniques which feature the explicit exploration-exploitation coordination, much accentuates this issue. While memetic algorithms have achieved remarkable success in a wide range of real-world applications, the key to successful exploration-exploitation synergies still remains obscure as conclusions drawn from empirical results or theoretical derivations are usually quite algorithm specific and/or problem dependent. This paper aims to provide a theoretical model that can depict the collaboration between global search and local search in memetic computation on a broad class of objective functions. In the proposed model, the interaction between global search and local search creates a set of local search zones, in which the global optimal points reside, within the search space. Based on such a concept, the quasi-basin class (QBC) which categorizes problems according to the distribution of their local search zones is adopted. The subthreshold seeker, taken as a representative archetype of memetic algorithms, is analyzed on various QBCs to develop a general model for memetic algorithms. As the proposed model not only well describes the expected time for a simple memetic algorithm to find the optimal point on different QBCs but also consists with the observations made in previous studies in the literature, the proposed model may reveal important insights to the design of memetic algorithms in general.

Journal ArticleDOI
TL;DR: Hyperinteractive evolutionary computation (HIEC), a class of IEC in which the user actively chooses when and how each evolutionary operator is applied, is proposed, and its potential as a research tool with which one can record the evolutionary actions taken by human users is demonstrated.
Abstract: We propose hyperinteractive evolutionary computation (HIEC), a class of IEC in which the user actively chooses when and how each evolutionary operator is applied. To evaluate the benefits of HIEC, we conducted three human-subject experiments. The first two experiments showed that HIEC is associated with a more positive user experience and produced higher quality designs. The third experiment demonstrates the potential of HIEC as a research tool with which one can record the evolutionary actions taken by human users. Implications, limitations, and future directions of research are discussed.

Journal ArticleDOI
TL;DR: The underdetermined blind source separation (BSS) problem, based on sparse representation, is discussed in this paper, and an improved PSO version called the cluster guide PSO (CGPSO) is further proposed according to the character of sparse representation.
Abstract: The underdetermined blind source separation (BSS) problem, based on sparse representation, is discussed in this paper; moreover, some difficulties (or realistic assumptions) that were previously left out of consideration are addressed. For instance, the number of sources is unknown, large-scale, or time-variant, and the mixing matrix is ill-conditioned. In the proposed algorithm, in order to detect a time-variant mixing matrix, the short-time Fourier transform is employed to segment the received mixtures. Because the number of sources is unknown, our algorithm uses more estimates than needed to find the mixing vectors by a particle swarm optimizer (PSO); surplus estimates are then removed by two proposed processes. However, the estimation accuracy of PSO affects the correctness of extracting the mixing vectors. Consequently, an improved PSO version called the cluster guide PSO (CGPSO) is further proposed according to the character of sparse representation. In the simulations, several realistic assumptions that were less discussed before are tested. Some representative BSS algorithms and PSO versions are compared with the CGPSO-based algorithm. The advantages of the proposed algorithm are demonstrated by the simulation results.

Journal ArticleDOI
TL;DR: This paper presents a multi-population pattern searching algorithm (MuPPetS), intended for situations where long coded individuals are unavoidable, which uses some of the messy GA ideas such as its coding and operators.
Abstract: One of the main bottlenecks of evolutionary algorithms is the significant drop in effectiveness caused by the increasing number of genes needed to encode the problem solution. In this paper, we present a multi-population pattern searching algorithm (MuPPetS), intended as an answer to situations where long coded individuals are unavoidable. MuPPetS uses some of the messy GA ideas, such as its coding and operators. The presented algorithm uses binary coding; however, the objective is to apply MuPPetS to real-life problems, whatever the coding scheme. The main novelty in the proposed algorithm is the gene pattern idea, based on retrieving and using knowledge of gene groups that contain genes highly dependent on each other. Thanks to gene patterns, the effectiveness of data exchange between population individuals improves, and the algorithm gains new, interesting, and beneficial features such as a kind of "selective attention" effect.