
Showing papers in "Evolutionary Computation in 2010"


Journal ArticleDOI
TL;DR: The concept of local search chain is presented as a springboard to design memetic algorithm approaches that can effectively use intense continuous local search methods as local search operators.
Abstract: Memetic algorithms with continuous local search methods have arisen as effective tools to address the difficulty of obtaining reliable solutions of high precision for complex continuous optimisation problems. There exists a group of continuous local search algorithms that stand out as exceptional local search optimisers. However, on some occasions, they may become very expensive, because of the way they exploit local information to guide the search process. In this paper, they are called intensive continuous local search methods. Given the potential of this type of local optimisation methods, it is interesting to build prospective memetic algorithm models with them. This paper presents the concept of local search chain as a springboard to design memetic algorithm approaches that can effectively use intense continuous local search methods as local search operators. Local search chain concerns the idea that, at one stage, the local search operator may continue the operation of a previous invocation, starting from the final configuration (initial solution, strategy parameter values, internal variables, etc.) reached by this one. The proposed memetic algorithm favours the formation of local search chains during the memetic algorithm run with the aim of concentrating local tuning in search regions showing promise. In order to study the performance of the new memetic algorithm model, an instance is implemented with CMA-ES as an intense local search method. The benefits of the proposal in comparison to other kinds of memetic algorithms and evolutionary algorithms proposed in the literature to deal with continuous optimisation problems are experimentally shown. Concretely, the empirical study reveals a clear superiority when tackling high-dimensional problems.
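
The chaining mechanism can be pictured with a small sketch. The following Python fragment is illustrative only; the class and parameter names are ours, and the crude step-size rule stands in for a full intense local search such as CMA-ES. It shows the core idea of storing the final configuration of one local search invocation so that a later invocation can resume from it:

```python
import random

# A minimal sketch of the local-search-chain idea (illustrative only): each call to the
# local search operator may resume from the final state (solution, step size, internal
# variables) left behind by a previous invocation on the same individual.

class LocalSearchChain:
    def __init__(self, solution, step_size=0.5):
        self.solution = list(solution)   # current search point
        self.step_size = step_size       # strategy parameter carried across invocations

    def run(self, fitness, budget=100):
        """Spend a fixed evaluation budget, then freeze the state for a later resume."""
        best = fitness(self.solution)
        for _ in range(budget):
            candidate = [x + random.gauss(0.0, self.step_size) for x in self.solution]
            f = fitness(candidate)
            if f < best:                 # minimisation
                self.solution, best = candidate, f
                self.step_size *= 1.2    # crude success-based adaptation
            else:
                self.step_size *= 0.9
        return self.solution, best       # state (solution + step size) persists in the object

# Usage: chains attached to promising individuals are resumed in later generations.
sphere = lambda x: sum(v * v for v in x)
chain = LocalSearchChain([random.uniform(-5, 5) for _ in range(10)])
chain.run(sphere, budget=200)   # first invocation
chain.run(sphere, budget=200)   # later invocation continues from the stored state
```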

165 citations


Journal ArticleDOI
TL;DR: The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure.
Abstract: Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
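
For readers unfamiliar with the problem class, a generic multi-objective bilevel formulation consistent with the description above looks as follows. The notation is ours, not the paper's:

```latex
% A generic multi-objective bilevel formulation (notation ours, not the paper's):
% the upper-level objective vector F is minimised subject to the lower-level
% variables x_l being Pareto-optimal for the lower-level objective vector f.
\begin{align*}
  \min_{x_u,\, x_l} \quad & F(x_u, x_l) = \bigl(F_1(x_u, x_l), \dots, F_M(x_u, x_l)\bigr) \\
  \text{s.t.} \quad & x_l \ \text{is Pareto-optimal for} \quad
      \min_{y} \ \bigl(f_1(x_u, y), \dots, f_m(x_u, y)\bigr) \ \text{s.t.} \ g(x_u, y) \le 0, \\
  & G(x_u, x_l) \le 0 .
\end{align*}
```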

154 citations


Journal ArticleDOI
TL;DR: It is shown that optimal solutions can be approximated within a logarithmic factor of the size of the ground set, using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.
Abstract: The main aim of randomized search heuristics is to produce good approximations of optimal solutions within a small amount of time. In contrast to numerous experimental results, there are only a few theoretical explorations on this subject. We consider the approximation ability of randomized search heuristics for the class of covering problems and compare single-objective and multi-objective models for such problems. For the VertexCover problem, we point out situations where the multi-objective model leads to a fast construction of optimal solutions while in the single-objective case, no good approximation can be achieved within the expected polynomial time. Examining the more general SetCover problem, we show that optimal solutions can be approximated within a logarithmic factor of the size of the ground set, using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.
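
As a concrete illustration of the multi-objective model for VertexCover discussed above, one common bi-objective formulation minimises the number of uncovered edges together with the number of selected vertices. The sketch below uses that formulation; the function and variable names are ours:

```python
# Hedged sketch of a bi-objective model for VertexCover: a bit string x selects
# vertices, and the two objectives (both minimised) are the number of uncovered
# edges and the number of selected vertices.

def bi_objective_vertex_cover(x, edges):
    """Return (uncovered edges, cover size) for a 0/1 vertex selection x."""
    uncovered = sum(1 for (u, v) in edges if not (x[u] or x[v]))
    size = sum(x)
    return uncovered, size

# Example: path graph 0-1-2-3; selecting vertices 1 and 2 covers every edge.
edges = [(0, 1), (1, 2), (2, 3)]
print(bi_objective_vertex_cover([0, 1, 1, 0], edges))  # -> (0, 2)
```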

126 citations


Journal ArticleDOI
TL;DR: The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems and shows a slower convergence, compared to the other algorithms, but requires less CPU time.
Abstract: This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.

120 citations


Journal ArticleDOI
TL;DR: A new algorithm is presented which determines, for a population of size n with d objectives, a solution with minimal hypervolume contribution in time O(n^{d/2} log n) for d > 2, which improves all previously published algorithms by a factor of n.
Abstract: The hypervolume indicator serves as a sorting criterion in many recent multi-objective evolutionary algorithms (MOEAs). Typical algorithms remove the solution with the smallest loss with respect to the dominated hypervolume from the population. We present a new algorithm which determines, for a population of size n with d objectives, a solution with minimal hypervolume contribution in time O(n^{d/2} log n) for d > 2. This improves all previously published algorithms by a factor of n for all d > 3 and by a factor of √n for d = 3. We also analyze hypervolume indicator based optimization algorithms which remove λ > 1 solutions from a population of size n = μ + λ. We show that there are populations such that the hypervolume contribution of iteratively chosen λ solutions is much larger than the hypervolume contribution of an optimal set of λ solutions. Selecting the optimal set of λ solutions implies calculating (n choose λ) conventional hypervolume contributions, which is considered to be computationally too expensive. We present the first hypervolume algorithm which directly calculates the contribution of every set of λ solutions. This gives an additive term of n^λ in the runtime of the calculation instead of a multiplicative factor of (n choose λ). More precisely, for a population of size n with d objectives, our algorithm can calculate a set of λ solutions with minimal hypervolume contribution in time O(n^{d/2} log n + n^λ) for d > 2. This improves all previously published algorithms by a factor of n^{min{λ, d/2}} for d > 3 and by a factor of n for d = 3.
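
To make the quantity "smallest hypervolume contribution" concrete, the following naive two-dimensional sketch computes it directly by removing each point in turn (minimisation, with an explicit reference point). It is in no way the paper's fast algorithm, only an illustration of what is being computed:

```python
# Naive 2-D illustration of the "smallest hypervolume contribution" (minimisation,
# reference point r). The paper's algorithms are far faster; this only makes the
# quantity concrete.

def hypervolume_2d(points, r):
    """Dominated hypervolume of a 2-D point set w.r.t. reference point r."""
    pts = sorted(points)                 # ascending in the first objective
    hv, prev_y = 0.0, r[1]
    for x, y in pts:
        if y < prev_y:                   # only non-dominated slices add volume
            hv += (r[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def least_contributor(points, r):
    """Index of the point whose removal loses the least hypervolume."""
    total = hypervolume_2d(points, r)
    contribs = [total - hypervolume_2d(points[:i] + points[i + 1:], r)
                for i in range(len(points))]
    return min(range(len(points)), key=contribs.__getitem__)

front = [(1.0, 4.0), (2.0, 2.5), (3.0, 2.0), (4.0, 1.0)]
print(least_contributor(front, r=(5.0, 5.0)))   # -> 2, the point (3.0, 2.0)
```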

95 citations


Journal ArticleDOI
TL;DR: A new concept of an adaptive individual niche radius is applied to niching with the covariance matrix adaptation evolution strategy (CMA-ES), and two approaches are considered that are shown to be robust and to achieve satisfying results.
Abstract: While the motivation and usefulness of niching methods is beyond doubt, the relaxation of assumptions and limitations concerning the hypothetical search landscape is much needed if niching is to be valid in a broader range of applications. Upon the introduction of radii-based niching methods with derandomized evolution strategies (ES), the purpose of this study is to address the so-called niche radius problem. A new concept of an adaptive individual niche radius is applied to niching with the covariance matrix adaptation evolution strategy (CMA-ES). Two approaches are considered. The first approach couples the radius to the step size mechanism, while the second approach employs the Mahalanobis distance metric with the covariance matrix mechanism for the distance calculation, for obtaining niches with more complex geometrical shapes. The proposed approaches are described in detail, and then tested on high-dimensional artificial landscapes at several levels of difficulty. They are shown to be robust and to achieve satisfying results.
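
The second approach above replaces the Euclidean niche distance with the Mahalanobis distance induced by the CMA-ES covariance matrix, so a niche can take an ellipsoidal shape aligned with the adapted search distribution. A minimal sketch follows; the function names and the membership rule are ours:

```python
import numpy as np

# Hedged sketch: distance between individuals under the Mahalanobis metric induced
# by a covariance matrix, so that a niche can be ellipsoidal rather than spherical.

def mahalanobis_distance(x, y, cov):
    """Distance between x and y under covariance matrix cov."""
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def same_niche(x, y, cov, radius):
    """Two individuals share a niche if their Mahalanobis distance is below the radius."""
    return mahalanobis_distance(x, y, cov) < radius

cov = np.array([[4.0, 0.0], [0.0, 0.25]])   # ellipsoid elongated along the first axis
print(same_niche([0.0, 0.0], [1.5, 0.0], cov, radius=1.0))  # True: far in x but "close"
print(same_niche([0.0, 0.0], [0.0, 0.6], cov, radius=1.0))  # False: near in y but "far"
```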

82 citations


Journal ArticleDOI
TL;DR: The results show that GAs using appropriate methods to self-adapt their mutation operator and mutation rate find solutions of comparable or lower cost than algorithms with static operators, even when the latter have been extensively pretuned.
Abstract: The choice of mutation rate is a vital factor in the success of any genetic algorithm (GA), and for permutation representations this is compounded by the availability of several alternative mutation operators. It is now well understood that there is no one “optimal choice”; rather, the situation changes per problem instance and during evolution. This paper examines whether this choice can be left to the processes of evolution via self-adaptation, thus removing this nontrivial task from the GA user and reducing the risk of poor performance arising from (inadvertent) inappropriate decisions. Self-adaptation has been proven successful for mutation step sizes in the continuous domain, and for the probability of applying bitwise mutation to binary encodings; here we examine whether this can translate to the choice and parameterisation of mutation operators for permutation encodings. We examine one method for adapting the choice of operator during runtime, and several different methods for adapting the rate at which the chosen operator is applied. In order to evaluate these algorithms, we have used a range of benchmark TSP problems. Of course this paper is not intended to present a state of the art in TSP solvers; rather, we use this well known problem as typical of many that require a permutation encoding, where our results indicate that self-adaptation can prove beneficial. The results show that GAs using appropriate methods to self-adapt their mutation operator and mutation rate find solutions of comparable or lower cost than algorithms with “static” operators, even when the latter have been extensively pretuned. Although the adaptive GAs tend to need longer to run, we show that this is a price well worth paying as the time spent finding the optimal mutation operator and rate for the nonadaptive versions can be considerable. Finally, we evaluate the sensitivity of the self-adaptive methods to changes in the implementation, and to the choice of other genetic operators and population models. The results show that the methods presented are robust, in the sense that the performance benefits can be obtained in a wide range of host algorithms.
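
A rough sketch of the kind of self-adaptation examined above is given below: each individual carries its own operator choice and application rate, and these strategy parameters are perturbed before being used, so that good settings can spread through selection. The operator set, perturbation rule, and constants are illustrative, not the paper's exact scheme:

```python
import random

# Illustrative sketch of self-adaptive mutation for permutation encodings: the
# individual is a triple (permutation, operator name, rate); the strategy
# parameters are mutated first and then applied.

OPERATORS = ["swap", "insert", "invert"]

def mutate_permutation(perm, op):
    p = perm[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    if op == "swap":
        p[i], p[j] = p[j], p[i]
    elif op == "insert":
        p.insert(j, p.pop(i))
    else:                               # invert: reverse the segment between i and j
        p[i:j + 1] = reversed(p[i:j + 1])
    return p

def self_adaptive_mutation(individual):
    perm, op, rate = individual
    if random.random() < 0.1:           # occasionally switch the operator itself
        op = random.choice(OPERATORS)
    rate = min(1.0, max(0.01, rate * random.lognormvariate(0.0, 0.2)))
    while True:                         # apply the operator at least once,
        perm = mutate_permutation(perm, op)
        if random.random() > rate:      # then repeat with probability `rate`
            break
    return (perm, op, rate)

tour = (list(range(10)), "swap", 0.2)
print(self_adaptive_mutation(tour))
```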

78 citations


Journal ArticleDOI
TL;DR: The potential for hybridizing a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) is demonstrated by showing that the concept of ε-dominance can be integrated into this approach in a suitable way.
Abstract: Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of ε-dominance. Though bounds on the quality of the limit approximation (which are entirely determined by the archiving strategy and the value of ε) have been obtained, the strategies do not guarantee to obtain a gap-free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is “large.” Since such gap-free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we aim in this work to provide such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for hybridizing a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of ε-dominance can be integrated into this approach in a suitable way.
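
Since the whole construction rests on ε-dominance, a one-line reminder of the (additive) relation for minimisation problems may help; the sketch below is generic and not specific to the paper's archivers:

```python
# Hedged sketch of additive epsilon-dominance for minimisation: a point a
# epsilon-dominates b if a is at least as good as b in every objective once b is
# relaxed by epsilon.

def eps_dominates(a, b, eps):
    """True if objective vector a additively epsilon-dominates b (minimisation)."""
    return all(ai <= bi + eps for ai, bi in zip(a, b))

print(eps_dominates((1.0, 2.0), (1.3, 2.2), eps=0.5))  # True
print(eps_dominates((1.0, 2.0), (0.2, 3.0), eps=0.5))  # False
```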

68 citations


Journal ArticleDOI
Liviu Panait
TL;DR: An extended formal model for cooperative coevolutionary algorithms is detailed, and it is demonstrated that, under specific conditions, this theoretical model will converge to the globally optimal solution.
Abstract: Cooperative coevolutionary algorithms have the potential to significantly speed up the search process by dividing the space into parts that can each be conquered separately. However, recent research presented theoretical and empirical arguments that these algorithms tend to converge to suboptimal solutions in the search space, and are thus not fit for optimization tasks. This paper details an extended formal model for cooperative coevolutionary algorithms, and uses it to explore possible reasons these algorithms converge to optimal or suboptimal solutions. We demonstrate that, under specific conditions, this theoretical model will converge to the globally optimal solution. The proofs provide the underlying theoretical foundation for a better application of cooperative coevolutionary algorithms. We demonstrate the practical advantages of applying ideas from this theoretical work to a simple problem domain.

62 citations


Journal ArticleDOI
TL;DR: This work considers the runtime of evolutionary algorithms using biased mutations on illustrative example functions as well as on function classes, and shows on which functions biased mutations lead to a speedup, on which functions biased mutations increase the runtime, and in which settings there is almost no difference in performance.
Abstract: Evolutionary algorithms are general randomized search heuristics and typically perform an unbiased random search that is guided only by the fitness of the search points encountered. However, in applications there is often problem-specific knowledge that suggests some additional bias. The use of appropriately biased variation operators may speed up the search considerably. Problems defined over bit strings of finite length often have the property that good solutions have only very few 1-bits or very few 0-bits. A mutation operator tailored toward such situations is studied under different perspectives and in a rigorous way discussing its assets and drawbacks. We consider the runtime of evolutionary algorithms using biased mutations on illustrative example functions as well as on function classes. A comparison with unbiased operators shows on which functions biased mutations lead to a speedup, on which functions biased mutations increase the runtime, and in which settings there is almost no difference in performance. The main focus is on theoretical runtime analysis yielding asymptotic results. These findings are accompanied by the results of empirical investigations that deliver additional insights.
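
A typical biased (asymmetric) mutation of the kind analysed above flips 1-bits and 0-bits with different probabilities, so that solutions with few 1-bits are sampled more readily. The probabilities in this sketch are illustrative; the paper's operator may be parameterised differently:

```python
import random

# Hedged sketch of an asymmetric ("biased") bitwise mutation: when good solutions
# are known to contain few 1-bits, 1-bits are flipped with a higher per-bit
# probability than 0-bits.

def biased_mutation(bits):
    n = len(bits)
    ones = sum(bits)
    zeros = n - ones
    # flip each 1-bit with prob. 1/(2*#ones) and each 0-bit with prob. 1/(2*#zeros),
    # so roughly one bit of each kind is flipped in expectation
    p_one = 0.5 / ones if ones else 0.0
    p_zero = 0.5 / zeros if zeros else 0.0
    return [b ^ (random.random() < (p_one if b else p_zero)) for b in bits]

x = [0] * 95 + [1] * 5
print(sum(biased_mutation(x)))   # stays close to 5 ones on average
```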

52 citations


Journal ArticleDOI
TL;DR: Two simple bi-objective problems are presented which emphasise when populations are needed in MOEAs; rigorous runtime analysis points out an exponential runtime gap between the population-based algorithm, the simple evolutionary multi-objective optimiser (SEMO), and several single individual-based algorithms on these problems.
Abstract: Multi-objective evolutionary algorithms (MOEAs) have become increasingly popular as multi-objective problem solving techniques. An important open problem is to understand the role of populations in MOEAs. We present two simple bi-objective problems which emphasise when populations are needed. Rigorous runtime analysis points out an exponential runtime gap between the population-based algorithm simple evolutionary multi-objective optimiser (SEMO) and several single individual-based algorithms on these problems. This means that among the algorithms considered, only the population-based MOEA is successful and all other algorithms fail.
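
For reference, a minimal sketch of SEMO, the population-based algorithm referred to above: the population is an archive of mutually non-dominated search points, extended by single-bit mutations of a uniformly chosen member. Details such as the example fitness function are ours:

```python
import random

# Minimal sketch of SEMO: keep an archive of mutually non-dominated bit strings,
# mutate a random member by flipping one bit, and update the archive.

def dominates(a, b):
    """Pareto dominance for maximisation of objective vectors a and b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def semo(f, n, iterations=10_000):
    x = [random.randint(0, 1) for _ in range(n)]
    population = [x]
    for _ in range(iterations):
        parent = random.choice(population)
        child = parent[:]
        child[random.randrange(n)] ^= 1          # flip one uniformly chosen bit
        fc = f(child)
        if not any(dominates(f(p), fc) for p in population):
            population = [p for p in population
                          if f(p) != fc and not dominates(fc, f(p))]
            population.append(child)
    return population

# Example: bi-objective (number of ones, number of zeros), both maximised.
f = lambda x: (sum(x), len(x) - sum(x))
print(len(semo(f, n=8)))   # approaches the 9 trade-off points of this toy problem
```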

Journal ArticleDOI
TL;DR: This paper compares several bloat control methods and also evaluates a recent proposal for limiting the size of individuals: a genetic operator called prune and plant, which is shown to be better in terms of fitness, size reduction, and time consumption than any of the other bloat control techniques under comparison.
Abstract: This paper reports a comparison of several bloat control methods and also evaluates a recent proposal for limiting the size of the individuals: a genetic operator called prune and plant. The aim of this work is to test the adequacy of this method. Since a preliminary study of the method has already shown promising results, we have performed a thorough study in a set of benchmark problems aiming at demonstrating the utility of the new approach. Prune and plant has obtained results that maintain the quality of the final solutions in terms of fitness while achieving a substantial reduction of the mean tree size in all four problem domains considered. In addition, in one of these problem domains, prune and plant has been shown to be better in terms of fitness, size reduction, and time consumption than any of the other bloat control techniques under comparison. The experimental part of the study presents a comparison of performance in terms of phenotypic and genotypic diversity. This comparison study can provide the practitioner with some relevant clues as to which bloat control method is better suited to a particular problem and whether the advantage of a method does or does not derive from its influence on the genetic pool diversity.
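
A hedged sketch of a prune-and-plant style operator on expression trees may make the idea concrete: a randomly chosen subtree is cut from the parent (and replaced by a terminal) while the severed branch is planted as a separate individual, so both resulting trees are smaller than the parent. The tree representation and all details below are illustrative, not the paper's implementation:

```python
import random

# Illustrative prune-and-plant style operator on GP trees encoded as nested lists
# [operator, child, child]: one subtree is pruned from the parent (replaced by a
# terminal) and "planted" as a new, separate individual.

def subtrees(tree, path=()):
    """Yield (path, subtree) for every internal node position."""
    if isinstance(tree, list):
        yield path, tree
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, value):
    """Return a copy of tree with the node at `path` replaced by `value`."""
    if not path:
        return value
    new = list(tree)
    new[path[0]] = replace_at(new[path[0]], path[1:], value)
    return new

def prune_and_plant(tree, terminals=("x", 1.0)):
    candidates = [(p, s) for p, s in subtrees(tree) if p]   # skip the root
    if not candidates:
        return tree, tree
    path, subtree = random.choice(candidates)
    pruned = replace_at(tree, path, random.choice(terminals))  # parent minus the branch
    planted = subtree                                          # branch becomes a new tree
    return pruned, planted

parent = ["+", ["*", "x", "x"], ["-", ["+", "x", 1.0], "x"]]
print(prune_and_plant(parent))
```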

Journal ArticleDOI
TL;DR: The affinity propagation EDA (AffEDA) which learns a marginal product model by clustering a matrix of mutual information learned from the data using a very efficient message-passing algorithm known as affinity propagation is introduced.
Abstract: Estimation of distribution algorithms (EDAs) that use marginal product model factorizations have been widely applied to a broad range of mainly binary optimization problems. In this paper, we introduce the affinity propagation EDA (AffEDA), which learns a marginal product model by clustering a matrix of mutual information learned from the data using a very efficient message-passing algorithm known as affinity propagation. The introduced algorithm is tested on a set of binary and nonbinary decomposable functions and on a hard combinatorial problem class known as the HP protein model. The results show that the algorithm is a very efficient alternative to other EDAs that use marginal product model factorizations, such as the extended compact genetic algorithm (ECGA), and improves the quality of the results achieved by ECGA when the cardinality of the variables is increased.

Journal ArticleDOI
TL;DR: A natural vector-valued fitness function f is presented for the multi-objective shortest path problem, which is a fundamental multi-objective combinatorial optimization problem known to be NP-hard, and lower bounds for the worst-case optimization time are presented.
Abstract: We present a natural vector-valued fitness function f for the multi-objective shortest path problem, which is a fundamental multi-objective combinatorial optimization problem known to be NP-hard. Thereafter, we conduct a rigorous runtime analysis of a simple evolutionary algorithm (EA) optimizing f. Interestingly, this simple general algorithm is a fully polynomial-time randomized approximation scheme (FPRAS) for the problem under consideration, which exemplifies how EAs are able to find good approximate solutions for hard problems. Furthermore, we present lower bounds for the worst-case optimization time.


Journal ArticleDOI
TL;DR: Experimental results indicate that SoD is a better discretization method to work with ECGA and works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
Abstract: An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
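
The split rule at the heart of SoD can be sketched compactly: an interval is split at a random position whenever it holds more search points than a threshold, and the resulting nonempty intervals receive integer codes. The recursion below is an illustration under our own simplifying assumptions (for instance, the threshold is kept fixed rather than decreased at every iteration):

```python
import random
from bisect import bisect_right

# Illustrative split-on-demand style discretization: split an interval at a random
# position whenever it contains more than `gamma` search points, then encode each
# point by the integer index of the interval containing it.

def split_on_demand(points, low, high, gamma):
    """Return a sorted list of interval boundaries discretising [low, high]."""
    inside = [p for p in points if low <= p < high]
    if len(inside) <= gamma or len(set(inside)) < 2:
        return [low, high]
    cut = random.uniform(min(inside), max(inside))        # random split position
    left = split_on_demand(points, low, cut, gamma)
    right = split_on_demand(points, cut, high, gamma)
    return left[:-1] + right                               # merge, dropping the duplicate cut

def encode(points, boundaries):
    """Assign each point the integer code of the interval that contains it."""
    return [min(bisect_right(boundaries, p), len(boundaries) - 1) - 1 for p in points]

pts = [random.gauss(0.0, 1.0) for _ in range(50)]
bounds = split_on_demand(pts, -5.0, 5.0, gamma=10)
print(len(bounds) - 1, encode(pts, bounds)[:10])   # number of intervals and a few codes
```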

Journal ArticleDOI
TL;DR: This study proposes new user interface techniques for interactive EC that allow faster evaluation of large numbers of individuals and the combination of interactive with noninteractive evaluation, and for the first time a set of rigorous usability experiments compares these techniques with existing interactive EC and non-EC interfaces.
Abstract: Perhaps the biggest limitation of interactive EC is the fitness evaluation bottleneck, caused by slow user evaluation and leading to small populations and user fatigue. In this study these problems are addressed through the proposal of new user interface techniques for interactive EC, which allow faster evaluation of large numbers of individuals and the combination of interactive with noninteractive evaluation. For the first time in the interactive EC literature a set of rigorous usability experiments compares these techniques with existing interactive EC and non-EC interfaces, for the application domain of sound synthesis. The results show that a new user interface for interactive EC improves performance, and further experiments lead to refinement of its design. The experimental protocol shows, again for the first time, that formal usability experiments are useful in the interactive EC setting. Statistically significant results are obtained on clearly-defined performance metrics, and the protocol is general enough to be of potential interest to all interactive EC researchers.

Journal ArticleDOI
TL;DR: This paper studies the performance of multi-recombinative evolution strategies using isotropically distributed mutations with cumulative step length adaptation when applied to optimising cigar functions and develops a simplified model of the strategy's behaviour.
Abstract: This paper studies the performance of multi-recombinative evolution strategies using isotropically distributed mutations with cumulative step length adaptation when applied to optimising cigar functions. Cigar functions are convex-quadratic objective functions that are characterised by the presence of only two distinct eigenvalues of their Hessian, the smaller one of which occurs with multiplicity one. A simplified model of the strategy's behaviour is developed. Using it, expressions that approximately describe the stationary state that is attained when the mutation strength is adapted are derived. The performance achieved by cumulative step length adaptation is compared with that obtained when using optimally adapted step lengths.
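
For reference, a common parameterisation of the cigar function with the eigenvalue structure described above is (the paper's exact scaling may differ):

```latex
% A common form of the cigar function (the paper's exact parameterisation may differ):
% the Hessian has eigenvalue 2 with multiplicity one (the "long" axis of the cigar)
% and eigenvalue 2*xi with multiplicity N-1.
\begin{equation*}
  f(\mathbf{x}) \;=\; x_1^2 + \xi \sum_{i=2}^{N} x_i^2 , \qquad \xi \gg 1 .
\end{equation*}
```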

Journal ArticleDOI
TL;DR: The study reported here shows that the EA behavior can be profoundly affected: the EA performance obtained when using the ECEPP PE model is significantly worse than that obtained when using the Amber, OPLS, and CVFF PE models, and the optimal EA control parameter values for the ECEPP model also differ significantly from those associated with the other models.
Abstract: Ab initio protein structure prediction involves determination of the three-dimensional (3D) conformation of proteins on the basis of their amino acid sequence, a potential energy (PE) model that captures the physics of the interatomic interactions, and a method to search for and identify the global minimum in the PE (or free energy) surface such as an evolutionary algorithm (EA). Many PE models have been proposed over the past three decades and more. There is currently no understanding of how the behavior of an EA is affected by the PE model used. The study reported here shows that the EA behavior can be profoundly affected: the EA performance obtained when using the ECEPP PE model is significantly worse than that obtained when using the Amber, OPLS, and CVFF PE models, and the optimal EA control parameter values for the ECEPP model also differ significantly from those associated with the other models.

Journal ArticleDOI
TL;DR: The proposed method can significantly assist ECGA to handle problems comprising structures of disparate fitness contributions and therefore may potentially help EDAs in general to overcome those situations in which the entire problem structure cannot be recognized properly due to the temporal delay of emergence of some promising partial solutions.
Abstract: The probabilistic model building performed by estimation of distribution algorithms (EDAs) enables these methods to use advanced techniques of statistics and machine learning for automatic discovery of problem structures. However, in some situations, it may not be possible to completely and accurately identify the whole problem structure by probabilistic modeling due to certain inherent properties of the given problem. In this work, we illustrate one possible cause of such situations with problems consisting of structures with unequal fitness contributions. Based on the illustrative example, we introduce a notion that the estimated probabilistic models should be inspected to reveal the effective search directions and further propose a general approach which utilizes a reserved set of solutions to examine the built model for likely inaccurate fragments. Furthermore, the proposed approach is implemented on the extended compact genetic algorithm (ECGA) and experiments are performed on several sets of additively separable problems with different scaling setups. The results indicate that the proposed method can significantly assist ECGA to handle problems comprising structures of disparate fitness contributions and therefore may potentially help EDAs in general to overcome those situations in which the entire problem structure cannot be recognized properly due to the temporal delay of emergence of some promising partial solutions.

Journal ArticleDOI
TL;DR: This work formalizes this concept mathematically, showing that the representations generate a group that acts upon the search space, and provides a complete characterization of crossover and mutation operators that have such invariance properties.
Abstract: A genetic algorithm is invariant with respect to a set of representations if it runs the same no matter which of the representations is used. We formalize this concept mathematically, showing that the representations generate a group that acts upon the search space. Invariant genetic operators are those that commute with this group action. We then consider the problem of characterizing crossover and mutation operators that have such invariance properties. In the case where the corresponding group action acts transitively on the search space, we provide a complete characterization, including high-level representation-independent algorithms implementing these operators.
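
The commuting condition can be stated compactly. In our notation, with G the group generated by the representations acting on the search space X, an invariant (equivariant) mutation M and crossover C satisfy, with equality understood in distribution:

```latex
% Equivariance ("commuting") conditions for invariant operators (notation ours):
% G is the group generated by the representations and acts on the search space X.
\begin{align*}
  \text{mutation } M &: \quad M(g \cdot x) = g \cdot M(x)
      && \forall\, g \in G,\; x \in X, \\
  \text{crossover } C &: \quad C(g \cdot x,\, g \cdot y) = g \cdot C(x, y)
      && \forall\, g \in G,\; x, y \in X.
\end{align*}
```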

Journal ArticleDOI
TL;DR: Algorithms are presented that improve sampling from multi-dimensional real-coded spaces by running a population of samples and applying recombination operators in order to exchange useful information and preserve commonalities in highly probable individual states.
Abstract: Markov chain Monte Carlo (MCMC) algorithms are sampling methods for intractable distributions. In this paper, we propose and investigate algorithms that improve the sampling process from multi-dimensional real-coded spaces. We present MCMC algorithms that run a population of samples and apply recombination operators in order to exchange useful information and preserve commonalities in highly probable individual states. We call this class of algorithms Evolutionary MCMCs (EMCMCs). We introduce and analyze various recombination operators which generate new samples by use of linear transformations, for instance, by translation or rotation. These recombination methods discover specific structures in the search space and adapt the population samples to the proposal distribution. We investigate how to integrate recombination in the MCMC framework to sample from a desired distribution. The recombination operators generate individuals with a computational effort that scales linearly in the number of dimensions and the number of parents. We present results from experiments conducted on a mixture of multivariate normal distributions. These results show that the recombinative EMCMCs outperform the standard MCMCs for target distributions that have a nontrivial structural relationship between the dimensions.
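
The flavour of a translation-based recombination move can be illustrated with a short sketch. The proposal below follows the generic differential-evolution-MCMC recipe (shift one walker along the difference of two others and accept with the Metropolis rule); it is meant only to illustrate the idea and is not the paper's exact set of operators. The target density and all constants are ours:

```python
import math
import random

# Illustrative population MCMC sweep with a translation-style recombination proposal.

def log_target(x):
    """Unnormalised log-density: a mixture of two unit-variance normals per dimension."""
    return sum(math.log(0.5 * math.exp(-0.5 * (xi - 2.0) ** 2)
                        + 0.5 * math.exp(-0.5 * (xi + 2.0) ** 2)
                        + 1e-300)                      # tiny constant guards log(0) far out
               for xi in x)

def emcmc_step(population, gamma=0.7):
    """One sweep: propose a translation recombination move for every walker."""
    new_pop = []
    n = len(population)
    for i, x in enumerate(population):
        j, k = random.sample([t for t in range(n) if t != i], 2)
        # translation recombination: shift walker i along the difference of two others
        proposal = [xi + gamma * (aj - ak)
                    for xi, aj, ak in zip(x, population[j], population[k])]
        # the proposal is symmetric, so the plain Metropolis acceptance rule applies
        if math.log(1.0 - random.random()) < log_target(proposal) - log_target(x):
            new_pop.append(proposal)
        else:
            new_pop.append(x)
    return new_pop

population = [[random.uniform(-4.0, 4.0) for _ in range(3)] for _ in range(10)]
for _ in range(1000):
    population = emcmc_step(population)
print(population[0])   # should sit near one of the modes at +/-2 in each dimension
```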

Journal ArticleDOI
TL;DR: The paper identifies static and dynamic selection thresholds governing the accumulation of information in a genetic algorithm (GA), models the dynamic behavior of GAs using these thresholds, and demonstrates their effectiveness through simulations and benchmark problems.
Abstract: Mutation applied indiscriminately across a population has, on average, a detrimental effect on the accumulation of solution alleles within the population and is usually beneficial only when targeted at individuals with few solution alleles. Many common selection techniques can delete individuals with more solution alleles than are easily recovered by mutation. The paper identifies static and dynamic selection thresholds governing accumulation of information in a genetic algorithm (GA). When individuals are ranked by fitness, there exists a dynamic threshold defined by the solution density of surviving individuals and a lower static threshold defined by the solution density of the information source used for mutation. Replacing individuals ranked below the static threshold with randomly generated individuals avoids the need for mutation while maintaining diversity in the population with a consequent improvement in population fitness. By replacing individuals ranked between the thresholds with randomly selected individuals from above the dynamic threshold, population fitness improves dramatically. We model the dynamic behavior of GAs using these thresholds and demonstrate their effectiveness by simulation and benchmark problems.

Journal ArticleDOI
TL;DR: This paper presents a method for modeling the behavior of finite population evolutionary algorithms (EAs), and shows that if the population size is greater than 1 and much less than the cardinality of the search space, the resulting exact model requires considerably less memory space for theoretically running the stochastic search process of the original EA than the Nix and Vose-style Markov chain model.
Abstract: As practitioners we are interested in the likelihood of the population containing a copy of the optimum. The dynamic systems approach, however, does not help us to calculate that quantity. Markov chain analysis can be used in principle to calculate the quantity. However, since the associated transition matrices are enormous even for modest problems, it follows that in practice these calculations are usually computationally infeasible. Therefore, some improvements on this situation are desirable. In this paper, we present a method for modeling the behavior of finite population evolutionary algorithms (EAs), and show that if the population size is greater than 1 and much less than the cardinality of the search space, the resulting exact model requires considerably less memory space for theoretically running the stochastic search process of the original EA than the Nix and Vose-style Markov chain model. We also present some approximate models that use still less memory space than the exact model. Furthermore, based on our models, we examine the selection pressure by fitness-proportionate selection, and observe that on average over all population trajectories, there is no such strong bias toward selecting the higher fitness individuals as the fitness landscape suggests.

Journal ArticleDOI
TL;DR: This special issue presents current advances in the theoretical understanding of evolutionary algorithms for multi-objective optimization and includes a faster algorithm for determining the individual with the smallest hypervolume contribution in a given population.
Abstract: Evolutionary algorithms have been widely used to tackle multi-objective optimization problems. Despite many successful applications, there is a lack of theoretical knowledge on how and why this type of algorithm works particularly well for multi-objective problems. This special issue presents current advances in the theoretical understanding of evolutionary algorithms for multi-objective optimization. It comprises four very high-quality papers which have been selected via a rigorous reviewing process. The first paper, “On the Effect of Populations in Evolutionary Multi-Objective Optimisation” by Oliver Giel and Per Kristian Lehre, studies the basic question of whether a population is provably useful to compute the whole Pareto front of a multi-objective optimization problem. They compare restart strategies of several single-objective approaches based on one individual to a natural multi-objective approach using a population. Their analyses point out the benefits of the multi-objective evolutionary algorithm in a rigorous way. Using illustrative example functions, they provide not only proven results but also a good intuition about why population-based approaches can outperform individual-based approaches in multi-objective optimization. The paper “Exploring the Runtime of an Evolutionary Algorithm for the Multi-Objective Shortest Path Problem” by Christian Horoba examines the behavior of evolutionary algorithms for the classical NP-hard multi-objective shortest path problem. The author considers an evolutionary approach based on ε-dominance and proves that this algorithm constitutes a fully polynomial-time randomized approximation scheme (FPRAS) for the problem. The algorithm achieves this by mimicking a problem-specific algorithm for that problem. This provides general insight into why evolutionary algorithms can perform well for problems that allow for efficient problem-specific algorithms. Hypervolume-based algorithms have become very popular in evolutionary multi-objective optimization. In the paper “An Efficient Algorithm for Computing Hypervolume Contributions,” Karl Bringmann and Tobias Friedrich present a faster algorithm for determining the individual with the smallest hypervolume contribution in a given population. Furthermore, they study the problem of removing λ > 1 individuals with the smallest hypervolume contribution from a given population and present an elegant and fast method to carry out this task.