On the hardness of offline multi-objective optimization
References
Multi-Objective Optimization Using Evolutionary Algorithms
Weak Convergence and Empirical Processes: With Applications to Statistics
Nonlinear Multiobjective Optimization
A Probabilistic Theory of Pattern Recognition
ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems
Frequently Asked Questions
Q2. What is the upper bound for a very poor random search?
To strengthen the results, the upper bound is proved for a very poor random search (see the poor distribution used in the random search of Theorem 1), and the lower bound is proved for a very small family of fitness functions (see Theorem 2; the upper bound holds a fortiori for larger families of problems).
Q3. What are the assumptions underlying the family of problems P?
The problems in this paper are a family of problems with smooth Pareto sets. The main assumptions underlying the family of problems P (used in the lower bound, in Theorem 2) are that (i) it includes all possible Lipschitzian Pareto sets with a given bound on the Lipschitz coefficient, and (ii) the number of objectives cannot be reduced (the set of objectives is minimal).
Q4. What is the way to deal with non-comparison-based methods?
This is not an artificial way of dealing with non-comparison-based methods. For example, in the mono-objective case, limits on the convergence rate of comparison-based algorithms derived through entropy theorems ([15]) also hold in practice for gradient-based techniques, since the gradient is computed with finite precision. Like comparison-based EAs, Newton's method is only linear once the dimensionality is large enough for the effects of finite precision to appear; this is an already known fact (see e.g. [21]).
Q5. What is the effect of reducing the set of assumptions?
If the authors reduce the set of assumptions, the packing numbers increase; the lower bound remains essentially the same, and the proximity between the upper and the lower bound is preserved.
Q6. What is the way to calculate the fitness of a random search?
Assume that ∀x, 0 ≤ fitness(x) ≤ 1. Then d(Xₙ, X*) ≤ K_d √(eₙ/q), where eₙ = O((d log(n) − log(δ))/n). This is a very poor random search, with a distribution uniform in the fitness space.
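As an illustration only, here is a minimal Python sketch of such a non-adaptive random search. It samples uniformly in the search space [0, 1]^dim rather than in the fitness space as in the paper's construction, and the objective functions and parameter names are hypothetical:

```python
import random


def dominates(y, x):
    # y Pareto-dominates x (minimization): no worse everywhere, better somewhere.
    return all(a <= b for a, b in zip(y, x)) and any(a < b for a, b in zip(y, x))


def random_search(objectives, dim, n, seed=0):
    """Deliberately 'poor' random search: draw n points uniformly in
    [0, 1]^dim, with no adaptation between iterations, and return the
    non-dominated subset of the evaluated fitness vectors."""
    rng = random.Random(seed)
    fits = [tuple(f(x) for f in objectives)
            for x in ([rng.random() for _ in range(dim)] for _ in range(n))]
    return [v for v in fits if not any(dominates(w, v) for w in fits)]
```

Even this naive sampler attains the upper bound of Theorem 1, which is what makes the proximity to the lower bound informative.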
Q7. What is the definition of the term 'analysis of performance of evolutionary algorithms'?
The analysis of the performance of evolutionary algorithms is typically the study of the computation time required by the algorithm to reach a given precision on all problems of a given family of problems.
Q8. What is the definition of a Pareto-optimal point?
A point in D is said to be Pareto-optimal if it is not Pareto-dominated by any other point in D. Multi-objective optimization (MOO, [2, 14, 5]) is the search for the set of non-dominated points, i.e. {x ∈ D; ∄y ∈ D, y ≻ x}.
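The dominance relation and the non-dominated set defined above translate directly into code. A minimal sketch (minimization convention, naive pairwise comparison over a finite set of points):

```python
from typing import List, Sequence


def dominates(y: Sequence[float], x: Sequence[float]) -> bool:
    """y Pareto-dominates x (minimization): y is no worse than x in every
    objective and strictly better in at least one."""
    return (all(yi <= xi for yi, xi in zip(y, x))
            and any(yi < xi for yi, xi in zip(y, x)))


def non_dominated(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return {x in D ; there is no y in D with y ≻ x}."""
    return [x for x in points if not any(dominates(y, x) for y in points)]
```

The double loop performs O(n²) binary comparisons, which matches assumption (ii) of the paper's setting, where the computational cost is measured by the number of comparisons.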
Q9. What is the main difference between the two types of MOO?
The authors here apply similar techniques to show that comparison-based MOO has strong limitations in terms of convergence rates when applied to continuous problems in which (i) the computational cost is well approximated by the number of comparisons, (ii) only binary comparisons are used, (iii) all objectives are conflicting, and (iv) the number of objectives is high.