
On the hardness of offline multi-objective optimization

Olivier Teytaud
- 01 Dec 2007 - 
- Vol. 15, Iss: 4, pp 475-491
TLDR
It is shown that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of the random search under certain conditions.
Abstract
It has been empirically established that multiobjective evolutionary algorithms do not scale well with the number of conflicting objectives. This paper shows that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of the random search, unless the number of objectives is very moderate, in a framework in which the assumptions are that the objectives are conflicting and that lower bounding the computational cost by the number of comparisons is a good model. Our conclusions are: (i) the number of conflicting objectives is relevant; (ii) criteria based on comparisons with random search are also relevant for multi-objective optimization; (iii) optimization with more than 3 objectives is very hard. Furthermore, we provide some insight into cross-over operators.


HAL Id: inria-00173239
https://hal.inria.fr/inria-00173239
Submitted on 19 Sep 2007
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
On the hardness of oine multiobjective optimization
Olivier Teytaud
To cite this version:
Olivier Teytaud. On the hardness of offline multiobjective optimization. Evolutionary Computation, Massachusetts Institute of Technology Press (MIT Press), 2007. inria-00173239

On the hardness of offline multi-objective
optimization
Olivier Teytaud
TAO-INRIA, LRI, UMR 8623 (CNRS - Université Paris-Sud),
Bât. 490, Université Paris-Sud, 91405 Orsay Cedex, France
Abstract. It is empirically established that multiobjective evolutionary algorithms do not scale well with the number of conflicting objectives. We here show that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of the random search, unless the number of objectives is very moderate, in a framework in which the stronger assumptions are (i) that the objectives are conflicting and (ii) that lower bounding the computational cost by the number of comparisons is a good model. Our conclusions are (i) the relevance of the number of conflicting objectives, (ii) the relevance of criteria based on comparisons with random search for multi-objective optimization, (iii) the great hardness of optimization with more than 3 objectives, and (iv) some hints about cross-over operators.
1 Introduction
Many evolutionary algorithms are comparison-based, in the sense that the only
information coming from the objective function and used by the algorithm is the
result of binary comparisons between fitness values. [15] has shown that this lim-
itation has non-trivial consequences in continuous mono-objective optimization.
We here apply similar techniques in order to show that comparison-based MOO
has strong limitations in terms of convergence rates, when it is applied to contin-
uous problems in which (i) the computational cost is well approximated by the
number of comparisons, (ii) only binary comparisons are used, (iii) all objectives
are conflicting, and (iv) the number of objectives is high. This is not only a general
negative result, as it emphasizes some tricks for avoiding these limitations, such
as removing non-conflicting objectives as in [3] or using non-binary comparisons
as done in [17] through "informed" cross-over.
Consider fitness = (fitness_1, …, fitness_d), a collection of real-valued objective functions (to be maximized) on the same domain D. A point y ∈ D dominates (or Pareto-dominates) a point x ∈ D if for all i ∈ [[1, d]], fitness_i(y) ≥ fitness_i(x), and for at least one i_0, fitness_{i_0}(y) > fitness_{i_0}(x) (i.e. y is at least as good as x for each objective and y is strictly better than x for at least one objective). This is denoted by y ≻ x. We denote by y ⪰ x the fact that ∀i ∈ [[1, d]], fitness_i(y) ≥ fitness_i(x). We use the same notation for points in the so-called fitness space: fitness(x) ≻ fitness(y) (resp. ⪰) if and only if x ≻ y (resp. x ⪰ y). Also, we say that a set A dominates a set B if ∀b ∈ B, ∃a ∈ A, a ⪰ b; this is denoted by A ⪰ B. A point in D is said to be Pareto-optimal if it is not Pareto-dominated by any other point in D. Multi-objective optimization (MOO, [2, 14, 5]) is the search for the set of non-dominated points, i.e.

{x ∈ D; ∄y ∈ D, y ≻ x}.    (1)

This set is called the Pareto set.
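As a concrete illustration of these definitions, here is a minimal Python sketch of Pareto dominance and of the non-dominated filter of Eq. 1, restricted to a finite sample of D (the function names `dominates` and `pareto_set` are ours, not the paper's):

```python
def dominates(fy, fx):
    """y Pareto-dominates x (maximization): y is at least as good on
    every objective and strictly better on at least one."""
    return all(a >= b for a, b in zip(fy, fx)) and \
           any(a > b for a, b in zip(fy, fx))

def pareto_set(points, fitness):
    """Non-dominated points of a finite sample, i.e. Eq. 1 with D
    replaced by the sample."""
    values = [tuple(fitness(p)) for p in points]
    return [p for p, fp in zip(points, values)
            if not any(dominates(fq, fp) for fq in values)]
```

With two conflicting objectives such as x ↦ (x, 1 − x) on [0, 1], every sampled point is non-dominated; with aligned objectives such as x ↦ (x, x), only the maximum survives.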
Offline MOO is the search for the whole Pareto set, which is studied by the user after the optimization (offline). On the other hand, online MOO is the interactive search for an interesting point in the Pareto set; typically, such programs use a weighted average of the various objectives, and the weights are updated by the user, depending on his preferences, during the run of the MOO. Online MOO is in some sense easier than offline MOO: the user provides some information during the run of the MOO, and this information simplifies the problem by restricting the optimization to a part of the Pareto set chosen by the user. Offline MOO and online MOO are compared in Algorithms 1 and 2.
Algorithm 1 Offline MOO.
Init n = 0.
while stopping criterion not met do
    Modify the population (mutation, selection, crossover, …) (if n > 0) or initialize it (if n = 0).
    for each newly visited point x do
        n ← n + 1
        P_n ← P_n ∪ {x}
    end for
end while
Output P_n (or possibly only the non-dominated points in P_n, or any other subset of P_n).
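Algorithm 1's archiving loop can be sketched in Python as follows; this is a hedged illustration with a pluggable `modify` step standing in for mutation/selection/crossover, and the function names are ours:

```python
import random

def offline_moo(modify, budget):
    """Skeleton of Algorithm 1: every newly visited point x is appended
    to the archive (n <- n + 1, P_n <- P_n U {x}); the archive is
    returned once the evaluation budget is exhausted."""
    population, archive = [], []
    while len(archive) < budget:             # stopping criterion
        population = modify(population)      # initialize if empty, else evolve
        for x in population:                 # newly visited points
            archive.append(x)
    return archive  # or only its non-dominated points, or any subset

# Example: a trivial "modify" that just samples 5 fresh uniform points.
random.seed(1)
archive = offline_moo(lambda pop: [random.random() for _ in range(5)],
                      budget=20)
```

Outputting the whole archive (rather than only the non-dominated subset) matches the algorithm as stated; the filtering step is optional.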
A main tool for studying families of objective functions is the notion of
conflicting objectives ([16, 3]). Consider F a family of objective functions and

Algorithm 2 Online MOO (interactive MOO). This case is not studied in this paper; we focus on offline algorithms as in Algorithm 1.
Init n = 0.
while stopping criterion not met do
    Evaluate the population.
    Modify the population (mutation, selection, crossover, …) (if n > 0) or initialize it (if n = 0).
    for each newly visited point x do
        n ← n + 1
        P_n ← P_n ∪ {x}
    end for
    Possibly: show information about P_n (possibly the full P_n or only the non-dominated points in P_n) to the user and update some information about the preferences of the user (possibly a simple weighting of the preferences).
end while
Output P_n (or possibly only the non-dominated points in P_n, or any other subset of P_n).
δ > 0. We say that x ≥_F^δ y if ∀f ∈ F, f(x) ≥ f(y) + δ. Given two sets F_1 and F_2 of objective functions, we say that F_1 and F_2 are δ-non-conflicting if

∀(x, y) ∈ D², x ≥_{F_1}^δ y ⇒ x ≥_{F_2} y
∀(x, y) ∈ D², x ≥_{F_2}^δ y ⇒ x ≥_{F_1} y

A set of objectives F is δ-minimal if

there exists no F′ ⊂ F, F′ ≠ F, such that F′ is δ-non-conflicting with F. (2)

The case δ = 0 can also be considered; a set of objectives is minimal if it is 0-minimal. Mainly, minimal sets of objective functions are sets of objective functions that cannot be replaced by smaller sets of functions. We will here study minimal sets of objective functions.
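These definitions can be checked empirically on a finite sample of D. The following sketch (our names; only a necessary-condition test, since the definition quantifies over all of D) illustrates δ-non-conflict:

```python
def geq(x, y, F, delta=0.0):
    """x >=_F^delta y: f(x) >= f(y) + delta for every objective f in F."""
    return all(f(x) >= f(y) + delta for f in F)

def delta_non_conflicting(F1, F2, sample, delta):
    """Test both implications of delta-non-conflict on a finite sample:
    x >=^delta_{F1} y implies x >=_{F2} y, and symmetrically."""
    return all((not geq(x, y, F1, delta) or geq(x, y, F2)) and
               (not geq(x, y, F2, delta) or geq(x, y, F1))
               for x in sample for y in sample)
```

For example, {x ↦ x} and {x ↦ 2x} are δ-non-conflicting (each is redundant given the other), whereas {x ↦ x} and {x ↦ −x} conflict.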
The analysis of the performance of evolutionary algorithms is typically the study of the computation time required by the algorithm for reaching a given precision on all problems of a given family of problems. Upper bounds show that this computation time is smaller than a given quantity for a given algorithm. Lower bounds show that this computation time is larger than a given quantity for a given algorithm, or in some cases for all algorithms of a given family of algorithms.

In the stochastic case (stochastic algorithms), and if we only have to reach the target within a given precision, then the writing is a bit more tedious. Let's consider a family P of problems. An upper bound is therefore of the form:

Upper bound: there is an algorithm A such that, for every problem in P, the computation time required by algorithm A for solving it with precision ε and with probability at least 1 − δ is at most UpperBound(ε, δ).

and a lower bound is of the form:

Lower bound: for every algorithm in a given family, there is a problem in P such that the computation time required for solving it with precision ε and with probability at least 1 − δ is at least LowerBound(ε, δ).
If UpperBound is close to LowerBound, the complexity problem is solved.
Here, we will consider lower bounds for all algorithms that are based on binary comparisons (see the formalization of this assumption in Algorithm 4; this non-negligible assumption is discussed in Section 4), and upper bounds for a simple naive algorithm. The problems in this paper are a family of problems with smooth Pareto sets; mainly, the assumptions underlying the family of problems P (used in the lower bound, in Theorem 2) are that (i) it includes all possible Lipschitzian Pareto sets with some given bound on the Lipschitz coefficient, and (ii) there is no possible reduction of the number of objectives (the set of objectives is minimal).
Roughly, our results are as follows:
Upper bound: there is an algorithm A, namely random search as in Eq. 3, such that, for every problem in P, the computation time required by algorithm A for solving it with precision ε and with probability at least 1 − δ is at most UpperBound(ε, δ).

and:

Lower bound: for every algorithm which is based on binary comparisons (all algorithms as in Algorithm 4), there is a problem in P such that the computation time required for solving it with precision ε and with probability at least 1 − δ is at least LowerBound(ε, δ).
Both LowerBound and UpperBound depend on the dimension d, and interestingly they are close to each other when d is large. Our conclusion is therefore that, at least for the family P of problems, and when the dimension d is large, all algorithms are roughly (at best) equivalent to random search, at least for the criteria that we have defined (see the discussion for more information about the non-negligible effect of the assumptions in Theorems 1 and 2).
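To make the comparison with random search concrete, here is a hedged sketch on D = [0, 1] with two conflicting objectives: a naive uniform random search (the paper's upper-bound algorithm of Eq. 3 samples in the fitness space instead), whose output is scored in the fitness space by the Hausdorff distance to a discretization of the true Pareto front. All names are ours.

```python
import math, random

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (Euclidean)."""
    def one_sided(S, T):
        return max(min(math.dist(s, t) for t in T) for s in S)
    return max(one_sided(A, B), one_sided(B, A))

def random_search(n, fitness):
    """Naive random search on D = [0, 1]: visit n uniform points and
    return the non-dominated ones (maximization)."""
    pts = [random.random() for _ in range(n)]
    vals = [fitness(p) for p in pts]
    def dominated(i):
        return any(all(a >= b for a, b in zip(vals[j], vals[i])) and
                   any(a > b for a, b in zip(vals[j], vals[i]))
                   for j in range(len(pts)) if j != i)
    return [pts[i] for i in range(len(pts)) if not dominated(i)]

# Two conflicting objectives: the whole of [0, 1] is Pareto-optimal.
random.seed(0)
front = random_search(200, lambda x: (x, 1 - x))
approx = [(x, 1 - x) for x in front]
truth = [(i / 100, 1 - i / 100) for i in range(101)]
precision = hausdorff(approx, truth)  # the precision reached by this run
```

Running the search with larger n shrinks the Hausdorff distance; the paper's point is that, for large d and conflicting objectives, comparison-based algorithms cannot shrink it much faster than this baseline.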

Citations

Borg: An auto-adaptive many-objective evolutionary computing framework
TL;DR: The Borg MOEA combines ε-dominance, a measure of convergence speed named ε-progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework for many-objective, multimodal optimization.

Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization
TL;DR: A new performance indicator, Δp, is defined, which can be viewed as an "averaged Hausdorff distance" between the outcome set and the Pareto front and which is composed of (slight modifications of) the well-known indicators generational distance (GD) and inverted generational distance (IGD).

Evolutionary multiobjective optimization in water resources: The past, present, and future
TL;DR: This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations.
References

Multi-Objective Optimization Using Evolutionary Algorithms (book)
TL;DR: This text provides an excellent introduction to the use of evolutionary algorithms in multi-objective optimization, allowing use as a graduate course text or for self-study.

Weak Convergence and Empirical Processes: With Applications to Statistics (book)
TL;DR: In this article, the authors define the Ball Sigma-Field and Measurability of Suprema and show that it is possible to achieve convergence almost surely and in probability.

Nonlinear Multiobjective Optimization (book)

A Probabilistic Theory of Pattern Recognition (book)
TL;DR: The Bayes Error and Vapnik-Chervonenkis theory are applied as a guide for empirical classifier selection on the basis of explicit specification and explicit enforcement of the maximum likelihood principle.

ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems
TL;DR: Results show that NSGA-II, a popular multiobjective evolutionary algorithm, performs well compared with random search, even within the restricted number of evaluations used.
Frequently Asked Questions
Q1. What have the authors contributed in "On the hardness of offline multiobjective optimization"?

The authors show that the convergence rate of all comparison-based multiobjective algorithms, for the Hausdorff distance, is not much better than the convergence rate of the random search, unless the number of objectives is very moderate, in a framework in which the stronger assumptions are (i) that the objectives are conflicting and (ii) that lower bounding the computational cost by the number of comparisons is a good model.

To strengthen the results, the upper bound is proved for a very poor random search (see the poor distribution used in the random search of Theorem 1) and the lower bound is proved for a very small family of fitness functions (see Theorem 2; the upper bound holds a fortiori for larger families of problems).


This is not an artificial way of dealing with non-comparison-based methods; for example, in the mono-objective case, limits on the convergence rate of comparison-based algorithms derived through entropy theorems ([15]) also hold in practice for gradient-based techniques, as the gradient is computed with a finite precision; like comparison-based EAs, Newton's method is only linear when the dimensionality is sufficient to make the effects of the finite precision visible; this is an already known fact (see e.g. [21]).

If the authors reduce the set of assumptions, the packing numbers increase - the lower bound remains essentially the same, and the proximity between the upper and the lower bound is preserved. 

Assume that ∀x, 0 ≤ fitness(x) ≤ 1; then d(X_n, X*) ≤ K d √(e_n/q), where e_n = O((d log(n) − log(δ))/n). This is a very poor random search, with a distribution uniform in the fitness space.


