4OR-Q J Oper Res manuscript No.
(will be inserted by the editor)
Recent contributions to linear semi-infinite optimization*

M.A. Goberna** (1), M.A. López (2)

(1) Department of Mathematics, University of Alicante, Alicante, Spain, e-mail: mgoberna@ua.es
(2) Department of Mathematics, University of Alicante, Alicante, Spain, and Federation University of Australia, Ballarat, Australia, e-mail: marco.antonio@ua.es

The date of receipt and acceptance will be inserted by the editor
Abstract This paper reviews the state of the art in the theory of deterministic and uncertain linear semi-infinite optimization, presents some numerical approaches to this type of problem, and describes a selection of recent applications in a variety of fields. Extensions to related optimization areas, such as convex semi-infinite optimization, linear infinite optimization, and multi-objective linear semi-infinite optimization, are also discussed.

Key words Linear semi-infinite optimization – Theory – Methods – Applications
1 Introduction
Linear semi-infinite optimization (LSIO in short) deals with linear optimization problems in which either the dimension of the decision space or the number of constraints (but not both) is infinite. We say that a linear optimization problem is ordinary (respectively, infinite) when the dimension of the decision space and the number of constraints are both finite (respectively, infinite).
The first three known contributions to LSIO are due to A. Haar (1924), E. Remez (1934), and G. Dantzig (1939), but they were basically ignored until the 1960s due either to the limited circulation, within the mathematical community, of the journals where Haar and Remez published their discoveries,
* This work was supported by the MINECO of Spain and ERDF of EU, Grant MTM2014-59179-C2-1-P, and by the Australian Research Council, Project DP160100854.
** Corresponding author.
and the languages used (German and French, respectively), or to Dantzig's temporary leave from his incipient academic career when he joined the Pentagon; in fact, he only wrote on his findings on LSIO in his professional memoirs, published many years later [52]. In more detail, Haar's paper [108] was focussed on the extension of the homogeneous Farkas lemma for linear systems from $\mathbb{R}^n$ to a Euclidean space equipped with a scalar product $\langle\cdot,\cdot\rangle$ (actually, the space $C([0,1],\mathbb{R})$ of real-valued continuous functions on $[0,1]$ equipped with the scalar product $\langle f,g\rangle=\int_0^1 f(t)\,g(t)\,dt$);
these systems are semi-infinite because they involve finitely many linear inequalities while the variable ranges over an infinite-dimensional space. Remez's paper [161], in turn, proposed an exchange numerical method for a class of LSIO problems arising in polynomial approximation, e.g., computing the best uniform approximation to $f\in C([0,1],\mathbb{R})$ by means of real polynomials of degree less than $n-1$, i.e.,
$$\inf_{x\in\mathbb{R}^n}\ \max_{t\in[0,1]}\ \Bigl| f(t)-\sum_{i=1}^{n-1} x_i t^{i-1} \Bigr|,$$
which is equivalent to the LSIO problem
$$P_A:\ \inf_{x\in\mathbb{R}^n} x_n \quad\text{s.t.}\quad -x_n \le f(t)-\sum_{i=1}^{n-1} t^{i-1} x_i \le x_n,\quad t\in[0,1]. \qquad (1)$$
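Problems of type (1) are often tackled numerically by replacing the index set $[0,1]$ with a finite grid, which turns the semi-infinite problem into an ordinary LP. The following sketch is only an illustration of this standard discretization idea, not of any method cited in this review; it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.optimize import linprog

def best_uniform_poly(f, degree, grid_size=401):
    """Grid discretization of problem (1): minimize the bound x_n subject to
    -x_n <= f(t) - sum_i x_i t^{i-1} <= x_n for t on a finite grid of [0, 1]."""
    t = np.linspace(0.0, 1.0, grid_size)
    m = degree + 1                                 # number of coefficients
    V = np.vander(t, m, increasing=True)           # V[j, i] = t_j ** i
    ones = np.ones((grid_size, 1))
    # Two one-sided linear constraints per grid point, in A_ub x <= b_ub form.
    A_ub = np.block([[-V, -ones], [V, -ones]])
    b_ub = np.concatenate([-f(t), f(t)])
    c = np.zeros(m + 1)
    c[-1] = 1.0                                    # objective: minimize x_n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 1))
    return res.x[:m], res.x[-1]                    # coefficients, sup-error

# Best affine approximation of exp on [0, 1]; in the continuous problem
# the optimal slope is exactly e - 1.
coeffs, err = best_uniform_poly(np.exp, degree=1)
```

Since the discretized problem is a relaxation of (1), refining the grid drives its optimal value up to the optimal value of the original problem.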
Finally, Dantzig reformulated a Neyman-Pearson-type problem on statistical inference (posed by J. Neyman himself in a doctoral course attended by Dantzig) as a linear optimization problem with finitely many constraints and an infinite number of variables; Dantzig observed that the feasible set of this LSIO problem was the convex hull of its extreme points and conceived a geometry of columns allowing one to jump from a given extreme point to a better adjacent one, a clear antecedent of the celebrated simplex method for linear optimization problems that he proposed in 1947.
The next contributions to LSIO came in the 1960s, and are due to A. Charnes, W. Cooper and their doctoral student K. Kortanek; they conceived LSIO as a natural extension of ordinary linear optimization (also called linear programming, LP in short) and coined the term "semi-infinite" in [41] (for more details, see the description by the third author of the inception of LSIO in [126]). In [40] and [41] the LSIO problems with finitely many variables were called primal. These problems can be written as
$$P:\ \inf_{x\in\mathbb{R}^n} c' x \quad\text{s.t.}\quad a_t' x \ge b_t,\quad t\in T, \qquad (2)$$
where $c'$ represents the transpose of $c\in\mathbb{R}^n$, $a_t=(a_1(t),\ldots,a_n(t))'\in\mathbb{R}^n$, and $b_t=b(t)\in\mathbb{R}$ for all $t\in T$. As in any field of optimization, the first theoretical results on LSIO dealt with optimality conditions and duality, and showed that LSIO is closer to ordinary convex optimization than to LP, as the finiteness of the optimal value does not imply the existence of
an optimal solution and a positive duality gap can occur for the so-called Haar's dual problem of $P$ (a term also introduced by Charnes, Cooper and Kortanek),
$$D:\ \sup_{\lambda\in\mathbb{R}^{(T)}_+} \sum_{t\in T}\lambda_t b_t \quad\text{s.t.}\quad \sum_{t\in T}\lambda_t a_t = c, \qquad (3)$$
where $\mathbb{R}^{(T)}_+$ is the positive cone in the linear space of generalized finite sequences $\mathbb{R}^{(T)}$, whose elements are the functions $\lambda\in\mathbb{R}^T$ that vanish everywhere on $T$ except on a finite subset of $T$ called the supporting set of $\lambda$, which we represent by $\operatorname{supp}\lambda$. Observing that
$$\inf_{x\in\mathbb{R}^n}\left\{ L(x,\lambda) := c' x + \sum_{t\in T}\lambda_t\,(b_t - a_t' x) \right\} = \sum_{t\in T}\lambda_t b_t \qquad (4)$$
if $\lambda\in\mathbb{R}^{(T)}_+$ is a feasible solution of $D$, and $-\infty$ otherwise, one concludes that $D$ is nothing else than a simplified form of the classical Lagrange dual of $P$:
$$D_L:\ \sup_{\lambda\in\mathbb{R}^{(T)}_+}\ \inf_{x\in\mathbb{R}^n} L(x,\lambda). \qquad (5)$$
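In particular, weak duality holds between $P$ and $D$: if $x$ is feasible for $P$ and $\lambda$ is feasible for $D$, then, since $\lambda_t\ge 0$ and $a_t' x\ge b_t$ for every $t\in\operatorname{supp}\lambda$,

```latex
\sum_{t\in T}\lambda_t b_t \;\le\; \sum_{t\in T}\lambda_t\,a_t' x
\;=\; \Bigl(\sum_{t\in T}\lambda_t a_t\Bigr)' x \;=\; c' x ,
```

so the optimal value of $D$ never exceeds that of $P$.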
As in convex optimization, the first characterizations of the optimal solutions of $P$ and the duality theorems required some constraint qualification (CQ in brief) to be fulfilled. The problem $P$ in (2) is said to be continuous whenever $T$ is a compact topological space and the function $t\mapsto(a_t,b_t)$ is continuous on $T$. The approximation problem $P_A$ in (1) can easily be reformulated as a continuous LSIO problem by taking $T=[0,1]\times\{1,2\}\subset\mathbb{R}^2$.
The continuous dual problem of a continuous LSIO problem $P$ is
$$D_C:\ \sup_{\mu\in C_+'(T)} \int_T b_t\,d\mu(t) \quad\text{s.t.}\quad \int_T a_t\,d\mu(t)=c,$$
where $C_+'(T)$ represents the cone of non-negative regular Borel measures on $T$. Since the elements of $\mathbb{R}^{(T)}_+$ can be identified with the non-negative atomic measures, the optimal value of $D_C$ is greater than or equal to the optimal value of $D$.
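The comparison in the last sentence rests on the standard identification: each $\lambda\in\mathbb{R}^{(T)}_+$ corresponds to the atomic measure

```latex
\mu_\lambda=\sum_{t\in\operatorname{supp}\lambda}\lambda_t\,\delta_t ,
\qquad\text{for which}\qquad
\int_T b_t\,d\mu_\lambda(t)=\sum_{t\in T}\lambda_t b_t ,
\qquad
\int_T a_t\,d\mu_\lambda(t)=\sum_{t\in T}\lambda_t a_t ,
```

where $\delta_t$ denotes the Dirac measure at $t$; thus every feasible solution of $D$ induces a feasible solution of $D_C$ with the same objective value.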
The first numerical approach to sufficiently smooth LSIO problems, proposed by S. Gustafson and K. Kortanek in the early 1970s, consisted in the reduction of $P$ to a nonlinear system of equations to be solved by means of a quasi-Newton method ([106], [107]). This approach was improved in [73] by adding a first phase, based on discretization by grids, in order to get a starting point for the quasi-Newton second phase. Simplex-like methods for particular classes of LSIO problems were proposed in [6], under the assumption that $T$ is an interval and the $n+1$ functions $a_1(\cdot),\ldots,a_n(\cdot),b(\cdot)$ are analytic on $T$, and in [5], under the assumption that the feasible set of $P$ is quasipolyhedral (meaning that its nonempty intersections with polytopes are polytopes). An interior cutting plane method for continuous LSIO problems, inspired by the Elzinga-Moore method of ordinary convex optimization, was proposed in [90] and improved by an accelerated version in [16].
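The flavor of such iterative schemes can be conveyed by a generic cutting-plane loop applied to problem (1), with each index a pair $(t,s)$ selecting one side of the two-sided inequality, as in the reformulation with $T=[0,1]\times\{1,2\}$. This is only an illustrative sketch (assuming NumPy and SciPy), not the Elzinga-Moore-based method of [90] nor Remez's exchange method: solve the LP restricted to the constraints collected so far, then add the most violated constraint detected over a fine surrogate grid.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane_chebyshev(f, degree, tol=1e-7, max_iter=50):
    """Generic cutting-plane loop for problem (1): indices are pairs (t, s),
    s = +/-1, encoding  s*(f(t) - sum_i x_i t^{i-1}) <= x_n."""
    m = degree + 1
    c = np.zeros(m + 1)
    c[-1] = 1.0                            # objective: minimize the bound x_n
    t_grid = np.linspace(0.0, 1.0, 2001)   # fine surrogate for [0, 1]
    active = [(t, s) for t in (0.0, 1.0) for s in (-1.0, 1.0)]
    for _ in range(max_iter):
        # Constraint for (t, s):  s*(f(t) - p_x(t)) - x_n <= 0.
        A_ub = np.array([np.append(-s * t ** np.arange(m), -1.0)
                         for t, s in active])
        b_ub = np.array([-s * f(t) for t, s in active])
        x = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(None, None)] * (m + 1)).x
        err = f(t_grid) - np.polynomial.polynomial.polyval(t_grid, x[:m])
        k = int(np.argmax(np.abs(err)))    # most violated index on the grid
        if np.abs(err[k]) - x[-1] <= tol:  # grid-feasible: stop
            return x[:m], x[-1]
        active.append((t_grid[k], np.sign(err[k])))
    return x[:m], x[-1]

coef, bound = cutting_plane_chebyshev(np.exp, degree=1)
```

Each restricted LP is a relaxation of the discretized problem, so its optimal value increases monotonically until no grid constraint is violated.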
In many applications of optimization to real-world problems, the data defining the nominal problem are uncertain due to measurement errors, prediction errors, and round-off errors, so the user must choose a suitable uncertainty model.
Parametric models are based on embedding the nominal problem into some topological space of admissible perturbed problems, the so-called space of parameters. Qualitative stability analysis provides conditions under which sufficiently small perturbations of the nominal problem provoke small changes in the optimal value, the optimal set and the feasible set; more precisely, conditions ensuring the lower and upper semicontinuity of the mappings associating to each parameter the corresponding optimal value, optimal set and feasible set. These desirable stability properties are generic in a certain set of parameters when they hold for most (in some sense) elements of that set. The first works on the qualitative stability analysis of continuous LSIO problems were published in the 1980s by the group at Goethe University Frankfurt, formed by B. Brosowski and his collaborators T. Fischer and S. Helbig, together with their frequent visitor M.I. Todorov, who provided conditions for the semicontinuity of the above mappings, for continuous LSIO problems, under a variety of perturbations, including those affecting the right-hand-side function $b(\cdot)$ (see, e.g., [23], [24], [70], [178]). The extension of these results to (not necessarily continuous) LSIO problems, and to their corresponding Haar's dual problems, was carried out by Todorov and the authors of this review in the second half of the 1990s ([89], [92], [93], [94]). In [129] several results on the stability of the boundary of the feasible set in LSIO are given.
Quantitative stability analysis, in turn, yields exact and approximate distances, in the space of parameters, from the nominal problem $P$ to important families of problems (e.g., from a given consistent problem $P$ to the inconsistent ones, or from a given bounded problem $P$ to the class of solvable problems), error bounds, and moduli of different Lipschitz-type properties related to the complexity analysis of the numerical methods; see, e.g., the works published during the last 15 years by the second author of this review together with the group at University Miguel Hernández of Elche, Spain (M.J. Cánovas, J. Parra, F. Toledo) and their collaborator A. Dontchev ([25], [33], [34], [35], [36]). Sensitivity analysis provides estimates of the impact of a given perturbation of the nominal problem on the optimal value, so it can be seen as quantitative stability analysis specialized to the optimal value. Results of this type for LSIO problems can be found in some of the above-mentioned works and in the specific ones by the first author, the group of Puebla, Mexico (S. Gómez, F. Guerra-Vázquez, M.I. Todorov) and their collaborator T. Terlaky ([76], [100]).
Linear semi-infinite optimization has attracted during the last decades the attention, on the one hand, of optimization theorists, as it is a simple but non-trivial extension of LP, and, on the other hand, of the linear optimization community, typically oriented towards numerical issues, as a primal LSIO problem can be seen as a huge LP problem. In fact, during the 1990s, authors like M. Powell, M. Todd, L. Tunçel and R. Vanderbei explained the numerical difficulties encountered by interior-point methods on huge LP problems by analyzing the convergence of their adaptations to simple LSIO problems ([156], [177], [181], [183]). Finally, from the modeling perspective, LSIO has been systematically applied in those fields where uncertain LP problems arise in a natural way (as happens in engineering and finance), especially when the user appeals to the robust approach. For all these reasons, several survey papers and extended reviews on LSIO have been published in the past, the most recent dated 2005 ([74], [75]) and 2014 ([91], exclusively focused on uncertain LSIO). Linear semi-infinite optimization was also considered in two other surveys, the first one on semi-infinite optimization, published in 2007 ([139]), and the second one on the stability analysis of (ordinary, semi-infinite and infinite) linear optimization problems, published in 2012 ([138]). It is worth mentioning another survey, on non-linear semi-infinite optimization [170], which explicitly excludes LSIO. Consistently with these antecedents, this review is intended to cover the period 2007-2016 for deterministic LSIO, and 2014-2016 for uncertain LSIO.
2 Deterministic linear semi-infinite optimization (2007-2016)
Let us introduce the necessary notation and basic concepts. Given a real linear space $X$, we denote by $0_X$ the zero vector of $X$, except in the particular cases $X=\mathbb{R}^n$ and $X=\mathbb{R}^T$, whose null-vectors are represented by $0_n$ and $0_T$, respectively. Given a nonempty set $Y\subset X$, by $\operatorname{span} Y$ and $\operatorname{conv} Y$ we denote the linear hull and the convex hull of $Y$, respectively, while $\operatorname{cone} Y$ denotes the conical convex hull of $Y\cup\{0_X\}$. We also denote by $X'$ the algebraic dual of $X$, and by $\langle\cdot,\cdot\rangle$ the duality product (i.e., $\langle x',x\rangle=x'(x)$ for all $x'\in X'$ and $x\in X$). Obviously, $(\mathbb{R}^n)'=\mathbb{R}^n$, whereas $(\mathbb{R}^T)'$ cannot be identified with $\mathbb{R}^{(T)}=\{\lambda\in\mathbb{R}^T:\operatorname{supp}\lambda\text{ is finite}\}$; indeed, $(\mathbb{R}^{\mathbb{N}})'$ is of uncountable dimension while the dimension of $\mathbb{R}^{(\mathbb{N})}$ is countable (see, e.g., [2] and [7]).
Given a topological space $X$ and a set $Y\subset X$, $\operatorname{int} Y$, $\operatorname{cl} Y$, and $\operatorname{bd} Y$ represent the interior, the closure, and the boundary of $Y$, respectively. When $X$ is a locally convex Hausdorff topological space (lcs in short) and $Y\subset X$, $\operatorname{rint} Y$ denotes the relative interior of $Y$. We denote by $X^*$ the topological dual of $X$, i.e., $X^*=X'\cap C(X,\mathbb{R})$. It is known (e.g., [6]) that $(\mathbb{R}^n)^*=\mathbb{R}^n$, whereas $(\mathbb{R}^T)^*=\mathbb{R}^{(T)}$ when one considers $\mathbb{R}^T$ equipped with the product topology.
The Euclidean (respectively, $l_\infty$) norm and the associated distance in $\mathbb{R}^n$ are denoted by $\|\cdot\|$ and $d$ (respectively, $\|\cdot\|_\infty$ and $d_\infty$). We denote
). We denote