Generalized Goal Programming:
Polynomial Methods and Applications
Emilio Carrizosa
Facultad de Matemáticas, Universidad de Sevilla
Tarfia s/n, 41012 Seville, Spain
e-mail: ecarriz@cica.es
Jörg Fliege
Fachbereich Mathematik, Universität Dortmund
44221 Dortmund, Germany
e-mail: fliege@math.uni-dortmund.de
January 8, 2001
Abstract
In this paper we address a general Goal Programming problem with linear
objectives, convex constraints, and an arbitrary componentwise nondecreasing
norm to aggregate deviations with respect to targets. In particular, classical
Linear Goal Programming problems, as well as several models in Location
and Regression Analysis are modeled within this framework.
In spite of its generality, this problem can be analyzed from a geometrical
and a computational viewpoint, and a unified solution methodology can be
given. Indeed, a dual is derived, enabling us to describe the set of optimal
solutions geometrically. Moreover, Interior-Point methods are described which
yield an ε-optimal solution in polynomial time.
Keywords: Goal Programming, Closest points, Interior point methods,
Location, Regression.

Version as of December 3, 2000 2
1 Introduction
1.1 Goal Programming
The origins of Goal Programming date back to the work of Charnes, Cooper and
Ferguson [7], where an $\ell_1$-estimation regression model was proposed to estimate
executive compensation. Since then, and thanks to its versatility and ease of use, it
has become by far the most popular technique for tackling (linear) multiple-objective
problems, as evidenced by the bulk of literature on theory and applications of the
field. See, e. g., [40, 41, 44, 45] and the categorized bibliography of applications
therein.
By a Non-Preemptive Goal Programming problem one usually means some particular instance of the following model: a polyhedron $K \subseteq \mathbb{R}^n$ is given as the set of decisions; there exist $m$ criteria matrices, $C_1, \ldots, C_m$, with $C_j$ in $\mathbb{R}^{n \times n_j}$; each decision $x \in K$ is valued according to criterion $C_j$ by the vector $C_j^\top x$, to be compared with a given target set $T_j \subseteq \mathbb{R}^{n_j}$. With this, the deviation $d_j(x)$ of decision $x$ with respect to the target set $T_j$ is defined as
$$d_j(x) = \inf_{z_j \in T_j} \gamma_j(C_j^\top x - z_j)$$
for some given norm $\gamma_j$, while the overall deviation at $x$ is measured by
$$\gamma(d_1(x), \ldots, d_m(x)),$$
where $\gamma$ is a norm in $\mathbb{R}^m$ assumed to be monotonic in the nonnegative orthant $\mathbb{R}^m_+$ (see [4, 25]), i.e.
$$\gamma(u) \le \gamma(v) \quad \text{for all } u, v \in \mathbb{R}^m_+ \text{ with } 0 \le u_i \le v_i \text{ for all } i = 1, \ldots, m.$$
Then, the solution(s) minimizing the overall deviation are sought. In other words, one solves the convex program
$$\inf_{x \in K} \gamma(d_1(x), \ldots, d_m(x)). \qquad (1)$$
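As a small numerical illustration of the deviations $d_j$ and their aggregation in (1) (a minimal sketch with made-up scalar criteria and interval targets, not taken from the paper; each $\gamma_j$ is the absolute value and $\gamma$ is the $\ell_1$ norm):

```python
import numpy as np

def deviation(value, lo, hi):
    """Distance from `value` to the interval target set [lo, hi]
    (use -np.inf / np.inf for one-sided targets)."""
    return max(lo - value, 0.0) + max(value - hi, 0.0)

# Hypothetical instance: three scalar criteria c_j (rows of C) and targets T_j.
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
targets = [(2.0, np.inf),    # c_1^T x >= 2
           (-np.inf, 1.0),   # c_2^T x <= 1
           (3.0, 3.0)]       # c_3^T x  = 3

x = np.array([1.0, 2.0])
d = [deviation(C[j] @ x, *targets[j]) for j in range(len(targets))]

# Aggregate with a monotonic norm gamma, here the l1 norm.
overall = sum(d)
print(d, overall)  # d = [1.0, 1.0, 0.0], overall = 2.0
```

Any componentwise nondecreasing norm could replace the final `sum`; the monotonicity assumption guarantees that decreasing any single deviation never increases the overall deviation.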
As pointed out e. g. in [8, 39, 40], Non-Preemptive Goal Programming and
related models can be rephrased as minimum-distance problems. This follows from

the previous formulation, since (1) is equivalent to
$$\begin{array}{rll} \min & \gamma\bigl(\gamma_1(C_1^\top x - z_1), \ldots, \gamma_m(C_m^\top x - z_m)\bigr) & \\ \text{s.t.} & x \in K, & \\ & z_j \in T_j, & j = 1, \ldots, m. \end{array} \qquad (2)$$
Denoting by $\tilde{\gamma}$ the norm in $\mathbb{R}^{n_1} \times \ldots \times \mathbb{R}^{n_m}$ defined as
$$\tilde{\gamma}(u_1, \ldots, u_m) = \gamma\bigl(\gamma_1(u_1), \ldots, \gamma_m(u_m)\bigr),$$
problem (2) can be written as the minimum $\tilde{\gamma}$-norm problem
$$\begin{array}{rll} \min & \tilde{\gamma}(u_1, \ldots, u_m) & \\ \text{s.t.} & u_j = C_j^\top x - z_j, & j = 1, \ldots, m, \\ & (x, z) \in K \times \prod_{1 \le j \le m} T_j. & \end{array} \qquad (3)$$
In many applications, each criterion $C_j$ is assumed to be a vector $c_j \in \mathbb{R}^n$, so it values $x$ through the scalar $c_j^\top x$; each target set $T_j$ is then a subset of $\mathbb{R}$ of one of the forms
$$T_j = [t_j, +\infty), \qquad (4)$$
$$T_j = (-\infty, t_j], \qquad (5)$$
$$T_j = \{t_j\}, \qquad (6)$$
or, in Goal Range Programming [20], of the form
$$T_j = [\underline{t}_j, \overline{t}_j]. \qquad (7)$$
This corresponds to a goal constraint of type $c_j^\top x \ge t_j$, $c_j^\top x \le t_j$, $c_j^\top x = t_j$, or $c_j^\top x \in [\underline{t}_j, \overline{t}_j]$, respectively. In other words, one desires to have $c_j^\top x$ above $t_j$, below $t_j$, exactly at $t_j$, or between $\underline{t}_j$ and $\overline{t}_j$, respectively.
Whereas the choice of the aggregating norm $\gamma$ is crucial (although, in applications, it is mostly reduced to the cases $\ell_1$ or $\ell_\infty$), the choice of the norms $\gamma_j$ to measure deviations in the case $n_j = 1$ for all $j$ is irrelevant, and we can consider each $\gamma_j$ to be equal to the absolute value function. Then, the deviations take on the more familiar form
$$d_j(x) = \begin{cases} \max\{t_j - c_j^\top x,\; 0\} & \text{if } T_j = [t_j, +\infty), \\ \max\{c_j^\top x - t_j,\; 0\} & \text{if } T_j = (-\infty, t_j], \\ \lvert c_j^\top x - t_j \rvert & \text{if } T_j = \{t_j\}, \\ \max\{\underline{t}_j - c_j^\top x,\; 0\} + \max\{c_j^\top x - \overline{t}_j,\; 0\} & \text{if } T_j = [\underline{t}_j, \overline{t}_j]. \end{cases}$$
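In this familiar setting with $\gamma = \ell_1$, problem (1) becomes a linear program via the standard under-/over-attainment variables $n_j, p_j \ge 0$ linked to each goal by $c_j^\top x + n_j - p_j = t_j$. A minimal sketch on a made-up instance, assuming SciPy's `linprog` is available:

```python
from scipy.optimize import linprog

# Hypothetical instance: decisions x1, x2 >= 0, hard constraint x1 + x2 <= 4,
# goals x1 >= 3 and x2 >= 3, deviations aggregated with the l1 norm.
# Variable order: [x1, x2, n1, p1, n2, p2], with goal equations
#   x1 + n1 - p1 = 3   and   x2 + n2 - p2 = 3.
c = [0, 0, 1, 0, 1, 0]            # minimize n1 + n2 (total shortfall)
A_eq = [[1, 0, 1, -1, 0, 0],
        [0, 1, 0, 0, 1, -1]]
b_eq = [3, 3]
A_ub = [[1, 1, 0, 0, 0, 0]]       # x1 + x2 <= 4
b_ub = [4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.fun)  # minimal total shortfall is 2.0
```

Since both goals cannot be met within $K$, the optimal aggregated deviation is $2$, attained by any feasible point with $x_1 + x_2 = 4$ and both coordinates at most $3$.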

From these expressions, it should become clear that target sets of type (7) (thus also of type (6)) are used only for modeling convenience, since they can be derived from sets of types (4) and (5): splitting criterion $j$ into criteria $j_1$, $j_2$, and defining $T_j^1 = [\underline{t}_j, +\infty)$ and $T_j^2 = (-\infty, \overline{t}_j]$, the deviation $d_j(x)$ is simply the sum of the deviations with respect to $T_j^1$ and $T_j^2$.
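This splitting argument is easy to check numerically (a toy verification with hypothetical bounds):

```python
def dev_range(v, lo, hi):
    # deviation w.r.t. the range target [lo, hi]
    return max(lo - v, 0.0) + max(v - hi, 0.0)

def dev_ge(v, lo):
    # deviation w.r.t. the one-sided target [lo, +inf)
    return max(lo - v, 0.0)

def dev_le(v, hi):
    # deviation w.r.t. the one-sided target (-inf, hi]
    return max(v - hi, 0.0)

# For any value v, the range deviation is the sum of the two one-sided ones.
for v in [-1.0, 0.5, 2.0, 7.3]:
    assert dev_range(v, 0.0, 3.0) == dev_ge(v, 0.0) + dev_le(v, 3.0)
```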
1.2 Examples
Applications of Goal Programming abound in the literature; see e. g. the list of 351
applications papers cited in [40]. However, the range of applicability of (1) is by no means restricted to what is usually classified as Goal Programming: a wide variety of important models in different fields of Optimization can also be seen as particular
instances of (1), mainly from the perspective of minimum-distance problems. Some
of them are briefly discussed below.
Overdetermined systems of (in)equalities

If a system of linear equalities and inequalities
$$\begin{array}{rcl} a_1^\top x & \ge & b_1 \\ & \vdots & \\ a_p^\top x & \ge & b_p \\ a_{p+1}^\top x & = & b_{p+1} \\ & \vdots & \\ a_{p+q}^\top x & = & b_{p+q} \end{array} \qquad (8)$$
is infeasible, one can look for a so-called least infeasible solution, i.e. a point $x$ solving
$$\min_x\; \gamma\bigl(\max(0,\, b_1 - a_1^\top x), \ldots, \max(0,\, b_p - a_p^\top x),\; \lvert b_{p+1} - a_{p+1}^\top x \rvert, \ldots, \lvert b_{p+q} - a_{p+q}^\top x \rvert\bigr)$$
for some norm $\gamma$ monotonic in $\mathbb{R}^{p+q}_+$. This is simply a Goal Programming problem in which the vectors $a_i$ $(i = 1, \ldots, p+q)$ play the role of the criteria and the components $b_i$ $(i = 1, \ldots, p+q)$ of the right-hand side vector represent the targets; see Example 4 in Section 3.
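As a sketch of the least-infeasible-solution idea on a made-up one-variable system ($x \ge 2$ together with $x = 0$), here taking $\gamma = \ell_\infty$ so that the largest violation is minimized; the problem is again an LP, assuming SciPy is available:

```python
from scipy.optimize import linprog

# With gamma = l_inf, the least infeasible point minimizes t subject to
#   max(0, 2 - x) <= t   and   |0 - x| <= t.
# Variable order: [x, t].
c = [0, 1]                # minimize t (the worst violation)
A_ub = [[-1, -1],         # 2 - x <= t
        [1, -1],          # x <= t
        [-1, -1]]         # -x <= t
b_ub = [-2, 0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (0, None)])
print(res.x[0], res.fun)  # x = 1.0, worst violation 1.0
```

The optimum $x = 1$ splits the conflict evenly: it violates each of the two incompatible requirements by exactly $1$.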

When only equalities appear in (8), one obtains the problem of solving an overdetermined system of linear equations, classical in Approximation Theory [33, 43], or, equivalently, the Linear Regression problem [42]. Usually, $\gamma$ is assumed to be an $\ell_p$ norm, mainly $p = 2$ (yielding the well-known Least Squares problem [3]), $p = 1$, or $p = \infty$ [1].
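For the $p = 2$ case, the least infeasible point of an overdetermined linear system is just its least-squares solution; a minimal sketch with made-up data:

```python
import numpy as np

# Overdetermined system A x = b (3 equations, 2 unknowns), hypothetical data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])

# gamma = l2 norm: minimizing the aggregated deviation is ordinary least squares.
x, residual_ss, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # approximately [1.333..., 1.333...], from the normal equations
```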
Multifacility location

In Continuous Location [29, 36], distances are usually measured by gauges. For simplicity, we will consider throughout this paper only gauges $\gamma$ of the form
$$\gamma(x) = \inf\{t \ge 0 : x \in tB\}$$
for some nonempty convex compact $B \subseteq \mathbb{R}^m$ (its unit ball) containing the origin in its interior. In applications, this additional assumption is usually fulfilled; see, e.g., [10, 29]. Observe that norms correspond to symmetric gauges. Moreover, since the origin is assumed to be an interior point, the gauge always takes finite values. See e.g. [17] for the case of gauges with values in $\mathbb{R}_+ \cup \{+\infty\}$.

Let $F$ be a nonempty finite set and let $\emptyset \ne E \subseteq F \times F$. Then $(F, E)$ is a directed graph. Following e.g. [13, 27], $F$ represents the set of facilities (some of which may have fixed locations in $\mathbb{R}^n$), whereas $E$ represents the interactions between these facilities.

For each edge $e := (f, g) \in E$, let $\gamma_e$ be a given gauge in $\mathbb{R}^n$ which measures the cost of the interaction between facility $f$ and facility $g$. Let $\gamma$ be a gauge in $\mathbb{R}^E$ monotonic in the non-negative orthant.

For a nonempty closed convex set $K \subseteq (\mathbb{R}^n)^F$, consider the optimization problem
$$\inf_{(x_f)_{f \in F} \in K} \gamma\Bigl(\bigl(\gamma_{(f,g)}(x_f - x_g)\bigr)_{(f,g) \in E}\Bigr). \qquad (9)$$
The most popular instance of (9) is the continuous minisum multifacility location problem; see [36, 46, 47] and the references therein. There, the node set $F$ is partitioned into two sets $A$ and $V$, representing respectively the fixed and the free locations, and a family $(a_f)_{f \in A} \in (\mathbb{R}^n)^A$ of fixed locations is given. The feasible region $K$ is then defined by
$$K = \bigl\{\, x = (x_f)_{f \in F} \in (\mathbb{R}^n)^F \;\bigm|\; x_f = a_f \text{ for all } f \in A \,\bigr\}, \qquad (10)$$
while the gauge $\gamma$ is taken as the $\ell_1$ norm, so that one minimizes the sum of all interactions between the facilities,
$$\inf_{x_f = a_f\ \forall f \in A}\; \sum_{(f,g) \in E} \gamma_{(f,g)}(x_f - x_g).$$
