Linear Programming with Inexact Data is NP-Hard
Rohn, Jiří
1995
Available at http://www.nusl.cz/ntk/nusl-33636
This work is protected under Copyright Act No. 121/2000 Coll.
This document was downloaded from the National Repository of Grey Literature (NUŠL).
Date of download: 10.08.2022
Further documents can be found via the nusl.cz search interface.

INSTITUTE OF COMPUTER SCIENCE
ACADEMY OF SCIENCES OF THE CZECH REPUBLIC
Linear Programming with Inexact Data is
NP-Hard
Jiří Rohn
Technical report No. 642
June 20, 1995
Institute of Computer Science, Academy of Sciences of the Czech Republic
Pod vo drenskou v 2, 182 07 Prague 8, Czech Republic
phone: (+422) 66414244 fax: (+422) 8585789
e-mail: uivt@uivt.cas.cz

INSTITUTE OF COMPUTER SCIENCE
ACADEMY OF SCIENCES OF THE CZECH REPUBLIC
Linear Programming with Inexact Data is NP-Hard¹
Jiří Rohn²
Technical report No. 642
June 20, 1995
Abstract
We prove that the problem of checking whether all linear programming problems whose data range in prescribed intervals have optimal solutions is NP-hard.
Keywords
Linear programming, inexact data, NP-hardness
¹ This work was supported by the Czech Republic Grant Agency under grant GA ČR 201/95/1484.
² Faculty of Mathematics and Physics, Charles University, Prague (rohn@kam.ms.mff.cuni.cz) and Institute of Computer Science, Academy of Sciences, Prague, Czech Republic (rohn@uivt.cas.cz)

1 Introduction
Consider a family of linear programming (LP) problems

$\min \{ c^T x;\ Ax = b,\ x \ge 0 \}$   (1.1)

for all data satisfying

$A \in A^I,\quad b \in b^I,\quad c \in c^I$   (1.2)

where $A^I = \{ A;\ \underline{A} \le A \le \overline{A} \}$ is an $m \times n$ interval matrix, $m \le n$, and $b^I = \{ b;\ \underline{b} \le b \le \overline{b} \}$, $c^I = \{ c;\ \underline{c} \le c \le \overline{c} \}$ are interval vectors of dimensions $m$ and $n$, respectively (the inequalities are understood componentwise). The family (1.1), (1.2) may be interpreted as a linear programming problem with inexact data, or as a fully parametrized parametric linear programming problem.
The problem of existence of optimal solutions of all linear programming problems in the family (1.1), (1.2) was addressed in [6]. There it was proved that each LP problem (1.1) with data satisfying (1.2) has an optimal solution if and only if the LP problem

$\min \{ \underline{c}^T x;\ \underline{A} x \le \overline{b},\ \overline{A} x \ge \underline{b},\ x \ge 0 \}$

has an optimal solution and each of the $2^m$ systems $Ax = b$, each row of which is either of the form $(\underline{A} x)_i = \overline{b}_i$ or of the form $(\overline{A} x)_i = \underline{b}_i$ ($i = 1, \ldots, m$), has a nonnegative solution. Hence, we have a finitely verifiable necessary and sufficient condition, but the number of systems to be checked for nonnegative solvability is exponential in $m$.
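For illustration, this finitely verifiable condition can be checked directly (at exponential cost) by enumerating the $2^m$ endpoint systems. The following is a minimal sketch under the reconstruction above, assuming the interval data are given by their componentwise lower and upper bounds; the function and variable names are ours, not the report's, and scipy's linprog is used only as an LP/feasibility oracle.

import itertools
import numpy as np
from scipy.optimize import linprog

def every_lp_has_optimum(A_lo, A_up, b_lo, b_up, c_lo):
    """Exponential-time test of the condition from [6] as restated above."""
    m, n = A_lo.shape
    # Boundedness part: min c_lo^T x  s.t.  A_lo x <= b_up,  A_up x >= b_lo,  x >= 0
    aux = linprog(c_lo,
                  A_ub=np.vstack([A_lo, -A_up]),
                  b_ub=np.concatenate([b_up, -b_lo]),
                  bounds=(0, None), method="highs")
    if aux.status != 0:          # auxiliary problem infeasible or unbounded
        return False
    # Feasibility part: each of the 2^m endpoint systems must have a solution x >= 0,
    # row i taken either as (A_lo x)_i = (b_up)_i or as (A_up x)_i = (b_lo)_i.
    for pattern in itertools.product([0, 1], repeat=m):
        pick = np.array(pattern)
        A_s = np.where(pick[:, None] == 0, A_lo, A_up)
        b_s = np.where(pick == 0, b_up, b_lo)
        feas = linprog(np.zeros(n), A_eq=A_s, b_eq=b_s,
                       bounds=(0, None), method="highs")
        if feas.status != 0:     # this endpoint system has no nonnegative solution
            return False
    return True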
In the main result of this paper we show that the problem in question is NP-hard (cf. Garey and Johnson [1]). Hence, unless the famous conjecture "P $\ne$ NP" [1] is false, there does not exist a necessary and sufficient condition for checking existence of optimal solutions of all LP problems (1.1), (1.2) which could be verified in polynomial time. The proof given in section 2 shows that even checking feasibility of all LP problems in the family (1.1), (1.2) is NP-hard. Some concluding remarks are given in section 3.

2 Main result
Theorem 1. The following problem is NP-hard:

Instance. $A^I$, $b^I$, $c^I$ (with rational bounds).

Question. Does each LP problem (1.1) with data (1.2) have an optimal solution?
Proof. 0) For the purpose of the proof, let us introduce $A_c = \frac{1}{2}(\underline{A} + \overline{A})$, $\Delta = \frac{1}{2}(\overline{A} - \underline{A})$, $b_c = \frac{1}{2}(\underline{b} + \overline{b})$ and $\delta = \frac{1}{2}(\overline{b} - \underline{b})$, so that

$A^I = [A_c - \Delta,\ A_c + \Delta]$  and  $b^I = [b_c - \delta,\ b_c + \delta].$
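In code, this midpoint-radius description of the interval data is immediate (a minimal numpy sketch; the variable names are ours):

import numpy as np

def midpoint_radius(A_lo, A_up, b_lo, b_up):
    # A^I = [A_c - Delta, A_c + Delta],  b^I = [b_c - delta, b_c + delta]
    A_c   = (A_lo + A_up) / 2
    Delta = (A_up - A_lo) / 2
    b_c   = (b_lo + b_up) / 2
    delta = (b_up - b_lo) / 2
    return A_c, Delta, b_c, delta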
The proof goes through several steps.
1) First we prove that each system

$Ax = b,\ x \ge 0$   (2.1)

with data satisfying

$A \in A^I,\ b \in b^I$   (2.2)

has a solution if and only if

$(\forall y)\ (A_c^T y + \Delta^T |y| \ge 0 \Rightarrow b_c^T y - \delta^T |y| \ge 0)$   (2.3)

holds. "Only if": Let each system (2.1) with data (2.2) have a solution, and let $A_c^T y + \Delta^T |y| \ge 0$ for some $y \in \mathbb{R}^m$. Define a diagonal matrix $T$ by $T_{ii} = 1$ if $y_i \ge 0$, $T_{ii} = -1$ if $y_i < 0$, and $T_{ij} = 0$ if $i \ne j$ ($i, j = 1, \ldots, m$); then $|y| = Ty$. Consider now the system

$(A_c + T\Delta) x = b_c - T\delta,\ x \ge 0.$   (2.4)

Since $A_c + T\Delta \in A^I$ and $b_c - T\delta \in b^I$, the system (2.4) has a solution according to the assumption, and $(A_c + T\Delta)^T y = A_c^T y + \Delta^T |y| \ge 0$; hence the Farkas lemma [3] applied to (2.4) gives that $b_c^T y - \delta^T |y| = (b_c - T\delta)^T y \ge 0$, which proves (2.3). "If": Assuming that (2.3) holds, consider a system (2.1) with data satisfying (2.2). Let $A^T y \ge 0$ for some $y$; then $A_c^T y + \Delta^T |y| \ge (A_c + A - A_c)^T y = A^T y \ge 0$, hence (2.3) gives that $b^T y = (b_c + b - b_c)^T y \ge b_c^T y - \delta^T |y| \ge 0$. Thus we have proved that for each $y$, $A^T y \ge 0$ implies $b^T y \ge 0$, and the Farkas lemma proves the existence of a solution to (2.1).
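The "only if" argument implicitly yields a finite test for this characterization: as the proof shows, every system (2.1), (2.2) is solvable exactly when the system $(A_c + T\Delta)x = b_c - T\delta,\ x \ge 0$ is solvable for every $\pm 1$ diagonal matrix $T$. A minimal sketch of that test (our names; scipy's linprog is used only as a feasibility oracle):

import itertools
import numpy as np
from scipy.optimize import linprog

def all_systems_solvable(A_c, Delta, b_c, delta):
    """Check nonnegative solvability of the 2^m sign-pattern systems of the form (2.4)."""
    m, n = A_c.shape
    for signs in itertools.product([-1.0, 1.0], repeat=m):
        T = np.diag(signs)
        A_T = A_c + T @ Delta
        b_T = b_c - T @ delta
        res = linprog(np.zeros(n), A_eq=A_T, b_eq=b_T,
                      bounds=(0, None), method="highs")
        if res.status != 0:   # no x >= 0 solves this sign-pattern system
            return False
    return True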
2) For a given square $m \times m$ interval matrix $A^I_0 = [A^0_c - \Delta_0,\ A^0_c + \Delta_0]$, construct an $m \times 2m$ interval matrix

$A^I = [A_c - \Delta,\ A_c + \Delta]$   (2.5)

with

$A_c = (A^{0T}_c,\ -A^{0T}_c)$   (2.6)

$\Delta = (\Delta^T_0,\ \Delta^T_0)$   (2.7)

and an interval $m$-vector

$b^I = [-e,\ e]$   (2.8)

(here $e$ denotes the vector of all ones).
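As a quick illustration of the construction (2.5)-(2.8), here is a minimal sketch that builds the $m \times 2m$ interval matrix and the interval vector from a given square interval matrix $[A^0_c - \Delta_0,\ A^0_c + \Delta_0]$ (names are ours; this only restates the formulas above in code):

import numpy as np

def build_reduction(A0_c, Delta0):
    m = A0_c.shape[0]
    A_c   = np.hstack([A0_c.T, -A0_c.T])     # (2.6): m x 2m midpoint
    Delta = np.hstack([Delta0.T, Delta0.T])  # (2.7): m x 2m radius
    e = np.ones(m)
    b_lo, b_up = -e, e                       # (2.8): b^I = [-e, e]
    return A_c, Delta, b_lo, b_up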

Citations
Book

Linear Optimization Problems with Inexact Data

TL;DR: Theorem A: solvability of systems of interval linear equations and inequalities and optimization problems over max-algebras.
Journal ArticleDOI

Weak and strong solvability of interval linear systems of equations and inequalities

TL;DR: In this article, weak and strong solvability of general interval linear systems consisting of mixed equations and inequalities with mixed free and sign-restricted variables is studied. But the authors do not consider the problem of computing the limits of the optimal values for any form of the problem setting.

Nonnegative matrix factorization: complexity, algorithms and applications

TL;DR: This thesis explores a closely related problem, namely nonnegative matrix factorization (NMF), a low-rank matrix approximation problem with nonnegativity constraints, and makes connections with well-known problems in graph theory, combinatorial optimization and computational geometry.
Book ChapterDOI

Solvability of systems of interval linear equations and inequalities

J. Rohn
TL;DR: In this paper, the authors deal with solvability and feasibility of systems of interval linear equations and inequalities, and show that four problems are solvable in polynomial time and four are NP-hard.
Journal ArticleDOI

Interval systems of max-separable linear equations

TL;DR: In this article, the authors give necessary and sufficient conditions enabling efficient testing of weak and strong solvability over max-plus and max-min algebras over interval systems.
References
Journal ArticleDOI

Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides

W. Oettli and W. Prager
TL;DR: In this article, conditions are established under which a given approximate solution of a system of n linear equations with n unknowns is the exact solution of the modified system whose coefficients and right-hand sides are within a given neighborhood of those of the original system.
Journal ArticleDOI

Checking robust nonsingularity is NP-hard

TL;DR: It is shown that the question of whether A0 + r1A1 + ··· + rkAk is nonsingular for all possible choices of real numbers r1, ..., rk in the interval [0, 1] is NP-hard.
Journal ArticleDOI

Strong solvability of interval linear programming problems

TL;DR: Necessary and sufficient conditions are given for the problem of linear optimization with interval coefficients under which any linear programming problem with parameters being fixed in these intervals has a finite optimum.
Frequently Asked Questions (6)
Q1. What are the contributions in this paper?

The authors prove that the problem of checking whether all linear programming problems whose data range in prescribed intervals have optimal solutions is NP-hard.

But since the problem of checking regularity of interval matrices is NP-hard (Poljak and Rohn), the problem of checking whether each system (2.1) with data satisfying (2.2) has a solution is NP-hard as well. For an $m \times n$ interval matrix $A^I$ and an interval $m$-vector $b^I$, consider the family of LP problems $\min \{ c^T x;\ Ax = b,\ x \ge 0 \}$.

Thus the NP-hardness of their problem has nothing to do with the amount of uncertainty in the data; it is caused by the exponential number of vertices of an interval matrix $A^I$ (Poljak and Rohn). Nevertheless, the worst-case type result of Theorem 1 does not preclude efficient solvability of many practical examples.

The problem of existence of optimal solutions of all linear programming problems in the family (1.1), (1.2) was addressed in [6]. There it was proved that each LP problem (1.1) with data satisfying (1.2) has an optimal solution if and only if the LP problem $\min \{ \underline{c}^T x;\ \underline{A} x \le \overline{b},\ \overline{A} x \ge \underline{b},\ x \ge 0 \}$ has an optimal solution and each of the $2^m$ endpoint systems described in the introduction has a nonnegative solution.

$\min \{ c^T x;\ Ax = b,\ x \ge 0 \}$ for $A \in A^I$, $b \in b^I$, $c^I = [e, e]$. Since the objective $e^T x$ is bounded from below, such a problem has an optimal solution if and only if it is feasible. Hence each system $Ax = b$, $x \ge 0$ with data satisfying $A \in A^I$, $b \in b^I$ has a solution if and only if each LP problem (1.1) with data (1.2) has an optimal solution.

In fact, according to part 1), Eq. (2.3), some system (2.1) with data (2.2) does not have a solution if and only if there exists a vector $y$ satisfying $A_c^T y + \Delta^T |y| \ge 0$ and $e^T |y| > 0$, which is equivalent to $|A^0_c y| \le \Delta_0 |y|$ and $y \ne 0$. Then the Oettli-Prager theorem gives that this is equivalent to the existence of a singular matrix in $A^I_0 = [A^0_c - \Delta_0,\ A^0_c + \Delta_0]$. This proves the assertion. Given a square $m \times m$ interval matrix $A^I_0$, construct an $m \times 2m$ interval matrix $A^I$ and an interval vector $b^I$ by (2.5)-(2.8); this can be done in polynomial time. According to part 2), checking regularity of $A^I_0$ can thus be reduced in polynomial time to checking solvability of all systems (2.1), (2.2).
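The singularity criterion invoked in this last answer, $|A^0_c y| \le \Delta_0 |y|$ with $y \ne 0$ (an Oettli-Prager-type condition), is easy to evaluate for a candidate vector $y$; a minimal sketch (our names; the tolerance parameter is our addition for floating-point use):

import numpy as np

def certifies_singularity(A0_c, Delta0, y, tol=0.0):
    # A nonzero y with |A0_c y| <= Delta0 |y| (componentwise) witnesses that some
    # matrix in [A0_c - Delta0, A0_c + Delta0] is singular.
    y = np.asarray(y, dtype=float)
    return bool(np.any(y != 0)) and bool(np.all(np.abs(A0_c @ y) <= Delta0 @ np.abs(y) + tol))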