
Minkowski's convex body theorem and integer programming

Ravi Kannan
Mathematics of Operations Research, 01 Aug 1987, Vol. 12, Iss. 3, pp. 415-440
Abstract
The paper presents an algorithm for solving Integer Programming problems whose running time depends on the number n of variables as n^{O(n)}. This is done by reducing an n variable problem to (2n)^{5i/2} problems in n - i variables for some i greater than zero chosen by the algorithm. The factor of O(n^{5/2}) "per variable" improves the best previously known factor, which is exponential in n. Minkowski's Convex Body theorem and other results from the Geometry of Numbers play a crucial role in the algorithm. Several related algorithms for lattice problems are presented. The complexity of these problems with respect to polynomial-time reducibilities is studied.



CMU-CS-96-105
Minkowski's Convex Body Theorem and
Integer Programming
Ravi Kannan
Abstract
The paper presents an algorithm for solving Integer Programming problems whose running time depends on the number n of variables in the problem as n^{O(n)}. This is done by reducing an n variable problem to (2n)^{5i/2} problems in n - i variables for some i greater than zero. The factor of n^{5/2} "per variable" improves on the best previously known factor, which is exponential in n. Minkowski's Convex Body theorem and other results from the Geometry of Numbers play a crucial role in the algorithm; they are explained from first principles.
Supported by NSF grant ECS-8418392

Introduction
The Integer Programming (feasibility) Problem is the problem of determining whether there is a vector of integers satisfying a given system of linear inequalities. In settling an important open problem, H. W. Lenstra (1981, 1983) showed in an elegant way that when n, the number of variables, is fixed, there is a polynomial time algorithm to solve this problem. He accomplishes this by giving a polynomial time algorithm that, for any polytope P in R^n, either finds an integer point (a point with all integer coordinates) in P or finds an integer vector v so that the maximum value of (v, x) and the minimum value of (v, x) over the polytope P differ by less than c^n, where c is a constant independent of n. Every integer point must lie on a hyperplane of the form (v, x) = z for some integer z, and there are at most c^n such hyperplanes intersecting P. It obviously suffices to determine, for each such hyperplane H, whether H ∩ P contains an integer point. Lenstra uses this to show that an n variable problem can be reduced to c^n problems, each in n - 1 variables. This raises two questions: Can we effectively reduce an n variable problem to polynomially many (n - 1) variable problems? Can the reduction be done efficiently so as to achieve a better complexity for Integer Programming? Both these questions are answered affirmatively in this paper.
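Lenstra's branching step can be made concrete: given the direction v, the number of candidate hyperplanes (v, x) = z meeting P is just the number of integers z between the minimum and maximum of (v, x) over P. A minimal sketch, under the simplifying assumption that P is given by its vertex list (not how the paper represents polytopes):

```python
import math

def hyperplane_count(vertices, v):
    """Count the hyperplanes (v, x) = z, z an integer, that can meet the
    polytope with the given vertices -- the branching factor in Lenstra's
    scheme. A count of 0 certifies that P contains no integer point."""
    vals = [sum(vi * xi for vi, xi in zip(v, x)) for x in vertices]
    return max(0, math.floor(max(vals)) - math.ceil(min(vals)) + 1)

# A thin triangle whose x-coordinates all lie strictly between 0 and 1:
# no hyperplane x = z meets it, so it has no integer point.
print(hyperplane_count([(0.2, 0.1), (0.8, 0.1), (0.5, 0.9)], (1, 0)))  # -> 0
# A larger triangle meets the four hyperplanes x = 0, 1, 2, 3.
print(hyperplane_count([(0, 0), (3, 0), (0, 3)], (1, 0)))              # -> 4
```

Each counted hyperplane then yields one subproblem in one fewer variable.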
If an n variable problem is reduced to polynomially many (n - 1) variable problems, the best complexity we can achieve is n^{cn} for some constant c, so we are at liberty to take this amount of time for the reduction to one less variable. Furthermore, the same result is obviously achieved if we reduce an n variable problem to problems in n - i variables for some i between 1 and n. Indeed, the greater the i the better, since then we reduce the number of variables by a larger amount. This paper presents an algorithm which either finds an integer point in the given polytope P in R^n or finds, for some i, 1 <= i <= n, an (n - i) dimensional subspace V with the following property: the number of translates of V containing integer points that intersect P is at most n^{5i/2}. Each such translate leads to an (n - i) dimensional problem. So, it can be shown that there is a factor of O(n^{5/2}) per variable in the running time. In this sense, it reduces an n variable problem effectively to O(n^{5/2}) problems in n - 1 variables. The algorithm for finding the subspace V uses at most O(n^n s) arithmetic operations, where s is the length of description of the polytope. The dependence on n of the complete integer programming algorithm is shown to be n^{O(n)}.
This paper is the final journal version of the preliminary paper Kannan (1983). Since the appearance of the preliminary version, Hastad (1985) has observed, using results of Lenstra and Schnorr (1984), that for any polytope P of positive volume in R^n, if P does not contain an integer point, then there exists an integer vector v such that the maximum and minimum of (v, x) over P differ by at most O(n^{5/2}). This is an interesting existence result. But there is no finite algorithm known that, with P as input, either gives us an integer point in P or the vector v. If we relax the O(n^{5/2}) to O(n^3), then we can get such an algorithm using the techniques of this paper; it uses O(n^n s) arithmetic operations. This gives a way of reducing an n variable Integer Program to O(n^3) problems in n - 1 variables.

However, the resulting algorithm for Integer Programming has, obviously, asymptotically
worse complexity, so it is not presented here.
This paper uses several concepts and results from the Geometry of Numbers, the most crucial of them being Minkowski's convex body theorem. This elegant classical theorem turns out to be crucial in effectively reducing an n variable problem to polynomially many (n - 1) variable problems rather than an exponential number of them. Section 1 contains a brief introduction to the Geometry of Numbers to make the paper self-contained for Operations Researchers and Computer Scientists.
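In its most-used corollary form, Minkowski's theorem guarantees that a lattice of determinant d in R^n contains a nonzero vector of Euclidean length at most sqrt(n) * d^{1/n}. A small numerical sanity check of this bound on a 2-dimensional example, using brute-force search (the coefficient bound 8 below is an arbitrary choice for the illustration, not a derived bound):

```python
import itertools
import math

def shortest_nonzero(basis, coeff_bound=8):
    """Brute-force the length of the shortest nonzero integer combination
    of the basis vectors, searching coefficients in [-coeff_bound, coeff_bound]."""
    n = len(basis)
    best = None
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(n)]
        length = math.hypot(*v)
        if best is None or length < best:
            best = length
    return best

basis = [[3, 1], [1, 2]]                 # determinant 3*2 - 1*1 = 5
n, det = 2, 5
bound = math.sqrt(n) * det ** (1 / n)    # Minkowski: lambda_1 <= sqrt(n) * det^{1/n}
lam1 = shortest_nonzero(basis)
print(lam1, bound)                       # shortest length vs. Minkowski bound
```

Here the shortest nonzero vector is (2, -1) = b_1 - b_2 of length sqrt(5), comfortably below the bound sqrt(10).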
The integer programming algorithm will be presented after two other algorithms: one for finding the shortest (in Euclidean length) non-zero integer linear combination of a given set of vectors, and the other for finding the integer linear combination of a set of vectors that is closest (in Euclidean distance) to another given vector. These are called the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) respectively. The algorithms for both problems take O(n^n s) arithmetic operations on n dimensional problems, where s is the length of the input. The algorithm for the SVP is needed as a subroutine in the integer programming algorithm, whereas the algorithm for the CVP is not directly needed, but has ideas that will be useful in integer programming.
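For intuition about the CVP, the simplest standard approximation (Babai's rounding procedure, which is not the algorithm of this paper) expresses the target point in the lattice basis, rounds the coordinates to integers, and maps back; with a well-reduced basis the resulting lattice point is provably close to optimal. A 2x2 sketch in exact rational arithmetic:

```python
from fractions import Fraction

def babai_round_2d(basis, target):
    """Approximate CVP by Babai rounding (2x2 case, via Cramer's rule).
    basis: two row vectors (a, b) and (c, d); target: a pair of rationals."""
    (a, b), (c, d) = basis
    det = Fraction(a * d - b * c)
    t1, t2 = Fraction(target[0]), Fraction(target[1])
    # coefficients y solving y1*(a, b) + y2*(c, d) = (t1, t2)
    y1 = (t1 * d - t2 * c) / det
    y2 = (a * t2 - b * t1) / det
    k1, k2 = round(y1), round(y2)        # round to a nearby lattice point
    return (k1 * a + k2 * c, k1 * b + k2 * d)

# With the standard basis, rounding solves the CVP exactly:
print(babai_round_2d([[1, 0], [0, 1]], (Fraction(37, 10), Fraction(-6, 5))))  # -> (4, -1)
```

With a badly skewed basis the rounded point can be far from optimal, which is exactly why basis reduction matters here.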
It is well-known that Integer Programming is NP-hard. It has been shown recently that CVP is NP-hard. At present, it is not known whether SVP is NP-hard or admits a polynomial time algorithm (or both!). The last section of the paper provides another, more natural proof that CVP is NP-hard. Further, it relates the complexity of the SVP to an approximate version of the CVP. It is hoped that this is a beginning towards proving the NP-hardness of the SVP, which remains an important open problem.
Summary of the paper
Operations Researchers are usually interested in solving the Integer Programming Optimality problem, i.e., the problem of maximizing a linear function over the set of integer solutions (solutions with all integer coordinates) to a system of linear inequalities. This question can be reduced by elementary means to the Integer Programming feasibility question, which is the problem of determining whether there is an integer point inside a given polyhedron. This paper deals only with the feasibility question, and this will be called the Integer Programming Problem. Computationally, it can be stated as: Given m x n and m x 1 matrices A and b, respectively, of integers, find whether there exists an n x 1 vector x of integers satisfying the m inequalities Ax <= b. The case of n = 1 can be trivially solved in polynomial time. For the case of n = 2, Hirschberg and Wong (1976), Kannan (1980) and Scarf (1981) gave polynomial time algorithms.
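The feasibility question in the form just stated can, in principle, be settled by exhaustive search once a bounding box for candidate solutions is known; the paper's contribution is of course doing exponentially better than this. A toy checker, with the bounding box supplied by hand (an assumption of this sketch, not part of the problem statement):

```python
from itertools import product

def ip_feasible(A, b, box):
    """Toy check: does Ax <= b have an integer solution inside `box`?

    A: list of m rows (each of length n), b: list of m numbers,
    box: list of (lo, hi) integer ranges, one per variable.
    Exhaustive search -- only for tiny illustrative instances."""
    ranges = [range(lo, hi + 1) for lo, hi in box]
    for x in product(*ranges):
        if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
               for row, b_i in zip(A, b)):
            return x                     # a feasible integer point
    return None                          # no integer point in the box

# 2x + 3y <= 12, x >= 0, y >= 0, x + y >= 3  (last one written as -x - y <= -3)
A = [[2, 3], [-1, 0], [0, -1], [-1, -1]]
b = [12, 0, 0, -3]
print(ip_feasible(A, b, [(0, 6), (0, 4)]))  # -> (0, 3)
```

The search visits up to prod(hi - lo + 1) points, which is why the dependence on n is the whole story in what follows.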
Central to H. W. Lenstra's algorithm for general n is an algorithm for finding a "reduced basis" of a "lattice" (both terms to be explained later). Lenstra's (1981) original basis reduction algorithm takes polynomial time only when the number of dimensions is fixed. After his result, Lovasz devised a basis reduction algorithm which runs in polynomial time even when n, the number of dimensions, varies. This algorithm, combined with an earlier result of A. K. Lenstra's (1981) that reduced factoring a polynomial to finding a short vector in a lattice, yields a polynomial time algorithm for factoring polynomials over the rationals. All these ideas were first published in an important paper of Lenstra, Lenstra and Lovasz (1983).
This paper is referred to henceforth as the LLL paper. Here, the following result from the LLL paper is used: Given a set of vectors b_1, b_2, ..., b_n, we can find in polynomial time a nonzero integer linear combination of them whose length is at most 2^{n/2} times the length of any (other) nonzero integer linear combination. In addition, we will need a technical result from H. W. Lenstra's paper which is due to Lovasz. This result is stated in the section on integer programming.
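The guarantee just quoted comes from the LLL reduction itself. A textbook sketch of that algorithm in exact rational arithmetic (this is the LLL paper's algorithm in its simplest form, recomputing Gram-Schmidt wastefully for clarity; it is not the "more reduced basis" algorithm of Section 2):

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction with parameter delta = 3/4 (pure Python)."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                               # Lovasz condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]      # swap and back up
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return b

print(lll([[201, 37], [1648, 297]]))             # a much shorter basis of the same lattice
```

The first vector of the output is within the 2^{(n-1)/2} factor of the shortest nonzero lattice vector, which is the form of the guarantee used above.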
Section 1 introduces lattices and proves Minkowski's theorem. Section 2 presents an algorithm for finding a "more reduced basis" of a lattice than the LLL algorithm. While the end product of this algorithm is better because it is "more reduced", it also takes more time (O(n^n s) arithmetic operations) than the LLL algorithm. The first vector of the "more reduced basis" will be a shortest nonzero vector in the lattice. This solves the SVP mentioned in the abstract. Section 2 closes with a proof of correctness and a bound on the number of arithmetic operations. Section 3, the most technical section of the paper, proves bounds on the size of numbers produced by the algorithm in Section 2.
The second major algorithm in the paper is for solving the CVP and is given in Section 4. It uses as a subroutine the algorithm for finding the "more reduced basis". After these, the algorithm for Integer Programming is given in Section 5. It performs O(n^{2n} s) arithmetic operations for an n variable problem and produces numbers with O(n^{2n} s) bits, where s is the length of the input. In a recent paper, Frank and Tardos (1985) show that all the numbers can be kept polynomially bounded in their number of bits. Their improvement also brings down the number of arithmetic operations of the algorithm to O(n^n s).
Here is a brief overview of the algorithms: The algorithm for the SVP first solves it approximately, then enumerates a bounded number of candidates for the shortest nonzero vector and chooses the best. Minkowski's theorem implies that this set of candidates suffices. In the algorithms for the CVP and integer programming, the original problem is transformed so that, by appealing to Minkowski's theorem, the transformed problem can be reduced to a bounded number of lower dimensional problems.
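The "reduce to a bounded number of lower dimensional problems" step can be caricatured in two dimensions: fix how many times the last basis vector is used (each choice selects one translate of a one-dimensional sublattice), solve the remaining one-dimensional closest-vector problem exactly, and keep the best answer. The range of translates searched below (+-10) is an arbitrary choice for the illustration, not the bound proved in the paper:

```python
import math

def cvp_2d_by_slicing(basis, target):
    """Toy CVP by slicing: enumerate translates of the sublattice spanned by
    the first basis vector, solve each 1-D closest-vector problem, keep the best."""
    (a1, a2), (b1, b2) = basis
    best, best_v = None, None
    for k in range(-10, 11):                     # translates k * (b1, b2)
        # remaining 1-D problem: minimize ||m*(a1, a2) + k*(b1, b2) - target||
        rx, ry = target[0] - k * b1, target[1] - k * b2
        # best real coefficient m, then check its two integer neighbours
        m_real = (rx * a1 + ry * a2) / (a1 * a1 + a2 * a2)
        for m in (math.floor(m_real), math.ceil(m_real)):
            v = (m * a1 + k * b1, m * a2 + k * b2)
            d = math.hypot(v[0] - target[0], v[1] - target[1])
            if best is None or d < best:
                best, best_v = d, v
    return best_v

print(cvp_2d_by_slicing([[1, 0], [0, 1]], (2.4, -3.6)))  # -> (2, -4)
```

Each value of k is one lower-dimensional subproblem; the paper's contribution is bounding how many such translates need to be examined.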
The last section of the paper contains some results on complexity. The Closest Vector Problem is shown to be NP-hard by reducing 3-dimensional matching to it. Then the Yes/No question that corresponds to the Shortest Vector Problem in a natural way is defined: it is the question of whether there is a nonzero integer linear combination of a set of given vectors of length less than or equal to a given number. The SVP is shown to be polynomial-time reducible to the Yes/No question. Then, using a technique called "homogenization" from polyhedral theory, it is shown that the problem of solving the CVP to within a factor of sqrt(n)/2 is polynomial-time reducible to the Yes/No question. I conjecture that this approximate version of the CVP is NP-hard. If the conjecture is proved, it would be the case that the Yes/No question is NP-complete in the sense of Cook (1971) and the reduction essentially is a Cook (Turing) reduction rather than a many-one reduction.
[Footnote: (2.6) gives a definition of the "more reduced basis". It is a very natural concept.]

References
Cook, S. A. (1971). The complexity of theorem-proving procedures. Proceedings of the Third Annual ACM Symposium on Theory of Computing.
Karp, R. M. (1972). Reducibility among combinatorial problems. In Complexity of Computer Computations, Plenum Press.
Lenstra, A. K., Lenstra, H. W., Jr., and Lovasz, L. (1982). Factoring polynomials with rational coefficients. Mathematische Annalen 261, 515-534.