
© de Gruyter 2008
J. Math. Crypt. 2 (2008), 181–207    DOI 10.1515/JMC.2008.009
Sieve algorithms for the shortest vector problem
are practical
Phong Q. Nguyen and Thomas Vidick
Communicated by Tran van Trung
Abstract. The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity 2^{O(n)}, which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS has ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the 2^{O(n)} complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is (4/3+ε)^n polynomial-time operations, and whose space requirement is (4/3+ε)^{n/2} polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.
Keywords. Lattices, AKS Algorithm, sieve, LLL, enumeration.
AMS classification. 11Y16, 11H06.
1 Introduction
Lattices are discrete subgroups of R^m. A lattice L can be represented by a basis, that is, a set of n ≤ m linearly independent vectors b_1, …, b_n in R^m such that L is equal to the set L(b_1, …, b_n) = {∑_{i=1}^n x_i b_i : x_i ∈ Z} of all integer linear combinations of the b_i's. The integer n is called the dimension of the lattice L, and helps to measure the hardness of lattice problems. Every lattice has a shortest vector, that is, a non-zero vector whose Euclidean norm is minimal among all non-zero lattice vectors. The shortest vector problem (SVP) asks for such a vector: it is the most famous lattice problem, and is one of the very few potentially hard problems currently in use in public-key cryptography (see [26, 24] for surveys on lattice-based cryptosystems, and [15, 27] for recent developments). SVP is also well known for its applications in public-key cryptanalysis (see [26]): knapsack cryptosystems, RSA in special settings, DSA signatures in special settings, etc.
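To make these definitions concrete, here is a minimal brute-force sketch (not one of the algorithms discussed in the paper): it enumerates all integer combinations with bounded coefficients and keeps the shortest non-zero one. The example basis and the coefficient bound are arbitrary choices for illustration.

```python
import itertools
import math

def brute_force_svp(basis, bound):
    """Return the shortest non-zero vector among all integer combinations
    sum(x_i * b_i) with |x_i| <= bound.  For a large enough bound this is
    a shortest lattice vector, but the search space grows as (2*bound+1)^n,
    so this is only feasible in very small dimension."""
    n = len(basis)
    best, best_norm = None, math.inf
    for x in itertools.product(range(-bound, bound + 1), repeat=n):
        v = [sum(x[i] * basis[i][j] for i in range(n)) for j in range(len(basis[0]))]
        norm = math.sqrt(sum(c * c for c in v))
        if 0 < norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

# Example: the lattice generated by (1, 1) and (0, 3).
# Its shortest non-zero vectors are +-(1, 1), of norm sqrt(2).
basis = [[1, 1], [0, 3]]
v, l1 = brute_force_svp(basis, 3)
```

The exponential blow-up of this naive search is exactly why the enumeration and sieve algorithms discussed below are needed.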
SVP algorithms can be classified in two categories: exact algorithms [21, 20, 4]
(which provably output a shortest vector), and approximation algorithms [23, 31, 12,
13] (which output a non-zero lattice vector whose norm is provably not much bigger
than that of a shortest vector). In high dimension (higher than 100), only approximation
algorithms are practical, but both categories are in fact complementary: all exact
algorithms known first apply an approximation algorithm (such as LLL [23]) as a pre-
processing, while all approximation algorithms known make intensive use of an exact
algorithm in low dimension. More precisely, the celebrated LLL approximation algo-
rithm [23] relies essentially on finding shortest vectors in dimension two, while the best
approximation algorithm known (as well as its predecessors by Schnorr [31] and Gama
et al. [12]), that of Gama and Nguyen [13], call (polynomially many times) an exact
algorithm in dimension k, where the blocksize k is chosen in such a way that the cost
of the overall algorithm remains polynomial. The heuristic BKZ algorithm [33] (im-
plemented in NTL [34] and often used by cryptanalysts, see [14] for an experimental
assessment) also crucially relies on an exact SVP algorithm in small dimension (typ-
ically chosen around 20): note however that recent experiments [14] suggest that the
running-time bottleneck for BKZ on high-dimensional lattices is caused by the large
number of calls, rather than the cost of the exact algorithm, which is why the BKZ
blocksize is usually much smaller than the highest possible dimension for exact SVP.
Thus, the best practical and/or theoretical SVP approximation algorithms all require an
efficient exact SVP algorithm in low dimension.
It is therefore very important to know what is the best exact SVP algorithm in low
dimension (say, less than 60), and to determine the highest dimension in which one
can solve exact SVP in the worst case. Because SVP is known to be NP-hard under
randomized reductions [3], exact algorithms are not expected to run in polynomial time.
Surprisingly, there are so far essentially only two different algorithms for exact SVP:
- The deterministic enumeration algorithm discovered by Kannan and Pohst [28, 21] and its many variants [21, 20, 11, 33], which are essentially all surveyed in [1]. This algorithm enumerates a super-exponential number of potential shortest vectors, given a reduced basis. If the basis is only LLL-reduced, the running time is 2^{O(n^2)} polynomial-time operations, but Kannan [21] showed that one can perform suitable preprocessing in such a way that the overall running time (including preprocessing) is 2^{O(n log n)} polynomial-time operations (see [18, 20] for a better constant than [21], and see [19] for a worst-case lattice basis). The algorithm used in practice is the Schnorr–Euchner variant [33] of the enumeration strategy, where the basis is either LLL reduced or BKZ reduced: here, the running time is therefore 2^{O(n^2)} polynomial-time operations.
- The randomized sieve algorithm [4] proposed in 2001 by Ajtai, Kumar and Sivakumar (AKS), whose running time is 2^{O(n)} polynomial-time operations. One drawback of AKS is that it has space complexity 2^{O(n)}, whereas enumeration algorithms only require polynomial space. Even though this is a big drawback, one might still hope that this could be preferable to the 2^{O(n^2)} time complexity of currently used practical SVP enumeration algorithms.
Although the exponential complexity of AKS seems a priori much better than the
super-exponential complexity of enumeration algorithms, no implementation and no
practical analysis of AKS have ever been reported. This can perhaps be explained as
follows:
- AKS is a fairly technical algorithm, which is very different from all other lattice algorithms. Ajtai, Kumar and Sivakumar use many parameters in their description [4], and their analysis does not explain what could be the optimal choice for these parameters. In particular, no explicit value of the O() constant in the 2^{O(n)} complexity of AKS is given in the original paper [4].
- It was widely believed that AKS was impractical, because AKS uses exponential space and the complexity constants were thought to be large. Schnorr claimed in [32] that the O() constant was at least 30, but Regev's alternative analysis [29] showed that it was at most 16.
OUR RESULTS. We show that sieve algorithms for the shortest vector problem are in fact practical, by developing an efficient heuristic variant of AKS which experimentally finds shortest vectors in dimension 50. Our variant runs in (4/3 + ε)^n polynomial-time operations and uses (4/3 + ε)^{n/2} polynomially many bits, where the 4/3 constant is derived from a sphere covering problem. Interestingly, the 4/3 constant is intuitively tight on the average, and seems to be supported by our experiments. To understand the principles of sieve algorithms, we first present a concrete analysis of the original AKS algorithm [4]. By choosing the AKS parameters carefully, we obtain a probabilistic algorithm which outputs a shortest vector with probability exponentially close to 1 within 2^{5.9n} polynomial-time operations. Though this shows that the original AKS algorithm is much more efficient than previously thought, this does not guarantee the practicality of the algorithm. Still, this concrete analysis is useful for the following reasons:
- The analysis is a worst-case analysis: the 2^{5.9n} running time does not reflect the true potential of sieve algorithms. For instance, the analysis also shows that the same algorithm approximates the shortest vector to within a constant factor 5 using only 2^{3n} polynomial-time operations. More generally, the analysis suggests that the real-life constants may be much smaller than the constants of the worst-case analysis. Note that many lattice algorithms typically perform better in practice than what their worst-case analysis suggests: see for instance [25] for the case of the LLL algorithm and [14] for the BKZ algorithm, where the experimental constants differ from the worst-case theoretical constants.
- The analysis explains what the essential ingredients of sieve algorithms are, which is crucial for developing faster variants.
However, our heuristic sieve algorithm turns out to be slower (up to dimension 50) than the 2^{O(n^2)} Schnorr–Euchner enumeration algorithm (with LLL preprocessing). In practice, the running time is very close to that of the 2^{O(n log n)} Kannan–Helfrich enumeration algorithm [21, 20]. This shows that O() constants and the exact cost of polynomial-time operations matter a lot in assessing the actual performance of lattice algorithms. We hope our results make it clear why sieve algorithms have an exponential running time, what the expected value of the exponentiation constant is in practice, and why they do not beat super-exponential enumeration techniques in practice.

ROAD MAP. The paper is organized as follows. In Section 2, we provide necessary
background on lattices. In Section 3, we recall the AKS algorithm [4], and provide
a concrete analysis of its complexity. In Section 4, we present and analyze a faster
heuristic variant of AKS, and provide experimental results.
2 Background
Let ‖·‖ and ⟨·,·⟩ denote the Euclidean norm and inner product of R^n. Vectors will be written in bold. We denote the n-dimensional ball of center v ∈ R^n and radius R by B_n(v, R) = {x ∈ R^n : ‖x − v‖ ≤ R}, and we let B_n(R) = B_n(O, R). For a matrix M whose name is a capital letter, we will usually denote its coefficients by m_{i,j}. For any finite set S, let |S| denote its number of elements. For any X ⊆ R^n, we denote by vol(X) the volume of X. We refer to the survey [26] for a bibliography on lattices.
LATTICES. In this paper, by the term lattice, we mean a discrete subgroup of some R^m. The simplest lattice is Z^n, and for any linearly independent vectors b_1, …, b_n, the set L(b_1, …, b_n) = {∑_{i=1}^n m_i b_i : m_i ∈ Z} is a lattice. It turns out that in any lattice L, not just Z^n, there must exist linearly independent vectors b_1, …, b_n ∈ L such that L = L(b_1, …, b_n). Any such n-tuple of vectors b_1, …, b_n is called a basis of L: a lattice can be represented by a basis, that is, a row matrix. The dimension of a lattice L is the dimension n of the linear span of L. The lattice is full-rank if n is the dimension of the space. The first minimum λ_1(L) is the norm of a shortest non-zero vector of L.
ORTHOGONALIZATION. Given a basis B = [b_1, …, b_n], there exists a unique lower-triangular n × n matrix µ with ones on the diagonal and an orthogonal family B* = [b*_1, …, b*_n] such that B = µB*. They can be computed using Gram–Schmidt orthogonalization, and will be called the GSO of B.
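A minimal sketch of the Gram–Schmidt computation just described, with basis vectors as the rows of a matrix (the function name is ours, not from the paper):

```python
import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalization of the rows of B.
    Returns (mu, B_star) with B = mu @ B_star, where mu is
    lower-triangular with unit diagonal and the rows of B_star
    are pairwise orthogonal."""
    B = np.array(B, dtype=float)
    n = B.shape[0]
    B_star = np.zeros_like(B)
    mu = np.eye(n)
    for i in range(n):
        B_star[i] = B[i]
        for j in range(i):
            # projection coefficient of b_i onto b*_j
            mu[i, j] = np.dot(B[i], B_star[j]) / np.dot(B_star[j], B_star[j])
            B_star[i] -= mu[i, j] * B_star[j]
    return mu, B_star

B = [[3, 1], [2, 2]]
mu, B_star = gso(B)
```

By construction mu @ B_star reproduces B exactly, and the rows of B_star are orthogonal (but in general not lattice vectors).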
SIZE REDUCTION. A basis [b_1, …, b_n] is size-reduced with factor η ≥ 1/2 if its GSO family satisfies |µ_{i,j}| ≤ η for all 1 ≤ j < i. An individual vector b_i is size-reduced if |µ_{i,j}| ≤ η for all 1 ≤ j < i. Size reduction usually refers to η = 1/2, and is typically achieved by successively size-reducing individual vectors. Size reduction was introduced by Hermite.
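A sketch of size reduction for η = 1/2, recomputing the GSO coefficients at every step for clarity rather than speed (the helper names are ours):

```python
import numpy as np

def gso_mu(B):
    """Return the Gram-Schmidt coefficient matrix mu of the rows of B."""
    n = B.shape[0]
    B_star = np.zeros_like(B, dtype=float)
    mu = np.eye(n)
    for i in range(n):
        B_star[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], B_star[j]) / np.dot(B_star[j], B_star[j])
            B_star[i] = B_star[i] - mu[i, j] * B_star[j]
    return mu

def size_reduce(B):
    """Size-reduce (eta = 1/2): subtract round(mu_ij) * b_j from b_i,
    for j = i-1 down to 0, so that afterwards |mu_ij| <= 1/2.
    This changes the basis but not the lattice it generates."""
    B = np.array(B, dtype=float)
    n = B.shape[0]
    for i in range(1, n):
        for j in range(i - 1, -1, -1):
            mu = gso_mu(B)  # recomputed each step for clarity, not speed
            B[i] -= round(mu[i, j]) * B[j]
    return B

# (1,0), (7,2) generates the same lattice as the size-reduced (1,0), (0,2).
B = size_reduce([[1, 0], [7, 2]])
```

The index j runs downward because subtracting round(µ_{i,j}) b_j can change µ_{i,k} for k < j but not for k > j.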
LLL REDUCTION. A basis [b_1, …, b_n] of a lattice L is LLL-reduced [23] with factor δ for 1/4 < δ ≤ 1 if its GSO satisfies |µ_{i,j}| ≤ 1/2 for all i > j, as well as the (n − 1) Lovász conditions (δ − µ²_{i+1,i}) ‖b*_i‖² ≤ ‖b*_{i+1}‖². The first vector of such a basis has the following properties: ‖b_1‖ ≤ α^{(n−1)/4} vol(L)^{1/n} and ‖b_1‖ ≤ α^{(n−1)/2} λ_1(L), where α = 1/(δ − 1/4). If no δ is given, it will mean the original choice δ = 3/4 of [23], in which case α = 2. It is well known that LLL-reduced bases can be computed in polynomial time if 1/4 < δ < 1.
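Both LLL conditions can be verified directly from the GSO. A small sketch of such a checker (our own helper, with a numerical tolerance added; it tests reducedness but does not perform reduction):

```python
import numpy as np

def is_lll_reduced(B, delta=0.75):
    """Check the LLL conditions for the rows of B: |mu_ij| <= 1/2 for
    i > j, and (delta - mu_{i+1,i}^2) * ||b*_i||^2 <= ||b*_{i+1}||^2."""
    B = np.array(B, dtype=float)
    n = B.shape[0]
    B_star = np.zeros_like(B)
    mu = np.eye(n)
    for i in range(n):
        B_star[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], B_star[j]) / np.dot(B_star[j], B_star[j])
            B_star[i] -= mu[i, j] * B_star[j]
        # size-reduction condition on row i
        if any(abs(mu[i, j]) > 0.5 + 1e-9 for j in range(i)):
            return False
    # the (n - 1) Lovasz conditions
    for i in range(n - 1):
        lhs = (delta - mu[i + 1, i] ** 2) * np.dot(B_star[i], B_star[i])
        if lhs > np.dot(B_star[i + 1], B_star[i + 1]) + 1e-9:
            return False
    return True

print(is_lll_reduced([[2, 0], [1, 2]]))  # mu = 1/2 and Lovász holds
print(is_lll_reduced([[3, 0], [1, 1]]))  # Lovász condition fails
```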
HKZ REDUCTION. A basis [b_1, …, b_n] of a lattice L is Hermite–Korkine–Zolotarev (HKZ) reduced if it is size-reduced and if b_i is a shortest vector of the projected lattice π_i(L) for all 1 ≤ i ≤ n, where π_i is the orthogonal projection onto the subspace orthogonal to Span(b_1, …, b_{i−1}). In particular, the first basis vector is a shortest vector of the lattice.

BABAI'S ROUNDING. Given a basis [b_1, …, b_n] of a full-rank lattice L, and any target vector t ∈ Q^n, linear algebra gives in polynomial time a lattice vector v ∈ L such that v − t = ∑_{i=1}^n x_i b_i where |x_i| ≤ 1/2, and therefore ‖v − t‖ ≤ ∑_{i=1}^n ‖b_i‖/2. This is Babai's rounding algorithm [6], which approximates the closest vector problem within an exponential factor if the input basis is LLL-reduced.
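Babai's rounding is a one-liner in terms of linear algebra; a minimal sketch, assuming the basis vectors are the rows of B (the function name is ours):

```python
import numpy as np

def babai_round(B, t):
    """Babai's rounding: express t in the basis given by the rows of B,
    round each coordinate to the nearest integer, and map back.
    The result v is a lattice vector with v - t = sum(x_i * b_i),
    |x_i| <= 1/2."""
    B = np.array(B, dtype=float)
    coords = np.linalg.solve(B.T, np.array(t, dtype=float))  # t = coords @ B
    return np.round(coords) @ B

# With basis rows (1, 0) and (1, 2), the point (1.4, 1.2) has basis
# coordinates (0.8, 0.6), which round to (1, 1): the lattice vector (2, 2).
v = babai_round([[1, 0], [1, 2]], [1.4, 1.2])
```

The quality of the answer depends entirely on how reduced B is, which is why the guarantee above is stated for LLL-reduced input.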
RANDOM LATTICES. There is a beautiful (albeit mathematically involved) theory of
random lattices, which was initiated by Siegel to provide an alternative proof of the
Minkowski–Hlawka theorem (see [17]). Recently, efficient methods [16, 2] have been
developed to generate provably random lattices. Random lattices are very interesting
for experiments, because they do not seem to have any exceptional property which
could a priori be exploited by algorithms. We therefore used random lattices in our
experiments, as was done in [25, 14] using [16].
3 Revisiting the AKS sieve algorithm
In this section we recall and analyze in detail the sieve algorithm by Ajtai, Kumar and
Sivakumar [4] for the shortest vector problem. The goal is to understand AKS and to
see how small the complexity constants of AKS can be, since no practical analysis of
AKS has been reported. All the main ideas of this section have previously appeared
either in the original paper [4] or in Regev’s analysis [29], but the analysis will be
useful to understand our heuristic variant of AKS, described in Section 4. We will
prove the following concrete version of the AKS algorithm:
Theorem 3.1. There is a randomized algorithm which, given as input any polynomial-size basis of an integer lattice L ⊆ Z^n, outputs a shortest vector of L with probability exponentially close to 1, using at most 2^{5.90n} polynomial-time operations on numbers of polynomial size, and with space requirement at most 2^{2.95n} polynomial-size registers. Furthermore, the same algorithm can be used to output a non-zero lattice vector of norm less than 5λ_1(L) with probability exponentially close to 1, in less than 2^{3n} polynomial-time operations on numbers of polynomial size, using at most 2^{1.5n} polynomial-size registers.
The analysis of [4] does not give any concrete value of the involved constants, while the analysis of [29] uses fixed values of the constants (for ease of exposition) which are not claimed to be optimal: more precisely, [29] shows a less efficient version of Theorem 3.1 where 5.9 is replaced by 16, because the most expensive step of [29] is a quadratic-time stage on roughly 2^{8n} points. We hope the presentation clarifies the main ideas of sieve algorithms, and provides intuition on how the various constants are related to each other. Like [4, 29], we will use real numbers for ease of exposition, but in practice all numbers will actually be represented using polynomial precision.
3.1 Overview
Let L ⊆ Z^n be a full-rank integer lattice, and let S ⊆ R^n be a subset containing a lattice vector of norm λ_1(L). Enumeration algorithms [21, 20] enumerate all vectors in L ∩ S for a choice of S well
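The extract ends here, before the sieve itself is described. As a purely illustrative sketch of the general sieving principle only — a greedy pairwise reduction, not the AKS algorithm or the paper's heuristic variant; the basis, sample count, and coefficient range are arbitrary choices of ours:

```python
import itertools
import random
import numpy as np

def toy_sieve(B, n_samples=200, seed=1):
    """Toy illustration of the sieve idea: sample random lattice vectors,
    then repeatedly replace a vector u by u - v whenever another vector v
    makes it strictly shorter.  Every replacement decreases an integer
    squared norm, so the process terminates; no approximation guarantee
    is claimed for the result."""
    rng = random.Random(seed)
    B = np.array(B, dtype=int)
    n = B.shape[0]
    vecs = []
    for _ in range(n_samples):
        x = np.array([rng.randint(-5, 5) for _ in range(n)])
        v = x @ B  # a random lattice vector with small coefficients
        if np.any(v):
            vecs.append(v)
    changed = True
    while changed:
        changed = False
        for i, j in itertools.permutations(range(len(vecs)), 2):
            w = vecs[i] - vecs[j]
            if np.any(w) and w @ w < vecs[i] @ vecs[i]:
                vecs[i] = w
                changed = True
    return min(vecs, key=lambda u: int(u @ u))

# Hypothetical 3-dimensional example basis.
B = [[5, 1, 0], [1, 6, 1], [0, 1, 7]]
s = toy_sieve(B)
```

The output is a (usually quite short) non-zero lattice vector; the algorithms analyzed in the paper control the geometry of the sampled points far more carefully to obtain the stated complexity bounds.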
