Journal ArticleDOI

Gauss' algorithm revisited

01 Dec 1991 - Journal of Algorithms (Academic Press) - Vol. 12, Iss. 4, pp. 556-572
TL;DR: An algorithm is exhibited that reduces integer lattices in the two-dimensional case, finds a basis of a lattice consisting of its two successive minima, and generalizes the worst-case input configuration of the centered Euclidean algorithm to dimension two.
About: This article was published in Journal of Algorithms on 1991-12-01 and has received 61 citations to date. The article focuses on the topics: Gauss & Euclidean algorithm.
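
To make the two-dimensional case concrete, here is a minimal Python sketch of the classical Lagrange/Gauss reduction loop, written from the standard textbook description rather than from the paper itself; the function name and the tuple representation of vectors are our own.

```python
# Minimal sketch of two-dimensional (Lagrange/Gauss) lattice reduction.
# Assumes u, v are linearly independent integer vectors given as 2-tuples.
def gauss_reduce(u, v):
    """Return a basis (u, v) of the same lattice with |u| <= |v|, whose
    lengths attain the two successive minima of the lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # Centered division: q minimizes |v - q*u| over the integers.
        q = round(dot(u, v) / dot(u, u))
        v = (v[0] - q * u[0], v[1] - q * u[1])
        if dot(v, v) >= dot(u, u):   # no further length decrease: reduced
            return u, v
        u, v = v, u                  # swap and continue, as in Euclid's gcd

print(gauss_reduce((1, 0), (7, 1)))  # -> ((1, 0), (0, 1))
```

Each iteration mirrors one step of the centered Euclidean algorithm, which is why a worst-case input configuration of that algorithm can be lifted to dimension two.
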
Citations
Book
16 Apr 2001
TL;DR: This book provides a unique overview of the tools and techniques used in the average-case analysis of algorithms.
Abstract: From the Publisher: While most algorithm designs are finalized toward worst-case scenarios, where they have to cope efficiently with unrealistic inputs, average-case analysis is a probabilistic approach that allows for the possibility that a simple algorithm may suffice. This book provides a unique overview of the tools and techniques used in the average-case analysis of algorithms.

532 citations

Journal ArticleDOI
TL;DR: A general framework is developed for studying nested-lattice-based PNC schemes-called lattice network coding (LNC) schemes for short-by making a direct connection between C&F and module theory and several generalized constructions of LNC schemes are given.
Abstract: The problem of designing new physical-layer network coding (PNC) schemes via lattice partitions is considered. Building on a recent work by Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, we take an algebraic approach to show its potential in non-asymptotic settings. We first relate Nazer-Gastpar's approach to the fundamental theorem of finitely generated modules over a principal ideal domain. Based on this connection, we generalize their code construction and simplify their encoding and decoding methods. This not only provides a transparent understanding of their approach, but more importantly, it opens up the opportunity to design efficient and practical PNC schemes. Finally, we apply our framework for PNC to a Gaussian relay network and demonstrate its advantage over conventional PNC schemes.
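
As a toy illustration of the nested-lattice idea behind compute-and-forward (an example of our own, not taken from the paper), take the one-dimensional pair qZ ⊂ Z: messages live in the quotient Z/qZ, and a relay can decode an integer linear combination of them by quantizing its noisy observation to Z and reducing modulo q.

```python
# Toy compute-and-forward over the nested lattices qZ (coarse) and Z (fine).
# The modulus q, messages, and coefficients below are illustrative assumptions.
import random

q = 7                                # coarse-lattice scaling
w1, w2 = 3, 5                        # messages, viewed as elements of Z/qZ
a1, a2 = 2, 1                        # integer combining coefficients
x1, x2 = w1, w2                      # trivial encoder: message -> fine-lattice point
noise = random.uniform(-0.4, 0.4)    # noise within the packing radius of Z

y = a1 * x1 + a2 * x2 + noise        # what the relay observes over the channel
u = round(y) % q                     # quantize to the fine lattice, reduce mod qZ
print(u == (a1 * w1 + a2 * w2) % q)  # True: the linear combination is recovered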

233 citations


Cites background from "Gauss' algorithm revisited"

  • ..., [54]), are described in [55] and [56]....

Journal ArticleDOI
TL;DR: In this article, a general framework is developed for studying nested-lattice-based PNC schemes, called lattice network coding (LNC) schemes for short, by making a direct connection between C&F and module theory.
Abstract: The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, nonasymptotic, settings. A general framework is developed for studying nested-lattice-based PNC schemes-called lattice network coding (LNC) schemes for short-by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is reinterpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which noncoherent network coding can be achieved. Next, performance/complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum intercoset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed together with a suitable lattice-reduction-based algorithm.

170 citations

Proceedings ArticleDOI
12 May 2008
TL;DR: It is demonstrated that in typical communication scenarios the worst-case complexity of the LLL algorithm is not even finite and that the probability for an atypically large number of LLL iterations decays exponentially.
Abstract: Lattice reduction by means of the LLL algorithm has been previously suggested as a powerful preprocessing tool that makes it possible to improve the performance of suboptimal detectors and to reduce the complexity of optimal MIMO detectors. The complexity of the LLL algorithm is often cited as polynomial in the dimension of the lattice. In this paper we argue that this statement is not correct when made in the MIMO context. Specifically, we demonstrate that in typical communication scenarios the worst-case complexity of the LLL algorithm is not even finite. For i.i.d. Rayleigh fading channels, we further prove that the average LLL complexity is polynomial and that the probability for an atypically large number of LLL iterations decays exponentially.
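
For reference, the algorithm whose iteration count is at issue can be sketched compactly. The exact-rational-arithmetic Python below (with delta = 3/4 and a deliberately naive full Gram-Schmidt recomputation after every update) is a readability-first simplification, not the implementation studied in the paper.

```python
# Minimal LLL sketch for linearly independent integer basis vectors (rows).
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Returns the orthogonalized vectors b* and the mu coefficients.
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        # Lovasz condition: either accept b_k or swap with b_{k-1} and backtrack.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # a small 3x3 example
```

The complexity question raised in the abstract is how many times the swap branch of this loop executes when the basis entries come from a fading channel rather than from a fixed integer input.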

118 citations

Journal ArticleDOI
TL;DR: This article studies a greedy lattice basis reduction algorithm for the Euclidean norm, and shows that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm.
Abstract: Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, and is very similar to Euclid's gcd algorithm. Our results are twofold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
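
For reference, the successive minima against which the greedy algorithm's output is measured are the standard ones; the definition below is textbook material, not quoted from the article.

```latex
% i-th successive minimum of a rank-d lattice L, for the Euclidean norm:
% the smallest radius r such that the closed ball B(0, r) contains
% i linearly independent lattice vectors.
\lambda_i(L) = \min\left\{ r > 0 \;:\; \dim \operatorname{span}\left( L \cap \overline{B}(0, r) \right) \ge i \right\},
\qquad 1 \le i \le d .
```

An optimal output basis (b_1, ..., b_d) in the sense of the abstract is one with |b_i| = lambda_i(L) for every i.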

112 citations


Cites methods from "Gauss' algorithm revisited"

  • ...Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm....

  • ...In dimension two, this method is very close to the argument given by Semaev [2001], which is itself very different from previous analyses of Lagrange's algorithm [Kaib and Schnorr 1996; Lagarias 1980; Vallée 1991]....

References
Journal ArticleDOI
TL;DR: This paper presents a polynomial-time algorithm to solve the following problem: given a non-zero polynomial f ∈ Q[X] in one variable with rational coefficients, find the decomposition of f into irreducible factors in Q[X].
Abstract: In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial f ∈ Q[X] in one variable with rational coefficients, find the decomposition of f into irreducible factors in Q[X]. It is well known that this is equivalent to factoring primitive polynomials f ∈ Z[X] into irreducible factors in Z[X]. Here we call f ∈ Z[X] primitive if the greatest common divisor of its coefficients (the content of f) is 1. Our algorithm performs well in practice, cf. [8]. Its running time, measured in bit operations, is O(n^{12} + n^9 (log |f|)^3).

3,513 citations


Journal ArticleDOI
TL;DR: It is shown that the integer linear programming problem with a fixed number of variables is polynomially solvable.
Abstract: It is shown that the integer linear programming problem with a fixed number of variables is polynomially solvable. The proof depends on methods from geometry of numbers.

1,256 citations

Proceedings ArticleDOI
01 Dec 1983
TL;DR: The proposed algorithm first finds a "more orthogonal" basis for a lattice than those of Lenstra (1981) and Lenstra, Lenstra and Lovász (1982), but in time O(n^{dn} poly(length of input)).
Abstract: The integer programming problem is: given m×n and m×1 matrices A and b, respectively, of integers, find whether there exists an all-integer n×1 vector x satisfying the m inequalities Ax ≤ b. In settling an important open problem, Lenstra (1981) showed in an elegant way that when n, the number of dimensions, is fixed, there is a polynomial-time algorithm to solve this problem. His algorithm achieves a running time of O(c^{n^3} · p(length of data)), where p is some polynomial and c a constant independent of n. Since such an algorithm has several important applications - cryptography (Shamir (1982)), diophantine approximations (Lagarias (1982)), coding theory (Conway and Sloane (1982)), etc. - it is important to improve the running time. We present an algorithm here that has a running time of O(n^{9n} L log L), where L is the length of the input. Whereas Lenstra's algorithm in the worst case reduces an n-dimensional problem to c^{n^2} (n−1)-dimensional problems, our algorithm effectively reduces an n-dimensional problem to at most polynomially many (n−1)-dimensional problems, thus achieving our time bound. The algorithm we propose first finds a "more orthogonal" basis for a lattice (see the next section for the definition of a lattice) than those of Lenstra (1981) and Lenstra, Lenstra and Lovász (1982), but in time O(n^{dn} poly(length of input)). It then uses an enumeration technique to solve integer programming and related problems. While this paper presents mainly the theoretical improvements that can be made in the algorithms, we discuss in section 6 why in practice our estimates of running time may be overly pessimistic. The last part of the paper discusses some complexity issues. It is an interesting open problem as to whether finding the Euclidean shortest non-zero vector of a given lattice is NP-hard. (See Lenstra (1981), Van Emde Boas (1981) and Lagarias (1982).)

466 citations

Journal ArticleDOI
TL;DR: This method gives a polynomial-time attack on knapsack public-key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.
Abstract: The subset sum problem is to decide whether or not the 0-1 integer programming problem ∑_{i=1}^{n} a_i x_i = M, with each x_i = 0 or 1, has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of knapsack type. An algorithm is proposed that searches for a solution when given an instance of the subset sum problem. This algorithm always halts in polynomial time but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász to attempt to find v. The performance of the proposed algorithm is analyzed. Let the density d of a subset sum problem be defined by d = n / log_2(max_i a_i). Then for "almost all" problems of density d
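
The conversion to a short-vector problem can be made concrete. The sketch below follows the standard Lagarias-Odlyzko basis construction as it is usually presented; the scaling factor N and the toy instance are our own assumptions, and the reduction step itself (running a basis reduction algorithm on the rows) is omitted.

```python
# Build the Lagarias-Odlyzko lattice for a subset sum instance (a, M):
# a 0/1 solution x of sum(a_i * x_i) = M corresponds to the short lattice
# vector (x_1, ..., x_n, 0), which lattice basis reduction may find.
def lo_basis(a, M, N=1000):
    n = len(a)
    rows = [[1 if j == i else 0 for j in range(n)] + [N * a[i]] for i in range(n)]
    rows.append([0] * n + [-N * M])   # last row cancels M in the final coordinate
    return rows

# Toy instance (assumption): the subset {3, 7} of a = [3, 5, 7] sums to M = 10.
a, M = [3, 5, 7], 10
B = lo_basis(a, M)
coeffs = [1, 0, 1, 1]                 # (x_1, x_2, x_3, 1): solution plus last row
v = [sum(c * row[j] for c, row in zip(coeffs, B)) for j in range(len(B[0]))]
print(v)                              # -> [1, 0, 1, 0], a short vector encoding the subset
```

The large weight N penalizes any integer combination whose last coordinate is nonzero, so the short vectors that reduction finds tend to have the form (x, 0), with x encoding a candidate solution.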

460 citations