Journal ArticleDOI

Efficient encoding of low-density parity-check codes

01 Feb 2001-IEEE Transactions on Information Theory (IEEE)-Vol. 47, Iss: 2, pp 638-656
TL;DR: It is shown how to exploit the sparseness of the parity-check matrix to obtain efficient encoders, and that "optimized" codes actually admit linear-time encoding.
Abstract: Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. We consider the encoding problem for LDPC codes. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3,6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length n ≃ 100,000 is still quite practical. More importantly, we show that "optimized" codes actually admit linear-time encoding.
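
The triangulation idea in the abstract is easy to make concrete. Below is a minimal sketch, assuming the sparse parity-check matrix has already been put in the form H = [A | T] with T lower triangular and unit diagonal (the paper's approximate-triangulation preprocessing reduces the general case to nearly this form); the function name and data layout are illustrative, not from the paper.

    def encode_lower_triangular(H_rows, n, k, message):
        """H_rows: m = n - k rows, each a list of column indices holding a 1.
        Columns 0..k-1 carry message bits, columns k..n-1 parity bits, and
        row i is assumed to have its last 1 on the diagonal column k + i."""
        m = n - k
        codeword = list(message) + [0] * m
        for i, row in enumerate(H_rows):
            acc = 0
            for j in row:
                if j != k + i:          # XOR every earlier bit checked by row i
                    acc ^= codeword[j]
            codeword[k + i] = acc       # back-substitution fills parity bit i
        return codeword

    # e.g. encode_lower_triangular([[0, 3], [1, 3, 4], [0, 2, 4, 5]], 6, 3, [1, 0, 1])
    # -> [1, 0, 1, 1, 1, 1]

Since each row holds only a handful of 1's, the back-substitution costs on the order of the number of nonzeros in H; the quadratic term in the general case comes from the small non-triangular "gap" the preprocessing leaves behind.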


Citations
Book
06 Oct 2003
TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.

8,091 citations

Journal ArticleDOI
TL;DR: This work designs low-density parity-check codes that perform at rates extremely close to the Shannon capacity and proves a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution.
Abstract: We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
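
As a concrete anchor for the degree-distribution language above, the pair (λ, ρ) fixes the ensemble's design rate. A small sketch using the standard edge-perspective convention; the helper name is mine and the inputs are illustrative:

    def design_rate(lam, rho):
        """lam, rho: dicts degree -> fraction of edges with that degree,
        i.e. the coefficients of lambda(x) and rho(x)."""
        int_lam = sum(f / d for d, f in lam.items())   # integral of lambda over [0, 1]
        int_rho = sum(f / d for d, f in rho.items())   # integral of rho over [0, 1]
        return 1.0 - int_rho / int_lam

    # (3,6)-regular ensemble: every edge sees a degree-3 symbol and a degree-6 check.
    print(design_rate({3: 1.0}, {6: 1.0}))             # 0.5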

3,520 citations


Cites background from "Efficient encoding of low-density parity-check codes"

  • ...An alternative solution for practical purposes, which does not require cascades, is presented in [4]....

Proceedings Article
01 Jan 2004
TL;DR: For a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) suffices to recover the original k symbols with high probability.
Abstract: This paper exhibits a class of universal Raptor codes: for a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols, with high probability. Each output symbol is generated using O(log(1/ε)) operations, and the original symbols are recovered from the collected ones with O(k log(1/ε)) operations.
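
To make the k(1 + ε) statement tangible: with k = 10,000 and ε = 0.05, any 10,500 received symbols suffice with high probability. Below is a toy sketch of the fountain-coding primitive; the degree choice is a placeholder, not the carefully designed Raptor output distribution, and real Raptor codes first pass the source through a pre-code:

    import random

    def lt_output_symbol(source, rng):
        """One output symbol: (indices of the XORed source symbols, their XOR)."""
        k = len(source)
        d = min(k, rng.randint(1, 4))    # toy degree, not the designed distribution
        idx = rng.sample(range(k), d)
        value = 0
        for i in idx:
            value ^= source[i]
        return set(idx), value

    rng = random.Random(0)
    print(lt_output_symbol([1, 0, 1, 1, 0, 0, 1, 0], rng))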

1,522 citations

Journal ArticleDOI
TL;DR: Simulation results show that the PEG algorithm is a powerful method for generating good short-block-length LDPC codes.
Abstract: We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called the progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time-encodable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q > 2) are investigated. Simulation results show that the PEG algorithm is a powerful method for generating good short-block-length LDPC codes.
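
A simplified reading of the PEG greedy step, not the authors' exact pseudocode: to place a new edge on symbol node s, grow a breadth-first tree from s in the current Tanner graph and connect s to a check node the tree does not reach (closing no cycle), or failing that to one as far from s as possible, breaking ties by current check degree.

    from collections import deque

    def peg_pick_check(sym_adj, chk_adj, s, m):
        """sym_adj / chk_adj: current adjacency lists; m: number of check nodes."""
        dist = {("s", s): 0}
        q = deque([("s", s)])
        seen = set()                             # check nodes reachable from s
        while q:
            kind, v = q.popleft()
            for u in (sym_adj[v] if kind == "s" else chk_adj[v]):
                node = ("c", u) if kind == "s" else ("s", u)
                if node not in dist:
                    dist[node] = dist[(kind, v)] + 1
                    if node[0] == "c":
                        seen.add(u)
                    q.append(node)
        unreached = [c for c in range(m) if c not in seen]
        if unreached:                            # attaching here closes no cycle
            return min(unreached, key=lambda c: len(chk_adj[c]))
        # otherwise a farthest check, i.e. the longest shortest cycle through s
        return max(seen, key=lambda c: (dist[("c", c)], -len(chk_adj[c])))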

1,507 citations

Journal ArticleDOI
TL;DR: The exact average bit and block erasure probability of a given regular ensemble of LDPC codes under iterative decoding is derived for the binary erasure channel (BEC).
Abstract: In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area.
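
The quantity computed exactly in the paper can also be estimated by simulation, which fixes the setting: on the BEC only erasure positions matter, and the iterative decoder "peels" any check with exactly one erased neighbor. A hedged Monte Carlo sketch (all names illustrative):

    import random

    def peel(check_rows, erased):
        """check_rows: list of bit-index lists; erased: set of erased positions."""
        erased, progress = set(erased), True
        while progress and erased:
            progress = False
            for row in check_rows:
                gap = [j for j in row if j in erased]
                if len(gap) == 1:        # the check pins down its one erased bit
                    erased.discard(gap[0])
                    progress = True
        return erased                    # positions the decoder cannot resolve

    def bit_erasure_rate(check_rows, n, eps, trials, rng):
        stuck = 0
        for _ in range(trials):
            stuck += len(peel(check_rows, {j for j in range(n) if rng.random() < eps}))
        return stuck / (trials * n)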

959 citations

References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described, and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum-likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
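
The "simple but nonoptimum decoding scheme" has a well-known hard-decision relative, bit flipping, which conveys the flavor: repeatedly flip any bit that participates in more unsatisfied than satisfied checks. A minimal sketch under assumed adjacency-list inputs, not Gallager's exact probabilistic algorithm:

    def bit_flip_decode(check_rows, bit_cols, word, max_iters=50):
        """check_rows[i]: bit indices in check i; bit_cols[j]: checks on bit j."""
        w = list(word)
        for _ in range(max_iters):
            unsat = {i for i, row in enumerate(check_rows)
                     if sum(w[j] for j in row) % 2 == 1}
            if not unsat:
                return w                          # every parity check satisfied
            flips = [j for j, cols in enumerate(bit_cols)
                     if sum(c in unsat for c in cols) > len(cols) / 2]
            if not flips:
                break                             # no bit has a majority of bad checks
            for j in flips:
                w[j] ^= 1                         # flip all majority-bad bits at once
        return w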

11,592 citations


"Efficient encoding of low-density p..." refers background in this paper

  • ...In many ways, LDPC codes can be considered serious competitors to turbo codes....

Book
01 Dec 1986
TL;DR: A comprehensive monograph on linear and integer programming, from polyhedral theory and the simplex and ellipsoid methods to total unimodularity and cutting planes.
Abstract: Introduction and Preliminaries. Problems, Algorithms, and Complexity. LINEAR ALGEBRA. Linear Algebra and Complexity. LATTICES AND LINEAR DIOPHANTINE EQUATIONS. Theory of Lattices and Linear Diophantine Equations. Algorithms for Linear Diophantine Equations. Diophantine Approximation and Basis Reduction. POLYHEDRA, LINEAR INEQUALITIES, AND LINEAR PROGRAMMING. Fundamental Concepts and Results on Polyhedra, Linear Inequalities, and Linear Programming. The Structure of Polyhedra. Polarity, and Blocking and Anti-Blocking Polyhedra. Sizes and the Theoretical Complexity of Linear Inequalities and Linear Programming. The Simplex Method. Primal-Dual, Elimination, and Relaxation Methods. Khachiyan's Method for Linear Programming. The Ellipsoid Method for Polyhedra More Generally. Further Polynomiality Results in Linear Programming. INTEGER LINEAR PROGRAMMING. Introduction to Integer Linear Programming. Estimates in Integer Linear Programming. The Complexity of Integer Linear Programming. Totally Unimodular Matrices: Fundamental Properties and Examples. Recognizing Total Unimodularity. Further Theory Related to Total Unimodularity. Integral Polyhedra and Total Dual Integrality. Cutting Planes. Further Methods in Integer Linear Programming. References. Indexes.

7,005 citations

Book
01 Jan 1991
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Abstract: The use of randomness is now an accepted tool in Theoretical Computer Science but not everyone is aware of the underpinnings of this methodology in Combinatorics - particularly, in what is now called the Probabilistic Method as developed primarily by Paul Erdős over the past half century. Here I will explore a particular set of problems - all dealing with "good" colorings of an underlying set of points relative to a given family of sets. A central point will be the evolution of these problems from the purely existential proofs of Erdős to the algorithmic aspects of much interest to this audience.

6,594 citations

Journal ArticleDOI
TL;DR: This work designs low-density parity-check codes that perform at rates extremely close to the Shannon capacity and proves a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution.
Abstract: We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.

3,520 citations

Journal ArticleDOI
TL;DR: The results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon.
Abstract: We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
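
On the binary erasure channel the general method collapses to a one-dimensional recursion, which makes the capacity notion concrete: the erased-message fraction evolves as x_{l+1} = ε·λ(1 - ρ(1 - x_l)), and the threshold is the largest ε for which x_l → 0. A sketch locating that threshold by bisection (iteration counts and tolerances are arbitrary choices):

    def converges(eps, lam, rho, iters=2000, tol=1e-12):
        x = eps
        for _ in range(iters):
            x = eps * lam(1.0 - rho(1.0 - x))   # BEC density-evolution update
            if x < tol:
                return True
        return False

    def bec_threshold(lam, rho):
        lo, hi = 0.0, 1.0
        for _ in range(40):                     # bisect on the convergence boundary
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if converges(mid, lam, rho) else (lo, mid)
        return lo

    # (3,6)-regular: lambda(x) = x^2, rho(x) = x^5; threshold around 0.4294
    print(bec_threshold(lambda x: x * x, lambda x: x ** 5))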

3,393 citations


"Efficient encoding of low-density p..." refers background in this paper
