Journal Article

Selective avoidance of cycles in irregular LDPC code construction

TL;DR: A Viterbi-like algorithm is proposed that selectively avoids small cycle clusters that are isolated from the rest of the graph and yields codes with error floors that are orders of magnitude below those of random codes, with very small degradation in capacity-approaching capability.
Abstract: This letter explains the effect of graph connectivity on the error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm differs from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes, with very small degradation in capacity-approaching capability.
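
The ACE metric itself is simple to compute once a cycle is in hand. Below is a minimal Python sketch of the approximate cycle EMD the letter builds on: the ACE of a cycle is the sum of (d - 2) over the variable nodes it traverses, since each such node spends 2 of its d edges inside the cycle. Function names are ours, and cycle enumeration is assumed to happen elsewhere (e.g., during the Viterbi-like search).

```python
def cycle_ace(var_degrees_on_cycle):
    """ACE (approximate cycle EMD) of a cycle in an LDPC Tanner graph:
    the sum of (d - 2) over the variable nodes the cycle traverses.
    Each such node has d - 2 edges leaving the cycle, so the sum lower
    bounds the number of extrinsic messages flowing into the cycle."""
    return sum(d - 2 for d in var_degrees_on_cycle)

def has_ace_property(cycles_by_var_degrees, eta):
    """True if every cycle (given as the list of degrees of its
    variable nodes) meets the ACE threshold eta, i.e. no short cycle
    cluster is too isolated from the rest of the graph."""
    return all(cycle_ace(c) >= eta for c in cycles_by_var_degrees)

# A length-8 cycle through variable nodes of degrees 3, 2, 4, 2:
assert cycle_ace([3, 2, 4, 2]) == 3
```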
Citations
Journal Article
TL;DR: In this paper, a method to design regular (2, dc)-LDPC codes over GF(q) with both good waterfall and error-floor properties is presented, based on the algebraic properties of their binary image.
Abstract: In this paper, a method to design regular (2, dc)-LDPC codes over GF(q) with both good waterfall and error-floor properties is presented, based on the algebraic properties of their binary image. First, the algebraic properties of rows of the parity-check matrix H associated with a code are characterized and optimized to improve the waterfall. Then the algebraic properties of cycles and stopping sets associated with the underlying Tanner graph are studied and linked to the global binary minimum distance of the code. Finally, simulations are presented to illustrate the excellent performance of the designed codes.
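
Since the analysis ties stopping sets to the binary minimum distance, the object in question is worth pinning down. Below is the standard stopping-set test (every check touching the set must touch it at least twice), written as a small numpy helper of our own; it is not code from the paper.

```python
import numpy as np

def is_stopping_set(H, S):
    """Standard stopping-set condition: S (a set of column indices of
    the binary parity-check matrix H) is a stopping set if every check
    (row) with a neighbour in S has at least two neighbours in S.
    Under iterative erasure decoding, an erased stopping set is stuck."""
    counts = H[:, sorted(S)].sum(axis=1)
    return bool(np.all((counts == 0) | (counts >= 2)))
```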

305 citations


Cites background or methods from "Selective avoidance of cycles in ir..."

  • ...For our analysis, we assume the knowledge of the structure of the graph G_H (randomly designed or optimized using instances of the PEG algorithm [13] or other good construction algorithms [23])....


  • ...The global performance is, however, not only dependent on the cycle structure, but also on the stopping sets (inherently present in the structure of G_H) [7], [23] that are not reduced to a single cycle....


Journal Article
TL;DR: This paper discusses construction of protograph-based low-density parity-check codes and examines implementation strategies for high throughput decoding derived from first principles of belief propagation on bipartite graphs.
Abstract: This paper discusses construction of protograph-based low-density parity-check (LDPC) codes. Emphasis is placed on protograph ensembles whose typical minimum distance grows linearly with block size. Asymptotic performance analysis for both weight enumeration and iterative decoding threshold determination is provided and applied to a series of code constructions. Construction techniques that yield both low thresholds and linear minimum distance growth are introduced by way of example throughout. The paper also examines implementation strategies for high throughput decoding derived from first principles of belief propagation on bipartite graphs.
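
The lifting step referenced in the citation contexts below replaces each base-graph edge with a circulant permutation. Here is a sketch under simplifying assumptions of ours: a binary (0/1) base matrix with no parallel edges, and circulant shift values supplied externally, e.g. by a cycle-conditioning search such as the ACE algorithm.

```python
import numpy as np

def circulant(N, shift):
    """N x N identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(N, dtype=int), shift, axis=1)

def lift_protograph(B, shifts, N):
    """Lift a binary base (protograph) matrix B: each 1 becomes an
    N x N circulant permutation, each 0 the N x N all-zero block.
    `shifts[i][j]` gives the shift for base entry (i, j); in practice
    these would be chosen by a girth/ACE-aware search (assumed here)."""
    m, n = B.shape
    H = np.zeros((m * N, n * N), dtype=int)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                H[i*N:(i+1)*N, j*N:(j+1)*N] = circulant(N, shifts[i][j])
    return H
```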

283 citations


Cites methods from "Selective avoidance of cycles in ir..."

  • ...Protographs were lifted using the ACE algorithm [34] to find circulants for each edge of the protograph....


  • ...The resulting graph was then lifted using the ACE algorithm [34] to determine circulant permutations of size 181....


Journal Article
TL;DR: This work derives the exact relationships that the component LDPC code profiles in the relay coding scheme must satisfy; these relationships act as constraints for the density evolution algorithm used to search for good relay code profiles.
Abstract: We propose Low Density Parity Check (LDPC) code designs for the half-duplex relay channel. Our designs are based on the information theoretic random coding scheme for decode-and-forward relaying. The source transmission is decoded with the help of side information in the form of additional parity bits from the relay. We derive the exact relationships that the component LDPC code profiles in the relay coding scheme must satisfy. These relationships act as constraints for the density evolution algorithm which is used to search for good relay code profiles. To speed up optimization, we outline a Gaussian approximation of density evolution for the relay channel. The asymptotic noise thresholds of the discovered relay code profiles are a fraction of a decibel away from the achievable lower bound for decode-and-forward relaying. With random component LDPC codes, the overall relay coding scheme performs within 1.2 dB of the theoretical limit.
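
For intuition, the Gaussian approximation of density evolution collapses the message densities to a one-dimensional recursion on their means. The sketch below is the generic regular-ensemble recursion on the BIAWGN channel using Chung et al.'s approximation of the phi function, not the relay-specific constrained optimization of the paper; the target/iteration constants are our own choices.

```python
import math

def phi(x):
    # Approximation to the Gaussian-approximation phi function,
    # adequate for roughly 0 < x < 10; phi(0) = 1 by definition.
    if x <= 0:
        return 1.0
    return math.exp(-0.4527 * x**0.86 + 0.0218)

def phi_inv(y):
    return ((0.0218 - math.log(y)) / 0.4527) ** (1 / 0.86)

def converges(dv, dc, sigma, iters=200, target=50.0):
    """Track only the mean of the (symmetric Gaussian) message
    densities for a regular (dv, dc) LDPC ensemble on the BIAWGN
    channel. Returns True if the check-to-variable mean blows up,
    i.e. decoding succeeds asymptotically at this noise level."""
    m0 = 2.0 / sigma**2          # mean of the channel LLRs
    mc = 0.0                     # check-to-variable message mean
    for _ in range(iters):
        mv = m0 + (dv - 1) * mc  # variable-node update
        mc = phi_inv(max(1e-12, 1.0 - (1.0 - phi(mv)) ** (dc - 1)))
        if mc > target:
            return True
    return False

# Example: scan for the GA noise threshold of the (3,6) ensemble.
sigma = 0.5
while converges(3, 6, sigma + 0.005):
    sigma += 0.005
print("GA threshold of (3,6) is near sigma =", round(sigma, 3))
```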

276 citations

Posted Content
TL;DR: In this article, the authors introduce the concept of graph-cover decoding, which is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding.
Abstract: The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding the so-called fundamental polytope. Therefore, we give some characterizations of this polytope and explain its relation to earlier concepts that were introduced to understand the behavior of message-passing iterative decoding for finite-length codes.
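
For small codes the fundamental polytope can be written down explicitly: it is cut out by the box constraints and, for each check, the odd-subset ("forbidden set") inequalities familiar from linear programming decoding. The brute-force membership test below is a didactic sketch of ours, exponential in the check degree, and not the paper's machinery.

```python
from itertools import combinations
import numpy as np

def in_fundamental_polytope(H, x, tol=1e-9):
    """Test whether x (a numpy vector in [0,1]^n) lies in the
    fundamental polytope of binary parity-check matrix H, using the
    standard LP-decoding inequalities: for every check j and every
    odd-size subset S of its neighbourhood N(j),
        sum_{i in S} x_i - sum_{i in N(j)\\S} x_i <= |S| - 1.
    Brute force over subsets, so only for small check degrees."""
    if np.any(x < -tol) or np.any(x > 1 + tol):
        return False
    for row in H:
        nbrs = np.nonzero(row)[0]
        for k in range(1, len(nbrs) + 1, 2):        # odd |S| only
            for S in combinations(nbrs, k):
                rest = [i for i in nbrs if i not in S]
                if x[list(S)].sum() - x[rest].sum() > k - 1 + tol:
                    return False
    return True
```

Every codeword is a vertex of this polytope, but fractional pseudocodewords may satisfy all the constraints as well, which is exactly what makes the polytope a model for message-passing behavior.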

260 citations

Journal Article
TL;DR: This paper describes and analyzes low-density parity-check code families that support a variety of rates while maintaining the same fundamental decoder architecture, and proposes a design method that maintains good graphical properties, and hence low error floors, for all rates.
Abstract: This paper describes and analyzes low-density parity-check code families that support a variety of rates while maintaining the same fundamental decoder architecture. Such families facilitate decoding-hardware design and implementation for applications that require communication at different rates, for example, to adapt to changing channel conditions. Combining rows of the lowest-rate parity-check matrix produces the parity-check matrices for higher rates. An important advantage of this approach is that all effective code rates have the same blocklength. This approach is compatible with well-known techniques that allow low-complexity encoding and parallel decoding of these LDPC codes. This technique also allows the design of programmable analog LDPC decoders. The proposed design method maintains good graphical properties, and hence low error floors, for all rates.
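
The row-combining step itself is mechanically simple, as the sketch below shows: sum disjoint groups of rows of the lowest-rate matrix over GF(2) at fixed blocklength. The design problem the paper actually addresses, choosing the groups so that good graphical properties survive at every rate, is assumed away here.

```python
import numpy as np

def combine_rows(H_low, groups):
    """Build a higher-rate parity-check matrix from the lowest-rate
    one by summing disjoint groups of its rows over GF(2). The number
    of columns (blocklength) is unchanged; `groups` is a partition of
    the row indices. Sketch of the combining step only."""
    return np.array([H_low[list(g)].sum(axis=0) % 2 for g in groups])

# Merging rows pairwise halves the number of checks, raising the
# design rate from 1 - m/n to 1 - (m/2)/n at the same blocklength.
```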

252 citations

References
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems as mentioned in this paper is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, and provides a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.

15,671 citations

Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum-likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
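
Gallager's construction can be sketched in a few lines: stack j layers, each consisting of disjoint weight-k rows, under independent random column permutations. This toy version, with names and RNG handling of our own, does no girth or cycle conditioning, which is precisely the gap the ACE letter addresses.

```python
import numpy as np

def gallager_H(n, j, k, rng=None):
    """Random (j, k)-regular parity-check matrix in Gallager's style:
    a base layer of n/k disjoint rows of k ones, stacked j times under
    independent random column permutations. Requires k | n."""
    assert n % k == 0
    rng = rng or np.random.default_rng(0)
    base = np.zeros((n // k, n), dtype=int)
    for r in range(n // k):
        base[r, r*k:(r+1)*k] = 1
    return np.vstack([base[:, rng.permutation(n)] for _ in range(j)])

H = gallager_H(20, 3, 4)   # 15 x 20: column weight 3, row weight 4
assert (H.sum(axis=0) == 3).all() and (H.sum(axis=1) == 4).all()
```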

11,592 citations


"Selective avoidance of cycles in ir..." refers background in this paper

  • ...can avoid increases only logarithmically with block size (see [1])....


  • ...REGULAR low-density parity-check (LDPC) codes were proposed by Gallager in the early 1960s [1]....


Journal Article
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal, 1995) codes were recently invented, and "Gallager codes" were first investigated in 1962 but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
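
The "practical sum-product algorithm" here is belief propagation on the Tanner graph. A compact log-domain sketch follows; it uses the tanh rule with a leave-one-out product trick and is illustrative only, not the authors' implementation.

```python
import numpy as np

def sum_product_decode(H, llr, max_iters=50):
    """Compact log-domain sum-product (belief-propagation) decoder.
    H: (m, n) binary parity-check matrix; llr: n channel LLRs, with
    positive values favouring bit 0. Returns a hard decision."""
    m, n = H.shape
    rows, cols = np.nonzero(H)          # one entry per Tanner-graph edge
    v2c = llr[cols].astype(float)       # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iters):
        # Check-node update (tanh rule) via a leave-one-out product;
        # values are nudged/clipped to keep the arctanh finite.
        t = np.tanh(v2c / 2)
        t = np.where(np.abs(t) < 1e-12, 1e-12, t)
        c2v = np.empty_like(v2c)
        for j in range(m):
            e = np.where(rows == j)[0]
            loo = np.clip(np.prod(t[e]) / t[e], -0.999999, 0.999999)
            c2v[e] = 2 * np.arctanh(loo)
        # Variable-node update: total LLR minus the incoming message.
        total = llr.astype(float)
        np.add.at(total, cols, c2v)
        v2c = total[cols] - c2v
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():    # all parity checks satisfied
            break
    return hard
```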

3,842 citations

Journal Article
TL;DR: This work designs low-density parity-check codes that perform at rates extremely close to the Shannon capacity and proves a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution.
Abstract: We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
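
Two quantities from this line of work follow directly from the edge-perspective degree distributions λ and ρ: the design rate, and the stability condition, which on the BIAWGN channel takes the form λ'(0)ρ'(1) < e^(1/(2σ²)). The helper below uses made-up example distributions, not a degree-distribution pair from the paper.

```python
import math

def design_rate(lam, rho):
    """Design rate from edge-perspective degree distributions, given
    as {degree: fraction of edges}: r = 1 - (sum rho_d/d)/(sum lam_d/d)."""
    return 1 - sum(f / d for d, f in rho.items()) / \
               sum(f / d for d, f in lam.items())

def stable_biawgn(lam, rho, sigma):
    """Stability condition on the BIAWGN channel:
    lambda'(0) * rho'(1) < exp(1/(2 sigma^2)), with lambda'(0) = lam[2]
    and rho'(1) = sum (d-1) rho_d. Necessary for the belief-propagation
    error probability to converge to zero."""
    lam2 = lam.get(2, 0.0)
    rho_p1 = sum((d - 1) * f for d, f in rho.items())
    return lam2 * rho_p1 < math.exp(1.0 / (2 * sigma**2))

# Toy example with invented distributions:
lam = {2: 0.3, 3: 0.4, 10: 0.3}
rho = {7: 1.0}
print(design_rate(lam, rho), stable_biawgn(lam, rho, sigma=0.9))
```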

3,520 citations


"Selective avoidance of cycles in ir..." refers background or methods in this paper

  • ...this letter, we repeated the irregular code-construction method described in [8], and extended their simulation to a higher SNR...


  • ...As a benchmark, Richardson and Urbanke’s -bit code [8] (referred to here as the RU code) is included in Fig....


  • ...We used the ACE algorithm to construct (10000,5000) codes that have the irregular degree distribution given in [8] with max....


  • ...6 dB worse than their irregular counterparts [8]....


Journal Article
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds on the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles [...]

3,246 citations