Journal ArticleDOI

Reduced complexity iterative decoding of low-density parity check codes based on belief propagation

TL;DR: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed; both greatly reduce the decoding complexity of belief propagation.
Abstract: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly reduces the decoding complexity relative to belief propagation, in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge of the channel characteristics. Both algorithms yield a good performance-complexity trade-off and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.
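
The real-additions-only simplification corresponds to what later papers call the BP-based or min-sum check-node update; the abstract does not give the formula, so the following Python sketch is one standard reading of it, with hypothetical names:

    import numpy as np

    def check_node_update(incoming):
        # Outgoing LLR on edge k combines all the *other* incoming LLRs:
        # the product of their signs times their minimum magnitude. Only
        # real comparisons and sign flips are needed, never products of
        # probabilities.
        incoming = np.asarray(incoming, dtype=float)
        out = np.empty_like(incoming)
        for k in range(len(incoming)):
            others = np.delete(incoming, k)
            out[k] = np.prod(np.sign(others)) * np.abs(others).min()
        return out

The variable-node side is a plain sum of the channel LLR and the incoming check messages, which is why the whole decoder runs on additions and comparisons and needs no channel knowledge beyond the received values.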


Citations
Journal ArticleDOI
TL;DR: Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.
Abstract: This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and their Tanner (1981) graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.
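
The linear-time encoding claim rests on the cyclic (or quasi-cyclic) structure: the parity bits are the remainder of a GF(2) polynomial division, which is exactly what a feedback shift register computes. A minimal division sketch, with hypothetical names and the generator polynomial listed highest degree first:

    def cyclic_parity(msg, g):
        # Systematic cyclic encoding: parity = remainder of x^(n-k) * m(x)
        # divided by the generator polynomial g(x), all arithmetic over GF(2).
        r = len(g) - 1
        buf = list(msg) + [0] * r          # m(x) shifted up by x^r
        for i in range(len(msg)):
            if buf[i]:                     # long-division step: cancel leading 1
                for j, gj in enumerate(g):
                    buf[i + j] ^= gj
        return buf[-r:]                    # the r parity bits

    # Codeword = message bits followed by parity, e.g. with g(x) = x^3 + x + 1:
    codeword = [1, 1, 0, 1] + cyclic_parity([1, 1, 0, 1], [1, 0, 1, 1])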

1,401 citations

Journal ArticleDOI
TL;DR: The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
Abstract: Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
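
The two check-node corrections named in the abstract are one-liners on top of the min-sum magnitude; the numeric values of alpha and beta below are illustrative placeholders, since the optimal values are the ones density evolution produces:

    import numpy as np

    def normalized_magnitude(other_llrs, alpha=0.8):
        # Normalized min-sum: scale the minimum magnitude down by alpha < 1,
        # compensating for min-sum overestimating the exact tanh-rule value.
        return alpha * np.abs(other_llrs).min()

    def offset_magnitude(other_llrs, beta=0.15):
        # Offset min-sum: subtract a constant instead, clipping at zero.
        return max(np.abs(other_llrs).min() - beta, 0.0)

In either variant the sign of the outgoing message is still the product of the other incoming signs, exactly as in plain min-sum.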

989 citations


Cites background from "Reduced complexity iterative decodi..."

  • ...It can be shown [20], [13] that for two statistically independent binary random variables $U$ and $V$, the so-called “tanh-rule” is given by $L(U \oplus V) = 2\tanh^{-1}\!\left(\tanh\left(\frac{L(U)}{2}\right)\tanh\left(\frac{L(V)}{2}\right)\right)$....

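A quick numerical comparison makes the relationship between this tanh rule and the min-sum approximation concrete (a sketch, names hypothetical):

    import numpy as np

    def tanh_rule(llrs):
        # Exact LLR of the XOR of independent bits:
        # L(U1 xor ... xor Un) = 2 * atanh( prod_i tanh(L(Ui) / 2) ).
        return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(llrs) / 2.0)))

    def min_sum(llrs):
        # Approximation: same sign rule, magnitude of the least reliable input.
        llrs = np.asarray(llrs)
        return np.prod(np.sign(llrs)) * np.abs(llrs).min()

    print(tanh_rule([2.0, -3.5]))   # about -1.80
    print(min_sum([2.0, -3.5]))     # -2.0: the magnitude is always overestimated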

Journal ArticleDOI
TL;DR: This review gives both sides of the story, with the current best theory of quantum security, and an extensive survey of what makes quantum cryptosystems safe in practice.
Abstract: Some years ago quantum hacking became popular: devices implementing supposedly unbreakable quantum cryptography were shown to have imperfections which could be exploited by attackers. Security has since been thoroughly enhanced, as a consequence of both theoretical and experimental advances. This review gives both sides of the story, with the current best theory of quantum security and an extensive survey of what makes quantum cryptosystems safe in practice.

761 citations

Journal ArticleDOI
TL;DR: This letter addresses the problem of decoding nonbinary low-density parity-check (LDPC) codes over finite fields GF(q), with reasonable complexity and good performance, and introduces a simplified decoder which is inspired by the min-sum decoder for binary LDPC codes.
Abstract: In this letter, we address the problem of decoding nonbinary low-density parity-check (LDPC) codes over finite fields GF(q), with reasonable complexity and good performance. In the first part of the letter, we recall the original belief propagation (BP) decoding algorithm and its Fourier domain implementation. We show that the use of tensor notations for the messages is very convenient for the algorithm description and understanding. In the second part of the letter, we introduce a simplified decoder which is inspired by the min-sum decoder for binary LDPC codes. We call this decoder the extended min-sum (EMS) decoder. We show that it is possible to greatly reduce the computational complexity of the check-node processing by computing approximate reliability measures with a limited number of values in a message. By choosing appropriate correction factors or offsets, we show that the EMS decoder performance is quite good, and in some cases better than the regular BP decoder. The optimal values of the factor and offset correction are obtained asymptotically with simulated density evolution. Our simulations on ultra-sparse codes over very-high-order fields show that nonbinary LDPC codes are promising for applications which require low frame-error rates for small or moderate codeword lengths. The EMS decoder is a good candidate for practical hardware implementations of such codes.
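
The complexity saving in the check-node processing comes from carrying only a handful of values per message rather than all q. A toy sketch of that truncation step (hypothetical names and offset value; the actual EMS recursion over the truncated lists is more involved):

    import numpy as np

    def truncate_message(llr, n_m, offset=0.3):
        # Keep the n_m most reliable of the q entries of a GF(q) message;
        # all dropped entries are represented by one common default value,
        # here the smallest kept reliability minus an offset.
        llr = np.asarray(llr, dtype=float)
        kept = np.argsort(llr)[-n_m:]           # symbols with the largest LLRs
        default = llr[kept].min() - offset      # stand-in for every other symbol
        return kept, llr[kept], default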

741 citations


Cites methods from "Reduced complexity iterative decodi..."

  • ...In this section, we present a reduced-complexity decoding algorithm for LDPC codes over GF(q), based on a generalization of the MS algorithm used for binary LDPC codes [11], [14], [16], [18], [26]....


Journal ArticleDOI
TL;DR: A belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm is proposed.
Abstract: In this paper, we propose a belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm. The normalization factors can be obtained not only by simulation, but also, importantly, theoretically. This new BP-based algorithm is much simpler to implement than BP decoding as it requires only additions of the normalized received values and is universal, i.e., the decoding is independent of the channel characteristics. Some simulation results are given, which show this new decoding approach can achieve an error performance very close to that of BP on the additive white Gaussian noise channel, especially for low-density parity check (LDPC) codes whose check sums have large weights. The principle of normalization can also be used to improve the performance of the max-log-MAP algorithm in turbo decoding, and some coding gain can be achieved if the code length is long enough.
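
One simulation-based reading of how such a normalization factor can be obtained is to choose the factor that matches the average magnitude of the min-sum check-node output to that of the exact tanh-rule output. The sketch below assumes BPSK on AWGN with the all-zero codeword and first-iteration messages; it illustrates the idea, not the paper's theoretical derivation:

    import numpy as np

    def estimate_alpha(sigma=0.8, check_degree=6, trials=100_000, seed=0):
        rng = np.random.default_rng(seed)
        # Channel LLRs feeding a check node: y ~ N(+1, sigma^2) under the
        # all-zero codeword, L = 2*y / sigma^2, with check_degree - 1 inputs.
        L = 2.0 * rng.normal(1.0, sigma, size=(trials, check_degree - 1)) / sigma**2
        t = np.clip(np.prod(np.tanh(L / 2.0), axis=1), -1 + 1e-12, 1 - 1e-12)
        exact = np.abs(2.0 * np.arctanh(t))     # tanh-rule magnitudes
        minsum = np.abs(L).min(axis=1)          # min-sum magnitudes
        return exact.mean() / minsum.mean()     # scale-down factor, below 1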

660 citations


Cites methods or result from "Reduced complexity iterative decodi..."

  • ...The UMP BP-based algorithm derived above is essentially the same as that in [14], but differs in representation....


  • ...In [14], a reduced complexity iterative decoding algorithm based on the BP algorithm has been proposed for LDPC codes and is referred to as the uniformly most powerful (UMP) BP-based algorithm....


  • ...Finally, it is worth mentioning that since the performance degradation of the reduced complexity algorithm reported in [16] for LDPC convolutional codes is nearly the same as that reported in [14] for LDPC block codes, it is expected that similar gains as those observed in Figs....


References
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition—in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.

15,671 citations

Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
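
Gallager's "simple but nonoptimum" scheme operates on hard decisions; a common textbook bit-flipping variant in this spirit (not necessarily his exact rule) fits in a few lines:

    import numpy as np

    def bit_flip_decode(H, y, max_iters=50):
        # Hard-decision bit flipping on a BSC: repeatedly flip the bits that
        # participate in the largest number of unsatisfied parity checks.
        x = np.array(y, dtype=np.int64)
        for _ in range(max_iters):
            syndrome = H.dot(x) % 2            # 1 where a parity check fails
            if not syndrome.any():
                break                          # all checks satisfied: done
            unsat = syndrome.dot(H)            # failed checks touching each bit
            x = np.where(unsat == unsat.max(), x ^ 1, x)
        return x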

11,592 citations


"Reduced complexity iterative decodi..." refers methods in this paper

  • ...As in [4], this probabilistic decoding algorithm is based on evaluating the likelihood ratios associated with each information bit from information provided by disjoint parity check equations....


Journal ArticleDOI
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.

3,842 citations


"Reduced complexity iterative decodi..." refers background in this paper

  • ...LDPC codes are specified by a parity-check matrix containing mostly zeros and only a small number of ones....


Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles
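
The bipartite graph in question is what is now called a Tanner graph, and it is read straight off the parity-check constraints; a minimal construction sketch (hypothetical names):

    import numpy as np

    def tanner_graph(H):
        # Bipartite adjacency of a binary parity-check matrix H: subcode
        # (check) node i is joined to digit (variable) node j iff H[i, j] = 1.
        check_to_digits = {i: list(np.flatnonzero(H[i])) for i in range(H.shape[0])}
        digit_to_checks = {j: list(np.flatnonzero(H[:, j])) for j in range(H.shape[1])}
        return check_to_digits, digit_to_checks

    # Example: a small 3 x 6 parity-check matrix.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    checks, digits = tanner_graph(H)   # e.g. digits[1] == [0, 1]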

3,246 citations

Journal ArticleDOI
TL;DR: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels, showing that performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.

3,032 citations


"Reduced complexity iterative decodi..." refers background in this paper

  • ...LDPC codes are specified by a parity-check matrix containing mostly zeros and only a small number of ones....
