Journal ArticleDOI

Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes

06 Aug 2012-IEEE Communications Letters (IEEE)-Vol. 16, Iss: 10, pp 1660-1663
TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
Abstract: In this paper we propose a novel message passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm is called the variable factor appearance probability belief propagation (VFAP-BP) algorithm and is suitable for wireless communications applications that require low latency and short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
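The abstract gives no pseudocode, but the reweighting idea can be sketched in a generic LLR-domain decoder: check-to-variable messages are scaled by per-check factors rho before being combined at the variable nodes, and rho = 1 recovers standard BP. The function below is an illustrative reweighted-BP sketch in that spirit, not the letter's exact VFAP-BP rule; all names and the choice of rho are assumptions.

    import numpy as np

    def reweighted_bp_decode(H, llr_ch, rho, max_iter=10):
        # Illustrative reweighted BP sketch (not the exact VFAP-BP rule):
        # H is an (m, n) binary parity-check matrix, llr_ch the channel
        # LLRs, rho the per-check reweighting factors.
        m, n = H.shape
        rows, cols = np.nonzero(H)     # one entry per Tanner-graph edge
        m_cv = np.zeros(len(rows))     # check-to-variable messages
        for _ in range(max_iter):
            # variable-to-check: L_v + sum_c' rho_c' * m_{c'->v} - m_{c->v}
            acc = np.zeros(n)
            np.add.at(acc, cols, rho[rows] * m_cv)
            m_vc = llr_ch[cols] + acc[cols] - m_cv
            # check-to-variable: standard extrinsic tanh rule per edge
            t = np.tanh(np.clip(m_vc, -25.0, 25.0) / 2.0)
            t = np.where(np.abs(t) < 1e-12, 1e-12, t)
            prod = np.ones(m)
            np.multiply.at(prod, rows, t)
            m_cv = 2.0 * np.arctanh(np.clip(prod[rows] / t, -0.999999, 0.999999))
        post = llr_ch.copy()           # beliefs with reweighted messages
        np.add.at(post, cols, rho[rows] * m_cv)
        return (post < 0).astype(int)

Setting rho = np.ones(m) reproduces the standard sum-product decoder; the letter's contribution is precisely a non-uniform choice of these factors driven by the short-cycle structure of the graph.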


Citations
Journal ArticleDOI
TL;DR: In this paper, a cell-free massive multiple-input multiple-output (CF-mMIMO) architecture with joint list-based detection with soft interference cancellation (soft-IC) and access point selection is proposed.
Abstract: This paper proposes a cell-free massive multiple-input multiple-output (CF-mMIMO) architecture with joint list-based detection with soft interference cancellation (soft-IC) and access point (AP) selection. In particular, we derive a new closed-form expression for the minimum mean-square error receive filter while taking the uplink transmit powers and AP selection into account. This is achieved by optimizing the receive combining vector to minimize the mean square error between the detected symbol estimate and the transmitted symbol, after canceling the multi-user interference (MUI). Using low-density parity-check (LDPC) codes, an iterative detection and decoding (IDD) scheme based on message passing is devised. In order to perform joint detection at the central processing unit (CPU), the access points locally estimate the channel and send their received sample data to the CPU via the fronthaul links. In order to enhance the system's bit error rate performance, the detected symbols are iteratively exchanged between the joint detector and the LDPC decoder in log-likelihood ratio form. Furthermore, we draw insights into the derived detector as the number of IDD iterations increases. Finally, the proposed list detector is compared with existing detection techniques.
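As a rough companion to this abstract, one soft-IC MMSE detection step can be sketched as follows, assuming perfect channel knowledge at the CPU, unit-energy symbols, and omitting the list detection and AP selection stages; the symbol means and variances would come from the LDPC decoder's LLRs, and all names are illustrative.

    import numpy as np

    def soft_ic_mmse_step(y, H, s_hat, var_s, k, noise_var):
        # Detect user k after canceling the soft estimates (means) of the
        # other users; H is (antennas x users), s_hat = E[s], var_s = Var[s].
        interference = H @ s_hat - H[:, k] * s_hat[k]
        y_clean = y - interference
        # MMSE filter accounting for residual interference variance
        D = np.diag(var_s).astype(complex)
        D[k, k] = 1.0                  # user k treated as fully unknown
        cov = H @ D @ H.conj().T + noise_var * np.eye(H.shape[0])
        w = np.linalg.solve(cov, H[:, k])
        return w.conj().T @ y_clean

In an IDD loop this estimate would be demapped to extrinsic LLRs, passed to the LDPC decoder, and the decoder's output LLRs would refine s_hat and var_s for the next iteration.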
Journal ArticleDOI
TL;DR: This work presents a study of the evolution of the messages at check nodes during the iterative decoding process when using the normalized min-sum (NMS) LDPC decoding algorithm, in order to show the destructive effect of short cycles and to justify the effectiveness of the girth-aware normalized min-sum (GA-NMS) LDPC decoding algorithm in terms of error correction.
Abstract: Short block codes are in great demand owing to emerging applications that transmit short data units and must guarantee fast communication with minimal latency and complexity, which are among the technical challenges in today's wireless services and systems. In the context of channel coding with low-density parity-check (LDPC) codes, shorter LDPC block codes are more likely to contain short cycles of length 4 and 6. A cycle of minimum size prevents the propagation of information through the Tanner graph during the iterative process, so the messages decoded by a short block code are assumed to be of poor quality due to short cycles. In this work, we present a study of the evolution of the messages at check nodes during the iterative decoding process when using the normalized min-sum (NMS) LDPC decoding algorithm, in order to show the destructive effect of short cycles and to justify the effectiveness of the girth-aware normalized min-sum (GA-NMS) decoding algorithm in correcting errors, particularly for codes with cycles of length 4 and 6. In addition, the GA-NMS algorithm is evaluated in terms of bit error rate performance and convergence behavior using a wireless regional area network (WRAN) LDPC code, which is considered a short block code.
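For reference, the NMS check-node update that this study monitors is the min-sum rule scaled by a normalization factor; a minimal sketch is below (the girth-aware GA-NMS modification for edges involved in short cycles is not reproduced, and alpha = 0.8 is an illustrative value).

    import numpy as np

    def nms_check_update(msgs_in, alpha=0.8):
        # Extrinsic normalized min-sum update at one check node: sign
        # product times the minimum magnitude of the other inputs, scaled
        # by alpha (alpha == 1 gives plain min-sum).
        msgs_in = np.asarray(msgs_in, dtype=float)
        signs = np.sign(msgs_in)
        mags = np.abs(msgs_in)
        out = np.empty_like(msgs_in)
        for i in range(len(msgs_in)):
            out[i] = alpha * np.prod(np.delete(signs, i)) * np.delete(mags, i).min()
        return out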
Posted Content
TL;DR: In this article, the authors investigated the construction of polar codes by Gaussian approximation (GA) and developed an approach based on piecewise GA, obtaining a function that replaces the original GA function with a more accurate approximation and results in significant gains in performance.
Abstract: In this paper, we investigate the construction of polar codes by Gaussian approximation (GA) and develop an approach based on piecewise Gaussian approximation (PGA). In particular, with the piecewise approach we obtain a function that replaces the original GA function with a more accurate approximation, which results in a significant gain in performance. The proposed PGA construction of polar codes is presented in its integral form as well as an alternative approximation that does not rely on the integral form. Simulation results show that the proposed PGA construction outperforms the standard GA for several examples of polar codes and rates.
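The baseline GA construction that the PGA approach refines can be sketched as follows, using the common two-piece approximation of the phi function due to Chung et al.; the piecewise refinement proposed in the paper is not reproduced, and the bisection inverse is one standard implementation choice.

    import numpy as np

    def phi(x):
        # Two-piece approximation of phi(x) = 1 - E[tanh(u/2)], u ~ N(x, 2x)
        x = np.asarray(x, dtype=float)
        xs = np.maximum(x, 1e-12)
        small = np.exp(-0.4527 * xs ** 0.86 + 0.0218)
        large = np.sqrt(np.pi / xs) * np.exp(-xs / 4.0) * (1.0 - 10.0 / (7.0 * xs))
        return np.where(x < 10.0, small, large)

    def phi_inv(y, lo=1e-10, hi=1e4, iters=80):
        # phi is strictly decreasing, so invert by bisection
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
        return 0.5 * (lo + hi)

    def ga_means(n, sigma2):
        # Mean LLRs of the n synthesized bit-channels on a BI-AWGN channel
        # with noise variance sigma2 (n a power of two); the least reliable
        # indices (smallest means) are frozen.
        m = np.array([2.0 / sigma2])
        while len(m) < n:
            nxt = np.empty(2 * len(m))
            nxt[0::2] = [phi_inv(1.0 - (1.0 - phi(v)) ** 2) for v in m]
            nxt[1::2] = 2.0 * m
            m = nxt
        return m

Index ordering conventions differ between implementations (natural versus bit-reversed), but the recursion itself is the standard GA density evolution that the paper's piecewise variant replaces with a more accurate fit.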
Posted Content
TL;DR: Simulation results show that significant improvements in terms of bit error rate (BER) and sum-rate performances can be achieved by the proposed LR-FlexCoBF precoding algorithm.
Abstract: The application of precoding algorithms in multi-user massive multiple-input multiple-output (MU-Massive-MIMO) systems is restricted by the dimensionality constraint that the number of transmit antennas has to be greater than or equal to the total number of receive antennas. In this paper, a lattice reduction (LR)-aided flexible coordinated beamforming (LR-FlexCoBF) algorithm is proposed to overcome the dimensionality constraint in overloaded MU-Massive-MIMO systems. A random user selection scheme is integrated with the proposed LR-FlexCoBF to extend its application to MU-Massive-MIMO systems with arbitrary overloading levels. Simulation results show that significant improvements in terms of bit error rate (BER) and sum-rate performances can be achieved by the proposed LR-FlexCoBF precoding algorithm.
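The dimensionality constraint and the random user selection used to relax it are easy to make concrete; the sketch below only performs the selection step (the lattice-reduction-aided precoder itself is omitted), and the function name and interface are illustrative.

    import numpy as np

    def select_users(n_tx, rx_antennas, rng=None):
        # Randomly admit users while the total number of receive antennas
        # still fits the precoding constraint (sum of rx antennas <= n_tx).
        if rng is None:
            rng = np.random.default_rng()
        chosen, used = [], 0
        for u in rng.permutation(len(rx_antennas)):
            if used + rx_antennas[u] <= n_tx:
                chosen.append(u)
                used += rx_antennas[u]
        return sorted(chosen)

Precoding is then computed for the admitted subset only, which is what lets the scheme operate at arbitrary overloading levels.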
Posted Content
TL;DR: A distributed reduced-rank joint iterative estimation algorithm is developed, which has the ability to achieve significantly reduced communication overhead and improved performance when compared with existing techniques.
Abstract: This paper proposes a novel distributed reduced-rank scheme and an adaptive algorithm for distributed estimation in wireless sensor networks. The proposed distributed scheme is based on a transformation that performs dimensionality reduction at each agent of the network followed by a reduced-dimension parameter vector. A distributed reduced-rank joint iterative estimation algorithm is developed, which has the ability to achieve significantly reduced communication overhead and improved performance when compared with existing techniques. Simulation results illustrate the advantages of the proposed strategy in terms of convergence rate and mean square error performance.
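The joint scheme can be caricatured at a single agent as two coupled LMS-style recursions, one for the dimensionality-reduction matrix and one for the reduced-dimension filter; the step sizes and the plain-gradient form below are illustrative assumptions, not the exact algorithm of the paper.

    import numpy as np

    def reduced_rank_step(x, d, S, w_bar, mu_s=0.005, mu_w=0.05):
        # One joint update at an agent: x is the length-M input snapshot,
        # d the desired response, S the (M, D) reduction matrix (D << M),
        # w_bar the length-D reduced parameter vector (real-valued case).
        x_bar = S.T @ x                # project onto the reduced space
        e = d - w_bar @ x_bar          # a priori estimation error
        S = S + mu_s * e * np.outer(x, w_bar)   # gradient step on S
        w_bar = w_bar + mu_w * e * x_bar        # gradient step on w_bar
        return S, w_bar, e

Exchanging only the D-dimensional quantities between neighboring agents, rather than full M-dimensional vectors, is consistent with the communication savings claimed above.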
References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
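Gallager's ensemble is simple to instantiate: the parity-check matrix is a stack of j bands, the first block-diagonal and the rest random column permutations of it, giving column weight j and row weight k. The sketch below builds such a matrix without any girth conditioning, so short cycles may occur; the interface is an illustrative choice.

    import numpy as np

    def gallager_ldpc(n, j, k, seed=0):
        # (j, k)-regular parity-check matrix with n columns (k must divide
        # n); returns an (n*j/k, n) binary matrix of column weight j.
        assert n % k == 0
        rng = np.random.default_rng(seed)
        band = np.zeros((n // k, n), dtype=int)
        for i in range(n // k):
            band[i, i * k:(i + 1) * k] = 1     # row i checks k digits
        bands = [band] + [band[:, rng.permutation(n)] for _ in range(j - 1)]
        return np.vstack(bands)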

11,592 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Low-density parity-check (LDPC) codes are recognized as a class of linear block codes which can achieve near-Shannon capacity with linear-time encoding and parallelizable decoding algorithms....


Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles...
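Tanner's construction is, in effect, a membership rule: a word belongs to the long code exactly when, at every constraint node of the bipartite graph, the digits attached to that node form a codeword of the subcode. A toy membership test with a (7,4) Hamming subcode is sketched below; the subcode and the constraint layout are illustrative choices.

    import numpy as np

    # parity-check matrix of the (7,4) Hamming subcode (one common form)
    H_SUB = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

    def in_tanner_code(word, constraints, H_sub=H_SUB):
        # constraints: one tuple of digit indices per constraint node, each
        # of the subcode's length; the word is a codeword of the long code
        # iff every selected sub-vector satisfies the subcode's checks.
        word = np.asarray(word) % 2
        return all(not np.any((H_sub @ word[list(c)]) % 2) for c in constraints)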

3,246 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Finally, Section V concludes the paper....


  • ...The advantages of LDPC codes arise from the sparse (low-density) parity-check matrices which can be uniquely depicted by graphical representations, referred to as Tanner graphs [3]....


Journal ArticleDOI
TL;DR: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels, showing that performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.

3,032 citations

Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
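The heart of the bound is a one-line convexity argument. With A(\theta) the log partition function of an exponential-family MRF, write the parameter vector as a convex combination of tree-structured vectors, \theta = \sum_T \rho(T)\,\theta(T) with \rho(T) \ge 0 and \sum_T \rho(T) = 1; since A is convex, Jensen's inequality gives

    A(\theta) \;=\; A\Big( \sum_T \rho(T)\,\theta(T) \Big) \;\le\; \sum_T \rho(T)\, A\big(\theta(T)\big).

Optimizing the right-hand side over the admissible \{\theta(T)\} yields the convex variational problems described above, and the edge (or factor) appearance probabilities induced by \rho are exactly the reweighting factors exploited by the reweighted BP decoders discussed in connection with this letter.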

498 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Finally, Section V concludes the paper....


  • ...Recently, Wymeersch et al. [5], [6] introduced the uniformly reweighted BP (URW-BP) algorithm which exploits BP’s distributed nature and reduces the factor appearance probability (FAP) in [4] to a constant value....


  • ...Additionally, the BP algorithm is capable of producing the exact inference solutions if the graphical model is acyclic (i.e., a tree), while it does not guarantee to converge if the graph possesses short cycles which significantly deteriorate the overall performance [4]....


Journal ArticleDOI
TL;DR: A Viterbi-like algorithm is proposed that selectively avoids small cycle clusters that are isolated from the rest of the graph and yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
Abstract: This letter explains the effect of graph connectivity on error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm is different from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
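A basic building block for this kind of cycle conditioning is measuring short cycles locally; the sketch below finds the length of a short cycle through a given variable node by breadth-first search with branch labels (a local-girth probe, not the EMD metric itself, and the adjacency-list interface is an illustrative choice).

    from collections import deque

    def local_girth(v, var_to_chk, chk_to_var):
        # BFS from variable v in a simple Tanner graph; nodes are ('v', i)
        # or ('c', j). When two different branches rooted at v meet, the
        # two tree paths plus the meeting edge form a cycle through v.
        start = ('v', v)
        dist, branch, parent = {start: 0}, {start: -1}, {start: None}
        q = deque([start])
        while q:
            node = q.popleft()
            kind, idx = node
            for nb in (var_to_chk[idx] if kind == 'v' else chk_to_var[idx]):
                nxt = ('c' if kind == 'v' else 'v', nb)
                if nxt == parent[node]:
                    continue              # do not walk back up the tree
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    branch[nxt] = nb if node == start else branch[node]
                    parent[nxt] = node
                    q.append(nxt)
                elif branch[nxt] != branch[node]:
                    return dist[node] + dist[nxt] + 1
        return None                       # v lies on no cycle

Minimizing this probe over all variable nodes yields the girth of the graph; conditioning methods such as the EMD-based algorithm additionally score how well the detected cycles are connected to the rest of the graph.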

401 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Specifically, check nodes having a large number of short cycles are more likely to form clusters of small cycles, which significantly obstruct the convergence of BP algorithm within limited iterations [7]....
