Journal ArticleDOI

Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes

06 Aug 2012-IEEE Communications Letters (IEEE)-Vol. 16, Iss: 10, pp 1660-1663
TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
Abstract: In this paper we propose a novel message passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm is called the variable factor appearance probability belief propagation (VFAP-BP) algorithm and is suitable for low-latency wireless communications applications with short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
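To make the reweighting idea concrete, the sketch below implements a log-domain sum-product decoder in which incoming check messages are scaled by a factor appearance probability, in the style of uniformly reweighted BP. A single uniform rho is used here for simplicity; VFAP-BP instead assigns each check node its own weight according to the short cycles it participates in. Function names and the uniform-rho choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def reweighted_bp_decode(H, llr, rho=0.8, max_iter=50):
    """Log-domain sum-product decoding with a reweighting factor rho.

    H:   (m, n) binary parity-check matrix
    llr: length-n channel LLRs (positive values favour bit 0)
    rho: factor appearance probability; rho = 1.0 recovers standard BP
    """
    m, n = H.shape
    rows = [np.nonzero(H[i])[0] for i in range(m)]      # variables per check
    cols = [np.nonzero(H[:, j])[0] for j in range(n)]   # checks per variable
    msg_cv = np.zeros((m, n))                           # check-to-variable

    for _ in range(max_iter):
        # Variable update: incoming check messages are weighted by rho;
        # the message on the outgoing edge is subtracted at full weight.
        msg_vc = np.zeros((m, n))
        for j in range(n):
            total = llr[j] + rho * msg_cv[cols[j], j].sum()
            for i in cols[j]:
                msg_vc[i, j] = total - msg_cv[i, j]
        # Check update: standard tanh rule.
        for i in range(m):
            t = np.tanh(0.5 * msg_vc[i, rows[i]])
            for k, j in enumerate(rows[i]):
                others = np.prod(np.delete(t, k))
                msg_cv[i, j] = 2.0 * np.arctanh(np.clip(others, -0.999999, 0.999999))
        # Tentative hard decision and syndrome check.
        belief = llr + rho * np.array([msg_cv[cols[j], j].sum() for j in range(n)])
        x_hat = (belief < 0).astype(int)
        if not ((H @ x_hat) % 2).any():
            break
    return x_hat
```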


Citations
Journal ArticleDOI
TL;DR: This paper conceives a modified non-binary decoding algorithm for homogeneous Calderbank-Shor-Steane-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-four cycles.
Abstract: The near-capacity performance of classical low-density parity check (LDPC) codes and their efficient iterative decoding makes quantum LDPC (QLDPC) codes a promising candidate for quantum error correction. In this paper, we present a comprehensive survey of QLDPC codes from the perspective of code design as well as in terms of their decoding algorithms. We also conceive a modified non-binary decoding algorithm for homogeneous Calderbank–Shor–Steane-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-four cycles. Our modified decoder outperforms the state-of-the-art decoders in terms of their word error rate performance, despite imposing a reduced decoding complexity. Finally, we intricately amalgamate our modified decoder with the classic uniformly reweighted belief propagation for the sake of achieving an improved performance.
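The "homogeneous CSS-type" structure and the length-four cycle problem can be seen on a small example. A CSS code requires Hx Hz^T = 0 (mod 2); in the homogeneous case Hx = Hz = H, so any two rows of H overlap in an even number of columns, and every overlap of two columns creates a length-four cycle in the joint decoding graph. The sketch below checks this for the [[7,1,3]] Steane code (chosen here purely as an illustration).

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; the Steane [[7,1,3]] code
# is a homogeneous CSS code with Hx = Hz = H.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# CSS commutation condition: Hx @ Hz.T must vanish mod 2.
assert not ((H @ H.T) % 2).any()

# Any two distinct rows overlapping in exactly two columns form a
# length-four cycle between the corresponding X- and Z-type checks.
for a in range(3):
    for b in range(a + 1, 3):
        overlap = int((H[a] & H[b]).sum())
        print(f"rows {a},{b}: overlap {overlap}")   # prints 2 -> 4-cycles
```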

75 citations

Posted Content
TL;DR: The operational requirements of massive MIMO systems are discussed along with their operation in time-division duplexing mode, resource allocation and calibration requirements, and transmit and receive processing algorithms are examined in light of the specific needs of massive MIMO systems.
Abstract: This article presents a tutorial on multiuser multiple-antenna wireless systems with a very large number of antennas, known as massive multi-input multi-output (MIMO) systems. Signal processing challenges and future trends in the area of massive MIMO systems are presented and key application scenarios are detailed. A linear algebra approach is considered for the description of the system and data models of massive MIMO architectures. The operational requirements of massive MIMO systems are discussed along with their operation in time-division duplexing mode, resource allocation and calibration requirements. In particular, transmit and receiver processing algorithms are examined in light of the specific needs of massive MIMO systems. Simulation results illustrate the performance of transmit and receive processing algorithms under scenarios of interest. Key problems are discussed and future trends in the area of massive MIMO systems are pointed out.
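As a taste of the receive processing such tutorials cover, the snippet below simulates a single-cell massive MIMO uplink and detects QPSK symbols with a linear MMSE filter. All parameters (64 antennas, 8 users, i.i.d. Rayleigh channel, 10 dB SNR) are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, snr_db = 64, 8, 10                  # antennas, users, per-user SNR
sigma2 = 10 ** (-snr_db / 10)             # noise variance (unit symbol power)

# i.i.d. Rayleigh channel and QPSK transmit vector
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = rng.choice(qpsk, size=K)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H @ s + noise

# Linear MMSE receive filter: W = (H^H H + sigma2 I)^{-1} H^H
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(K), H.conj().T)
s_hat = W @ y
dec = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)
print("symbol errors:", int(np.count_nonzero(dec != s)))
```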

40 citations


Cites background from "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes"

  • ...perform message passing with reduced delays [89]–[91] are of paramount importance in future wireless systems....


Journal ArticleDOI
TL;DR: This letter provides a proof of the convergence and the optimality of the reweighted message passing (ReMP) algorithm when applied to solve BWBM problems in a distributed fashion.
Abstract: Many assignment problems, and channel allocation in OFDMA networks is a typical example, can be formulated as bipartite weighted b-matching (BWBM) problems. In this letter we provide a proof of the convergence and the optimality of the reweighted message passing (ReMP) algorithm when applied to solve BWBM problems in a distributed fashion. To this aim, we first show that the ReMP rule is a contraction mapping under a maximum mapping norm. Then, we show that the fixed convergence point is an optimal solution for the original assignment problem.
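For intuition, here is a minimal min-sum message-passing matcher for the b = 1 special case (ordinary maximum-weight bipartite matching), in the style analyzed by Bayati, Shah and Sharma; the letter's contraction-mapping result extends such analyses to general b-matching. The update and decision rules below are a sketch under the assumption that the optimal matching is unique (add tiny random noise to break ties).

```python
import numpy as np

def _max_excluding_self(v):
    """out[j] = max over k != j of v[k]."""
    order = np.argsort(v)
    out = np.full_like(v, v[order[-1]], dtype=float)
    out[order[-1]] = v[order[-2]]
    return out

def mp_matching(W, iters=200):
    """Min-sum message passing for maximum-weight bipartite 1-matching."""
    n = W.shape[0]
    alpha = np.zeros((n, n))   # messages from left node i to right node j
    beta = np.zeros((n, n))    # messages from right node j to left node i
    for _ in range(iters):
        # each message: edge weight minus the best competing offer
        new_alpha = np.array([W[i] - _max_excluding_self(beta[i]) for i in range(n)])
        new_beta = np.array([W[:, j] - _max_excluding_self(alpha[:, j])
                             for j in range(n)]).T
        alpha, beta = new_alpha, new_beta
    return beta.argmax(axis=1)   # left node i pairs with argmax_j beta[i, j]

W = np.array([[4.0, 1.0, 2.0],
              [2.0, 3.0, 1.0],
              [1.0, 2.0, 3.0]])
print(mp_matching(W))   # expected assignment: [0, 1, 2]
```

In the channel-allocation framing mentioned in the letter, the left nodes would be users, the right nodes OFDMA channels, and W the per-pair utility.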

27 citations


Cites background from "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes"

  • ...ReMP belongs to the family of reweighted MP algorithms first analyzed in [20] and subsequently extended to different application domains like LDPC codes [21] and wireless cooperative estimation and detection [22]....


Journal ArticleDOI
TL;DR: Simulations show that the proposed technique has a complexity comparable to the conventional P-DF detector while it obtains a performance close to the maximum-likelihood detector at a low to medium signal-to-noise ratio range.
Abstract: In this study, a novel low-complexity adaptive decision feedback (DF) detection scheme with parallel DF (P-DF) and P-DF constellation constraints (P-DFCC) is proposed for multiuser multi-input multi-output (MIMO) systems. The authors propose a constrained constellation map which introduces a number of selected points that serve as feedback candidates for interference cancellation. By introducing a reliability check, a higher degree of freedom is obtained to refine the unreliable estimates. The P-DFCC is followed by an adaptive receive filter to estimate the transmitted symbol. To reduce the complexity of computing the filters with time-varying MIMO channels, an adaptive recursive least squares algorithm is employed in the proposed P-DFCC scheme. An iterative detection and decoding (turbo) scheme is considered with the proposed P-DFCC algorithm. Simulations show that the proposed technique has a complexity comparable to the conventional P-DF detector while it achieves a performance close to the maximum-likelihood detector in the low-to-medium signal-to-noise ratio range.
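The adaptive part can be illustrated with a generic complex-valued recursive least squares (RLS) step of the kind used to track receive filters over time-varying channels; the decision-feedback structure and the constellation-constraint logic of P-DFCC are deliberately omitted here. Names and the forgetting factor are illustrative.

```python
import numpy as np

def rls_update(w, P, u, d, lam=0.99):
    """One RLS step for a complex filter w (estimate d from regressor u).

    P:   running estimate of the inverse input-correlation matrix
    lam: forgetting factor (< 1 lets the filter track channel variations)
    """
    Pu = P @ u
    k = Pu / (lam + np.vdot(u, Pu))       # gain vector; np.vdot gives u^H P u
    e = d - np.vdot(w, u)                 # a priori error, e = d - w^H u
    w = w + k * np.conj(e)
    P = (P - np.outer(k, u.conj() @ P)) / lam
    return w, P, e

# typical initialisation: w = 0, P = I / delta with a small delta
w, P = np.zeros(4, complex), np.eye(4) / 1e-2
```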

23 citations

Journal ArticleDOI
TL;DR: A novel informed dynamic scheduling strategy for decoding LDPC codes, denoted reliability-based residual belief propagation (Rel-RBP), is developed by exploiting the reliability of the messages and the residuals of the possible updates to choose the messages to be used by the decoding algorithm.
Abstract: Low-density parity-check (LDPC) codes have excellent performance for a wide range of applications at reasonable complexity. LDPC codes with short blocks avoid the high latency of codes with large block lengths, making them potential candidates for ultra-reliable low-latency applications of future wireless standards. In this work, a novel informed dynamic scheduling (IDS) strategy for decoding LDPC codes, denoted reliability-based residual belief propagation (Rel-RBP), is developed by exploiting the reliability of the messages and the residuals of the possible updates to choose the messages to be used by the decoding algorithm. A different measure for each iteration of the IDS schemes is also presented, which underlines the high cost of those algorithms in terms of computational complexity and motivates the development of the proposed strategy. Simulations show that Rel-RBP speeds up the decoding at reduced complexity and results in error rate performance gains over prior work.
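The core of residual scheduling is easy to state in code: recompute candidate check-to-variable messages and always propagate the one that would change the most. The sketch below implements this plain RBP greedy loop with a min-sum check rule and a naive O(E) search per update; the reliability weighting that distinguishes Rel-RBP, and the complexity reductions the letter develops, are left out on purpose.

```python
import numpy as np

def rbp_decode(H, llr, max_updates=500):
    """Plain residual BP: greedily propagate the check-to-variable message
    with the largest pending change (residual).  Min-sum check rule."""
    m, n = H.shape
    rows = [np.nonzero(H[i])[0] for i in range(m)]
    cols = [np.nonzero(H[:, j])[0] for j in range(n)]
    msg_cv = np.zeros((m, n))
    msg_vc = np.zeros((m, n))
    for j in range(n):
        msg_vc[cols[j], j] = llr[j]        # initialise with channel LLRs

    def check_msg(i, j):                   # min-sum check-node rule
        others = [msg_vc[i, k] for k in rows[i] if k != j]
        return np.prod(np.sign(others)) * min(abs(v) for v in others)

    for _ in range(max_updates):
        best, best_r = None, 1e-9          # naive O(E) residual search
        for i in range(m):
            for j in rows[i]:
                r = abs(check_msg(i, j) - msg_cv[i, j])
                if r > best_r:
                    best, best_r = (i, j), r
        if best is None:
            break
        i, j = best
        msg_cv[i, j] = check_msg(i, j)     # propagate the winning message
        for i2 in cols[j]:                 # refresh messages leaving variable j
            if i2 != i:
                msg_vc[i2, j] = llr[j] + sum(msg_cv[k, j] for k in cols[j] if k != i2)
    belief = llr + np.array([msg_cv[cols[j], j].sum() for j in range(n)])
    return (belief < 0).astype(int)
```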

16 citations

References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1s and each row contains a small fixed number k > j of 1s. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
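For flavour, here is a hard-decision bit-flipping decoder in the spirit of Gallager's simple nonoptimum scheme: flip the bits involved in the largest number of unsatisfied checks and repeat. This is a sketch of the idea, not Gallager's exact probabilistic algorithm.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit flipping on the Tanner graph of H.

    H: (m, n) binary parity-check matrix
    y: length-n hard bits received from a binary symmetric channel
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                          # all checks satisfied
        # count unsatisfied checks touching each bit; flip the worst bits
        unsat = H.T @ syndrome
        x = np.where(unsat == unsat.max(), x ^ 1, x)
    return x
```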

11,592 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Low-density parity-check (LDPC) codes are recognized as a class of linear block codes which can achieve near-Shannon capacity with linear-time encoding and parallelizable decoding algorithms....


Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
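A Tanner graph is simply the bipartite graph whose biadjacency matrix is H, and its girth (shortest cycle length) is the key structural property for the iterative decoders discussed on this page. A small self-contained BFS-based sketch, with an illustrative H:

```python
import numpy as np
from collections import deque

def tanner_girth(H):
    """Shortest cycle length in the Tanner graph of H, via BFS from every
    node.  Checks are nodes 0..m-1, variables m..m+n-1."""
    m, n = H.shape
    adj = {i: list(m + np.nonzero(H[i])[0]) for i in range(m)}
    adj.update({m + j: list(np.nonzero(H[:, j])[0]) for j in range(n)})
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: -1}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:           # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
print(tanner_girth(H))   # 6: no two checks of this H share two variables
```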

3,246 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper


  • ...The advantages of LDPC codes arise from the sparse (low-density) parity-check matrices which can be uniquely depicted by graphical representations, referred to as Tanner graphs [3]....


Journal ArticleDOI
TL;DR: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels, showing that performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.

3,032 citations

Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
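The upper bound itself follows from convexity of the log partition function A: write the parameter vector theta as a convex combination of tree-structured parameter vectors and apply Jensen's inequality, giving A(theta) <= sum_T rho_T A(theta_T). The brute-force sketch below verifies this on a 3-node binary cycle, splitting the edges over the three spanning trees with edge appearance probability rho_e = 2/3; the toy parameters are illustrative.

```python
import numpy as np
from itertools import product

def log_Z(edges, n=3):
    """Brute-force log partition function of a binary pairwise MRF on n spins."""
    energies = [sum(th * x[i] * x[j] for (i, j), th in edges.items())
                for x in product([-1, 1], repeat=n)]
    v = np.array(energies)
    return v.max() + np.log(np.exp(v - v.max()).sum())

theta = {(0, 1): 0.7, (1, 2): 0.5, (0, 2): 0.3}   # single cycle on 3 nodes
rho_e = 2.0 / 3.0    # each edge appears in 2 of the 3 spanning trees

exact = log_Z(theta)
# uniform convex combination over the 3 spanning trees (drop one edge each);
# tree parameters are rescaled by 1/rho_e so the combination reproduces theta
bound = sum(log_Z({e: th / rho_e for e, th in theta.items() if e != drop})
            for drop in theta) / 3.0
print(f"exact log Z = {exact:.4f}, tree-reweighted bound = {bound:.4f}")
```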

498 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper


  • ...Recently, Wymeersch et al. [5], [6] introduced the uniformly reweighted BP (URW-BP) algorithm which exploits BP’s distributed nature and reduces the factor appearance probability (FAP) in [4] to a constant value....


  • ...Additionally, the BP algorithm is capable of producing the exact inference solutions if the graphical model is acyclic (i.e., a tree), while it is not guaranteed to converge if the graph possesses short cycles, which significantly deteriorate the overall performance [4]....


Journal ArticleDOI
TL;DR: A Viterbi-like algorithm is proposed that selectively avoids small cycle clusters that are isolated from the rest of the graph and yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
Abstract: This letter explains the effect of graph connectivity on error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm is different from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
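The "easily computed estimate" referred to here is the ACE (approximate cycle EMD) metric from this line of work: for a given cycle, sum the variable-node degrees minus two, i.e. count the edges that could carry extrinsic messages out of the cycle. A minimal helper, assuming the cycle's variable nodes have already been enumerated:

```python
import numpy as np

def ace(H, cycle_vars):
    """ACE of a cycle: sum of (degree - 2) over its variable nodes.

    H: binary parity-check matrix; cycle_vars: column indices on the cycle.
    Degree-2 variables contribute nothing, so low-ACE cycles are exactly the
    poorly connected clusters the construction tries to avoid."""
    deg = H.sum(axis=0)
    return int(sum(deg[v] - 2 for v in cycle_vars))
```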

401 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Specifically, check nodes having a large number of short cycles are more likely to form clusters of small cycles, which significantly obstruct the convergence of the BP algorithm within limited iterations [7]....
