Journal ArticleDOI

Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes

06 Aug 2012 · IEEE Communications Letters (IEEE) · Vol. 16, Iss. 10, pp. 1660-1663
TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
Abstract: In this paper we propose a novel message passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm is called variable factor appearance probability belief propagation (VFAP-BP) algorithm and is suitable for wireless communications applications with low-latency and short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
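The paper's VFAP-BP algorithm assigns reweighting factors per check node based on short-cycle counts. As a point of reference, the uniformly reweighted special case (a single factor rho for all checks, with rho = 1 recovering standard BP) can be sketched as follows; the function name and parameter choices are illustrative, not the paper's:

```python
import numpy as np

def urw_bp_decode(H, llr, rho=1.0, max_iter=10):
    """Log-domain sum-product decoding with a uniform reweighting factor rho.
    rho = 1.0 recovers standard BP; VFAP-BP instead varies the factors per
    check node according to its short-cycle counts (sketch, not the paper's code)."""
    m, n = H.shape
    Lam = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(max_iter):
        # variable-to-check: channel LLR + rho * (all incoming) - own message
        lam = (llr + rho * Lam.sum(axis=0))[None, :] - Lam
        for c in range(m):
            idx = np.flatnonzero(H[c])
            t = np.tanh(np.clip(lam[c, idx] / 2.0, -15, 15))
            prod = np.prod(t)
            for j, v in enumerate(idx):
                extr = prod / t[j] if t[j] != 0 else 0.0
                Lam[c, v] = 2.0 * np.arctanh(np.clip(extr, -0.999999, 0.999999))
    beliefs = llr + rho * Lam.sum(axis=0)
    return (beliefs < 0).astype(int)            # negative LLR -> bit 1
```

On the (7,4) Hamming code, a single weakly erroneous LLR is corrected back to the all-zero codeword for both the standard and reweighted settings.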


Citations
Posted Content
TL;DR: In this article, the authors proposed low-complexity robust adaptive beamforming (RAB) techniques based on shrinkage methods, in which the only prior knowledge required by the proposed algorithms is the angular sector in which the actual steering vector is located and the antenna array geometry.
Abstract: In this paper, we propose low-complexity robust adaptive beamforming (RAB) techniques based on shrinkage methods. The only prior knowledge required by the proposed algorithms is the angular sector in which the actual steering vector is located and the antenna array geometry. We first present a Low-Complexity Shrinkage-Based Mismatch Estimation (LOCSME) algorithm to estimate the desired signal steering vector mismatch, in which the interference-plus-noise covariance (INC) matrix is estimated with the Oracle Approximating Shrinkage (OAS) method and the weights are computed with matrix inversions. We then develop low-cost stochastic gradient (SG) recursions to estimate the INC matrix and update the beamforming weights, resulting in the proposed LOCSME-SG algorithm. Simulation results show that both LOCSME and LOCSME-SG achieve very good output signal-to-interference-plus-noise ratio (SINR) compared to previously reported adaptive RAB algorithms.
Posted Content
TL;DR: Simulations for 5G scenarios show that the proposed polar codes perform comparably to Low-Density Parity-Check (LDPC) codes.
Abstract: This paper presents a puncturing technique based on the channel polarization index for the design of rate-compatible polar codes in the fifth generation (5G) of wireless systems. The proposed strategy consists of two steps: we first generate the codeword message; and then we reduce the length of the codeword based on the channel polarization index, where channels with the smallest channel polarization indices are punctured. We consider the proposed punctured polar codes with the successive cancellation (SC) decoder and the Cyclic Redundancy Check (CRC) aided SC list (CA-SCL) decoder, with punctured bits known to both the encoder and the decoder. The Polar Spectra (PS) are then used to analyze the performance of the puncturing technique. Simulations for 5G scenarios show that the proposed polar codes perform comparably to Low-Density Parity-Check (LDPC) codes.
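The puncturing step above selects positions by a channel reliability ordering. As an illustrative stand-in for the paper's polarization-index criterion, the sketch below uses the binary erasure channel (BEC) Bhattacharyya recursion as the reliability measure and punctures the least-reliable positions; the function names are hypothetical:

```python
import numpy as np

def bec_bhattacharyya(n_levels, z0=0.5):
    """Bhattacharyya parameters of the 2**n_levels polarized BEC sub-channels
    (natural order).  One polarization step maps Z to the pair
    Z(W-) = 2Z - Z^2 (degraded) and Z(W+) = Z^2 (upgraded)."""
    z = np.array([z0])
    for _ in range(n_levels):
        minus = 2 * z - z**2
        plus = z**2
        z = np.stack([minus, plus], axis=1).reshape(-1)  # interleave pairs
    return z

def puncture_pattern(n_levels, n_punct, z0=0.5):
    """Indices of the n_punct least-reliable (largest-Z) positions (sketch)."""
    z = bec_bhattacharyya(n_levels, z0)
    return set(np.argsort(-z)[:n_punct].tolist())
```

Since each polarization step preserves the average Bhattacharyya parameter, the recursion is easy to sanity-check: the sum of all sub-channel parameters equals 2**n_levels times z0.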

Cites methods from "Low-Latency Reweighted Belief Propa..."

  • ...Even though the LDPC codes are decoded with the standard sum-product algorithm, other decoders such as [19] can also be used....


Posted Content
TL;DR: Simulations show that the proposed UW-GFDM system outperforms prior work and allows the application of a highly efficient linear minimum mean squared error (LMMSE) smoother for noise reduction at the receiver.
Abstract: In this paper, we propose the use of a deterministic sequence, known as unique word (UW), instead of the cyclic prefix (CP) in generalized frequency division multiplexing (GFDM) systems. The UW consists of known sequences that, if not null, can be used advantageously for synchronization and channel estimation purposes. In addition, UW allows the application of a highly efficient linear minimum mean squared error (LMMSE) smoother for noise reduction at the receiver. To avoid the conditions of non-orthogonality caused by the insertion of the UW and performance degradation in time varying frequency-selective channels, we use frequency-shift offset quadrature amplitude modulation (FS-OQAM). We present a signal model of a UW-GFDM system considering a single and multiple UWs. We then develop an LMMSE receive filter for signal reception of the proposed UW-GFDM system. Simulations show that the proposed UW-GFDM system outperforms prior work.
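The LMMSE smoother mentioned above has the standard Wiener form for an additive-noise model; a generic sketch (the UW-GFDM receiver applies this idea to the known unique-word portion of the block, and the covariance matrices here are assumed known):

```python
import numpy as np

def lmmse_smoother(y, Cx, Cn):
    """Generic LMMSE estimate x_hat = Cx (Cx + Cn)^{-1} y for the model
    y = x + n with zero-mean x and n of known covariances Cx, Cn.
    A sketch of the smoothing step, not the paper's receiver."""
    W = Cx @ np.linalg.inv(Cx + Cn)   # Wiener gain matrix
    return W @ y
```

With equal signal and noise covariances, the smoother simply halves the observation, which matches the intuition of averaging equally credible signal and noise hypotheses.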
Posted Content
TL;DR: This work proposes a knowledge-aided IDD system that employs a minimum mean-square error detector with refined iterative processing and a reweighted belief propagation (BP) decoding algorithm, which exploits the knowledge of short cycles in the graph structure and reweighting factors derived from the expansion of hypergraphs.
Abstract: In this work, we consider the problem of reduced latency of low-density parity-check (LDPC) codes with an iterative detection and decoding (IDD) receiver in multiuser multiple-antenna systems. The proposed knowledge-aided IDD (KA-IDD) system employs a minimum mean-square error detector with refined iterative processing and a reweighted belief propagation (BP) decoding algorithm. We present reweighted BP decoding algorithms, which exploit the knowledge of short cycles in the graph structure and reweighting factors derived from the expansion of hypergraphs. Simulation results show that the proposed KA-IDD scheme and algorithms outperform prior art and require a reduced number of decoding iterations.

Cites methods from "Low-Latency Reweighted Belief Propa..."

  • ...The proposed KA-IDD scheme and BP algorithms are inspired by the reweighted BP decoding algorithms in [27], [28], which exploit the graphical distributions of the Tanner graph, iterative processing and weight optimization....


  • ...[27] upgraded the reweighted BP algorithm from pairwise graphs to hypergraphs and reduced the set of reweighted parameters to a constant, whereas Liu and de Lamare considered the use of two possible values in [28]....


Posted Content
TL;DR: These algorithms use an adaptive step-size to accelerate the learning and can offer an excellent tradeoff between convergence speed and steady-state performance, which allows them to solve nonlinear filtering and estimation problems with a large number of parameters without requiring a large computational cost.
Abstract: In the last decade, a considerable research effort has been devoted to developing adaptive algorithms based on kernel functions. One of the main features of these algorithms is that they form a family of universal approximation techniques, solving problems with nonlinearities elegantly. In this paper, we present data-selective adaptive kernel normalized least-mean square (KNLMS) algorithms that can increase their learning rate and reduce their computational complexity. In fact, these methods deal with kernel expansions, creating a growing structure also known as the dictionary, whose size depends on the number of observations and their innovation. The algorithms described herein use an adaptive step-size to accelerate the learning and can offer an excellent tradeoff between convergence speed and steady-state performance, which allows them to solve nonlinear filtering and estimation problems with a large number of parameters without requiring a large computational cost. The data-selective update scheme also limits the number of operations performed and the size of the dictionary created by the kernel expansion, saving computational resources and dealing with one of the major problems of kernel adaptive algorithms. A statistical analysis is carried out along with a computational complexity analysis of the proposed algorithms. Simulations show that the proposed KNLMS algorithms outperform existing algorithms in examples of nonlinear system identification and prediction of a time series originating from a nonlinear difference equation.

Additional excerpts

  • ...Σ_{k=1} a_k(i) κ(x(i), c_k) / (ε + κ(c_k, c_k)) (27) This is an important result because it controls the growing network created by the algorithm [33]–[73]....


References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
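The (n, j, k) structure described above admits a simple construction, attributed to Gallager: stack j bands, the first with its i-th row covering columns i*k through (i+1)*k - 1, and the rest being random column permutations of it. A minimal sketch (the function name and seeding are illustrative):

```python
import numpy as np

def gallager_ldpc(n, j, k, seed=0):
    """Sketch of Gallager's (n, j, k) regular LDPC parity-check matrix:
    j stacked bands, each a column permutation of a base band whose i-th
    row has 1's in columns i*k .. (i+1)*k - 1.  Requires k to divide n."""
    assert n % k == 0, "block length must be divisible by the row weight"
    rng = np.random.default_rng(seed)
    band = np.zeros((n // k, n), dtype=int)
    for i in range(n // k):
        band[i, i*k:(i+1)*k] = 1
    blocks = [band] + [band[:, rng.permutation(n)] for _ in range(j - 1)]
    return np.vstack(blocks)
```

Each band contributes exactly one 1 per column, so the result has uniform column weight j and row weight k by construction.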

11,592 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...I. INTRODUCTION LOW-DENSITY parity-check (LDPC) codes are recognized as a class of linear block codes which can achieve near-Shannon capacity with linear-time encoding and parallelizable decoding algorithms....


Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles…

3,246 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Finally, Section V concludes the paper....


  • ...The advantages of LDPC codes arise from the sparse (low-density) parity-check matrices which can be uniquely depicted by graphical representations, referred to as Tanner graphs [3]....


Journal ArticleDOI
TL;DR: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels, showing that performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.

3,032 citations

Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
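The upper bound described above rests on a short convexity argument; a sketch in standard exponential-family notation (which may differ from the paper's):

```latex
% The log partition function
%   A(\theta) = \log \sum_{x} \exp \langle \theta, \phi(x) \rangle
% is convex in \theta.  Hence, for any distribution \rho over spanning
% trees whose tree-structured parameter vectors \theta(T) reproduce
% \theta as a convex combination, Jensen's inequality gives a bound in
% which every term is exactly computable by sum-product on a tree:
\theta = \sum_{T} \rho(T)\, \theta(T)
\;\Longrightarrow\;
A(\theta) \le \sum_{T} \rho(T)\, A\bigl(\theta(T)\bigr).
```

Optimizing the right-hand side over the tree parameters (and over ρ) yields the variational problems with a unique global optimum that the abstract describes.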

498 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Finally, Section V concludes the paper....

  • ...Recently, Wymeersch et al. [5], [6] introduced the uniformly reweighted BP (URW-BP) algorithm which exploits BP’s distributed nature and reduces the factor appearance probability (FAP) in [4] to a constant value....


  • ...Additionally, the BP algorithm is capable of producing the exact inference solutions if the graphical model is acyclic (i.e., a tree), while it is not guaranteed to converge if the graph possesses short cycles, which significantly deteriorate the overall performance [4]....


Journal ArticleDOI
TL;DR: A Viterbi-like algorithm is proposed that selectively avoids small cycle clusters that are isolated from the rest of the graph and yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
Abstract: This letter explains the effect of graph connectivity on error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm is different from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
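The EMD metric refines plain cycle-length (girth) conditioning. As background, the girth of a Tanner graph can be computed with breadth-first search from every node; this sketch computes that simpler statistic, not the paper's EMD:

```python
import numpy as np
from collections import deque

def tanner_girth(H):
    """Girth (shortest cycle length) of the Tanner graph of parity-check
    matrix H, via BFS from every node; returns None for a cycle-free graph.
    Variable nodes are 0..n-1, check nodes n..n+m-1."""
    m, n = H.shape
    adj = [[] for _ in range(n + m)]
    for c in range(m):
        for v in np.flatnonzero(H[c]):
            adj[v].append(n + c)
            adj[n + c].append(v)
    best = None
    for s in range(n + m):
        dist = [-1] * (n + m)
        parent = [-1] * (n + m)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif w != parent[u]:
                    # non-tree edge closes a cycle through the BFS root
                    cyc = dist[u] + dist[w] + 1
                    if best is None or cyc < best:
                        best = cyc
    return best
```

Taking the minimum over all BFS roots gives the exact girth of an unweighted graph; for the (7,4) Hamming code's Tanner graph, two checks share two variables, so the girth is 4.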

401 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Specifically, check nodes having a large number of short cycles are more likely to form clusters of small cycles, which significantly obstruct the convergence of the BP algorithm within limited iterations [7]....
