Journal ArticleDOI

# Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes

06 Aug 2012, Vol. 16, Iss. 10, pp. 1660–1663

TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.

Abstract: In this paper we propose a novel message-passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm, called the variable factor appearance probability belief propagation (VFAP-BP) algorithm, is suitable for low-latency wireless communications applications with short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
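The reweighting idea can be illustrated with a short sketch. The snippet below is a minimal *uniformly* reweighted min-sum decoder (a constant weight `rho` on the check messages, with `rho = 1` recovering standard min-sum); the paper's VFAP-BP rule instead adapts the weight using short-cycle statistics, which is not reproduced here. The function name and structure are ours for illustration, not from the paper.

```python
import numpy as np

def urw_min_sum_decode(H, llr, rho=0.8, max_iter=50):
    """Reweighted min-sum decoding on the Tanner graph of H.

    Sketch of uniformly reweighted BP with a constant weight rho;
    rho = 1 recovers standard min-sum.  H is a 0/1 parity-check
    matrix, llr the channel log-likelihood ratios (positive = bit 0).
    """
    m, n = H.shape
    mask = H.astype(bool)
    msg_cv = np.zeros((m, n))            # check -> variable messages
    for _ in range(max_iter):
        # Variable -> check: channel LLR plus rho-weighted sum of all
        # incoming check messages, minus the message on this edge.
        msg_vc = np.where(mask, llr + rho * msg_cv.sum(axis=0) - msg_cv, 0.0)
        # Check -> variable (min-sum): extrinsic sign product and
        # extrinsic minimum magnitude on each check row.
        sign = np.where(mask, np.sign(msg_vc) + (msg_vc == 0), 1.0)
        row_sign = sign.prod(axis=1, keepdims=True)
        mag = np.where(mask, np.abs(msg_vc), np.inf)
        srt = np.sort(mag, axis=1)
        min1, min2 = srt[:, :1], srt[:, 1:2]
        ext_min = np.where(mag == min1, min2, min1)
        msg_cv = np.where(mask, row_sign * sign * ext_min, 0.0)
        # Tentative hard decision and syndrome check.
        belief = llr + rho * msg_cv.sum(axis=0)
        x = (belief < 0).astype(int)
        if not (H @ x % 2).any():
            break
    return x
```

Because `rho < 1` discounts the total incoming evidence, messages circulating on short cycles are counted less than once, which is the intuition the abstract appeals to.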

Topics: Factor graph (68%), Sequential decoding (67%), Belief propagation (64%), List decoding (62%)


##### Citations

Journal ArticleDOI
Abstract: Generalized spatial modulation (GSM) uses $n_{t}$ transmit antenna elements but fewer transmit radio frequency (RF) chains, $n_{rf}$. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with $n_{rf}=1$ and $n_{rf}=n_{t}$, respectively. In GSM, in addition to conveying information bits through $n_{rf}$ conventional modulation symbols (for example, QAM), the indices of the $n_{rf}$ active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO, by about 2 to 9 dB at a bit error rate of $10^{-3}$. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet which is more power efficient.
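The spectral-efficiency argument in the last sentence can be made concrete. Under the usual GSM rate accounting, the antenna-index bits are $\lfloor \log_2 \binom{n_t}{n_{rf}} \rfloor$ and each active RF chain carries $\log_2 M$ QAM bits per channel use; the helper below (our name, not from the paper) computes the total:

```python
from math import comb, log2, floor

def gsm_bpcu(n_t, n_rf, M):
    """Bits per channel use of generalized spatial modulation:
    index bits from choosing n_rf of n_t antennas, plus n_rf
    symbols from an M-ary QAM alphabet."""
    return floor(log2(comb(n_t, n_rf))) + n_rf * int(log2(M))
```

For example, `gsm_bpcu(8, 2, 4)` and `gsm_bpcu(4, 4, 4)` both give 8 bits per channel use, but the GSM configuration does so with 4-QAM on only two active chains, which is the power-efficiency trade the abstract describes.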

134 citations

Journal ArticleDOI
Abstract: We propose iterative detection and decoding (IDD) algorithms with Low-Density Parity-Check (LDPC) codes for Multiple Input Multiple Output (MIMO) systems operating in block-fading and fast Rayleigh fading channels. Soft-input soft-output minimum mean-square error receivers with successive interference cancellation are considered. In particular, we devise a novel strategy to improve the bit error rate (BER) performance of IDD schemes, which takes into account the soft *a posteriori* output of the decoder in a block-fading channel when Root-Check LDPC codes are used. A MIMO IDD receiver with soft information processing that exploits the code structure and the behavior of the log likelihood ratios is also developed. Moreover, we present a scheduling algorithm for decoding LDPC codes in block-fading channels. Simulations show that the proposed techniques result in significant gains in terms of BER for both block-fading and fast-fading channels.

82 citations

Journal ArticleDOI
TL;DR: This paper presents cost-effective low-rank techniques for designing robust adaptive beamforming algorithms based on the exploitation of the cross-correlation between the array observation data and the output of the beamformer, resulting in the proposed orthogonal Krylov subspace projection mismatch estimation (OKSPME) method.
Abstract: This paper presents cost-effective low-rank techniques for designing robust adaptive beamforming (RAB) algorithms. The proposed algorithms are based on the exploitation of the cross-correlation between the array observation data and the output of the beamformer. First, we construct a general linear equation considered in large dimensions whose solution yields the steering vector mismatch. Then, we employ the idea of the full orthogonalization method (FOM), an orthogonal Krylov subspace based method, to iteratively estimate the steering vector mismatch in a reduced-dimensional subspace, resulting in the proposed orthogonal Krylov subspace projection mismatch estimation (OKSPME) method. We also devise adaptive algorithms based on stochastic gradient (SG) and conjugate gradient (CG) techniques to update the beamforming weights with low complexity and avoid any costly matrix inversion. The main advantages of the proposed low-rank and mismatch estimation techniques are their cost-effectiveness when dealing with high-dimension subspaces or large sensor arrays. Simulation results show excellent performance in terms of the output signal-to-interference-plus-noise ratio (SINR) of the beamformer among all the compared RAB methods.

77 citations



Journal ArticleDOI
TL;DR: A novel strategy to improve the bit error rate (BER) performance of IDD schemes is devised, which takes into account the soft a posteriori output of the decoder in a block-fading channel when root-check LDPC codes are used.
Abstract: We propose iterative detection and decoding (IDD) algorithms with low-density parity-check (LDPC) codes for multiple-input multiple-output (MIMO) systems operating in block-fading and fast Rayleigh fading channels. Soft-input–soft-output minimum-mean-square-error (MMSE) receivers with successive interference cancelation are considered. In particular, we devise a novel strategy to improve the bit error rate (BER) performance of IDD schemes, which takes into account the soft a posteriori output of the decoder in a block-fading channel when root-check LDPC codes are used. A MIMO IDD receiver with soft information processing that exploits the code structure and the behavior of the log-likelihood ratios is also developed. Moreover, we present a scheduling algorithm for decoding LDPC codes in block-fading channels. Simulations show that the proposed techniques result in significant gains in terms of BER for both block-fading and fast-fading channels.

76 citations

### Cites methods from "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes"

• ...Recent LDPC techniques [5]–[11] that improve the coding gain and have low-complexity encoding and...


Journal ArticleDOI
Boya Qin
TL;DR: Simulation results are presented for time-varying wireless environments and show that the proposed JPDF minimum-SER receive processing strategy and algorithms achieve a superior performance than existing methods with a reduced computational complexity.
Abstract: In this work, we propose a novel adaptive reduced-rank receive processing strategy based on joint preprocessing, decimation and filtering (JPDF) for large-scale multiple-antenna systems. In this scheme, a reduced-rank framework is employed for linear receive processing and multiuser interference suppression based on the minimization of the symbol-error-rate (SER) cost function. We present a structure with multiple processing branches that performs a dimensionality reduction, where each branch contains a group of jointly optimized preprocessing and decimation units, followed by a linear receive filter. We then develop stochastic gradient (SG) algorithms to compute the parameters of the preprocessing and receive filters, along with a low-complexity decimation technique for both binary phase shift keying (BPSK) and $M$-ary quadrature amplitude modulation (QAM) symbols. In addition, an automatic parameter selection scheme is proposed to further improve the convergence performance of the proposed reduced-rank algorithms. Simulation results are presented for time-varying wireless environments and show that the proposed JPDF minimum-SER receive processing strategy and algorithms achieve superior performance to existing methods with a reduced computational complexity.

73 citations

##### References

Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number $j \geq 3$ of 1's and each row contains a small fixed number $k > j$ of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed $j$. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed $j$. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For $j > 3$ and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
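To give a flavor of the simple hard-decision end of this decoding family, here is a minimal bit-flipping sketch (a descendant of Gallager's iterative schemes, not the exact algorithm of the monograph): each iteration flips the bits that participate in the largest number of unsatisfied parity checks.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=100):
    """Hard-decision bit-flipping decoding over parity-check matrix H.

    y is the received hard-decision word (0/1 ints).  Each iteration
    flips every bit involved in the maximum number of unsatisfied
    checks, stopping once the syndrome is zero.  Illustrative only.
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2            # unsatisfied checks
        if not syndrome.any():
            break
        counts = H.T @ syndrome         # unsatisfied checks per bit
        x = (x + (counts == counts.max())) % 2
    return x
```

On sparse matrices this decoder needs only integer arithmetic, which is why its complexity grows roughly linearly with block length, echoing the abstract's point about the decoder's equipment complexity.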

10,950 citations

### "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes" refers background in this paper

• ...I. INTRODUCTION LOW-DENSITY parity-check (LDPC) codes are recognized as a class of linear block codes which can achieve near-Shannon capacity with linear-time encoding and parallelizable decoding algorithms....


Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles

3,078 citations

### "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes" refers background in this paper

• ...Finally, Section V concludes the paper....


• ...The advantages of LDPC codes arise from the sparse (low-density) paritycheck matrices which can be uniquely depicted by graphical representations, referred to as Tanner graphs [3]....


Journal ArticleDOI
TL;DR: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels, showing that performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.

2,978 citations

Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
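The reweighted message-passing fixed points the abstract refers to can be written down compactly. The following is a sketch of the tree-reweighted sum-product update in standard notation (our transcription, not quoted from the paper): $\theta_t, \theta_{st}$ are node and edge potentials, $\Gamma(t)$ the neighbors of node $t$, and $\rho_{st}$ the probability that edge $(s,t)$ appears in a randomly chosen spanning tree.

```latex
M_{ts}(x_s) \;\propto\; \sum_{x_t} \exp\!\left(
    \frac{\theta_{st}(x_s, x_t)}{\rho_{st}} + \theta_t(x_t)
\right)
\frac{\prod_{v \in \Gamma(t) \setminus \{s\}} \bigl[M_{vt}(x_t)\bigr]^{\rho_{vt}}}
     {\bigl[M_{st}(x_t)\bigr]^{1-\rho_{ts}}}
```

Setting every $\rho = 1$ recovers the ordinary sum-product update, which is how this family generalizes standard BP while keeping the bound property described above.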

491 citations

### "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes" refers background in this paper

• ...Finally, Section V concludes the paper....


• ...Recently, Wymeersch et al. [5], [6] introduced the uniformly reweighted BP (URW-BP) algorithm which exploits BP’s distributed nature and reduces the factor appearance probability (FAP) in [4] to a constant value....


• ...Additionally, the BP algorithm is capable of producing the exact inference solutions if the graphical model is acyclic (i.e., a tree), while it is not guaranteed to converge if the graph possesses short cycles, which significantly deteriorate the overall performance [4]....


Journal ArticleDOI
C. Jones
TL;DR: A Viterbi-like algorithm is proposed that selectively avoids small cycle clusters that are isolated from the rest of the graph and yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
Abstract: This letter explains the effect of graph connectivity on error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm is different from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
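The cycle structure these metrics act on is cheap to extract from $H$. As a small illustration (the shortest-cycle count only, not the EMD metric itself): a pair of check nodes sharing $t \geq 2$ variable nodes contributes $\binom{t}{2}$ length-4 cycles to the Tanner graph, so the 4-cycle count follows from the off-diagonal entries of $H H^{T}$.

```python
import numpy as np
from math import comb

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check
    matrix H: each pair of check rows sharing t >= 2 variable
    columns contributes comb(t, 2) four-cycles."""
    overlap = H @ H.T                   # overlap[i, j] = shared variables
    m = H.shape[0]
    return sum(comb(int(overlap[i, j]), 2)
               for i in range(m) for j in range(i + 1, m))
```

A code designer would drive this count toward zero (or weight it by connectivity, as EMD does) when conditioning the graph.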

377 citations

### "Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes" refers background in this paper

• ...Specifically, check nodes having a large number of short cycles are more likely to form clusters of small cycles, which significantly obstruct the convergence of the BP algorithm within limited iterations [7]....
