Journal ArticleDOI

Low-Latency Reweighted Belief Propagation Decoding for LDPC Codes

06 Aug 2012 - IEEE Communications Letters (IEEE) - Vol. 16, Iss. 10, pp. 1660-1663
TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
Abstract: In this paper we propose a novel message passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm is called the variable factor appearance probability belief propagation (VFAP-BP) algorithm and is suitable for wireless communications applications with low latency and short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
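
As a rough illustration of the idea, the sketch below runs a standard sum-product LDPC decoder with a per-check weight rho[c] applied to the check-to-variable messages. The abstract does not specify the VFAP weight assignment, so rho is left as an input here; this is a minimal sketch of reweighted BP, not the authors' exact algorithm.

    import numpy as np

    def decode_reweighted_bp(H, llr, rho, max_iter=20):
        """Sum-product LDPC decoding with a per-check weight rho[c] applied
        to check-to-variable messages (stand-in for VFAP-BP reweighting)."""
        m, n = H.shape
        Vc = [np.flatnonzero(H[c]) for c in range(m)]     # variables in check c
        Cv = [np.flatnonzero(H[:, v]) for v in range(n)]  # checks touching v
        msg_vc = {(v, c): llr[v] for c in range(m) for v in Vc[c]}
        hard = (llr < 0).astype(int)
        for _ in range(max_iter):
            msg_cv = {}
            for c in range(m):
                t = np.array([np.tanh(0.5 * msg_vc[(v, c)]) for v in Vc[c]])
                for i, v in enumerate(Vc[c]):
                    prod = np.clip(np.prod(np.delete(t, i)), -0.999999, 0.999999)
                    msg_cv[(c, v)] = 2.0 * rho[c] * np.arctanh(prod)
            post = llr + np.array([sum(msg_cv[(c, v)] for c in Cv[v]) for v in range(n)])
            hard = (post < 0).astype(int)
            if not ((H @ hard) % 2).any():
                break                                     # valid codeword found
            for c in range(m):
                for v in Vc[c]:
                    msg_vc[(v, c)] = post[v] - msg_cv[(c, v)]
        return hard

Setting rho[c] = 1 for all checks recovers standard BP, which makes the reweighting easy to A/B test.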


Citations
Posted Content
TL;DR: A hybrid algorithm is proposed that sets a signal-to-interference-plus-noise ratio (SINR) threshold at the relay node to determine the type of signal stored there, and it obtains a significant improvement in secrecy rate over previously reported algorithms.
Abstract: In this paper, we investigate opportunistic relay and jammer cooperation schemes in multiple-input multiple-output (MIMO) buffer-aided relay networks. The network consists of one source, an arbitrary number of relay nodes, legitimate users and eavesdroppers, with the constraints of physical-layer security. We propose an algorithm to select one set of relay nodes to enhance the legitimate users' transmission and another set of relay nodes to jam the eavesdroppers. With inter-relay interference (IRI) taken into account, interference cancellation can be implemented to assist the transmission of the legitimate users. IRI can also be used to further increase the harm done by the jamming signal to the eavesdroppers. By exploiting the fact that the jamming signal can be stored at the relay nodes, we also propose a hybrid algorithm that sets a signal-to-interference-plus-noise ratio (SINR) threshold at the node to determine the type of signal stored at the relay node. With this separation, signals with high SINR are delivered to the users as in conventional relay systems, while signals with low SINR are stored as potential jamming signals. Simulation results show that the proposed techniques obtain a significant improvement in secrecy rate over previously reported algorithms.
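
For intuition only, here is a minimal sketch of the SINR-threshold rule described above: buffered signals whose SINR exceeds the threshold are queued for conventional relaying, and the rest are kept as candidate jamming signals. The function name, data layout, and threshold value are illustrative assumptions, not taken from the paper.

    def classify_buffered_signals(signals, sinr_db, threshold_db=10.0):
        """Split buffered relay signals by SINR: high-SINR entries are
        relayed to users, low-SINR entries become potential jammers."""
        relay_queue, jamming_pool = [], []
        for s, gamma in zip(signals, sinr_db):
            (relay_queue if gamma >= threshold_db else jamming_pool).append(s)
        return relay_queue, jamming_pool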

1 citation


Additional excerpts

  • ...schemes [43]–[85]....


Journal ArticleDOI
TL;DR: In this paper, a general robust subband adaptive filtering (GR-SAF) scheme is proposed to minimize the mean square deviation under the random-walk model with individual weight uncertainty.
Abstract: In this paper, we propose a general robust subband adaptive filtering (GR-SAF) scheme against impulsive noise by minimizing the mean square deviation under the random-walk model with individual weight uncertainty. Specifically, by choosing different scaling factors, such as those from the M-estimate and maximum correntropy robust criteria, in the GR-SAF scheme, we can easily obtain different GR-SAF algorithms. Importantly, the proposed GR-SAF algorithm can be reduced to a variable-regularization robust normalized SAF algorithm, thus having a fast convergence rate and low steady-state error. Simulations in the contexts of system identification with impulsive noise and echo cancellation with double-talk have verified that the proposed GR-SAF algorithms outperform their counterparts.
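
To make the robust-weighting mechanism concrete, the sketch below shows a simplified full-band analogue: a normalized LMS update whose error passes through a Huber-type M-estimate scaling factor, so impulsive samples contribute only a bounded correction. This is a hedged illustration of the general idea, not the GR-SAF scheme itself, which operates per subband; the step size and threshold are illustrative.

    import numpy as np

    def robust_nlms_step(w, x, d, mu=0.5, c=1.345, eps=1e-8):
        """One robust NLMS update with an M-estimate (Huber-type) weight."""
        e = d - w @ x                               # a priori error
        scale = 1.0 if abs(e) <= c else c / abs(e)  # bounds outlier influence
        w_new = w + mu * scale * e * x / (x @ x + eps)
        return w_new, e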

1 citation

Book ChapterDOI
01 Jan 2016
TL;DR: An algorithm for decoding low-density parity-check (LDPC) codes is developed that improves performance, reduces latency, and is suitable for wireless communications applications.
Abstract: In this paper we develop an algorithm for decoding low-density parity-check (LDPC) codes that improves performance and reduces latency. This algorithm is called Variable Factor Appearance Probability Min-Sum (VFAP-MS) and is inspired by the Variable Factor Appearance Probability Belief Propagation (VFAP-BP) algorithm. The presented algorithm exploits the existence of short cycles in the code and a strategy for reweighting check nodes, and is suitable for wireless communications applications. Simulation results show that the VFAP-MS algorithm outperforms the standard MS algorithm described in the literature.
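
The check-node side of a reweighted min-sum decoder is compact enough to state directly. The sketch below shows the standard min-sum check-node update with a multiplicative weight rho standing in for the chapter's check-node reweighting strategy; the actual VFAP weight choice is not given in the abstract, so rho here is an assumed parameter.

    import numpy as np

    def minsum_check_update(in_msgs, rho=1.0):
        """Min-sum check-node update with weight rho: each outgoing message
        is the sign product times the minimum magnitude of the other inputs."""
        in_msgs = np.asarray(in_msgs, dtype=float)
        out = np.empty_like(in_msgs)
        for i in range(in_msgs.size):
            others = np.delete(in_msgs, i)
            out[i] = rho * np.prod(np.sign(others)) * np.abs(others).min()
        return out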

1 citation

01 Jan 2014
TL;DR: This book chapter reviews signal detection and parameter estimation techniques for multiuser multiple-antenna wireless systems with a very large number of antennas, known as massive multi-input multi-output (MIMO) systems.
Abstract: This book chapter reviews signal detection and parameter estimation techniques for multiuser multiple-antenna wireless systems with a very large number of antennas, known as massive multi-input multi-output (MIMO) systems. We consider both centralized antenna systems (CAS) and distributed antenna systems (DAS) architectures in which a large number of antenna elements are employed and focus on the uplink of a mobile cellular system. In particular, we focus on receive processing techniques that include signal detection and parameter estimation problems and discuss the specific needs of massive MIMO systems. Simulation results illustrate the performance of detection and estimation algorithms under several scenarios of interest. Key problems are discussed and future trends in massive MIMO systems are pointed out.
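
As one concrete example of the receive processing such surveys cover, a linear MMSE detector for the uplink model y = H s + n is sketched below. It is a standard baseline offered only to fix notation, not a method attributed to this chapter in particular.

    import numpy as np

    def mmse_detect(H, y, sigma2):
        """Linear MMSE detection for y = H s + n with noise variance sigma2."""
        K = H.shape[1]                 # number of user streams
        G = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(K), H.conj().T)
        return G @ y                   # soft symbol estimates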

1 citation


Cites background from "Low-Latency Reweighted Belief Propa..."

  • ...The development of IDD schemes and decoding algorithms that perform message passing with reduced delays [60]–[62] is of great importance in massive MIMO systems....


  • ...Therefore, novel message passing algorithms with smarter strategies to exchange information should be investigated along with their application to IDD schemes [60]–[62]....


Posted Content
TL;DR: In this paper, an iterative robust minimum mean square error (RMMSE) precoder based on generalized loading is developed to mitigate interference in the presence of imperfect channel state information (CSI).
Abstract: We consider the downlink of a cell-free massive multiple-input multiple-output (MIMO) system with single-antenna access points (APs) and single-antenna users. An iterative robust minimum mean-square error (RMMSE) precoder based on generalized loading is developed to mitigate interference in the presence of imperfect channel state information (CSI). An achievable rate analysis is carried out and optimal and uniform power allocation schemes are developed based on the signal-to-interference-plus-noise ratio. An analysis of the computational costs of the proposed RMMSE and existing schemes is also presented. Numerical results show the improvement provided by the proposed RMMSE precoder against linear minimum mean-square error, zero-forcing and conjugate beamforming precoders in the presence of imperfect CSI.
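
A generic stand-in for the precoding step is sketched below: a diagonally loaded MMSE precoder built from the channel estimate, with the loading level alpha playing the role that generalized loading plays in the paper's iterative RMMSE design. The normalization and parameter names are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def loaded_mmse_precoder(H_hat, alpha, total_power):
        """Diagonally loaded MMSE precoder from a K x M channel estimate."""
        K = H_hat.shape[0]             # number of users
        P = H_hat.conj().T @ np.linalg.inv(
            H_hat @ H_hat.conj().T + alpha * np.eye(K))
        return np.sqrt(total_power) * P / np.linalg.norm(P)  # power-normalized
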
References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
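
In the spirit of the simple nonoptimum decoder described above, a textbook bit-flipping decoder for the binary symmetric channel is sketched below. It is a common classroom simplification used to illustrate Gallager-style hard-decision decoding, not Gallager's exact scheme.

    import numpy as np

    def bit_flip_decode(H, y, max_iter=50):
        """Bit-flipping decoding: flip the bits involved in the most
        unsatisfied parity checks until the syndrome is zero."""
        x = y.copy()
        for _ in range(max_iter):
            syndrome = (H @ x) % 2
            if not syndrome.any():
                break                         # all checks satisfied
            unsat = H.T @ syndrome            # unsatisfied-check count per bit
            x[unsat == unsat.max()] ^= 1      # flip the worst offenders
        return x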

11,592 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...I. INTRODUCTION LOW-DENSITY parity-check (LDPC) codes are recognized as a class of linear block codes which can achieve near-Shannon capacity with linear-time encoding and parallelizable decoding algorithms....


Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
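
The bipartite graph at the heart of this construction is straightforward to materialize. The helper below lists the edges of the Tanner graph of a parity-check matrix, a minimal sketch under the usual convention that an edge joins check c and variable v whenever H[c, v] = 1.

    import numpy as np

    def tanner_edges(H):
        """Edges (check c, variable v) of the Tanner graph of H."""
        return list(zip(*np.nonzero(H)))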

3,246 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Finally, Section V concludes the paper....


  • ...The advantages of LDPC codes arise from the sparse (low-density) paritycheck matrices which can be uniquely depicted by graphical representations, referred to as Tanner graphs [3]....


Journal ArticleDOI
TL;DR: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels, showing that performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.

3,032 citations

Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
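
For reference, the fixed-point message update associated with these tree-reweighted bounds takes the following form, written here from the standard statement of the tree-reweighted sum-product conditions, with rho_st denoting the edge appearance probabilities (consult the paper for the precise assumptions):

    m_{ts}(x_s) \;\propto\; \sum_{x_t} \exp\!\left( \theta_t(x_t)
        + \frac{\theta_{st}(x_s, x_t)}{\rho_{st}} \right)
        \cdot \frac{\prod_{u \in \mathcal{N}(t) \setminus \{s\}}
        \left[ m_{ut}(x_t) \right]^{\rho_{ut}}}
        {\left[ m_{st}(x_t) \right]^{1 - \rho_{st}}}

Setting every rho_st = 1 recovers the ordinary sum-product update, which is why reweighted BP decoders can be seen as a one-parameter-per-edge generalization of standard BP.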

498 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Finally, Section V concludes the paper....


  • ...Recently, Wymeersch et al. [5], [6] introduced the uniformly reweighted BP (URW-BP) algorithm which exploits BP’s distributed nature and reduces the factor appearance probability (FAP) in [4] to a constant value....


  • ...Additionally, the BP algorithm is capable of producing the exact inference solutions if the graphical model is acyclic (i.e., a tree), while it is not guaranteed to converge if the graph possesses short cycles, which significantly deteriorate the overall performance [4]....


Journal ArticleDOI
TL;DR: A Viterbi-like algorithm is proposed that selectively avoids small cycle clusters that are isolated from the rest of the graph and yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
Abstract: This letter explains the effect of graph connectivity on error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm is different from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes with very small degradation in capacity-approaching capability.
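
A much cruder connectivity diagnostic than EMD, but one that is easy to compute, is the number of 4-cycles in the Tanner graph. The sketch below counts them from the row-overlap matrix H H^T: every pair of rows sharing t ≥ 2 columns contributes C(t, 2) four-cycles. This is a hedged illustration of cycle counting, not the letter's metric.

    import numpy as np

    def count_4cycles(H):
        """Count 4-cycles in the Tanner graph of a binary matrix H."""
        overlap = H @ H.T
        off = overlap - np.diag(np.diag(overlap))      # zero the diagonal
        return int((off * (off - 1) // 2).sum() // 2)  # unordered row pairs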

401 citations


"Low-Latency Reweighted Belief Propa..." refers background in this paper

  • ...Specifically, check nodes having a large number of short cycles are more likely to form clusters of small cycles, which significantly obstruct the convergence of the BP algorithm within limited iterations [7]....
