
Showing papers on "Sequential decoding published in 2012"


Journal ArticleDOI
TL;DR: Simulation results show that CA-SCL/SCS can provide a significant gain over the turbo codes used in the 3GPP standard with code rate 1/2 and code length 1024 at a block error probability (BLER) of 10⁻⁴.
Abstract: CRC (cyclic redundancy check)-aided decoding schemes are proposed to improve the performance of polar codes. A unified description of successive cancellation decoding and its improved version with list or stack is provided, and the CRC-aided successive cancellation list/stack (CA-SCL/SCS) decoding schemes are proposed. Simulation results in the binary-input additive white Gaussian noise channel (BI-AWGNC) show that CA-SCL/SCS can provide a significant gain of 0.5 dB over the turbo codes used in the 3GPP standard with code rate 1/2 and code length 1024 at a block error probability (BLER) of 10⁻⁴. Moreover, the time complexity of the CA-SCS decoder is much lower than that of the turbo decoder and can be close to that of the successive cancellation (SC) decoder in the high SNR regime.
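The step that distinguishes CA-SCL/SCS from plain list or stack decoding is the final CRC-based selection among the surviving candidate paths. Below is a minimal sketch of that selection step, assuming the list decoder itself is given; the 8-bit CRC polynomial is an illustrative choice, not the one used in the paper.

```python
# Sketch of the CRC selection step in CRC-aided list decoding (CA-SCL).
# Assumes a list decoder already produced candidates sorted by descending
# path metric; the CRC-8 polynomial x^8 + x^2 + x + 1 is illustrative.

def crc_remainder(bits, poly=(1, 0, 0, 0, 0, 0, 1, 1, 1)):
    """Binary polynomial division; returns the 8 CRC remainder bits."""
    buf = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if buf[i]:
            for j, p in enumerate(poly):
                buf[i + j] ^= p
    return buf[len(bits):]

def select_crc_passing(candidates, crc_len=8):
    """Return the most likely candidate whose CRC checks, else the most
    likely candidate overall (each candidate is info bits + CRC bits)."""
    for cand in candidates:
        data, crc = cand[:-crc_len], cand[-crc_len:]
        if crc_remainder(data) == list(crc):
            return cand
    return candidates[0]  # no CRC match: fall back to the best path
```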

722 citations


Journal ArticleDOI
TL;DR: It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.
Abstract: Polar codes are shown to be instances of both generalized concatenated codes and multilevel codes. It is shown that the performance of a polar code can be improved by representing it as a multilevel code and applying the multistage decoding algorithm with maximum likelihood decoding of outer codes. Additional performance improvement is obtained by replacing polar outer codes with other ones with better error correction performance. In some cases this also results in complexity reduction. It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.
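The Gaussian approximation mentioned in the abstract tracks only the mean of each bit-channel's LLR, which is modeled as Gaussian with variance equal to twice its mean. A sketch of the standard GA recursion for ranking polar bit-channels on the BI-AWGN channel follows; the phi approximation is the commonly used one due to Chung et al., and the noise variance is an illustrative assumption rather than a value from the paper.

```python
# Sketch of Gaussian-approximation (GA) density evolution for polar codes
# on the BI-AWGN channel: LLRs are modeled as N(m, 2m), so only the mean
# m evolves through the polar transform. phi() is the usual Chung et al.
# approximation; the design noise variance below is illustrative.

import math

def phi(x):
    if x < 1e-10:
        return 1.0
    if x < 10.0:
        return math.exp(-0.4527 * x ** 0.86 + 0.0218)
    return math.sqrt(math.pi / x) * math.exp(-x / 4) * (1 - 10.0 / (7 * x))

def phi_inv(y):
    """Invert the decreasing function phi on (0, 1) by bisection."""
    lo, hi = 1e-10, 1e3
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def ga_llr_means(n_levels, m0):
    """LLR means of all 2**n_levels bit-channels of a polar code."""
    means = [m0]
    for _ in range(n_levels):
        nxt = []
        for m in means:
            nxt.append(phi_inv(1 - (1 - phi(m)) ** 2))  # 'minus' channel
            nxt.append(2 * m)                           # 'plus' channel
        means = nxt
    return means

# Example: rank the 256 bit-channels of a length-256 code; the indices
# with the largest means would carry the information bits.
sigma2 = 0.5                                # illustrative noise variance
means = ga_llr_means(8, 2.0 / sigma2)
reliable_first = sorted(range(256), key=lambda i: means[i], reverse=True)
```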

664 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble.
Abstract: We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a posteriori (MAP) threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble/code which is good for all channels if we assume that the channel is known at the receiver. The key technical result is a proof that, under belief propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.
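The threshold saturation effect described above can be reproduced numerically from the coupled density evolution recursion; on the BEC it is a one-line scalar recursion per spatial position. The sketch below uses the (l, r, L, w) coupled ensemble with illustrative parameters; it is a toy demonstration, not the paper's proof technique.

```python
# Sketch: density evolution for an (l, r, L, w) spatially coupled LDPC
# ensemble on the BEC. x[i] is the erasure probability at spatial
# position i; positions outside [0, L) are pinned to 0, the boundary
# "seed" that launches the decoding wave. Parameters are illustrative.

def coupled_de(eps, l=3, r=6, L=50, w=3, iters=20000, tol=1e-12):
    x = [eps] * L
    get = lambda i: x[i] if 0 <= i < L else 0.0
    for _ in range(iters):
        new = []
        for i in range(L):
            acc = 0.0
            for j in range(w):
                avg = sum(get(i + j - k) for k in range(w)) / w
                acc += 1.0 - (1.0 - avg) ** (r - 1)
            new.append(eps * (acc / w) ** (l - 1))
        done = max(abs(a - b) for a, b in zip(new, x)) < tol
        x = new
        if done:
            break
    return max(x)

# The uncoupled (3,6) BP threshold is ~0.4294 and the MAP threshold is
# ~0.4881; the coupled ensemble still decodes in between the two:
print(coupled_de(0.47))  # ~0, i.e., decoding succeeds at eps = 0.47
```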

321 citations


Book ChapterDOI
15 Apr 2012
TL;DR: The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^(0.0556n); this bound was improved to 2^(0.0537n) by May, Meurer and Thomae, and it is improved further here to 2^(0.0494n).
Abstract: Decoding random linear codes is a well studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^(0.0556n). Using representations this bound was improved to 2^(0.0537n) by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time 2^(0.0494n).
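For context, the algorithms in this line of work all refine the same basic information set decoding loop. The sketch below shows the simplest variant, Prange's algorithm, assuming a random (with high probability full-rank) parity-check matrix over GF(2); the collision and representation techniques that achieve the exponents above are not reproduced.

```python
# Sketch of Prange's information-set decoding, the common ancestor of
# the Stern-type algorithms discussed above: permute columns, eliminate,
# and hope every error position landed outside the information set.

import numpy as np

def gauss_gf2(M):
    """Row-reduce M over GF(2); returns (reduced M, pivot columns)."""
    M = M.copy() % 2
    pivots, row = [], 0
    for col in range(M.shape[1]):
        hits = np.nonzero(M[row:, col])[0]
        if len(hits) == 0:
            continue
        M[[row, row + hits[0]]] = M[[row + hits[0], row]]
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] ^= M[row]
        pivots.append(col)
        row += 1
        if row == M.shape[0]:
            break
    return M, pivots

def prange_isd(H, s, w, max_tries=10000):
    """Search for e with H e = s (mod 2) and weight(e) <= w."""
    rng = np.random.default_rng(0)
    m, n = H.shape
    for _ in range(max_tries):
        perm = rng.permutation(n)
        R, piv = gauss_gf2(np.column_stack([H[:, perm], s]))
        if len(piv) < m or piv[-1] == n:   # singular or inconsistent system
            continue
        e_perm = np.zeros(n, dtype=np.uint8)
        e_perm[piv] = R[:m, n]             # free variables set to zero
        if e_perm.sum() <= w:
            e = np.zeros(n, dtype=np.uint8)
            e[perm] = e_perm
            return e
    return None
```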

295 citations


Journal ArticleDOI
TL;DR: The structure of LDPC convolutional code ensembles is suitable for obtaining performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and for windowed decoding, but the same structure imposes limitations on the performance over erasure channels with memory.
Abstract: We consider a windowed decoding scheme for LDPC convolutional codes that is based on the belief-propagation (BP) algorithm. We discuss the advantages of this decoding scheme and identify certain characteristics of LDPC convolutional code ensembles that exhibit good performance with the windowed decoder. We will consider the performance of these ensembles and codes over erasure channels with and without memory. We show that the structure of LDPC convolutional code ensembles is suitable to obtain performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and windowed decoding. However, the same structure imposes limitations on the performance over erasure channels with memory.

231 citations


Proceedings ArticleDOI
04 Mar 2012
TL;DR: Layered decoding of LDPC convolutional codes designed for high-speed optical transmission systems was realized for the first time.
Abstract: We successfully realized layered decoding for LDPC convolutional codes designed for application in high-speed optical transmission systems. A relatively short code with 20% redundancy was FPGA-emulated with a Q-factor of 5.7 dB at a BER of 10⁻¹⁵.

150 citations


Journal ArticleDOI
TL;DR: A list successive cancellation decoding algorithm to boost the performance of polar codes is proposed and simulation results of LSC decoding in the binary erasure channel and binary-input additive white Gaussian noise channel show a significant performance improvement.
Abstract: A list successive cancellation (LSC) decoding algorithm to boost the performance of polar codes is proposed. Compared with traditional successive cancellation decoding algorithms, LSC simultaneously produces at most L locally best candidates during the decoding process to reduce the chance of missing the correct codeword. The complexity of the proposed algorithm is O(LN log N), where N and L are the code length and the list size, respectively. Simulation results of LSC decoding in the binary erasure channel and binary-input additive white Gaussian noise channel show a significant performance improvement.
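Both plain and list successive cancellation are built from two LLR updates applied recursively on a butterfly graph; the list decoder simply maintains up to L decoding paths through them. A minimal sketch, using the common min-sum approximation for the check-side update:

```python
# The two LLR updates at the core of successive cancellation decoding.
# f uses the min-sum approximation of the exact boxplus rule
# 2 * atanh(tanh(a/2) * tanh(b/2)).

import math

def f(a, b):
    """Check-side update: LLR for the 'upper' bit before it is decided."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):
    """Variable-side update for the 'lower' bit, given the decision u."""
    return b + (1 - 2 * u) * a

# For a length-2 polar code with channel LLRs L1, L2: decide u1 from
# f(L1, L2), then u2 from g(L1, L2, u1); longer codes recurse.
```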

149 citations


Journal ArticleDOI
TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
Abstract: In this paper we propose a novel message passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm is called variable factor appearance probability belief propagation (VFAP-BP) algorithm and is suitable for wireless communications applications with low-latency and short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.

92 citations


Posted Content
TL;DR: In this paper, a simple proof of threshold saturation that applies to a broad class of coupled scalar recursions is presented, which is based on potential functions and was motivated mainly by the ideas of Takeuchi et al.
Abstract: Low-density parity-check (LDPC) convolutional codes (or spatially-coupled codes) have been shown to approach capacity on the binary erasure channel (BEC) and binary-input memoryless symmetric channels. The mechanism behind this spectacular performance is the threshold saturation phenomenon, which is characterized by the belief-propagation threshold of the spatially-coupled ensemble increasing to an intrinsic noise threshold defined by the uncoupled system. In this paper, we present a simple proof of threshold saturation that applies to a broad class of coupled scalar recursions. The conditions of the theorem are verified for the density-evolution (DE) equations of irregular LDPC codes on the BEC, a class of generalized LDPC codes, and the joint iterative decoding of LDPC codes on intersymbol-interference channels with erasure noise. Our approach is based on potential functions and was motivated mainly by the ideas of Takeuchi et al. The resulting proof is surprisingly simple when compared to previous methods.
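The uncoupled scalar recursion underlying such a proof is easiest to state for the BEC. The sketch below iterates density evolution for a regular LDPC ensemble and brackets its BP threshold by bisection; the parameters are illustrative, and the potential-function machinery of the paper is not reproduced.

```python
# Sketch: the uncoupled scalar recursion (density evolution for a
# (dv, dc)-regular LDPC ensemble on the BEC) whose coupled version the
# potential-function argument covers, with the BP threshold located by
# bisection. Parameters are illustrative.

def de_converges(eps, dv=3, dc=6, iters=5000, tol=1e-10):
    x = eps
    for _ in range(iters):
        x_new = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x < 1e-6

def bp_threshold(steps=40):
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid) else (lo, mid)
    return 0.5 * (lo + hi)

print(bp_threshold())  # ~0.4294 for the (3,6) ensemble
```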

84 citations


Patent
Bin Li, Hui Shen
24 Dec 2012
TL;DR: In this paper, a reliable subset is extracted from an information bit set of the Polar codes, where the reliability of the information bits in the reliable subset is higher than that of the other information bits.
Abstract: Embodiments of the present invention provide a method and a device for decoding Polar codes. A reliable subset is extracted from an information bit set of the Polar codes, where reliability of information bits in the reliable subset is higher than reliability of other information bits. The method includes: obtaining a probability value or an LLR of a current decoding bit of the Polar codes; when the current decoding bit belongs to the reliable subset, performing judgment according to the probability value or the LLR of the current decoding bit to determine a decoding value of the current decoding bit, keeping the number of decoding paths of the Polar codes unchanged, and modifying probability values of all the decoding paths by using the probability value or the LLR of the current decoding bit. The probability values of the decoding paths are obtained by calculation according to the probability value or the LLR of the decoding bit of the Polar codes. In the embodiments of the present invention, the information bits in the reliable subset are judged without splitting the decoding path, thereby reducing overall decoding complexity.

83 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: An encoding and decoding scheme is constructed that achieves the Chong-Motani-Garg inner bound for a two sender two receiver interference channel with classical input and quantum output; the inner bounds are proved using a non-commutative union bound to analyse the decoding error probability and a geometric notion of approximate intersection of two conditionally typical subspaces.
Abstract: We construct an encoding and decoding scheme achieving the Chong-Motani-Garg inner bound [1] for a two sender two receiver interference channel with classical input and quantum output. This automatically gives a similar inner bound for sending classical information through an interference channel with quantum inputs and outputs without entanglement assistance. Our result matches the best known inner bound for the interference channel in the classical setting. Achieving the Chong-Motani-Garg inner bound, which is known to be equivalent to the Han-Kobayashi inner bound [3], answers an open question raised recently by Fawzi et al. [4]. Our encoding strategy is the standard random encoding strategy. Our decoding strategy is a sequential strategy where a receiver loops through all candidate messages trying to project the received state onto a ‘typical’ subspace for the candidate message under consideration, stopping if the projection succeeds for a message, which is then declared as the guess of the receiver for the sent message. On the way to our main result, we show that random encoding and sequential decoding strategies suffice to achieve rates up to the mutual information for a single sender single receiver channel, and the standard inner bound for a two sender single receiver multiple access channel, for channels with classical input and quantum output. Besides conceptual simplicity, a sequential decoding strategy is space efficient, and may have additional efficiency advantages in some settings. We prove our inner bounds using two new technical tools — a non-commutative union bound to analyse the decoding error probability, and a geometric notion of approximate intersection of two conditionally typical subspaces.

Journal ArticleDOI
TL;DR: A new and effective algorithm is proposed to generate parity inequalities derived from certain additional redundant parity check (RPC) constraints; these cuts can eliminate pseudocodewords produced by the LP decoder, often significantly improving the decoder error-rate performance.
Abstract: Linear programming (LP) decoding approximates maximum-likelihood (ML) decoding of a linear block code by relaxing the equivalent ML integer programming (IP) problem into a more easily solved LP problem. The LP problem is defined by a set of box constraints together with a set of linear inequalities called “parity inequalities” that are derived from the constraints represented by the rows of a parity-check matrix of the code and can be added iteratively and adaptively. In this paper, we first derive a new necessary condition and a new sufficient condition for a violated parity inequality constraint, or “cut,” at a point in the unit hypercube. Then, we propose a new and effective algorithm to generate parity inequalities derived from certain additional redundant parity check (RPC) constraints that can eliminate pseudocodewords produced by the LP decoder, often significantly improving the decoder error-rate performance. The cut-generating algorithm is based upon a specific transformation of an initial parity-check matrix of the linear block code. We also design two variations of the proposed decoder to make it more efficient when it is combined with the new cut-generating algorithm. Simulation results for several low-density parity-check (LDPC) codes demonstrate that the proposed decoding algorithms significantly narrow the performance gap between LP decoding and ML decoding.
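A rough sketch of the adaptive LP decoding loop that such cut-generating algorithms plug into is given below, with a simple heuristic cut search over the rows of H; the paper's RPC transformation and its new cut conditions are not reproduced, and scipy's linprog stands in for a production LP solver.

```python
# Sketch of adaptive LP decoding: solve a small LP, search each check for
# a violated parity inequality ("cut") at the current point, add it, and
# re-solve. The cut search here is a simple heuristic; the paper's cuts
# from redundant parity checks (RPCs) are not reproduced.

import numpy as np
from scipy.optimize import linprog

def find_cut(row, x, tol=1e-6):
    """Heuristic search for a violated parity inequality of one check:
    sum_{i in S} x_i - sum_{i in N\\S} x_i <= |S| - 1, with |S| odd."""
    N = list(np.nonzero(row)[0])
    S = [i for i in N if x[i] > 0.5]
    if len(S) % 2 == 0:                    # make |S| odd by toggling the
        amb = min(N, key=lambda i: abs(x[i] - 0.5))  # most ambiguous bit
        S = [i for i in S if i != amb] if amb in S else S + [amb]
    lhs = sum(x[i] for i in S) - sum(x[i] for i in N if i not in S)
    return S, N, lhs > len(S) - 1 + tol

def adaptive_lp_decode(H, llr, max_rounds=100):
    """llr[i] = log P(y_i | 0) / P(y_i | 1); Feldman's LP objective."""
    n = H.shape[1]
    A, b = [], []
    for _ in range(max_rounds):
        res = linprog(llr, A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[(0, 1)] * n, method="highs")
        x, added = res.x, False
        for row in H:
            S, N, violated = find_cut(row, x)
            if violated:
                a = np.zeros(n)
                a[S] = 1.0
                a[[i for i in N if i not in S]] = -1.0
                A.append(a)
                b.append(len(S) - 1)
                added = True
        if not added:
            break
    return x  # integral -> the ML codeword; fractional -> declare failure
```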

Proceedings ArticleDOI
01 Sep 2012
TL;DR: Besides offering excellent performance near the capacity limit, the LDA lattice construction is conceptually simpler than previously proposed lattices based on multiple nested binary codes, and LDA decoding is less complex than real-valued message passing.
Abstract: We describe a new family of integer lattices built from construction A and non-binary LDPC codes. An iterative message-passing algorithm suitable for decoding in high dimensions is proposed. This family of lattices, referred to as LDA lattices, follows the recent transition of Euclidean codes from their classical theory to their modern approach as announced by the pioneering work of Loeliger (1997), Erez, Litsyn, and Zamir (2004–2005). Besides their excellent performance near the capacity limit, LDA lattice construction is conceptually simpler than previously proposed lattices based on multiple nested binary codes and LDA decoding is less complex than real-valued message passing.
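Construction A itself is simple to state: the lattice consists of every integer vector that reduces modulo q to a codeword of the underlying q-ary code, Lambda = C + qZ^n. A toy sketch with an illustrative ternary code (far smaller than the non-binary LDPC codes targeted by LDA lattices):

```python
# Sketch of Construction A: Lambda = {z in Z^n : z mod q in C}.
# The toy ternary generator matrix below is illustrative only.

import itertools
import numpy as np

q = 3
G = np.array([[1, 0, 1],
              [0, 1, 2]])            # toy length-3, dimension-2 code

codewords = {tuple(int(v) for v in np.dot(u, G) % q)
             for u in itertools.product(range(q), repeat=G.shape[0])}

def lattice_points(radius=4):
    """Enumerate Construction-A lattice points inside a small box."""
    pts = []
    for z in itertools.product(range(-radius, radius + 1), repeat=G.shape[1]):
        if tuple(v % q for v in z) in codewords:
            pts.append(z)
    return pts

# Decoding runs the construction in reverse: reduce the received vector
# mod q, decode the q-ary code (for LDA lattices, by message passing over
# a non-binary LDPC graph), then restore the integer part.
```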

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper answers the question in the affirmative by giving a method to polarize all discrete memoryless channels and sources; the method yields codes that retain the low encoding and decoding complexity of binary polar codes.
Abstract: An open problem in polarization theory is whether all memoryless channels and sources with composite (that is, non-prime) alphabet sizes can be polarized with deterministic, Arikan-like methods. This paper answers the question in the affirmative by giving a method to polarize all discrete memoryless channels and sources. The method yields codes that retain the low encoding and decoding complexity of binary polar codes.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This work demonstrates how a sequential decoding approach can achieve the Holevo limit in both the optical communication and “quantum reading” settings.
Abstract: An important practical open question has been to design explicit, structured optical receivers that achieve the Holevo limit in the contexts of optical communication and “quantum reading.” The Holevo limit is an achievable rate that is higher than the Shannon limit of any known optical receiver. We demonstrate how a sequential decoding approach can achieve the Holevo limit for both of these settings. A crucial part of our scheme for both settings is a non-destructive “vacuum-or-not” measurement that projects an n-symbol modulated codeword onto the n-fold vacuum state or its orthogonal complement, such that the post-measurement state is either the n-fold vacuum or has the vacuum removed from the support of the n symbols' joint quantum state. The sequential decoder for optical communication requires the additional ability to perform multimode optical phase-space displacements — realizable using a beamsplitter and a laser, while the sequential decoder for quantum reading also requires the ability to perform phase-shifting (realizable using a phase plate) and online squeezing (a phase-sensitive amplifier).

Journal ArticleDOI
TL;DR: It is shown how this strong minimum distance condition of MDP convolutional codes helps to solve error situations that maximum distance separable (MDS) block codes fail to solve.
Abstract: In this paper the decoding capabilities of convolutional codes over the erasure channel are studied. Of special interest are maximum distance profile (MDP) convolutional codes, which are codes with a maximum possible column distance increase. It is shown how this strong minimum distance condition of MDP convolutional codes helps to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, two subclasses of MDP codes are defined: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest condition. It is shown that complete-MDP convolutional codes perform better in many cases than comparable MDS block codes of the same rate over the erasure channel.
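The core erasure-recovery step behind such decoders is linear-algebraic: the erased symbols are obtained by solving the parity-check equations restricted to the erased columns. A generic sketch for a binary block code follows; the paper's forward and backward sliding-window algorithms for convolutional codes are not reproduced.

```python
# Sketch of erasure recovery for a binary linear code: from H c = 0,
# the erased part e satisfies H_E e = H_K y_K (mod 2), which we solve
# by Gaussian elimination. Recovery succeeds iff the erased columns of
# H are linearly independent.

import numpy as np

def recover_erasures(H, y, erased):
    """H: parity-check matrix (0/1 ints); y: received word with arbitrary
    values at the erased positions; erased: list of erased indices."""
    known = [i for i in range(H.shape[1]) if i not in erased]
    s = H[:, known] @ y[known] % 2          # syndrome of the known part
    A = np.column_stack([H[:, erased] % 2, s])
    row = 0
    for col in range(len(erased)):          # GF(2) Gaussian elimination
        hits = np.nonzero(A[row:, col])[0]
        if len(hits) == 0:
            return None                     # dependent columns: failure
        A[[row, row + hits[0]]] = A[[row + hits[0], row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2
        row += 1
    out = y.copy()
    out[erased] = A[:len(erased), -1]
    return out
```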

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper analyzes a class of spatially-coupled generalized LDPC codes and observes that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding.
Abstract: A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we analyze a class of spatially-coupled generalized LDPC codes and observe that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding. These codes can be seen as generalized product codes and are closely related to braided block codes.

Journal ArticleDOI
TL;DR: The high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel and the q-ary symmetric channel is derived and leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio.
Abstract: This paper considers the performance of (j, k)-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the q-ary symmetric channel (q-SC). For the BEC and a fixed j, the density evolution (DE) threshold of iterative decoding scales like Θ(k^(-1)) and the critical stopping ratio scales like Θ(k^(-j/(j-2))). For the q-SC and a fixed j, the DE threshold of verification decoding depends on the details of the decoder and scales like Θ(k^(-1)) for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly sparse signals. A DE-based approach is used to analyze the CS systems with randomized-reconstruction guarantees. This leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set-based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: A sphere decoding algorithm for polar codes is proposed that achieves optimal (ML) performance while exploiting two properties of polar coding to reduce decoding complexity.
Abstract: Polar codes are known as the first provable code construction to achieve the Shannon capacity for arbitrary symmetric binary-input channels. Although efficient sub-optimal decoders with reduced complexity exist for polar codes, the complexity of the optimum ML decoder increases exponentially with the code length, making it infeasible for practical implementations of polar coding. In this paper, our goal is to develop an efficient ML decoder with reduced complexity. For this purpose, a sphere decoding algorithm for polar codes is proposed that attains optimal performance. Additionally, the proposed technique exploits two properties of polar coding to reduce decoding complexity. In this way, the complexity of optimal decoding becomes only cubic rather than exponential.

Proceedings ArticleDOI
08 Oct 2012
TL;DR: It has been observed that LDPC convolutional codes perform better than the block codes from which they are derived even at low latency, as well as in terms of their complexity as a function of Eb/N0.
Abstract: We compare LDPC block and LDPC convolutional codes with respect to their decoding performance under low decoding latencies. Protograph-based regular LDPC codes are considered with rather small lifting factors. LDPC block and convolutional codes are decoded using belief propagation. For LDPC convolutional codes, a sliding window decoder with different window sizes is applied to continuously decode the input symbols. We show the required Eb/N0 to achieve a bit error rate of 10⁻⁵ for the LDPC block and LDPC convolutional codes at decoding latencies of up to approximately 550 information bits. It has been observed that LDPC convolutional codes perform better than the block codes from which they are derived, even at low latency. We demonstrate the tradeoff between complexity and performance in terms of lifting factor and window size for a fixed value of latency. Furthermore, the two codes are also compared in terms of their complexity as a function of Eb/N0. Convolutional codes with Viterbi decoding are also compared with the two above-mentioned codes.

Journal ArticleDOI
TL;DR: For the recently introduced threaded cyclic-division-algebra-based codes, the sphere decoding complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain; to date, this exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum likelihood performance that vanishes in the high SNR limit.
Abstract: In the setting of quasi-static multiple-input multiple-output channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full-rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise, and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic-division-algebra-based codes—the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff—the SD complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain. To date, the SD complexity exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum likelihood performance that vanishes in the high SNR limit.

Patent
20 Jul 2012
TL;DR: In this article, a Reed-Solomon (RS) error-correction system is described, where a decision-codeword corresponds to an inner code and an RS code is the outer code.
Abstract: Systems and methods are provided for implementing various aspects of a Reed-Solomon (RS) error-correction system. A detector can provide a decision-codeword from a channel and can also provide soft-information for the decision-codeword. If the decision-codeword corresponds to an inner code and an RS code is the outer code, a soft-information map can process the soft-information for the decision-codeword to produce soft-information for a RS decision-codeword. A RS decoder can employ the Berlekamp-Massey algorithm (BMA), list decoding, and a Chien search, and can include a pipelined architecture. A threshold-based control circuit can be used to predict whether list decoding will be needed and can suspend the list decoding operation if it predicts that list decoding is not needed.

Journal ArticleDOI
TL;DR: Simulation results show that the newly proposed STBC can well address the rate-performance-complexity tradeoff of the MIMO systems.
Abstract: A partial interference cancellation (PIC) group decoding based space-time block code (STBC) design criterion was recently proposed by Guo and Xia, where the tradeoff between decoding complexity and code rate is addressed while full diversity is achieved. In this paper, two designs of STBC are proposed for any number of transmit antennas that can obtain full diversity when a PIC group decoding (with a particular grouping scheme) is applied at the receiver. With the PIC group decoding and an appropriate grouping scheme for the decoding, the proposed STBC are shown to obtain the same diversity gain as ML decoding, but with a low decoding complexity. The first proposed STBC is designed with multiple diagonal layers; it obtains full diversity for the two-layer design with the PIC group decoding, and its rate is up to 2 symbols per channel use. With PIC-SIC group decoding, the first proposed STBC can obtain full diversity for any number of layers and the rate can be full. The second proposed STBC can obtain full diversity and a rate up to 9/4 with the PIC group decoding. Some code design examples are given and simulation results show that the newly proposed STBC can well address the rate-performance-complexity tradeoff of MIMO systems.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: Fundamental information-theoretic bounds are provided on the required circuit wiring complexity and power consumption for encoding and decoding of error-correcting codes and for bounded transmit-power schemes, showing that there is a fundamental tradeoff between the transmit and encoding/decoding power.
Abstract: We provide fundamental information-theoretic bounds on the required circuit wiring complexity and power consumption for encoding and decoding of error-correcting codes. These bounds hold for all codes and all encoding and decoding algorithms implemented within the paradigm of our VLSI model. This model essentially views computation on a 2-D VLSI circuit as a computation on a network of connected nodes. The bounds are derived based on analyzing information flow in the circuit. They are then used to show that there is a fundamental tradeoff between the transmit and encoding/decoding power, and that the total (transmit + encoding + decoding) power must diverge to infinity at least as fast as the cube root of log(1/Pe), where Pe is the average block-error probability. On the other hand, for bounded transmit-power schemes, the total power must diverge to infinity at least as fast as the square root of log(1/Pe) due to the burden of encoding/decoding.

Journal ArticleDOI
TL;DR: Enhancement schemes for iterative hard reliability-based majority-logic decoding (IHRB-MLGD) are proposed that achieve significant coding gain with small hardware overhead, along with low-complexity partial-parallel decoder architectures.
Abstract: Non-binary low-density parity-check (NB-LDPC) codes can achieve better error-correcting performance than their binary counterparts at the cost of higher decoding complexity when the codeword length is moderate. The recently developed iterative reliability-based majority-logic NB-LDPC decoding has better performance-complexity tradeoffs than previous algorithms. This paper first proposes enhancement schemes to the iterative hard reliability-based majority-logic decoding (IHRB-MLGD). Compared to the IHRB algorithm, our enhanced (E-)IHRB algorithm can achieve significant coding gain with small hardware overhead. Then low-complexity partial-parallel NB-LDPC decoder architectures are developed based on these two algorithms. Many existing NB-LDPC code construction methods lead to quasi-cyclic or cyclic codes. Both types of codes are considered in our design. Moreover, novel schemes are developed to keep a small proportion of messages in order to reduce the memory requirement without causing noticeable performance loss. In addition, a shift-message structure is proposed by using memories concatenated with variable node units to enable efficient partial-parallel decoding for cyclic NB-LDPC codes. Compared to previous designs based on the Min-max decoding algorithm, our proposed decoders have at least tens of times lower complexity with moderate coding gain loss.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: It is shown that one source of the error floors observed in the literature may be the message quantization rule used in the iterative decoder implementation, and a new quantization method is proposed to overcome the limitations of standard quantization rules.
Abstract: The error floor phenomenon observed with LDPC codes and their graph-based, iterative, message-passing (MP) decoders is commonly attributed to the existence of error-prone substructures in a Tanner graph representation of the code. Many approaches have been proposed to lower the error floor by designing new LDPC codes with fewer such substructures or by modifying the decoding algorithm. In this paper, we show that one source of the error floors observed in the literature may be the message quantization rule used in the iterative decoder implementation. We then propose a new quantization method to overcome the limitations of standard quantization rules. Performance simulation results for two LDPC codes commonly found to have high error floors when used with the fixed-point min-sum decoder and its variants demonstrate the validity of our findings and the effectiveness of the proposed quantization algorithm.
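The kind of quantization at issue is easy to illustrate: a uniform saturating quantizer applied to min-sum messages, sketched below with illustrative bit-width and step size. Saturation caps message magnitudes, so confidence stops growing with iterations, which is one way a quantization rule can keep trapping-set errors from being resolved.

```python
# Sketch of a uniform saturating quantizer feeding a min-sum check-node
# update, the fixed-point setting in which the paper traces one source
# of error floors. Bit-width and step size are illustrative.

def quantize(llr, bits=5, step=0.5):
    """Uniform quantizer that saturates at the +/- extreme levels."""
    qmax = (2 ** (bits - 1) - 1) * step
    return max(-qmax, min(qmax, step * round(llr / step)))

def check_update_minsum(incoming):
    """Min-sum check update: per edge, sign product and minimum magnitude
    over the *other* incoming messages, then quantization."""
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            sign = -sign if m < 0 else sign
        out.append(quantize(sign * min(abs(m) for m in others)))
    return out

print(check_update_minsum([9.7, 8.9, -0.4]))
# -> [-0.5, -0.5, 7.5]: the 8.9-magnitude message saturates at 7.5
```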

Journal ArticleDOI
TL;DR: The results indicate that sequential decoding substantially extends (beyond what is possible with Viterbi decoding) the range of latency values over which convolutional codes prove advantageous compared to LDPC block codes.
Abstract: This paper compares the performance of convolutional codes to that of LDPC block codes with identical decoding latencies. The decoding algorithms considered are the Viterbi algorithm and stack sequential decoding for convolutional codes and iterative message passing for LDPC codes. It is shown that, at very low latencies, convolutional codes with Viterbi decoding offer the best performance, whereas for high latencies LDPC codes dominate - and sequential decoding of convolutional codes offers the best performance over a range of intermediate latency values. The "crossover latencies" - i.e., the latency values at which the best code/decoding selection changes - are identified for a variety of code rates (1/2, 2/3, 3/4, and 5/6) and target bit/frame error rates. For sequential decoding, both blockwise and continuous resynchronization procedures are used to allow the decoder to recover the correct path. The results indicate that sequential decoding substantially extends (beyond what is possible with Viterbi decoding) the range of latency values over which convolutional codes prove advantageous compared to LDPC block codes.
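Below is a minimal sketch of the stack sequential decoder studied here, using the Fano metric over a BSC and the classic rate-1/2, memory-2 convolutional code with octal generators (7, 5); the code, crossover probability, and test message are illustrative assumptions.

```python
# Sketch of stack sequential decoding with the Fano metric: keep a stack
# (here a heap) of partial trellis paths ordered by metric, always extend
# the best one, and stop when a full-length path reaches the top.

import heapq
import math

G = (0b111, 0b101)                   # generators (7, 5) octal, memory 2

def branch(state, u):
    """Encoder output bits for input u from a 2-bit state; next state."""
    reg = (u << 2) | state
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return out, reg >> 1

def stack_decode(received, p=0.05, rate=0.5):
    """received: one (bit, bit) pair per trellis step from a BSC(p)."""
    good = math.log2(2 * (1 - p)) - rate   # Fano per-bit metric: agree
    bad = math.log2(2 * p) - rate          # Fano per-bit metric: disagree
    heap = [(0.0, 0, 0, ())]               # (-metric, depth, state, bits)
    while True:
        neg, depth, state, bits = heapq.heappop(heap)
        if depth == len(received):
            return list(bits)               # best full path reached the top
        for u in (0, 1):
            out, nstate = branch(state, u)
            m = sum(good if o == r else bad
                    for o, r in zip(out, received[depth]))
            heapq.heappush(heap, (neg - m, depth + 1, nstate, bits + (u,)))

# Encode an illustrative message (two zero tail bits), flip one channel
# bit, and decode.
msg, state, tx = [1, 0, 1, 1, 0, 0], 0, []
for u in msg:
    out, state = branch(state, u)
    tx.append(out)
rx = [tuple(b ^ (i == 2 and j == 0) for j, b in enumerate(pair))
      for i, pair in enumerate(tx)]
print(stack_decode(rx))  # -> [1, 0, 1, 1, 0, 0] despite the flipped bit
```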

Proceedings ArticleDOI
01 Oct 2012
TL;DR: It is shown that when the message repetitions in the extended Gallager-B algorithm are scheduled optimally, a small complexity overhead with respect to a reliable decoder provides large gains in fault tolerance.
Abstract: We consider the decoding of regular low density parity-check codes with a Gallager-B message-passing algorithm built exclusively from faulty computing devices. We propose an extension of the Gallager-B algorithm where messages can be repeated to provide increased fault tolerance, and use EXIT functions to derive its average performance. Thresholds are obtained both for the channel quality and the faultiness of the decoder. We argue that decoding complexity is central to the analysis of faulty decoding and compare the complexity of decoding with a faulty decoder instead of a reliable decoder, for a fixed channel condition and residual error rate. Finally, we show that when the message repetitions in the extended Gallager-B algorithm are scheduled optimally, a small complexity overhead with respect to a reliable decoder provides large gains in fault tolerance.
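The fault-free Gallager-B update that the paper extends with message repetition can be stated in a few lines. In the sketch below, the parity-check matrix, flipping threshold, and message initialization are illustrative.

```python
# Sketch of one (fault-free) Gallager-B iteration: a check node sends the
# XOR of the other incoming bits; a variable node forwards its channel
# bit, flipped only when at least `threshold` of the other incoming check
# messages disagree with it. Graph and threshold are illustrative.

def gallager_b_iteration(H, channel_bits, v2c, threshold=2):
    """H: parity-check matrix as 0/1 lists; v2c: dict mapping each
    (variable, check) edge to its current bit message."""
    checks = [[v for v, h in enumerate(row) if h] for row in H]
    variables = [[c for c, row in enumerate(H) if row[v]]
                 for v in range(len(H[0]))]
    c2v = {}                                  # check-to-variable: XOR rule
    for c, nbrs in enumerate(checks):
        for v in nbrs:
            c2v[(c, v)] = sum(v2c[(u, c)] for u in nbrs if u != v) % 2
    new_v2c = {}                              # variable-to-check: majority
    for v, nbrs in enumerate(variables):
        for c in nbrs:
            disagree = sum(1 for d in nbrs
                           if d != c and c2v[(d, v)] != channel_bits[v])
            new_v2c[(v, c)] = channel_bits[v] ^ (disagree >= threshold)
    return new_v2c

# Initialize v2c[(v, c)] = channel_bits[v] on every edge, iterate, then
# decide each bit by majority over its channel bit and incoming messages.
```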

Journal ArticleDOI
TL;DR: In this paper, a review and categorization of decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels is presented, including linear, integer and nonlinear programming, network flows, notions of duality as well as matroid and polyhedral theory.
Abstract: Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality as well as matroid and polyhedral theory. This paper reviews and categorizes decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels.

Journal ArticleDOI
TL;DR: A novel understanding of LP decoding is obtained, which allows us to establish a 0.05 fraction of correctable errors for rate-½ codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously noted correctable bit error rate for LP decoding.
Abstract: Linear programming (LP) decoding for low-density parity-check codes (and related domains such as compressed sensing) has received increased attention over recent years because of its practical performance, which comes close to that of iterative decoding algorithms, and its amenability to finite-blocklength analysis. Several works starting with the work of Feldman showed how to analyze LP decoding using properties of expander graphs. This line of analysis works for only low error rates, about a couple of orders of magnitude lower than the empirically observed performance. It is possible to do better for the case of random noise, as shown by Daskalakis and Koetter and Vontobel. Building on work of Koetter and Vontobel, we obtain a novel understanding of LP decoding, which allows us to establish a 0.05 fraction of correctable errors for rate-½ codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously noted correctable bit error rate for LP decoding. Our analysis exploits an explicit connection between LP decoding and message-passing algorithms and, unlike other techniques, directly works with the primal linear program. An interesting byproduct of our method is a notion of a “locally optimal” solution that we show to always be globally optimal (i.e., it is the nearest codeword). Such a solution can in fact be found in near-linear time by a “reweighted” version of the min-sum algorithm, obviating the need for LP. Our analysis implies, in particular, that this reweighted version of the min-sum decoder corrects up to a 0.05 fraction of errors.