Showing papers on "List decoding" published in 2012


Journal ArticleDOI
TL;DR: Simulation results show that CA-SCL/SCS can provide a significant gain over the turbo codes used in the 3GPP standard with code rate 1/2 and code length 1024 at a block error probability (BLER) of 10^-4.
Abstract: CRC (cyclic redundancy check)-aided decoding schemes are proposed to improve the performance of polar codes. A unified description of successive cancellation decoding and its improved version with list or stack is provided, and the CRC-aided successive cancellation list/stack (CA-SCL/SCS) decoding schemes are proposed. Simulation results in the binary-input additive white Gaussian noise channel (BI-AWGNC) show that CA-SCL/SCS can provide a significant gain of 0.5 dB over the turbo codes used in the 3GPP standard with code rate 1/2 and code length 1024 at a block error probability (BLER) of 10^-4. Moreover, the time complexity of the CA-SCS decoder is much lower than that of the turbo decoder and can be close to that of the successive cancellation (SC) decoder in the high-SNR regime.
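
For concreteness, the final CRC-aided selection step is simple enough to sketch in Python. The snippet below is a minimal illustration under our own assumptions (path metrics where larger means more likely, and a CRC-32 appended as the last four payload bytes; the paper's CRC and decoder internals differ):

```python
import zlib

def ca_scl_select(candidates):
    """Pick the output from the survivor list of a CRC-aided SCL decoder.

    `candidates` holds (path_metric, payload) pairs, payload as bytes whose
    last 4 bytes carry a CRC-32 checksum (an assumption of this sketch).
    The most likely path that passes the CRC wins; if none passes, fall
    back to the most likely path overall."""
    passing = [c for c in candidates
               if zlib.crc32(c[1][:-4]) == int.from_bytes(c[1][-4:], "big")]
    pool = passing if passing else candidates
    return max(pool, key=lambda c: c[0])
```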

722 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive SC-List decoder for polar codes with CRC was proposed, which iteratively increases the list size until at least one surviving path passes the CRC.
Abstract: In this letter, we propose an adaptive SC (successive cancellation)-List decoder for polar codes with CRC. This adaptive SC-List decoder iteratively increases the list size until at least one surviving path passes the CRC. Simulations show that the adaptive SC-List decoder provides a significant complexity reduction. We also demonstrate that the (2048, 1024) polar code with 24-bit CRC, decoded by the proposed adaptive SC-List decoder with a very large maximum list size, can achieve a frame error rate FER ≤ 10^-3 at Eb/No = 1.1 dB, which is about 0.25 dB from the information-theoretic limit at this block length.
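
The adaptive control loop is equally compact. Here is a hedged sketch in which `scl_decode(llrs, L)` and `crc_ok(word)` are placeholders standing in for a real polar SCL decoder and CRC check (the names and the doubling schedule are our assumptions; the letter's point is that the loop usually exits at small L, keeping average complexity low):

```python
def adaptive_scl_decode(llrs, scl_decode, crc_ok, l_max=1024):
    """Adaptive SC-List decoding loop (sketch).

    Grows the list size until at least one survivor passes the CRC or
    the maximum list size l_max is reached."""
    L = 1
    while L <= l_max:
        for word in scl_decode(llrs, L):
            if crc_ok(word):
                return word
        L *= 2  # no survivor passed the CRC: retry with a larger list
    return None  # declare a frame error
```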

316 citations


Book ChapterDOI
15 Apr 2012
TL;DR: The ball-collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^{0.0556n}; this bound was improved to 2^{0.0537n} by May, Meurer and Thomae, and this paper improves it further to 2^{0.0494n}.
Abstract: Decoding random linear codes is a well studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball-collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^{0.0556n}. Using representations this bound was improved to 2^{0.0537n} by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time 2^{0.0494n}.
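
To see what these exponents measure, it helps to recall the simplest member of the family, Prange's information set decoding, of which Stern's algorithm and the improvements above are refinements. The sketch below is a teaching illustration of that ancestor over GF(2) (our code, far slower than the 2^{0.0494n} algorithm of the paper):

```python
import random
import numpy as np

def prange_isd(H, s, w, trials=10000, seed=1):
    """Search for e in GF(2)^n with H e^T = s and weight(e) <= w.

    Each trial guesses a random information set, Gauss-Jordan reduces
    the first m permuted columns of H to the identity, and bets that
    the whole error weight sits on those m positions."""
    rng = random.Random(seed)
    m, n = H.shape
    for _ in range(trials):
        perm = rng.sample(range(n), n)      # random column permutation
        A = H[:, perm] % 2
        b = s.copy() % 2
        ok = True
        for col in range(m):                # reduce first m columns to I
            piv = next((r for r in range(col, m) if A[r, col]), None)
            if piv is None:
                ok = False                  # singular guess, try again
                break
            A[[col, piv]] = A[[piv, col]]
            b[[col, piv]] = b[[piv, col]]
            for r in range(m):
                if r != col and A[r, col]:
                    A[r] ^= A[col]
                    b[r] ^= b[col]
        if ok and b.sum() <= w:             # Prange's bet pays off
            e = np.zeros(n, dtype=int)
            e[[perm[i] for i in range(m)]] = b
            return e
    return None
```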

295 citations


Journal ArticleDOI
TL;DR: A list successive cancellation decoding algorithm to boost the performance of polar codes is proposed and simulation results of LSC decoding in the binary erasure channel and binary-input additive white Gaussian noise channel show a significant performance improvement.
Abstract: A list successive cancellation (LSC) decoding algorithm to boost the performance of polar codes is proposed. Compared with traditional successive cancellation decoding algorithms, LSC simultaneously produces at most L locally best candidates during the decoding process to reduce the chance of missing the correct codeword. The complexity of the proposed algorithm is O(LN log N), where N and L are the code length and the list size, respectively. Simulation results of LSC decoding in the binary erasure channel and binary-input additive white Gaussian noise channel show a significant performance improvement.
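
The split-and-prune step at the heart of the algorithm can be sketched as follows. This is a generic illustration rather than the paper's implementation: it shares one LLR across all paths and uses the common min-approximation of the path metric, whereas a real decoder computes one LLR per path:

```python
import heapq

def extend_and_prune(paths, llr, L):
    """One information-bit step of list successive cancellation.

    Each path is a (penalty, bits) tuple, lower penalty = more likely.
    Every path splits on the hypotheses u = 0 and u = 1; a hypothesis
    contradicting the sign of the bit LLR pays |llr|.  Keeping only the
    L locally best candidates per bit is what yields the O(LN log N)
    complexity quoted above."""
    children = []
    for penalty, bits in paths:
        for u in (0, 1):
            mismatch = abs(llr) if (llr >= 0) == (u == 1) else 0.0
            children.append((penalty + mismatch, bits + (u,)))
    return heapq.nsmallest(L, children)
```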

149 citations


Journal ArticleDOI
TL;DR: Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.
Abstract: In this paper we propose a novel message passing algorithm which exploits the existence of short cycles to obtain performance gains by reweighting the factor graph. The proposed decoding algorithm is called variable factor appearance probability belief propagation (VFAP-BP) algorithm and is suitable for wireless communications applications with low-latency and short blocks. Simulation results show that the VFAP-BP algorithm outperforms the standard BP algorithm, and requires a significantly smaller number of iterations when decoding either general or commercial LDPC codes.

92 citations


Journal Article
TL;DR: In this article, the authors give a linear algebraic list decoding algorithm that can correct a fraction of errors approaching the code distance, while pinning down the candidate messages to a well-structured affine space of dimension a constant factor smaller than the code dimension.
Abstract: We consider Reed-Solomon (RS) codes whose evaluation points belong to a subfield, and give a linear-algebraic list decoding algorithm that can correct a fraction of errors approaching the code distance, while pinning down the candidate messages to a well-structured affine space of dimension a constant factor smaller than the code dimension. By pre-coding the message polynomials into a subspace-evasive set, we get a Monte Carlo construction of a subcode of Reed-Solomon codes that can be list decoded from a fraction (1-R-e) of errors in polynomial time (for any fixed e > 0) with a list size of O(1/e). Our methods extend to algebraic-geometric (AG) codes, leading to a similar claim over constant-sized alphabets. This matches the parameters of recent results based on folded variants of RS and AG codes, but our construction here gives subcodes of Reed-Solomon and AG codes themselves (albeit with restrictions on the evaluation points). Further, the underlying algebraic idea also extends nicely to Gabidulin's construction of rank-metric codes based on linearized polynomials. This gives the first construction of positive-rate rank-metric codes list decodable beyond half the distance, and in fact gives codes of rate R list decodable up to the optimal (1-R-e) fraction of rank errors. A similar claim holds for the closely related subspace codes studied by Koetter and Kschischang. We introduce a new notion called subspace designs as another way to pre-code messages and prune the subspace of candidate solutions. Using these, we also get a deterministic construction of a polynomial-time list-decodable subcode of RS codes. By using a cascade of several subspace designs, we extend our approach to AG codes, which gives the first deterministic construction of an algebraic code family of rate R with efficient list decoding from a 1-R-e fraction of errors over an alphabet of constant size (that depends only on e). The list size bound is almost a constant (governed by log*(block length)), and the code can be constructed in quasi-polynomial time.

91 citations


Patent
Bin Li, Hui Shen
24 Dec 2012
TL;DR: In this paper, a reliable subset is extracted from the information bit set of the Polar codes, where the reliability of the information bits in the reliable subset is higher than that of the other information bits.
Abstract: Embodiments of the present invention provide a method and a device for decoding Polar codes. A reliable subset is extracted from an information bit set of the Polar codes, where reliability of information bits in the reliable subset is higher than reliability of other information bits. The method includes: obtaining a probability value or an LLR of a current decoding bit of the Polar codes; when the current decoding bit belongs to the reliable subset, performing judgment according to the probability value or the LLR of the current decoding bit to determine a decoding value of the current decoding bit, keeping the number of decoding paths of the Polar codes unchanged, and modifying probability values of all the decoding paths by using the probability value or the LLR of the current decoding bit. The probability values of the decoding paths are obtained by calculation according to the probability value or the LLR of the decoding bit of the Polar codes. In the embodiments of the present invention, the information bits in the reliable subset are judged without splitting the decoding path, thereby reducing overall decoding complexity.
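
Our reading of that judgment rule can be sketched as below, with `llr_of(bits)` a placeholder for the per-path SC LLR computation (the function names and the min-approximated metric are our assumptions, not the patent's text):

```python
def decode_bit(paths, llr_of, bit_is_reliable, L):
    """One bit step of SCL decoding with a reliable-subset shortcut.

    A bit in the reliable subset is judged directly from its LLR,
    without splitting the decoding paths; any other bit splits every
    path in two and keeps the L best.  Reliable bits therefore add no
    list growth, which is the claimed complexity saving."""
    out = []
    for penalty, bits in paths:
        llr = llr_of(bits)
        if bit_is_reliable:
            u = 0 if llr >= 0 else 1        # hard decision, no split
            out.append((penalty, bits + (u,)))
        else:
            for u in (0, 1):
                mismatch = abs(llr) if (llr >= 0) == (u == 1) else 0.0
                out.append((penalty + mismatch, bits + (u,)))
    out.sort()
    return out[:L]
```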

83 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: An encoding and decoding scheme achieving the Chong-Motani-Garg inner bound for a two-sender two-receiver interference channel with classical input and quantum output; the inner bounds are proved using a non-commutative union bound to analyse the decoding error probability, and a geometric notion of approximate intersection of two conditionally typical subspaces.
Abstract: We construct an encoding and decoding scheme achieving the Chong-Motani-Garg inner bound [1] for a two-sender two-receiver interference channel with classical input and quantum output. This automatically gives a similar inner bound for sending classical information through an interference channel with quantum inputs and outputs without entanglement assistance. Our result matches the best known inner bound for the interference channel in the classical setting. Achieving the Chong-Motani-Garg inner bound, which is known to be equivalent to the Han-Kobayashi inner bound [3], answers an open question raised recently by Fawzi et al. [4]. Our encoding strategy is the standard random encoding strategy. Our decoding strategy is a sequential strategy where a receiver loops through all candidate messages trying to project the received state onto a ‘typical’ subspace for the candidate message under consideration, stopping if the projection succeeds for a message, which is then declared as the guess of the receiver for the sent message. On the way to our main result, we show that random encoding and sequential decoding strategies suffice to achieve rates up to the mutual information for a single-sender single-receiver channel, and the standard inner bound for a two-sender single-receiver multiple access channel, for channels with classical input and quantum output. Besides conceptual simplicity, a sequential decoding strategy is space efficient, and may have additional efficiency advantages in some settings. We prove our inner bounds using two new technical tools: a non-commutative union bound to analyse the decoding error probability, and a geometric notion of approximate intersection of two conditionally typical subspaces.

77 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: Besides their excellent performance near the capacity limit, LDA lattice construction is conceptually simpler than previously proposed lattices based on multiple nested binary codes and LDA decoding is less complex than real-valued message passing.
Abstract: We describe a new family of integer lattices built from construction A and non-binary LDPC codes. An iterative message-passing algorithm suitable for decoding in high dimensions is proposed. This family of lattices, referred to as LDA lattices, follows the recent transition of Euclidean codes from their classical theory to their modern approach as announced by the pioneering work of Loeliger (1997), Erez, Litsyn, and Zamir (2004–2005). Besides their excellent performance near the capacity limit, LDA lattice construction is conceptually simpler than previously proposed lattices based on multiple nested binary codes and LDA decoding is less complex than real-valued message passing.

70 citations


Journal ArticleDOI
TL;DR: A new connection is made between computer science techniques used to study low-degree polynomials and coding theory questions to resolve the weight distribution and list-decoding size of Reed-Muller codes for all distances.
Abstract: The weight distribution and list-decoding size of Reed-Muller codes are studied in this work. Given a weight parameter, we are interested in bounding the number of Reed-Muller codewords with weight up to the given parameter; and given a received word and a distance parameter, we are interested in bounding the size of the list of Reed-Muller codewords that are within that distance from the received word. Obtaining tight bounds for the weight distribution of Reed-Muller codes has been a long-standing open problem in coding theory, dating back to 1976. In this work, we make a new connection between computer science techniques used to study low-degree polynomials and these coding theory questions. This allows us to resolve the weight distribution and list-decoding size of Reed-Muller codes for all distances. Previous results could only handle bounded distances: Azumi, Kasami, and Tokura gave bounds on the weight distribution which hold up to 2.5 times the minimum distance of the code; and Gopalan, Klivans, and Zuckerman gave bounds on the list-decoding size which hold up to the Johnson bound.

69 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper answers the question in the affirmative by giving a method to polarize all discrete memoryless channels and sources and yields codes that retain the low encoding and decoding complexity of binary polar codes.
Abstract: An open problem in polarization theory is whether all memoryless channels and sources with composite (that is, non-prime) alphabet sizes can be polarized with deterministic, Arikan-like methods. This paper answers the question in the affirmative by giving a method to polarize all discrete memoryless channels and sources. The method yields codes that retain the low encoding and decoding complexity of binary polar codes.

Journal ArticleDOI
TL;DR: To provide dimming control of on-off keying, the proposed coding scheme yields a codeword with the codeword weight adapted to the dimming requirement, unlike existing coding schemes which have limited support for this feature.
Abstract: This letter presents an error correction scheme for dimmable visible light communication systems. To provide dimming control of on-off keying, the proposed coding scheme yields a codeword with the codeword weight adapted to the dimming requirement. It also allows an arbitrary value of the dimming requirement unlike existing coding schemes which have a limited support of this feature. To this end, turbo codes are employed together with puncturing and scrambling techniques to match the Hamming weight of codewords with the desired dimming rate. We demonstrate the decoding performance of the proposed coding scheme under iterative decoding and compare it with other existing error correction schemes. The simulation results prove that the proposed scheme has superior decoding performance, arbitrary dimming rate support, and diverse code rate options.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper analyzes a class of spatially-coupled generalized LDPC codes and observes that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding.
Abstract: A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we analyze a class of spatially-coupled generalized LDPC codes and observe that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding. These codes can be seen as generalized product codes and are closely related to braided block codes.

Journal ArticleDOI
TL;DR: The high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel and the q-ary symmetric channel is derived and leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio.
Abstract: This paper considers the performance of (j, k)-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the q-ary symmetric channel (q-SC). For the BEC and a fixed j, the density evolution (DE) threshold of iterative decoding scales like Θ(k^{-1}) and the critical stopping ratio scales like Θ(k^{-j/(j-2)}). For the q-SC and a fixed j, the DE threshold of verification decoding depends on the details of the decoder and scales like Θ(k^{-1}) for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly sparse signals. A DE-based approach is used to analyze the CS systems with randomized-reconstruction guarantees. This leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set-based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.
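
The BEC density-evolution threshold whose scaling is derived here can be computed numerically from the textbook recursion; the sketch below (our code, not the paper's analysis) finds it by bisection for a (j,k)-regular ensemble:

```python
def de_threshold_bec(j, k, iters=2000, tol=1e-9):
    """Bisection for the BEC density-evolution threshold of a
    (j,k)-regular LDPC ensemble, using the standard recursion
    x <- eps * (1 - (1 - x)**(k-1))**(j-1) for the erased-message
    fraction x under message passing."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (k - 1)) ** (j - 1)
            if x < 1e-12:
                return True
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

# de_threshold_bec(3, 6) evaluates to roughly 0.4294, and the threshold
# decays like 1/k as k grows with j fixed, i.e. the Θ(k^{-1}) law above.
```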

Proceedings ArticleDOI
01 Jul 2012
TL;DR: In this paper, a sphere decoding algorithm for polar codes is proposed with optimal performance; the proposed technique exploits two properties of polar coding to reduce decoding complexity.
Abstract: Polar codes are known as the first provable code construction to achieve the Shannon capacity for arbitrary symmetric binary-input channels. Although efficient sub-optimal decoders with reduced complexity exist for polar codes, the complexity of the optimum ML decoder increases exponentially, making it infeasible for practical implementations of polar coding. In this paper, we develop an efficient ML decoder with reduced complexity: a sphere decoding algorithm for polar codes that attains optimal performance. The proposed technique additionally exploits two properties of polar coding to reduce decoding complexity. In this way, the reduced complexity of optimal decoding is only cubic, not exponential.

Journal Article
TL;DR: This work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces, an interesting problem in pseudorandomness in its own right, could lead to explicit codes with better list-decoding guarantees.
Abstract: Folded Reed-Solomon (RS) codes are an explicit family of codes that achieve the optimal tradeoff between rate and list error-correction capability: specifically, for any e > 0, Guruswami and Rudra presented an n^{O(1/e)} time algorithm to list decode appropriate folded RS codes of rate R from a fraction 1-R-e of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices for a statement of the above form. Here, we give a simple linear-algebra-based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list-decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in quadratic time. We also consider a closely related family of codes, called (order m) derivative codes and defined over fields of large characteristic, which consist of the evaluations of f as well as its first m-1 formal derivatives at N distinct field elements. We show how our linear-algebraic methods for folded RS codes can be used to show that derivative codes can also achieve the above optimal tradeoff. The theoretical drawback of our analysis for folded RS codes and derivative codes is that both the decoding complexity and proven worst-case list-size bound are n^{Ω(1/e)}. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list-size bound of O(1/e^2), which is quite close to the existential O(1/e) bound (however, the decoding complexity remains n^{Ω(1/e)}). Our work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces, an interesting problem in pseudorandomness in its own right, could lead to explicit codes with better list-decoding guarantees.
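
The two linear systems can be written out explicitly. The display below uses our own notation (a folding generator γ and interpolation parameter s) and sketches the standard setup rather than quoting the paper:

```latex
% Interpolation step: fit a nonzero affine polynomial
\[
  Q(X, Y_1, \dots, Y_s) \;=\; A_0(X) + A_1(X)\,Y_1 + \cdots + A_s(X)\,Y_s
\]
% that vanishes on every window of s consecutive received symbols,
\[
  Q\!\left(\gamma^{j},\, y_j,\, y_{j+1}, \dots, y_{j+s-1}\right) = 0
  \qquad \text{for each position } j.
\]
% Any message polynomial f agreeing with the received word on enough
% positions then satisfies the second linear system
\[
  A_0(X) + A_1(X)\,f(X) + A_2(X)\,f(\gamma X) + \cdots
         + A_s(X)\,f\!\left(\gamma^{s-1} X\right) \;=\; 0,
\]
% whose solution set is the small affine subspace of candidate messages
% that remains to be pruned (e.g., by subspace-evasive pre-coding).
```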

Journal ArticleDOI
TL;DR: For the recently introduced threaded cyclic-division-algebra-based codes, the SD complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain; this exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum-likelihood performance that vanishes in the high-SNR limit.
Abstract: In the setting of quasi-static multiple-input multiple-output channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full-rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise, and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic-division-algebra-based codes—the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff—the SD complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain. To date, the SD complexity exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum likelihood performance that vanishes in the high SNR limit.

Patent
20 Jul 2012
TL;DR: In this article, a Reed-Solomon (RS) error-correction system is described, where a decision-codeword corresponds to an inner code and an RS code is the outer code.
Abstract: Systems and methods are provided for implementing various aspects of a Reed-Solomon (RS) error-correction system. A detector can provide a decision-codeword from a channel and can also provide soft-information for the decision-codeword. If the decision-codeword corresponds to an inner code and an RS code is the outer code, a soft-information map can process the soft-information for the decision-codeword to produce soft-information for an RS decision-codeword. An RS decoder can employ the Berlekamp-Massey algorithm (BMA), list decoding, and a Chien search, and can include a pipelined architecture. A threshold-based control circuit can be used to predict whether list decoding will be needed and can suspend the list decoding operation if it predicts that list decoding is not needed.

Book ChapterDOI
02 Dec 2012
TL;DR: In this article, a coding-theoretic viewpoint was brought to bear on the problem of noisy RSA key recovery, and a new algorithm was proposed to solve the motivating cold boot problem.
Abstract: Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were extended by Henecka, May and Meurer (Crypto 2010) to produce an algorithm that works when all the key bits are subject to error. In this paper, we bring a coding-theoretic viewpoint to bear on the problem of noisy RSA key recovery. This viewpoint allows us to cast the previous work as part of a more general framework. In turn, this enables us to explain why the previous algorithms do not solve the motivating cold boot problem, and to design a new algorithm that does (and more). In addition, we are able to use concepts and tools from coding theory --- channel capacity, list decoding algorithms, and random coding techniques --- to derive bounds on the performance of the previous and our new algorithm.

Journal ArticleDOI
TL;DR: Numerical results illustrate that the designed LDPC codes achieve near-optimum performance (very close to the Singleton bound, at least down to a codeword error rate of 10^-8) with affordable decoding complexity.
Abstract: This paper investigates efficient maximum-likelihood (ML) decoding of low-density parity-check (LDPC) codes over erasure channels. A set of algorithms, referred to as pivoting algorithms, is developed. The aim is to limit the average number of pivots (or reference variables) from which all the other erased symbols are recovered iteratively. The suggested algorithms exhibit different trade-offs between complexity of the pivoting phase and average number of pivots. Moreover, a systematic procedure to design LDPC code ensembles for efficient ML decoding is proposed. Numerical results illustrate that the designed LDPC codes achieve near-optimum performance (very close to the Singleton bound, at least down to a codeword error rate of 10^-8) with affordable decoding complexity. For one of the presented codes and algorithms, a software implementation has been developed which is capable of providing data rates above 1.5 Gbps on a commercial computing platform.
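
On the BEC, ML decoding amounts to solving a linear system over GF(2) for the erased positions. The baseline sketch below (our code, with no pivoting optimisation at all) states the problem that the paper's pivoting algorithms solve with far fewer reference variables:

```python
import numpy as np

def ml_erasure_decode(H, y, erased):
    """Solve H_E x_E = H_K x_K (mod 2) for the erased positions.

    `H` is the parity-check matrix, `y` the received bits (entries at
    the indices in `erased` are ignored).  Plain Gauss-Jordan over
    GF(2); raises if the erasure pattern is not ML-recoverable."""
    n = len(y)
    known = [i for i in range(n) if i not in set(erased)]
    A = H[:, erased] % 2
    b = (H[:, known] @ np.array([y[i] for i in known])) % 2
    rows, cols = A.shape
    row = 0
    for col in range(cols):
        piv = next((r for r in range(row, rows) if A[r, col]), None)
        if piv is None:
            raise ValueError("erasure pattern is not ML-recoverable")
        A[[row, piv]] = A[[piv, row]]
        b[[row, piv]] = b[[piv, row]]
        for r in range(rows):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        row += 1
    return b[:cols]  # recovered erased values, in the order of `erased`
```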

Journal ArticleDOI
TL;DR: A simple simultaneous nonunique decoding rule is shown to achieve this optimal rate region regardless of the relative strengths of signal, interference, and noise, implying that the Han-Kobayashi bound cannot be improved merely by using the optimal maximum likelihood decoder.
Abstract: The optimal rate region for interference networks is characterized when encoding is restricted to random code ensembles with superposition coding and time sharing. A simple simultaneous nonunique decoding rule, under which each receiver decodes for the intended message as well as the interfering messages, is shown to achieve this optimal rate region regardless of the relative strengths of signal, interference, and noise. This result implies that the Han-Kobayashi bound, the best known inner bound on the capacity region of the two-user-pair interference channel, cannot be improved merely by using the optimal maximum likelihood decoder.

Journal ArticleDOI
TL;DR: Simulation results show that the newly proposed STBC can well address the rate-performance-complexity tradeoff of the MIMO systems.
Abstract: A partial interference cancellation (PIC) group decoding based space-time block code (STBC) design criterion was recently proposed by Guo and Xia, in which the tradeoff between decoding complexity and code rate is addressed while full diversity is achieved. In this paper, two designs of STBC are proposed for any number of transmit antennas that can obtain full diversity when PIC group decoding (with a particular grouping scheme) is applied at the receiver. With PIC group decoding and an appropriate grouping scheme for the decoding, the proposed STBC are shown to obtain the same diversity gain as with ML decoding, but with low decoding complexity. The first proposed STBC is designed with multiple diagonal layers; it can obtain full diversity for the two-layer design with PIC group decoding, and the rate is up to 2 symbols per channel use. With PIC-SIC group decoding, the first proposed STBC can obtain full diversity for any number of layers and the rate can be full. The second proposed STBC can obtain full diversity and a rate up to 9/4 with PIC group decoding. Some code design examples are given, and simulation results show that the newly proposed STBC can well address the rate-performance-complexity tradeoff of MIMO systems.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: Fundamental information-theoretic bounds are provided on the required circuit wiring complexity and power consumption for encoding and decoding of error-correcting codes and for bounded transmit-power schemes, showing that there is a fundamental tradeoff between the transmit and encoding/decoding power.
Abstract: We provide fundamental information-theoretic bounds on the required circuit wiring complexity and power consumption for encoding and decoding of error-correcting codes. These bounds hold for all codes and all encoding and decoding algorithms implemented within the paradigm of our VLSI model. This model essentially views computation on a 2-D VLSI circuit as a computation on a network of connected nodes. The bounds are derived based on analyzing information flow in the circuit. They are then used to show that there is a fundamental tradeoff between the transmit and encoding/decoding power, and that the total (transmit + encoding + decoding) power must diverge to infinity at least as fast as the cube root of log(1/P_e), where P_e is the average block-error probability. On the other hand, for bounded transmit-power schemes, the total power must diverge to infinity at least as fast as the square root of log(1/P_e) due to the burden of encoding/decoding.

Journal ArticleDOI
TL;DR: In this article, an iterative hard reliability-based majority-logic decoding (IHRB-MLGD) algorithm was proposed to achieve significant coding gain with small hardware overhead.
Abstract: Non-binary low-density parity-check (NB-LDPC) codes can achieve better error-correcting performance than their binary counterparts at the cost of higher decoding complexity when the codeword length is moderate. The recently developed iterative reliability-based majority-logic NB-LDPC decoding has better performance-complexity tradeoffs than previous algorithms. This paper first proposes enhancement schemes to the iterative hard reliability-based majority-logic decoding (IHRB-MLGD). Compared to the IHRB algorithm, our enhanced (E-)IHRB algorithm can achieve significant coding gain with small hardware overhead. Then low-complexity partial-parallel NB-LDPC decoder architectures are developed based on these two algorithms. Many existing NB-LDPC code construction methods lead to quasi-cyclic or cyclic codes. Both types of codes are considered in our design. Moreover, novel schemes are developed to keep a small proportion of messages in order to reduce the memory requirement without causing noticeable performance loss. In addition, a shift-message structure is proposed by using memories concatenated with variable node units to enable efficient partial-parallel decoding for cyclic NB-LDPC codes. Compared to previous designs based on the Min-max decoding algorithm, our proposed decoders have at least tens of times lower complexity with moderate coding gain loss.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: It is shown that one source of the error floors observed in the literature may be the message quantization rule used in the iterative decoder implementation, and a new quantization method is proposed to overcome the limitations of standard quantization rules.
Abstract: The error floor phenomenon observed with LDPC codes and their graph-based, iterative, message-passing (MP) decoders is commonly attributed to the existence of error-prone substructures in a Tanner graph representation of the code. Many approaches have been proposed to lower the error floor by designing new LDPC codes with fewer such substructures or by modifying the decoding algorithm. In this paper, we show that one source of the error floors observed in the literature may be the message quantization rule used in the iterative decoder implementation. We then propose a new quantization method to overcome the limitations of standard quantization rules. Performance simulation results for two LDPC codes commonly found to have high error floors when used with the fixed-point min-sum decoder and its variants demonstrate the validity of our findings and the effectiveness of the proposed quantization algorithm.
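
For concreteness, the kind of standard uniform, saturating quantization rule at issue looks like the sketch below (the step size and bit width are illustrative assumptions); the paper's observation is that clipping at the saturation limits can itself induce error floors:

```python
def quantize(llr, step=0.5, bits=6):
    """Uniform saturating quantizer for message-passing decoder LLRs.

    Maps llr to the nearest multiple of `step` representable in a
    symmetric `bits`-wide fixed-point range, clipping at the limits."""
    hi = 2 ** (bits - 1) - 1
    q = max(-hi, min(hi, round(llr / step)))
    return q * step
```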

Proceedings ArticleDOI
01 Oct 2012
TL;DR: In this paper, it was shown that simultaneous non-unique decoding is rate-optimal for the general K-sender, L-receiver discrete memoryless interference channel when encoding is restricted to randomly generated codebooks, superposition coding, and time sharing.
Abstract: It is shown that simultaneous nonunique decoding is rate-optimal for the general K-sender, L-receiver discrete memoryless interference channel when encoding is restricted to randomly generated codebooks, superposition coding, and time sharing. This result implies that the Han-Kobayashi inner bound for the two-user-pair interference channel cannot be improved simply by using a better decoder such as the maximum likelihood decoder. It also generalizes and extends previous results by Baccelli, El Gamal, and Tse on Gaussian interference channels with point-to-point Gaussian random codebooks and shows that the Cover-van der Meulen inner bound with no common auxiliary random variable on the capacity region of the broadcast channel can be improved to include the superposition coding inner bound simply by using simultaneous nonunique decoding. The key to proving the main result is to show that after a maximal set of messages has been recovered, the remaining signal at each receiver is distributed essentially independently and identically.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: A folded version of Gabidulin codes analogous to the folded Reed-Solomon codes of Guruswami and Rudra is introduced along with a list-decoding algorithm for such codes that achieves the information theoretic bound on the decoding radius of a rank-metric code.
Abstract: Subspace codes and rank-metric codes can be used to correct errors and erasures in networks with linear network coding. Both types of codes have been extensively studied in the past five years. Subspace codes were introduced by Koetter and Kschischang to correct errors and erasures in networks where the topology is unknown (the non-coherent case). In this model, the codewords are vector subspaces of a fixed ambient space; thus codes for this model are collections of such subspaces. In a previous work, we developed a family of subspace codes, based upon the Koetter-Kschischang construction, which are efficiently list decodable. Using these codes, we achieved a better decoding radius than Koetter-Kschischang codes at low rates. Herein, we introduce a new family of subspace codes based upon a different approach which leads to a linear-algebraic list-decoding algorithm. The resulting error-correction radius can be expressed as follows: for any integer s, our list decoder using (s+1)-variate interpolation polynomials guarantees successful recovery of the message subspace provided the normalized dimension of errors is at most s(1 − sR). The same list-decoding algorithm can be used to correct erasures as well as errors. The size of the output list is at most Q^{s−1}, where Q is the size of the field from which message symbols are chosen. Rank-metric codes are suitable for error correction in the case where the network topology and the underlying network code are known (the coherent case). Gabidulin codes are a well-known class of algebraic rank-metric codes that meet the Singleton bound on the minimum rank distance of a code. In this paper, we introduce a folded version of Gabidulin codes analogous to the folded Reed-Solomon codes of Guruswami and Rudra, along with a list-decoding algorithm for such codes. Our list-decoding algorithm makes it possible to recover the message provided that the normalized rank of the error is at most 1 − R − ∊, for any ∊ > 0. Notably, this achieves the information-theoretic bound on the decoding radius of a rank-metric code.

Journal ArticleDOI
TL;DR: The results indicate that sequential decoding substantially extends (beyond what is possible with Viterbi decoding) the range of latency values over which convolutional codes prove advantageous compared to LDPC block codes.
Abstract: This paper compares the performance of convolutional codes to that of LDPC block codes with identical decoding latencies. The decoding algorithms considered are the Viterbi algorithm and stack sequential decoding for convolutional codes and iterative message passing for LDPC codes. It is shown that, at very low latencies, convolutional codes with Viterbi decoding offer the best performance, whereas for high latencies LDPC codes dominate - and sequential decoding of convolutional codes offers the best performance over a range of intermediate latency values. The "crossover latencies" - i.e., the latency values at which the best code/decoding selection changes - are identified for a variety of code rates (1/2, 2/3, 3/4, and 5/6) and target bit/frame error rates. For sequential decoding, both blockwise and continuous resynchronization procedures are used to allow the decoder to recover the correct path. The results indicate that sequential decoding substantially extends (beyond what is possible with Viterbi decoding) the range of latency values over which convolutional codes prove advantageous compared to LDPC block codes.

Journal Article
TL;DR: In this paper, it was shown that a random linear code over F_q is, with probability arbitrarily close to 1, list decodable at radius 1 - 1/q - e with list size L = O(1/e^2) and rate R = Ω_q(e^2/log^3(1/e)), matching the known bounds up to the polylogarithmic factor in 1/e and constant factors depending on q; the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2).
Abstract: We prove that a random linear code over F_q, with probability arbitrarily close to 1, is list decodable at radius 1 - 1/q - e with list size L = O(1/e^2) and rate R = Ω_q(e^2/log^3(1/e)). Up to the polylogarithmic factor in 1/e and constant factors depending on q, this matches the lower bound L = Ω_q(1/e^2) for the list size and the upper bound R = O_q(e^2) for the rate. Previously only existence (and not abundance) of such codes was known for the special case q = 2 (Guruswami, Hastad, Sudan and Zuckerman, 2002). In order to obtain our result, we employ a relaxed version of the well-known Johnson bound on list decoding that translates average Hamming distance between codewords into list-decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature. Finally, we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of k-sparse signals of length N. Specifically, we improve the number of samples from O(k log(N) log^2(k)(log k + log log N)) to O(k log(N) log^3(k)). The proof involves bounding the expected supremum of a related Gaussian process by an improved analysis of the metric defined by the process. This improvement is crucial for our application in list decoding.

Proceedings ArticleDOI
19 May 2012
TL;DR: A new construction of algebraic codes which are efficiently list decodable from a fraction 1-R-ε of adversarial errors where R is the rate of the code, for any desired positive constant ε, matching the existential bound for random codes up to constant factors.
Abstract: We give a new construction of algebraic codes which are efficiently list decodable from a fraction 1-R-e of adversarial errors, where R is the rate of the code, for any desired positive constant e. The worst-case list size output by the algorithm is O(1/e), matching the existential bound for random codes up to constant factors. Further, the alphabet size of the codes is a constant depending only on e: it can be made exp(Õ(1/e^2)), which is not much worse than the non-constructive exp(1/e) bound of random codes. The code construction is Monte Carlo and has the claimed list decoding property with high probability. Once the code is (efficiently) sampled, the encoding/decoding algorithms are deterministic with a running time O_e(N^c) for an absolute constant c, where N is the code's block length. Our construction is based on a careful combination of a linear-algebraic approach to list decoding folded codes from towers of function fields, with a special form of subspace-evasive sets. Instantiating this with the explicit "asymptotically good" Garcia-Stichtenoth (GS for short) tower of function fields yields the above parameters. To illustrate the method in a simpler setting, we also present a construction based on Hermitian function fields, which offers similar guarantees with a list size and alphabet size polylogarithmic in the block length N. In comparison, algebraic codes achieving the optimal trade-off between list decodability and rate based on folded Reed-Solomon codes have a decoding complexity of N^{Ω(1/e)}, an alphabet size of N^{Ω(1/e^2)}, and a list size of O(1/e^2) (even after combination with subspace-evasive sets). Thus we get an improvement over the previous best bounds in all three aspects simultaneously, and come quite close to the existential random-coding bounds. Along the way, we shed light on how to use automorphisms of certain function fields to enable list decoding of the folded version of the associated algebraic-geometric codes.