Showing papers on "List decoding published in 2015"


Journal ArticleDOI
TL;DR: Simulations show that the resulting performance is very close to that of maximum-likelihood decoding, even for moderate values of L, and it is shown that such a genie can be easily implemented using simple CRC precoding.
Abstract: We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, $L$ decoding paths are considered concurrently at each decoding stage, where $L$ is an integer parameter. At the end of the decoding process, the most likely among the $L$ paths is selected as the single codeword at the decoder output. Simulations show that the resulting performance is very close to that of maximum-likelihood decoding, even for moderate values of $L$ . Alternatively, if a genie is allowed to pick the transmitted codeword from the list, the results are comparable with the performance of current state-of-the-art LDPC codes. We show that such a genie can be easily implemented using simple CRC precoding. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths for each information bit, and then uses a pruning procedure to discard all but the $L$ most likely paths. However, straightforward implementation of this algorithm requires $\Omega (L n^{2})$ time, which is in stark contrast with the $O(n \log n)$ complexity of the original successive-cancellation decoder. In this paper, we utilize the structure of polar codes along with certain algorithmic transformations in order to overcome this problem: we devise an efficient, numerically stable, implementation of the proposed list decoder that takes only $O(L n \log n)$ time and $O(L n)$ space.
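
The list-management and CRC-selection logic described above can be sketched in a few lines. The Python fragment below is illustrative only: the SC path-metric update is replaced by a placeholder penalty function and the CRC is a toy 4-bit code, so this is not the authors' $O(Ln\log n)$ implementation.

```python
# Illustrative sketch (not the paper's implementation) of CRC-aided list decoding:
# duplicate each path per bit, prune to the L best, and let a CRC pick the survivor.
def crc4(bits):
    """Toy CRC-4 (polynomial x^4 + x + 1) over a list of 0/1 bits."""
    reg = 0
    for b in bits + [0, 0, 0, 0]:
        reg = ((reg << 1) | b) & 0x1F
        if reg & 0x10:
            reg ^= 0x13          # x^4 + x + 1
    return [(reg >> i) & 1 for i in (3, 2, 1, 0)]

def list_decode(n_bits, L, penalty):
    """Keep at most L paths; penalty(prefix, bit) stands in for the SC metric update."""
    paths = [([], 0.0)]                      # (decided bits, path metric)
    for _ in range(n_bits):
        # 1) duplicate every path with both hypotheses for the next bit
        candidates = [(bits + [b], m + penalty(bits, b))
                      for bits, m in paths for b in (0, 1)]
        # 2) prune: keep the L candidates with the smallest metric
        paths = sorted(candidates, key=lambda p: p[1])[:L]
    # 3) CRC-aided selection: most likely path whose trailing CRC checks out
    for bits, m in sorted(paths, key=lambda p: p[1]):
        info, crc = bits[:-4], bits[-4:]
        if crc4(info) == crc:
            return info
    return paths[0][0][:-4]                  # fall back to the best path

# Toy usage: penalize disagreement with a fixed "received" pattern.
rx = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = list_decode(len(rx), L=4,
                      penalty=lambda bits, b: 0.0 if b == rx[len(bits)] else 1.0)
print(decoded)
```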

1,263 citations


Journal ArticleDOI
TL;DR: The LLR-based formulation of the successive cancellation list (SCL) decoder is presented, which leads to a more efficient hardware implementation of the decoder compared to the known log-likelihood based implementation.
Abstract: We show that successive cancellation list decoding can be formulated exclusively using log-likelihood ratios. In addition to numerical stability, the log-likelihood ratio based formulation has useful properties that simplify the sorting step involved in successive cancellation list decoding. We propose a hardware architecture of the successive cancellation list decoder in the log-likelihood ratio domain which, compared with a log-likelihood domain implementation, requires less irregular and smaller memories. This simplification, together with the gains in the metric sorter, leads to $56\%$ to $137\%$ higher throughput per unit area than other recently proposed architectures. We then evaluate the empirical performance of the CRC-aided successive cancellation list decoder at different list sizes using different CRCs and conclude that it is important to adapt the CRC length to the list size in order to achieve the best error-rate performance of concatenated polar codes. Finally, we synthesize conventional successive cancellation decoders at large block-lengths with the same block-error probability as our proposed CRC-aided successive cancellation list decoders to demonstrate that, while our decoders have slightly lower throughput and larger area, they have a significantly smaller decoding latency.
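
The LLR-domain path-metric update used in this kind of formulation can be written compactly. The sketch below (illustrative Python, not the paper's hardware) shows the exact update together with the hardware-friendly approximation that penalizes $|\lambda|$ only when the decision disagrees with the sign of the decision LLR; the decision LLR itself would come from the SC recursion, which is omitted.

```python
# Sketch of the LLR-based SCL path-metric update and its approximation.
import math

def pm_update_exact(pm, llr, u_hat):
    """Exact update: PM += ln(1 + exp(-(1 - 2*u_hat) * llr))."""
    return pm + math.log1p(math.exp(-(1 - 2 * u_hat) * llr))

def pm_update_approx(pm, llr, u_hat):
    """Hardware-friendly approximation: add |llr| only when the decision
    u_hat disagrees with the hard decision implied by the sign of llr."""
    if (llr >= 0 and u_hat == 1) or (llr < 0 and u_hat == 0):
        return pm + abs(llr)
    return pm

# The two rules differ by at most ln(2):
for llr in (8.0, -3.5, 0.4):
    for u in (0, 1):
        print(llr, u, round(pm_update_exact(0.0, llr, u), 3),
              pm_update_approx(0.0, llr, u))
```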

541 citations


Journal ArticleDOI
TL;DR: This paper constructs protograph-based spatially coupled low-density parity-check codes by coupling together a series of L disjoint, or uncoupled, LDPC code Tanner graphs into a single coupled chain, and obtains sequences of asymptotically good LDPC codes with fast convergence rates and BP thresholds close to the Shannon limit.
Abstract: In this paper, we construct protograph-based spatially coupled low-density parity-check (LDPC) codes by coupling together a series of $L$ disjoint, or uncoupled, LDPC code Tanner graphs into a single coupled chain. By varying $L$ , we obtain a flexible family of code ensembles with varying rates and frame lengths that can share the same encoding and decoding architecture for arbitrary $L$ . We demonstrate that the resulting codes combine the best features of optimized irregular and regular codes in one design: capacity approaching iterative belief propagation (BP) decoding thresholds and linear growth of minimum distance with block length. In particular, we show that, for sufficiently large $L$ , the BP thresholds on both the binary erasure channel and the binary-input additive white Gaussian noise channel saturate to a particular value significantly better than the BP decoding threshold and numerically indistinguishable from the optimal maximum a posteriori decoding threshold of the uncoupled LDPC code. When all variable nodes in the coupled chain have degree greater than two, asymptotically the error probability converges at least doubly exponentially with decoding iterations and we obtain sequences of asymptotically good LDPC codes with fast convergence rates and BP thresholds close to the Shannon limit. Further, the gap to capacity decreases as the density of the graph increases, opening up a new way to construct capacity achieving codes on memoryless binary-input symmetric-output channels with low-complexity BP decoding.
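
The coupling step itself is mechanical once the component matrices are fixed. Below is a small illustrative NumPy sketch of the standard edge-spreading construction: an uncoupled protograph base matrix $B$ is split into component matrices $B_0,\ldots,B_w$ that sum to $B$, and $L$ copies are tiled along a diagonal band. The split shown for the (3,6)-regular example is one conventional choice, not necessarily the exact ensembles studied in the paper.

```python
# Illustrative edge-spreading construction of a spatially coupled protograph.
import numpy as np

def couple(components, L):
    """components: list [B_0, ..., B_w] of equally sized base matrices.
    Returns the (L+w) x L block band matrix of the coupled protograph."""
    w = len(components) - 1
    bc, bv = components[0].shape               # rows/cols of each component
    H = np.zeros(((L + w) * bc, L * bv), dtype=int)
    for t in range(L):                         # copy t of the uncoupled graph
        for i, Bi in enumerate(components):
            H[(t + i) * bc:(t + i + 1) * bc, t * bv:(t + 1) * bv] = Bi
    return H

# Example: (3,6)-regular protograph B = [[3, 3]] split over w = 2.
B0, B1, B2 = np.array([[1, 1]]), np.array([[1, 1]]), np.array([[1, 1]])
print(couple([B0, B1, B2], L=5))
```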

237 citations


Book ChapterDOI
26 Apr 2015
TL;DR: A new decoding algorithm for random binary linear codes is proposed, improving on the sort-and-match approach on which all variants of the currently best known decoding algorithms are built.
Abstract: We propose a new decoding algorithm for random binary linear codes. The so-called information set decoding algorithm of Prange (1962) achieves worst-case complexity \(2^{0.121n}\). In the late 80s, Stern proposed a sort-and-match version of Prange’s algorithm, on which all variants of the currently best known decoding algorithms are built. The fastest algorithm of Becker, Joux, May and Meurer (2012) achieves running time \(2^{0.102n}\) in the full distance decoding setting and \(2^{0.0494n}\) with half (bounded) distance decoding.
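
For context, the Prange (1962) baseline mentioned above fits in a few lines: repeatedly guess an information set, Gaussian-eliminate the parity-check matrix with respect to it, and accept if the transformed syndrome already has weight at most $t$. The toy Python sketch below (random code and parameters invented for illustration) follows that outline; it is not the new algorithm of the paper.

```python
# Toy sketch of Prange's information-set decoding over GF(2).
import numpy as np

rng = np.random.default_rng(1)

def prange(H, s, t, max_iter=10_000):
    """Find e with H e^T = s and weight(e) <= t by random information-set guessing."""
    r, n = H.shape
    for _ in range(max_iter):
        perm = rng.permutation(n)
        A = np.concatenate([H[:, perm], s.reshape(-1, 1)], axis=1) % 2
        row, pivots = 0, []
        for col in range(n):                      # Gaussian elimination over GF(2)
            if row == r:
                break
            piv = np.nonzero(A[row:, col])[0]
            if piv.size == 0:
                continue
            A[[row, row + piv[0]]] = A[[row + piv[0], row]]
            for rr in range(r):
                if rr != row and A[rr, col]:
                    A[rr] ^= A[row]
            pivots.append(col)
            row += 1
        if row < r:
            continue                              # singular guess, try again
        s_prime = A[:, -1]
        if s_prime.sum() <= t:                    # error confined to the pivot positions
            e = np.zeros(n, dtype=int)
            e[[perm[c] for c in pivots]] = s_prime
            return e
    return None

# Tiny example: random [12, 6] code, weight-2 error.
H = rng.integers(0, 2, size=(6, 12))
e_true = np.zeros(12, dtype=int); e_true[[2, 9]] = 1
s = H @ e_true % 2
print(prange(H, s, t=2), "vs", e_true)
```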

202 citations


Journal ArticleDOI
TL;DR: A multibit-decision approach that can significantly reduce latency of SCL decoders and a general decoding scheme that can perform intermediate decoding of any $2^{K}$ bits simultaneously, which can reduce the overall decoding latency to as short as $n/2^{K-2}-2$ cycles.
Abstract: Polar codes, as the first provable capacity-achieving error-correcting codes, have received much attention in recent years. However, the decoding performance of polar codes with the traditional successive-cancellation (SC) algorithm cannot match that of low-density parity-check or Turbo codes. Because the SC list (SCL) decoding algorithm can significantly improve the error-correcting performance of polar codes, design of SCL decoders is important for polar codes to be deployed in practical applications. However, because the prior latency reduction approaches for SC decoders are not applicable to SCL decoders, these list decoders suffer from a long-latency bottleneck. In this paper, we propose a multibit-decision approach that can significantly reduce the latency of SCL decoders. First, we present a reformulated SCL algorithm that can perform intermediate decoding of 2 bits together. The proposed approach, referred to as the 2-bit reformulated SCL (2b-rSCL) algorithm, can reduce the latency of the SCL decoder from ($3n-2$) to ($2n-2$) clock cycles without any performance loss. Then, we extend the idea of 2-bit decision to the general case, and propose a general decoding scheme that can perform intermediate decoding of any $2^{K}$ bits simultaneously. This general approach, referred to as the $2^{K}$-bit reformulated SCL ($2^{K}$b-rSCL) algorithm, can reduce the overall decoding latency to as short as $n/2^{K-2}-2$ cycles. Furthermore, on the basis of the proposed algorithms, very large-scale integration architectures for 2b-rSCL and 4b-rSCL decoders are synthesized. Compared with a prior SCL decoder, the proposed (1024, 512) 2b-rSCL and 4b-rSCL decoders can achieve 21% and 60% reduction in latency, 1.66 and 2.77 times increase in coded throughput with list size 2, and 2.11 and 3.23 times increase in coded throughput with list size 4, respectively.

131 citations


Journal ArticleDOI
TL;DR: In this paper, a new algorithm based on unrolling the decoding tree of the polar code was proposed to improve the speed of list decoding by an order of magnitude when implemented in software.
Abstract: Polar codes asymptotically achieve the symmetric capacity of memoryless channels, yet their error-correcting performance under successive-cancellation (SC) decoding for short and moderate length codes is worse than that of other modern codes such as low-density parity-check (LDPC) codes. Of the many methods to improve the error-correction performance of polar codes, list decoding yields the best results, especially when the polar code is concatenated with a cyclic redundancy check (CRC). List decoding involves exploring several decoding paths with SC decoding, and therefore tends to be slower than SC decoding itself, by an order of magnitude in practical implementations. In this paper, we present a new algorithm based on unrolling the decoding tree of the code that improves the speed of list decoding by an order of magnitude when implemented in software. Furthermore, we show that for software-defined radio applications, our proposed algorithm is faster than the fastest software implementations of LDPC decoders in the literature while offering comparable error-correction performance at similar or shorter code lengths.

103 citations


Journal ArticleDOI
TL;DR: It is shown that univariate multiplicity codes of rate $R$ over fields of prime order can be list-decoded from a $(1-R-\varepsilon)$ fraction of errors in polynomial time (for constant $R$ and $\varepsilon$).
Abstract: We study the list-decodability of multiplicity codes. These codes, which are based on evaluations of high-degree polynomials and their derivatives, have rate approaching 1 while simultaneously allowing for sublinear-time error correction. In this paper, we show that multiplicity codes also admit powerful list-decoding and local list-decoding algorithms that work even in the presence of a large error fraction. In other words, we give algorithms for recovering a polynomial given several evaluations of it and its derivatives, where possibly many of the given evaluations are incorrect. Our first main result shows that univariate multiplicity codes over fields of prime order can be list-decoded up to the so-called "list-decoding capacity." Specifically, we show that univariate multiplicity codes of rate $R$ over fields of prime order can be list-decoded from a $(1-R-\varepsilon)$ fraction of errors in polynomial time (for constant $R$ and $\varepsilon$). This resembles the behavior of the "Folded Reed-Solomon Codes" of Guruswami and Rudra (Trans. Info. Theory 2008). The list-decoding algorithm is based on constructing a differential equation of which the desired codeword is a solution; this differential equation is then solved using a power-series approach (a variation of Hensel lifting) along with other algebraic ideas. Our second main result is a list-decoding algorithm for decoding multivariate multiplicity codes up to their Johnson radius. The key ingredient of this algorithm is the construction of a special family of "algebraically-repelling" curves passing through the points of $\mathbb{F}^{m}$; no moderate-degree multivariate polynomial over $\mathbb{F}^{m}$ can simultaneously vanish on all of these curves.

96 citations


Journal ArticleDOI
TL;DR: This paper conceives a modified non-binary decoding algorithm for homogeneous Calderbank-Shor-Steane-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-four cycles.
Abstract: The near-capacity performance of classical low-density parity check (LDPC) codes and their efficient iterative decoding makes quantum LDPC (QLDPC) codes a promising candidate for quantum error correction. In this paper, we present a comprehensive survey of QLDPC codes from the perspective of code design as well as in terms of their decoding algorithms. We also conceive a modified non-binary decoding algorithm for homogeneous Calderbank–Shor–Steane-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-four cycles. Our modified decoder outperforms the state-of-the-art decoders in terms of their word error rate performance, despite imposing a reduced decoding complexity. Finally, we intricately amalgamate our modified decoder with the classic uniformly reweighted belief propagation for the sake of achieving an improved performance.

75 citations


Proceedings ArticleDOI
24 May 2015
TL;DR: This work proposes two new architectures that exploit the structure of the path metrics in a log-likelihood ratio based formulation of successive cancellation list decoding of polar codes.
Abstract: We focus on the metric sorter unit of successive cancellation list decoders for polar codes, which lies on the critical path in all current hardware implementations of the decoder. We review existing metric sorter architectures and we propose two new architectures that exploit the structure of the path metrics in a log-likelihood ratio based formulation of successive cancellation list decoding. Our synthesis results show that, for the list size of L = 32, our first proposed sorter is 14% faster and 45% smaller than existing sorters, while for smaller list sizes, our second sorter has a higher delay in return for up to 36% reduction in the area.

67 citations


Journal ArticleDOI
TL;DR: It is shown that optimal decoding of stabilizer codes (previously known to be NP-hard) is in fact computationally much harder than optimal decoding of classical linear codes: it is #P-complete.
Abstract: In this paper, we address the computational hardness of optimally decoding a quantum stabilizer code. Much like classical linear codes, errors are detected by measuring certain check operators which yield an error syndrome, and the decoding problem consists of determining the most likely recovery given the syndrome. The corresponding classical problem is known to be NP-complete, and a similar decoding problem for quantum codes is also known to be NP-complete. However, this decoding strategy is not optimal in the quantum setting as it does not consider error degeneracy, which causes distinct errors to have the same effect on the code. Here, we show that optimal decoding of stabilizer codes (previously known to be NP-hard) is in fact computationally much harder than optimal decoding of classical linear codes: it is #P-complete.

63 citations


Journal ArticleDOI
TL;DR: Owing to its significantly increased parallelism, the proposed algorithm facilitates throughputs and latencies that are up to 6.86 times superior to those of the state-of-the-art algorithm, when employed for the LTE and WiMAX turbo codes, but at the cost of a moderately increased computational complexity and resource requirement.
Abstract: This paper proposes a novel alternative to the Logarithmic Bahl-Cocke-Jelinek-Raviv (Log-BCJR) algorithm for turbo decoding, yielding significantly improved processing throughput and latency. While the Log-BCJR processes turbo-encoded bits in a serial forwards-backwards manner, the proposed algorithm operates in a fully-parallel manner, processing all bits in both components of the turbo code at the same time. The proposed algorithm is compatible with all turbo codes, including those of the LTE and WiMAX standards. These standardized codes employ odd-even interleavers, facilitating a novel technique for reducing the complexity of the proposed algorithm by 50%. More specifically, odd-even interleavers allow the proposed algorithm to alternate between processing the odd-indexed bits of the first component code at the same time as the even-indexed bits of the second component, and vice-versa. Furthermore, the proposed fully-parallel algorithm is shown to converge to the same error correction performance as the state-of-the-art turbo decoding algorithm. Owing to its significantly increased parallelism, the proposed algorithm facilitates throughputs and latencies that are up to 6.86 times superior to those of the state-of-the-art algorithm, when employed for the LTE and WiMAX turbo codes. However, this is achieved at the cost of a moderately increased computational complexity and resource requirement.

Proceedings ArticleDOI
21 Dec 2015
TL;DR: A generalization of the successive cancellation algorithm for channels with memory, where the complexity is polynomial in the number of states, is proposed, and two polar coding schemes are proposed to generate codewords following the non-i.i.d. process required to achieve the capacity.
Abstract: Polar codes for channels with memory are considered in this paper. Sasoglu proved that Arikan's polarization applies to finite-state memory channels, but practical decoding algorithms of polar codes for such channels have been unknown. This paper proposes a generalization of the successive cancellation algorithm for channels with memory where the complexity is polynomial in the number of states. In addition, two polar coding schemes are proposed to generate codewords following the non-i.i.d. process required to achieve the capacity. Whereas one is a simple application of the polar coding scheme for asymmetric memoryless channels, the other combines a polar code with a fixed-to-variable length homophonic coding scheme, which can realize the input distribution exactly equal to the target process.

Journal ArticleDOI
TL;DR: This letter proposes a new method to reduce the decoding complexity of ADMM-based LP decoder by decreasing the number of Euclidean projections, and results show that the proposed decoder can still save roughly 20% decoding time even if both the over-relaxation and early termination techniques are used.
Abstract: The Euclidean projection onto check polytopes is the most time-consuming operation in linear programming (LP) decoding based on the alternating direction method of multipliers (ADMM) for low-density parity-check (LDPC) codes. In this letter, instead of reducing the complexity of the Euclidean projection itself, we propose a new method to reduce the decoding complexity of the ADMM-based LP decoder by decreasing the number of Euclidean projections. In particular, if all absolute values of the element-wise differences between the input vector of the Euclidean projection in the current iteration and that in the previous iteration are less than a predefined value, then the Euclidean projection at the current iteration is no longer performed. Simulation results show that the proposed decoder can still save roughly 20% decoding time even if both the over-relaxation and early termination techniques are used.
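
The skipping rule itself is essentially a one-line test. The following fragment is a minimal sketch of that idea, with the true projection onto the check polytope replaced by a clipping placeholder and all names (maybe_project, delta) invented for illustration.

```python
# Sketch of the projection-skipping rule: if every element of the projection input
# moved by less than a threshold since the previous iteration, reuse the cached result.
import numpy as np

def maybe_project(v, cache, project, delta=1e-2):
    """cache: dict holding the previous input/output for this check node."""
    if cache and np.max(np.abs(v - cache["v"])) < delta:
        return cache["z"]                 # skip the Euclidean projection
    z = project(v)                        # otherwise project onto the check polytope
    cache["v"], cache["z"] = v.copy(), z
    return z

# Toy usage with clipping to [0, 1] standing in for the true polytope projection.
cache = {}
for v in (np.array([0.2, 1.4, -0.1]), np.array([0.205, 1.398, -0.102])):
    print(maybe_project(v, cache, lambda x: np.clip(x, 0.0, 1.0)))
```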

Posted Content
TL;DR: Improved bounds are obtained for a suite of problems in effective algebraic geometry, including Hilbert nullstellensatz, radical membership and counting rational points in low degree varieties.
Abstract: Let $f$ be a polynomial of degree $d$ in $n$ variables over a finite field $\mathbb{F}$. The polynomial is said to be unbiased if the distribution of $f(x)$ for a uniform input $x \in \mathbb{F}^n$ is close to the uniform distribution over $\mathbb{F}$, and is called biased otherwise. The polynomial is said to have low rank if it can be expressed as a composition of a few lower degree polynomials. Green and Tao [Contrib. Discrete Math 2009] and Kaufman and Lovett [FOCS 2008] showed that bias implies low rank for fixed degree polynomials over fixed prime fields. This lies at the heart of many tools in higher order Fourier analysis. In this work, we extend this result to all prime fields (of size possibly growing with $n$). We also provide a generalization to nonprime fields in the large characteristic case. However, we state all our applications in the prime field setting for the sake of simplicity of presentation. As an immediate application, we obtain improved bounds for a suite of problems in effective algebraic geometry, including Hilbert nullstellensatz, radical membership and counting rational points in low degree varieties. Using the above generalization to large fields as a starting point, we are also able to settle the list decoding radius of fixed degree Reed-Muller codes over growing fields. The case of fixed size fields was solved by Bhowmick and Lovett [STOC 2015], which resolved a conjecture of Gopalan-Klivans-Zuckerman [STOC 2008]. Here, we show that the list decoding radius is equal to the minimum distance of the code for all fixed degrees, even when the field size is possibly growing with $n$.

Journal ArticleDOI
TL;DR: This paper investigates the decoding process of asynchronous convolutional-coded physical-layer network coding (PNC) systems with a layered decoding framework consisting of three layers, and proposes the Jt-CNC decoding algorithm, based on belief propagation, which is BER-optimal for synchronous PNC and near optimal for asynchronous PNC.
Abstract: This paper investigates the decoding process of asynchronous convolutional-coded physical-layer network coding (PNC) systems. Specifically, we put forth a layered decoding framework for convolutional-coded PNC consisting of three layers: symbol realignment layer, codeword realignment layer, and joint channel-decoding network coding (Jt-CNC) decoding layer. Our framework can deal with phase asynchrony (phase offset) and symbol arrival-time asynchrony (symbol misalignment) between the signals simultaneously transmitted by multiple sources. A salient feature of this framework is that it can handle both fractional and integral symbol misalignments. For the decoding layer, instead of Jt-CNC, previously proposed PNC decoding algorithms (e.g., XOR-CD and reduced-state Viterbi algorithms) can also be used with our framework to deal with general symbol misalignments. Our Jt-CNC algorithm, based on belief propagation, is BER-optimal for synchronous PNC and near optimal for asynchronous PNC. Extending beyond convolutional codes, we further generalize the Jt-CNC decoding algorithm for all cyclic codes. Our simulation shows that Jt-CNC outperforms the previously proposed XOR-CD algorithm and reduced-state Viterbi algorithm by 2 dB for synchronous PNC. For both phase-asynchronous and symbol-asynchronous PNC, Jt-CNC performs better than the other two algorithms. Importantly, for real wireless network experimentation, we implemented our decoding algorithm in a PNC prototype built on the USRP software radio platform. Our experiment shows that the proposed Jt-CNC decoder works well in practice.

Journal ArticleDOI
TL;DR: Under $\rm MAP$ decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size.
Abstract: Motivated by the significant performance gains which polar codes experience under successive cancellation list decoding, their scaling exponent is studied as a function of the list size. In particular, the error probability is fixed, and the tradeoff between the block length and back-off from capacity is analyzed. A lower bound is provided on the error probability under $\rm MAP$ decoding with list size $L$ for any binary-input memoryless output-symmetric channel and for any class of linear codes such that their minimum distance is unbounded as the block length grows large. Then, it is shown that under $\rm MAP$ decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size. In particular, this result applies to polar codes, since their minimum distance tends to infinity as the block length increases. A similar result is proved for genie-aided successive cancellation decoding when transmission takes place over the binary erasure channel, namely, the scaling exponent remains constant for any fixed number of helps from the genie. Note that since genie-aided successive cancellation decoding might be strictly worse than successive cancellation list decoding, the problem of establishing the scaling exponent of the latter remains open.

Journal ArticleDOI
TL;DR: This paper compares the finite-length performance of protograph-based spatially coupled low-density parity-check (SC-LDPC) codes and LDPC block codes (LDPC-BCs) over GF(q) with a sliding window decoder with a stopping rule based on a soft belief propagation (BP) estimate to reduce computational complexity and latency.
Abstract: In this paper, we compare the finite-length performance of protograph-based spatially coupled low-density parity-check (SC-LDPC) codes and LDPC block codes (LDPC-BCs) over GF(q). To reduce computational complexity and latency, a sliding window decoder with a stopping rule based on a soft belief propagation (BP) estimate is used for the q-ary SC-LDPC codes. Two regimes are considered: one when the constraint length of q-ary SC-LDPC codes is equal to the block length of q-ary LDPC-BCs and the other when the two decoding latencies are equal. Simulation results confirm that, in both regimes, (3,6)-, (3,9)-, and (3,12)-regular non-binary SC-LDPC codes can significantly outperform both binary and non-binary LDPC-BCs and binary SC-LDPC codes. Finally, we present a computational complexity comparison of q-ary SC-LDPC codes and q-ary LDPC-BCs under equal decoding latency and equal decoding performance assumptions.

Journal ArticleDOI
TL;DR: A framework for solving polynomial equations with size constraints on solutions is developed and powerful analogies from algebraic number theory allow us to identify the appropriate analogue of a lattice in each application and provide efficient algorithms to find a suitably short vector, thus allowing completely parallel proofs of the above theorems.
Abstract: We develop a framework for solving polynomial equations with size constraints on solutions. We obtain our results by showing how to apply a technique of Coppersmith for finding small solutions of polynomial equations modulo integers to analogous problems over polynomial rings, number fields, and function fields. This gives us a unified view of several problems arising naturally in cryptography, coding theory, and the study of lattices. We give (1) a polynomial-time algorithm for finding small solutions of polynomial equations modulo ideals over algebraic number fields, (2) a faster variant of the Guruswami-Sudan algorithm for list decoding of Reed-Solomon codes, and (3) an algorithm for list decoding of algebraic-geometric codes that handles both single-point and multi-point codes. Coppersmith's algorithm uses lattice basis reduction to find a short vector in a carefully constructed lattice; powerful analogies from algebraic number theory allow us to identify the appropriate analogue of a lattice in each application and provide efficient algorithms to find a suitably short vector, thus allowing us to give completely parallel proofs of the above theorems.

Journal ArticleDOI
TL;DR: A variable-node-based dynamic scheduling decoding algorithm that updates the same number of messages in one iteration as the original BP decoding algorithm does, which is different from some other dynamic decoding algorithms.
Abstract: Among the belief-propagation (BP) decoding algorithms of low-density parity-check (LDPC) codes, the algorithms based on dynamic scheduling strategy show excellent performance. In this letter, we propose a variable-node-based dynamic scheduling decoding algorithm. For the proposed algorithm, the reliability of variable nodes is evaluated based on the log-likelihood ratio (LLR) values and the parity-check equations; then, a more accurate dynamic selection strategy is presented. Simultaneously, the oscillating variable nodes are processed so that the influence of the spread of error messages caused by oscillation is suppressed. In addition, the proposed algorithm updates the same number of messages in one iteration as the original BP decoding algorithm does, which is different from some other dynamic decoding algorithms. Simulation results demonstrate that the proposed algorithm outperforms other algorithms.

Proceedings ArticleDOI
01 Sep 2015
TL;DR: It is proved that random decisions on low-entropy bits may be replaced by an argmax decision without any loss of performance in the encoding and decoding of polar codes.
Abstract: We show how to replace some of the randomized decisions in the encoding and decoding of polar codes by deterministic decisions. Specifically, we prove that random decisions on low-entropy bits may be replaced by an argmax decision without any loss of performance. We illustrate the usefulness of this result in the case of polar coding for the Wyner-Ziv problem and for channel coding.
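
The replacement being analyzed is easy to state concretely: for a low-entropy bit, a randomized scheme draws it from its conditional distribution, while the derandomized scheme simply takes the most likely value. A toy sketch, with p1 standing in for the conditional probability that the bit equals 1 (a placeholder, not the paper's notation):

```python
# Randomized decision vs. argmax decision for a "low-entropy" bit.
import random

def decide_random(p1):
    return 1 if random.random() < p1 else 0   # sample from the conditional distribution

def decide_argmax(p1):
    return 1 if p1 > 0.5 else 0               # deterministic most-likely value

p1 = 0.98   # a low-entropy bit: the two rules almost always agree
print(decide_random(p1), decide_argmax(p1))
```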

Proceedings ArticleDOI
19 Apr 2015
TL;DR: In this work, aiming at a low-latency list decoding implementation, a double thresholding algorithm is proposed for fast list pruning and, with negligible performance degradation, the list pruning delay is greatly reduced.
Abstract: For polar codes with short-to-medium code length, list successive cancellation decoding is used to achieve a good error-correcting performance. However, list pruning in the current list decoding is based on the sorting strategy and its timing complexity is high. This results in a long decoding latency for large list size. In this work, aiming at a low-latency list decoding implementation, a double thresholding algorithm is proposed for a fast list pruning. As a result, with a negligible performance degradation, the list pruning delay is greatly reduced. Based on the double thresholding, a low-latency list decoding architecture is proposed and implemented using a UMC 90nm CMOS technology. Synthesis results show that, even for a large list size of 16, the proposed low-latency architecture achieves a decoding throughput of 220 Mbps at a frequency of 641 MHz.
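
One plausible reading of the pruning idea (the paper's exact thresholds and tie-breaking rules may differ) is sketched below: rather than fully sorting all 2L expanded path metrics, candidates better than an accept threshold survive immediately, candidates worse than a reject threshold are dropped, and only the borderline band still needs to be compared.

```python
# Hedged illustration of threshold-based list pruning (smaller metric = better path).
def prune_double_threshold(metrics, L, accept_th, reject_th):
    """metrics: list of candidate path metrics. Returns indices of up to L survivors."""
    accepted = [i for i, m in enumerate(metrics) if m <= accept_th]
    middle   = [i for i, m in enumerate(metrics) if accept_th < m < reject_th]
    survivors = accepted[:L]
    if len(survivors) < L:                       # fill the rest from the middle band
        middle.sort(key=lambda i: metrics[i])    # a much smaller sort than all 2L values
        survivors += middle[:L - len(survivors)]
    return survivors

metrics = [0.3, 5.1, 0.9, 2.4, 1.7, 7.8, 0.2, 3.3]
print(prune_double_threshold(metrics, L=4, accept_th=1.0, reject_th=6.0))
```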

Journal ArticleDOI
TL;DR: The fundamental limits of channels with mismatched decoding are addressed, and an identity is deduced between the Verdu–Han general channel capacity formula, and the mismatch capacity formula applied to maximum likelihood decoding metric.
Abstract: The fundamental limits of channels with mismatched decoding are addressed. A general formula is established for the mismatch capacity of a general channel, defined as a sequence of conditional distributions with a general decoding metrics sequence. We deduce an identity between the Verdu–Han general channel capacity formula, and the mismatch capacity formula applied to maximum likelihood decoding metric. Furthermore, several upper bounds on the capacity are provided, and a simpler expression for a lower bound is derived for the case of a non-negative decoding metric. The general formula is specialized to the case of finite input and output alphabet channels with a type-dependent metric. The closely related problem of threshold mismatched decoding is also studied, and a general expression for the threshold mismatch capacity is obtained. As an example of threshold mismatch capacity, we state a general expression for the erasures-only capacity of the finite input and output alphabet channel. We observe that for every channel, there exists a (matched) threshold decoder, which is capacity achieving. In addition, necessary and sufficient conditions are stated for a channel to have a strong converse.

Patent
11 Mar 2015
TL;DR: In this paper, a tree-type decoding graph is generated for polar codes, where decoding paths within a threshold number of critical paths survive within the decoding path list in an order of high likelihood probability.
Abstract: A list decoding method for a polar code includes generating a tree-type decoding graph for input codeword symbols; the generating a tree-type decoding graph including, generating a decoding path list to which a decoding edge is added based on a reliability of a decoding path, the decoding path list being generated such that, among decoding paths generated based on the decoding edge, decoding paths within a threshold number of critical paths survive within the decoding path list in an order of high likelihood probability, and determining an estimation value, which corresponds to a decoding path having a maximum likelihood probability from among decoding paths of the decoding path list, as an information word.

Proceedings ArticleDOI
TL;DR: It is demonstrated that systematic network codes equipped with the proposed algorithm are good candidates for progressive packet recovery owing to their overall decoding delay characteristics.
Abstract: We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve the theoretical optimal performance. Furthermore, we demonstrate that systematic network codes equipped with the proposed algorithm are good candidates for progressive packet recovery owing to their overall decoding delay characteristics.
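
An illustrative Python sketch of progressive decoding by Gaussian elimination over GF(2) follows (not the paper's algorithm verbatim): each arriving coded packet is reduced against the rows collected so far, and a source packet is reported as recovered as soon as its coefficient row reduces to a unit vector.

```python
# Progressive Gaussian-elimination decoder over GF(2) for network-coded packets.
import numpy as np

class ProgressiveDecoder:
    def __init__(self, n_src, payload_len):
        self.C = np.zeros((0, n_src), dtype=np.uint8)        # coefficient rows
        self.P = np.zeros((0, payload_len), dtype=np.uint8)  # matching payload rows

    def receive(self, coeff, payload):
        c, p = coeff.copy(), payload.copy()
        for row in range(self.C.shape[0]):        # forward-reduce the new packet
            pivot = np.argmax(self.C[row])        # leading 1 of the stored row
            if c[pivot]:
                c ^= self.C[row]; p ^= self.P[row]
        if not c.any():
            return self.recovered()               # linearly dependent, discard
        pivot = np.argmax(c)
        for row in range(self.C.shape[0]):        # back-substitute into old rows
            if self.C[row, pivot]:
                self.C[row] ^= c; self.P[row] ^= p
        self.C = np.vstack([self.C, c]); self.P = np.vstack([self.P, p])
        return self.recovered()

    def recovered(self):
        """Indices of source packets already decodable (unit coefficient rows)."""
        return sorted(int(np.argmax(r)) for r in self.C if r.sum() == 1)

# Toy usage: 3 source packets; systematic packet for source 0, then a mix of 1 and 2.
dec = ProgressiveDecoder(3, 4)
print(dec.receive(np.array([1, 0, 0], dtype=np.uint8), np.array([1, 0, 1, 1], dtype=np.uint8)))
print(dec.receive(np.array([0, 1, 1], dtype=np.uint8), np.array([0, 1, 1, 0], dtype=np.uint8)))
print(dec.receive(np.array([0, 0, 1], dtype=np.uint8), np.array([1, 1, 0, 0], dtype=np.uint8)))
```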

Proceedings ArticleDOI
24 May 2015
TL;DR: Simulation results show that a stochastic SC decoder can achieve similar error-correcting performance to its deterministic counterpart, which can pave the way for future hardware design of stochastic polar code decoders.
Abstract: Polar codes have emerged as the most favorable channel codes for their unique capacity-achieving property. To date, numerous approaches for efficient decoding of polar codes have been reported. However, these prior efforts focused on the design of polar decoders via deterministic computation, while the behavior of stochastic polar decoders, which can have potential advantages such as low complexity and strong error-resilience, has not been studied in the existing literature. This paper, for the first time, investigates polar decoding using stochastic logic. Specifically, the commonly-used successive cancellation (SC) algorithm is reformulated into stochastic form. Several methods that can potentially improve decoding performance are discussed and analyzed. Simulation results show that a stochastic SC decoder can achieve similar error-correcting performance to its deterministic counterpart. This work can pave the way for future hardware design of stochastic polar code decoders.
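
As background on the stochastic-logic representation such a reformulation builds on (this is not the reformulated SC decoder itself): a probability is encoded as a Bernoulli bit-stream, and the product of two probabilities is obtained with a bitwise AND of independent streams.

```python
# Stochastic-computing primitive: probabilities as bit-streams, AND = multiplication.
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, length):
    """Encode probability p as a random bit-stream of the given length."""
    return (rng.random(length) < p).astype(np.uint8)

p1, p2, n = 0.8, 0.6, 100_000
prod_stream = to_stream(p1, n) & to_stream(p2, n)   # AND gate multiplies probabilities
print(prod_stream.mean(), "vs", p1 * p2)
```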

Journal ArticleDOI
TL;DR: New scalable decoder architectures for Reed-Solomon (RS) codes are devised, comprising three parts: error-only decoding, error-erasure decoding, and their decoding for singly extended RS codes, and a unified parallel inversionless Blahut algorithm (UPIBA) is proposed by incorporating the key virtues of the error-only decoder ePIBMA into SPIBA.
Abstract: In this paper, we devise new scalable decoder architectures for Reed–Solomon (RS) codes, comprising three parts: error-only decoding, error-erasure decoding, and their decoding for singly extended RS codes. New error-only decoders are devised through algorithmic transformations of the inversionless Berlekamp–Massey algorithm (IBMA). We first generalize the Horiguchi–Koetter formula to evaluate error magnitudes using the error locator polynomial $\Lambda(x)$ and the auxiliary polynomial $B(x)$ produced by IBMA, which effectively eliminates the computation of the error evaluator polynomial. We next devise an enhanced parallel inversionless Berlekamp–Massey algorithm (ePIBMA) that effectively takes advantage of the generalized Horiguchi–Koetter formula. The derivative ePIBMA architecture requires only $2t+1$ ($t$ denotes the error correction capability) systolic cells, in contrast with the $3t$ or more cells of existing regular architectures based on IBMA or the Euclidean algorithm. Moreover, it may literally function as a linear-feedback-shift-register encoder. New error-erasure decoders are devised through algorithmic transformations of the inversionless Blahut algorithm (IBA). The proposed split parallel inversionless Blahut algorithm (SPIBA) yields merely $2t+1$ systolic cells, which is the same number as the error-only decoder ePIBMA. The task is partitioned into two separate steps, computing the complementary error-erasure evaluator polynomial followed by computing the error-erasure locator polynomial, both utilizing SPIBA. Surprisingly, it has exactly the same number of cells and literally the same complexity and throughput as the proposed error-only decoder architecture ePIBMA; it employs 33% less hardware while achieving more than twice the throughput of the serial architecture IBA. We further propose a unified parallel inversionless Blahut algorithm (UPIBA) by incorporating the key virtues of the error-only decoder ePIBMA into SPIBA. The complexity and throughput of the derivative UPIBA architecture are literally the same as ePIBMA and SPIBA, while performing almost equally efficiently as ePIBMA on error-only decoding and as SPIBA on error-erasure decoding. UPIBA also inherits the dynamic power saving feature of ePIBMA and SPIBA. Indeed, UPIBA is highly attractive for on-the-fly implementation of error-erasure decoding. We finally demonstrate that the proposed decoders, i.e., ePIBMA, SPIBA, and UPIBA, can be readily migrated to decode singly extended RS codes with negligible add-ons, except that an extra multiplexer is added to their critical paths. To the author's best knowledge, this is the first time that a high-throughput decoder for singly extended RS codes has been explored.

Proceedings ArticleDOI
14 Jun 2015
TL;DR: A technique is described for designing the message-passing decoder mappings (or lookup tables) based on the ideas of channel quantization; the design is not derived from the sum-product algorithm or any other LDPC decoding algorithm, but from an optimal quantizer inserted in the density evolution algorithm to generate the lookup tables.
Abstract: A recent result has shown connections between statistical learning theory and channel quantization. In this paper, we present a practical application of this result to the implementation of LDPC decoders. In particular, we describe a technique for designing the message-passing decoder mappings (or lookup tables) based on the ideas of channel quantization. This technique is not derived from the sum-product algorithm or any other LDPC decoding algorithm. Instead, the proposed algorithm is based on an optimal quantizer in the sense of maximization of mutual information, which is inserted in the density evolution algorithm to generate the lookup tables. This algorithm has low complexity since it only employs 3-bit messages and lookup tables, which can be easily implemented in hardware. Two quantized versions of the min-sum decoding algorithm are used for comparison. Simulation results for a binary-input AWGN channel show 0.3 dB and 1.2 dB gains versus the two quantized min-sum algorithms. A gain is also seen on the binary symmetric channel.
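
To make the "decoder as lookup tables" idea concrete, the sketch below fills a pairwise check-node table from a quantized min-sum rule, purely for illustration; the paper instead designs its tables with a mutual-information-maximizing quantizer inside density evolution, which is not reproduced here.

```python
# Illustration: a 3-bit message alphabet and a pairwise check-node lookup table,
# here filled with a quantized min-sum rule (not the MI-optimal tables of the paper).
import numpy as np

LEVELS = np.array([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5])  # 3-bit alphabet

def quantize(x):
    return int(np.argmin(np.abs(LEVELS - x)))       # index of the nearest level

# Output index for every pair of 3-bit input messages.
CHECK_LUT = np.array([[quantize(np.sign(a) * np.sign(b) * min(abs(a), abs(b)))
                       for b in LEVELS] for a in LEVELS], dtype=np.uint8)

def check_node(indices):
    """Evaluate a higher-degree check node by chaining the pairwise table."""
    out = indices[0]
    for idx in indices[1:]:
        out = CHECK_LUT[out, idx]
    return out

print(LEVELS[check_node([quantize(2.2), quantize(-0.7), quantize(3.1)])])
```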

Posted Content
TL;DR: In this paper, the error probability of decoding goes to one as the source block length goes to infinity, which implies that we have a strong converse theorem for the one helper source coding problem.
Abstract: We consider the one helper source coding problem posed and investigated by Ahlswede, Korner and Wyner. In this system, the error probability of decoding goes to one as the source block length $n$ goes to infinity. This implies that we have a strong converse theorem for the one helper source coding problem. In this paper we provide a much stronger version of this strong converse theorem for the one helper source coding problem. We prove that the error probability of decoding tends to one exponentially and derive an explicit lower bound of this exponent function.

Journal ArticleDOI
TL;DR: This paper presents two simple and very flexible methods for constructing non-binary (NB) quasi-cyclic (QC) LDPC codes which can be decoded with a reduced-complexity iterative decoding scheme which significantly reduces the hardware implementation complexity.
Abstract: This paper presents two simple and very flexible methods for constructing non-binary (NB) quasi-cyclic (QC) LDPC codes. The proposed construction methods have several known ingredients, including base array, masking, binary to non-binary replacement, and matrix-dispersion. By proper choice and combination of these ingredients, NB-QC-LDPC codes with excellent performance can be constructed. The constructed codes can be decoded with a reduced-complexity iterative decoding scheme which significantly reduces the hardware implementation complexity.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: A List-SD algorithm for short polar codes that has a fixed time complexity and does not make use of a radius, and can match the error correction performance of SC and SCL with as low as 72% of their memory requirements.
Abstract: Polar codes have gained a lot of attention during the past few years, because they can provably achieve the capacity of a memoryless channel. The design of efficient polar code decoders has been an active topic of research. The simple Successive Cancellation (SC) decoding algorithm yields poor error correction performance on short polar codes: the SC-List (SCL) algorithm overcomes this problem, but its hardware implementation requires a large amount of memory. Sphere Decoding (SD) is an alternative decoding technique that has been shown to work well for short polar codes, but it is burdened by undesirable characteristics. The performance of SD strongly depends on the choice of a suitable sphere radius, whose value must be selected according to the conditions of the channel. Channel conditions also affect the algorithm's time complexity, which is consequently variable. In this paper, we introduce a List-SD algorithm for short polar codes. It has a fixed time complexity and does not make use of a radius: thus, no knowledge of the channel noise level is required. It is shown that the error correction performance of List-SD can match that of SC and SCL with as low as 72% of their memory requirements.