Showing papers on "List decoding published in 2006"


Book
20 Jun 2006
TL;DR: Upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs, establishing the goodness of linear codes under optimal maximum-likelihood (ML) decoding.
Abstract: This article is focused on the performance evaluation of linear codes under optimal maximum-likelihood (ML) decoding. Though the ML decoding algorithm is prohibitively complex for most practical codes, analyzing their performance under ML decoding makes it possible to predict their performance without resorting to computer simulations. It also provides a benchmark for testing the sub-optimality of iterative (or other practical) decoding algorithms. This analysis also establishes the goodness of linear codes (or ensembles), determined by the gap between their achievable rates under optimal ML decoding and the information-theoretic limits. In this article, upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds with the focus on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we address bounds based on de Caen's inequality and their improvements, and also consider sphere-packing bounds with their recent improvements targeting codes of moderate block lengths.

190 citations
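
As a concrete starting point for such bounds, the sketch below (Python) computes the classical union bound on the ML word-error probability of a linear code over a BPSK/AWGN channel from its weight enumerator. The (7,4) Hamming code's known weight distribution stands in for the codes the survey treats; the Gallager-type upper bounds surveyed above tighten this baseline.

    import math

    def q_func(x):
        """Gaussian tail function Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def union_bound_wer(weight_dist, rate, ebno_db):
        """Union bound: P_e <= sum_d A_d * Q(sqrt(2*d*R*Eb/N0))."""
        ebno = 10 ** (ebno_db / 10.0)
        return sum(a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
                   for d, a_d in weight_dist.items())

    # (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1 (nonzero weights only).
    hamming_74 = {3: 7, 4: 7, 7: 1}
    for snr_db in (2.0, 4.0, 6.0):
        print(snr_db, union_bound_wer(hamming_74, rate=4 / 7, ebno_db=snr_db))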


Journal ArticleDOI
TL;DR: An iterative algorithm is presented for soft-input soft-output (SISO) decoding of Reed-Solomon (RS) codes that uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix of the RS code.
Abstract: An iterative algorithm is presented for soft-input soft-output (SISO) decoding of Reed-Solomon (RS) codes. The proposed iterative algorithm uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix of the RS code. The novelty is in reducing a submatrix of the binary parity-check matrix that corresponds to less reliable bits to a sparse nature before the SPA is applied at each iteration. The proposed algorithm can be geometrically interpreted as a two-stage gradient descent with an adaptive potential function. This adaptive procedure is crucial to the convergence behavior of the gradient descent algorithm and, therefore, significantly improves the performance. Simulation results show that the proposed decoding algorithm and its variations provide significant gain over hard-decision decoding (HDD) and compare favorably with other popular soft-decision decoding methods.

184 citations
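
The adaptation step that drives the algorithm can be isolated: before each SPA round, the binary parity-check matrix is row-reduced over GF(2) so that the columns of the least reliable bits become weight-1. A hedged Python sketch of this matrix-adaptation step alone (the SPA update is omitted, and the matrix and LLRs below are toy assumptions):

    import numpy as np

    def adapt_parity_check(H, llr):
        """Row-reduce H (mod 2) to put unit columns on the low-|LLR| bits."""
        H = H.copy() % 2
        m, n = H.shape
        order = np.argsort(np.abs(llr))        # least reliable positions first
        pivot = 0
        for col in order:
            if pivot == m:
                break
            rows = np.nonzero(H[pivot:, col])[0]
            if rows.size == 0:
                continue                       # column depends on earlier pivots
            r = pivot + rows[0]
            H[[pivot, r]] = H[[r, pivot]]      # bring a pivot row into place
            for i in range(m):                 # clear the column elsewhere
                if i != pivot and H[i, col]:
                    H[i] ^= H[pivot]
            pivot += 1
        return H

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    llr = np.array([0.2, -3.0, 0.5, 2.0, -1.5, 4.0])
    print(adapt_parity_check(H, llr))          # unit columns at bits 0, 2, 4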


Journal ArticleDOI
TL;DR: This letter presents the first successful method for iterative stochastic decoding of state-of-the-art low-density parity-check (LDPC) codes and has significant potential for high-throughput and/or low-complexity iterative decoding.
Abstract: This letter presents the first successful method for iterative stochastic decoding of state-of-the-art low-density parity-check (LDPC) codes. The proposed method shows the viability of the stochastic approach for decoding LDPC codes on factor graphs. In addition, simulation results for LDPC codes of length 200 and 1024 demonstrate the near-optimal performance of this method with respect to sum-product decoding. The proposed method has significant potential for high-throughput and/or low-complexity iterative decoding.

184 citations
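
The stochastic representation is easy to illustrate: a probability becomes a Bernoulli bit stream, a parity-check node becomes an XOR gate, and a variable node becomes an equality gate that holds its last value on disagreement. A toy Python sketch under these assumptions (real decoders add further mechanisms to break stream correlations, which this sketch omits):

    import random

    def bit_stream(p, length):
        """Bernoulli(p) stream encoding probability p."""
        return [1 if random.random() < p else 0 for _ in range(length)]

    def check_node(a, b):
        """Parity (XOR): encodes p_out = p_a(1-p_b) + p_b(1-p_a)."""
        return [x ^ y for x, y in zip(a, b)]

    def variable_node(a, b):
        """Equality gate with hold state: update only when inputs agree."""
        out, prev = [], 0
        for x, y in zip(a, b):
            if x == y:
                prev = x
            out.append(prev)
        return out

    random.seed(1)
    s = check_node(bit_stream(0.9, 10000), bit_stream(0.8, 10000))
    print(sum(s) / len(s))   # ~ 0.9*0.2 + 0.8*0.1 = 0.26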


Journal ArticleDOI
TL;DR: Simulation results show that for all RM codes of length 256 and many subcodes of length 512, these algorithms approach maximum-likelihood (ML) performance within a margin of 0.1 dB.
Abstract: Recursive list decoding is considered for Reed-Muller (RM) codes. The algorithm repeatedly relegates itself to the shorter RM codes by recalculating the posterior probabilities of their symbols. Intermediate decodings are only performed when these recalculations reach the trivial RM codes. In turn, the updated lists of most plausible codewords are used in subsequent decodings. The algorithm is further improved by using permutation techniques on code positions and by eliminating the most error-prone information bits. Simulation results show that for all RM codes of length 256 and many subcodes of length 512, these algorithms approach maximum-likelihood (ML) performance within a margin of 0.1 dB. As a result, we present tight experimental bounds on ML performance for these codes.

152 citations
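
The recursive structure can be sketched via the Plotkin (u, u+v) decomposition that underlies it: channel LLRs are split and propagated down to the trivial RM codes (repetition and full-space). The Python sketch below implements only this basic recursion, without the list, permutation, or bit-elimination refinements reported above, so it is a baseline rather than the paper's algorithm; the example LLRs are toy assumptions.

    import numpy as np

    def decode_rm(llr, r, m):
        """Hard-decision estimate of an RM(r, m) codeword from channel LLRs."""
        n = 1 << m
        if r == 0:                        # repetition code: combine all LLRs
            return np.full(n, 0 if llr.sum() >= 0 else 1, dtype=int)
        if r == m:                        # full space: bitwise hard decision
            return (llr < 0).astype(int)
        l1, l2 = llr[: n // 2], llr[n // 2 :]
        # v-branch: LLR of u_i XOR (u+v)_i via the "box-plus" combination
        t = np.clip(np.tanh(l1 / 2) * np.tanh(l2 / 2), -0.999999, 0.999999)
        v = decode_rm(2 * np.arctanh(t), r - 1, m - 1)
        # u-branch: fold the v decision back in and combine both looks at u
        u = decode_rm(l1 + (1 - 2 * v) * l2, r, m - 1)
        return np.concatenate([u, (u + v) % 2])

    # RM(1,3): a noisy all-zero codeword (positive LLRs favor 0, one weak bit)
    llr = np.array([3.0, 2.5, -0.5, 3.1, 2.9, 3.2, 2.8, 3.0])
    print(decode_rm(llr, 1, 3))           # -> all zeros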


Journal ArticleDOI
TL;DR: The Viterbi algorithm is now used in most digital cellular phones and digital satellite receivers as well as in such diverse fields as magnetic recording, voice recognition, and DNA sequence analysis.
Abstract: This paper describes how Andrew J. Viterbi developed a non-sequential decoding algorithm which proved useful in showing the superiority of convolutional codes over block codes for a given degree of decoding complexity. The Viterbi algorithm is now used in most digital cellular phones and digital satellite receivers as well as in such diverse fields as magnetic recording, voice recognition, and DNA sequence analysis.

151 citations
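
For reference, the algorithm itself fits in a few lines. Below is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); the message and the injected error are illustrative assumptions.

    G = [0b111, 0b101]            # generator polynomials (7, 5) octal

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state
            out += [bin(reg & g).count("1") % 2 for g in G]
            state = reg >> 1
        return out

    def viterbi(received):
        """Trellis search over 4 states with Hamming branch metrics."""
        INF = float("inf")
        metric = {0: 0.0, 1: INF, 2: INF, 3: INF}
        paths = {s: [] for s in metric}
        for t in range(0, len(received), 2):
            new_metric, new_paths = {}, {}
            for s in range(4):
                best = (INF, None, None)            # (metric, prev state, bit)
                for b in (0, 1):                    # hypothesised input bit
                    for ps in range(4):
                        if ((b << 2) | ps) >> 1 != s:
                            continue
                        reg = (b << 2) | ps
                        exp = [bin(reg & g).count("1") % 2 for g in G]
                        dist = sum(e != r for e, r in zip(exp, received[t:t + 2]))
                        if metric[ps] + dist < best[0]:
                            best = (metric[ps] + dist, ps, b)
                new_metric[s] = best[0]
                new_paths[s] = paths[best[1]] + [best[2]] if best[1] is not None else []
            metric, paths = new_metric, new_paths
        return paths[min(metric, key=metric.get)]   # survivor of the best state

    msg = [1, 0, 1, 1, 0, 0]                        # includes flushing zeros
    rx = encode(msg)
    rx[3] ^= 1                                      # inject one channel error
    print(viterbi(rx) == msg)                       # True: the error is corrected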


Journal ArticleDOI
TL;DR: This paper introduces a method to construct high-coding-gain lattices with low decoding complexity based on LDPC codes, applying Construction D', due to Bos, Conway, and Sloane, to a set of parity checks defining a family of nested LDPC codes to construct such lattices.
Abstract: Low-density parity-check (LDPC) codes can have an impressive performance under iterative decoding algorithms. In this paper we introduce a method to construct high-coding-gain lattices with low decoding complexity based on LDPC codes. To construct such lattices we apply Construction D', due to Bos, Conway, and Sloane, to a set of parity checks defining a family of nested LDPC codes. For the decoding algorithm, we generalize the application of the max-sum algorithm to the Tanner graph of lattices. Bounds on the decoding complexity are derived, and our analysis shows that using LDPC codes results in low decoding complexity for the proposed lattices. The progressive edge growth (PEG) algorithm is then extended to construct a class of nested regular LDPC codes, which are in turn used to generate low-density parity-check lattices. Using this approach, a class of two-level lattices is constructed. The performance of this class improves as the dimension increases and is within 3 dB of the Shannon limit for error probabilities of about 10^-6, while the decoding complexity is still quite manageable even for dimensions of a few thousand.

119 citations


Journal ArticleDOI
TL;DR: This paper presents an iterative soft-decision decoding algorithm for Reed-Solomon (RS) codes offering both complexity and performance advantages over previously known decoding algorithms, and introduces the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft- decoder.
Abstract: In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon (RS) codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm which combines two powerful soft-decision decoding techniques which were previously regarded in the literature as competitive, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief-propagation based on adaptive parity-check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation-based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of RS codes.

96 citations


Journal ArticleDOI
TL;DR: These Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel.
Abstract: We consider the problem of optimally decoding a quantum error correction code—that is, to find the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead.

96 citations


Proceedings ArticleDOI
22 Mar 2006
TL;DR: This paper proposes a new approach to channel code design for wireless network applications that provides flexibility in the design of error protection schemes for multi-terminal wireless networks.
Abstract: This paper proposes a new approach to channel code design for wireless network applications. The resulting nested codes can be decoded at different effective rates by different receivers, rates that depend on the prior knowledge possessed by each receiver; we say these codes have multiple interpretations. We have identified several applications in wireless networks where this property is useful. Specific nested code constructions as well as efficient soft and hard decision decoding algorithms are described. The concept of a nested code with multiple interpretations provides flexibility in the design of error protection schemes for multi-terminal wireless networks.

78 citations


PatentDOI
Jung-Hoe Kim1, Miao Lei1, Oh Eun-Mi1
TL;DR: In this article, a decoding level generation unit produces decoding-level information that helps a bitstream including a number of audio channel signals and space information to be decoded into multiple channels, wherein the space information includes information about magnitude differences and/or similarities between channels.
Abstract: A system, medium, and method of encoding/decoding a multi-channel audio signal, including a decoding-level generation unit producing decoding-level information that helps a bitstream including a number of audio channel signals and space information to be decoded into a number of audio channel signals, wherein the space information includes information about magnitude differences and/or similarities between channels, and an audio decoder decoding the bitstream according to the decoding-level information. Accordingly, even a single input bitstream can be decoded into a suitable number of channels depending on the type of speaker configuration used. Scalable channel decoding can be achieved by partially decoding an input bitstream. In scalable channel decoding, a decoder may set decoding levels and output audio channel signals according to the decoding levels, thereby reducing decoding complexity.

77 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: This paper addresses the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks; the general system model considered admits code constructions with lower decoding complexity compared to codes based on some earlier system models.
Abstract: We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model wherein the source transmits a vector in the first hop, and in the second hop the relays transmit a vector that is a transformation of the received vector by a relay-specific unitary transformation. Design criteria are derived for this system model, and codes are proposed that achieve full diversity. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity compared to codes based on some earlier system models.

Patent
14 Nov 2006
TL;DR: In this paper, a method, medium, and apparatus encoding and decoding an image in order to increase the decoding efficiency by performing binary-arithmetic coding/decoding on a binary value of a syntax element using a probability model having the same syntax element probability value for respective context index information of each of at least two image components.
Abstract: A method, medium, and apparatus encoding and/or decoding an image in order to increase encoding and decoding efficiency by performing binary-arithmetic coding/decoding on a binary value of a syntax element using a probability model having the same syntax element probability value for respective context index information of each of at least two image components.

Proceedings ArticleDOI
21 May 2006
TL;DR: In this paper, an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1-R-e) of errors is presented.
Abstract: For every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 - R - ε) of errors. These codes achieve the "capacity" for decoding from adversarial errors, i.e., they achieve the optimal trade-off between rate and error-correction radius. At least theoretically, this meets one of the central challenges in coding theory. Prior to this work, explicit codes achieving capacity were not known for any rate R. In fact, our codes are the first to beat the error-correction radius of 1 - √R, achieved for Reed-Solomon (RS) codes in [9], for all rates R. (For low rates R, the recent Parvaresh-Vardy codes had already improved upon the 1 - √R radius.)
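
The trade-offs mentioned above can be tabulated directly. The small helper below compares, for a few rates R, the unique-decoding radius (1-R)/2, the Guruswami-Sudan radius 1-√R for RS codes, and the list-decoding capacity 1-R that these codes approach:

    import math

    print(f"{'R':>5} {'(1-R)/2':>9} {'1-sqrt(R)':>10} {'1-R':>6}")
    for r in (0.1, 0.25, 0.5, 0.75, 0.9):
        print(f"{r:5.2f} {(1 - r) / 2:9.3f} {1 - math.sqrt(r):10.3f} {1 - r:6.2f}")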

Journal ArticleDOI
TL;DR: This paper approaches the soft-decision KV algorithm from the point of view of a communications systems designer who wants to know what benefits the algorithm can give, and how the extra complexity introduced by soft decoding can be managed at the systems level.
Abstract: Efficient soft-decision decoding of Reed–Solomon codes is made possible by the Koetter–Vardy (KV) algorithm, which consists of a front-end to the interpolation-based Guruswami–Sudan list decoding algorithm. This paper approaches the soft-decision KV algorithm from the point of view of a communications systems designer who wants to know what benefits the algorithm can give, and how the extra complexity introduced by soft decoding can be managed at the systems level. We show how to reduce the computational complexity and memory requirements of the soft-decision front-end. Applications to wireless communications over Rayleigh fading channels and magnetic recording channels are proposed. For a high-rate (255,239) Reed–Solomon code, 2–3 dB of soft-decision gain is possible over a Rayleigh fading channel using 16-quadrature amplitude modulation. For shorter codes and at lower rates, the gain can be as large as 9 dB. To lower the complexity of decoding on the systems level, the redecoding architecture is explored, which uses only the appropriate amount of complexity to decode each packet. An error-detection criterion based on the properties of the KV decoder is proposed for the redecoding architecture. Queuing analysis verifies the practicality of the redecoding architecture by showing that only a modestly sized RAM buffer is required.
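
In its simplest asymptotic form, the KV front-end reduces to the proportional multiplicity rule m_ij = floor(lambda * pi_ij) applied to the matrix of symbol posteriors; the paper's complexity reductions prune and quantize this step further. A hedged sketch with a toy random reliability matrix:

    import numpy as np

    def kv_multiplicities(pi, lam):
        """Proportional multiplicity assignment; total cost grows with lam."""
        m = np.floor(lam * pi).astype(int)
        cost = int((m * (m + 1) // 2).sum())   # interpolation cost, sum of C(m+1, 2)
        return m, cost

    rng = np.random.default_rng(0)
    pi = rng.dirichlet(np.ones(4), size=6).T   # toy 4-ary channel, 6 received symbols
    m, cost = kv_multiplicities(pi, lam=8)
    print(m, cost, sep="\n")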

Journal ArticleDOI
TL;DR: It is shown that selecting easily constructable "expander"-style low-density parity-check codes (LDPCs) as syndrome-formers admits a positive error exponent, and therefore provably good performance, for the Slepian-Wolf problem.
Abstract: This paper discusses the Slepian-Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple "source-splitting" strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian-Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the "min-sum" iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructable "expander"-style low-density parity-check codes (LDPCs) as syndrome-formers admits a positive error exponent and therefore provably good performance.

Journal ArticleDOI
TL;DR: This paper proves the following two results that expose some combinatorial limitations to list decoding Reed-Solomon codes.
Abstract: In this paper, we prove the following two results that expose some combinatorial limitations to list decoding Reed-Solomon codes. 1) Given n distinct elements α1,...,αn from a field F, and n subsets S1,...,Sn of F, each of size at most l, the list decoding algorithm of Guruswami and Sudan can in polynomial time output all polynomials p of degree at most k that satisfy p(αi) ∈ Si for every i, as long as the list size l is suitably bounded. 2) An explicit received word is exhibited that has many Reed-Solomon codewords with barely nontrivial agreement (agreement of k is trivial to achieve); such a bound was known earlier only for a nonexplicit center. Finding explicit bad list decoding configurations is of significant interest; for example, the best known rate versus distance tradeoff, due to Xing, is based on a bad list decoding configuration for algebraic-geometric codes, which is unfortunately not explicitly known.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: An efficient algorithm is presented, based on the theory of Gröbner bases of modules, that finds the minimal polynomial of the ideal of interpolating polynomials with respect to a certain monomial order.
Abstract: A central problem of algebraic soft-decision decoding of Reed-Solomon codes is to find the minimal polynomial of the ideal of interpolating polynomials with respect to a certain monomial order. An efficient algorithm that solves the problem is presented, based on the theory of Gröbner bases of modules.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work presents iterative soft-in soft-out (SISO) decoding algorithms in a common framework and presents a related algorithm - random redundant iterative decoding - that is both practically realizable and applicable to arbitrary linear block codes.
Abstract: A number of authors have recently considered iterative soft-in soft-out (SISO) decoding algorithms for classical linear block codes that utilize redundant Tanner graphs. Jiang and Narayanan presented a practically realizable algorithm that applies only to cyclic codes, while Kothiyal et al. presented an algorithm that, while applicable to arbitrary linear block codes, does not imply a low-complexity implementation. This work first presents the aforementioned algorithms in a common framework and then presents a related algorithm - random redundant iterative decoding - that is both practically realizable and applicable to arbitrary linear block codes. Simulation results illustrate the successful application of the random redundant iterative decoding algorithm to the extended binary Golay code. Additionally, the proposed algorithm is shown to outperform Jiang and Narayanan's algorithm for a number of Bose-Chaudhuri-Hocquenghem (BCH) codes.

Patent
31 Mar 2006
TL;DR: In this article, a low-complexity MIMO detector that combines sphere decoding and m-algorithm approaches, while accounting for the effect of channel condition on the decoding operation, is provided.
Abstract: A method and system for detecting and decoding multiple signals. A low-complexity MIMO detector that combines sphere decoding and m-algorithm approaches, while accounting for the effect of channel condition on the decoding operation, is provided. Taking into account the channel condition effectively controls the size of the search tree, and consequently the search complexity, in an adaptive manner. The channel condition is exploited in the construction of the tree to manage the number of branches in the tree and to avoid undesirable growth.
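
The combination described can be sketched as a breadth-first tree search on the QR-decomposed channel that keeps only the M best partial candidates per level; the patent additionally adapts the tree size to the channel condition, which this toy BPSK sketch omits. The channel, symbols, and noise level below are illustrative assumptions.

    import numpy as np

    def m_algorithm_detect(H, y, M=4, symbols=(-1.0, 1.0)):
        n = H.shape[1]
        Q, R = np.linalg.qr(H)
        z = Q.T @ y
        # each candidate: (accumulated metric, symbols for antennas level..n-1)
        cands = [(0.0, [])]
        for level in range(n - 1, -1, -1):
            nxt = []
            for metric, part in cands:
                for s in symbols:
                    x_tail = np.array([s] + part)
                    resid = z[level] - R[level, level:] @ x_tail
                    nxt.append((metric + resid ** 2, [s] + part))
            cands = sorted(nxt, key=lambda c: c[0])[:M]   # keep the M best
        return np.array(cands[0][1])

    rng = np.random.default_rng(2)
    H = rng.standard_normal((4, 4))
    x = rng.choice([-1.0, 1.0], size=4)
    y = H @ x + 0.05 * rng.standard_normal(4)
    print(np.array_equal(m_algorithm_detect(H, y), x))    # True w.h.p.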

Journal ArticleDOI
TL;DR: A recursive decoding algorithm is designed and its decoding threshold is derived for long RM codes; it corrects most error patterns of Euclidean weight of order √(n/ln n), instead of the threshold √d/2 of bounded-distance decoding.
Abstract: Soft-decision decoding is considered for general Reed-Muller (RM) codes of length n and distance d used over a memoryless channel. A recursive decoding algorithm is designed and its decoding threshold is derived for long RM codes. The algorithm has complexity of order n ln n and corrects most error patterns of Euclidean weight of order √(n/ln n), instead of the threshold √d/2 of bounded-distance decoding. Also, for long RM codes of fixed rate R, the new algorithm increases the decoding threshold of its hard-decision counterpart by a factor of 4/π.

Patent
01 Feb 2006
TL;DR: In this article, a decoding apparatus and method is described by which the decoder error occurrence probability is suppressed and a high decoding performance can be achieved, where the decoding apparatus diagonalizes a parity check matrix, updates LLR values and then adds a decoded word obtained by the decoding to a decoding word list.
Abstract: A decoding apparatus and method is disclosed by which the decoder error occurrence probability is suppressed and a high decoding performance can be achieved An ABP decoding apparatus diagonalizes a parity check matrix, updates LLR values, decodes the LLR values and then adds a decoded word obtained by the decoding to a decoded word list The ABP decoding apparatus repeats the decoding procedure as inner repetitive decoding by a predetermined number of times Further, as the ABP decoding apparatus successively changes initial values for priority ranks of the LLR values, it repeats the inner repetitive decoding as outer repetitive decoding by a predetermined number of times Then, the ABP decoding apparatus selects an optimum one of the decoded words from within a decoded word list obtained by the repeated inner repetitive decoding The invention is applied to an error correction system

Patent
Kai Yang1, Lin Wang1, Yi Lin1, Wei Yu1
12 Oct 2006
TL;DR: In this paper, a CABAC decoding system with at least a decoding unit group is proposed, where each decoding unit groups includes N decoding units connected with each other, and each group receives parameter information for decoding bins and bit streams to be decoded.
Abstract: A CABAC decoding system includes at least one decoding unit group. Each decoding unit group includes N decoding units connected with each other. The Mth decoding unit receives parameter information for decoding bins and bit streams to be decoded, decodes the bins of the bit streams to be decoded, obtains the decoding result of the current decoding unit bin, and sends the updated parameter information to the (M+1)th decoding unit and an output unit. The CABAC decoding system achieves a high decoding rate while keeping a reasonable cost in hardware resources, and thereby provides an efficient and reasonable decoding solution.

Journal ArticleDOI
TL;DR: A new upper bound on the block error probability after decoding over the erasure channel is derived, and it is shown that the algebraic immunity of a random balanced m-variable Boolean function is of order (m/2)(1 - o(1)) with probability tending to 1 as m goes to infinity.
Abstract: Motivated by cryptographic applications, we derive a new upper bound on the block error probability after decoding over the erasure channel. The bound works for all linear codes and is in terms of the generalized Hamming weights. It turns out to be quite useful for Reed-Muller codes, for which all the generalized Hamming weights are known whereas the full weight distribution is only partially known. For these codes, the error probability is related to the cryptographic notion of algebraic immunity. We use our bound to show that the algebraic immunity of a random balanced m-variable Boolean function is of order (m/2)(1 - o(1)) with probability tending to 1 as m goes to infinity.
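
As a numerical cross-check for erasure-channel bounds of this kind, note that ML decoding over the erasure channel fails exactly when the generator submatrix on the unerased positions has rank below k. A Monte Carlo sketch under that characterization, using the (7,4) Hamming code as an assumed example:

    import random

    G = [[1, 0, 0, 0, 0, 1, 1],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 1, 1, 0],
         [0, 0, 0, 1, 1, 1, 1]]

    def gf2_rank(vecs):
        """Rank over GF(2); vectors given as int bitmasks."""
        basis = {}                      # leading bit position -> vector
        for v in vecs:
            while v:
                lead = v.bit_length() - 1
                if lead not in basis:
                    basis[lead] = v
                    break
                v ^= basis[lead]
        return len(basis)

    def erasure_wer(eps, trials=20000, seed=3):
        random.seed(seed)
        k, n, fails = 4, 7, 0
        for _ in range(trials):
            keep = [j for j in range(n) if random.random() >= eps]
            cols = [sum(G[i][j] << i for i in range(k)) for j in keep]
            if gf2_rank(cols) < k:      # kept columns rank-deficient: failure
                fails += 1
        return fails / trials

    for eps in (0.1, 0.3, 0.5):
        print(eps, erasure_wer(eps))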

Proceedings ArticleDOI
30 Aug 2006
TL;DR: The principle of "layered decoding" is extended to codes not specifically designed for this practice, so as to benefit from the increased convergence speed; an average twofold speedup in convergence is shown.
Abstract: The principle of "layered decoding" is extended to codes not specifically designed for this practice, so as to benefit from the increased convergence speed. Two different strategies are considered to solve the problem, and the related architectures are presented: one, more straightforward, based on using for the soft output the last value originated in a layer; the other based on computing the variation (or delta) of the soft-output metrics to allow concurrent updates. Then, in an architecture-first approach, the performance is assessed for several layer widths, which highlights the robustness of the delta mechanism to high parallelisation factors. As in exact layered decoding, an average twofold boost in convergence speed is shown.
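
The first strategy amounts to the usual layered schedule: each layer (parity check) reads the freshest soft outputs and writes its update back immediately, which is what yields the roughly twofold convergence speedup over flooding. A toy layered min-sum sketch under that schedule (the delta-based concurrent variant and the architectural details are not modeled, and the example code and LLRs are assumptions):

    import numpy as np

    def layered_min_sum(H, llr_ch, iters=10):
        m, n = H.shape
        L = llr_ch.astype(float).copy()        # current soft outputs
        R = np.zeros((m, n))                   # stored check-to-variable messages
        for _ in range(iters):
            for row in range(m):               # one layer per parity check
                idx = np.nonzero(H[row])[0]
                Q = L[idx] - R[row, idx]       # extrinsic variable-to-check values
                for t, j in enumerate(idx):    # min-sum check update
                    others = np.delete(Q, t)
                    R[row, j] = np.prod(np.sign(others)) * np.abs(others).min()
                L[idx] = Q + R[row, idx]       # immediate soft-output update
        return (L < 0).astype(int)

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    llr = np.full(7, 4.0); llr[2] = -1.0       # all-zero word, one corrupted bit
    print(layered_min_sum(H, llr, iters=5))    # -> all zeros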

Journal ArticleDOI
TL;DR: In this letter, a new method for decoding turbo-like codes is proposed to simplify the hardware implementation of the Log-MAP algorithm, in which the multivariable Jacobian logarithm of the Log-MAP algorithm is built by concatenating recursive 1D Jacobian logarithm units.
Abstract: In this letter, a new method for decoding turbo-like codes is proposed to simplify the hardware implementation of the Log-MAP algorithm. In our method, the multivariable Jacobian logarithm in the Log-MAP algorithm is concatenated from recursive 1D Jacobian logarithm units. Two new approximations of the Log-MAP algorithm based on these 1D units are then presented, which have good approximation accuracy and are simple to implement in hardware. We further suggest a novel decoding scheme whose complexity is near that of the Max-Log-MAP algorithm while its performance is close to that of the Log-MAP algorithm.
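
The 1D unit in question is the Jacobian logarithm max*(a, b) = max(a, b) + log(1 + e^(-|a-b|)), and the multivariable version is obtained exactly by chaining it. A small Python sketch of the exact unit, the Max-Log simplification that drops the correction term, and the recursive chaining:

    import math
    from functools import reduce

    def max_star(a, b):
        """Exact 1D Jacobian logarithm: log(e^a + e^b)."""
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def max_log(a, b):
        """Max-Log-MAP approximation: correction term dropped."""
        return max(a, b)

    def max_star_n(values, unit=max_star):
        """Multivariable max* as a chain of 1D units."""
        return reduce(unit, values)

    vals = [0.3, -1.2, 2.5, 0.9]
    exact = math.log(sum(math.exp(v) for v in vals))
    print(max_star_n(vals), exact, max_star_n(vals, max_log))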

Proceedings ArticleDOI
09 Jul 2006
TL;DR: The application of such adaptive methods can significantly reduce the complexity of the LP decoding algorithm, which, in the standard formulation, is exponential in the maximum row weight of the parity-check matrix.
Abstract: The ability of linear programming (LP) decoding to detect failures, and its potential for improvement by the addition of new constraints, motivates the use of an adaptive approach in selecting the constraints for the underlying LP problem. In this paper, we show that the application of such adaptive methods can significantly reduce the complexity of the LP decoding algorithm, which, in the standard formulation, is exponential in the maximum row weight of the parity-check matrix. We further show that adaptively adding new constraints, e.g. by combining parity checks, can provide large gains in LP decoder performance.
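
A hedged sketch of this adaptive idea in the standard LP-decoding formulation (Feldman-style parity inequalities, solved here with scipy, an assumed dependency): start from box constraints only, then repeatedly add the most violated parity inequality and re-solve. The parity-check matrix and LLRs in the demo are toy assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def adaptive_lp_decode(H, gamma, max_rounds=50):
        n = H.shape[1]
        A_ub, b_ub = [], []
        for _ in range(max_rounds):
            res = linprog(gamma,
                          A_ub=np.array(A_ub) if A_ub else None,
                          b_ub=np.array(b_ub) if b_ub else None,
                          bounds=[(0, 1)] * n, method="highs")
            x = res.x
            cut_found = False
            for row in H:
                N = np.nonzero(row)[0]
                S = set(N[x[N] > 0.5])
                if len(S) % 2 == 0:            # force |S| odd: flip bit nearest 1/2
                    j = N[np.argmin(np.abs(x[N] - 0.5))]
                    S.symmetric_difference_update({j})
                lhs = sum(x[i] for i in S) - sum(x[i] for i in N if i not in S)
                if lhs > len(S) - 1 + 1e-7:    # violated parity inequality: add cut
                    a = np.zeros(n)
                    for i in N:
                        a[i] = 1.0 if i in S else -1.0
                    A_ub.append(a); b_ub.append(len(S) - 1)
                    cut_found = True
            if not cut_found:
                return x                       # no cuts left; integral x is ML
        return x

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    gamma = np.array([4.0, 4.0, -1.0, 4.0, 4.0, 4.0, 4.0])  # toy channel LLRs
    print(adaptive_lp_decode(H, gamma).round(3))            # -> all zeros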

Proceedings ArticleDOI
09 Jul 2006
TL;DR: A new effective algebraic decoding method is suggested, able to decode a single low rate Reed-Solomon code beyond half the minimum distance, based on multi-sequence shift-register synthesis, and is able to correct errors within a correcting radius similar to the Sudan algorithm.
Abstract: It is known that interleaved Reed-Solomon codes can be decoded algebraically beyond half the minimum distance using collaborative decoding strategies. Based on the same principles, we suggest a new effective algebraic decoding method which is able to decode a single low-rate Reed-Solomon code beyond half the minimum distance. This new algorithm is based on multi-sequence shift-register synthesis and is able to correct errors within a correcting radius similar to that of the Sudan algorithm. In contrast to the Sudan algorithm, which may return a list of codewords, our algorithm yields a decoding failure if a unique solution does not exist. However, the probability of such a failure is very small.

Journal ArticleDOI
TL;DR: The increased interference-rejection capability that can be obtained from convolutional coding with Viterbi decoding, Reed-Solomon coding with errors-and-erasures decoding, and block product coding with iterative decoding is explored.
Abstract: High-rate direct-sequence spread spectrum is a modulation technique in which most or all of the spreading is provided by nonbinary data modulation. For applications to mobile ad hoc wireless networks, the limited processing gain of high-rate direct-sequence spread spectrum gives only modest protection against multiple-access or multipath interference, which limits the applicability of the modulation technique to fairly benign channels. In this paper, we explore the increased interference-rejection capability that can be obtained from convolutional coding with Viterbi decoding, Reed-Solomon coding with errors-and-erasures decoding, and block product coding with iterative decoding. For channels with multiple-access or multipath interference, performance results are given for several soft-decision decoding metrics, the benefits of adaptive-rate coding are illustrated, and the accuracy and utility of the Gaussian approximation are described. We also show how to use the bit-error probability for a system without error-control coding to determine which modulation method will give the best packet-error probability in a system with error-control coding.

Proceedings ArticleDOI
Chung-Hyo Kim, In-Cheol Park1
21 May 2006
TL;DR: A CABAC decoder based on the proposed prediction scheme improves the decoding performance by 24% compared to conventional decoders.
Abstract: Context-based adaptive binary arithmetic coding (CABAC) is the major entropy-coding algorithm employed in H.264/AVC. Although the performance gain of H.264/AVC results largely from CABAC, it is difficult to achieve a fast decoder because the decoding algorithm is basically sequential. In this paper, a prediction scheme is proposed to enhance overall decoding performance by decoding two binary symbols at a time. A CABAC decoder based on the proposed prediction scheme improves the decoding performance by 24% compared to conventional decoders.

Journal ArticleDOI
TL;DR: The optimum gear-shift decoder is proved to have a decoding threshold equal to or better than the best decoding threshold among those of the available algorithms.
Abstract: This paper considers a class of iterative message-passing decoders for low-density parity-check codes in which the decoder can choose its decoding rule from a set of decoding algorithms at each iteration. Each available decoding algorithm may have a different per-iteration computation time and performance. With an appropriate choice of algorithm at each iteration, overall decoding latency can be reduced significantly, compared with standard decoding methods. Such a decoder is called a gear-shift decoder because it changes its decoding rule (shifts gears) in order to guarantee both convergence and maximum decoding speed (minimum decoding latency). Using extrinsic information transfer charts, the problem of finding the optimum (minimum decoding latency) gear-shift decoder is formulated as a computationally tractable dynamic program. The optimum gear-shift decoder is proved to have a decoding threshold equal to or better than the best decoding threshold among those of the available algorithms. In addition to speeding up software decoder implementations, gear-shift decoding can be applied to optimize a pipelined hardware decoder, minimizing hardware cost for a given decoder throughput.
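
The dynamic program can be miniaturized as follows: the state is a discretized extrinsic mutual information, each available algorithm advances the state through its own EXIT-style transfer function at its own per-iteration cost, and the DP finds the cheapest schedule reaching the target. The transfer functions and costs below are hypothetical placeholders, not the paper's:

    import math

    GRID = 101                                   # discretized mutual information
    target = 0.99

    algorithms = {                               # name: (per-iteration cost, transfer)
        "min-sum":  (1.0, lambda i: min(1.0, 0.12 + 1.05 * i)),
        "sum-prod": (3.0, lambda i: min(1.0, 0.25 + 1.10 * i)),
    }

    # best[s] = (min cost to reach state s from state 0, last algorithm used)
    best = [(math.inf, None)] * GRID
    best[0] = (0.0, None)
    for s in range(GRID):                        # forward DP over the DAG of states
        cost, _ = best[s]
        if math.isinf(cost):
            continue
        i = s / (GRID - 1)
        for name, (c, f) in algorithms.items():
            s2 = min(GRID - 1, int(round(f(i) * (GRID - 1))))
            if s2 > s and cost + c < best[s2][0]:
                best[s2] = (cost + c, name)

    goal = int(math.ceil(target * (GRID - 1)))
    reachable = [s for s in range(goal, GRID) if not math.isinf(best[s][0])]
    print(min(best[s][0] for s in reachable))    # minimum decoding "latency"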