
Showing papers on "List decoding published in 2005"


Book
01 Jan 2005
TL;DR: This work aims to provide a context for Error Correcting Coding and to inspire a new generation of coders to tackle the challenge of Space-Time Coding.
Abstract: Preface. List of Program Files. List of Laboratory Exercises. List of Algorithms. List of Figures. List of Tables. List of Boxes. PART I: INTRODUCTION AND FOUNDATIONS. 1. A Context for Error Correcting Coding. PART II: BLOCK CODES. 2. Groups and Vector Spaces. 3. Linear Block Codes. 4. Cyclic Codes, Rings, and Polynomials. 5. Rudiments of Number Theory and Algebra. 6. BCH and Reed-Solomon Codes: Designer Cyclic Codes. 7. Alternate Decoding Algorithms for Reed-Solomon Codes. 8. Other Important Block Codes. 9. Bounds on Codes. 10. Bursty Channels, Interleavers, and Concatenation. 11. Soft-Decision Decoding Algorithms. PART III: CODES ON GRAPHS. 12. Convolutional Codes. 13. Trellis Coded Modulation. PART IV: ITERATIVELY DECODED CODES. 14. Turbo Codes. 15. Low-Density Parity-Check Codes. 16. Decoding Algorithms on Graphs. PART V: SPACE-TIME CODING. 17. Fading Channels and Space-Time Coding. References. Index.

1,055 citations


Journal ArticleDOI
TL;DR: The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
Abstract: Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
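
As a quick illustration of the check-node simplifications surveyed above, here is a minimal NumPy sketch contrasting the exact tanh-rule update with the min-sum approximation and its normalized/offset variants; the function names and the parameter values (alpha, beta) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def check_update_exact(incoming_llrs):
    """Exact LLR-BP check-node update (tanh rule) applied to the LLRs
    arriving on the other edges of one check node."""
    llrs = np.clip(np.asarray(incoming_llrs, dtype=float), -20.0, 20.0)
    return 2.0 * np.arctanh(np.prod(np.tanh(llrs / 2.0)))

def check_update_min_sum(incoming_llrs, alpha=1.0, beta=0.0):
    """Min-sum approximation of the same update.

    alpha < 1 (with beta = 0) gives the normalized min-sum variant;
    beta > 0 (with alpha = 1) gives the offset min-sum variant.
    The values used in the calls below are illustrative defaults,
    not the density-evolution-optimized parameters from the paper.
    """
    llrs = np.asarray(incoming_llrs, dtype=float)
    sign = np.prod(np.sign(llrs))
    magnitude = np.min(np.abs(llrs))
    return sign * max(alpha * magnitude - beta, 0.0)

print(check_update_exact([1.2, -0.7, 2.5]))               # exact tanh rule
print(check_update_min_sum([1.2, -0.7, 2.5], alpha=0.8))   # normalized min-sum
print(check_update_min_sum([1.2, -0.7, 2.5], beta=0.15))   # offset min-sum
```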

989 citations


Posted Content
TL;DR: In this article, the authors introduce the concept of graph-cover decoding, which is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding.
Abstract: The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding the so-called fundamental polytope. Therefore, we give some characterizations of this polytope and explain its relation to earlier concepts that were introduced to understand the behavior of message-passing iterative decoding for finite-length codes.
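
For orientation, linear programming decoding (to which graph-cover decoding is shown to be essentially equivalent) can be written as the following relaxation; the notation is a standard one and is only meant to make the role of the fundamental polytope concrete, not to reproduce the paper's formulation.

\[
\hat{\mathbf{f}} \;=\; \arg\min_{\mathbf{f}\in\mathcal{P}(H)} \sum_{i=1}^{n} \gamma_i f_i,
\qquad
\gamma_i = \log\frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)},
\]

where the fundamental polytope \(\mathcal{P}(H) = \bigcap_{j} \mathrm{conv}(C_j)\) is the intersection of the convex hulls of the local codes \(C_j\) defined by the individual rows of the parity-check matrix \(H\); its vertices include all codewords together with fractional "pseudocodewords."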

260 citations


Proceedings ArticleDOI
31 Oct 2005
TL;DR: A new construction of rank codes is presented, which defines new codes and includes the known codes, and it is argued that the new codes are different from subcodes of the known rank codes.

Abstract: The only known construction of error-correcting codes in rank metric was proposed in 1985. These were codes with a fast decoding algorithm. We present a new construction of rank codes, which defines new codes and includes the known codes. This construction generalizes that of E.M. Gabidulin (1985). Though the new codes seem to be very similar to subcodes of the known rank codes, we argue that these are different codes. A fast decoding algorithm is described.

155 citations


Posted Content
TL;DR: The authors present error-correcting codes, called folded Reed-Solomon codes, that achieve the information-theoretically best possible trade-off between rate and error-correction radius.
Abstract: We present error-correcting codes that achieve the information-theoretically best possible trade-off between the rate and error-correction radius. Specifically, for every $0 < R < 1$ and $\eps > 0$, we present an explicit construction of error-correcting codes of rate $R$ that can be list decoded in polynomial time up to a fraction $(1-R-\eps)$ of {\em worst-case} errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are {\em folded Reed-Solomon codes}, which are in fact {\em exactly} Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in {\em phased bursts}. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on $\eps$) using ideas concerning ``list recovery'' and expander-based codes from \cite{GI-focs01,GI-ieeejl}. Concatenating the folded RS codes with suitable inner codes also gives us polynomial-time constructible binary codes that can be efficiently list decoded up to the Zyablov bound, i.e., up to twice the radius achieved by the standard GMD decoding of concatenated codes.
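
As a rough sketch of the bundling step described above: folding an RS codeword with folding parameter m simply groups m consecutive symbols into one symbol over the larger alphabet. The function below is an illustrative toy, not the paper's construction in full (which also prescribes the ordering of the evaluation points).

```python
def fold_codeword(codeword, m):
    """Fold a codeword by bundling m consecutive symbols into one symbol
    of the folded code, viewed over the larger alphabet GF(q)^m.

    The codeword length is assumed to be divisible by m; the data layout
    here is purely illustrative.
    """
    assert len(codeword) % m == 0
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

# Example: folding a length-8 codeword with m = 4 gives a length-2 codeword
# whose symbols are 4-tuples over the original field.
print(fold_codeword([3, 1, 4, 1, 5, 9, 2, 6], m=4))
# [(3, 1, 4, 1), (5, 9, 2, 6)]
```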

148 citations


Journal ArticleDOI
TL;DR: In this paper, turbo-based coding schemes for relay systems together with iterative decoding algorithms are designed and it is shown that a remarkable advantage can be achieved over the direct and multihop transmission alternatives.
Abstract: In this paper, we design turbo-based coding schemes for relay systems together with iterative decoding algorithms. In the proposed schemes, the source node sends coded information bits to both the relay and the destination nodes, while the relay simultaneously forwards its estimate for the previous coded block to the destination after decoding and re-encoding. The destination observes a superposition of the codewords and uses an iterative decoding algorithm to estimate the transmitted messages. Different from the block-by-block decoding techniques used in the literature, this decoding scheme operates over all the transmitted blocks jointly. Various encoding and decoding approaches are proposed for both single-input single-output and multi-input multi-output systems over several different channel models. Capacity bounds and information-rate bounds with binary inputs are also provided, and it is shown that the performance of the proposed practical scheme is typically about 1.0-1.5 dB away from the theoretical limits, and a remarkable advantage can be achieved over the direct and multihop transmission alternatives.

146 citations


Patent
28 Apr 2005
TL;DR: A method and apparatus for decoding a coded data stream of bits using an inner decoder, a deinterleaver, and an outer decoder; the decoding is terminated and a decoded word is output if the syndromes of the corrected word of the first decoding are all zeros.

Abstract: A method and apparatus for decoding a coded data stream of bits using an inner decoder, a deinterleaver, and an outer decoder. The outer decoder first decodes by error-correction decoding for r errors per word. The decoding is terminated and a decoded word is output if the syndromes of the corrected word of the first decoding are all zeros. If the syndromes of the corrected word of the first decoding are not all zeros, a second decoding is performed by error-and-erasure decoding with the number of errors reduced by one and the number of erasures increased to two. The decoding is terminated and a decoded word is output if the syndromes of the corrected word of the second decoding are all zeros. If the syndromes of the corrected word of the second decoding are not all zeros, the second decoding by error-and-erasure decoding is repeated, with the number of errors reduced by one and the number of erasures increased by two for each iteration of the second decoding.
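
The iteration described in this claim can be sketched as the following control loop; decode_with and syndromes_all_zero are hypothetical stand-ins for the outer decoder's error-and-erasure decoder and its syndrome check, named here only for illustration.

```python
def outer_decode(received_word, r, decode_with, syndromes_all_zero):
    """Sketch of the claimed strategy: try error-only decoding for r errors,
    then repeatedly trade one correctable error for two erasures until the
    syndromes of the corrected word are all zero (or the budget runs out)."""
    errors, erasures = r, 0
    while errors >= 0:
        candidate = decode_with(received_word, errors, erasures)
        if candidate is not None and syndromes_all_zero(candidate):
            return candidate          # terminate and output the decoded word
        errors -= 1                   # one fewer error ...
        erasures += 2                 # ... two more erasures each iteration
    return None                       # decoding failure
```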

141 citations


Journal ArticleDOI
TL;DR: An explicit construction of linear-time encodable and decodable codes of rate r which can correct a fraction (1-r-ε)/2 of errors over an alphabet of constant size depending only on ε, for every 0 < r < 1 and ε > 0.

Abstract: We present an explicit construction of linear-time encodable and decodable codes of rate r which can correct a fraction (1-r-ε)/2 of errors over an alphabet of constant size depending only on ε, for every 0 < r < 1 and arbitrarily small ε > 0. The error-correction performance of these codes is optimal as seen by the Singleton bound (these are "near-MDS" codes). Such near-MDS linear-time codes were known for decoding from erasures; our construction generalizes this to handle errors as well. Concatenating these codes with good, constant-sized binary codes gives a construction of linear-time binary codes which meet the Zyablov bound, and also the more general Blokh-Zyablov bound (by resorting to multilevel concatenation). Our work also yields linear-time encodable/decodable codes which match Forney's error exponent for concatenated codes for communication over the binary symmetric channel. The encoding/decoding complexity was quadratic in Forney's result, and Forney's bound has remained the best constructive error exponent for almost 40 years now. In summary, our results match the performance of the previously known explicit constructions of codes that had polynomial-time encoding and decoding, but in addition have linear-time encoding and decoding algorithms.

134 citations


Book ChapterDOI
14 Mar 2005
TL;DR: In this paper, the decoding of Gabidulin codes is seen as an instance of reconstruction of linearized polynomials, and two efficient decoding algorithms inspired from the Welch-Berlekamp decoding algorithm for Reed-Solomon codes are presented.
Abstract: In this paper, we present a new approach to the decoding of Gabidulin codes. We show that, in the same way as decoding Reed-Solomon codes is an instance of the problem called polynomial reconstruction, the decoding of Gabidulin codes can be seen as an instance of the problem of reconstruction of linearized polynomials. This approach leads to the design of two efficient decoding algorithms inspired by the Welch–Berlekamp decoding algorithm for Reed–Solomon codes. The first algorithm has the same complexity as the existing ones, that is, cubic in the number of errors, whereas the second has quadratic complexity, namely 2.5n^2 – 1.5k^2.
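
For readers unfamiliar with the objects involved: a linearized (q-)polynomial over GF(q^m) has the form below, and the reconstruction problem mentioned above asks for such polynomials interpolating the received values, in analogy with ordinary polynomial reconstruction; the notation is ours, not the paper's.

\[
f(x) \;=\; \sum_{i=0}^{t} f_i\, x^{q^{i}}, \qquad f_i \in \mathrm{GF}(q^m),
\]

which satisfies \(f(\alpha a + \beta b) = \alpha f(a) + \beta f(b)\) for all \(\alpha, \beta \in \mathrm{GF}(q)\), i.e. it acts \(\mathrm{GF}(q)\)-linearly on \(\mathrm{GF}(q^m)\); this linearity is what makes such polynomials the natural analogue of ordinary polynomials for codes in the rank metric.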

133 citations


Journal ArticleDOI
TL;DR: This paper gives another fast algorithm for the soft decoding of Reed-Solomon codes, different from the procedure proposed by Feng, which works in time (w/r)^{O(1)} n log^2 n, where r is the rate of the code and w is the maximal weight assigned to a vertical line.

Abstract: This paper generalizes the classical Knuth–Schönhage algorithm computing the greatest common divisor (gcd) of two polynomials for solving arbitrary linear Diophantine systems over polynomials in time quasi-linear in the maximal degree. As an application, the following weighted curve-fitting problem is considered: given a set of points in the plane, find an algebraic curve (satisfying certain degree conditions) that goes through each point the prescribed number of times. The main motivation for this problem comes from coding theory; namely, it is ultimately related to the list decoding of Reed-Solomon codes. This paper presents a new fast algorithm for the weighted curve-fitting problem, based on the explicit construction of a Gröbner basis. This gives another fast algorithm for the soft decoding of Reed-Solomon codes, different from the procedure proposed by Feng, which works in time (w/r)^{O(1)} n log^2 n, where r is the rate of the code and w is the maximal weight assigned to a vertical line.
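
To make the weighted curve-fitting problem concrete in its list-decoding incarnation: one seeks a nonzero bivariate polynomial that passes through each received point with a prescribed multiplicity, subject to a weighted-degree bound, roughly as follows (our notation, matching the usual Guruswami-Sudan setup rather than the paper's).

\[
\text{find } Q(x,y) \neq 0 \ \text{such that } Q \text{ vanishes at } (x_i, y_i) \text{ with multiplicity } m_i,\ i = 1,\dots,n, \quad \text{and} \quad \deg_{1,k-1} Q < D,
\]

where \(\deg_{1,k-1}\) denotes the \((1,k-1)\)-weighted degree and \(D\) is chosen just large enough that a nonzero solution is guaranteed to exist; message polynomials \(p(x)\) within the decoding radius then appear among the factors \(y - p(x)\) of \(Q\).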

107 citations


Proceedings ArticleDOI
11 Sep 2005
TL;DR: A unique technique is introduced that improves the performance of BP decoding in the waterfall and error-floor regions by reversing the decoder failures; it provides performance improvements for short-length LDPC codes and pushes or avoids the error-floor behavior of longer codes.

Abstract: In this work, we introduce a unique technique that improves the performance of BP decoding in the waterfall and error-floor regions by reversing the decoder failures. Based on the short cycles existing in the bipartite graph, an importance sampling simulation technique is used to identify the bit and check node combinations that are the dominant sources of error events, called trapping sets. Then, the identified trapping sets are used in the decoding process to avoid the known failures and to converge to the transmitted codeword. With minimal additional decoding complexity, the proposed technique is able to provide performance improvements for short-length LDPC codes and to push or avoid the error-floor behavior of longer codes.

Book
01 Apr 2005
TL;DR: This monograph develops combinatorial bounds, code constructions, and algorithms for list decoding, including a unified framework for list decoding of algebraic codes such as Reed-Solomon and algebraic-geometric codes, together with applications inside and outside coding theory.
Abstract: 1 Introduction.- 2 Preliminaries and Monograph Structure.- Part I: Combinatorial Bounds.- 3 Johnson-Type Bounds and Applications to List Decoding.- 4 Limits to List Decodability.- 5 List Decodability vs. Rate.- Part II: Code Constructions and Algorithms.- 6 Reed-Solomon and Algebraic-Geometric Codes.- 7 A Unified Framework for List Decoding of Algebraic Codes.- 8 List Decoding of Concatenated Codes.- 9 New, Expander-Based List Decodable Codes.- 10 List Decoding from Erasures.- Interlude.- Part III: Applications.- 11 Linear-Time Codes for Unique Decoding.- 12 Sample Applications Outside Coding Theory.- 13 Concluding Remarks.- Appendix A: GMD Decoding of Concatenated Codes.

Journal ArticleDOI
TL;DR: In this article, it was shown that maximum likelihood decoding of Reed-Solomon codes is NP-hard even with unlimited preprocessing, thus strengthening a result of Bruck and Naor.
Abstract: Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: Two improved min-sum algorithms, the normalized and offset min-sum algorithms, are applied to the decoding of irregular LDPC codes, and it is shown that the behavior of the two algorithms in decoding irregular LDPC codes is different from that in decoding regular LDPC codes.

Abstract: In this paper, we apply two improved min-sum algorithms, the normalized and offset min-sum algorithms, to the decoding of irregular LDPC codes. We show that the behavior of the two algorithms in decoding irregular LDPC codes is different from that in decoding regular LDPC codes, due to the existence of bit nodes of degree two. We analyze and explain the difference, and propose approaches to improve the performance of the two algorithms.

Journal ArticleDOI
TL;DR: It is concluded that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
Abstract: Density evolution analysis of low-density parity-check (LDPC) codes in memoryless channels is extended to the Gilbert-Elliott (GE) channel, which is a special case of a large class of channels with hidden Markov memory. In a procedure referred to as estimation decoding, the sum-product algorithm (SPA) is used to perform LDPC decoding jointly with channel-state detection. Density evolution results show (and simulation results confirm) that such decoders provide a significantly enlarged region of successful decoding within the GE parameter space, compared with decoders that do not exploit the channel memory. By considering a variety of ways in which a GE channel may be degraded, it is shown how knowledge of the decoding behavior at a single point of the GE parameter space may be extended to a larger region within the space, thereby mitigating the large complexity needed in using density evolution to explore the parameter space point-by-point. Using the GE channel as a straightforward example, we conclude that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
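
A minimal simulation of the Gilbert-Elliott channel assumed above may help fix ideas: two hidden states with different crossover probabilities, switching according to a Markov chain. All parameter values below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def gilbert_elliott(bits, p_g2b=0.01, p_b2g=0.1, eps_good=0.001, eps_bad=0.2,
                    seed=None):
    """Pass a 0/1 sequence through a Gilbert-Elliott channel.

    p_g2b, p_b2g      : good->bad and bad->good transition probabilities
    eps_good, eps_bad : crossover probability in the good / bad state
    """
    rng = np.random.default_rng(seed)
    out, good = [], True
    for b in bits:
        eps = eps_good if good else eps_bad
        out.append(b ^ int(rng.random() < eps))   # flip the bit with prob. eps
        if rng.random() < (p_g2b if good else p_b2g):
            good = not good                       # hidden Markov state change
        # An estimation decoder would track this hidden state jointly with the
        # LDPC messages; here we only generate the channel output.
    return out

print(gilbert_elliott([0] * 20, seed=1))
```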

Journal ArticleDOI
TL;DR: A novel maximum a posteriori (MAP) estimation approach is employed for error correction of arithmetic codes with a forbidden symbol, which improves the performance in terms of error correction with respect to a separated source and channel coding approach based on convolutional codes.
Abstract: In this paper, a novel maximum a posteriori (MAP) estimation approach is employed for error correction of arithmetic codes with a forbidden symbol. The system is founded on the principle of joint source channel coding, which allows one to unify the arithmetic decoding and error correction tasks into a single process, with superior performance compared to traditional separated techniques. The proposed system improves the performance in terms of error correction with respect to a separated source and channel coding approach based on convolutional codes, with the additional great advantage of allowing complete flexibility in adjusting the coding rate. The proposed MAP decoder is tested in the case of image transmission across the additive white Gaussian noise channel and compared against standard forward error correction techniques in terms of performance and complexity. Both hard and soft decoding are taken into account, and excellent results in terms of packet error rate and decoded image quality are obtained.

Journal ArticleDOI
TL;DR: A new low-complexity algorithm to decode low-density parity-check (LDPC) codes that achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
Abstract: In this paper, we develop a new low-complexity algorithm to decode low-density parity-check (LDPC) codes. The developments are oriented specifically toward low-cost, yet effective, decoding of (high-rate) finite-geometry (FG) LDPC codes. The decoding procedure updates iteratively the hard-decision received vector in search of a valid codeword in the vector space. Only one bit is changed in each iteration, and the bit-selection criterion combines the number of failed checks and the reliability of the received bits. Prior knowledge of the signal amplitude and noise power is not required. An optional mechanism to avoid infinite loops in the search is also proposed. Our studies show that the algorithm achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
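
A rough sketch of the single-bit-update loop described above; the way the failed-check count and the bit reliability are combined into a flipping metric here (a simple weighted difference) is an assumption made for illustration, not the paper's exact criterion.

```python
import numpy as np

def one_bit_flip_decode(H, y_hard, reliability, max_iter=100, w=1.0):
    """Iteratively flip one bit per iteration until all checks are satisfied.

    H           : binary parity-check matrix (numpy array of 0/1)
    y_hard      : hard-decision received vector (0/1)
    reliability : per-bit reliability, e.g. |soft channel output|
    w           : weight trading reliability off against failed checks
                  (illustrative; the paper's combination rule differs)
    """
    x = np.array(y_hard, dtype=int)
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                               # valid codeword found
        failed = H.T.dot(syndrome)                 # failed checks per bit
        metric = failed - w * np.asarray(reliability)
        x[int(np.argmax(metric))] ^= 1             # flip the single worst bit
    return x                                       # give up after max_iter
```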

Proceedings ArticleDOI
31 Oct 2005
TL;DR: This paper presents optimal linear transformations of information symbols for quasi-orthogonal space-time block codes (QOSTBC) with minimum ML decoding complexity and shows that the diversity product is maximized when the mean transmission power is fixed.
Abstract: In this paper, we first present a necessary and sufficient condition on linear transformations for a QOSTBC to possess the minimum ML decoding complexity, i.e., real symbol pair-wise decoding. We then present optimal linear transformations of information symbols for quasi-orthogonal space-time block codes (QOSTBC) with minimum ML decoding complexity. The optimality is in the sense that the diversity product (or product distance) is maximized when the mean transmission power is fixed.

Journal ArticleDOI
TL;DR: This work introduces and analyzes verification-based decoding for low-density parity-check (LDPC) codes, an approach specifically designed to manipulate data in packet-sized units, and describes how to utilize code scrambling to extend results to channels with errors controlled by an oblivious adversary.
Abstract: We introduce and analyze verification-based decoding for low-density parity-check (LDPC) codes, an approach specifically designed to manipulate data in packet-sized units. Verification-based decoding requires only linear time for both encoding and decoding and succeeds with high probability under random errors. We describe how to utilize code scrambling to extend our results to channels with errors controlled by an oblivious adversary.

Journal ArticleDOI
TL;DR: This approach bridges the gap between the error performance achieved by lower-order reliability-based decoding algorithms, which remain suboptimal, and maximum-likelihood decoding, which is too complex to implement for most codes employed in practice.

Abstract: In this letter, an iterative decoding algorithm for linear block codes combining reliability-based decoding with adaptive belief propagation decoding is proposed. At each iteration, the soft output values delivered by the adaptive belief propagation algorithm are used as reliability values to perform reduced-order reliability-based decoding of the code considered. This approach allows one to bridge the gap between the error performance achieved by lower-order reliability-based decoding algorithms, which remain suboptimal, and maximum-likelihood decoding, which is too complex to implement for most codes employed in practice. Simulation results for various linear block codes are given and elaborated.

Journal ArticleDOI
01 Jan 2005
TL;DR: In this paper, an algebraic soft-decision decoder for Reed-Solomon codes based on the Guruswami-Sudan list decoder is proposed.
Abstract: The Koetter-Vardy algorithm is an algebraic soft-decision decoder for Reed-Solomon codes which is based on the Guruswami-Sudan list decoder. There are three main steps: (1) multiplicity calculation, (2) interpolation and (3) root finding. The Koetter-Vardy algorithm seems challenging to implement due to the high cost of interpolation. Motivated by a VLSI implementation viewpoint we propose an improvement to the interpolation algorithm that uses a transformation of the received word to reduce the number of iterations. We show how to reduce the memory requirements and give an efficient VLSI implementation for the Hasse derivative.

Patent
13 Apr 2005
TL;DR: In this paper, a decoding scheme for LDPC (Low-Density Parity-Check) codes using sequential decoding has been proposed, where the nodes are divided according to a parity-check matrix into check nodes for a parity check message and variable nodes for bit messages.
Abstract: Disclosed is a decoding apparatus for LDPC (Low-Density Parity-Check) codes when receiving data encoded with LDPC codes on a channel having continuous output values, and a method thereof. The decoding method for LDPC codes uses sequential decoding and includes the following steps: (a) the nodes are divided according to a parity-check matrix into check nodes for a parity-check message and variable nodes for a bit message; (b) the check nodes are divided into a predetermined number of subsets; (c) the LDPC codeword of each subset for all the check nodes is sequentially decoded; (d) an output message is generated for verifying validity of the decoding result; and (e) steps (b), (c), and (d) are iteratively performed for a predetermined number of iterations.

Journal ArticleDOI
TL;DR: In this article, the problem of bounding below the probability of error under maximum-likelihood decoding of a binary code with a known distance distribution used on a binary-symmetric channel (BSC) was addressed.
Abstract: We address the problem of bounding below the probability of error under maximum-likelihood decoding of a binary code with a known distance distribution used on a binary-symmetric channel (BSC). An improved upper bound is given for the maximum attainable exponent of this probability (the reliability function of the channel). In particular, we prove that the "random coding exponent" is the true value of the channel reliability for code rates R in some interval immediately below the critical rate of the channel. An analogous result is obtained for the Gaussian channel.
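
For reference, the reliability function (error exponent) in question is the standard one below; the definition is textbook material rather than anything specific to this paper.

\[
E(R) \;=\; \limsup_{n \to \infty} \left( -\frac{1}{n} \log P_{e}^{*}(n, R) \right),
\]

where \(P_{e}^{*}(n, R)\) is the smallest maximum-likelihood decoding error probability achievable by any code of length \(n\) and rate \(R\); the result above states that \(E(R)\) equals the random coding exponent \(E_r(R)\) for rates in an interval immediately below the critical rate of the BSC.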

Proceedings ArticleDOI
E. Erez, Meir Feder
31 Oct 2005
TL;DR: This work shows that network codes can be constructed for cyclic networks as long as at least one edge in each cycle has a delay, but it is not required that every edge have a delay.

Abstract: In this work we address the problem of network codes for cyclic networks. We show that network codes can be constructed for cyclic networks as long as at least one edge in each cycle has a delay; it is not required that every edge have a delay. We then present the algorithm for constructing an optimal multicast network code, developed in our previous work, and analyze its computational complexity, showing that it is polynomial in the graph size. We discuss the properties of the resulting codes, and show the ability to modify the code in a localized manner when sinks are added or removed. This property is also applicable to acyclic networks. Finally, we propose the sequential decoding algorithm we developed in an earlier work for decoding the resulting codes. For this we analyze its decoding delay, for both acyclic and cyclic networks.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: It is shown for a large class of LDPC ensembles, including RA and IRA codes, that the bit iterative decoding threshold is essentially identical to the block iterative decoding threshold.

Abstract: We show for a large class of LDPC ensembles, including RA and IRA codes, that the bit iterative decoding threshold is essentially identical to the block iterative decoding threshold.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: Simulations of codes of very short lengths over BEC reveal the superiority of the proposed decoding algorithm over present improved decoding algorithms for a wide range of bit error rates.
Abstract: This paper presents a new improved decoding algorithm for low-density parity-check (LDPC) codes over the binary erasure channel (BEC). The proposed algorithm combines the fact that a considerable fraction of unsatisfied check nodes are of degree two with the concept of guessing bits to perform simple graph-theoretic manipulations on the Tanner graph. The proposed decoding algorithm has a complexity similar to present improved decoding algorithms [H. Pishro-Nik et al., 2004]. Simulations of codes of very short lengths over BEC reveal the superiority of our algorithm over present improved decoding algorithms for a wide range of bit error rates.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: Simulation results indicate that the bounds are tight for small error rates, and the proposed codes have strong UEP property.
Abstract: A generalization of rateless codes (LT and Raptor codes) to provide unequal error protection (UEP) property is proposed in this paper. The proposed codes (UEP-LT and UEP-Raptor codes) are analyzed for the best possible performance over the binary erasure channel (BEC) in finite-length cases. We derive upper and lower bounds on the bit error probabilities under the maximum-likelihood (ML) decoding. We further verify our work with simulations. Simulation results indicate that the bounds are tight for small error rates, and the proposed codes have strong UEP property.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: A two-dimensional post normalization scheme is proposed to improve the performance of conventional min-sum (MS) and normalized MS decoding of irregular low density parity check codes and exhibits a lower error floor than that of belief propagation decoding in the high SNR region.
Abstract: A two-dimensional post normalization scheme is proposed to improve the performance of conventional min-sum (MS) and normalized MS decoding of irregular low density parity check codes. An iterative procedure based on parallel differential optimization algorithm is presented to obtain the optimal two-dimensional normalization factors. Both density evolution analysis and specific code simulation show that the proposed method provides a comparable performance as belief propagation decoding while requiring less complexity. Interestingly, the new method exhibits a lower error floor than that of belief propagation decoding in the high SNR region. With respect to standard MS and one-dimensional normalized MS decodings, the two-dimensional normalized MS offers a considerably better performance.

Posted Content
TL;DR: An algorithm for improving the performance of iterative decoding on perpendicular magnetic recording using a signal-to-noise ratio mismatch technique is presented, and it is shown that an improvement of within one order of magnitude can be achieved.

Abstract: An algorithm for improving the performance of iterative decoding on perpendicular magnetic recording is presented. This algorithm follows the authors' previous work on parallel and serial concatenated turbo codes and low-density parity-check codes. The application of this algorithm with a signal-to-noise ratio mismatch technique shows promising results in the presence of media noise. We also show that, compared to the standard iterative decoding algorithm, an improvement of within one order of magnitude can be achieved.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: It is shown that for the Gallager B decoding algorithm on binary symmetric channels, the optimization procedure can produce complexity savings of 30-40% as compared to the conventional code design method.
Abstract: The complexity-rate tradeoff for error-correcting codes below the Shannon limit is a central question in coding theory. This paper makes progress in this area by presenting a joint numerical optimization of rate and decoding complexity for low-density parity-check codes. The focus of this paper is on the binary symmetric channel and on a class of decoding algorithms for which an exact extrinsic information transfer (EXIT) chart analysis is possible. This class of decoding algorithms includes the Gallager decoding algorithm B. The main feature of the optimization method is a complexity measure based on the EXIT chart that accurately estimates the number of iterations required for the decoding algorithm to reach a target error rate. Under a fixed check-degree distribution, it is shown that the proposed complexity measure is a convex function of the variable-degree distribution in a region of interest. This allows us to numerically characterize the complexity-rate tradeoff. We show that for the Gallager B decoding algorithm on binary symmetric channels, the optimization procedure can produce complexity savings of 30-40% as compared to the conventional code design method.