
Showing papers on "List decoding published in 2002"


Journal ArticleDOI
TL;DR: A belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm is proposed.
Abstract: In this paper, we propose a belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm. The normalization factors can be obtained not only by simulation, but also, importantly, theoretically. This new BP-based algorithm is much simpler to implement than BP decoding as it requires only additions of the normalized received values and is universal, i.e., the decoding is independent of the channel characteristics. Some simulation results are given, which show this new decoding approach can achieve an error performance very close to that of BP on the additive white Gaussian noise channel, especially for low-density parity check (LDPC) codes whose check sums have large weights. The principle of normalization can also be used to improve the performance of the max-log-MAP algorithm in turbo decoding, and some coding gain can be achieved if the code length is long enough.
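As an illustration of the idea, here is a minimal sketch of a normalized min-sum check-node update in Python; the value of the normalization factor alpha and the message layout are assumptions for illustration (the paper obtains the factors by simulation or theoretically), not the paper's exact formulation.

import numpy as np

def check_node_update_normalized(msgs, alpha=0.8):
    # msgs: incoming LLR messages on one check node (nonzero values and
    # check degree >= 2 assumed).  Min-sum overestimates the magnitude of
    # the exact BP check update, so the outgoing magnitude is scaled down
    # by a normalization factor alpha.
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    total_sign = np.prod(signs)
    mags = np.abs(msgs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # For each edge, the outgoing magnitude is the minimum over the OTHER edges.
    out_mag = np.where(np.arange(len(msgs)) == order[0], min2, min1)
    return alpha * total_sign * signs * out_mag

With alpha = 1 this reduces to the plain min-sum (BP-based) update; the point of normalization is that a well-chosen alpha closes most of the gap to true BP.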

660 citations


Book
01 Jan 2002
TL;DR: This book introduces error correcting coding from basic concepts through the encoding and decoding of linear block, cyclic, BCH, Reed-Solomon and convolutional codes, including Viterbi decoding, soft-decision and list decoding, and iteratively decodable codes.
Abstract: Preface. Foreword. The ECC web site.
1. Introduction. 1.1 Error correcting coding: Basic concepts. 1.1.1 Block codes and convolutional codes. 1.1.2 Hamming distance, Hamming spheres and error correcting capability. 1.2 Linear block codes. 1.2.1 Generator and parity-check matrices. 1.2.2 The weight is the distance. 1.3 Encoding and decoding of linear block codes. 1.3.1 Encoding with G and H. 1.3.2 Standard array decoding. 1.3.3 Hamming spheres, decoding regions and the standard array. 1.4 Weight distribution and error performance. 1.4.1 Weight distribution and undetected error probability over a BSC. 1.4.2 Performance bounds over BSC, AWGN and fading channels. 1.5 General structure of a hard-decision decoder of linear codes. Problems.
2. Hamming, Golay and Reed-Muller codes. 2.1 Hamming codes. 2.1.1 Encoding and decoding procedures. 2.2 The binary Golay code. 2.2.1 Encoding. 2.2.2 Decoding. 2.2.3 Arithmetic decoding of the extended (24, 12, 8) Golay code. 2.3 Binary Reed-Muller codes. 2.3.1 Boolean polynomials and RM codes. 2.3.2 Finite geometries and majority-logic decoding. Problems.
3. Binary cyclic codes and BCH codes. 3.1 Binary cyclic codes. 3.1.1 Generator and parity-check polynomials. 3.1.2 The generator polynomial. 3.1.3 Encoding and decoding of binary cyclic codes. 3.1.4 The parity-check polynomial. 3.1.5 Shortened cyclic codes and CRC codes. 3.1.6 Fire codes. 3.2 General decoding of cyclic codes. 3.2.1 GF(2^m) arithmetic. 3.3 Binary BCH codes. 3.3.1 BCH bound. 3.4 Polynomial codes. 3.5 Decoding of binary BCH codes. 3.5.1 General decoding algorithm for BCH codes. 3.5.2 The Berlekamp-Massey algorithm (BMA). 3.5.3 PGZ decoder. 3.5.4 Euclidean algorithm. 3.5.5 Chien search and error correction. 3.5.6 Errors-and-erasures decoding. 3.6 Weight distribution and performance bounds. 3.6.1 Error performance evaluation. Problems.
4. Nonbinary BCH codes: Reed-Solomon codes. 4.1 RS codes as polynomial codes. 4.2 From binary BCH to RS codes. 4.3 Decoding RS codes. 4.3.1 Remarks on decoding algorithms. 4.3.2 Errors-and-erasures decoding. 4.4 Weight distribution. Problems.
5. Binary convolutional codes. 5.1 Basic structure. 5.1.1 Recursive systematic convolutional codes. 5.1.2 Free distance. 5.2 Connections with block codes. 5.2.1 Zero-tail construction. 5.2.2 Direct-truncation construction. 5.2.3 Tail-biting construction. 5.2.4 Weight distributions. 5.3 Weight enumeration. 5.4 Performance bounds. 5.5 Decoding: Viterbi algorithm with Hamming metrics. 5.5.1 Maximum-likelihood decoding and metrics. 5.5.2 The Viterbi algorithm. 5.5.3 Implementation issues. 5.6 Punctured convolutional codes. 5.6.1 Implementation issues related to punctured convolutional codes. 5.6.2 RCPC codes. Problems.
6. Modifying and combining codes. 6.1 Modifying codes. 6.1.1 Shortening. 6.1.2 Extending. 6.1.3 Puncturing. 6.1.4 Augmenting, expurgating and lengthening. 6.2 Combining codes. 6.2.1 Time sharing of codes. 6.2.2 Direct sums of codes. 6.2.3 The |u|u + v|-construction and related techniques. 6.2.4 Products of codes. 6.2.5 Concatenated codes. 6.2.6 Generalized concatenated codes.
7. Soft-decision decoding. 7.1 Binary transmission over AWGN channels. 7.2 Viterbi algorithm with Euclidean metric. 7.3 Decoding binary linear block codes with a trellis. 7.4 The Chase algorithm. 7.5 Ordered statistics decoding. 7.6 Generalized minimum distance decoding. 7.6.1 Sufficient conditions for optimality. 7.7 List decoding. 7.8 Soft-output algorithms. 7.8.1 Soft-output Viterbi algorithm. 7.8.2 Maximum-a-posteriori (MAP) algorithm. 7.8.3 Log-MAP algorithm. 7.8.4 Max-Log-MAP algorithm. 7.8.5 Soft-output OSD algorithm. Problems.
8. Iteratively decodable codes. 8.1 Iterative decoding. 8.2 Product codes. 8.2.1 Parallel concatenation: Turbo codes. 8.2.2 Serial concatenation. 8.2.3 Block product codes. 8.3 Low-density parity-check codes. 8.3.1 Tanner graphs. 8.3.2 Iterative hard-decision decoding: The bit-flip algorithm. 8.3.3 Iterative probabilistic decoding: Belief propagation. Problems.
9. Combining codes and digital modulation. 9.1 Motivation. 9.1.1 Examples of signal sets. 9.1.2 Coded modulation. 9.1.3 Distance considerations. 9.2 Trellis-coded modulation (TCM). 9.2.1 Set partitioning and trellis mapping. 9.2.2 Maximum-likelihood decoding. 9.2.3 Distance considerations and error performance. 9.2.4 Pragmatic TCM and two-stage decoding. 9.3 Multilevel coded modulation. 9.3.1 Constructions and multistage decoding. 9.3.2 Unequal error protection with MCM. 9.4 Bit-interleaved coded modulation. 9.4.1 Gray mapping. 9.4.2 Metric generation: De-mapping. 9.4.3 Interleaving. 9.5 Turbo trellis-coded modulation. 9.5.1 Pragmatic turbo TCM. 9.5.2 Turbo TCM with symbol interleaving. 9.5.3 Turbo TCM with bit interleaving. Problems.
Appendix A: Weight distributions of extended BCH codes. A.1 Length 8. A.2 Length 16. A.3 Length 32. A.4 Length 64. A.5 Length 128. Bibliography. Index.

506 citations


Proceedings ArticleDOI
07 Aug 2002
TL;DR: A pipelined VLSI architecture for implementing the K-best algorithm, which is computationally inexpensive, has fixed throughput, and performs close to the optimal lattice decoding algorithm when a large value of K is used.
Abstract: Lattice decoding algorithms have been proposed for implementing the maximum likelihood detector (MLD), which is the optimal receiver for multiple-input multiple-output (MIMO) channels. However, the computational complexity of a direct implementation of the lattice decoding algorithm is high and the throughput is variable. In this work, a K-best algorithm is proposed to implement the lattice decoding. It is computationally inexpensive and has fixed throughput. It can be easily implemented in a pipelined fashion and has performance similar to the optimal lattice decoding algorithm if a large value of K is used. In this paper, we describe a pipelined VLSI architecture for the implementation of the K-best algorithm. The architecture was designed and synthesized using a 0.35 µm library. For a four-transmit, four-receive antenna system using 16-QAM, a decoding throughput of 10 Mbit/s can be achieved.
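To make the algorithmic idea concrete, the following Python sketch performs the K-best breadth-first tree search on a QR-decomposed channel. It mirrors the search only, not the paper's pipelined VLSI architecture; the survivor representation and the 16-QAM example are illustrative assumptions.

import numpy as np

def k_best_detect(y, H, constellation, K=8):
    # Detect x minimizing ||y - Hx||^2 via breadth-first tree search.
    # Keeping exactly K partial candidates per level fixes the throughput,
    # unlike depth-first sphere decoding, whose run time is variable.
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    survivors = [(0.0, [])]          # (accumulated metric, symbols of layers level..n-1)
    for level in range(n - 1, -1, -1):
        cands = []
        for metric, syms in survivors:
            for s in constellation:
                xs = [s] + syms      # xs[j] is the symbol of layer level + j
                err = z[level] - sum(R[level, level + j] * xs[j] for j in range(len(xs)))
                cands.append((metric + abs(err) ** 2, xs))
        cands.sort(key=lambda c: c[0])
        survivors = cands[:K]        # fixed number of survivors per level
    return np.array(survivors[0][1])

# Example alphabet, matching the paper's 16-QAM setting:
qam16 = [a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)]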

413 citations


Proceedings ArticleDOI
07 Aug 2002
TL;DR: A new sphere decoding algorithm which is even less computationally complex than the original sphere decoder is presented, and the complexity of the new sphere decoding is relatively insensitive to the initial choice of sphere radius.
Abstract: Sphere decoding for multiple antenna systems has been shown to achieve near-ML performance with low complexity. However, the achievement of such an excellent performance-complexity tradeoff is highly dependent on the initial choice of sphere radius. We present a new sphere decoding algorithm which is even less computationally complex than the original sphere decoder. Moreover, the complexity of the new sphere decoder is relatively insensitive to the initial choice of sphere radius. Thus, by making the choice of radius sufficiently large, the ML solution is guaranteed with low complexity, even for large constellations. In our simulations, we show that with 4 transmit and 4 receive antennas and 64-QAM, our new sphere decoding algorithm achieves the exact ML solution with approximately a factor of 3.5 reduction in complexity when compared to the original sphere decoder, and a factor of 10^5 reduction when compared to brute-force ML decoding.
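For comparison, a generic depth-first sphere decoder with radius shrinking looks as follows; this is a plain Fincke-Pohst-style sketch for orientation, not the specific reduced-complexity algorithm of the paper.

import numpy as np

def sphere_decode(y, H, constellation, radius=np.inf):
    # Depth-first search over the QR-triangularized tree.  Every time a
    # full-length candidate beats the current best metric, the search
    # radius shrinks to that metric, so even a loose initial radius is
    # tightened as the search proceeds and the ML solution is returned.
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    best = {"metric": radius, "x": None}

    def search(level, metric, syms):
        if metric >= best["metric"]:
            return                   # prune: partial metric already outside the sphere
        if level < 0:
            best["metric"], best["x"] = metric, list(syms)
            return
        for s in constellation:
            xs = [s] + syms
            err = z[level] - sum(R[level, level + j] * xs[j] for j in range(len(xs)))
            search(level - 1, metric + abs(err) ** 2, xs)

    search(n - 1, 0.0, [])
    return np.array(best["x"])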

273 citations


Proceedings ArticleDOI
03 Nov 2002
TL;DR: A shuffled version of the belief propagation algorithm for the decoding of low-density parity-check (LDPC) codes is proposed, and it is shown that when the Tanner graph of the code is acyclic and connected, the proposed scheme is optimal in the sense of MAP decoding and converges faster than the standard BP algorithm.
Abstract: In this paper, we propose a shuffled version of the belief propagation (BP) algorithm for the decoding of low-density parity-check (LDPC) codes. We show that when the Tanner graph of the code is acyclic and connected, the proposed scheme is optimal in the sense of MAP decoding and converges faster (or at least no slower) than the standard BP algorithm. Interestingly, this new version keeps the computational advantages of the forward-backward implementations of BP decoding. Both serial and parallel implementations are considered. We show by simulation that the new schedule offers better performance/complexity trade-offs.
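A compact sketch of the serial (bit-by-bit) schedule follows. Min-sum is used at the check nodes purely to keep the sketch short (the paper works with full BP), and the dense 0/1 matrix representation is an illustrative simplification.

import numpy as np

def shuffled_bp_minsum(H, llr, iters=10):
    # H: (m, n) 0/1 parity-check matrix; llr: channel LLRs (positive => bit 0).
    # Shuffled schedule: after updating the check-to-bit messages touching
    # bit j, bit j's outgoing messages are refreshed immediately, so later
    # bits in the same iteration already see the new information.
    m, n = H.shape
    C = np.zeros((m, n))                 # check-to-bit messages
    V = H * llr[None, :]                 # bit-to-check messages
    for _ in range(iters):
        for j in range(n):               # the bit-by-bit "shuffle"
            rows = np.flatnonzero(H[:, j])
            for i in rows:
                cols = [k for k in np.flatnonzero(H[i]) if k != j]
                others = V[i, cols]      # assumes every check has degree >= 2
                C[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
            total = llr[j] + C[rows, j].sum()
            for i in rows:               # immediate bit-node refresh
                V[i, j] = total - C[i, j]
    posterior = llr + C.sum(axis=0)
    return (posterior < 0).astype(int)   # hard decisions

Under the flooding schedule, all check updates in an iteration would use only the previous iteration's bit messages; the immediate refresh above is what speeds up convergence.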

169 citations


Patent
23 Dec 2002
TL;DR: In this article, a method, apparatus and article of manufacture for decoding concatenated codes is described, where the second inner decoding is a function of the reliability information from the first outer decoding, the first outer message data, and the parity data from the first inner decoding.
Abstract: A method, apparatus and article of manufacture for decoding concatenated codes includes (in terms of the method): receiving data representing concatenated codes; first inner decoding the received data resulting in first inner message data and parity data; first outer decoding the first inner message data, resulting in reliability information and first outer message data; second inner decoding the first outer message data, resulting in second inner message data; and second outer decoding the second inner message data. The second inner decoding is a function of: the reliability information from the first outer decoding; the first outer message data; and the parity data from the first inner decoding.

152 citations


Book
29 Apr 2002
TL;DR: Practical examples of MAP and SOVA decoding for turbo codes, and finite field arithmetic and algebraic decoding methods for BCH and Reed-Solomon codes, are explained.
Abstract: From the Publisher: Error Control Coding: From Theory to Practice provides a concise introduction to basic coding techniques and their application. The fundamental concepts of coding theory are explained using simple examples with minimum use of complex mathematical tools. The selection of appropriate codes and the design of decoders are discussed. Bridging the gap between digital communications and information theory, this accessible approach will appeal to students and practising engineers alike. The clear presentation and practical emphasis make this book an excellent tool for both communications and electronic engineering students. Practitioners new to the field will find this text an essential guide to coding. Features include: end-of-chapter problems to test and develop the reader's understanding of the most popular codes and decoding methods; finite field arithmetic and algebraic decoding methods for BCH and Reed-Solomon codes; detailed coverage of Viterbi decoding and related implementation issues; turbo codes and related code types, including Gallager codes and turbo product codes; and practical examples of MAP and SOVA decoding for turbo codes.

144 citations


Journal ArticleDOI
30 Jun 2002
TL;DR: The ordered statistics decoding algorithm is improved using matching techniques, which reduce the worst-case complexity of decoding or improve the error performance, achieving practically optimal decoding of rate-1/2 codes of lengths up to 128.
Abstract: In this paper, we improve the ordered statistics decoding algorithm by using matching techniques. This allows us: to reduce the worst-case complexity of decoding (the error performance being fixed) or to improve the error performance (for the same complexity); to reduce the ratio between average complexity and worst-case complexity; to achieve practically optimal decoding of rate-1/2 codes of lengths up to 128 (rate-1/2 codes are a traditional benchmark; for coding rates different from 1/2, the decoding is easier); and to achieve near-optimal decoding of a rate-1/2 code of length 192, which had never been done before.

133 citations


Patent
Sungwook Kim
30 Aug 2002
TL;DR: In this article, the authors propose an approximation of the standard message passing algorithm used for LDPC decoding, which reduces computational complexity and provides reduced area without substantial added latency by using a block-serial mode.
Abstract: Architectures for decoding low density parity check codes permit varying degrees of hardware sharing to balance throughput, power consumption and area requirements. The LDPC decoding architectures may be useful in a variety of communication systems in which throughput, power consumption, and area are significant concerns. The decoding architectures implement an approximation of the standard message passing algorithm used for LDPC decoding, thereby reducing computational complexity. Instead of a fully parallel structure, this approximation permits at least a portion of the message passing structure between check and bit nodes to be implemented in a block-serial mode, providing reduced area without substantial added latency.

129 citations


Journal ArticleDOI
TL;DR: This work presents a polynomial-time constructible asymptotically good family of binary codes of rate Ω(ε^4) that can be list-decoded in polynomial time from up to a fraction (1/2 - ε) of errors, using lists of size O(ε^-2).
Abstract: Informally, an error-correcting code has "nice" list-decodability properties if every Hamming ball of "large" radius has a "small" number of codewords in it. We report linear codes with nontrivial list-decodability: i.e., codes of large rate that are nicely list-decodable, and codes of large distance that are not nicely list-decodable. Specifically, on the positive side, we show that there exist codes of rate R and block length n that have at most c codewords in every Hamming ball of radius H^-1(1 - R - 1/c)·n. This answers the main open question from the work of Elias (1957). This result also has consequences for the construction of concatenated codes of good rate that are list decodable from a large fraction of errors, improving previous results of Guruswami and Sudan (see IEEE Trans. Inform. Theory, vol. 45, p. 1757-67, Sept. 1999, and Proc. 32nd ACM Symp. Theory of Computing (STOC), Portland, OR, p. 181-190, May 2000) in this vein. Specifically, for every ε > 0, we present a polynomial-time constructible asymptotically good family of binary codes of rate Ω(ε^4) that can be list-decoded in polynomial time from up to a fraction (1/2 - ε) of errors, using lists of size O(ε^-2). On the negative side, we show that for every δ and c, there exist τ < δ, c_1 > 0, and an infinite family of linear codes {C_i}_i such that if n_i denotes the block length of C_i, then C_i has minimum distance at least δ·n_i and contains more than c_1·n_i^c codewords in some Hamming ball of radius τ·n_i. While this result is still far from known bounds on the list-decodability of linear codes, it is the first to bound the "radius for list-decodability by a polynomial-sized list" away from the minimum distance of the code.
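For readability, the positive bound can be set in display form; this restates the abstract's claim only, with H the binary entropy function.

\[
\max_{y \in \{0,1\}^n} \bigl| C \cap B(y, r) \bigr| \le c
\quad \text{for } r = H^{-1}\!\left(1 - R - \tfrac{1}{c}\right) \cdot n,
\]
\[
\text{where } H(x) = -x \log_2 x - (1 - x)\log_2(1 - x)
\text{ and } B(y, r) \text{ is the Hamming ball of radius } r \text{ centred at } y.
\]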

128 citations


Journal ArticleDOI
30 Jun 2002
TL;DR: With a proper choice of the initial p, the proposed improved bit-flipping (BF) algorithm achieves gain not only in performance, but also in average decoding time for signal-to-noise ratio (SNR) values of interest with respect to p = 1.
Abstract: In this correspondence, a new method for improving hard-decision bit-flipping decoding of low-density parity-check (LDPC) codes is presented. Bits with a number of unsatisfied check sums larger than a predetermined threshold are flipped with a probability p ≤ 1 which is independent of the code considered. The probability p is incremented during decoding according to some rule. With a proper choice of the initial p, the proposed improved bit-flipping (BF) algorithm achieves gain not only in performance, but also in average decoding time for signal-to-noise ratio (SNR) values of interest with respect to p = 1.
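A minimal sketch of the rule, assuming illustrative values for the flip threshold T, the initial p, and a simple linear increment (the paper leaves the increment rule and the choice of the initial p as design parameters):

import numpy as np

def improved_bit_flip(H, y, T=2, p0=0.3, p_step=0.02, iters=50, rng=None):
    # y: hard-decision received word (0/1 ints).  Bits with more than T
    # unsatisfied checks are flipped only with probability p <= 1, and p
    # is incremented during decoding; p = 1 recovers standard BF.
    rng = np.random.default_rng() if rng is None else rng
    x = y.copy()
    p = p0
    for _ in range(iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                        # valid codeword found
        unsat = H.T @ syndrome           # unsatisfied-check count per bit
        flip = (unsat > T) & (rng.random(len(x)) < p)
        x[flip] ^= 1
        p = min(1.0, p + p_step)         # "incremented according to some rule"
    return x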

Proceedings ArticleDOI
19 May 2002
TL;DR: An explicit construction of linear-time encodable and decodable codes of rate r which can correct a fraction (1 - r - ε)/2 of errors over an alphabet of constant size depending only on ε, for every 0 < r < 1 and arbitrarily small ε > 0.
Abstract: We present an explicit construction of linear-time encodable and decodable codes of rate r which can correct a fraction (1 - r - ε)/2 of errors over an alphabet of constant size depending only on ε, for every 0 < r < 1 and arbitrarily small ε > 0. The error-correction performance of these codes is optimal as seen by the Singleton bound (these are "near-MDS" codes). Such near-MDS linear-time codes were known for decoding from erasures [2]; our construction generalizes this to handle errors as well. Concatenating these codes with good, constant-sized binary codes gives a construction of linear-time binary codes which meet the so-called "Zyablov bound". In a nutshell, our results match the performance of the previously known explicit constructions of codes that had polynomial-time encoding and decoding, but in addition have linear-time encoding and decoding algorithms. We also obtain some results for list decoding targeted at the situation when the fraction of errors is very large, namely (1 - ε) for an arbitrarily small constant ε > 0. The previously known constructions of such codes of good rate over constant-sized alphabets either used algebraic-geometric codes and thus suffered from complicated constructions and slow decoding, or, as in the recent work of the authors [9], had fast encoding/decoding but suffered from an alphabet size that was exponential in 1/ε. We present two constructions of such codes with rate close to Ω(ε²) over an alphabet of size quasi-polynomial in 1/ε. One of the constructions, at the expense of a slight worsening of the rate, can achieve an alphabet size which is polynomial in 1/ε. It also yields constructions of codes for list decoding from erasures which achieve new trade-offs. In particular, we construct codes of rate close to the optimal Ω(ε) rate which can be efficiently list decoded from a fraction (1 - ε) of erasures.

Proceedings ArticleDOI
17 Nov 2002
TL;DR: This work proposes quasi-orthogonal space-time block codes with full diversity in which half of the symbols are drawn from a signal constellation A and the other half are optimal selections from the rotated constellation e^{jφ}A.
Abstract: Space-time block codes from orthogonal designs proposed by Alamouti (1998), and Tarokh-Jafarkhani-Calderbank (1999), have attracted much attention lately due to their fast maximum-likelihood (ML) decoding and full diversity. However, the maximum symbol transmission rate of a space-time block code from complex orthogonal designs for complex constellations is only 3/4 for three and four transmit antennas. Jafarkhani (see IEEE Trans. Commun., vol.49, no.1, p.1-4, 2001), and Tirkkonen-Boariu-Hottinen (see ISSSTA 2000, pp.429-432, September 2000) proposed space-time block codes from quasi-orthogonal designs, where the orthogonality is relaxed to provide higher symbol transmission rates. With the quasi-orthogonal structure, these codes still have a fast ML decoding, but do not have the full diversity. In this paper, we design quasi-orthogonal space-time block codes with full diversity by properly choosing the signal constellations. In particular, we propose that half of the symbols in a quasi-orthogonal design are from a signal constellation A and the other half are optimal selections from the rotated constellation e^{jφ}A. The optimal rotation angles φ are obtained for some commonly used signal constellations. The resulting codes have both full diversity and fast ML decoding.
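As a sketch of where the rotated constellation enters, the block below builds a Jafarkhani-style quasi-orthogonal design from Alamouti sub-blocks, drawing two symbols from A and two from e^{jφ}A. The block structure and the angle φ = π/4 for QPSK are assumptions for illustration; the paper derives the optimal angles for several constellations.

import numpy as np

def alamouti(s1, s2):
    # 2x2 orthogonal (Alamouti) sub-block
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def quasi_orthogonal_block(s1, s2, s3, s4):
    # Jafarkhani-style 4x4 quasi-orthogonal design from two Alamouti
    # sub-blocks A = A(s1, s2) and B = A(s3, s4).
    A = alamouti(s1, s2)
    B = alamouti(s3, s4)
    return np.block([[A, B],
                     [-np.conj(B), np.conj(A)]])

# s1, s2 from the QPSK constellation; s3, s4 from the rotated copy.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
phi = np.pi / 4                     # assumed rotation angle for QPSK
qpsk_rot = np.exp(1j * phi) * qpsk
X = quasi_orthogonal_block(qpsk[0], qpsk[1], qpsk_rot[2], qpsk_rot[3])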

Patent
26 Jun 2002
TL;DR: In this paper, a method for decoding data stored in a partial area of a coding pattern on a surface, based on a recorded image of the partial area, is presented, and a device and a memory medium storing a computer program with instructions for performing such a data decoding technique are also provided.
Abstract: A method is provided for decoding data stored in a partial area of a coding pattern on a surface, based on a recorded image of the partial area. The coding pattern contains elements which each have at least two possible decoding values. The method identifies in the image a plurality of elements. The method further calculates, for each identified element, an associated value probability for each possible decoding value that the element has this decoding value. Additionally, the method performs the decoding of data based on the decoding values and the corresponding value probabilities. A device and a memory medium storing a computer program with instructions for performing such a data decoding technique are also provided.

Journal ArticleDOI
TL;DR: To reduce the storage bottleneck for each subdecoder, a modified version of the partial storage of state metrics approach is presented, which in general achieves a better tradeoff between storage and recomputation.
Abstract: Turbo decoders inherently have large decoding latency and low throughput due to iterative decoding. To increase the throughput and reduce the latency, high-speed decoding schemes have to be employed. In this paper, following a discussion on basic parallel decoding architectures, the segmented sliding window approach and two other types of area-efficient parallel decoding schemes are proposed. A detailed comparison on storage requirement, number of computation units, and overall decoding latency is provided for various decoding schemes with different levels of parallelism. Hybrid parallel decoding schemes are proposed as an attractive solution for very high level parallelism implementations. To reduce the storage bottleneck for each subdecoder, a modified version of the partial storage of state metrics approach is presented. The new approach in general achieves a better tradeoff between storage and recomputation. The application of the pipeline-interleaving technique to parallel turbo decoding architectures is also presented. Simulation results demonstrate that the proposed area-efficient parallel decoding schemes do not cause performance degradation.

Proceedings ArticleDOI
10 Dec 2002
TL;DR: A VLSI architecture for interpolation that uses a transformation of the received word to reduce the number of iterations of the interpolation algorithm, together with techniques for reducing the memory requirements and for efficiently implementing an important operation, the Hasse derivative, in VLSI.
Abstract: The Koetter-Vardy algorithm is an algebraic soft-decision decoder for Reed-Solomon codes which is based on the Guruswami-Sudan list decoder. There are three main steps: 1) multiplicity calculation, 2) interpolation and 3) root finding. The Koetter-Vardy algorithm is challenging to implement due to the high cost of interpolation. We propose a VLSI architecture for interpolation that uses a transformation of the received word to reduce the number of iterations of the interpolation algorithm. We also show how the memory requirements can be reduced and an important operation, the Hasse derivative, can be efficiently implemented in VLSI.
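The Hasse derivative the architecture implements is a standard operation on bivariate polynomials; the sketch below computes it over the integers for clarity, whereas a real decoder would reduce the binomial coefficients in the finite field's characteristic.

from math import comb

def hasse_derivative(poly, a, b):
    # poly: dict mapping exponent pairs (i, j) to coefficients, representing
    # f(x, y) = sum f_ij x^i y^j.  Returns the (a, b) Hasse derivative
    # D_{a,b} f = sum_{i>=a, j>=b} C(i, a) C(j, b) f_ij x^(i-a) y^(j-b),
    # the quantity tested against zero to enforce interpolation multiplicities.
    out = {}
    for (i, j), c in poly.items():
        if i >= a and j >= b:
            key = (i - a, j - b)
            out[key] = out.get(key, 0) + comb(i, a) * comb(j, b) * c
    return out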

Proceedings ArticleDOI
17 Nov 2002
TL;DR: It is shown that min-sum is robust against quantization effects, and in many cases only four quantization bits suffice to obtain close-to-ideal performance.
Abstract: This paper is concerned with the implementation issues of the so-called min-sum algorithm (also referred to as max-sum or max-product) for the decoding of low-density parity-check (LDPC) codes. The effects of the clipping threshold and the number of quantization bits on the performance of the min-sum algorithm at short and intermediate block lengths are studied. It is shown that min-sum is robust against quantization effects, and in many cases only four quantization bits suffice to obtain close-to-ideal performance. We also propose modifications to the min-sum algorithm that improve the performance by a few tenths of a dB with just a small increase in decoding complexity.
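The kind of quantizer whose effects are being measured can be sketched as a uniform clip-and-round on the LLRs; the clipping threshold below is an assumed value, since choosing it is precisely one of the questions the paper studies.

import numpy as np

def quantize_llr(llr, n_bits=4, clip=8.0):
    # Clip LLRs to [-clip, clip], then round to a grid of 2^n_bits - 1
    # symmetric levels (so zero is exactly representable).  Per the
    # abstract, n_bits = 4 is often enough for near-ideal min-sum decoding.
    levels = 2 ** n_bits - 1
    step = 2 * clip / (levels - 1)
    return step * np.round(np.clip(llr, -clip, clip) / step)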

Proceedings ArticleDOI
16 Nov 2002
TL;DR: This paper gives another fast algorithm for the soft decoding of Reed-Solomon codes, different from the procedure proposed by Feng, which works in time (w/r)^O(1) n log^2 n log log n, where r is the rate of the code and w is the maximal weight assigned to a vertical line.
Abstract: We generalize the classical Knuth-Schönhage algorithm for computing the GCD of two polynomials to solve arbitrary linear Diophantine systems over polynomials in time quasi-linear in the maximal degree. As an application, we consider the following weighted curve fitting problem: given a set of points in the plane, find an algebraic curve (satisfying certain degree conditions) that goes through each point the prescribed number of times. The main motivation for this problem comes from coding theory; namely, it is ultimately related to the list decoding of Reed-Solomon codes. We present a new fast algorithm for the weighted curve fitting problem, based on the explicit construction of a Groebner basis. This gives another fast algorithm for the soft decoding of Reed-Solomon codes, different from the procedure proposed by Feng (1999), which works in time (w/r)^O(1) n log^2 n log log n, where r is the rate of the code and w is the maximal weight assigned to a vertical line.

01 Jan 2002
TL;DR: New approaches to combining equalization based on linear filtering with decoding are introduced and shown to perform similarly to trellis-based turbo equalization at much lower computational complexity, and an overview of the design alternatives for turbo equalization given system parameters such as the channel response or the signal-to-noise ratio is provided.
Abstract: We study the turbo equalization approach to coded data transmission over channels with intersymbol interference. In the original system invented by Douillard et al., the data is protected by a convolutional code and the receiver consists of two trellis-based detectors, one for the channel (the equalizer) and one for the code (the decoder). It has been shown that iterating equalization and decoding tasks can yield tremendous improvements in bit error rate. We introduce new approaches to combining equalization based on linear filtering with decoding. Through simulation and analytical results, we show that the performance of the new approaches is similar to the trellis-based receiver, while providing large savings in computational complexity. Moreover, this paper provides an overview of the design alternatives for turbo equalization given system parameters such as the channel response or the signal-to-noise ratio.

Proceedings ArticleDOI
17 Nov 2002
TL;DR: The numerical calculations show that with one properly chosen parameter for each of these two improved BP-based decoding algorithms, performances very close to that of the BP algorithm can be achieved.
Abstract: In this paper, we analyze the performance of two improved BP-based decoding algorithms for LDPC codes, namely the normalized BP-based and the offset BP-based algorithms, by means of density evolution. The numerical calculations show that with one properly chosen parameter for each of these two improved BP-based algorithms, performance very close to that of the BP algorithm can be achieved. Simulation results for LDPC codes with moderately long code lengths validate the proposed optimization. Finite quantization effects on the BP-based and the offset BP-based decoding algorithms are evaluated.
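For comparison with the normalized update sketched near the top of this page, an offset BP-based (offset min-sum) check update corrects min-sum's magnitude overestimate additively rather than multiplicatively. The value β = 0.15 is a placeholder; the paper selects the parameter via density evolution.

import numpy as np

def check_node_update_offset(msgs, beta=0.15):
    # Offset min-sum: outgoing magnitude is max(|m| - beta, 0) instead of
    # alpha * |m| as in the normalized variant.  Assumes check degree >= 2
    # and nonzero incoming messages.
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    total_sign = np.prod(signs)
    mags = np.abs(msgs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out_mag = np.where(np.arange(len(msgs)) == order[0], min2, min1)
    return total_sign * signs * np.maximum(out_mag - beta, 0.0)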

Journal ArticleDOI
TL;DR: It is shown that the proposed suboptimum algorithm has better performance compared with generalized minimum distance decoding, while the proposed MLD algorithm has significantly lower decoding complexity than the well-known Vardy-Be'ery (1991) MLD algorithm.
Abstract: This paper presents a maximum-likelihood decoding (MLD) and a suboptimum decoding algorithm for Reed-Solomon (RS) codes. The proposed algorithms are based on the algebraic structure of the binary images of RS codes. Theoretical bounds on the performance are derived and shown to be consistent with simulation results. The proposed suboptimum algorithm achieves near-MLD performance with significantly lower decoding complexity. It is also shown that the proposed suboptimum algorithm has better performance compared with generalized minimum distance decoding, while the proposed MLD algorithm has significantly lower decoding complexity than the well-known Vardy-Be'ery (1991) MLD algorithm.

Patent
30 Apr 2002
TL;DR: Area-efficient parallel decoding schemes may be used to overcome the large decoding latency and low throughput of iterative turbo decoders, hybrid parallel decoding schemes suit high-level parallelism implementations, and the area-efficient schemes introduce little or no performance degradation.
Abstract: Turbo decoders may have large decoding latency and low throughput due to iterative decoding. One way to increase the throughput and reduce the latency of turbo decoders is to use high-speed decoding schemes. In particular, area-efficient parallel decoding schemes may be used to overcome the large decoding latency and low throughput associated with turbo decoders. In addition, hybrid parallel decoding schemes may be used in high-level parallelism implementations. Moreover, the area-efficient parallel decoding schemes introduce little or no performance degradation.

Journal ArticleDOI
TL;DR: A general algorithm is presented that is applicable to a wide range of constrained interpolation problems in coding theory and systems theory, including list decoding and M-Padé approximation.

Book
31 Oct 2002
TL;DR: This text covers linear block codes, convolutional and concatenated codes, the elements of graph theory and message-passing algorithms on graphs, turbo decoding, and low-density parity-check and low-density generator codes.
Abstract: List of Figures. List of Tables. Preface.
1: Digital Communication. 1. Basics. 2. Algorithms and Complexity. 3. Encoding and Decoding. 4. Bounds. 5. Overview of the Text.
2: Abstract Algebra. 1. Sets and Groups. 2. Rings, Domains, and Fields. 3. Vector Spaces and GF(p^m). 4. Polynomials over Galois Fields. 5. Frequency Domain Analysis of Polynomials over GF(q)[x]/(x^n - 1).
3: Linear Block Codes. 1. Basic Structure of Linear Codes. 2. Repetition and Parity Check Codes. 3. Hamming Codes. 4. Reed-Muller Codes. 5. Cyclic Codes. 6. Quadratic Residue Codes. 7. Golay Codes. 8. BCH and Reed-Solomon Codes.
4: Convolutional and Concatenated Codes. 1. Convolutional Encoders. 2. Analysis of Component Codes. 3. Concatenated Codes. 4. Analysis of Parallel Concatenated Codes.
5: Elements of Graph Theory. 1. Introduction. 2. Martingales. 3. Expansion.
6: Algorithms on Graphs. 1. Probability Models and Bayesian Networks. 2. Belief Propagation Algorithm. 3. Junction Tree Propagation Algorithm. 4. Message Passing and Error Control Decoding. 5. Message Passing in Loops.
7: Turbo Decoding. 1. Turbo Decoding. 2. Parallel Decoding. 3. Notes.
8: Low-Density Parity-Check Codes. 1. Basic Properties. 2. Simple Decoding Algorithms. 3. Explicit Construction. 4. Gallager's Decoding Algorithms. 5. Belief Propagation Decoding. 6. Notes.
9: Low-Density Generator Codes. 1. Introduction. 2. Decoding Analyses. 3. Good Degree Sequences. 4. Irregular Repeat-Accumulate Codes. 5. Cascaded Codes. 6. Notes.
References. Index.

Patent
24 Jul 2002
TL;DR: In this article, a system for decoding product codes is described, which includes logic configured to pass reliability determinations made while decoding symbols using first parity information, to use in decoding the symbols using second parity information.
Abstract: A system is described for decoding product codes. The system includes logic configured to pass reliability determinations made while decoding symbols using first parity information, to use in decoding the symbols using second parity information, while substantially simultaneously passing the reliability determinations made while decoding the symbols using the second parity information, to use in decoding the symbols using the first parity information.

Journal ArticleDOI
TL;DR: A trellis-based maximum-likelihood soft-decision sequential decoding algorithm (MLSDA) for binary convolutional codes is presented; under moderate SNR, it is about four times faster than the conventional sequential decoding algorithm of comparable bit-error probability.
Abstract: We present a trellis-based maximum-likelihood soft-decision sequential decoding algorithm (MLSDA) for binary convolutional codes. Simulation results show that, for (2, 1, 6) and (2, 1, 16) codes antipodally transmitted over the AWGN channel, the average computational effort required by the algorithm is several orders of magnitude less than that of the Viterbi algorithm. Simulations on the same system models also show that, under moderate SNR, the algorithm is about four times faster than the conventional sequential decoding algorithm (i.e., the stack algorithm with the Fano metric) of comparable bit-error probability.

Journal ArticleDOI
TL;DR: The bootstrap step is applied to the weighted bit-flipping algorithm to decode a number of LDPC codes and large improvements in both performance and complexity are observed.
Abstract: An initial bootstrap step for the decoding of low-density parity-check (LDPC) codes is proposed. Decoding is initiated by first erasing a number of less reliable bits. New values and reliabilities are then assigned to erasure bits by passing messages from nonerasure bits through the reliable check equations. The bootstrap step is applied to the weighted bit-flipping algorithm to decode a number of LDPC codes. Large improvements in both performance and complexity are observed.
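A sketch of the bootstrap step under simplifying assumptions (dense 0/1 parity-check matrix, a min-sum-style message from each fully reliable check, and an assumed erased fraction; the paper's criterion for how many bits to erase differs):

import numpy as np

def bootstrap_llr(H, llr, frac_erase=0.1):
    # Erase the least reliable bits, then give each erased bit a new value
    # and reliability from checks whose other participants are all
    # non-erased ("reliable check equations").
    n = len(llr)
    n_erase = int(frac_erase * n)
    erased = np.argsort(np.abs(llr))[:n_erase]   # least reliable bits
    is_erased = np.zeros(n, dtype=bool)
    is_erased[erased] = True
    new_llr = llr.astype(float).copy()
    for j in erased:
        msgs = []
        for i in np.flatnonzero(H[:, j]):
            others = [b for b in np.flatnonzero(H[i]) if b != j]
            if others and not is_erased[others].any():
                msgs.append(np.prod(np.sign(llr[others])) * np.min(np.abs(llr[others])))
        if msgs:
            new_llr[j] = sum(msgs)    # replaces the erased bit's value/reliability
    return new_llr

The weighted bit-flipping stage would then run on new_llr in place of the raw channel values.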

Patent
05 Sep 2002
TL;DR: In this paper, a low-complexity linear filter method is first utilized by the receiver, and then a higher complexity non-linear method is utilized when the performance of the linear filter is not adequate, for example during poor channel conditions.
Abstract: To address the need for a receiver that provides good performance during poor channel conditions yet reduces the computational complexity of existing ML receivers, a method and apparatus for MIMO joint detection and decoding is provided herein. In accordance with the preferred embodiment of the present invention, a low-complexity linear filter method is first utilized by the receiver. However, a higher-complexity non-linear method is utilized when the performance of the linear method is not adequate, for example, during poor channel conditions. In order to reduce the complexity of the non-linear decoding method, the distances to a much smaller set of candidate constellation points are computed than with prior-art decoding methods. This is made possible by the fact that some bits decoded with higher confidence utilizing the output of a linear filter method can help the decoding of the other bits.

Journal ArticleDOI
TL;DR: Unlike the original TPC decoder, which performs row and column decoding in a serial fashion, a parallel decoder structure is proposed, and it is shown that the decoding latency of TPCs can be halved while maintaining virtually the same performance level.
Abstract: There has been intensive focus on turbo product codes (TPCs), which have low decoding complexity and achieve near-optimum performance at low signal-to-noise ratios. Unlike the original TPC decoder, which performs row and column decoding in a serial fashion, we propose a parallel decoder structure. Simulation results show that with this approach, the decoding latency of TPCs can be halved while maintaining virtually the same performance level.

Proceedings ArticleDOI
L. Perros-Meilhac, C. Lamy
07 Aug 2002
TL;DR: A new soft VLC decoding algorithm based on a sequential decoding technique that is very efficient in terms of decoding complexity that is evaluated on the well-known "Foreman" video sequence.
Abstract: This paper considers VLC decoding algorithms based on MAP sequence estimation techniques, using residual source redundancy to provide channel error correction. These algorithms rely on soft values available at the input of the VLC decoder. We present a new soft VLC decoding algorithm based on a sequential decoding technique that is very efficient in terms of decoding complexity. The application of the considered soft decoding algorithms to practical decoding of MPEG-4 texture information packets under the assumption of an unequal protection scheme is investigated. The algorithm's performance is evaluated on the well-known "Foreman" video sequence. Simulation results show that the proposed algorithm provides approximately the same performance as existing soft decoding algorithms while exhibiting a significantly lower complexity.