
Showing papers on "List decoding published in 1999"


Journal ArticleDOI
TL;DR: An improved list decoding algorithm for Reed-Solomon codes, alternant codes, and algebraic-geometry codes is presented, along with a solution to a weighted curve-fitting problem that may be of use in soft-decision decoding algorithms for Reed-Solomon codes.
Abstract: Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: given n points (x_i, y_i), i = 1, ..., n, with x_i, y_i in F, a degree parameter k, and an error parameter e, find all univariate polynomials p of degree at most k such that y_i = p(x_i) for all but at most e values of i in {1,...,n}. We give an algorithm that solves this problem for e < n - √(kn), which improves over the previous best result for every choice of k and n; in particular, for k/n > 1/3 the result yields the first asymptotic improvement in four decades. The algorithm generalizes to solve the list decoding problem for other algebraic codes, specifically alternant codes (a class of codes including BCH codes) and algebraic-geometry codes. In both cases, we obtain a list decoding algorithm that corrects up to n - √(n(n-d')) errors, where n is the block length and d' is the designed distance of the code. The improvement for the case of algebraic-geometry codes extends the methods of Shokrollahi and Wasserman (see Proc. 29th Annu. ACM Symp. Theory of Computing, p.241-48, 1998) and improves upon their bound for every choice of n and d'. We also present some other consequences of our algorithm, including a solution to a weighted curve-fitting problem, which may be of use in soft-decision decoding algorithms for Reed-Solomon codes.
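
To make the curve-fitting formulation concrete, here is a brute-force toy version of the problem in Python (exponential search over all low-degree polynomials; the point of the paper is to solve the same problem in polynomial time whenever e < n - √(kn)). The field size and received word below are made up for illustration.

```python
from itertools import product

# Toy instance of the list decoding / curve-fitting problem over GF(5):
# list every polynomial of degree <= k that agrees with the received
# word (x_i, y_i) in at least n - e positions.
p, k, e = 5, 1, 2
xs = [0, 1, 2, 3, 4]            # evaluation points, n = 5
ys = [1, 2, 0, 1, 0]            # received word: p(x) = 1 + x with 2 errors

def eval_poly(coeffs, x):
    return sum(c * x**j for j, c in enumerate(coeffs)) % p

output_list = []
for coeffs in product(range(p), repeat=k + 1):   # all degree-<=k polynomials
    agreements = sum(eval_poly(coeffs, x) == y for x, y in zip(xs, ys))
    if agreements >= len(xs) - e:
        output_list.append(coeffs)

print(output_list)   # -> [(1, 1)], i.e. the single codeword p(x) = 1 + x
```

Here e = 2 < n - √(kn) ≈ 2.76, so the paper's algorithm would return the same list without exhaustive search.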

1,108 citations


Journal ArticleDOI
TL;DR: A class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes is presented, showing that for the rate R=1/2 binary codes, the performance is substantially better than for ordinary convolutionian codes with the same decoding complexity per information bit.
Abstract: We present a class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes. The performance of this decoding is close to the performance of turbo decoding. Our simulations show that for rate R=1/2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit. As an example, we constructed convolutional codes with memory M=1025, 2049, and 4097, showing that we are about 1 dB from the capacity limit at a bit-error rate (BER) of 10^-5 and a decoding complexity of the same magnitude as a Viterbi decoder for codes having memory M=10.

902 citations


Journal ArticleDOI
TL;DR: This paper presents two simple and effective criteria, based on the cross-entropy (CE) concept, for stopping the iteration process in turbo decoding with a negligible degradation of the error performance.
Abstract: This paper presents two simple and effective criteria for stopping the iteration process in turbo decoding with a negligible degradation of the error performance. Both criteria are devised based on the cross-entropy (CE) concept. They are as efficient as the CE criterion, but require much less and simpler computations.
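
A minimal sketch of how a CE-style stopping rule plugs into the iteration loop is given below. The statistic approximates the cross entropy between successive soft outputs, in the spirit of the criteria the paper discusses; the threshold ratio and the `siso_iteration` callable are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ce_statistic(llr_prev, llr_curr):
    # Approximate cross entropy between the output distributions of two
    # successive iterations (assumed form, not the paper's exact statistic)
    delta = llr_curr - llr_prev
    return np.sum(np.abs(delta) ** 2 / np.exp(np.abs(llr_curr)))

def decode_with_ce_stopping(llr, siso_iteration, max_iters=8, ratio=1e-3):
    # Stop once the statistic has dropped to a small fraction of its
    # first-iteration value, i.e. the iterations change almost nothing.
    t1 = None
    for it in range(1, max_iters + 1):
        llr_new = siso_iteration(llr)        # one full turbo iteration
        t = ce_statistic(llr, llr_new)
        llr = llr_new
        if t1 is None:
            t1 = t
        elif t < ratio * t1:
            break
    return llr, it
```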

364 citations


Book
29 Oct 1999
TL;DR: The range of topics covered in this book is beneficial to undergraduate and postgraduate students performing research at an advanced level in this subject area, as well as to engineers whose work involves communications error control coding.
Abstract: From the Publisher: Channel coding is the theory by which codes can be constructed to correct and detect errors. Such errors may be caused by transmission channels and noise. This book presents classic theory and techniques for block and convolutional codes, with an emphasis on decoding algorithms. Furthermore, the powerful technique of Generalized Concatenated Coding (GCC) is introduced using illustrative examples. The range of topics covered in this book is beneficial to undergraduate and postgraduate students performing research at an advanced level in this subject area, as well as to engineers whose work involves communications error control coding.

268 citations


Proceedings ArticleDOI
06 Jun 1999
TL;DR: A symbol-by-symbol MAP decoding rule is derived for space-time block codes, which are concatenated with two "turbo"-TCM schemes that have been shown to achieve decoding results close to the Shannon limit in AWGN channels.
Abstract: For high data rate transmission over wireless fading channels, space-time block codes provide the maximal possible diversity advantage for multiple transmit antenna systems with a very simple decoding algorithm. To also achieve a significant coding gain, space-time block codes have to be concatenated with an outer code. We derive a symbol-by-symbol MAP decoding rule for space-time block codes. We describe two schemes for "turbo"-TCM which have been shown to achieve decoding results close to the Shannon limit in AWGN channels. These turbo-TCM schemes are concatenated with space-time block codes. The MAP decoding algorithm is described. We also discuss a feedback to the space-time block decoder. Significant coding gain in addition to the diversity advantage is shown to be achieved, while the decoding complexity is mainly determined by the trellis complexity of the outer code.
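
As a concrete instance of the "very simple decoding algorithm", here is the classic two-antenna Alamouti space-time block code with its linear combining receiver: a textbook sketch assuming flat fading and perfect channel knowledge, with an arbitrary QPSK constellation and noise level. The paper's MAP rule would replace the hard symbol decisions below with soft outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, size=2)

# One flat-fading coefficient per transmit antenna, plus receiver noise
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
n1, n2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) * 0.1

# Two channel uses: the antennas send (s1, s2), then (-s2*, s1*)
r1 = h1 * s1 + h2 * s2 + n1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2

# Linear combining decouples the symbols (the "very simple" decoder):
# y1 = (|h1|^2 + |h2|^2) * s1 + noise, and similarly for y2
y1 = np.conj(h1) * r1 + h2 * np.conj(r2)
y2 = np.conj(h2) * r1 - h1 * np.conj(r2)

gain = abs(h1) ** 2 + abs(h2) ** 2
print(qpsk[np.argmin(np.abs(y1 - gain * qpsk))] == s1)   # True
print(qpsk[np.argmin(np.abs(y2 - gain * qpsk))] == s2)   # True
```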

169 citations


Proceedings ArticleDOI
05 Dec 1999
TL;DR: A solution is proposed which uses a modified concatenation scheme, in which the positions of the modulation and error-correcting codes are reversed; a soft decoder based on the BCJR algorithm is introduced for the modulation constraint, and improved performance is obtained by iterating with this soft constraint decoder.
Abstract: Soft iterative decoding of turbo codes and low-density parity check codes has been shown to offer significant improvements in performance. To apply soft iterative decoding to digital recorders, where binary modulation constraints are often used, modifications must be made to allow reliability information to be accessible by the decoder. A solution is proposed which uses a modified concatenation scheme, in which the positions of the modulation and error-correcting codes are reversed. In addition, a soft decoder based on the BCJR algorithm is introduced for the modulation constraint, and improved performance is obtained by iterating with this soft constraint decoder.

119 citations


Patent
TL;DR: Decoding ambiguities are identified and at least partially resolved intermediate to the language decoding procedures to reduce the subsequent number of final decoding alternatives, and the user is questioned about identified decoding ambiguities as they are being decoded.
Abstract: A method of language recognition wherein decoding ambiguities are identified and at least partially resolved intermediate to the language decoding procedures to reduce the subsequent number of final decoding alternatives. The user is questioned about identified decoding ambiguities as they are being decoded. There are two language decoding levels: fast match and detailed match. During the fast match decoding level, a large list of potential candidates is generated very quickly. Then, during the more comprehensive (and slower) detailed match decoding level, the fast match candidate list is applied to the ambiguity to reduce the potential selections for final recognition. During the detailed match decoding level a unique candidate is selected for decoding. Decoding may be interactive and, as each ambiguity is encountered, recognition is suspended to present questions to the user that will discriminate between potential response classes. Thus, recognition performance and accuracy are improved by interrupting recognition, intermediate to the decoding process, and allowing the user to select appropriate response classes to narrow the number of final decoding alternatives.
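
The two-level structure can be illustrated with a toy two-pass recognizer: a cheap fast-match score prunes the vocabulary to a short candidate list, and an expensive detailed-match score is evaluated only on that list. Both scorers and the vocabulary are hypothetical stand-ins, not the patent's acoustic models.

```python
VOCAB = ["decode", "decoder", "decoding", "code", "coding", "cat"]

def fast_match_score(word, observation):
    # Cheap, coarse score used only to build the candidate list
    return -abs(len(word) - len(observation))

def detailed_match_score(word, observation):
    # Expensive, fine-grained score, run only on the shortlist
    return sum(a == b for a, b in zip(word, observation))

def recognize(observation, shortlist_size=3):
    shortlist = sorted(VOCAB, key=lambda w: fast_match_score(w, observation),
                       reverse=True)[:shortlist_size]
    return max(shortlist, key=lambda w: detailed_match_score(w, observation))

print(recognize("decodin"))   # -> "decoding"
```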

112 citations


Journal ArticleDOI
TL;DR: This work designs algorithms for list decoding of algebraic-geometric codes which can decode beyond the conventional error-correction bound (d-1)/2, d being the minimum distance of the code.
Abstract: We generalize Sudan's (see J. Compl., vol.13, p.180-93, 1997) results for Reed-Solomon codes to the class of algebraic-geometric codes, designing algorithms for list decoding of algebraic-geometric codes which can decode beyond the conventional error-correction bound (d-1)/2, d being the minimum distance of the code. Our main algorithm is based on an interpolation scheme and factorization of polynomials over algebraic function fields. For the latter problem we design a polynomial-time algorithm and show that the resulting overall list-decoding algorithm runs in polynomial time under some mild conditions. Several examples are included.

99 citations


Journal ArticleDOI
TL;DR: Computer simulations assuming a turbo-coded W-CDMA mobile radio reverse link under frequency-selective Rayleigh fading demonstrate that when the maximum number of iterations is 8, the average number of decoding iterations can be reduced to 1/4 at BER=10^-6.
Abstract: The average number of decoding iterations in a turbo decoder is reduced by incorporating CRC error detection into the decoding iteration process. Turbo decoding iterations are stopped when CRC decoding determines that there is no error in the decoded data sequence. Computer simulations assuming a turbo-coded W-CDMA mobile radio reverse link under frequency-selective Rayleigh fading demonstrate that when the maximum number of iterations is 8, the average number of decoding iterations can be reduced to 1/4 at BER=10^-6.
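
A sketch of the CRC-aided stopping idea: hard-decide the data after each iteration and stop as soon as the frame CRC checks. `turbo_iteration` is a placeholder for a real constituent decoder, and CRC-32 merely stands in for the CRC the actual system would specify.

```python
import binascii

def crc_ok(bits):
    # Assumed frame layout: data bits followed by a 32-bit CRC
    data, crc = bits[:-32], bits[-32:]
    received = int("".join(map(str, crc)), 2)
    packed = int("".join(map(str, data)), 2).to_bytes((len(data) + 7) // 8, "big")
    return binascii.crc32(packed) == received

def decode_with_crc_stopping(llr, turbo_iteration, max_iters=8):
    for it in range(1, max_iters + 1):
        llr = turbo_iteration(llr)               # one full turbo iteration
        hard = [1 if l < 0 else 0 for l in llr]  # hard decisions on the LLRs
        if crc_ok(hard):
            break                                # frame checks: stop early
    return hard, it
```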

96 citations


Book ChapterDOI
15 Aug 1999
TL;DR: In this article, the authors describe new methods for fast correlation attacks on stream ciphers, based on techniques used for constructing and decoding the by now famous turbo codes, consisting of two parts, a preprocessing part and a decoding part.
Abstract: This paper describes new methods for fast correlation attacks on stream ciphers, based on techniques used for constructing and decoding the by now famous turbo codes. The proposed algorithm consists of two parts, a preprocessing part and a decoding part. The preprocessing part identifies several parallel convolutional codes, embedded in the code generated by the LFSR, all sharing the same information bits. The decoding part then finds the correct information bits through an iterative decoding procedure. This provides the initial state of the LFSR.

93 citations


Journal Article
01 Jan 1999
TL;DR: Mutual information transfer characteristics for soft in/soft out decoders are proposed as a tool to better understand the convergence behavior of iterative decoding schemes.
Abstract: Mutual information transfer characteristics for soft in/soft out decoders are proposed as a tool to better understand the convergence behavior of iterative decoding schemes. The exchange of extrinsic information is visualized as a decoding trajectory in the Extrinsic Information Transfer Chart. This allows the prediction of turbo cliff position and bit error rate after an arbitrary number of iterations. The influence of code memory, generator polynomials as well as different constituent codes on the convergence behavior is studied for parallel concatenated codes.
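
The mutual-information measure underlying such transfer characteristics can be estimated by Monte Carlo under the usual "consistent Gaussian" model of extrinsic LLRs. The sketch below, with assumed parameters, computes the quantity that is plotted on both axes of such a transfer chart.

```python
import numpy as np

def extrinsic_mutual_information(sigma, n=200_000, seed=0):
    # Model extrinsic LLRs for X = +/-1 as consistent Gaussians:
    # L ~ N(x * sigma^2 / 2, sigma^2), then estimate I(X; L) by Monte Carlo.
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=n)
    llr = sigma ** 2 / 2 * x + sigma * rng.normal(size=n)
    # For equiprobable inputs: I(X;L) = 1 - E[log2(1 + exp(-x*L))]
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * llr)))

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(sigma, round(extrinsic_mutual_information(sigma), 3))
```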

Journal ArticleDOI
TL;DR: The standard union bound is applied to turbo-coded modulation systems with maximum-likelihood decoding, based on "uniform interleaving" just as its counterpart for standard turbo coding, and provides a tool for comparing coded modulation schemes having different component codes, interleaver lengths, mappings, etc., using maximum- likelihood decoding.
Abstract: We apply the standard union bound to turbo-coded modulation systems with maximum-likelihood decoding. To illustrate the methodology, we explicitly derive the bounds for the 2-bits/s/Hz 16 QAM system. Generalization of this bound to other turbo-coded modulation systems is straightforward. As in the case of the standard union bound for turbo codes, we expect these bounds to be useful for rather large values of signal-to-noise ratios, i.e., signal-to-noise ratios for which the code rate is smaller than the corresponding cutoff rate. The bound is based on "uniform interleaving" just as its counterpart for standard turbo coding. The derived bound provides a tool for comparing coded modulation schemes having different component codes, interleaver lengths, mappings, etc., using maximum-likelihood decoding. It is also useful in studying the effectiveness of various suboptimal decoding algorithms. The bounding technique is also applicable to other coded-modulation schemes such as serially concatenated coded modulation.
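
For the simpler binary/BPSK case, the union-bound machinery the paper builds on takes the familiar form P_b <= sum_d B_d * Q(√(2dR * Eb/N0)); the sketch below evaluates it from a distance spectrum. The spectrum coefficients here are invented for illustration; the paper's actual bound is derived for 16-QAM turbo-coded modulation with uniform interleaving.

```python
import math

def qfunc(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_ber(spectrum, rate, ebno_db):
    # spectrum: {d: B_d}, B_d = information-bit multiplicity at distance d
    ebno = 10 ** (ebno_db / 10)
    return sum(b_d * qfunc(math.sqrt(2 * d * rate * ebno))
               for d, b_d in spectrum.items())

spectrum = {6: 2e-2, 8: 1e-1, 10: 5e-1}   # made-up B_d values, illustration only
for snr_db in (2.0, 3.0, 4.0):
    print(snr_db, union_bound_ber(spectrum, rate=0.5, ebno_db=snr_db))
```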

Journal ArticleDOI
Simon Litsyn1
TL;DR: New upper bounds on the error exponents for the maximum-likelihood decoding and error detecting in the binary symmetric channels are derived based on an analysis of possible distance distributions of the codes along with some inequalities relating the distance distributions to the error probabilities.
Abstract: We derive new upper bounds on the error exponents for the maximum-likelihood decoding and error detecting in the binary symmetric channels. This is an improvement on the best earlier known bounds by Shannon-Gallager-Berlekamp (1967) and McEliece-Omura (1977). For the probability of undetected error the new bounds are better than the bounds by Levenshtein (1978, 1989) and the bound by Abdel-Ghaffar (see ibid., vol.43, p.1489-502, 1997). Moreover, we further extend the range of rates where the undetected error exponent is known to be exact. The new bounds are based on an analysis of possible distance distributions of the codes along with some inequalities relating the distance distributions to the error probabilities.

Journal ArticleDOI
TL;DR: This article gives a tutorial introduction to research on the iterative decoding of state-of-the-art error correcting codes such as turbo codes, and it is estimated that analog decoder can outperform digital decoders by two orders of magnitude in speed and/or power consumption.
Abstract: The iterative decoding of state-of-the-art error correcting codes such as turbo codes is computationally demanding. It is argued that analog implementations of such decoders can be much more efficient than digital implementations. This article gives a tutorial introduction to research on this topic. It is estimated that analog decoders can outperform digital decoders by two orders of magnitude in speed and/or power consumption.

Journal ArticleDOI
TL;DR: A recursive implementation of optimal soft decoding for vector quantization over noisy channels with finite memory is presented, together with a lower-complexity approach to suboptimal decoding based on a generalization of the Viterbi algorithm.
Abstract: We provide a general treatment of optimal soft decoding for vector quantization over noisy channels with finite memory. The main result is a recursive implementation of optimal decoding. We also consider a lower-complexity approach to suboptimal decoding, based on a generalization of the Viterbi algorithm. Finally, we treat the problem of combined encoder-decoder design. Simulations compare the new decoders to a decision-based approach that uses Viterbi detection plus table-lookup decoding. Optimal soft decoding significantly outperforms the benchmark decoder. The introduced suboptimal decoder is able to perform close to the optimal one and to outperform the benchmark scheme at comparable complexity.
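
A minimal sketch of the recursive structure, assuming a toy one-dimensional VQ, i.i.d. indices, and a two-state Gilbert-Elliott-style channel (all parameters invented): a forward recursion tracks the channel-state belief, and each source vector is estimated as the posterior mean over codevectors.

```python
import numpy as np

codebook = np.array([[-1.0], [0.0], [1.0]])   # toy 1-D VQ codevectors
K = len(codebook)
P_index = np.full(K, 1.0 / K)                 # i.i.d. source indices (assumed)
A = np.array([[0.95, 0.05],                   # channel-state transitions
              [0.10, 0.90]])
eps = np.array([0.01, 0.20])                  # BSC crossover per state

def index_bits(i):
    return np.array([(i >> 1) & 1, i & 1])    # 2-bit labels for the indices

def likelihood(r_bits, i, s):
    # P(received bits | index i, channel state s) for a BSC in state s
    flips = np.sum(r_bits != index_bits(i))
    return eps[s] ** flips * (1 - eps[s]) ** (len(r_bits) - flips)

def soft_decode(received):
    alpha = np.full(2, 0.5)                   # forward belief over states
    estimates = []
    for r in received:
        joint = np.array([[P_index[i] * alpha[s] * likelihood(r, i, s)
                           for s in range(2)] for i in range(K)])
        joint /= joint.sum()
        estimates.append(joint.sum(axis=1) @ codebook)  # posterior mean
        alpha = joint.sum(axis=0) @ A         # propagate state belief forward
    return estimates

print(soft_decode([np.array([0, 1]), np.array([1, 1])]))
```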

Proceedings ArticleDOI
01 May 1999
TL;DR: A unified framework for the derivation of efficient list decoding algorithms for algebraic-geometric codes is developed using methods originating in numerical analysis, and appropriate displacement operators are derived for matrices that occur in the context of list decoding.
Abstract: Using methods originating in numerical analysis, we will develop a unified framework for derivation of efficient list decoding algorithms for algebraic-geometric codes. We will demonstrate our method by accelerating Sudan's list decoding algorithm for Reed-Solomon codes [22], its generalization to algebraic-geometric codes by Shokrollahi and Wasserman [21], and the recent improvement of Guruswami and Sudan [8] in the case of Reed-Solomon codes. The basic problem we attack in this paper is that of efficiently finding nonzero elements in the kernel of a structured matrix. The structure of such an n × n matrix allows it to be "compressed" to αn parameters for some α which is usually a constant in applications. The concept of structure is formalized using the displacement operator, which allows matrix operations to be performed on the compressed version of the matrix. In particular, we can find a PLU-decomposition of the original matrix in time O(αn^2), which is quadratic in n for constant α. We will derive appropriate displacement operators for matrices that occur in the context of list decoding, and apply our general algorithm to them. For example, we will obtain algorithms that use O(n^2 l) and O(n^(7/3) l) operations over the base field for list decoding of Reed-Solomon codes and algebraic-geometric codes from certain plane curves, respectively, where l is the length of the list. Assuming that l is constant, this gives algorithms of running time O(n^2) and O(n^(7/3)), which is the same as the running time of conventional decoding algorithms. We will also sketch methods to parallelize our algorithms.
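
The displacement idea is easy to see numerically: for a Cauchy-like matrix the displaced matrix has tiny rank, so the full n × n matrix is determined by O(n) parameters. A small numpy demonstration with arbitrary node choices:

```python
import numpy as np

n = 6
x = np.arange(1, n + 1, dtype=float)          # arbitrary distinct nodes
y = -np.arange(1, n + 1, dtype=float)
C = 1.0 / (x[:, None] - y[None, :])           # Cauchy matrix C[i,j] = 1/(x_i - y_j)

# Displacement: diag(x) @ C - C @ diag(y) equals the all-ones matrix,
# so the displaced matrix has rank 1 and C is described by O(n) parameters.
disp = np.diag(x) @ C - C @ np.diag(y)
print(np.linalg.matrix_rank(disp))            # -> 1
```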

Patent
18 Feb 1999
TL;DR: Cyclic shifting of codewords is applied in the context of iterative soft-decision-in/soft-decision-out decoding to maximize the usefulness of a parity equation corresponding to any particular codeword bit.
Abstract: Systems and methods for augmenting the performance of iterative soft-decision-in/soft-decision-out decoding of block codes with extrinsic information based on multiple parity equations inherent to the block codes. Cyclic shifting of codewords may be applied in the context of iterative soft-decision-in/soft-decision-out decoding to maximize the usefulness of a parity equation corresponding to any particular codeword bit. Soft decisions are determined on a bit-by-bit basis in response to multi-bit symbol measurements. This allows the use of relatively inexpensive bit-based decoders for decoding of multi-bit symbols.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: The results show that the approximate MAP technique from Park et al. outperforms other approximate methods and provides substantial error protection to variable-length encoded data.
Abstract: Joint source-channel decoding based on residual source redundancy is an effective paradigm for error-resilient data compression. While previous work only considered fixed rate systems, the extension of these techniques for variable-length encoded data was previously independently proposed by the authors, Park and Miller (see Proc. of Conf. on Info. Sciences and Systems, Princeton, N.J., 1998) and by Demir and Sayood (see Proc. of the Data Compression Conf., Snowbird, U.T., p.139-48, 1998). In this paper, we describe and compare the performance of a computationally complex exact maximum a posteriori (MAP) decoder, its efficient approximation, an alternative approximate MAP decoder, and an improved version of this decoder suggested here. Moreover, we evaluate several source and channel coding configurations. Our results show that the approximate MAP technique from Park et al. outperforms other approximate methods and provides substantial error protection to variable-length encoded data.

Book ChapterDOI
TL;DR: An efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance is presented, including a reduction of the problem of factoring the Q-polynomial to the problem of factoring a univariate polynomial over a large finite field.
Abstract: We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduction of the problem of factoring the Q-polynomial to the problem of factoring a univariate polynomial over a large finite field.

Journal ArticleDOI
TL;DR: A construction of uniquely decodable codes for the two-user binary adder channel whose rates are greater than the rates guaranteed by the Coebergh van den Braak and van Tilborg construction, and which can be used with simple encoding and decoding procedures.
Abstract: A construction of uniquely decodable codes for the two-user binary adder channel is presented. The rates of the codes obtained by this construction are greater than the rates guaranteed by the Coebergh van den Braak and van Tilborg construction and these codes can be used with simple encoding and decoding procedures.
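
Unique decodability for the two-user binary adder channel means every pair of codewords must produce a distinct ternary sum vector, which is easy to verify by brute force for small codes. The pair below is the classic small example with rate sum 1/2 + log2(3)/2 ≈ 1.29 > 1, shown only to illustrate the definition; it is not the paper's construction.

```python
from itertools import product

def uniquely_decodable(C1, C2):
    # Every pair (c1, c2) must yield a distinct ternary sum vector
    sums = [tuple(a + b for a, b in zip(c1, c2))
            for c1, c2 in product(C1, C2)]
    return len(sums) == len(set(sums))

C1 = [(0, 0), (1, 1)]               # rate 1/2
C2 = [(0, 0), (0, 1), (1, 0)]       # rate log2(3)/2
print(uniquely_decodable(C1, C2))   # -> True
```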

Proceedings ArticleDOI
29 Mar 1999
TL;DR: Soft-input VLC decoding is free from the risk of terminating the decoding in an unsynchronized state, and it offers the possibility to exploit a priori knowledge, if available, of the number of symbols contained in the packet.
Abstract: We present a method for utilizing soft information in decoding of variable length codes (VLCs). When compared with traditional VLC decoding, which is performed using "hard" input bits and a state machine, soft-input VLC decoding offers improved performance in terms of packet and symbol error rates. Soft-input VLC decoding is free from the risk, encountered in hard decision VLC decoders in noisy environments, of terminating the decoding in an unsynchronized state, and it offers the possibility to exploit a priori knowledge, if available, of the number of symbols contained in the packet.
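
One way to see what soft-input VLC decoding buys: with per-bit LLRs the decoder can pick, by dynamic programming, the parse of the whole packet into codewords that maximizes the total bit log-likelihood and ends exactly on the packet boundary, rather than committing to hard bits one codeword at a time. The codebook and LLR values below are illustrative, not taken from the paper.

```python
import math

CODEBOOK = {"a": (0,), "b": (1, 0), "c": (1, 1)}

def soft_decode(llrs):                  # llr > 0 favors bit value 0
    def bit_ll(pos, bit):               # log P(bit) from the channel LLR
        return -math.log1p(math.exp(-llrs[pos] if bit == 0 else llrs[pos]))

    n = len(llrs)
    best = [(-math.inf, None)] * (n + 1)
    best[0] = (0.0, [])
    for i in range(n):                  # DP over bit positions
        if best[i][0] == -math.inf:
            continue
        for sym, word in CODEBOOK.items():
            j = i + len(word)
            if j > n:
                continue
            score = best[i][0] + sum(bit_ll(i + t, b) for t, b in enumerate(word))
            if score > best[j][0]:
                best[j] = (score, best[i][1] + [sym])
    return best[n][1]                   # parse ends exactly at the boundary

print(soft_decode([2.0, -1.5, -0.2, 3.0]))   # -> ['a', 'c', 'a']
```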

Proceedings ArticleDOI
29 Mar 1999
TL;DR: This paper introduces the use of iterative decoding techniques similar to those used in "turbo" decoding to decode multiple correlated descriptions transmitted over a noisy channel, demonstrating that there is an optimal amount of redundancy or correlation for a given channel state.
Abstract: This paper considers the transmission of multiple descriptions over noisy channels rather than the on-off channels that are traditionally considered. We introduce the use of iterative decoding techniques similar to those used in "turbo" decoding to decode multiple correlated descriptions transmitted over a noisy channel. For a given transmission rate per channel and a given channel state, the efficacy of iterative decoding depends on the correlatedness of the two descriptions produced by the multiple description encoder. We demonstrate that there is an optimal amount of redundancy or correlation for a given channel state. Hence, multiple description codes may also be viewed as joint source-channel codes.

Journal ArticleDOI
TL;DR: A decoding algorithm for q-ary linear codes, called supercode decoding, is suggested; its asymptotic complexity is exponentially smaller than that of all other known methods, and it develops the ideas of covering-set decoding and split syndrome decoding.
Abstract: We suggest a decoding algorithm for q-ary linear codes, which we call supercode decoding. It ensures an error probability that approaches the error probability of minimum-distance decoding as the length of the code grows. For n → ∞ the algorithm has maximum-likelihood performance. The asymptotic complexity of supercode decoding is exponentially smaller than the complexity of all other known methods. The algorithm develops the ideas of covering-set decoding and split syndrome decoding.

Proceedings ArticleDOI
16 May 1999
TL;DR: An iterative decoding suitability measure is presented, intended to serve as an indication of the degree of correlation between extrinsic inputs; it can be used as a complement to the weight distribution when ranking interleavers.
Abstract: The performance of a turbo code is dependent on two properties of the code: its distance spectrum and its suitability to be iteratively decoded. The performance of iterative decoding depends on the quality of the extrinsic inputs; badly correlated extrinsic inputs can deteriorate the performance. While most turbo coding literature assumes that the extrinsic information is uncorrelated, we investigate these correlation properties. An iterative decoding suitability measure is presented, intended to serve as an indication of the degree of correlation between extrinsic inputs. The suitability measure can be used as a complement to the weight distribution when ranking interleavers.

Journal ArticleDOI
TL;DR: A simple and efficient error correction scheme for array-like data structures that can be used for correction of error clusters and for decoding of concatenated codes and a random access scheme that has many similarities with the Aloha system.
Abstract: We present a simple and efficient error correction scheme for array-like data structures. We assume that the channel behaves such that each row of a received array is either error-free or corrupted by many symbol errors. Provided that row error vectors are linearly independent, the proposed decoding algorithm can correct asymptotically one erroneous row per redundant row, even without having reliability information from the channel output. This efficient decoding algorithm can be used for correction of error clusters and for decoding of concatenated codes. We also derive a random access scheme that has many similarities with the Aloha system.

Proceedings ArticleDOI
21 Sep 1999
TL;DR: Simulation results presented show that the proposed techniques can result in significant improvement in decoding performance, and are extended to address the problem of error propagation, an inherent problem with using VLCs.
Abstract: Digital communications systems commonly use compression (source coding) and error control (channel coding) to allow efficient and robust transmission of data over noisy channels. When compression is imperfect, some residual redundancy remains in the transmitted data and can be exploited at the decoder to improve the decoder's probability-of-error performance. A new approach to joint source-channel maximum a posteriori probability (MAP) decoding applicable to systems employing variable-length source codes (VLCs) was previously developed by the authors-the resulting joint decoder's structure is similar to that of the conventional Viterbi decoder. This paper extends the authors' previous work to address the problem of error propagation, an inherent problem with using VLCs. Options considered include list decoding, trellis-pruning, and composite schemes. Simulation results presented show that the proposed techniques can result in significant improvement in decoding performance.

Journal ArticleDOI
TL;DR: An exhaustive treatment of various coding and decoding techniques for use in fast frequency-hopping/multiple frequency shift keying multiple-access systems, showing how reliability information on each received bit can be derived to enable soft-decision decoding.
Abstract: In this contribution we present an exhaustive treatment of various coding and decoding techniques for use in fast frequency-hopping/multiple frequency shift keying multiple-access systems. One of the main goals is to show how reliability information on each received bit can be derived to enable soft-decision decoding. Convolutional codes as well as turbo codes are considered, applying soft-decision, erasure, and hard-decision decoding. Their performance is compared to that of previously proposed Reed-Solomon codes with either errors-only or errors-and-erasures decoding. A mobile radio environment yielding a frequency-selective fading channel is assumed. It is shown that turbo codes and convolutional codes with soft-decision decoding can support a number of simultaneously transmitting users comparable to Reed-Solomon codes with errors-and-erasures decoding. Furthermore, the advantage of soft decisions, which can be applied to a wide and growing range of channel codes, is shown. The pertinent technique of calculating soft decisions is described in the paper.

Proceedings ArticleDOI
Bonghoe Kim1, Hwang Soo Lee
15 Sep 1999
TL;DR: An efficient turbo decoding algorithm is proposed that greatly reduces decoding delay and computation by using the extrinsic information as a reliability measure for each decoded bit.
Abstract: Turbo decoding is done in an iterative manner, which incurs delay and computational complexity when decoding the input data. We propose an efficient algorithm for decoding turbo codes that can greatly reduce the delay and computation. It uses the extrinsic information as the reliability measure of each decoded bit. When the variance of the extrinsic information exceeds a threshold, the decoding iterations stop, with negligible performance loss compared to the conventional decoding algorithm.
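
A compact sketch of the variance-based rule; the threshold value and the `turbo_iteration` callable are illustrative placeholders, not the paper's tuned parameters.

```python
import numpy as np

def decode_with_variance_stopping(llr, extrinsic, turbo_iteration,
                                  max_iters=8, threshold=25.0):
    # Stop once the empirical variance of the extrinsic information exceeds
    # a threshold: large extrinsic magnitudes indicate reliable decisions.
    for it in range(1, max_iters + 1):
        llr, extrinsic = turbo_iteration(llr, extrinsic)
        if np.var(extrinsic) > threshold:
            break
    return llr, it
```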

Patent
08 Dec 1999
TL;DR: An encoding apparatus and a decoding apparatus for iterative decoding of multidimensionally coded information are presented; the average processing delay and computational complexity are significantly reduced, since only the required number of iteration steps is adaptively performed in the decoding apparatus.
Abstract: The present invention provides an encoding apparatus, comprising means (21) for generating a checksum for incoming data, means (22) for constructing frames on the basis of said incoming data and said generated checksum, and means (23) for multidimensionally coding said frames. Further, the present invention comprises a decoding apparatus for iterative decoding of multidimensionally coded information, comprising means (28) for performing at least one decoding iteration on multidimensionally coded information, and means (32) for checking the decoded information after each decoding iteration and for causing said decoding iteration means (28) to perform a further decoding iteration on the basis of a checking result. The present invention further comprises the corresponding encoding method and decoding method. The average processing delay and the computational complexity are significantly reduced, since only the required number of iteration steps is adaptively performed in the decoding apparatus.