
Showing papers on "Sequential decoding published in 1999"


Journal ArticleDOI
TL;DR: An improved list decoding algorithm for decoding Reed-Solomon codes, alternant codes, and algebraic-geometry codes is presented, along with a solution to a weighted curve-fitting problem, which may be of use in soft-decision decoding algorithms for Reed-Solomon codes.
Abstract: Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: given n points (x/sub i/, y/sub i/)/sub i=1//sup n/, x/sub i/, y/sub i//spl isin/F, and a degree parameter k and error parameter e, find all univariate polynomials p of degree at most k such that y/sub i/=p(x/sub i/) for all but at most e values of i/spl isin/{1,...,n}. We give an algorithm that solves this problem for e<n-/spl radic/(kn), which improves over previous results for every choice of k and n. Of particular interest is the case of k/n>1/3, where the result yields the first asymptotic improvement in four decades. The algorithm generalizes to solve the list decoding problem for other algebraic codes, specifically alternant codes (a class of codes including BCH codes) and algebraic-geometry codes. In both cases, we obtain a list decoding algorithm that corrects up to n-/spl radic/(n(n-d')) errors, where n is the block length and d' is the designed distance of the code. The improvement for the case of algebraic-geometry codes extends the methods of Shokrollahi and Wasserman (see in Proc. 29th Annu. ACM Symp. Theory of Computing, p.241-48, 1998) and improves upon their bound for every choice of n and d'. We also present some other consequences of our algorithm including a solution to a weighted curve-fitting problem, which may be of use in soft-decision decoding algorithms for Reed-Solomon codes.
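
The "curve-fitting" formulation is concrete enough to illustrate directly. Below is a minimal brute-force sketch of the problem statement (not the paper's interpolation-based algorithm): over a small prime field it enumerates every polynomial of degree at most k and keeps those disagreeing with at most e of the points.

```python
# Brute-force illustration over a small prime field GF(p); assumes the
# x-values are field elements. Exponential in k -- demonstration only.
from itertools import product

def list_decode_bruteforce(points, k, e, p):
    """Return all degree-<=k polynomials over GF(p), as coefficient
    tuples (low degree first), that disagree with at most e points."""
    result = []
    for coeffs in product(range(p), repeat=k + 1):
        bad = sum(
            1 for x, y in points
            if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p != y
        )
        if bad <= e:
            result.append(coeffs)
    return result

# Evaluations of p(x) = 1 + 2x over GF(5), with the point at x=3 corrupted.
pts = [(0, 1), (1, 3), (2, 0), (3, 1)]
cands = list_decode_bruteforce(pts, k=1, e=1, p=5)  # recovers (1, 2)
```

The interpolation algorithms of the paper solve the same problem in polynomial time; the brute-force version above only fixes what "solving" means.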

1,108 citations


Journal ArticleDOI
TL;DR: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed, which greatly simplifies the decoding complexity of belief propagation.
Abstract: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly simplifies the decoding complexity of belief propagation in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge about the channel characteristics. Both algorithms yield a good performance-complexity trade-off and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.
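
The flavor of such simplifications can be seen in the standard check-node update. The sketch below compares the exact sum-product update with the common "min-sum" approximation, which needs only sign flips, comparisons, and additions, and no channel knowledge (a generic illustration, not necessarily the two specific algorithms of the paper):

```python
import math

def check_node_sum_product(llrs):
    """Exact sum-product check-node update (tanh rule): requires
    products of probabilities / transcendental functions."""
    prod = 1.0
    for l in llrs:
        prod *= math.tanh(l / 2.0)
    return 2.0 * math.atanh(prod)

def check_node_min_sum(llrs):
    """Min-sum approximation: product of signs times the smallest
    magnitude -- only comparisons and sign flips, independent of any
    channel parameter."""
    sign = 1.0
    for l in llrs:
        sign = sign if l >= 0 else -sign
    return sign * min(abs(l) for l in llrs)
```

The min-sum magnitude always upper-bounds the exact one, which is why practical simplified decoders often add a correction or scaling term.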

1,039 citations


Journal ArticleDOI
TL;DR: A class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes is presented, showing that for rate R=1/2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit.
Abstract: We present a class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes. The performance of this decoding is close to the performance of turbo decoding. Our simulation shows that for the rate R=1/2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit. As an example, we constructed convolutional codes with memory M=1025, 2049, and 4097, showing that we are about 1 dB from the capacity limit at a bit-error rate (BER) of 10/sup -5/ and a decoding complexity of the same magnitude as a Viterbi decoder for codes having memory M=10.

902 citations


Journal ArticleDOI
TL;DR: This paper presents two simple and effective criteria for stopping the iteration process in turbo decoding with a negligible degradation of the error performance based on the cross-entropy (CE) concept.
Abstract: This paper presents two simple and effective criteria for stopping the iteration process in turbo decoding with a negligible degradation of the error performance. Both criteria are devised based on the cross-entropy (CE) concept. They are as efficient as the CE criterion, but require much less and simpler computations.

364 citations


Journal ArticleDOI
TL;DR: It is shown that convolutional codes with good Hamming-distance property can provide both high diversity order and large free Euclidean distance for BICM-ID, which provides a simple mechanism for variable-rate transmission.
Abstract: This paper considers bit-interleaved coded modulation (BICM) for bandwidth-efficient transmission using software radios. A simple iterative decoding (ID) method with hard-decision feedback is suggested to achieve better performance. The paper shows that convolutional codes with good Hamming-distance property can provide both high diversity order and large free Euclidean distance for BICM-ID. The method offers a common framework for coded modulation over channels with a variety of fading statistics. In addition, BICM-ID allows an efficient combination of punctured convolutional codes and multiphase/level modulation, and therefore provides a simple mechanism for variable-rate transmission.

249 citations


Proceedings ArticleDOI
06 Jun 1999
TL;DR: A symbol-by-symbol MAP decoding rule is derived for space-time block codes, together with two schemes for "turbo"-TCM which have been shown to achieve decoding results close to the Shannon limit in AWGN channels.
Abstract: For high data rate transmission over wireless fading channels, space-time block codes provide the maximal possible diversity advantage for multiple transmit antenna systems with a very simple decoding algorithm. To also achieve a significant coding gain, space-time block codes have to be concatenated with an outer code. We derive a symbol-by-symbol MAP decoding rule for space-time block codes. We describe two schemes for "turbo"-TCM which have been shown to achieve decoding results close to the Shannon limit in AWGN channels. These turbo-TCM schemes are concatenated with space-time block codes. The MAP decoding algorithm is described. We also discuss a feedback to the space-time block decoder. A significant coding gain in addition to the diversity advantage is shown to be achieved, while the decoding complexity is mainly determined by the trellis complexity of the outer code.

169 citations


Journal ArticleDOI
TL;DR: Iterative demodulation and decoding of convolutionally encoded data is treated as a special case of the previously proposed serial concatenation of interleaved codes and it is shown that by exploiting the recursive nature of the differential modulation schemes, large interleaving gains can be achieved similar to serial Concatenation schemes.
Abstract: Iterative demodulation and decoding of convolutionally encoded data is treated as a special case of the previously proposed serial concatenation of interleaved codes. It is shown that by exploiting the recursive nature of the differential modulation schemes (for example, DBPSK, DQPSK, CPM, etc.), large interleaving gains can be achieved similar to serial concatenation schemes. We also show that when memoryless modulation is used, precoding can be used to create a rate-1 recursive inner code in order to obtain interleaving gains without adding redundancy from the inner code.
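
The rate-1 recursive inner code mentioned above can be illustrated with the simplest case, a binary accumulator 1/(1+D); a minimal sketch (a generic illustration, not necessarily the paper's exact precoder):

```python
def accumulate(bits):
    """Rate-1 recursive inner code 1/(1+D): each output bit is the XOR
    of the current input bit with the previous output bit (an
    accumulator / differential encoder)."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def differential_decode(bits):
    """Inverse filter (1+D): recovers the original input bits."""
    out, prev = [], 0
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out
```

Because each output bit depends on the entire past input, an interleaver placed before this recursive stage can earn an interleaving gain, which is the point made above, without the inner code adding any redundancy.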

143 citations


Journal ArticleDOI
TL;DR: It is shown that quaternary codes can be advantageous, both from performance and complexity standpoints, but that higher-order codes may not bring further improvement.
Abstract: The authors consider the use of non-binary convolutional codes in turbo coding. It is shown that quaternary codes can be advantageous, both from performance and complexity standpoints, but that higher-order codes may not bring further improvement.

134 citations


Proceedings ArticleDOI
05 Dec 1999
TL;DR: A solution is proposed which uses a modified concatenation scheme, in which the positions of the modulation and error-correcting codes are reversed, and improved performance is obtained by iterating with this soft constraint decoder.
Abstract: Soft iterative decoding of turbo codes and low-density parity check codes has been shown to offer significant improvements in performance. To apply soft iterative decoding to digital recorders, where binary modulation constraints are often used, modifications must be made to allow reliability information to be accessible by the decoder. A solution is proposed which uses a modified concatenation scheme, in which the positions of the modulation and error-correcting codes are reversed. In addition, a soft decoder based on the BCJR algorithm is introduced for the modulation constraint, and improved performance is obtained by iterating with this soft constraint decoder.

119 citations


Patent
TL;DR: In this article, decoding ambiguities are identified and at least partially resolved intermediate to the language decoding procedures to reduce the subsequent number of final decoding alternatives, where the user is questioned about identified decoding ambiguity as they are being decoded.
Abstract: A method of language recognition wherein decoding ambiguities are identified and at least partially resolved intermediate to the language decoding procedures to reduce the subsequent number of final decoding alternatives. The user is questioned about identified decoding ambiguities as they are being decoded. There are two language decoding levels: fast match and detailed match. During the fast match decoding level a large potential candidate list is generated, very quickly. Then, during the more comprehensive (and slower) detailed match decoding level, the fast match candidate list is applied to the ambiguity to reduce the potential selections for final recognition. During the detailed match decoding level a unique candidate is selected for decoding. Decoding may be interactive and, as each ambiguity is encountered, recognition suspended to present questions to the user that will discriminate between potential response classes. Thus, recognition performance and accuracy are improved by interrupting recognition, intermediate to the decoding process, and allowing the user to select appropriate response classes to narrow the number of final decoding alternatives.
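
The two-level fast match/detailed match idea can be sketched generically: a cheap score quickly builds a shortlist from a large vocabulary, and an expensive score decides among only the shortlisted candidates. All function hooks below are hypothetical placeholders, not the patent's actual scoring procedures.

```python
def recognize(utterance, vocab, fast_score, detailed_score, shortlist=10):
    """Two-level decoding: a fast-match pass narrows a large vocabulary
    to a candidate list; the slower detailed-match pass scores only the
    shortlisted candidates and picks the final answer."""
    candidates = sorted(vocab, key=lambda w: fast_score(utterance, w),
                        reverse=True)[:shortlist]
    return max(candidates, key=lambda w: detailed_score(utterance, w))
```

The patent's interactive-ambiguity step would slot in between the two passes: whenever the shortlist still contains competing response classes, recognition is suspended and the user is questioned to narrow it.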

112 citations


Journal ArticleDOI
TL;DR: This work designs algorithms for list decoding of algebraic geometric codes which can decode beyond the conventional error-correction bound (d-1)/2, d being the minimum distance of the code.
Abstract: We generalize Sudan's (see J. Compl., vol.13, p.180-93, 1997) results for Reed-Solomon codes to the class of algebraic-geometric codes, designing algorithms for list decoding of algebraic geometric codes which can decode beyond the conventional error-correction bound (d-1)/2, d being the minimum distance of the code. Our main algorithm is based on an interpolation scheme and factorization of polynomials over algebraic function fields. For the latter problem we design a polynomial-time algorithm and show that the resulting overall list-decoding algorithm runs in polynomial time under some mild conditions. Several examples are included.

Journal ArticleDOI
TL;DR: Computer simulations assuming a turbo-coded W-CDMA mobile radio reverse link under frequency selective Rayleigh fading demonstrate that when the maximum number of iterations is 8, the average number of decoding iterations can be reduced to 1/4 at BER=10/sup -6/.
Abstract: The average number of decoding iterations in a turbo decoder is reduced by incorporating CRC error detection into the decoding iteration process. Turbo decoding iterations are stopped when CRC decoding determines that there is no error in the decoded data sequence. Computer simulations assuming a turbo-coded W-CDMA mobile radio reverse link under frequency selective Rayleigh fading demonstrate that when the maximum number of iterations is 8, the average number of decoding iterations can be reduced to 1/4 at BER=10/sup -6/.
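
The CRC-based stopping rule is straightforward to sketch. In the toy below, `iterate` is a hypothetical stand-in for one turbo decoding iteration, and zlib's CRC-32 stands in for whatever CRC the W-CDMA link actually uses:

```python
import zlib

def with_crc(data: bytes) -> bytes:
    """Append a 32-bit CRC (zlib's CRC-32, as a stand-in)."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def crc_ok(frame: bytes) -> bool:
    return zlib.crc32(frame[:-4]) == int.from_bytes(frame[-4:], "big")

def decode_with_early_stop(iterate, frame, max_iters=8):
    """Run up to max_iters decoding iterations, stopping as soon as the
    CRC over the decoded data sequence checks out. `iterate` is a
    hypothetical hook performing one turbo decoding iteration."""
    for it in range(1, max_iters + 1):
        frame = iterate(frame)
        if crc_ok(frame):
            return frame, it
    return frame, max_iters
```

When the decoder converges early (here simulated as converging on the third iteration), the remaining iterations are skipped, which is the source of the reported reduction in average iteration count.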

Book ChapterDOI
15 Aug 1999
TL;DR: In this article, the authors describe new methods for fast correlation attacks on stream ciphers, based on techniques used for constructing and decoding the by now famous turbo codes, consisting of two parts, a preprocessing part and a decoding part.
Abstract: This paper describes new methods for fast correlation attacks on stream ciphers, based on techniques used for constructing and decoding the by now famous turbo codes. The proposed algorithm consists of two parts, a preprocessing part and a decoding part. The preprocessing part identifies several parallel convolutional codes, embedded in the code generated by the LFSR, all sharing the same information bits. The decoding part then finds the correct information bits through an iterative decoding procedure. This provides the initial state of the LFSR.

01 Jan 1999
TL;DR: Mutual information transfer characteristics for soft in/soft out decoders are proposed as a tool to better understand the convergence behavior of iterative decoding schemes.
Abstract: Mutual information transfer characteristics for soft in/soft out decoders are proposed as a tool to better understand the convergence behavior of iterative decoding schemes. The exchange of extrinsic information is visualized as a decoding trajectory in the Extrinsic Information Transfer Chart. This allows the prediction of turbo cliff position and bit error rate after an arbitrary number of iterations. The influence of code memory, generator polynomials as well as different constituent codes on the convergence behavior is studied for parallel concatenated codes.
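
The mutual information measurement underlying such transfer charts is commonly estimated by time-averaging over the transmitted bits and their soft values. A sketch of that standard estimator (the paper's exact measurement procedure may differ):

```python
import math

def extrinsic_mutual_info(tx_bits, llrs):
    """Time-average estimate of the mutual information between the
    transmitted bits (0/1) and their soft values (LLRs) -- the quantity
    plotted on the axes of an extrinsic information transfer chart.
    Sign convention: a positive LLR favors bit 0. Assumes ergodicity."""
    total = 0.0
    for b, l in zip(tx_bits, llrs):
        x = 1.0 if b == 0 else -1.0
        total += math.log2(1.0 + math.exp(-x * l))
    return 1.0 - total / len(tx_bits)
```

Uninformative soft values (all-zero LLRs) give 0 bits; confident, correct LLRs approach 1 bit. Plotting the output information of one constituent decoder against its input information yields the transfer characteristic, and alternating between the two decoders traces the decoding trajectory.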

Journal ArticleDOI
TL;DR: Using analog, non-linear and highly parallel networks, this work attempts to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals.
Abstract: Using analog, non-linear and highly parallel networks, we attempt to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals. This is in contrast to common practice where these tasks are performed by sequentially operating processors. Our advantage is that we operate fully on soft values for input and output, similar to what is done in 'turbo' decoding. However, we do not have explicit iterations because the networks float freely in continuous time. The decoder has almost no latency in time because we are only restricted by the time constants from the parasitic RC values of integrated circuits. Simulation results for several simple examples are shown which, in some cases, achieve the performance of a conventional MAP detector. For more complicated codes we indicate promising solutions with more complex analog networks based on the simple ones. Furthermore, we discuss the principles of the analog VLSI implementation of these networks.

Journal ArticleDOI
TL;DR: This article gives a tutorial introduction to research on the iterative decoding of state-of-the-art error correcting codes such as turbo codes, and it is estimated that analog decoders can outperform digital decoders by two orders of magnitude in speed and/or power consumption.
Abstract: The iterative decoding of state-of-the-art error correcting codes such as turbo codes is computationally demanding. It is argued that analog implementations of such decoders can be much more efficient than digital implementations. This article gives a tutorial introduction to research on this topic. It is estimated that analog decoders can outperform digital decoders by two orders of magnitude in speed and/or power consumption.

Journal ArticleDOI
TL;DR: A recursive implementation of optimal soft decoding for vector quantization over noisy channels with finite memory and an approach to suboptimal decoding, of lower complexity, being based on a generalization of the Viterbi algorithm are considered.
Abstract: We provide a general treatment of optimal soft decoding for vector quantization over noisy channels with finite memory. The main result is a recursive implementation of optimal decoding. We also consider an approach to suboptimal decoding, of lower complexity, being based on a generalization of the Viterbi algorithm. Finally, we treat the problem of combined encoder-decoder design. Simulations compare the new decoders to a decision-based approach that uses Viterbi detection plus table lookup decoding. Optimal soft decoding significantly outperforms the benchmark decoder. The introduced suboptimal decoder is able to perform close to the optimal and to outperform the benchmark scheme at a comparable complexity.

Proceedings ArticleDOI
01 May 1999
TL;DR: A unified framework for the derivation of efficient list decoding algorithms for algebraic-geometric codes is developed using methods originating in numerical analysis, and appropriate displacement operators for matrices that occur in the context of list decoding are derived.
Abstract: Using methods originating in numerical analysis, we will develop a unified framework for derivation of efficient list decoding algorithms for algebraic-geometric codes. We will demonstrate our method by accelerating Sudan's list decoding algorithm for Reed-Solomon codes [22], its generalization to algebraic-geometric codes by Shokrollahi and Wasserman [21], and the recent improvement of Guruswami and Sudan [8] in the case of Reed-Solomon codes. The basic problem we attack in this paper is that of efficiently finding nonzero elements in the kernel of a structured matrix. The structure of such an n/spl times/n matrix allows it to be "compressed" to /spl alpha/n parameters for some /spl alpha/ which is usually a constant in applications. The concept of structure is formalized using the displacement operator. The displacement operator allows one to perform matrix operations on the compressed version of the matrix. In particular, we can find a PLU-decomposition of the original matrix in time O(/spl alpha/n/sup 2/), which is quadratic in n for constant /spl alpha/. We will derive appropriate displacement operators for matrices that occur in the context of list decoding, and apply our general algorithm to them. For example, we will obtain algorithms that use O(n/sup 2/l) and O(n/sup 7/3/l) operations over the base field for list decoding of Reed-Solomon codes and algebraic-geometric codes from certain plane curves, respectively, where l is the length of the list. Assuming that l is constant, this gives algorithms of running time O(n/sup 2/) and O(n/sup 7/3/), which is the same as the running time of conventional decoding algorithms. We will also sketch methods to parallelize our algorithms.

Patent
18 Feb 1999
TL;DR: In this paper, cyclic shifting of codewords is applied in the context of iterative soft decision-in soft decision out decoding to maximize the usefulness of a parity equation corresponding to any particular codeword bit.
Abstract: Systems and methods for augmenting the performance of iterative soft decision-in soft decision-out decoding of block codes with extrinsic information based on multiple parity equations inherent to the block codes. Cyclic shifting of codewords may be applied in the context of iterative soft decision-in soft decision-out decoding to maximize the usefulness of a parity equation corresponding to any particular codeword bit. Soft decisions are determined on a bit-by-bit basis in response to multi-bit symbol measurements. This allows the use of relatively inexpensive bit-based decoders for decoding of multi-bit symbols.

Proceedings ArticleDOI
16 May 1999
TL;DR: The effects of quantization and fixed point arithmetic upon the log-MAP and APRI-SOVA decoding algorithms for a BPSK communication system are quantified via simulation and it is shown that, with proper scaling of the signal prior to quantization, no degradation of the BER performance is incurred with eight-bit quantization.
Abstract: The majority of the performance studies of turbo codes presented in the literature to date have assumed the use of floating point arithmetic. However, if fixed point arithmetic is employed, a corresponding degradation in the BER performance of the turbo decoding algorithm is expected. In this paper, the effects of quantization and fixed point arithmetic upon the log-MAP and APRI-SOVA decoding algorithms for a BPSK communication system are quantified via simulation. It is shown that, with proper scaling of the signal prior to quantization, no degradation of the BER performance is incurred with eight-bit quantization, and even four-bit quantization can provide acceptable BER performance.
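
A minimal sketch of the kind of quantizer whose effect such studies simulate: clip each received value to a scaling range, then round to one of 2^n uniformly spaced levels (illustrative only; the paper's exact fixed-point model may differ):

```python
def quantize(values, n_bits, clip):
    """Uniform fixed-point quantizer: clip each received value to
    [-clip, clip], then round to the nearest of 2**n_bits equally
    spaced levels spanning that range."""
    levels = 2 ** n_bits
    step = 2.0 * clip / (levels - 1)
    out = []
    for v in values:
        v = max(-clip, min(clip, v))
        idx = round((v + clip) / step)   # level index 0 .. levels-1
        out.append(-clip + idx * step)
    return out
```

The `clip` argument plays the role of the pre-quantization scaling discussed above: chosen too small, the quantizer saturates; too large, it wastes resolution on a range the signal never uses.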

Journal ArticleDOI
TL;DR: A construction of uniquely decodable codes for the two-user binary adder channel is presented; the rates of the resulting codes are greater than the rates guaranteed by the Coebergh van den Braak and van Tilborg construction, and the codes can be used with simple encoding and decoding procedures.
Abstract: A construction of uniquely decodable codes for the two-user binary adder channel is presented. The rates of the codes obtained by this construction are greater than the rates guaranteed by the Coebergh van den Braak and van Tilborg construction and these codes can be used with simple encoding and decoding procedures.

Book ChapterDOI
TL;DR: It is proved that on the AWGN channel, RA codes have the potential for achieving channel capacity, and as the rate of the RA code approaches zero, the average required bit Eb/N0 for arbitrarily small error probability with maximum-likelihood decoding approaches log 2, which is the Shannon limit.
Abstract: In ref. [3] we introduced a simplified ensemble of serially concatenated "turbo-like" codes which we called repeat-accumulate, or RA codes. These codes are very easy to decode using an iterative decoding algorithm derived from belief propagation on the appropriate Tanner graph, yet their performance is scarcely inferior to that of full-fledged turbo codes. In this paper, we prove that on the AWGN channel, RA codes have the potential for achieving channel capacity. That is, as the rate of the RA code approaches zero, the average required bit Eb/N0 for arbitrarily small error probability with maximum-likelihood decoding approaches log 2, which is the Shannon limit. In view of the extreme simplicity of RA codes, this result is both surprising and suggestive.
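
The RA encoder itself is simple enough to state in a few lines: repeat each information bit q times, permute the repeated stream, then accumulate. A sketch (the permutation argument is a placeholder for whatever interleaver is chosen):

```python
def ra_encode(info_bits, q, perm):
    """Repeat-accumulate encoder: repeat each information bit q times,
    interleave the repeated stream with permutation `perm`, then pass
    it through an accumulator (running XOR)."""
    repeated = [b for b in info_bits for _ in range(q)]
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b
        out.append(acc)
    return out
```

The extreme simplicity the authors point to is visible here: the only components are a repetition code, a permutation, and a rate-1 accumulator.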

Proceedings ArticleDOI
29 Mar 1999
TL;DR: Soft-input VLC decoding is free from the risk of terminating the decoding in an unsynchronized state, and it offers the possibility to exploit a priori knowledge, if available, of the number of symbols contained in the packet.
Abstract: We present a method for utilizing soft information in decoding of variable length codes (VLCs). When compared with traditional VLC decoding, which is performed using "hard" input bits and a state machine, soft-input VLC decoding offers improved performance in terms of packet and symbol error rates. Soft-input VLC decoding is free from the risk, encountered in hard decision VLC decoders in noisy environments, of terminating the decoding in an unsynchronized state, and it offers the possibility to exploit a priori knowledge, if available, of the number of symbols contained in the packet.

Proceedings ArticleDOI
29 Mar 1999
TL;DR: This paper introduces the use of iterative decoding techniques similar to those used in "turbo" decoding to decode multiple correlated descriptions transmitted over a noisy channel, demonstrating that there is an optimal amount of redundancy or correlation for a given channel state.
Abstract: This paper considers the transmission of multiple descriptions over noisy channels rather than the on-off channels that are traditionally considered. We introduce the use of iterative decoding techniques similar to those used in "turbo" decoding to decode multiple correlated descriptions transmitted over a noisy channel. For a given transmission rate per channel and a given channel state, the efficacy of iterative decoding depends on the correlatedness of the two descriptions produced by the multiple description encoder. We demonstrate that there is an optimal amount of redundancy or correlation for a given channel state. Hence, multiple description codes may also be viewed as joint source-channel codes.

Journal ArticleDOI
TL;DR: A decoding algorithm for q-ary linear codes, called supercode decoding, is suggested; its asymptotic complexity is exponentially smaller than that of all other known methods, and it develops the ideas of covering-set decoding and split syndrome decoding.
Abstract: We suggest a decoding algorithm for q-ary linear codes, which we call supercode decoding. It ensures an error probability that approaches the error probability of minimum-distance decoding as the length of the code grows. For n/spl rarr//spl infin/ the algorithm has maximum-likelihood performance. The asymptotic complexity of supercode decoding is exponentially smaller than the complexity of all other known methods. The algorithm develops the ideas of covering-set decoding and split syndrome decoding.

Proceedings ArticleDOI
16 May 1999
TL;DR: In this paper, an iterative decoding suitability measure is presented, intended to serve as an indication of the degree of correlation between extrinsic inputs, which can be used as a complement to the weight distribution when ranking interleavers.

Abstract: The performance of a turbo code is dependent on two properties of the code: its distance spectrum and its suitability to be iteratively decoded. The performance of iterative decoding depends on the quality of the extrinsic inputs; badly correlated extrinsic inputs can deteriorate the performance. While most turbo coding literature assumes that the extrinsic information is uncorrelated, we investigate these correlation properties. An iterative decoding suitability measure is presented, intended to serve as an indication of the degree of correlation between extrinsic inputs. The suitability measure can be used as a complement to the weight distribution when ranking interleavers.

Proceedings ArticleDOI
31 Oct 1999
TL;DR: It is claimed that for most code rates, when a pseudorandom interleaver is applied, the selection of the puncturing pattern does not have a significant effect on the code performance; however, for some rates, a commonly used puncturing pattern causes much poorer performance.

Abstract: Turbo codes have performance superior to that of all other coding techniques. The main factors that make turbo codes so efficient include the parallel concatenation structure of the encoding system, the recursive convolutional encoder, the interleaver, the puncturing pattern, and iterative decoding. In this research, we have investigated the effect of the puncturing pattern on the performance of high-rate turbo codes. Based on simulation results, we claim that for most code rates, when a pseudorandom interleaver is applied, the selection of the puncturing pattern does not have a significant effect on the code performance. However, for some rates, a commonly used puncturing pattern causes much poorer performance. For these rates, a modified puncturing pattern is proposed which restores the performance to near the Shannon limit.
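
Puncturing itself is a simple periodic masking of the parity streams; a minimal sketch (generic, not the paper's specific patterns):

```python
def puncture(parity_streams, pattern):
    """Keep the bit of stream s at time t iff pattern[s][t mod period]
    is 1; the kept bits are multiplexed in time order. Deleting parity
    bits this way raises the overall code rate."""
    period = len(pattern[0])
    out = []
    for t in range(len(parity_streams[0])):
        for s, stream in enumerate(parity_streams):
            if pattern[s][t % period]:
                out.append(stream[t])
    return out
```

The choice of which positions the pattern deletes is exactly the design freedom the paper investigates: most patterns of a given rate perform alike with a pseudorandom interleaver, but a poorly matched pattern can cost noticeable performance at some rates.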

Journal ArticleDOI
TL;DR: A simple and efficient error correction scheme for array-like data structures that can be used for correction of error clusters and for decoding of concatenated codes and a random access scheme that has many similarities with the Aloha system.
Abstract: We present a simple and efficient error correction scheme for array-like data structures. We assume that the channel behaves such that each row of a received array is either error-free or corrupted by many symbol errors. Provided that row error vectors are linearly independent, the proposed decoding algorithm can correct asymptotically one erroneous row per redundant row, even without having reliability information from the channel output. This efficient decoding algorithm can be used for correction of error clusters and for decoding of concatenated codes. We also derive a random access scheme that has many similarities with the Aloha system.

Proceedings ArticleDOI
06 Jun 1999
TL;DR: A new neural approach to decoding convolutional codes is presented that uses a recurrent neural network tailored to the convolutional code used, and it is shown that the performance of the Viterbi algorithm can be approached very closely.
Abstract: A new neural approach to decode convolutional codes is presented. The method uses a recurrent neural network tailored to the convolutional code used. No supervision is required. As an example, decoders for rate-1/2 and rate-1/3 convolutional codes with constraint length 9 are studied. Such codes have been proposed, e.g., for the third-generation WCDMA cellular system. The new decoders have been tested in a Gaussian channel, and it is shown that the performance of the Viterbi algorithm can be approached very closely. The decoder lends itself to pleasing implementations in hardware. Its complexity increases only polynomially with increasing constraint length, which is slower than the exponential increase of the Viterbi algorithm. However, the speed of current circuits may set limits to the codes used. With increasing circuit speeds in the future, the proposed technique may become a tempting choice for decoding convolutional codes with long constraint lengths.