
Showing papers on "List decoding published in 1998"


Journal ArticleDOI
TL;DR: It is shown that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's low-density parity-check codes, serially concatenated codes, and product codes.
Abstract: We describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. (1993) and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl's (1982) belief propagation algorithm. We see that if Pearl's algorithm is applied to the "belief network" of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the experimental performance of turbo decoding is still lacking. However, we also show that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's (1962) low-density parity-check codes, serially concatenated codes, and product codes. Thus, belief propagation provides a very attractive general methodology for devising low-complexity iterative decoding algorithms for hybrid coded systems.
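
To make the connection concrete, the sketch below (not from the paper) shows the two message updates that belief propagation produces when applied to a single parity-check constraint, the building block of Gallager's low-density parity-check decoding; the LLR sign convention and the toy values are assumptions.

```python
import math

def check_node_update(other_llrs):
    """Extrinsic LLR a parity-check node sends to one neighbor, computed
    from the LLRs of all *other* neighbors (the "tanh rule")."""
    t = 1.0
    for l in other_llrs:
        t *= math.tanh(l / 2.0)
    t = max(min(t, 1 - 1e-12), -1 + 1e-12)   # avoid atanh(+/-1)
    return 2.0 * math.atanh(t)

def variable_node_update(channel_llr, other_check_llrs):
    """Extrinsic LLR a bit node sends back: channel evidence plus the
    messages from all *other* check nodes."""
    return channel_llr + sum(other_check_llrs)

# toy single check over three bits (positive LLR favors bit value 0):
channel = [1.2, -0.4, 2.0]
m0 = check_node_update([channel[1], channel[2]])
print("extrinsic LLR for bit 0:", m0)   # negative: parity pulls bit 0 toward 1
b0 = variable_node_update(channel[0], [m0])
print("updated belief for bit 0:", b0)
```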

989 citations


Journal ArticleDOI
TL;DR: An iterative decoding algorithm is presented for any product code built from linear block codes; it uses soft-input/soft-output decoding of the component codes so that near-optimum performance is obtained at each iteration.
Abstract: This paper describes an iterative decoding algorithm for any product code built using linear block codes. It is based on soft-input/soft-output decoders for decoding the component codes so that near-optimum performance is obtained at each iteration. This soft-input/soft-output decoder is a Chase decoder which delivers soft outputs instead of binary decisions. The soft output of the decoder is an estimation of the log-likelihood ratio (LLR) of the binary decisions given by the Chase decoder. The theoretical justifications of this algorithm are developed and the method used for computing the soft output is fully described. The iterative decoding of product codes is also known as the block turbo code (BTC) because the concept is quite similar to turbo codes based on iterative decoding of concatenated recursive convolutional codes. The performance of different Bose-Chaudhuri-Hocquenghem (BCH) BTCs is given for the Gaussian and the Rayleigh channel. Performance on the Gaussian channel indicates that data transmission within 0.8 dB of Shannon's limit, or at more than 98% (R/C > 0.98) of channel capacity, can be achieved with high-code-rate BTCs using only four iterations. For the Rayleigh channel, the slope of the bit-error rate (BER) curve is as steep as for the Gaussian channel without using channel state information.
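
The sketch below illustrates the soft-output principle described above: compare the metrics of the best codewords that disagree in a given bit. The (7,4) Hamming code, the brute-force candidate search (standing in for the Chase search over test patterns), and the scaling are illustrative assumptions, not the paper's exact decoder.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix of a (7,4) Hamming code
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codebook = np.array([np.dot(m, G) % 2
                     for m in itertools.product([0, 1], repeat=4)])

def soft_output(y):
    """LLR-like soft outputs from competing codewords: for each bit, the
    metric gap between the best codeword with that bit 0 and with it 1."""
    s = 1 - 2 * codebook.astype(float)        # bits {0,1} -> symbols {+1,-1}
    metrics = np.sum((y - s) ** 2, axis=1)    # squared Euclidean distances
    llr = np.empty(7)
    for j in range(7):
        m0 = metrics[codebook[:, j] == 0].min()
        m1 = metrics[codebook[:, j] == 1].min()
        llr[j] = (m1 - m0) / 2.0              # positive favors bit value 0
    return llr

y = np.array([0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 0.6])   # noisy BPSK observation
print(soft_output(y))
```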

970 citations


Proceedings ArticleDOI
08 Nov 1998
TL;DR: An improved list decoding algorithm is presented for Reed-Solomon codes, alternant codes, and algebraic-geometric codes, including a solution to a weighted curve-fitting problem that is of use in soft-decision decoding algorithms for Reed-Solomon codes.
Abstract: Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: given n points (x_i, y_i) with x_i, y_i ∈ F, a degree parameter k, and an error parameter e, find all univariate polynomials p of degree at most k such that y_i = p(x_i) for all but at most e values of i ∈ {1, ..., n}. We give an algorithm that solves this problem for e < n − √(kn); in particular, this improves over previous results for all rates k/n > 1/3, where it yields the first asymptotic improvement in four decades. The algorithm generalizes to solve the list decoding problem for other algebraic codes, specifically alternant codes (a class of codes including BCH codes) and algebraic-geometric codes. In both cases, we obtain a list decoding algorithm that corrects up to n − √(n(n − d′)) errors, where n is the block length and d′ is the designed distance of the code. The improvement for the case of algebraic-geometric codes extends the methods of Shokrollahi and Wasserman (1998) and improves upon their bound for every choice of n and d′. We also present some other consequences of our algorithm, including a solution to a weighted curve-fitting problem, which is of use in soft-decision decoding algorithms for Reed-Solomon codes.
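
The following brute-force sketch makes the curve-fitting problem statement concrete over a small prime field (the field size, the points, and the exhaustive search are assumptions for illustration; the paper solves the problem efficiently by interpolation and factorization rather than enumeration).

```python
import itertools

def curve_fit_all(points, k, e, p=7):
    """List all polynomials of degree <= k over GF(p) agreeing with the
    given points in all but at most e positions (exponential search)."""
    n = len(points)
    hits = []
    for coeffs in itertools.product(range(p), repeat=k + 1):
        agree = sum(1 for x, y in points
                    if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == y)
        if agree >= n - e:
            hits.append(coeffs)
    return hits

# points on p(x) = 2 + 3x over GF(7), with two values corrupted:
pts = [(0, 2), (1, 5), (2, 0), (3, 4), (4, 0), (5, 5), (6, 6)]
print(curve_fit_all(pts, k=1, e=2))   # the output list contains (2, 3)
```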

532 citations


Journal ArticleDOI
TL;DR: It is pointed out that iterative decoding algorithms for various codes, including "turbo decoding" of parallel-concatenated convolutional codes, may be viewed as probability propagation in a graphical model of the code.
Abstract: We present a unified graphical model framework for describing compound codes and deriving iterative decoding algorithms. After reviewing a variety of graphical models (Markov random fields, Tanner graphs, and Bayesian networks), we derive a general distributed marginalization algorithm for functions described by factor graphs. From this general algorithm, Pearl's (1986) belief propagation algorithm is easily derived as a special case. We point out that iterative decoding algorithms for various codes, including "turbo decoding" of parallel-concatenated convolutional codes, may be viewed as probability propagation in a graphical model of the code. We focus on Bayesian network descriptions of codes, which give a natural input/state/output/channel description of a code and channel, and we indicate how iterative decoders can be developed for parallel- and serially-concatenated coding systems, product codes, and low-density parity-check codes.
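
A minimal sketch of the distributed marginalization idea (toy factors over binary variables, chosen for illustration): on a cycle-free factor graph, combining factor-local messages reproduces the brute-force marginal exactly.

```python
# f(x1, x2, x3) = fa(x1, x2) * fb(x2, x3): a two-factor chain
fa = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}
fb = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}

# brute-force marginal of x2
brute = [sum(fa[x1, x2] * fb[x2, x3] for x1 in (0, 1) for x3 in (0, 1))
         for x2 in (0, 1)]

# message passing: each factor sums out its other variable toward x2,
# and x2 multiplies the incoming messages
msg_a = [sum(fa[x1, x2] for x1 in (0, 1)) for x2 in (0, 1)]
msg_b = [sum(fb[x2, x3] for x3 in (0, 1)) for x2 in (0, 1)]
passed = [msg_a[x2] * msg_b[x2] for x2 in (0, 1)]

print(brute, passed)   # identical: the algorithm is exact on trees
```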

447 citations


Journal ArticleDOI
TL;DR: It is shown that for even codes the set of zero neighbors is strictly optimal in this class of algorithms, which implies that general asymptotic improvements of the zero-neighbors algorithm within the framework of the gradient-like approach are impossible.
Abstract: Minimal vectors in linear codes arise in numerous applications, particularly in constructing decoding algorithms and studying linear secret-sharing schemes. However, the properties and structure of minimal vectors have been largely unknown. We prove basic properties of minimal vectors in general linear codes. Then we characterize minimal vectors of a given weight and compute their number in several classes of codes, including the Hamming codes and second-order Reed-Muller codes. Further, we extend the concept of minimal vectors to codes over rings and compute them for several examples. Turning to applications, we introduce a general gradient-like decoding algorithm of which minimal-vectors decoding is an example. The complexity of minimal-vectors decoding for long codes is determined by the size of the set of minimal vectors. Therefore, we compute this size for long randomly chosen codes. Another example of algorithms in this class is given by zero-neighbors decoding. We discuss relations between the two decoding methods. In particular, we show that for even codes the set of zero neighbors is strictly optimal in this class of algorithms. This also implies that general asymptotic improvements of the zero-neighbors algorithm within the framework of the gradient-like approach are impossible. We also discuss a link to secret-sharing schemes.
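
The defining property of a minimal vector can be checked directly on a small example. A brief sketch (the [6,3] code below is an arbitrary toy choice): a nonzero codeword is minimal if its support does not properly contain the support of any other nonzero codeword.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],      # generator of a toy [6,3] binary code
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
codewords = [tuple(np.dot(m, G) % 2)
             for m in itertools.product([0, 1], repeat=3)]
nonzero = [c for c in codewords if any(c)]

def support(c):
    return {i for i, b in enumerate(c) if b}

minimal = [c for c in nonzero
           if not any(support(d) < support(c) for d in nonzero if d != c)]
for c in minimal:
    print(c, "weight", sum(c))
```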

313 citations


Journal ArticleDOI
TL;DR: This paper proposes a low-complexity method for decoding the resulting inner code (due to the spreading sequence), which allows iterative (turbo) decoding of the serially-concatenated code pair.
Abstract: A code-division multiple-access system with channel coding may be viewed as a serially-concatenated coded system. In this paper we propose a low-complexity method for decoding the resulting inner code (due to the spreading sequence), which allows iterative (turbo) decoding of the serially-concatenated code pair. The per-bit complexity of the proposed decoder increases only linearly with the number of users. Performance within a fraction of a dB of the single-user bound for heavily loaded asynchronous CDMA is shown both by simulation and analytically.

275 citations


Journal ArticleDOI
16 Aug 1998
TL;DR: A list decoding algorithm is presented for [n,k] Reed-Solomon (RS) codes over GF(q), which is capable of correcting more than ⌊(n-k)/2⌋ errors and improves on the time complexity O(n³) needed for solving the equations of Sudan's algorithm by a naive Gaussian elimination.
Abstract: A list decoding algorithm is presented for [n,k] Reed-Solomon (RS) codes over GF(q), which is capable of correcting more than ⌊(n-k)/2⌋ errors. Based on a previous work of Sudan (see J. Compl., vol. 13, p. 180-93, 1997), an extended key equation (EKE) is derived for RS codes, which reduces to the classical key equation when the number of errors is limited to ⌊(n-k)/2⌋. Generalizing Massey's (1969) algorithm that finds the shortest recurrence that generates a given sequence, an algorithm is obtained for solving the EKE in time complexity O(ℓ·(n-k)²), where ℓ is a design parameter, typically a small constant, which is an upper bound on the size of the list of decoded codewords. (The case ℓ = 1 corresponds to classical decoding of up to ⌊(n-k)/2⌋ errors, where the decoding ends with at most one codeword.) This improves on the time complexity O(n³) needed for solving the equations of Sudan's algorithm by a naive Gaussian elimination. The polynomials found by solving the EKE are then used for reconstructing the codewords in time complexity O((ℓ log² ℓ)·k·(n + ℓ log q)) using root-finders of degree-ℓ univariate polynomials.
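
For ℓ = 1 the EKE reduces to the classical key equation, which Massey's algorithm solves by finding the shortest linear recurrence. A minimal sketch of that building block over a prime field GF(p) (the prime field and the toy sequence are assumptions for illustration):

```python
def berlekamp_massey(s, p):
    """Shortest LFSR over GF(p) generating s: returns the connection
    polynomial C (C[0] = 1) and its length L, so that for all n >= L,
    s[n] + C[1]*s[n-1] + ... + C[L]*s[n-L] = 0 (mod p)."""
    C, B = [1], [1]          # current and previous connection polynomials
    L, m, b = 0, 1, 1        # LFSR length, gap since last update, last discrepancy
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p          # d / b in GF(p)
        T = C[:]
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# sequence obeying s[n] = s[n-1] + 2*s[n-2] (mod 5)
seq = [1, 3]
for _ in range(6):
    seq.append((seq[-1] + 2 * seq[-2]) % 5)
print(berlekamp_massey(seq, 5))   # -> ([1, 4, 3], 2): the recurrence is recovered
```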

251 citations


Journal ArticleDOI
TL;DR: The MAP decoding algorithm of Bahl et al. (1974) is extended to the case of tail-biting trellis codes, and two algorithms are given: one based on finding an eigenvector, and another that avoids this.
Abstract: We extend the MAP decoding algorithm of Bahl et al. (1974) to the case of tail-biting trellis codes. An algorithm is given that is based on finding an eigenvector, and another that avoids this. Several examples are given. The algorithm has application to turbo decoding and source-controlled channel decoding.
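
A sketch of the eigenvector formulation on a toy trellis (the state count, the random positive section weights, and the use of numpy are assumptions): for a tail-biting trellis the forward recursion must return to its starting distribution after one trip around the ring, so the consistent boundary distribution is an eigenvector of the product of the per-section transition-weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S = 8, 4                                        # sections, trellis states
gammas = [rng.random((S, S)) for _ in range(N)]    # toy transition weights

M = np.linalg.multi_dot(gammas)    # maps a boundary distribution around the ring

# alpha0 @ M = lambda * alpha0: take the left eigenvector belonging to the
# largest (Perron) eigenvalue, which is positive for positive weights
w, V = np.linalg.eig(M.T)
alpha0 = np.abs(np.real(V[:, np.argmax(np.real(w))]))
alpha0 /= alpha0.sum()
print("tail-biting boundary distribution:", alpha0)

# seeded with alpha0, the forward pass then proceeds as in ordinary BCJR
alpha = alpha0
for g in gammas:
    alpha = alpha @ g
    alpha /= alpha.sum()
```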

183 citations


01 Jan 1998
TL;DR: Turbo-codes as mentioned in this paper are a family of convolutional codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving.
Abstract: This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.

136 citations


Journal ArticleDOI
TL;DR: A decoding algorithm which only uses parity check vectors of minimum weight is proposed, which gives results close to soft decision maximum likelihood (SDML) decoding for many code classes like BCH codes.
Abstract: Iterative decoding methods have gained interest, initiated by the results of the so-called "turbo" codes. The theoretical description of this decoding, however, seems to be difficult. Therefore, we study the iterative decoding of block codes. First, we discuss the iterative decoding algorithms developed by Gallager (1962), Battail et al. (1979), and Hagenauer et al. (1996). Based on their results, we propose a decoding algorithm which only uses parity-check vectors of minimum weight. We give the relation of this iterative decoding to one-step majority-logic decoding, and interpret it as gradient optimization. It is shown that the parity-check set used defines the region where the iterative decoding decides on a particular codeword. We make it plausible that, in almost all cases, the iterative decoding converges to a codeword after some iterations. We derive a computationally efficient implementation using the minimal trellis representing the used parity-check set. Simulations illustrate that our algorithm gives results close to soft-decision maximum-likelihood (SDML) decoding for many code classes, such as BCH codes, Reed-Muller codes, quadratic residue codes, double circulant codes, and cyclic finite geometry codes. We also present simulation results for product codes and parallel concatenated codes based on block codes.
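
The majority-logic flavor of such parity-check-based iteration can be seen in a hard-decision sketch (the toy 3x3 array code and the flipping rule are illustrative assumptions; the paper works with soft values and minimum-weight checks): each flip reduces the number of unsatisfied checks, which is exactly the gradient-optimization reading given above.

```python
import numpy as np

# toy 3x3 array code: every row and column of the 3x3 bit array (stored
# row-major in a length-9 word) must have even parity
H = np.zeros((6, 9), dtype=int)
for i in range(3):
    H[i, 3 * i:3 * i + 3] = 1      # row checks
    H[3 + i, i::3] = 1             # column checks

def bit_flip_decode(r, max_iter=10):
    """Flip the bit voted against by the most unsatisfied parity checks."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ r) % 2
        if not syndrome.any():
            return r                       # all checks satisfied
        votes = H.T @ syndrome             # failed checks touching each bit
        r[np.argmax(votes)] ^= 1
    return r

r = np.zeros(9, dtype=int)                 # the all-zero codeword...
r[4] ^= 1                                  # ...hit by one channel error
print(bit_flip_decode(r))                  # decodes back to all zeros
```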

129 citations


Proceedings ArticleDOI
31 May 1998
TL;DR: This paper presents a method for reducing the decoding delay by segmenting a block into several partially overlapped sub-blocks, which allows parallel decoding of each component code by several sub-block decoders.
Abstract: The recursive computations in the MAP-based decoding of turbo codes usually introduce a significant amount of decoding delay. In this paper, we present a method for reducing the decoding delay by segmenting a block into several sub-blocks, which are partially overlapped. The proposed sub-block segmentation scheme allows for the parallel decoding of each component code by using several sub-block decoders. The number of steps for the recursive computations in each sub-block decoder is reduced to O(N/W), where N is the block length and W is the number of sub-blocks. The decoding delay is approximately 1/W of that of a conventional MAP-based turbo-coding system. The cost paid is a slight degradation in bit-error-rate performance and a reasonable increase in hardware complexity.
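
A sketch of the segmentation bookkeeping (the function shape and the warm-up length D are assumptions): each sub-block decoder runs its recursions over its own window plus D overlapped steps on each side, so the serial recursion depth falls from N to roughly N/W + 2D.

```python
def segment(N, W, D):
    """Split a length-N block into W sub-blocks, each extended by D
    overlapped steps per side to warm up the forward/backward recursions.
    Returns (core_start, core_end, ext_start, ext_end) per sub-block."""
    step = -(-N // W)                      # ceil(N / W)
    windows = []
    for w in range(W):
        s, e = w * step, min((w + 1) * step, N)
        windows.append((s, e, max(0, s - D), min(N, e + D)))
    return windows

for win in segment(N=1024, W=4, D=32):
    print(win)    # the W extended windows can be decoded in parallel
```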

Proceedings ArticleDOI
16 Aug 1998
TL;DR: Soft-in/soft-out decoding and equalization is performed by analog nonlinear networks where the log-likelihood values of bits are represented by currents and voltages.
Abstract: Soft-in/soft-out decoding and equalization is performed by analog nonlinear networks in which the log-likelihood values of bits are represented by currents and voltages. Applications include the decoding of block codes and tail-biting convolutional codes, multilevel coded modulation, and combined equalization and decoding.

Journal ArticleDOI
G. Battail1
TL;DR: It is shown that pseudorandom recursive convolutional codes belong to the turbo-code family, and it is suggested to use iterated nonexhaustive replication decoding to increase the encoder memory without inordinate complexity.
Abstract: For understanding turbo codes, we propose to locate them at the intersection of three main topics: the random-like criterion for designing codes, the idea of extending the decoding role to reassessing probabilities, and that of combining several codes by product or concatenation. Concerning the idea of designing random-like (RL) codes, we distinguish strongly and weakly random-like codes depending on how the closeness of their weight distribution to that obtained on average by random coding is measured. Using, e.g., the cross-entropy as a closeness measure results in weakly RL codes. Although their word-error rate is bad, their bit-error rate (BER) remains low up to the vicinity of the channel capacity. We show that pseudorandom recursive convolutional codes belong to this family. Obtaining reasonably good performance with a single code of this type involves high complexity, and its specific decoding is difficult. However, using these codes as components in the turbo-code scheme is a simple means of improving the low-weight tail of the distribution and adjusting the BER to any specification. In order to increase the encoder memory without inordinate complexity, it is suggested to use iterated nonexhaustive replication decoding.

Proceedings ArticleDOI
07 Jun 1998
TL;DR: A very efficient sub-optimal soft-in-soft-out decoding rule is presented for the SPC code, costing only 3 addition-equivalent-operations per information bit.
Abstract: This paper is concerned with the decoding technique and performance of the multi-dimensional concatenated single-parity-check (SPC) code. A very efficient sub-optimal soft-in/soft-out decoding rule is presented for the SPC code, costing only 3 addition-equivalent operations per information bit. Multi-dimensional concatenated coding and decoding principles are investigated. Simulation results for rate 5/6 and 4/5 three-dimensional concatenated SPC codes are provided. Performance of BER = 10⁻⁴ to 10⁻⁵ can be achieved by the MAP and max-log-MAP decoders, respectively, with E_b/N_0 only 1 and 1.5 dB away from the theoretical limits.
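
A plausible form of such a low-cost rule is the standard min-sum approximation for an SPC code, sketched below (this is the textbook approximation; the paper's exact 3-operation rule may differ in details such as scaling): the extrinsic value for each bit is the product of the other bits' LLR signs times the minimum of their magnitudes.

```python
def spc_extrinsic(llrs):
    """Min-sum soft-in/soft-out decoding of a single-parity-check code."""
    signs = [1 if l >= 0 else -1 for l in llrs]
    mags = [abs(l) for l in llrs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    m1 = min(mags)                        # smallest magnitude overall
    i1 = mags.index(m1)
    m2 = min(mags[:i1] + mags[i1 + 1:])   # second-smallest magnitude
    return [(total_sign * signs[i]) *     # sign product excluding bit i
            (m2 if i == i1 else m1)       # min magnitude excluding bit i
            for i in range(len(llrs))]

print(spc_extrinsic([2.5, -0.7, 1.2, 3.0]))   # -> [-0.7, 1.2, -0.7, -0.7]
```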

Proceedings ArticleDOI
A. Jimenez1, K.Sh. Zigangirov
16 Aug 1998
TL;DR: A class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for the decoding of these codes is presented, and the performance is close to the performance of turbo-decoding.
Abstract: We present a class of convolutional codes defined by a low-density parity-check matrix, and an iterative algorithm for the decoding of these codes. The performance of this decoding is close to the performance of turbo-decoding.

Patent
Vicki Ping Zhang1, Liangchi Hsu1
07 Dec 1998
TL;DR: In this article, an iterative decoder performs decoding on a coded information signal based on minimum and maximum values for the number of decoding iterations to be performed for a particular data transmission.
Abstract: A method and apparatus for iterative decoding of a coded information signal that allows quality-of-service (QoS) parameters to be dynamically balanced in a telecommunications system. In an embodiment, an iterative decoder performs decoding on a coded information signal based on minimum and maximum values for the number of decoding iterations to be performed for a particular data transmission. The minimum and maximum values for the number of decoding iterations are determined according to QoS requirements that are given in terms of BER and Tdelay.
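
The iteration-control idea reduces to a simple loop. A schematic sketch (the function names and the convergence test are illustrative, not the patent's text): run at least the minimum number of iterations, stop early once decoding has converged, and never exceed the maximum.

```python
def decode_with_qos(decode_iteration, converged, min_iter, max_iter):
    """Iterative decoding bounded by QoS-derived iteration limits."""
    state, used = None, 0
    while used < max_iter:
        state = decode_iteration(state)
        used += 1
        if used >= min_iter and converged(state):
            break           # early exit saves delay once the result is reliable
    return state, used

# toy stand-ins: each "iteration" increments a counter, "convergence" at 5
final, used = decode_with_qos(lambda s: (s or 0) + 1, lambda s: s >= 5,
                              min_iter=3, max_iter=8)
print(final, used)   # -> 5 5
```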

Journal ArticleDOI
TL;DR: Upper and lower bounds are derived for the decoding complexity of a general lattice L in terms of the dimension n and the coding gain γ of L, and are obtained based on an improved version of Kannan's (1983) method.
Abstract: Upper and lower bounds are derived for the decoding complexity of a general lattice L. The bounds are in terms of the dimension n and the coding gain γ of L, and are obtained based on a decoding algorithm which is an improved version of Kannan's (1983) method. The latter is currently the fastest known method for the decoding of a general lattice. For the decoding of a point x, the proposed algorithm recursively searches inside an n-dimensional rectangular parallelepiped (cube) centered at x, with its edges along the Gram-Schmidt vectors of a proper basis of L. We call algorithms of this type recursive cube search (RCS) algorithms. It is shown that Kannan's algorithm also belongs to this category. The complexity of RCS algorithms is measured in terms of the number of lattice points that need to be examined before a decision is made. To tighten the upper bound on the complexity, we select a lattice basis which is reduced in the sense of Korkin-Zolotarev (1873). It is shown that for any selected basis, the decoding complexity (using RCS algorithms) of any sequence of lattices with possible application in communications (γ ≥ 1) grows at least exponentially with n and γ. It is observed that the densest lattices, and almost all of the lattices used in communications, e.g., Barnes-Wall lattices and the Leech lattice, have equal successive minima (ESM). For the decoding complexity of ESM lattices, a tighter upper bound and a stronger lower bound are derived.
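
A toy sketch of the search geometry (the two-dimensional basis and the fixed ±1 cube are illustrative simplifications; the paper's RCS bounds the search per Gram-Schmidt axis and recurses): center a small cube of integer coordinate vectors at Babai's nearest-plane point and keep the closest lattice point examined.

```python
import numpy as np
from itertools import product

def babai_nearest_plane(B, x):
    """Babai rounding along the Gram-Schmidt directions (via QR)."""
    Q, R = np.linalg.qr(B)               # columns of B are the basis vectors
    y = Q.T @ x
    n = B.shape[1]
    u = np.zeros(n)
    for k in range(n - 1, -1, -1):
        u[k] = round((y[k] - R[k, k + 1:] @ u[k + 1:]) / R[k, k])
    return u

def cube_search(B, x, radius=1):
    """Examine every lattice point whose coordinates lie in a cube of the
    given radius around the Babai point; return the closest one found."""
    u0 = babai_nearest_plane(B, x)
    best, best_d = None, np.inf
    for du in product(range(-radius, radius + 1), repeat=B.shape[1]):
        v = B @ (u0 + np.array(du))
        d = np.linalg.norm(x - v)
        if d < best_d:
            best, best_d = v, d
    return best, best_d

B = np.array([[2.0, 1.0], [0.0, 1.8]])   # toy 2-D lattice basis
print(cube_search(B, np.array([2.3, 1.1])))
```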

Journal ArticleDOI
TL;DR: A practical list decoding algorithm based on the list output Viterbi algorithm (LOVA) is proposed as an approximation to the ML list decoder and results show that the proposed algorithm provides significant gains corroborating the analytical results.
Abstract: List decoding of turbo codes is analyzed under the assumption of a maximum-likelihood (ML) list decoder. It is shown that large asymptotic gains can be achieved on both the additive white Gaussian noise (AWGN) and fully interleaved flat Rayleigh-fading channels. It is also shown that the relative asymptotic gains for turbo codes are larger than those for convolutional codes. Finally, a practical list decoding algorithm based on the list output Viterbi algorithm (LOVA) is proposed as an approximation to the ML list decoder. Simulation results show that the proposed algorithm provides significant gains corroborating the analytical results. The asymptotic gain manifests itself as a reduction in the bit-error rate (BER) and frame-error rate (FER) floor of turbo codes.
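
The reference point for such algorithms is the ML list decoder itself, which is easy to state in code. A minimal sketch (the SPC toy codebook and the BPSK/AWGN metric are assumptions; LOVA approximates this exhaustive search with bounded per-state path lists):

```python
import itertools
import numpy as np

def ml_list_decode(codebook, y, L):
    """Return the L codewords closest to y in squared Euclidean distance
    (the ML list under BPSK signaling on an AWGN channel)."""
    s = 1 - 2 * codebook                 # bits {0,1} -> symbols {+1,-1}
    d = np.sum((y - s) ** 2, axis=1)
    return [(codebook[i], d[i]) for i in np.argsort(d)[:L]]

# toy codebook: all even-weight words of length 4 (a single-parity-check code)
codebook = np.array([w for w in itertools.product([0, 1], repeat=4)
                     if sum(w) % 2 == 0])
y = np.array([0.8, -0.1, 0.9, 1.1])
for c, d in ml_list_decode(codebook, y, L=2):
    print(c, round(float(d), 2))
```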

Journal ArticleDOI
TL;DR: It is shown that minimum cross-entropy decoding is an optimal lossless decoding algorithm but its complexity limits its practical implementation, and use of a maximum a posteriori (MAP) symbol estimation algorithm provides practical algorithms that are identical to those proposed in the literature.
Abstract: In this correspondence, the relationship between iterative decoding and techniques for minimizing cross-entropy is explained. It is shown that minimum cross-entropy (MCE) decoding is an optimal lossless decoding algorithm but its complexity limits its practical implementation. Use of a maximum a posteriori (MAP) symbol estimation algorithm instead of the true MCE algorithm provides practical algorithms that are identical to those proposed in the literature. In particular, turbo decoding is shown to be equivalent to an optimal algorithm for iteratively minimizing cross-entropy under an implicit independence assumption.

Journal ArticleDOI
TL;DR: A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented, and it is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.
Abstract: A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity since the number of codewords to consider is decreased for codes of rate greater than 1/2. The discussed algorithms fulfil all requirements for iterative ("turbo") decoding schemes. Simulation results are presented for high-rate parallel concatenated convolutional codes ("turbo" codes) using an AWGN channel or a perfectly interleaved Rayleigh fading channel. It is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.

Patent
Michael Bakhmutsky1
22 Oct 1998
TL;DR: In this paper, a variable length decoder is presented for decoding an input digital data stream containing variable length code words coded in accordance with any of a plurality of different coding standards.
Abstract: A variable length decoder for decoding an input digital data stream which includes a plurality of variable length code words which are coded in accordance with any of a plurality of different coding standards. The variable length decoder includes an input circuit which receives the input digital data stream and produces a decoding window that includes a leading word aligned bit stream, and a decoding circuit which is configurable into any selected one of a plurality of different decoding configurations, depending upon which coding standard the input digital data stream is coded in accordance with, the decoding circuit being coupled to the leading word aligned bit stream for decoding the length and value of each code word in the input digital data stream. The variable length decoder further includes a programmable controller for controlling the operation of the variable length decoder in accordance with any of a plurality of different decoding protocols, depending upon the coding standard employed in coding the input digital data stream. The programmable controller determines which coding standard was employed in coding the input digital data stream, and then automatically configures the decoding circuit into the decoding configuration which is appropriate for decoding the input digital data stream, based upon this determination. The variable length decoder also includes a code word value memory which is used for decoding the values of the code words in the input digital data stream. The code word value memory is logically organized into a plurality of individually addressable pages, at least one of which stores a code prefix and a plurality of sub-trees associated with another code prefix, to thereby maximize memory utilization and minimize the required memory size.

Patent
Yoshikazu Kobayashi1
31 Mar 1998
TL;DR: In this paper, an image decoding apparatus that generates a decoded image from a code sequence is presented, which includes an entropy decoding unit, achieved by the computer, for reading one code out of the code sequence, which is stored in the memory via the bus and performing entropy decoding on the read code in to generate a decode value.
Abstract: In an image decoding apparatus that generates a decoded image from a code sequence. The decoding apparatus has a bus, a computer and a memory, wherein the computer and the memory are connected to each other via the bus. The code sequence is generated by performing orthogonal transform, quantization and entropy coding on image data, which is stored in the memory. The decoding apparatus includes an entropy decoding unit, achieved by the computer, for reading one code out of the code sequence, which is stored in the memory, via the bus and performing entropy decoding on the read code in to generate a decode value. The apparatus also includes a coefficient generating unit, achieved by the computer, for generating at least one orthogonal transform coefficient according to the generated decode value. Also, a writing unit is achieved by the computer, for writing the generated at least one orthogonal transform coefficient into the memory via the bus. A decode controlling unit and the writing unit is provided for instructing the entropy decoding unit to process a next code out of the code sequence.

Journal ArticleDOI
TL;DR: Symbol-by-symbol maximum a posteriori decoding algorithms for nonbinary block and convolutional codes over an extension field GF(p^a) are presented and meet all requirements needed for iterative decoding.
Abstract: Symbol-by-symbol maximum a posteriori (MAP) decoding algorithms for nonbinary block and convolutional codes over an extension field GF(p^a) are presented. Equivalent MAP decoding rules employing the dual code are given which are computationally more efficient for high-rate codes. It is shown that these algorithms meet all requirements needed for iterative decoding as the output of the decoder can be split into three independent estimates: soft channel value, a priori term, and extrinsic value. The discussed algorithms are then applied to a parallel concatenated coding scheme with nonbinary component codes in conjunction with orthogonal signaling.

Journal ArticleDOI
TL;DR: Using the close relationship between guessing and sequential decoding, a tight lower bound is given to the complexity of sequential decoding in joint source-channel coding systems, complementing earlier works by Koshelev and Hellman.
Abstract: We extend our earlier work on guessing subject to distortion to the joint source-channel coding context. We consider a system in which there is a source connected to a destination via a channel and the goal is to reconstruct the source output at the destination within a prescribed distortion level with respect to (w.r.t.) some distortion measure. The decoder is a guessing decoder in the sense that it is allowed to generate successive estimates of the source output until the distortion criterion is met. The problem is to design the encoder and the decoder so as to minimize the average number of estimates until successful reconstruction. We derive estimates on nonnegative moments of the number of guesses, which are asymptotically tight as the length of the source block goes to infinity. Using the close relationship between guessing and sequential decoding, we give a tight lower bound to the complexity of sequential decoding in joint source-channel coding systems, complementing earlier works by Koshelev (1973) and Hellman (1975). Another topic explored here is the probability of error for list decoders with exponential list sizes for joint source-channel coding systems, for which we obtain tight bounds as well. It is noteworthy that optimal performance w.r.t. the performance measures considered here can be achieved in a manner that separates source coding and channel coding.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: This work considers a generalized version of low-density parity-check codes, where the decoding procedure can be based on the decoding of Hamming component codes, and shows that a performance close to the Shannon capacity limit can be achieved.
Abstract: A generalization of Gallager's low-density parity-check codes is introduced, where single-error-correcting Hamming codes are used as component codes instead of single-error-detecting parity-check codes. Low-density (LD) parity-check codes were first introduced by Gallager in 1963. These codes are, in combination with iterative decoding, very promising for achieving low error probabilities at a reasonable cost. Results of computer simulations for long LD codes show that performance close to the Shannon capacity limit can be achieved. In this work, we consider a generalized version of low-density parity-check codes, where the decoding procedure can be based on the decoding of Hamming component codes.
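
The component step that replaces simple parity checking is single-error-correcting Hamming decoding, sketched below (the (7,4) parity-check matrix convention is an assumption): the syndrome, read as a binary number, directly indexes the erroneous position.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # (7,4) Hamming parity-check matrix:
              [0, 1, 1, 0, 0, 1, 1],   # column i is the binary expansion
              [0, 0, 0, 1, 1, 1, 1]])  # of the position number i + 1

def hamming_component_decode(r):
    """Correct at most one error: a nonzero syndrome names the position."""
    s = (H @ r) % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1
    return r

r = np.zeros(7, dtype=int)     # the all-zero codeword with one bit in error
r[5] ^= 1
print(hamming_component_decode(r))   # back to all zeros
```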

Journal ArticleDOI
TL;DR: A method called early detection is presented that can be used to reduce the computational complexity of a variety of iterative decoders: using a confidence criterion, some information symbols, state variables, and codeword symbols are detected early during decoding.
Abstract: The bit-error rate (BER) performance of new iterative decoding algorithms (e.g., turbo decoding) is achieved at the expense of a computationally burdensome decoding procedure. We present a method called early detection that can be used to reduce the computational complexity of a variety of iterative decoders. Using a confidence criterion, some information symbols, state variables, and codeword symbols are detected early during decoding. In this way, the computational complexity of further processing is reduced with a controllable increase in the BER. We present an easily implemented instance of this algorithm, called trellis splicing, that can be used with turbo decoding. For a simulated system of this type, we obtain a reduction in the computational complexity of up to a factor of four, relative to a turbo decoder that obtains the same increase in the BER by performing fewer iterations.
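
The confidence criterion can be sketched as a per-iteration screening pass (the LLR threshold and the array layout are assumptions for illustration): symbols whose reliability exceeds the threshold are decided immediately and dropped from further updates.

```python
import numpy as np

def early_detect(llrs, frozen, threshold=8.0):
    """Freeze symbols whose |LLR| clears the confidence threshold; the
    decoder then skips updates for frozen positions in later iterations."""
    newly = (np.abs(llrs) >= threshold) & ~frozen
    frozen |= newly
    decisions = (llrs < 0).astype(int)        # positive LLR -> bit 0
    return decisions, frozen, int(newly.sum())

llrs = np.array([12.3, -0.4, -9.1, 2.2])
frozen = np.zeros(4, dtype=bool)
bits, frozen, n = early_detect(llrs, frozen)
print(bits, frozen, n)   # positions 0 and 2 are detected early
```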

Proceedings ArticleDOI
16 Aug 1998
TL;DR: It is shown that for strictly positive local kernels, the iterations of the probability propagation algorithm on a graph with a single cycle converge to a unique fixed point, as also observed by Anderson and Hladik (1998) and Weiss (1997).
Abstract: It is now understood that the turbo decoding algorithm is an instance of a probability propagation algorithm (PPA) on a graph with many cycles. In this paper we investigate the behavior of a PPA in graphs with a single cycle, such as the graph of a tail-biting code. First, we show that for strictly positive local kernels, the iterations of the PPA converge to a unique fixed point (which was also observed by Anderson and Hladik (1998) and Weiss (1997)). Secondly, we generalize a result of McEliece and Rodemich (1995) by showing that if the hidden variables in the cycle are binary-valued, the PPA will always make an optimal decision (this was also observed independently by Weiss). When the hidden variables can assume three or more values, the behavior of the PPA is much harder to characterize.

Journal ArticleDOI
TL;DR: Although the gains achieved at practical bit-error rates are only a fraction of a decibel, they remain meaningful as they are of the same order as the error performance differences between optimum and suboptimum decoding.
Abstract: In this correspondence, the bit-error probability P_b for maximum-likelihood decoding of binary linear block codes is investigated. The contribution P_b(j) of each information bit j to P_b is considered and an upper bound on P_b(j) is derived. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit-error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for soft-decision decoding methods which require a generator matrix with a particular structure, such as trellis decoding, multistage decoding, or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit-error probability are discussed. Although the gains achieved at practical bit-error rates are only a fraction of a decibel, they remain meaningful as they are of the same order as the error performance differences between optimum and suboptimum decoding. Most importantly, these gains are free, as they are achieved with little or no additional circuitry which is transparent to the conventional implementation.

Journal ArticleDOI
TL;DR: The impact of the interleaver embedded in the encoder of a parallel concatenated (turbo) code is studied, and it is shown that an increased minimum Hamming distance can be obtained by using a structured interleaver.
Abstract: The impact of the interleaver, embedded in the encoder for a parallel concatenated code, called the turbo code, is studied. The known turbo codes use long random interleavers, whose purpose is to reduce the value of the error coefficients. It is shown that an increased minimum Hamming distance can be obtained by using a structured interleaver. For low bit-error rates (BERs), we show that the performance of turbo codes with a structured interleaver is better than that obtained with a random interleaver. Another important advantage of the structured interleaver is the short length required, which yields a short decoding delay and reduced decoding complexity (in terms of memory). We also consider the use of turbo codes as component codes in multilevel codes. Powerful coding structures that consist of two component codes are suggested. Computer simulations are performed in order to evaluate the reduction in coding gain due to suboptimal iterative decoding. From the results of these simulations we deduce that the degradation in performance (due to suboptimal decoding) is very small.
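
The structural contrast is easy to see in code. A minimal sketch (the row-column interleaver is one common structured choice, not necessarily the paper's construction):

```python
import numpy as np

def block_interleaver(rows, cols):
    """Structured interleaver: write indices row-wise, read column-wise."""
    return np.arange(rows * cols).reshape(rows, cols).T.flatten()

def random_interleaver(n, seed=0):
    """Random permutation, as used in the original turbo codes."""
    return np.random.default_rng(seed).permutation(n)

print(block_interleaver(3, 4))    # [0 4 8 1 5 9 2 6 10 3 7 11]
print(random_interleaver(12))
```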