
Showing papers on "Sequential decoding published in 1998"


Journal ArticleDOI
TL;DR: It is shown that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's low-density parity-check codes, serially concatenated codes, and product codes.
Abstract: We describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. (1993) and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl's (1982) belief propagation algorithm. We see that if Pearl's algorithm is applied to the "belief network" of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the experimental performance of turbo decoding is still lacking. However, we also show that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's (1962) low-density parity-check codes, serially concatenated codes, and product codes. Thus, belief propagation provides a very attractive general methodology for devising low-complexity iterative decoding algorithms for hybrid coded systems.
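The belief-propagation view is easiest to see on a single parity-check code rather than on a parallel concatenation. Below is a minimal, illustrative sketch of LLR-domain sum-product (belief propagation) decoding on the Tanner graph of a (7,4) Hamming code; the matrix, channel LLRs, and iteration count are assumptions for the example, not taken from the paper.

```python
import math

# Parity-check matrix of a (7,4) Hamming code (illustrative choice).
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def sum_product_decode(llr, H, iters=10):
    """LLR-domain belief propagation on the Tanner graph of H."""
    m, n = len(H), len(llr)
    msg = [[0.0] * n for _ in range(m)]  # check-to-variable messages
    hard = [0 if L > 0 else 1 for L in llr]
    for _ in range(iters):
        # Variable-to-check: channel LLR plus the other checks' messages.
        v2c = [[llr[i] + sum(msg[k][i] for k in range(m) if H[k][i] and k != j)
                for i in range(n)] for j in range(m)]
        # Check-to-variable: tanh rule over the other variables in the check.
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    prod = 1.0
                    for i2 in range(n):
                        if H[j][i2] and i2 != i:
                            prod *= math.tanh(v2c[j][i2] / 2.0)
                    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
                    msg[j][i] = 2.0 * math.atanh(prod)
        posterior = [llr[i] + sum(msg[j][i] for j in range(m) if H[j][i])
                     for i in range(n)]
        hard = [0 if L > 0 else 1 for L in posterior]
        if all(sum(H[j][i] * hard[i] for i in range(n)) % 2 == 0
               for j in range(m)):
            break  # all parity checks satisfied
    return hard

# Positive LLR = bit 0; the one unreliable position is corrected by iteration.
print(sum_product_decode([2.0, 2.0, -0.5, 2.0, 2.0, 2.0, 2.0], H))
```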

989 citations


Journal ArticleDOI
TL;DR: An iterative decoding algorithm for any product code built using linear block codes based on soft-input/soft-output decoders for decoding the component codes so that near-optimum performance is obtained at each iteration.
Abstract: This paper describes an iterative decoding algorithm for any product code built using linear block codes. It is based on soft-input/soft-output decoders for decoding the component codes so that near-optimum performance is obtained at each iteration. This soft-input/soft-output decoder is a Chase decoder which delivers soft outputs instead of binary decisions. The soft output of the decoder is an estimate of the log-likelihood ratio (LLR) of the binary decisions given by the Chase decoder. The theoretical justifications of this algorithm are developed and the method used for computing the soft output is fully described. The iterative decoding of product codes is also known as the block turbo code (BTC) because the concept is quite similar to turbo codes based on iterative decoding of concatenated recursive convolutional codes. The performance of different Bose-Chaudhuri-Hocquenghem (BCH) BTCs is given for the Gaussian and the Rayleigh channel. Performance on the Gaussian channel indicates that data transmission within 0.8 dB of Shannon's limit, or at more than 98% (R/C > 0.98) of channel capacity, can be achieved with a high-code-rate BTC using only four iterations. For the Rayleigh channel, the slope of the bit-error-rate (BER) curve is as steep as for the Gaussian channel without using channel state information.
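The Chase-plus-soft-output idea can be sketched on a toy component code. Everything concrete below — the (7,4) Hamming component code, the number p of least-reliable positions, the fallback weight beta — is an assumption for illustration; the paper's exact soft-output computation is more refined.

```python
import itertools

# Component code: a (7,4) Hamming code from an illustrative generator matrix.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
CODEWORDS = [[sum(b * G[k][i] for k, b in enumerate(bits)) % 2 for i in range(7)]
             for bits in itertools.product([0, 1], repeat=4)]

def hard_decode(hard):
    """Minimum-distance hard decoder (stand-in for an algebraic decoder)."""
    return min(CODEWORDS, key=lambda c: sum(a != b for a, b in zip(c, hard)))

def chase_soft_out(r, p=3, beta=4.0):
    """Chase-style decoding with soft outputs: flip all patterns on the p
    least reliable positions, hard-decode each test pattern, and derive a
    per-bit soft value from the best competing candidates.
    r: received real values (BPSK mapping: +1 for bit 0, -1 for bit 1)."""
    n = len(r)
    hard = [0 if x >= 0 else 1 for x in r]
    weak = sorted(range(n), key=lambda i: abs(r[i]))[:p]
    candidates = set()
    for flips in itertools.product([0, 1], repeat=p):
        test = hard[:]
        for pos, f in zip(weak, flips):
            test[pos] ^= f
        candidates.add(tuple(hard_decode(test)))
    def metric(c):  # squared Euclidean distance between r and the codeword
        return sum((x - (1 - 2 * b)) ** 2 for x, b in zip(r, c))
    best = min(candidates, key=metric)
    soft = []
    for i in range(n):
        rivals = [c for c in candidates if c[i] != best[i]]
        if rivals:  # competitor found: metric difference sets the reliability
            mag = (metric(min(rivals, key=metric)) - metric(best)) / 4.0
        else:       # no competitor in the candidate list: fallback weight
            mag = beta
        soft.append(mag if best[i] == 0 else -mag)
    return list(best), soft

decision, soft = chase_soft_out([0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 1.2])
```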

970 citations


Proceedings ArticleDOI
08 Nov 1998
TL;DR: An improved list decoding algorithm for decoding Reed-Solomon codes, alternant codes, and algebraic-geometric codes is presented, including a solution to a weighted curve-fitting problem, which is of use in soft-decision decoding algorithms for Reed-Solomon codes.
Abstract: Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: given n points {(x_i, y_i)}, i = 1, ..., n, with x_i, y_i ∈ F, a degree parameter k, and an error parameter e, find all univariate polynomials p of degree at most k such that y_i = p(x_i) for all but at most e values of i ∈ {1, ..., n}. We give an algorithm that solves this problem for e < n − √(kn), an improvement for every choice of k and n; of particular interest is the case k/n > 1/3, where the result yields the first asymptotic improvement in four decades. The algorithm generalizes to solve the list decoding problem for other algebraic codes, specifically alternant codes (a class of codes including BCH codes) and algebraic-geometric codes. In both cases, we obtain a list decoding algorithm that corrects up to n − √(n(n − d′)) errors, where n is the block length and d′ is the designed distance of the code. The improvement for the case of algebraic-geometric codes extends the methods of Shokrollahi and Wasserman (1998) and improves upon their bound for every choice of n and d′. We also present some other consequences of our algorithm, including a solution to a weighted curve-fitting problem, which is of use in soft-decision decoding algorithms for Reed-Solomon codes.
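The curve-fitting problem itself is easy to state in code. The sketch below solves it by exhaustive search over all polynomials of degree at most k over a small prime field — a reference implementation of the problem statement only, not the paper's polynomial-time algorithm; the field size and the example points are invented.

```python
import itertools

def curve_fit_list(points, k, e, p):
    """Return all polynomials of degree <= k over GF(p) (as coefficient
    tuples, constant term first) agreeing with the given points in all
    but at most e positions. Exhaustive search: O(p**(k+1) * n)."""
    out = []
    for coeffs in itertools.product(range(p), repeat=k + 1):
        errors = sum(1 for x, y in points
                     if sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p != y)
        if errors <= e:
            out.append(coeffs)
    return out

# Evaluations of p(x) = x + 1 over GF(5), with the point at x = 2 corrupted:
pts = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 0)]
print(curve_fit_list(pts, 1, 1, 5))  # → [(1, 1)], i.e. the line 1 + x
```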

532 citations


Proceedings ArticleDOI
22 Jun 1998
TL;DR: The results of Monte Carlo simulations of the decoding of infinite LDPC codes which can be used to obtain good constructions for finite codes and empirical results for the Gaussian channel are presented.
Abstract: Binary low-density parity-check (LDPC) codes have been shown to have near-Shannon-limit performance when decoded using a probabilistic decoding algorithm. The analogous codes defined over finite fields GF(q) of order q > 2 show significantly improved performance. We present the results of Monte Carlo simulations of the decoding of infinite LDPC codes, which can be used to obtain good constructions for finite codes. We also present empirical results for the Gaussian channel, including a rate-1/4 code with bit error probability of 10^-4 at Eb/N0 = -0.05 dB.

502 citations


Journal ArticleDOI
TL;DR: It is pointed out that iterative decoding algorithms for various codes, including "turbo decoding" of parallel-concatenated convolutional codes, may be viewed as probability propagation in a graphical model of the code.
Abstract: We present a unified graphical model framework for describing compound codes and deriving iterative decoding algorithms. After reviewing a variety of graphical models (Markov random fields, Tanner graphs, and Bayesian networks), we derive a general distributed marginalization algorithm for functions described by factor graphs. From this general algorithm, Pearl's (1986) belief propagation algorithm is easily derived as a special case. We point out that iterative decoding algorithms for various codes, including "turbo decoding" of parallel-concatenated convolutional codes, may be viewed as probability propagation in a graphical model of the code. We focus on Bayesian network descriptions of codes, which give a natural input/state/output/channel description of a code and channel, and we indicate how iterative decoders can be developed for parallel- and serially concatenated coding systems, product codes, and low-density parity-check codes.
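Distributed marginalization on a factor graph amounts to passing sums of products along the graph. For a chain-shaped toy factorization g(x1, x2, x3) = f_a(x1, x2) · f_b(x2, x3) over binary variables (the factor values below are made up for illustration), message passing reproduces the brute-force marginal:

```python
import itertools

# Toy factors over binary variables (values chosen only for illustration).
def f_a(x1, x2):
    return 0.9 if x1 == x2 else 0.1

F_B = {(0, 0): 0.9, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}
def f_b(x2, x3):
    return F_B[(x2, x3)]

def marginal_bruteforce():
    """Marginal of x1 by summing g over all assignments."""
    out = [0.0, 0.0]
    for x1, x2, x3 in itertools.product([0, 1], repeat=3):
        out[x1] += f_a(x1, x2) * f_b(x2, x3)
    return out

def marginal_sum_product():
    """Same marginal via messages on the chain x1 -- f_a -- x2 -- f_b -- x3."""
    mu_b = [sum(f_b(x2, x3) for x3 in (0, 1)) for x2 in (0, 1)]   # f_b -> x2
    return [sum(f_a(x1, x2) * mu_b[x2] for x2 in (0, 1)) for x1 in (0, 1)]
```

On a cycle-free graph like this chain, the two computations agree exactly; the message-passing version is what scales to long codes.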

447 citations


Journal ArticleDOI
TL;DR: It is shown that for even codes the set of zero neighbors is strictly optimal in this class of algorithms, which implies that general asymptotic improvements of the zero-neighbors algorithm within the gradient-like framework are impossible.

Abstract: Minimal vectors in linear codes arise in numerous applications, particularly in constructing decoding algorithms and studying linear secret-sharing schemes. However, the properties and structure of minimal vectors have been largely unknown. We prove basic properties of minimal vectors in general linear codes. Then we characterize minimal vectors of a given weight and compute their number in several classes of codes, including the Hamming codes and second-order Reed-Muller codes. Further, we extend the concept of minimal vectors to codes over rings and compute them for several examples. Turning to applications, we introduce a general gradient-like decoding algorithm of which minimal-vectors decoding is an example. The complexity of minimal-vectors decoding for long codes is determined by the size of the set of minimal vectors; therefore, we compute this size for long randomly chosen codes. Another example of algorithms in this class is given by zero-neighbors decoding. We discuss relations between the two decoding methods. In particular, we show that for even codes the set of zero neighbors is strictly optimal in this class of algorithms. This also implies that general asymptotic improvements of the zero-neighbors algorithm within the gradient-like framework are impossible. We also discuss a link to secret-sharing schemes.

313 citations


Journal ArticleDOI
TL;DR: It is shown that after a proper simple modification, the soft-output Viterbi algorithm (SOVA) proposed by Hagenauer and Hoeher (1989) becomes equivalent to the max-log-maximum a posteriori (MAP) decoding algorithm.
Abstract: It is shown that after a proper simple modification, the soft-output Viterbi algorithm (SOVA) proposed by Hagenauer and Hoeher (1989) becomes equivalent to the max-log maximum a posteriori (MAP) decoding algorithm. Consequently, this modified SOVA allows the max-log-MAP decoding algorithm to be implemented by simply adjusting the conventional Viterbi algorithm. Hence, it provides an attractive solution for achieving low-complexity near-optimum soft-input soft-output decoding.
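The gap between exact MAP and max-log-MAP recursions comes down to one operation: replacing the Jacobian logarithm max*(a, b) = log(e^a + e^b) by a plain max. A two-line illustration, not tied to any particular trellis:

```python
import math

def maxstar(a, b):
    """Exact Jacobian logarithm log(e^a + e^b), used in full MAP recursions."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def maxstar_maxlog(a, b):
    """Max-log approximation: dropping the correction term turns the exact
    MAP recursions into max-log-MAP (which the modified SOVA matches)."""
    return max(a, b)
```

The correction term log(1 + e^{-|a-b|}) is at most log 2, which is why max-log-MAP is a close, cheaper approximation.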

197 citations


Journal ArticleDOI
TL;DR: The MAP decoding algorithm of Bahl et al. (1974) is extended to the case of tail-biting trellis codes and an algorithm is given that is based on finding an eigenvector, and another that avoids this.
Abstract: We extend the MAP decoding algorithm of Bahl et al. (1974) to the case of tail-biting trellis codes. An algorithm is given that is based on finding an eigenvector, and another that avoids this. Several examples are given. The algorithm has application to turbo decoding and source-controlled channel decoding.

183 citations


Journal ArticleDOI
16 Aug 1998
TL;DR: The sum-product algorithm (belief/probability propagation) can be naturally mapped into analog transistor circuits, which enable the construction of analog-VLSI decoders for turbo codes, low-density parity-check codes, and similar codes.
Abstract: The sum-product algorithm (belief/probability propagation) can be naturally mapped into analog transistor circuits. These circuits enable the construction of analog-VLSI decoders for turbo codes, low-density parity-check codes, and similar codes.

174 citations


Journal ArticleDOI
TL;DR: A decoding algorithm which only uses parity check vectors of minimum weight is proposed, which gives results close to soft decision maximum likelihood (SDML) decoding for many code classes like BCH codes.
Abstract: Iterative decoding methods have gained interest, initiated by the results of the so-called "turbo" codes. The theoretical description of this decoding, however, seems to be difficult. Therefore, we study the iterative decoding of block codes. First, we discuss the iterative decoding algorithms developed by Gallager (1962), Battail et al. (1979), and Hagenauer et al. (1996). Based on their results, we propose a decoding algorithm which only uses parity-check vectors of minimum weight. We give the relation of this iterative decoding to one-step majority-logic decoding, and interpret it as gradient optimization. It is shown that the parity-check set used defines the region where the iterative decoding decides on a particular codeword. We make plausible that, in almost all cases, the iterative decoding converges to a codeword after some iterations. We derive a computationally efficient implementation using the minimal trellis representing the parity-check set used. Simulations illustrate that our algorithm gives results close to soft-decision maximum-likelihood (SDML) decoding for many code classes, such as BCH codes, Reed-Muller codes, quadratic residue codes, double circulant codes, and cyclic finite geometry codes. We also present simulation results for product codes and parallel concatenated codes based on block codes.
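The gradient interpretation — each iteration moves the word in the direction that most reduces the number of violated checks — is visible even in the hard-decision bit-flipping relative of such algorithms, due to Gallager. A minimal sketch on a (7,4) Hamming code; the matrix and the tie-breaking rule are illustrative, and the paper's algorithm is soft-decision and restricted to minimum-weight checks.

```python
# Parity-check matrix of a (7,4) Hamming code (illustrative choice).
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def bit_flip_decode(word, H, max_iter=20):
    """Iteratively flip the bit whose checks are proportionally the most
    unsatisfied, until all parity checks hold."""
    n = len(word)
    w = list(word)
    deg = [sum(H[j][i] for j in range(len(H))) for i in range(n)]
    for _ in range(max_iter):
        unsat = [j for j in range(len(H))
                 if sum(H[j][i] * w[i] for i in range(n)) % 2 == 1]
        if not unsat:
            break  # valid codeword reached
        score = [sum(1 for j in unsat if H[j][i]) / deg[i] for i in range(n)]
        w[score.index(max(score))] ^= 1
    return w
```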

129 citations


Journal ArticleDOI
TL;DR: The performance of the convolutional codes is analyzed for the two modulation techniques and a new metric is developed for soft decision decoding of DAPSK modulated signals.
Abstract: The multilevel modulation techniques of 64-quadrature amplitude modulation (QAM) and 64-differential amplitude and phase-shift keying (DAPSK) have been proposed in combination with the orthogonal frequency-division multiplexing (OFDM) scheme for digital terrestrial video broadcasting (DTVB). With this system a data rate of 34 Mb/s can be transmitted over an 8-MHz radio channel. A comparison of these modulation methods in the uncoded case has been presented by Engels and Rohling (see European Trans. Telecommun., vol.6, p.633-40, 1995). The channel coding scheme proposed for DTVB by Schafer (see Proc. Int. Broadcasting Convention, Amsterdam, The Netherlands, p.79-84, 1995) consists of an inner convolutional code concatenated with an outer Reed-Solomon (RS) code. In this paper the performance of the convolutional codes is analyzed for the two modulation techniques. This analysis includes soft decision Viterbi (1971) decoding of the convolutional code. For soft decision decoding of DAPSK modulated signals a new metric is developed.

Proceedings ArticleDOI
07 Jun 1998
TL;DR: This work discusses the applicability of high-rate turbo codes for magnetic recording, citing in particular the attractiveness of interleaver gain as opposed to coding gain, and examines the performance of rate 4/5, 8/9, and 16/17 turbo codes on a PR4-equalized magnetic recording channel at a user density of S_u = 2.0.

Abstract: We discuss the applicability of high-rate turbo codes for magnetic recording, citing in particular the attractiveness of interleaver gain as opposed to coding gain. We then examine the performance of rate 4/5, 8/9, and 16/17 turbo codes on a PR4-equalized magnetic recording channel at a user density of S_u = 2.0. Simulation results show that a gain of 7.1 dB relative to the uncoded situation is attainable at an error rate of 10^-5 for the rate 4/5 and 8/9 codes, whereas the rate 16/17 code achieves a gain of 6.5 dB.

Proceedings ArticleDOI
31 May 1998
TL;DR: This paper presents a method for reducing the decoding delay by means of segmenting a block into several sub-blocks, which are partially overlapped, which allows for the parallel decoding of each component code by usingSeveral sub-block decoders.
Abstract: The recursive computations in the MAP-based decoding of turbo codes usually introduce a significant amount of decoding delay. In this paper, we present a method for reducing the decoding delay by segmenting a block into several partially overlapped sub-blocks. The proposed sub-block segmentation scheme allows for the parallel decoding of each component code by using several sub-block decoders. The number of steps for the recursive computations in each sub-block decoder is reduced to O(N/W), where N is the block length and W is the number of segmented sub-blocks. The decoding delay is approximately 1/W of that of a conventional MAP-based turbo-coding system. The cost paid is a slight degradation in bit-error-rate performance and a reasonable increase in hardware complexity.
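The index bookkeeping of such an overlapped segmentation can be sketched directly: the overlap acts as a warm-up region in which each sub-block decoder's forward/backward recursions converge before the region whose outputs are kept. Block length, W, and overlap below are arbitrary example values, not the paper's.

```python
def segment(N, W, overlap):
    """Split trellis indices 0..N-1 into W sub-blocks of about N/W kept
    steps each, extended by `overlap` warm-up positions on each side so
    each sub-block decoder's recursions can converge (index bookkeeping
    only; the decoders themselves are not modeled here)."""
    size = -(-N // W)  # ceiling division
    blocks = []
    for w in range(W):
        keep = (w * size, min((w + 1) * size, N))
        ext = (max(0, keep[0] - overlap), min(N, keep[1] + overlap))
        blocks.append({"process": ext, "keep": keep})
    return blocks

# Four parallel decoders for a block of 100 steps, 8-step warm-up regions.
print(segment(100, 4, 8))
```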

Proceedings ArticleDOI
16 Aug 1998
TL;DR: Soft-in/soft-out decoding and equalization is performed by analog nonlinear networks where the log-likelihood values of bits are represented by currents and voltages.
Abstract: Soft-in/soft-out decoding and equalization is performed by analog nonlinear networks in which the log-likelihood values of bits are represented by currents and voltages. Applications include decoding of block and tail-biting convolutional codes, multilevel coded modulation, and combined equalization and decoding.

Book
30 Apr 1998
TL;DR: Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes, as discussed by the authors, combines trellises and trellis-based decoding algorithms for linear codes in a simple and unified form.
Abstract: From the Publisher: Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes combines trellises and trellis-based decoding algorithms for linear codes together in a simple and unified form. The approach is to explain the material in an easily understood manner with minimal mathematical rigor. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes is intended for practicing communication engineers who want to have a fast grasp and understanding of the subject. This book can also be used as a text for advanced courses on the subject.

Journal ArticleDOI
G. Battail1
TL;DR: It is shown that pseudorandom recursive convolutional codes belong to the turbo-code family, and it is suggested to use iterated nonexhaustive replication decoding to increase the encoder memory without inordinate complexity.
Abstract: For understanding turbo codes, we propose to locate them at the intersection of three main topics: the random-like criterion for designing codes, the idea of extending the decoding role to reassess probabilities, and that of combining several codes by product or concatenation. Concerning the idea of designing random-like (RL) codes, we distinguish strongly and weakly random-like codes depending on how the closeness of their weight distribution to that obtained in the average by random coding is measured. Using, e.g., the cross entropy as a closeness measure results in weakly RL codes. Although their word-error rate is bad, their bit-error rate (BER) remains low up to the vicinity of the channel capacity. We show that pseudorandom recursive convolutional codes belong to this family. Obtaining reasonably good performance with a single code of this type involves high complexity, and its specific decoding is difficult. However, using these codes as components in the turbo-code scheme is a simple means for improving the low-weight tail of the distribution and to adjust the BER to any specification. In order to increase the encoder memory without inordinate complexity, it is suggested to use iterated nonexhaustive replication decoding.

Proceedings ArticleDOI
07 Jun 1998
TL;DR: A very efficient sub-optimal soft-in/soft-out decoding rule is presented for the SPC code, costing only 3 addition-equivalent operations per information bit.

Abstract: This paper is concerned with the decoding technique and performance of multi-dimensional concatenated single-parity-check (SPC) codes. A very efficient sub-optimal soft-in/soft-out decoding rule is presented for the SPC code, costing only 3 addition-equivalent operations per information bit. Multi-dimensional concatenated coding and decoding principles are investigated. Simulation results for rate 5/6 and 4/5 three-dimensional concatenated SPC codes are provided. Performance of BER = 10^-4 to 10^-5 can be achieved by the MAP and max-log-MAP decoders, respectively, with Eb/N0 only 1 and 1.5 dB away from the theoretical limits.
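For a single-parity-check code, the cheap soft-in/soft-out rule has a well-known min-sum form: the extrinsic value of each bit is the product of the signs of all other LLRs times the smallest of their magnitudes. The sketch below implements that generic rule (not necessarily the paper's exact 3-operation formulation):

```python
def spc_soft_out(llrs):
    """Min-sum extrinsic LLRs for a single-parity-check code (len >= 2).
    For each bit: sign = product of the other bits' signs, magnitude =
    minimum of the other bits' magnitudes."""
    sign_prod = 1
    for L in llrs:
        if L < 0:
            sign_prod = -sign_prod
    mags = sorted(abs(L) for L in llrs)
    min1, min2 = mags[0], mags[1]
    out = []
    for L in llrs:
        s = sign_prod * (1 if L >= 0 else -1)  # product of the *other* signs
        m = min2 if abs(L) == min1 else min1   # min over the *other* bits
        out.append(s * m)
    return out

print(spc_soft_out([2.0, -1.0, 0.5, 3.0]))  # → [-0.5, 0.5, -1.0, -0.5]
```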

Proceedings ArticleDOI
22 Jun 1998
TL;DR: The proposed technique combines the source model, the source decoder and the channel decoder to construct a joint decoder with a conventional Viterbi structure, and results comparing the performance of this approach with the conventional tandem decoding approach are presented.
Abstract: In systems that employ source coding, imperfect compression may leave some redundancy in the data at the input to the channel encoder. This paper presents a new method for exploiting that redundancy via joint source-channel maximum a posteriori (MAP) decoding. The technique may be applied to systems that employ a variable-length source code in conjunction with a channel code; the decoder uses the residual redundancy that remains after compression to enhance robustness. The proposed technique (applicable to both memoryless and Markov sources) combines the source model, the source decoder and the channel decoder to construct a joint decoder with a conventional Viterbi structure. Simulation results comparing the performance of this approach with the conventional tandem decoding approach are presented.

Proceedings ArticleDOI
A. Jimenez1, K.Sh. Zigangirov
16 Aug 1998
TL;DR: A class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for the decoding of these codes is presented, and the performance is close to the performance of turbo-decoding.
Abstract: We present a class of convolutional codes defined by a low-density parity-check matrix, and an iterative algorithm for the decoding of these codes. The performance of this decoding is close to the performance of turbo-decoding.

Patent
Vicki Ping Zhang1, Liangchi Hsu1
07 Dec 1998
TL;DR: In this article, an iterative decoder performs decoding on a coded information signal based on minimum and maximum values for the number of decoding iterations to be performed for a particular data transmission.
Abstract: A method and apparatus for iterative decoding of a coded information signal that allows quality of service, QoS, parameters to be dynamically balanced in a telecommunications system. In an embodiment, an iterative decoder performs decoding on a coded information signal based on minimum and maximum values for the number of decoding iterations to be performed for a particular data transmission. The minimum and maximum values for the number of decoding iterations are determined according to QoS requirements that are given in terms of BER and Tdelay.

Journal ArticleDOI
TL;DR: For a Markovian sequence of encoder-produced symbols and a discrete memoryless channel, the optimal decoder computes expected values based on a discrete hidden Markov model, using the well-known forward/backward (F/B) algorithm.
Abstract: In previous work on source coding over noisy channels, it was recognized that when the source has memory, there is typically "residual redundancy" between the discrete symbols produced by the encoder, which can be capitalized upon by the decoder to improve the overall quantizer performance. Sayood and Borkenhagen (1991) and Phamdo and Farvardin (see IEEE Trans. Inform. Theory, vol.40, p.186-93, 1994) proposed "detectors" at the decoder which optimize suitable criteria in order to estimate the sequence of transmitted symbols. Phamdo and Farvardin also proposed an instantaneous approximate minimum mean-squared error (IAMMSE) decoder. These methods provide a performance advantage over conventional systems, but the maximum a posteriori (MAP) structure is suboptimal, while the IAMMSE decoder makes limited use of the redundancy. Alternatively, combining aspects of both approaches, we propose a sequence-based approximate MMSE (SAMMSE) decoder. For a Markovian sequence of encoder-produced symbols and a discrete memoryless channel, we approximate the expected distortion at the decoder under the constraint of fixed decoder complexity. For this simplified cost, the optimal decoder computes expected values based on a discrete hidden Markov model, using the well-known forward/backward (F/B) algorithm. Performance gains for this scheme are demonstrated over previous techniques in quantizing Gauss-Markov sources over a range of noisy channel conditions. Moreover, a constrained-delay version is also suggested.
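The forward/backward recursions on a discrete hidden Markov model are the computational core of such decoders. A compact, unnormalized reference implementation (the transition and emission tables used to exercise it are invented for illustration):

```python
def forward_backward(obs, A, B, pi):
    """Posterior state probabilities for a discrete HMM via the
    forward/backward recursions. A[r][s]: transition r -> s, B[s][o]:
    emission probability of symbol o in state s, pi: initial distribution."""
    T, S = len(obs), len(pi)
    alpha = [[0.0] * S for _ in range(T)]
    beta = [[1.0] * S for _ in range(T)]
    for s in range(S):                          # forward initialization
        alpha[0][s] = pi[s] * B[s][obs[0]]
    for t in range(1, T):                       # forward recursion
        for s in range(S):
            alpha[t][s] = B[s][obs[t]] * sum(alpha[t - 1][r] * A[r][s]
                                             for r in range(S))
    for t in range(T - 2, -1, -1):              # backward recursion
        for s in range(S):
            beta[t][s] = sum(A[s][r] * B[r][obs[t + 1]] * beta[t + 1][r]
                             for r in range(S))
    post = []
    for t in range(T):                          # combine and normalize
        g = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(g)
        post.append([x / z for x in g])
    return post
```

Production implementations scale or work in the log domain to avoid underflow on long sequences; this sketch omits that for clarity.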

Proceedings ArticleDOI
08 Nov 1998
TL;DR: The near-optimum low-complexity soft-decision decoding algorithm of Fossorier and Lin (1995), based on ordered statistics, is modified so that soft outputs are delivered by the decoder, making it possible to achieve practically the same error performance as the max-log-MAP decoding algorithm in iterative or concatenated systems with block codes whose trellis complexity is too large for trellis-based decoding algorithms.

Abstract: In this paper, reduced-complexity soft-input soft-output decoding of linear block codes is considered. The near-optimum low-complexity soft-decision decoding algorithm based on ordered statistics of Fossorier and Lin (1995) is modified so that soft outputs are delivered by the decoder. This algorithm performs nearly as well as the max-log-MAP decoding algorithm since, in most cases, only the soft outputs corresponding to the least reliable bits may differ. For good (N, K, d_H) block codes of length N ≤ 128, dimension K, and minimum Hamming distance d_H ≤ 4K, the corresponding decoding complexity is O((N−K)(K+1)·n(d_H, K)) real operations, with n(d_H, K) = Σ_{i=0}^{⌈d_H/4⌉} (K choose i). Consequently, this algorithm achieves practically the same error performance as the max-log-MAP decoding algorithm in iterative or concatenated systems with block codes for which the associated trellis complexity is simply too large for implementation of trellis-based decoding algorithms.

Journal ArticleDOI
TL;DR: Upper and lower bounds are derived for the decoding complexity of a general lattice L in terms of the dimension n and the coding gain /spl gamma/ of L, and are obtained based on an improved version of Kannan's (1983) method.
Abstract: Upper and lower bounds are derived for the decoding complexity of a general lattice L. The bounds are in terms of the dimension n and the coding gain γ of L, and are obtained based on a decoding algorithm which is an improved version of Kannan's (1983) method. The latter is currently the fastest known method for the decoding of a general lattice. For the decoding of a point x, the proposed algorithm recursively searches inside an n-dimensional rectangular parallelepiped (cube), centered at x, with its edges along the Gram-Schmidt vectors of a proper basis of L. We call algorithms of this type recursive cube search (RCS) algorithms. It is shown that Kannan's algorithm also belongs to this category. The complexity of RCS algorithms is measured in terms of the number of lattice points that need to be examined before a decision is made. To tighten the upper bound on the complexity, we select a lattice basis which is reduced in the sense of Korkin-Zolotarev (1873). It is shown that for any selected basis, the decoding complexity (using RCS algorithms) of any sequence of lattices with possible application in communications (γ ≥ 1) grows at least exponentially with n and γ. It is observed that the densest lattices, and almost all of the lattices used in communications, e.g., Barnes-Wall lattices and the Leech lattice, have equal successive minima (ESM). For the decoding complexity of ESM lattices, a tighter upper bound and a stronger lower bound are derived.
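The counting flavor of RCS-style decoding — enumerate candidate lattice points inside a box around x and keep the nearest — can be illustrated by a brute-force toy. The fixed coefficient range and the absence of basis reduction make this a stand-in for the real recursive search, not the paper's algorithm.

```python
import itertools

def closest_lattice_point(basis, x, radius=3):
    """Nearest lattice point to x among all integer combinations of the
    basis vectors with coefficients in [-radius, radius] (exhaustive toy;
    complexity grows with the number of candidate points examined, which
    is the quantity the paper's bounds are about)."""
    best, best_d = None, float("inf")
    dim = len(basis)
    for coeffs in itertools.product(range(-radius, radius + 1), repeat=dim):
        p = [sum(c * basis[k][i] for k, c in enumerate(coeffs))
             for i in range(dim)]
        d = sum((a - b) ** 2 for a, b in zip(p, x))
        if d < best_d:
            best, best_d = p, d
    return best

# Decoding in the integer lattice Z^2: round to the nearest integer point.
print(closest_lattice_point([[1, 0], [0, 1]], [0.4, -1.3]))  # → [0, -1]
```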

Journal ArticleDOI
TL;DR: A practical list decoding algorithm based on the list output Viterbi algorithm (LOVA) is proposed as an approximation to the ML list decoder and results show that the proposed algorithm provides significant gains corroborating the analytical results.
Abstract: List decoding of turbo codes is analyzed under the assumption of a maximum-likelihood (ML) list decoder. It is shown that large asymptotic gains can be achieved on both the additive white Gaussian noise (AWGN) and fully interleaved flat Rayleigh-fading channels. It is also shown that the relative asymptotic gains for turbo codes are larger than those for convolutional codes. Finally, a practical list decoding algorithm based on the list output Viterbi algorithm (LOVA) is proposed as an approximation to the ML list decoder. Simulation results show that the proposed algorithm provides significant gains corroborating the analytical results. The asymptotic gain manifests itself as a reduction in the bit-error rate (BER) and frame-error rate (FER) floor of turbo codes.

Journal ArticleDOI
TL;DR: It is shown that minimum cross-entropy decoding is an optimal lossless decoding algorithm but its complexity limits its practical implementation, and use of a maximum a posteriori (MAP) symbol estimation algorithm provides practical algorithms that are identical to those proposed in the literature.
Abstract: In this correspondence, the relationship between iterative decoding and techniques for minimizing cross-entropy is explained. It is shown that minimum cross-entropy (MCE) decoding is an optimal lossless decoding algorithm but its complexity limits its practical implementation. Use of a maximum a posteriori (MAP) symbol estimation algorithm instead of the true MCE algorithm provides practical algorithms that are identical to those proposed in the literature. In particular, turbo decoding is shown to be equivalent to an optimal algorithm for iteratively minimizing cross-entropy under an implicit independence assumption.

Journal ArticleDOI
TL;DR: A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented, and it is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.
Abstract: A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity since the number of codewords to consider is decreased for codes of rate greater than 1/2. The discussed algorithms fulfil all requirements for iterative ("turbo") decoding schemes. Simulation results are presented for high-rate parallel concatenated convolutional codes ("turbo" codes) using an AWGN channel or a perfectly interleaved Rayleigh fading channel. It is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.

Patent
Yoshikazu Kobayashi1
31 Mar 1998
TL;DR: In this paper, an image decoding apparatus that generates a decoded image from a code sequence is presented, which includes an entropy decoding unit, achieved by the computer, for reading one code out of the code sequence, which is stored in the memory via the bus and performing entropy decoding on the read code in to generate a decode value.
Abstract: An image decoding apparatus generates a decoded image from a code sequence. The decoding apparatus has a bus, a computer, and a memory, wherein the computer and the memory are connected to each other via the bus. The code sequence is generated by performing orthogonal transform, quantization, and entropy coding on image data, and is stored in the memory. The decoding apparatus includes an entropy decoding unit, achieved by the computer, for reading one code out of the code sequence stored in the memory via the bus and performing entropy decoding on the read code to generate a decode value. The apparatus also includes a coefficient generating unit, achieved by the computer, for generating at least one orthogonal transform coefficient according to the generated decode value. A writing unit, also achieved by the computer, writes the generated orthogonal transform coefficient into the memory via the bus. A decode controlling unit is provided for instructing the entropy decoding unit to process the next code of the code sequence.

Journal ArticleDOI
TL;DR: Symbol-by-symbol maximum a posteriori decoding algorithms for nonbinary block and convolutional codes over an extension field GF(p^a) are presented and meet all requirements needed for iterative decoding.
Abstract: Symbol-by-symbol maximum a posteriori (MAP) decoding algorithms for nonbinary block and convolutional codes over an extension field GF(p^a) are presented. Equivalent MAP decoding rules employing the dual code are given which are computationally more efficient for high-rate codes. It is shown that these algorithms meet all requirements needed for iterative decoding, as the output of the decoder can be split into three independent estimates: soft channel value, a priori term, and extrinsic value. The discussed algorithms are then applied to a parallel concatenated coding scheme with nonbinary component codes in conjunction with orthogonal signaling.

Journal ArticleDOI
TL;DR: Using the close relationship between guessing and sequential decoding, a tight lower bound is given to the complexity of sequential decoding in joint source-channel coding systems, complementing earlier works by Koshelev and Hellman.
Abstract: We extend our earlier work on guessing subject to distortion to the joint source-channel coding context. We consider a system in which there is a source connected to a destination via a channel and the goal is to reconstruct the source output at the destination within a prescribed distortion level with respect to (w.r.t.) some distortion measure. The decoder is a guessing decoder in the sense that it is allowed to generate successive estimates of the source output until the distortion criterion is met. The problem is to design the encoder and the decoder so as to minimize the average number of estimates until successful reconstruction. We derive estimates on nonnegative moments of the number of guesses, which are asymptotically tight as the length of the source block goes to infinity. Using the close relationship between guessing and sequential decoding, we give a tight lower bound to the complexity of sequential decoding in joint source-channel coding systems, complementing earlier works by Koshelev (1973) and Hellman (1975). Another topic explored here is the probability of error for list decoders with exponential list sizes for joint source-channel coding systems, for which we obtain tight bounds as well. It is noteworthy that optimal performance w.r.t. the performance measures considered here can be achieved in a manner that separates source coding and channel coding.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: This work considers a generalized version of low-density parity-check codes, where the decoding procedure can be based on the decoding of Hamming component codes, and shows that a performance close to the Shannon capacity limit can be achieved.
Abstract: A generalization of Gallager's low-density parity-check codes is introduced, in which single-error-correcting Hamming codes are used as component codes instead of single-error-detecting parity-check codes. Low-density (LD) parity-check codes were first introduced by Gallager in 1963. These codes are, in combination with iterative decoding, very promising for achieving low error probabilities at a reasonable cost. Results of computer simulations for long LD codes show that a performance close to the Shannon capacity limit can be achieved. In this work, we consider a generalized version of low-density parity-check codes, where the decoding procedure can be based on the decoding of Hamming component codes.