
Showing papers on "Sequential decoding" published in 2001


Journal ArticleDOI
TL;DR: By using a Gaussian approximation for message densities under density evolution, the sum-product decoding algorithm can be visualized, and the optimization of degree distributions can be understood and carried out graphically using this visualization.
Abstract: Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating the means of the Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and done graphically using the visualization.

1,204 citations
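
As an illustration of the one-dimensional mean update described above, here is a minimal numerical sketch for a (dv, dc)-regular code on the BI-AWGN channel, assuming the usual consistency condition that message variance equals twice the mean; function names and numerical tolerances are ours, not the paper's.

```python
import numpy as np

def phi(m, n_pts=2001):
    """phi(m) = 1 - E[tanh(u/2)] for u ~ N(m, 2m); phi(0) = 1, decreasing to 0."""
    if m <= 0:
        return 1.0
    s = np.sqrt(2.0 * m)
    u = np.linspace(m - 10 * s, m + 10 * s, n_pts)
    w = np.exp(-((u - m) ** 2) / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    return 1.0 - float(np.sum(np.tanh(u / 2.0) * w) * (u[1] - u[0]))

def phi_inv(y, lo=1e-9, hi=1e4):
    """Invert the monotonically decreasing phi by bisection."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def ga_converges(sigma, dv=3, dc=6, max_iter=200, target=1e3):
    """Iterate the one-dimensional mean update; a diverging mean means success."""
    m0 = 2.0 / sigma ** 2          # mean of the channel LLR on BI-AWGN
    mu = 0.0                       # mean of check-to-variable messages
    for _ in range(max_iter):
        mv = m0 + (dv - 1) * mu                            # variable-node update
        mu = phi_inv(1.0 - (1.0 - phi(mv)) ** (dc - 1))    # check-node update
        if mu > target:
            return True
    return False
```

Bisecting on sigma with ga_converges brackets the decoding threshold; for the (3, 6)-regular code this recursion predicts a threshold near sigma ≈ 0.87, in line with the sub-0.1 dB accuracy quoted above.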


Journal ArticleDOI
TL;DR: A new efficient decoding algorithm based on QR decomposition is presented; it requires only a fraction of the computational effort of the standard decoding algorithm, which repeatedly computes the pseudo-inverse of the channel matrix.
Abstract: Layered space-time codes have been designed to exploit the capacity advantage of multiple antenna systems in Rayleigh fading environments. A new efficient decoding algorithm based on QR decomposition is presented; it requires only a fraction of the computational effort of the standard decoding algorithm, which requires repeated calculation of the pseudo-inverse of the channel matrix.

560 citations
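
The following sketch shows QR-based successive detection of the kind the abstract describes, omitting the layer reordering a complete detector would add; all names are ours.

```python
import numpy as np

def qr_detect(H, y, constellation):
    """Layered detection via H = QR: rotate the receive vector by Q^H,
    then detect from the last layer up by back-substitution and slicing,
    avoiding repeated pseudo-inverses of the channel matrix."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n_tx = H.shape[1]
    s_hat = np.zeros(n_tx, dtype=complex)
    for k in range(n_tx - 1, -1, -1):
        # Subtract already-detected lower layers, then slice to the nearest symbol.
        u = (z[k] - R[k, k + 1:] @ s_hat[k + 1:]) / R[k, k]
        s_hat[k] = constellation[np.argmin(np.abs(constellation - u))]
    return s_hat

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # alphabet to slice against
```

Compared with recomputing a pseudo-inverse for every layer, a single QR factorization plus back-substitution performs the nulling and cancellation implicitly.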


Proceedings ArticleDOI
25 Nov 2001
TL;DR: By exploiting the inherent robustness of LLRs, it is shown, via simulations, that coarse quantization tables are sufficient to implement complex core operations with negligible or no loss in performance.
Abstract: Efficient implementations of the sum-product algorithm (SPA) are presented for decoding low-density parity-check (LDPC) codes using log-likelihood ratios (LLR) as messages between symbol and parity-check nodes. Various reduced-complexity derivatives of the LLR-SPA are proposed. Both serial and parallel implementations are investigated, leading to trellis and tree topologies, respectively. Furthermore, by exploiting the inherent robustness of LLRs, it is shown, via simulations, that coarse quantization tables are sufficient to implement complex core operations with negligible or no loss in performance. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate design point in high-speed applications from a performance, latency and computational complexity perspective.

397 citations
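
The core check-node operation of the LLR-SPA combines two LLRs via the "box-plus" rule, and its two correction terms are exactly what coarse lookup tables replace. A minimal sketch with an arbitrary 8-entry table (our choice of grid, not the paper's):

```python
import numpy as np

_GRID = np.linspace(0.0, 5.0, 8)       # deliberately coarse 8-entry table
_TABLE = np.log1p(np.exp(-_GRID))      # f(x) = log(1 + e^-x)

def f_corr(x):
    """Nearest-entry table lookup for the correction term f(x)."""
    return _TABLE[int(np.argmin(np.abs(_GRID - min(x, 5.0))))]

def box_plus(l1, l2):
    """Two-input check-node update ('box-plus') on LLRs:
    sign(l1)sign(l2) min(|l1|,|l2|) + f(|l1+l2|) - f(|l1-l2|),
    with both corrections read from the coarse table."""
    core = np.sign(l1) * np.sign(l2) * min(abs(l1), abs(l2))
    return core + f_corr(abs(l1 + l2)) - f_corr(abs(l1 - l2))
```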


Journal ArticleDOI
TL;DR: In this paper, the authors track the density of extrinsic information in iterative turbo decoders by actual density evolution, and also approximate it by symmetric Gaussian density functions.
Abstract: We track the density of extrinsic information in iterative turbo decoders by actual density evolution, and also approximate it by symmetric Gaussian density functions. The approximate model is verified by experimental measurements. We view the evolution of these density functions through an iterative decoder as a nonlinear dynamical system with feedback. Iterative decoding of turbo codes and of serially concatenated codes is analyzed by examining whether a signal-to-noise ratio (SNR) for the extrinsic information keeps growing with iterations. We define a "noise figure" for the iterative decoder, such that the turbo decoder will converge to the correct codeword if the noise figure is bounded by a number below zero dB. By decomposing the code's noise figure into individual curves of output SNR versus input SNR corresponding to the individual constituent codes, we gain many new insights into the performance of the iterative decoder for different constituents. Many mysteries of turbo codes are explained based on this analysis. For example, we show why certain codes converge better with iterative decoding than more powerful codes which are only suitable for maximum likelihood decoding. The roles of systematic bits and of recursive convolutional codes as constituents of turbo codes are crystallized. The analysis is generalized to serial concatenations of mixtures of complementary outer and inner constituent codes. Design examples are given to optimize mixture codes to achieve low iterative decoding thresholds on the signal-to-noise ratio of the channel.

322 citations
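
The SNR-tracking idea can be sketched as chaining the constituent decoders' output-SNR-versus-input-SNR curves; here g1 and g2 are placeholders for such measured transfer curves, and nothing below is taken from the paper itself.

```python
import numpy as np

def iterate_snr(g1, g2, snr_channel, n_iter=30):
    """Pass the extrinsic-information SNR back and forth between the two
    constituent decoders; a trajectory that keeps growing indicates that
    iterative decoding converges at this channel SNR."""
    snr, trajectory = 0.0, []
    for _ in range(n_iter):
        snr = g1(snr, snr_channel)   # constituent decoder 1: output vs input SNR
        snr = g2(snr, snr_channel)   # constituent decoder 2
        trajectory.append(snr)
    return np.array(trajectory)
```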


Proceedings ArticleDOI
06 Jul 2001
TL;DR: This paper compares the speed and output quality of a traditional stack-based decoding algorithm with two new decoders: a fast greedy decoder and a slow but optimal decoder that treats decoding as an integer-programming optimization problem.
Abstract: A good decoding algorithm is critical to the success of any statistical machine translation system. The decoder's job is to find the translation that is most likely according to a set of previously learned parameters (and a formula for combining them). Since the space of possible translations is extremely large, typical decoding algorithms are only able to examine a portion of it, at the risk of missing good solutions. In this paper, we compare the speed and output quality of a traditional stack-based decoding algorithm with two new decoders: a fast greedy decoder and a slow but optimal decoder that treats decoding as an integer-programming optimization problem.

300 citations
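
The greedy decoder's control flow amounts to hill-climbing over candidate translations. A generic sketch, with `neighbors` (local edits such as word substitutions or reorderings) and `score` (the combined model probability) left as hypothetical placeholders:

```python
def greedy_decode(source, initial, neighbors, score):
    """Hill-climb: start from an initial gloss and keep applying the
    best-scoring local edit until no edit improves the model score."""
    current = initial
    best = score(current, source)
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            s = score(candidate, source)
            if s > best:
                current, best, improved = candidate, s, True
    return current
```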


Journal ArticleDOI
TL;DR: A new interleaver design for turbo codes with short block length based on the distance spectrum of the code and the correlation between the information input data and the soft output of each decoder corresponding to its parity bits is described.
Abstract: The performance of a turbo code with short block length depends critically on the interleaver design. There are two major criteria in the design of an interleaver: the distance spectrum of the code and the correlation between the information input data and the soft output of each decoder corresponding to its parity bits. This paper describes a new interleaver design for turbo codes with short block length based on these two criteria. A deterministic interleaver suitable for turbo codes is also described. Simulation results compare the new interleaver design to different existing interleavers.

184 citations
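
For comparison, a widely used spread-based design is the S-random interleaver, sketched below; it targets the distance-spectrum criterion only, whereas the paper's design also accounts for the correlation criterion, which this sketch does not model.

```python
import random

def s_random_interleaver(n, s, max_restarts=100):
    """Draw a permutation in which each new index differs by at least s
    from the previous s accepted indices, so inputs that are close
    together get mapped far apart (the 'spread' criterion)."""
    for _ in range(max_restarts):
        remaining = list(range(n))
        random.shuffle(remaining)
        perm = []
        while remaining:
            for j, cand in enumerate(remaining):
                if all(abs(cand - p) >= s for p in perm[-s:]):
                    perm.append(remaining.pop(j))
                    break
            else:
                break            # dead end: restart with a fresh shuffle
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random permutation found; try a smaller s")
```

Values of s up to roughly sqrt(n/2) typically let the construction finish.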


Journal ArticleDOI
TL;DR: A stopping criterion which reduces the average number of iterations at the cost of very little performance degradation is proposed for this combined decoding approach, which bridges the error performance gap between belief propagation decoding, which remains suboptimum, and maximum-likelihood decoding, which is too complex to be implemented for the codes considered.
Abstract: In this paper, reliability-based decoding is combined with belief propagation (BP) decoding for low-density parity-check (LDPC) codes. At each iteration, the soft output values delivered by the BP algorithm are used as reliability values to perform reduced-complexity soft-decision decoding of the code considered. This approach makes it possible to bridge the error performance gap between belief propagation decoding, which remains suboptimum, and maximum-likelihood decoding, which is too complex to be implemented for the codes considered. Trade-offs between decoding complexity and error performance are also investigated. In particular, a stopping criterion which reduces the average number of iterations at the expense of very little performance degradation is proposed for this combined decoding approach. Simulation results for several Gallager (1963, 1968) LDPC codes and difference-set cyclic codes of hundreds of information bits are given and elaborated.

183 citations
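
A common baseline for such stopping rules is the syndrome check: halt as soon as the hard decisions satisfy all parity checks. A sketch in which `bp_iteration` is a hypothetical hook for one message-passing pass (the paper's actual criterion is more elaborate):

```python
import numpy as np

def decode_with_early_stop(H, llr_ch, bp_iteration, max_iter=50):
    """Run BP but exit as soon as the hard decision satisfies every
    parity check, cutting the average number of iterations."""
    llr = llr_ch.copy()
    for it in range(1, max_iter + 1):
        llr = bp_iteration(H, llr_ch, llr)     # one message-passing pass
        hard = (llr < 0).astype(int)           # hard decision on soft output
        if not np.any((H @ hard) % 2):         # zero syndrome: codeword found
            return hard, it
    return hard, max_iter
```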


Journal ArticleDOI
TL;DR: A squaring method is presented to simplify the decoding of orthogonal space-time block codes in a wireless communication system with an arbitrary number of transmit and receive antennas; it gives the same decoding performance as maximum-likelihood decoding while having much lower complexity.
Abstract: We present a squaring method to simplify the decoding of orthogonal space-time block codes in a wireless communication system with an arbitrary number of transmit and receive antennas. Using this squaring method, a closed-form expression for the signal-to-noise ratio after space-time decoding is also derived. The method gives the same decoding performance as maximum-likelihood decoding while having much lower complexity.

163 citations
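
The orthogonality such simplified decoders exploit is easiest to see in the 2x1 Alamouti code, where ML detection decouples into per-symbol linear combining. A sketch under the standard Alamouti transmission format (our notation, not the paper's general construction):

```python
import numpy as np

def alamouti_combine(r1, r2, h1, h2):
    """2x1 Alamouti: slot 1 sends (s1, s2), slot 2 sends (-conj(s2), conj(s1)).
    The orthogonal code matrix lets ML detection decouple per symbol."""
    gain = abs(h1) ** 2 + abs(h2) ** 2            # post-combining channel gain
    s1 = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
    s2 = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
    return s1, s2        # slice each estimate against the constellation
```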


Proceedings ArticleDOI
25 Nov 2001
TL;DR: Two decoding schedules and the corresponding serialized architectures for low-density parity-check (LDPC) decoders are presented and the performance of these decoding schedules is evaluated through simulations on a magnetic recording channel.
Abstract: Two decoding schedules and the corresponding serialized architectures for low-density parity-check (LDPC) decoders are presented. They are applied to codes with parity-check matrices generated either randomly or using geometric properties of elements in Galois fields. Both decoding schedules have low computational requirements. The original concurrent decoding schedule has a large storage requirement that depends on the total number of edges in the underlying bipartite graph, while a new, staggered decoding schedule, which uses an approximation of belief propagation, has a reduced memory requirement that depends only on the number of bits in the block. The performance of these decoding schedules is evaluated through simulations on a magnetic recording channel.

154 citations


Journal ArticleDOI
TL;DR: This work introduces a variation on their decoding algorithm that, with no extra cost in complexity, provably corrects up to 12 times more errors.
Abstract: Sipser and Spielman (see ibid., vol.42, p.1717-22, Nov. 1996) have introduced a constructive family of asymptotically good linear error-correcting codes-expander codes-together with a simple parallel algorithm that will always remove a constant fraction of errors. We introduce a variation on their decoding algorithm that, with no extra cost in complexity, provably corrects up to 12 times more errors.

138 citations
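
For context, the flip algorithm of Sipser and Spielman that this paper improves can be sketched in a few lines; this is the baseline algorithm in a dense-matrix toy version, not the improved variant:

```python
import numpy as np

def flip_decode(H, y, max_flips=10_000):
    """Flip any bit sitting in more unsatisfied than satisfied checks;
    on a good expander this removes a constant fraction of errors."""
    c = y.copy()
    degree = H.sum(axis=0)                   # number of checks touching each bit
    for _ in range(max_flips):
        syndrome = (H @ c) % 2               # 1 marks an unsatisfied check
        if not syndrome.any():
            return c                         # valid codeword reached
        unsat = H.T @ syndrome               # unsatisfied checks per bit
        bad = np.where(2 * unsat > degree)[0]
        if len(bad) == 0:
            return None                      # stuck: too many errors
        c[bad[0]] ^= 1                       # flip one bit and repeat
    return None
```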


Proceedings ArticleDOI
27 Mar 2001
TL;DR: A bit-level soft-in/soft-out decoder based on this trellis is used as an outer component decoder in an iterative decoding scheme for a serially concatenated source/channel coding system.
Abstract: We focus on a trellis-based decoding technique for variable length codes (VLCs) which does not require any additional side information besides the number of bits in the coded sequence. A bit-level soft-in/soft-out decoder based on this trellis is used as an outer component decoder in an iterative decoding scheme for a serially concatenated source/channel coding system. In contrast to previous approaches using this kind of trellis, we do not consider the received sequence as a concatenation of variable-length codewords, but as one long codeword of a (weak) binary channel code which can be soft-in/soft-out decoded. By evaluating the distance properties of selected variable length codes we show that some codes are more suitable for trellis-based decoding than others. Finally, we present simulation results which show the performance of the iterative decoding approach.

Proceedings ArticleDOI
14 Oct 2001
TL;DR: Several novel constructions of codes are presented which share the common thread of using expander (or expander-like) graphs as a component and enable the design of efficient decoding algorithms that correct a large number of errors through various forms of "voting" procedures.
Abstract: We present several novel constructions of codes which share the common thread of using expander (or expander-like) graphs as a component. The expanders enable the design of efficient decoding algorithms that correct a large number of errors through various forms of "voting" procedures. We consider both the notions of unique and list decoding, and in all cases obtain asymptotically good codes which are decodable up to a "maximum" possible radius and either: (a) achieve a similar rate as the previously best known codes but come with significantly faster algorithms, or (b) achieve a rate better than any prior construction with similar error-correction properties. Among our main results are: i) codes of rate Ω(ε²) over a constant-sized alphabet that can be list decoded in quadratic time from a (1−ε) fraction of errors; ii) codes of rate Ω(ε) over a constant-sized alphabet that can be uniquely decoded from a (1/2−ε) fraction of errors in near-linear time (this matches AG codes with much faster algorithms); iii) linear-time encodable and decodable binary codes of positive rate (in fact, rate Ω(ε²)) that can correct up to a (1/4−ε) fraction of errors.

Journal ArticleDOI
TL;DR: Based on the two-dimensional (2-D) weight distribution of tail-biting codes, guidelines are given on how to choose tail-biting component codes that are especially suited for parallel concatenated coding schemes.
Abstract: Based on the two-dimensional (2-D) weight distribution of tail-biting codes, we give guidelines on how to choose tail-biting component codes that are especially suited for parallel concatenated coding schemes. Employing these guidelines, we tabulate tail-biting codes of different rate, length, and complexity. The performance of parallel concatenated block codes (PCBCs) using iterative (turbo) decoding is evaluated by simulation, and bounds are calculated in order to study their asymptotic performance.

Journal ArticleDOI
24 Jun 2001
TL;DR: It is shown that expander codes attain the capacity of the binary-symmetric channel under iterative decoding, with an error probability whose exponent is positive for all rates between zero and the channel capacity.
Abstract: We show that expander codes attain the capacity of the binary-symmetric channel under iterative decoding. The error probability has a positive exponent for all rates between zero and the channel capacity. The decoding complexity grows linearly with the code length.

Journal ArticleDOI
TL;DR: A novel iterative procedure for approximating the optimal solution of joint source-channel decoding is introduced, based on the principle of iterative decoding of turbo codes.
Abstract: Joint source-channel decoding is formulated as an estimation problem. The optimal solution is stated and it is shown that it is not feasible in many practical systems due to its complexity. Therefore, a novel iterative procedure for the approximation of the optimal solution is introduced, which is based on the principle of iterative decoding of turbo codes. New analytical expressions for different types of information in the optimal algorithm are used to derive the iterative approximation. A direct comparison of the performance of the optimal algorithm and its iterative approximation is given for a simple transmission system with "short" channel codewords. Furthermore, the performance of iterative joint source-channel decoding is investigated for a more realistic system.

Proceedings ArticleDOI
C. Howland, A. Blanksby
06 May 2001
TL;DR: A parallel architecture for decoding low density parity check (LDPC) codes is proposed that achieves high coding gain together with extremely low power dissipation, and high throughput.
Abstract: A parallel architecture for decoding low density parity check (LDPC) codes is proposed that achieves high coding gain together with extremely low power dissipation, and high throughput. The feasibility of this architecture is demonstrated through the design and implementation of a 1024 bit, rate-1/2, soft decision parallel LDPC decoder.

Journal ArticleDOI
TL;DR: This correspondence provides an elementary construction of MDS convolutional codes for each rate k/n and each degree δ.
Abstract: Maximum-distance separable (MDS) convolutional codes are characterized through the property that the free distance attains the generalized Singleton bound. The existence of MDS convolutional codes was established by two of the authors using methods from algebraic geometry. This correspondence provides an elementary construction of MDS convolutional codes for each rate k/n and each degree δ. The construction is based on a well-known connection between quasi-cyclic codes and convolutional codes.
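
For reference, the generalized Singleton bound attained by these codes reads, for rate k/n and degree δ:

```latex
d_{\mathrm{free}} \;\le\; (n-k)\left(\left\lfloor \frac{\delta}{k} \right\rfloor + 1\right) + \delta + 1
```

Setting δ = 0 recovers the classical block-code Singleton bound d ≤ n − k + 1.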

Proceedings ArticleDOI
07 May 2001
TL;DR: This contribution deals with an iterative source-channel decoding approach where a simple channel decoder and a softbit-source decoder are concatenated, and derives a new formula that shows how the residual redundancy transforms into extrinsic information utilizable for iterative decoding.
Abstract: In digital mobile communications, efficient compression algorithms are needed to encode speech or audio signals. As the determined source parameters are highly sensitive to transmission errors, robust source and channel decoding schemes are required. This contribution deals with an iterative source-channel decoding approach where a simple channel decoder and a softbit-source decoder are concatenated. We mainly focus on softbit-source decoding which can be considered as an error concealment technique. This technique utilizes residual redundancy remaining after source coding. We derive a new formula that shows how the residual redundancy transforms into extrinsic information utilizable for iterative decoding. The derived formula opens several starting points for optimizations, e.g. it helps to find a robust index assignment. Furthermore, it allows the conclusion that softbit-source decoding is the limiting factor if applied to iterative decoding processes. Therefore, no significant gain will be obtainable by more than two iterations. This will be demonstrated by simulation.

Journal ArticleDOI
TL;DR: A theoretical and experimental analysis of iterative decoding of low-density convolutional (LDC) codes is given, and two families are investigated: homogeneous LDC codes and a convolutional code version of turbo-codes.
Abstract: A theoretical and experimental analysis of iterative decoding of low-density convolutional (LDC) codes is given. Two families are investigated: homogeneous LDC codes and a convolutional code version of turbo-codes.

Journal ArticleDOI
TL;DR: It is proved that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range, and that it has the advantages of remarkably decreasing the computational complexity while maintaining high-level decoding accuracy.
Abstract: This study investigates a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model that neglects the pair-wise correlation between neuronal activities and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on the faithful model and that of the center-of-mass decoding method. It turns out that UMLI has advantages of decreasing the computational complexity remarkably and maintaining high-level decoding accuracy. Moreover, it can be implemented by a biologically feasible recurrent network (Pouget, Zhang, Deneve, & Latham, 1998). The effect of correlation on the decoding accuracy is also discussed.
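
A minimal sketch of such an unfaithful-model decoder, assuming independent Poisson spike counts and hypothetical Gaussian tuning curves (both our illustrative choices):

```python
import numpy as np

def gaussian_tuning(thetas, centers, width=0.5, r_max=20.0):
    """Hypothetical bell-shaped tuning curves f_i(theta), shape (n_theta, n_neurons)."""
    d = thetas[:, None] - centers[None, :]
    return r_max * np.exp(-d ** 2 / (2.0 * width ** 2)) + 1e-9

def umli_decode(r, thetas, f):
    """Unfaithful-model ML: maximize sum_i [r_i log f_i(theta) - f_i(theta)],
    the independent-Poisson log likelihood, i.e. pairwise correlation ignored."""
    loglik = r @ np.log(f).T - f.sum(axis=1)
    return thetas[np.argmax(loglik)]

centers = np.linspace(-np.pi, np.pi, 32)
thetas = np.linspace(-np.pi, np.pi, 361)          # candidate stimulus grid
f = gaussian_tuning(thetas, centers)
r = np.random.default_rng(0).poisson(gaussian_tuning(np.array([0.3]), centers))[0]
theta_hat = umli_decode(r, thetas, f)             # estimate of the true stimulus 0.3
```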

Journal ArticleDOI
TL;DR: A new reduced-complexity decoding algorithm for low-density parity-check codes that operates entirely in the log-likelihood domain is presented.
Abstract: A new reduced-complexity decoding algorithm for low-density parity-check codes that operates entirely in the log-likelihood domain is presented. The computationally expensive check-node updates of the sum-product algorithm are simplified by using a difference-metric approach on a two-state trellis and by employing the dual-max approximation. The dual-max approximation is further improved by using a correction factor that allows the performance to approach that of full sum-product decoding.
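
The dual-max approximation and its correction are variants of the Jacobian logarithm. A sketch of the scalar core operation (our names; the paper applies it within a difference-metric trellis update):

```python
import numpy as np

def max_star(a, b, correction=True):
    """Jacobian logarithm: log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|).
    Dropping the second term gives the dual-max approximation; restoring
    it, exactly or from a small table, approaches sum-product performance."""
    return max(a, b) + (np.log1p(np.exp(-abs(a - b))) if correction else 0.0)
```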

Proceedings ArticleDOI
07 Oct 2001
TL;DR: RNS arithmetic and redundant residue number system (RRNS) based codes, as well as their properties, are reviewed, and it is demonstrated how RRNS codes can simplify global communication systems by unifying the entire encoding and decoding procedure.
Abstract: In this paper, residue number system (RNS) arithmetic and redundant residue number system (RRNS) based codes, as well as their properties, are reviewed. We propose a number of applications for RRNS codes and demonstrate how RRNS codes can be employed in global communication systems in order to simplify them by unifying the entire encoding and decoding procedure.
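
A toy sketch of the RRNS idea with hypothetical moduli: the information moduli carry the value, and a redundant modulus adds error detection, since a corrupted residue pushes the CRT reconstruction outside the legitimate range.

```python
from math import prod

def to_residues(x, moduli):
    return [x % m for m in moduli]

def crt(residues, moduli):
    """Chinese-remainder reconstruction of x from its residues."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(..., -1, m): inverse mod m
    return x % M

info, redundant = [13, 17], [19]         # hypothetical moduli
x = 150                                  # legitimate range: [0, 13 * 17)
res = to_residues(x, info + redundant)
res[0] ^= 1                              # corrupt one residue digit
# The corrupted residue pushes the reconstruction outside the legitimate
# range, which is how an RRNS decoder detects the error:
assert crt(res, info + redundant) >= prod(info)
```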

Journal ArticleDOI
TL;DR: This joint source/channel coder design provides significant packet loss recovery with minimal rate overhead, and compares favorably with conventional schemes.
Abstract: Reserving space for a symbol that is not in the source alphabet has been shown to provide excellent error detection. In this paper, we show how to exploit this capability using two sequential decoder structures to provide powerful error correction capability. This joint source/channel coder design provides significant packet loss recovery with minimal rate overhead, and compares favorably with conventional schemes.

Journal ArticleDOI
TL;DR: A new message-passing schedule for the decoding of low-density parity-check (LDPC) codes is presented, designated "probabilistic schedule", which takes into account the structure of the Tanner graph of the code.
Abstract: We present a new message-passing schedule for the decoding of low-density parity-check (LDPC) codes. This approach, designated "probabilistic schedule", takes into account the structure of the Tanner graph (TG) of the code. We show by simulation that the new schedule offers a much better performance/complexity trade-off. This work also suggests that scheduling plays an important role in iterative decoding and that a schedule that matches the structure of the TG is desirable.

Proceedings ArticleDOI
S. ten Brink
24 Jun 2001
TL;DR: The paper describes inner and outer code doping to enable iterative decoding of serially concatenated codes (SCC) which use inner rate-one recursive convolutional codes of memory greater than one.
Abstract: The paper describes inner and outer code doping to enable iterative decoding of serially concatenated codes (SCC) which use inner rate-one recursive convolutional codes of memory greater than one.

Book ChapterDOI
02 Apr 2001
TL;DR: An improved method for the fast correlation attack on certain stream ciphers is presented; a desirable characteristic of the proposed algorithm is its theoretical analyzability, so that its performance can also be estimated in cases where corresponding experiments are not feasible due to current technological limitations.
Abstract: An improved method for the fast correlation attack on certain stream ciphers is presented. The proposed algorithm employs the following decoding approaches: list decoding, in which a candidate is assigned to the list based on the most reliable information sets, and minimum distance decoding based on Hamming distance. Performance and complexity of the proposed algorithm are considered. A desirable characteristic of the proposed algorithm is its theoretical analyzability, so that its performance can also be estimated in cases where corresponding experiments are not feasible due to current technological limitations. The algorithm is compared with relevant recently reported algorithms, and its advantages are pointed out. Finally, the proposed algorithm is considered in the context of the security evaluation of a stream cipher proposal submitted to NESSIE.

Journal ArticleDOI
TL;DR: The objective behind this work is to provide motivation for decoding of data compressed by standard source coding schemes, that is, to view the compressed bitstreams as the output of variable-length coders and to make use of the redundancy in the bitstreams to assist in decoding.
Abstract: Motivated by previous results in joint source-channel coding and decoding, we consider the problem of decoding of variable-length codes using soft channel values. We present results of decoding of selected codes using the maximum a posteriori (MAP) decoder and the sequential decoder, and show the performance gains over decoding using hard decisions alone. The objective behind this work is to provide motivation for decoding of data compressed by standard source coding schemes, that is, to view the compressed bitstreams as being the output of variable-length coders and to make use of the redundancy in the bitstreams to assist in decoding. In order to illustrate the performance achievable by soft decoding, we provide results for decoding of MPEG-4 reversible variable-length codes as well as for decoding of MPEG-4 overhead information, under the assumption that this information is transmitted without channel coding over an additive white Gaussian noise channel. Finally, we present a method of unequal error protection for an MPEG-4 bitstream using the MAP and sequential source decoders, and show results comparable to those achievable by serial application of source and channel coding.

Journal ArticleDOI
TL;DR: A class of algorithms that combines Chase-2 and GMD (generalized minimum distance) decoding algorithms is presented for nonbinary block codes, which provides additional trade-offs between error performance and decoding complexity.
Abstract: In this letter, a class of algorithms that combines Chase-2 and GMD (generalized minimum distance) decoding algorithms is presented for nonbinary block codes. This approach provides additional trade-offs between error performance and decoding complexity. Reduced-complexity versions of the algorithms of practical interest are then provided and simulated.
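
The Chase-2 half of such combined algorithms generates test patterns over the least reliable positions; below is a sketch with a hypothetical algebraic `hard_decoder` hook (the GMD half, which instead erases the least reliable symbols, is analogous):

```python
import itertools
import numpy as np

def chase2_candidates(r, hard_decoder, t=3):
    """Flip every subset of the t least reliable hard-decision bits, run an
    algebraic hard decoder on each test pattern, and collect the distinct
    candidates; the best soft metric among them is the final decision."""
    hard = (r < 0).astype(int)               # BPSK mapping 0 -> +1 assumed
    weakest = np.argsort(np.abs(r))[:t]      # t least reliable positions
    candidates = set()
    for flips in itertools.product((0, 1), repeat=t):
        test = hard.copy()
        test[weakest] ^= np.array(flips)
        cw = hard_decoder(test)              # returns None on decoding failure
        if cw is not None:
            candidates.add(tuple(cw))
    return candidates
```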

Journal ArticleDOI
TL;DR: A list decoding for an error-correcting code is a decoding algorithm that generates a list of codewords within a Hamming distance t from the received vector, where t can be greater than the error-correction bound, and an efficient list-decoding algorithm for algebraic-geometric codes is given.
Abstract: A list decoding for an error-correcting code is a decoding algorithm that generates a list of codewords within a Hamming distance t from the received vector, where t can be greater than the error-correction bound. In previous work by M. Shokrollahi and H. Wasserman (see ibid., vol.45, p.432-7, March 1999) a list-decoding procedure for Reed-Solomon codes was generalized to algebraic-geometric codes. Recent work by V. Guruswami and M. Sudan (see ibid., vol.45, p.1757-67, Sept. 1999) gives improved list decodings for Reed-Solomon codes and algebraic-geometric codes that work for all rates and have many applications. However, these list-decoding algorithms are rather complicated. R. Roth and G. Ruckenstein (see ibid., vol.46, p.246-57, Jan. 2000) proposed an efficient implementation of the list decoding of Reed-Solomon codes. In this correspondence, extending Roth and Ruckenstein's fast algorithm for finding roots of univariate polynomials over polynomial rings, i.e., the reconstruct algorithm, we present an efficient algorithm for finding the roots of univariate polynomials over function fields. Based on the extended algorithm, we give an efficient list-decoding algorithm for algebraic-geometric codes.

Journal ArticleDOI
TL;DR: The algorithm proposed here presents a major advantage over existing decoding algorithms for BTCs by providing ample flexibility in terms of performance-complexity tradeoff, which makes the algorithm well suited for wireless multimedia applications.
Abstract: An efficient soft-input soft-output iterative decoding algorithm for block turbo codes (BTCs) is proposed. The proposed algorithm utilizes Kaneko's (1994) decoding algorithm for soft-input hard-output decoding. These hard outputs are converted to soft-decisions using reliability calculations. Three different schemes for reliability calculations incorporating different levels of approximation are suggested. The algorithm proposed here presents a major advantage over existing decoding algorithms for BTCs by providing ample flexibility in terms of performance-complexity tradeoff. This makes the algorithm well suited for wireless multimedia applications. The algorithm can be used for optimal as well as suboptimal decoding. The suboptimal versions of the algorithm can be developed by changing a single parameter (the number of error patterns to be generated). For any performance, the computational complexity of the proposed algorithm is less than the computational complexity of similar existing algorithms. Simulation results for the decoding algorithm for different two-dimensional BTCs over an additive white Gaussian noise channel are shown. A performance comparison of the proposed algorithm with similar existing algorithms is also presented.