
Showing papers on "List decoding published in 2001"


Proceedings ArticleDOI
06 Jul 2001
TL;DR: This paper compares the speed and output quality of a traditional stack-based decoding algorithm with two new decoders: a fast greedy decoder and a slow but optimal decoder that treats decoding as an integer-programming optimization problem.
Abstract: A good decoding algorithm is critical to the success of any statistical machine translation system. The decoder's job is to find the translation that is most likely according to a set of previously learned parameters (and a formula for combining them). Since the space of possible translations is extremely large, typical decoding algorithms are only able to examine a portion of it, and thus risk missing good solutions. In this paper, we compare the speed and output quality of a traditional stack-based decoding algorithm with two new decoders: a fast greedy decoder and a slow but optimal decoder that treats decoding as an integer-programming optimization problem.

300 citations
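
As a toy illustration of the greedy decoder idea, the sketch below hill-climbs over local rewrites of a candidate translation. The `score` function is a hypothetical stand-in; a real statistical MT decoder would combine translation-model, language-model, and alignment probabilities learned from data.

```python
def score(translation):
    # Hypothetical stand-in for log P(e) + log P(f|e): prefer short,
    # alphabetically ordered output (a real decoder scores model probabilities).
    inversions = sum(1 for a, b in zip(translation, translation[1:]) if a > b)
    return -len(translation) - inversions

def neighbors(translation, vocabulary):
    # Local moves of the kind a greedy decoder applies to an initial gloss:
    # single-word substitutions and adjacent swaps.
    for i in range(len(translation)):
        for w in vocabulary:
            if w != translation[i]:
                yield translation[:i] + [w] + translation[i + 1:]
    for i in range(len(translation) - 1):
        t = list(translation)
        t[i], t[i + 1] = t[i + 1], t[i]
        yield t

def greedy_decode(initial, vocabulary):
    # Steepest-ascent hill climbing: stop at a local optimum of `score`.
    current = initial
    while True:
        best = max(neighbors(current, vocabulary), key=score)
        if score(best) <= score(current):
            return current
        current = best

print(greedy_decode(["the", "dog", "zebra"], ["a", "the", "dog", "zebra"]))
```

Such a decoder is fast because it only examines a small neighborhood of the current hypothesis, which is exactly why it risks missing solutions that the optimal integer-programming decoder would find.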


Journal ArticleDOI
TL;DR: A stopping criterion which reduces the average number of iterations at the expense of very little performance degradation is proposed for this combined decoding approach to bridge the error performance gap between belief propagation decoding, which remains suboptimum, and maximum likelihood decoding, which is too complex to be implemented for the codes considered.
Abstract: In this paper, reliability based decoding is combined with belief propagation (BP) decoding for low-density parity check (LDPC) codes. At each iteration, the soft output values delivered by the BP algorithm are used as reliability values to perform reduced complexity soft decision decoding of the code considered. This approach bridges the error performance gap between belief propagation decoding, which remains suboptimum, and maximum likelihood decoding, which is too complex to be implemented for the codes considered. Trade-offs between decoding complexity and error performance are also investigated. In particular, a stopping criterion which reduces the average number of iterations at the expense of very little performance degradation is proposed for this combined decoding approach. Simulation results for several Gallager (1963, 1968) LDPC codes and difference-set cyclic codes of hundreds of information bits are given and discussed.

183 citations
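
The paper's stopping criterion is reliability-based; as a hedged baseline, the sketch below uses the standard syndrome stopping rule, halting as soon as the hard decisions satisfy every parity check. The (7,4) Hamming matrix and LLR values are illustrative choices, not taken from the paper.

```python
import numpy as np

def bp_decode(H, llr, max_iter=50):
    """Toy sum-product decoder that stops early once the hard decision x
    satisfies H x = 0 (mod 2); the paper's criterion additionally uses the
    soft reliability values to terminate sooner."""
    m, n = H.shape
    M = np.zeros((m, n))                       # check-to-variable messages
    for _ in range(max_iter):
        V = (llr + M.sum(axis=0)) - M          # variable-to-check messages
        V = np.where(H == 1, V, 0.0)
        T = np.tanh(np.clip(V, -30, 30) / 2.0)
        for i in range(m):                     # tanh rule at each check node
            idx = np.flatnonzero(H[i])
            for j in idx:
                M[i, j] = 2.0 * np.arctanh(np.prod(T[i, idx[idx != j]]))
        x = ((llr + M.sum(axis=0)) < 0).astype(int)
        if not np.any((H @ x) % 2):            # syndrome check: stop early
            break
    return x

H = np.array([[1, 1, 0, 1, 1, 0, 0],           # (7,4) Hamming code
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([4.0, 4, 4, -1.0, 4, 4, 4])     # one weakly wrong bit
print(bp_decode(H, llr))                       # -> all zeros
```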


Journal ArticleDOI
TL;DR: A squaring method is presented to simplify the decoding of orthogonal space-time block codes in a wireless communication system with an arbitrary number of transmit and receive antennas; it gives the same decoding performance as maximum-likelihood decoding at much lower complexity.
Abstract: We present a squaring method to simplify the decoding of orthogonal space-time block codes in a wireless communication system with an arbitrary number of transmit and receive antennas. Using this squaring method, a closed-form expression for the signal-to-noise ratio after space-time decoding is also derived. The method gives the same decoding performance as maximum-likelihood decoding while having much lower complexity.

163 citations
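
The squaring method applies to orthogonal designs of any size; the smallest concrete instance is the Alamouti code with two transmit antennas, sketched below. Linear combining decouples the symbols, the squared channel norm scales each decision statistic, and ML detection reduces to symbol-by-symbol slicing (illustrative parameters, not the paper's general construction).

```python
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

# One Alamouti block: time 1 sends (s1, s2), time 2 sends (-s2*, s1*).
s1, s2 = rng.choice(qpsk, 2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
n = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
r1 = h[0] * s1 + h[1] * s2 + n[0]
r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1) + n[1]

# Combining decouples the symbols: y_k = ||h||^2 s_k + noise.
y1 = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
y2 = np.conj(h[1]) * r1 - h[0] * np.conj(r2)

gain = np.sum(np.abs(h) ** 2)                    # the "squared" channel norm
est1 = qpsk[np.argmin(np.abs(y1 - gain * qpsk))]
est2 = qpsk[np.argmin(np.abs(y2 - gain * qpsk))]
print(np.allclose([est1, est2], [s1, s2]))       # -> True
```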


Proceedings ArticleDOI
25 Nov 2001
TL;DR: Two decoding schedules and the corresponding serialized architectures for low-density parity-check (LDPC) decoders are presented and the performance of these decoding schedules is evaluated through simulations on a magnetic recording channel.
Abstract: Two decoding schedules and the corresponding serialized architectures for low-density parity-check (LDPC) decoders are presented. They are applied to codes with parity-check matrices generated either randomly or using geometric properties of elements in Galois fields. Both decoding schedules have low computational requirements. The original concurrent decoding schedule has a large storage requirement that depends on the total number of edges in the underlying bipartite graph, while a new, staggered decoding schedule, which uses an approximation of belief propagation, has a reduced memory requirement that depends only on the number of bits in the block. The performance of these decoding schedules is evaluated through simulations on a magnetic recording channel.

154 citations


Journal ArticleDOI
TL;DR: This work introduces a variation on their decoding algorithm that, with no extra cost in complexity, provably corrects up to 12 times more errors.
Abstract: Sipser and Spielman (see ibid., vol.42, p.1717-22, Nov. 1996) have introduced a constructive family of asymptotically good linear error-correcting codes-expander codes-together with a simple parallel algorithm that will always remove a constant fraction of errors. We introduce a variation on their decoding algorithm that, with no extra cost in complexity, provably corrects up to 12 times more errors.

138 citations
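
The variation itself decodes on the edges of the expander with a small constituent code; the baseline Sipser-Spielman algorithm it improves on is easy to sketch: in each parallel round, flip every bit that sits in more unsatisfied than satisfied checks. The tiny Hamming matrix below is only for demonstration; the correction guarantees require a good expander graph.

```python
import numpy as np

def parallel_flip_decode(H, y, max_rounds=20):
    """Sipser-Spielman-style parallel bit flipping (a sketch of the baseline;
    Zemor's variation decodes edge codes on an expander instead)."""
    x = y.copy()
    for _ in range(max_rounds):
        unsat = (H @ x) % 2                 # 1 for each violated check
        if not unsat.any():
            break
        votes_bad = H.T @ unsat             # violated checks per bit
        votes_good = H.T @ (1 - unsat)      # satisfied checks per bit
        flip = votes_bad > votes_good
        if not flip.any():
            break                           # stuck: no bit has a majority
        x = (x + flip) % 2
    return x

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[4] = 1                                    # all-zero codeword, one error
print(parallel_flip_decode(H, y))           # -> all zeros
```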


Proceedings ArticleDOI
27 Mar 2001
TL;DR: A bit-level soft-in/soft-out decoder based on this trellis is used as an outer component decoder in an iterative decoding scheme for a serially concatenated source/channel coding system.
Abstract: We focus on a trellis-based decoding technique for variable length codes (VLCs) which does not require any additional side information besides the number of bits in the coded sequence. A bit-level soft-in/soft-out decoder based on this trellis is used as an outer component decoder in an iterative decoding scheme for a serially concatenated source/channel coding system. In contrast to previous approaches using this kind of trellis, we do not consider the received sequence as a concatenation of variable length codewords, but as one long codeword of a (weak) binary channel code which can be soft-in/soft-out decoded. By evaluating the distance properties of selected variable length codes we show that some codes are more suitable for trellis-based decoding than others. Finally we present simulation results which show the performance of the iterative decoding approach.

138 citations
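
As a hedged illustration of the bit-level trellis, the sketch below decodes a hypothetical three-symbol VLC (a→0, b→10, c→11). States are nodes of the code tree, the only side information is the number of bits, and a soft path metric selects the most likely valid bit sequence, which is exactly the constraint that lets the stream act as one long weak binary code.

```python
# Hypothetical VLC: a -> 0, b -> 10, c -> 11. Trellis states are internal
# nodes of the code tree; completing a codeword returns to the root "".
codes = {"a": "0", "b": "10", "c": "11"}
states = sorted({""} | {cw[:i] for cw in codes.values()
                        for i in range(1, len(cw))})
NEG = float("-inf")

def step(state, bit):
    node = state + bit
    if node in codes.values():
        return ""                          # codeword complete: back to root
    return node if node in states else None

def viterbi_vlc(llr):
    """Viterbi over the bit-level trellis; llr[k] > 0 favors bit 0. Returns
    the best bit sequence of exactly len(llr) bits ending at the root."""
    metric = {s: NEG for s in states}; metric[""] = 0.0
    path = {s: "" for s in states}
    for lk in llr:
        new_m = {s: NEG for s in states}
        new_p = {s: "" for s in states}
        for s in states:
            if metric[s] == NEG:
                continue
            for bit, gain in (("0", lk), ("1", -lk)):
                t = step(s, bit)
                if t is not None and metric[s] + gain > new_m[t]:
                    new_m[t], new_p[t] = metric[s] + gain, path[s] + bit
        metric, path = new_m, new_p
    return path[""]                        # must terminate at the root

# "b a c" = 10 0 11, with bit 2 received weakly wrong.
print(viterbi_vlc([-3.0, 3.0, -0.5, -3.0, -3.0]))   # -> '10011'
```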


Proceedings ArticleDOI
14 Oct 2001
TL;DR: Several novel constructions of codes are presented which share the common thread of using expander (or expander-like) graphs as a component and enable the design of efficient decoding algorithms that correct a large number of errors through various forms of "voting" procedures.
Abstract: We present several novel constructions of codes which share the common thread of using expander (or expander-like) graphs as a component. The expanders enable the design of efficient decoding algorithms that correct a large number of errors through various forms of "voting" procedures. We consider both the notions of unique and list decoding, and in all cases obtain asymptotically good codes which are decodable up to a "maximum" possible radius and either: (a) achieve a similar rate as the previously best known codes but come with significantly faster algorithms, or (b) achieve a rate better than any prior construction with similar error-correction properties. Among our main results are: i) codes of rate Ω(ε²) over constant-sized alphabet that can be list decoded in quadratic time from a (1-ε) fraction of errors; ii) codes of rate Ω(ε) over constant-sized alphabet that can be uniquely decoded from a (1/2-ε) fraction of errors in near-linear time (this matches AG-codes with much faster algorithms); iii) linear-time encodable and decodable binary codes of positive rate (in fact, rate Ω(ε²)) that can correct up to a (1/4-ε) fraction of errors.

137 citations
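
A minimal illustration of the "voting" idea, with a hypothetical random copy layout rather than the paper's expanders: every symbol is replicated on d right-hand nodes, an adversary corrupts a constant fraction of copies, and each symbol is recovered by majority vote. The expanders in the actual constructions are what guarantee the vote succeeds for every symbol.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 40, 9
word = rng.integers(0, 2, n)
copies = np.repeat(word, d)                 # d copies of every symbol

corrupt = rng.random(n * d) < 0.3           # corrupt ~30% of the copies
received = np.where(corrupt, 1 - copies, copies)

votes = received.reshape(n, d).sum(axis=1)  # tally the copies per symbol
decoded = (votes > d / 2).astype(int)
print((decoded == word).mean())             # most symbols recovered
```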


Journal ArticleDOI
TL;DR: Based on the two-dimensional (2-D) weight distribution of tail-biting codes, guidelines on how to choose tail-biting component codes that are especially suited for parallel concatenated coding schemes are given.
Abstract: Based on the two-dimensional (2-D) weight distribution of tail-biting codes, we give guidelines on how to choose tail-biting component codes that are especially suited for parallel concatenated coding schemes. Employing these guidelines, we tabulate tail-biting codes of different rate, length, and complexity. The performance of parallel concatenated block codes (PCBCs) using iterative (turbo) decoding is evaluated by simulation, and bounds are calculated in order to study their asymptotic performance.

112 citations
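
Tail-biting itself is easy to sketch: initialize the encoder in the state it will end in, so the codeword's trellis path wraps around and no termination tail is spent. For a feedforward encoder the final state is just the last m information bits, so one pass suffices; the (7,5) generators below are a common illustrative choice, not necessarily the component codes tabulated in the paper.

```python
def tailbiting_encode(u, g=(0o7, 0o5), m=2):
    """Tail-biting encoding for a feedforward rate-1/2 convolutional code
    (memory m): start state = final state = the last m information bits."""
    taps = [[(gen >> k) & 1 for k in range(m + 1)] for gen in g]
    state = [u[-k] for k in range(1, m + 1)]   # [u_{n-1}, ..., u_{n-m}]
    out = []
    for bit in u:
        window = [bit] + state                 # current input plus register
        for t in taps:
            out.append(sum(b * c for b, c in zip(window, t)) % 2)
        state = [bit] + state[:-1]             # shift register update
    return out

print(tailbiting_encode([1, 0, 1, 1, 0, 1]))
```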


Proceedings ArticleDOI
C. Howland1, A. Blanksby1
06 May 2001
TL;DR: A parallel architecture for decoding low density parity check (LDPC) codes is proposed that achieves high coding gain together with extremely low power dissipation, and high throughput.
Abstract: A parallel architecture for decoding low density parity check (LDPC) codes is proposed that achieves high coding gain together with extremely low power dissipation, and high throughput. The feasibility of this architecture is demonstrated through the design and implementation of a 1024 bit, rate-1/2, soft decision parallel LDPC decoder.

100 citations


Proceedings ArticleDOI
07 May 2001
TL;DR: This contribution deals with an iterative source-channel decoding approach where a simple channel decoder and a softbit-source decoder are concatenated, and derives a new formula that shows how the residual redundancy transforms into extrinsic information utilizable for iterative decoding.
Abstract: In digital mobile communications, efficient compression algorithms are needed to encode speech or audio signals. As the determined source parameters are highly sensitive to transmission errors, robust source and channel decoding schemes are required. This contribution deals with an iterative source-channel decoding approach where a simple channel decoder and a softbit-source decoder are concatenated. We mainly focus on softbit-source decoding which can be considered as an error concealment technique. This technique utilizes residual redundancy remaining after source coding. We derive a new formula that shows how the residual redundancy transforms into extrinsic information utilizable for iterative decoding. The derived formula opens several starting points for optimizations, e.g. it helps to find a robust index assignment. Furthermore, it allows the conclusion that softbit-source decoding is the limiting factor if applied to iterative decoding processes. Therefore, no significant gain will be obtainable by more than two iterations. This will be demonstrated by simulation.

94 citations


Journal ArticleDOI
TL;DR: It is proved that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range, and that it has the advantages of remarkably decreasing the computational complexity while maintaining high-level decoding accuracy.
Abstract: This study investigates a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model that neglects the pair-wise correlation between neuronal activities and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on the faithful model and that of the center-of-mass decoding method. It turns out that UMLI has advantages of decreasing the computational complexity remarkably and maintaining high-level decoding accuracy. Moreover, it can be implemented by a biologically feasible recurrent network (Pouget, Zhang, Deneve, & Latham, 1998). The effect of correlation on the decoding accuracy is also discussed.
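
A hedged sketch of the unfaithful-model idea: the spike counts may well be correlated, but the decoder below maximizes the independent-Poisson log likelihood anyway, which is exactly the computational saving UMLI formalizes. The Gaussian tuning curves and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
stimuli = np.linspace(-np.pi, np.pi, 361)        # candidate stimulus grid
centers = np.linspace(-np.pi, np.pi, 60)         # preferred stimuli

def rates(s):
    """Hypothetical Gaussian tuning curves (baseline + peak rate)."""
    return 5.0 + 50.0 * np.exp(-0.5 * ((s - centers) / 0.4) ** 2)

# Unfaithful decoding model: ignore any correlation and maximize the
# independent-Poisson log likelihood  sum_i [ r_i log f_i(s) - f_i(s) ].
s_true = 0.7
r = rng.poisson(rates(s_true))                   # observed spike counts
logL = [np.sum(r * np.log(rates(s)) - rates(s)) for s in stimuli]
print("UMLI estimate:", stimuli[np.argmax(logL)])   # close to 0.7
```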

Journal ArticleDOI
TL;DR: A new reduced-complexity decoding algorithm for low-density parity-check codes that operates entirely in the log-likelihood domain is presented.
Abstract: A new reduced-complexity decoding algorithm for low-density parity-check codes that operates entirely in the log-likelihood domain is presented. The computationally expensive check-node updates of the sum-product algorithm are simplified by using a difference-metric approach on a two-state trellis and by employing the dual-max approximation. The dual-max approximation is further improved by using a correction factor that allows the performance to approach that of full sum-product decoding.
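
The core of the simplification is the two-input LLR combination at a check node. Below, a hedged sketch contrasts the exact tanh rule, its dual-max (min-sum) approximation, and the correction term that makes the approximation exact again; the paper realizes this with a difference-metric computation on a two-state trellis, which is not reproduced here.

```python
import numpy as np

def boxplus_exact(a, b):
    """Exact check-node combination of two LLRs (tanh rule)."""
    return 2 * np.arctanh(np.tanh(a / 2) * np.tanh(b / 2))

def boxplus_dualmax(a, b, corrected=True):
    """Dual-max (min-sum) approximation, optionally with the correction
    log(1 + e^-|a+b|) - log(1 + e^-|a-b|), which restores the exact value."""
    val = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
    if corrected:
        val += np.log1p(np.exp(-abs(a + b))) - np.log1p(np.exp(-abs(a - b)))
    return val

a, b = 1.2, -0.7
print(boxplus_exact(a, b))            # exact value, about -0.365
print(boxplus_dualmax(a, b, False))   # uncorrected dual-max: -0.7
print(boxplus_dualmax(a, b, True))    # matches the exact value
```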

Proceedings ArticleDOI
06 Jul 2001
TL;DR: A new explicit error-correcting code based on Trevisan's extractor is proposed, yielding high-noise, almost-optimal-rate list-decodable codes over large alphabets that also support soft decoding.
Abstract: We define new error correcting codes based on extractors. We show that for certain choices of parameters these codes have better list decoding properties than are known for other codes, and are provably better than Reed-Solomon codes. We further show that codes with strong list decoding properties are equivalent to slice extractors, a variant of extractors. We give an application of extractor codes to extracting many hardcore bits from a one-way function, using few auxiliary random bits. Finally, we show that explicit slice extractors for certain other parameters would yield optimal bipartite Ramsey graphs.

Journal ArticleDOI
TL;DR: A new message-passing schedule for the decoding of low-density parity-check (LDPC) codes is presented, designated "probabilistic schedule", which takes into account the structure of the Tanner graph of the code.
Abstract: We present a new message-passing schedule for the decoding of low-density parity-check (LDPC) codes. This approach, designated "probabilistic schedule", takes into account the structure of the Tanner graph (TG) of the code. We show by simulation that the new schedule offers a much better performance/complexity trade-off. This work also suggests that scheduling plays an important role in iterative decoding and that a schedule that matches the structure of the TG is desirable.

Proceedings ArticleDOI
S. ten Brink1
24 Jun 2001
TL;DR: The paper describes inner and outer code doping to enable iterative decoding of serially concatenated codes (SCC) which use inner rate one recursive convolutional codes of memory greater than one.
Abstract: The paper describes inner and outer code doping to enable iterative decoding of serially concatenated codes (SCC) which use inner rate one recursive convolutional codes of memory greater than one.

Book ChapterDOI
02 Apr 2001
TL;DR: An improved method for the fast correlation attack on certain stream ciphers is presented; a desirable characteristic of the proposed algorithm is its theoretical analyzability, so that its performance can also be estimated in cases where corresponding experiments are not feasible due to current technological limitations.
Abstract: An improved method for the fast correlation attack on certain stream ciphers is presented. The proposed algorithm employs the following decoding approaches: list decoding, in which a candidate is assigned to the list based on the most reliable information sets, and minimum distance decoding based on Hamming distance. Performance and complexity of the proposed algorithm are considered. A desirable characteristic of the proposed algorithm is its theoretical analyzability, so that its performance can also be estimated in cases where corresponding experiments are not feasible due to current technological limitations. The algorithm is compared with relevant recently reported algorithms, and its advantages are pointed out. Finally, the proposed algorithm is considered in the context of the security evaluation of a stream cipher proposal submitted to NESSIE.

Journal ArticleDOI
TL;DR: The objective behind this work is to provide motivation for decoding of data compressed by standard source coding schemes, that is, to view the compressed bitstreams as being the output of variable-length coders and to make use of the redundancy in the bitstreams to assist in decoding.
Abstract: Motivated by previous results in joint source-channel coding and decoding, we consider the problem of decoding of variable-length codes using soft channel values. We present results of decoding of selected codes using the maximum a posteriori (MAP) decoder and the sequential decoder, and show the performance gains over decoding using hard decisions alone. The objective behind this work is to provide motivation for decoding of data compressed by standard source coding schemes, that is, to view the compressed bitstreams as being the output of variable-length coders and to make use of the redundancy in the bitstreams to assist in decoding. In order to illustrate the performance achievable by soft decoding, we provide results for decoding of MPEG-4 reversible variable-length codes as well as for decoding of MPEG-4 overhead information, under the assumption that this information is transmitted without channel coding over an additive white Gaussian noise channel. Finally, we present a method of unequal error protection for an MPEG-4 bitstream using the MAP and sequential source decoders, and show results comparable to those achievable by serial application of source and channel coding.

Journal ArticleDOI
TL;DR: A class of algorithms that combines Chase-2 and GMD (generalized minimum distance) decoding algorithms is presented for nonbinary block codes, which provides additional trade-offs between error performance and decoding complexity.
Abstract: In this letter, a class of algorithms that combines Chase-2 and GMD (generalized minimum distance) decoding algorithms is presented for nonbinary block codes. This approach provides additional trade-offs between error performance and decoding complexity. Reduced-complexity versions of the algorithms with practical interests are then provided and simulated.
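
Chase-2 itself is compact enough to sketch: flip all 2^p patterns on the p least reliable positions, hard-decode each test pattern, and keep the candidate codeword that correlates best with the soft input. The binary (7,4) Hamming code below stands in for the nonbinary codes and GMD combination treated in the letter.

```python
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
table = {tuple((H @ e) % 2): e for e in np.eye(7, dtype=int)}
table[(0, 0, 0)] = np.zeros(7, dtype=int)       # zero syndrome: no error

def hard_decode(y):
    """Syndrome (bounded-distance) decoder for the (7,4) Hamming code."""
    return (y + table[tuple((H @ y) % 2)]) % 2

def chase2(llr, p=2):
    hard = (llr < 0).astype(int)
    weak = np.argsort(np.abs(llr))[:p]          # p least reliable positions
    best, best_metric = None, float("-inf")
    for flips in itertools.product([0, 1], repeat=p):
        y = hard.copy()
        y[weak] = (y[weak] + flips) % 2         # apply the test pattern
        c = hard_decode(y)
        metric = np.sum((1 - 2 * c) * llr)      # correlation with the LLRs
        if metric > best_metric:
            best, best_metric = c, metric
    return best

print(chase2(np.array([2.0, -0.3, 1.5, 0.4, 2.2, 1.8, 2.5])))  # -> all zeros
```

Increasing p trades complexity for performance, which is precisely the performance-complexity knob the combined Chase-2/GMD algorithms expose.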

Journal ArticleDOI
TL;DR: A list decoding for an error-correcting code is a decoding algorithm that generates a list of codewords within a Hamming distance t from the received vector, where t can be greater than the error-correction bound, and an efficient list-decoding algorithm for algebraic-geometric codes is given.
Abstract: A list decoding for an error-correcting code is a decoding algorithm that generates a list of codewords within a Hamming distance t from the received vector, where t can be greater than the error-correction bound. In previous work by M. Shokrollahi and H. Wasserman (see ibid., vol.45, p.432-7, March 1999) a list-decoding procedure for Reed-Solomon codes was generalized to algebraic-geometric codes. Recent work by V. Guruswami and M. Sudan (see ibid., vol.45, p.1757-67, Sept. 1999) gives improved list decodings for Reed-Solomon codes and algebraic-geometric codes that work for all rates and have many applications. However, these list-decoding algorithms are rather complicated. R. Roth and G. Ruckenstein (see ibid., vol.46, p.246-57, Jan. 2000) proposed an efficient implementation of the list decoding of Reed-Solomon codes. In this correspondence, extending Roth and Ruckenstein's fast algorithm for finding roots of univariate polynomials over polynomial rings, i.e., the reconstruct algorithm, we present an efficient algorithm for finding the roots of univariate polynomials over function fields. Based on the extended algorithm, we give an efficient list-decoding algorithm for algebraic-geometric codes.
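
The algebraic machinery aside, the list-decoding notion in the first sentence is easy to make concrete: return every codeword within distance t, where t may exceed the unique-decoding radius floor((d-1)/2). A brute-force sketch over a toy distance-4 code (efficient algorithms such as Guruswami-Sudan achieve this without enumeration):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def list_decode(codewords, received, t):
    """All codewords within Hamming distance t of `received`; for t beyond
    (d-1)/2 the list may legitimately contain several codewords."""
    return [c for c in codewords if hamming(c, received) <= t]

code = ["000000", "111100", "001111", "110011"]   # minimum distance 4
print(list_decode(code, "101100", 1))   # within (d-1)/2: ['111100']
print(list_decode(code, "110000", 2))   # beyond it: three candidates
```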

Journal ArticleDOI
TL;DR: The algorithm proposed here presents a major advantage over existing decoding algorithms for BTCs by providing ample flexibility in terms of performance-complexity tradeoff, which makes the algorithm well suited for wireless multimedia applications.
Abstract: An efficient soft-input soft-output iterative decoding algorithm for block turbo codes (BTCs) is proposed. The proposed algorithm utilizes Kaneko's (1994) decoding algorithm for soft-input hard-output decoding. These hard outputs are converted to soft-decisions using reliability calculations. Three different schemes for reliability calculations incorporating different levels of approximation are suggested. The algorithm proposed here presents a major advantage over existing decoding algorithms for BTCs by providing ample flexibility in terms of performance-complexity tradeoff. This makes the algorithm well suited for wireless multimedia applications. The algorithm can be used for optimal as well as suboptimal decoding. The suboptimal versions of the algorithm can be developed by changing a single parameter (the number of error patterns to be generated). For any performance, the computational complexity of the proposed algorithm is less than the computational complexity of similar existing algorithms. Simulation results for the decoding algorithm for different two-dimensional BTCs over an additive white Gaussian noise channel are shown. A performance comparison of the proposed algorithm with similar existing algorithms is also presented.

Proceedings ArticleDOI
06 May 2001
TL;DR: In order to improve the performance of the multi-input multi-output (MIMO) Bell-Labs Layered Space Time (BLAST) wireless communication algorithm, the combination of BLAST and iterative decoding is examined.
Abstract: Research has shown that the performance of demapping a multilevel modulated signal can be improved by using anti-Gray mapping and iterative demapping and decoding. Iterative demapping and decoding is based on the turbo-decoding principle. In order to improve the performance of the multi-input multi-output (MIMO) Bell-Labs Layered Space Time (BLAST) wireless communication algorithm, the combination of BLAST and iterative decoding is examined. This principle is called turbo-BLAST. Turbo-BLAST is evaluated using the extrinsic information transfer (EXIT) chart method.

Proceedings ArticleDOI
25 Nov 2001
TL;DR: This paper extends the techniques of ten Brink to non-binary codes, and reveals several subtle points concerning the motivation and interpretation of the technique for binary codes.
Abstract: Mutual information transfer characteristics have been proposed by ten Brink (see IEE Electron. Lett., vol.35, no.10, p.806-9, 1999) as a quasi-analytical tool for the performance analysis of iterative decoding of concatenated codes. Given the individual transfer characteristics of the component codes in a concatenated coding system (obtained by simulation), the convergence region and average decoding trajectory of the concatenated system may be accurately predicted. In this paper, we extend the techniques of ten Brink to non-binary codes. In addition to providing a useful tool for the analysis of non-binary codes, our extension reveals several subtle points concerning the motivation and interpretation of the technique for binary codes. We propose a method for practical implementation of our extension, and apply the technique to the evaluation of non-binary turbo codes and space-time turbo codes.
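
For binary codes, the transfer characteristics rest on estimating the mutual information between a bit and its LLR from decoder output samples; a standard consistency-based estimator is sketched below on channel LLRs. The non-binary extension in the paper replaces scalar LLRs with probability vectors and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(bits, llrs):
    """I(B; L) = 1 - E[log2(1 + exp(-(1-2b) L))], valid when L is a true
    (consistent) LLR for bit b, as assumed when measuring EXIT charts."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-(1 - 2 * bits) * llrs)))

sigma = 0.8                                    # BPSK over AWGN
bits = rng.integers(0, 2, 200_000)
y = (1 - 2 * bits) + sigma * rng.standard_normal(bits.size)
llrs = 2 * y / sigma**2                        # exact channel LLRs
print(mutual_information(bits, llrs))          # the channel's mutual information
```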

Proceedings ArticleDOI
24 Jun 2001
TL;DR: The general concept of coding for a channel producing errors, where a side-channel possibly informs the decoder about a part of the information that is encoded in the transmitted codeword, and two constructions for coding for informed decoders are described.
Abstract: We consider coding for a channel producing errors, where a side-channel (not known to the encoder) possibly informs the decoder about a part of the information that is encoded in the transmitted codeword. The objective is to design a code whose correction power is enhanced if some information symbols are known to the decoder. We call this form of channel coding "coding for informed decoders". A possible application is in the field of address retrieval on optical media. The sector address on optical media is part of a header which is protected by an error correcting code. Under many circumstances much of the header information of the current sector can be inferred from the previously read sectors and the table of contents. The application of coding for informed decoders to sector header protection allows for reliable local address retrieval, which is especially important during writing. We describe the general concept, and provide two constructions for coding for informed decoders.

Proceedings ArticleDOI
25 Nov 2001
TL;DR: A normalized a posteriori probability (APP) based algorithm for the decoding of low-density parity check (LDPC) codes utilizes normalization to improve the accuracy of the soft values delivered by the simplified APP-based algorithm from one iteration to another during the iterative decoding.
Abstract: We propose a normalized a posteriori probability (APP) based algorithm for the decoding of low-density parity check (LDPC) codes. The normalized APP-based algorithm utilizes normalization to improve the accuracy of the soft values delivered by the simplified APP-based algorithm from one iteration to another during the iterative decoding, and can achieve very good tradeoff between decoding complexity and performance.
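
The normalization idea is most familiar from the normalized min-sum rule, sketched below as a hedged stand-in: the approximate check-node output, which is systematically over-confident, is scaled by a factor alpha < 1 before the next iteration. The abstract applies the same principle to a simplified APP-based update rather than to min-sum.

```python
import numpy as np

def check_update_normalized(in_llrs, alpha=0.8):
    """Normalized min-sum check-node update: scale the approximate
    (over-confident) magnitude by alpha to improve soft-value accuracy."""
    v = np.asarray(in_llrs, dtype=float)
    signs, mags = np.sign(v), np.abs(v)
    out = np.empty_like(v)
    for j in range(len(v)):
        others = np.delete(np.arange(len(v)), j)
        out[j] = alpha * np.prod(signs[others]) * mags[others].min()
    return out

print(check_update_normalized([1.5, -0.6, 2.1]))   # [-0.48  1.2  -0.48]
```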

BookDOI
01 Jan 2001
TL;DR: This workshop volume covers iterative decoding of cycle codes of graphs, Gröbner bases, Handelman's theorem on polynomials with positive multiples, and a spanning tree invariant for Markov shifts.
Abstract: Foreword * Preface * Part I: Overviews * An introduction to the analysis of iterative coding * Connections between linear systems and convolutional codes * Multi-dimensional symbolic dynamical systems * Part II: Codes on Graphs * Linear-congruence constructions of low-density parity-check codes * On the effective weights of pseudocode words for codes defined on graphs with cycles * Evaluation of Gallager codes for short block length and high rate applications * Two small Gallager codes * Mildly non-linear codes * Capacity-achieving sequences * Hypertrellis: A generalization of trellis and factor graph * Part III: Decoding techniques * BSC thresholds for code ensembles based on 'typical pairs' decoding * Properties of the tailbiting BCJR decoder * Iterative decoding of tail-biting trellises and connections with symbolic dynamics * Algorithms for decoding and interpolation * An algebraic description of iterative decoding schemes * Recursive construction of Gröbner bases for the solution of polynomial congruences * On iterative decoding of cycle codes of graphs * Part IV: Convolutional codes over finite Abelian groups: Some basic results * Symbolic dynamics and convolutional codes * Linear codes and their duals over artinian rings * Unit memory convolutional codes with maximum distance * Basic properties of multidimensional convolutional codes * Part V: Symbolic Dynamics and Automata Theory * Length distributions and regular sequences * Handelman's theorem on polynomials with positive multiples * Topological dynamics of cellular automata * A spanning tree invariant for Markov shifts * List of workshop participants

Journal ArticleDOI
24 Jun 2001
TL;DR: Gallager's (1963) soft-decoding (belief propagation) algorithm for decoding low-density parity-check (LDPC) codes, when applied to an arbitrary binary-input symmetric-output channel, is considered.
Abstract: We consider Gallager's (1963) soft-decoding (belief propagation) algorithm for decoding low-density parity-check (LDPC) codes, when applied to an arbitrary binary-input symmetric-output channel. By considering the expected values of the messages, we derive both lower and upper bounds on the performance of the algorithm. We also derive various properties of the decoding algorithm, such as a certain robustness to the details of the channel noise. Our results apply both to regular and irregular LDPC codes.
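
The abstract's analysis tracks expected message values through the iterations; the hard-decision analogue of that style of analysis is the classical density-evolution recursion for Gallager's algorithm A, sketched below for a (dv, dc)-regular ensemble on a BSC. This is a companion illustration, not the paper's soft-decoding bound.

```python
def gallager_a_threshold(dv=3, dc=6, iters=500, tol=1e-9):
    """Bisect the largest BSC error rate from which the expected message
    error probability of Gallager's algorithm A converges to zero."""
    def converges(p0):
        p = p0
        for _ in range(iters):
            u = (1 - (1 - 2 * p) ** (dc - 1)) / 2     # check output error
            p = p0 * (1 - (1 - u) ** (dv - 1)) + (1 - p0) * u ** (dv - 1)
        return p < tol
    lo, hi = 0.0, 0.5
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

print(gallager_a_threshold())   # about 0.039 for the (3,6) ensemble
```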

Journal ArticleDOI
TL;DR: A sequential, finite-delay, joint source-channel decoder that uses a novel state-space to deal with the problem of variable-length source codes in the decoder and is robust to inaccuracies in the estimation of channel statistics.
Abstract: This paper proposes an optimal maximum a posteriori probability decoder for variable-length encoded sources over binary symmetric channels (BSC) that uses a novel state-space to deal with the problem of variable-length source codes in the decoder. This sequential, finite-delay, joint source-channel decoder delivers substantial improvements over the conventional decoder, and also over a system that uses a standard forward error correcting code operating at the same overall bit rate. The decoder is also robust to inaccuracies in the estimation of channel statistics.

Patent
25 May 2001
TL;DR: In this paper, a method for generating new forward error correction codes, called skew codes, for the reliable transmission of data in noisy channels is disclosed, and an improved decoding method for decoding skew codes and any code that is defined by a set of sum-to-identity parity equations initially decoded using an algorithm that provides confidence values on all the symbols at every symbol time.
Abstract: A method for generating new forward error correction codes, called skew codes, for the reliable transmission of data in noisy channels is disclosed. The method involves adding additional sets of parity equations across the third dimension of a cubic array of bits. The parity equations are applied to the cubic array such that the rectangular patterns of one square array do not match up with a rectangular pattern in another square array. By selecting skew mapping parameters of the parity equations from a set of quadratic residues of prime numbers according to specific design rules, the resulting codes are well suited to low-complexity high-speed iterative decoding and have good error correction performance and error detection capability, particularly for applications requiring high code rates. Also disclosed is an improved decoding method for skew codes, and for any code defined by a set of sum-to-identity parity equations, that is initially decoded using an algorithm providing confidence values on all the symbols at every symbol time. When iterative decoding fails, the improved method makes hard decisions based on the soft decisions passed from the iterative decoder to produce a valid code word, through manipulation of the parity check matrix and reduction of its columns and rows.

Book ChapterDOI
TL;DR: A new decoding algorithm for general linear block codes that generates a direct estimate of the error locations based on exploiting the statistical information embedded in the classical syndrome decoding.
Abstract: This paper introduces a new decoding algorithm for general linear block codes. The algorithm generates a direct estimate of the error locations by exploiting the statistical information embedded in classical syndrome decoding. The algorithm can be used to cryptanalyze many algebraic-code public-key cryptosystems and identification systems. In particular, the results show that the McEliece public-key cryptosystem with its original parameters is not secure.

Journal ArticleDOI
TL;DR: The class of multilevel codes achieving practically important bit-error performance near the Shannon limit becomes far wider with iterative decoding, which often significantly compensates for the suboptimality of a staged decoder.
Abstract: Iterative decoding of multilevel coded modulation is discussed. Despite its asymptotic optimality under proper design, the error correcting capability of multilevel codes may not be fully exploited at finite block length with conventional multistage decoding. This stems from the suboptimality of multistage decoding, which gives rise to increased error multiplicity at lower index stages and the associated error propagation to higher stages. Such problems can be overcome in many situations by introducing iterative decoding, which often significantly compensates for the suboptimality of a staged decoder. With iterative decoding, the class of multilevel codes achieving the practically important bit-error performance near the Shannon limit becomes far wider.
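
A control-flow skeleton of the idea (the `level_decoders` are hypothetical callables, not a construction from the paper): a conventional multistage decoder makes one pass from the lowest level upward, whereas the iterative version repeats the pass so that decisions from higher levels can feed back and repair the error propagation described above.

```python
def multistage_decode(y, level_decoders, iterations=3):
    """Multistage decoding with iterative refinement: each level decoder
    sees the channel output plus the current decisions of the other levels;
    iterations > 1 lets later stages correct earlier ones."""
    decisions = [None] * len(level_decoders)
    for _ in range(iterations):
        for i, dec in enumerate(level_decoders):
            others = decisions[:i] + decisions[i + 1:]
            decisions[i] = dec(y, others)
    return decisions

# Trivial two-level demo with threshold "decoders".
decoders = [lambda y, others: int(y[0] > 0), lambda y, others: int(y[1] > 0)]
print(multistage_decode([0.4, -1.2], decoders))   # -> [1, 0]
```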