
Showing papers on "Sequential decoding published in 2000"


Journal ArticleDOI
26 Oct 2000
TL;DR: A polynomial-time soft-decision decoding algorithm for Reed-Solomon codes is developed and it is shown that the asymptotic performance can be approached as closely as desired with a list size that does not depend on the length of the code.
Abstract: A polynomial-time soft-decision decoding algorithm for Reed-Solomon codes is developed. This list-decoding algorithm is algebraic in nature and builds upon the interpolation procedure proposed by Guruswami and Sudan (see ibid., vol.45, p.1757-67, Sept. 1999) for hard-decision decoding. Algebraic soft-decision decoding is achieved by converting the probabilistic reliability information into a set of interpolation points, along with their multiplicities. The proposed conversion procedure is shown to be asymptotically optimal for a certain probabilistic model. The resulting soft-decoding algorithm significantly outperforms both the Guruswami-Sudan decoding and the generalized minimum distance (GMD) decoding of Reed-Solomon codes, while maintaining a complexity that is polynomial in the length of the code. Asymptotic analysis for a large number of interpolation points is presented, leading to a geometric characterization of the decoding regions of the proposed algorithm. It is then shown that the asymptotic performance can be approached as closely as desired with a list size that does not depend on the length of the code.

672 citations
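
The reliability-to-multiplicity conversion admits a compact greedy formulation: repeatedly spend one unit of interpolation cost on the entry of the reliability matrix with the largest ratio of reliability to (current multiplicity + 1). The sketch below illustrates one standard greedy rule of this proportional flavor; the function name and the total-cost parameter are illustrative, not taken from the paper.

```python
import numpy as np

def multiplicity_matrix(reliabilities, total_cost):
    """Greedy multiplicity assignment: repeatedly raise the multiplicity of
    the interpolation point whose reliability-to-multiplicity ratio is
    largest, until `total_cost` multiplicity units have been spent.

    `reliabilities` is a q-by-n matrix Pi, with Pi[i, j] the posterior
    probability that symbol value i was sent in position j.
    """
    pi = np.asarray(reliabilities, dtype=float)
    m = np.zeros_like(pi, dtype=int)
    for _ in range(total_cost):
        # The gain of raising m[i, j] by one is proportional to pi / (m + 1).
        score = pi / (m + 1)
        i, j = np.unravel_index(np.argmax(score), score.shape)
        m[i, j] += 1
    return m
```

Raising `total_cost` trades complexity for performance, which is the knob the asymptotic analysis in the abstract refers to.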


Proceedings ArticleDOI
06 Sep 2000
TL;DR: A linear decoding scheme based on iterative interference cancellation between parts of the code approaches the maximum-likelihood decoding performance.

Abstract: We propose a full-rate space-time block code for 3+ Tx antennas. The code is chosen to minimize the non-orthonormality that arises from increasing the rate above the maximum allowed by orthogonality. A linear decoding scheme based on iterative interference cancellation between parts of the code approaches the maximum-likelihood decoding performance.

578 citations


01 Jan 2000
TL;DR: The EXIT chart is used to illuminate the convergence properties of bit-interleaved coded modulation, equalization, and serially concatenated binary codes, facilitating the construction of code concatenations with iterative decoders operating close to the capacity limit.

Abstract: Since the discovery of parallel concatenated (turbo) codes, iterative decoding has become a vital field of research in digital communications. Applications of the "turbo principle" to many detection and decoding problems have been found. While most studies have focused on designing code concatenations with respect to maximum likelihood decoding performance, the convergence properties of iterative decoding schemes have gained a considerable amount of interest just recently. In this paper we use the extrinsic information transfer chart (EXIT chart) to illuminate the convergence properties of bit-interleaved coded modulation, equalization, and serially concatenated binary codes. The EXIT chart leads to new design ideas like inner and outer doping, facilitating the construction of code concatenations with iterative decoders operating close to the capacity limit.

350 citations


Journal ArticleDOI
TL;DR: Simulations using the IMT-2000/3GPP parameters demonstrate that this method gives approximately 0.2 to 0.4 dB performance gain compared to the standard max-log-MAP algorithm.

Abstract: Decoding turbo codes with the max-log-MAP algorithm is a good compromise between performance and complexity. The decoding quality of the max-log-MAP decoder is improved by using a scaling factor within the extrinsic calculation. Simulations using the IMT-2000/3GPP parameters demonstrate that this method gives approximately 0.2 to 0.4 dB performance gain compared to the standard max-log-MAP algorithm.

294 citations
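
The improvement amounts to one multiplication per extrinsic value. Here is a minimal sketch, assuming LLR arrays and a representative scaling factor of 0.7 (a value commonly quoted in the literature; the paper tunes the factor by simulation).

```python
import numpy as np

def scaled_extrinsic(l_posterior, l_channel, l_apriori, scale=0.7):
    """Extrinsic output of a max-log-MAP constituent decoder, damped by a
    multiplicative scaling factor before being fed to the other decoder.
    The factor compensates for the over-optimistic reliabilities that the
    max-log approximation produces."""
    l_extrinsic = np.asarray(l_posterior) - l_channel - l_apriori
    return scale * l_extrinsic
```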


Journal ArticleDOI
TL;DR: The wide-band limit is studied, in particular the mismatch capacity per unit cost and the achievable rates of an additive-noise spread-spectrum system with single-letter decoding and binary signaling.
Abstract: The mismatch capacity of a channel is the highest rate at which reliable communication is possible over the channel with a given (possibly suboptimal) decoding rule. This quantity has been studied extensively for single-letter decoding rules over discrete memoryless channels (DMCs). Here we extend the study to memoryless channels with general alphabets and to channels with memory with possibly non-single-letter decoding rules. We also study the wide-band limit, and, in particular, the mismatch capacity per unit cost, and the achievable rates on an additive-noise spread-spectrum system with single-letter decoding and binary signaling.

291 citations


Journal ArticleDOI
25 Jun 2000
TL;DR: Both upper and lower bounds on the decoding error probability of maximum-likelihood (ML) decoded low-density parity-check (LDPC) codes are derived, indicating that for various appropriately chosen ensembles of LDPC codes, reliable communication is possible up to channel capacity.
Abstract: We derive both upper and lower bounds on the decoding error probability of maximum-likelihood (ML) decoded low-density parity-check (LDPC) codes. The results hold for any binary-input symmetric-output channel. Our results indicate that for various appropriately chosen ensembles of LDPC codes, reliable communication is possible up to channel capacity. However, the ensemble averaged decoding error probability decreases polynomially, and not exponentially. The lower and upper bounds coincide asymptotically, thus showing the tightness of the bounds. However, for ensembles with suitably chosen parameters, the error probability of almost all codes is exponentially decreasing, with an error exponent that can be set arbitrarily close to the standard random coding exponent.

223 citations


Proceedings ArticleDOI
18 Jun 2000
TL;DR: This paper proposes to combine maximum-likelihood decoding and decision feedback equalization (DFE) for the Vertical Bell Laboratories Layered Space-Time (V-BLAST) system, and proposes an ordering scheme which gives the best performance for the worst subchannel.
Abstract: This paper proposes to combine maximum-likelihood (ML) decoding and decision feedback equalization (DFE) for the Vertical Bell Laboratories Layered Space-Time (V-BLAST) system. In the new decoding algorithm, we perform ML decoding for the first p subchannels, and use the DFE procedure for the remaining subchannels. We mathematically show that the new decoding scheme increases the diversity order for the worst subchannel from 1 to p, and verify it by computer simulation. Also, we propose an ordering scheme which gives the best performance for the worst subchannel, and show that an SNR gain equal to the number of transmit antennas can be achieved by the suggested ordering.

200 citations
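
A sketch of the idea with a QR-decomposed channel: after H = QR, the bottom p rows of the triangular factor involve only the p first-detected (weakest) layers, so those symbols can be searched jointly, after which the remaining layers are detected by ordinary back-substitution with slicing. The function name, the fixed column order, and the simple tail metric are assumptions; the paper additionally optimizes the detection ordering.

```python
import itertools
import numpy as np

def ml_dfe_detect(H, y, constellation, p):
    """Hybrid V-BLAST detection: joint ML over the p first-detected layers
    (the weakest under pure DFE), then decision feedback for the rest."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)            # H = QR, R upper triangular
    z = Q.conj().T @ y
    # The bottom p rows of R involve only the last p symbols,
    # so they can be searched exhaustively (|constellation|**p hypotheses).
    best_tail, best_metric = None, np.inf
    for cand in itertools.product(constellation, repeat=p):
        tail = np.array(cand)
        metric = np.linalg.norm(z[n - p:] - R[n - p:, n - p:] @ tail) ** 2
        if metric < best_metric:
            best_tail, best_metric = tail, metric
    x = np.zeros(n, dtype=complex)
    x[n - p:] = best_tail
    # DFE (back-substitution with slicing) for layers n-p-1 down to 0.
    for k in range(n - p - 1, -1, -1):
        r = z[k] - R[k, k + 1:] @ x[k + 1:]
        x[k] = min(constellation, key=lambda s: abs(r - R[k, k] * s))
    return x
```

Setting p = 1 recovers plain DFE detection, while p = n recovers full ML, matching the diversity trade-off described in the abstract.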


Journal ArticleDOI
TL;DR: Although it has been believed that OSMLD codes are far inferior to LDPC codes, it is shown that for medium code lengths, BP decoding of OSMLD codes can significantly outperform BP decoding of their equivalent LDPC codes.

Abstract: Previously, the belief propagation (BP) algorithm has received a lot of attention in the coding community, mostly due to its near-optimum decoding for low-density parity-check (LDPC) codes and its connection to turbo decoding. In this paper, we investigate the performance achieved by the BP algorithm for decoding one-step majority logic decodable (OSMLD) codes. The BP algorithm is expressed in terms of likelihood ratios rather than probabilities, as conventionally presented. The proposed formulation is better suited to decoding OSMLD codes in terms of numerical stability, because the weights of their check sums are often much higher than those of the corresponding LDPC codes. Although it has been believed that OSMLD codes are far inferior to LDPC codes, we show that for medium code lengths (say, 200-1000 bits), BP decoding of OSMLD codes can significantly outperform BP decoding of their equivalent LDPC codes. The reasons for this behavior are elaborated.

179 citations
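
For concreteness, here is the standard check-node update of BP in the log-likelihood-ratio domain (the "tanh rule"). This generic form, not the paper's exact formulation, illustrates why LLRs are numerically preferable to raw probabilities when check sums have high weight: products of many near-zero probabilities underflow, whereas LLRs stay in a moderate range.

```python
import numpy as np

def check_node_update(incoming_llrs):
    """Outgoing LLR on each edge of a check node, computed from the other
    incoming LLRs via the tanh rule of belief propagation."""
    llrs = np.asarray(incoming_llrs, dtype=float)
    t = np.tanh(llrs / 2.0)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        prod = np.prod(np.delete(t, i))
        # Clip to keep arctanh finite under floating-point round-off.
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out
```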


Journal ArticleDOI
Thomas Richardson
TL;DR: The geometric perspective clearly indicates the relationship between turbo-decoding and maximum-likelihood decoding, and analysis of the geometry leads to new results concerning existence of fixed points, conditions for uniqueness, conditions for stability, and proximity to maximum-likelihood decoding.
Abstract: The spectacular performance offered by turbo codes sparked intense interest in them. A considerable amount of research has simplified, formalized, and extended the ideas inherent in the original turbo code construction. Nevertheless, the nature of the relatively simple ad hoc turbo-decoding algorithm has remained something of a mystery. We present a geometric interpretation of the turbo-decoding algorithm. The geometric perspective clearly indicates the relationship between turbo-decoding and maximum-likelihood decoding. Analysis of the geometry leads to new results concerning existence of fixed points, conditions for uniqueness, conditions for stability, and proximity to maximum-likelihood decoding.

154 citations


Journal ArticleDOI
TL;DR: The first solution has a structure similar to that of the well-known algorithm by Bahl et al. (1974), whereas the second is based on noncoherent sequence detection and a reduced-state soft-output Viterbi algorithm.
Abstract: Previously, noncoherent sequence detection schemes for coded linear and continuous phase modulations have been proposed, which deliver hard decisions by means of a Viterbi algorithm. The current trend in digital transmission systems toward iterative decoding algorithms motivates an extension of these schemes. In this paper, we propose two noncoherent soft-output decoding algorithms. The first solution has a structure similar to that of the well-known algorithm by Bahl et al. (1974), whereas the second is based on noncoherent sequence detection and a reduced-state soft-output Viterbi algorithm. Applications to the combined detection and decoding of differential or convolutional codes are considered. Further applications to noncoherent iterative decoding of turbo codes and serially concatenated interleaved codes are also considered. The proposed noncoherent detection schemes exhibit moderate performance loss with respect to corresponding coherent schemes and are very robust to phase and frequency instabilities.

119 citations


Journal ArticleDOI
TL;DR: The tangential sphere upper bound is employed to provide improved upper bounds on the block and bit error probabilities of these ensembles of codes for the binary-input additive white Gaussian noise (AWGN) channel, based on coherent detection of equi-energy antipodal signals and maximum-likelihood decoding.
Abstract: The ensemble performance of parallel and serial concatenated turbo codes is considered, where the ensemble is generated by a uniform choice of the interleaver and of the component codes taken from the set of time-varying recursive systematic convolutional codes. Following the derivation of the input-output weight enumeration functions of the ensembles of random parallel and serial concatenated turbo codes, the tangential sphere upper bound is employed to provide improved upper bounds on the block and bit error probabilities of these ensembles of codes for the binary-input additive white Gaussian noise (AWGN) channel, based on coherent detection of equi-energy antipodal signals and maximum-likelihood decoding. The influence of the interleaver length and the memory length of the component codes is investigated. The improved bounding technique proposed here is compared to the conventional union bound and to an alternative bounding technique by Duman and Salehi (1998) which incorporates modified Gallager bounds. The advantage of the derived bounds is demonstrated for a variety of parallel and serial concatenated coding schemes with either fixed or random recursive systematic convolutional component codes, and it is especially pronounced in the region exceeding the cutoff rate, where the performance of turbo codes is most appealing. These upper bounds are also compared to simulation results of the iterative decoding algorithm.

Journal ArticleDOI
25 Jun 2000
TL;DR: This work develops sharper lower bounds with the simple decoding framework for the deletion channel by analyzing it for Markovian codebooks, and shows that the difference between the deletion and erasure capacities is even smaller than that with i.i.d. input codebooks.
Abstract: We study information transmission through a finite buffer queue. We model the channel as a finite-state channel whose state is given by the buffer occupancy upon packet arrival; a loss occurs when a packet arrives to a full queue. We study this problem in two contexts: one where the state of the buffer is known at the receiver, and the other where it is unknown. In the former case, we show that the capacity of the channel depends on the long-term loss probability of the buffer. Thus, even though the channel itself has memory, the capacity depends only on the stationary loss probability of the buffer. The main focus of this correspondence is on the latter case. When the receiver does not know the buffer state, this leads to the study of deletion channels, where symbols are randomly dropped and a subsequence of the transmitted symbols is received. In deletion channels, unlike erasure channels, there is no side information about which symbols are dropped. We study the achievable rate for deletion channels, and focus our attention on simple (mismatched) decoding schemes. We show that even with simple decoding schemes, with independent and identically distributed (i.i.d.) input codebooks, the achievable rate in deletion channels differs from that of erasure channels by at most H_0(p_d) - p_d log(K/(K-1)) bits, for p_d < 1 - 1/K, where p_d is the deletion probability, K is the alphabet size, and H_0(·) is the binary entropy function. Therefore, the difference in transmission rates between the erasure and deletion channels is not large for reasonable alphabet sizes. We also develop sharper lower bounds with the simple decoding framework for the deletion channel by analyzing it for Markovian codebooks. Here, it is shown that the difference between the deletion and erasure capacities is even smaller than that with i.i.d. input codebooks, and for a larger range of deletion probabilities. We also examine the noisy deletion channel, where a deletion channel is cascaded with a symmetric discrete memoryless channel (DMC). We derive a single-letter expression for an achievable rate for such channels. For the binary case, we show that this result simplifies to max(0, 1 - [H_0(θ) + θ·H_0(p_e)]), where p_e is the crossover probability of the binary symmetric channel.
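
Reading the bound above as H_0(p_d) - p_d·log2(K/(K-1)), it is easy to evaluate numerically. A small sketch (function names assumed, not from the paper) showing the gap for a binary alphabet with 10% deletions:

```python
import math

def h0(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def deletion_erasure_gap(p_d, K):
    """Upper bound from the abstract on the rate gap between the erasure
    and deletion channels with i.i.d. codebooks, valid for p_d < 1 - 1/K."""
    assert p_d < 1 - 1 / K
    return h0(p_d) - p_d * math.log2(K / (K - 1))

print(deletion_erasure_gap(0.1, 2))   # ~0.369 bits
```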

Journal ArticleDOI
TL;DR: It is shown that, for comparable performance, the new method can be implemented with far fewer quantization bits, which can lead to considerably lower decoding cost.

Abstract: This letter is concerned with the implementation of the sum-product algorithm (SPA) for decoding low-density parity-check codes. It is shown that the direct implementation of the original form of the SPA is sensitive to quantization effects. We propose a parity likelihood ratio technique to overcome the problem. It is shown that, for comparable performance, the new method can be implemented with far fewer quantization bits, which can lead to considerably lower decoding cost.

Proceedings ArticleDOI
Dan Boneh
01 May 2000
TL;DR: This work defines and solves a generalized CRT list decoding problem and discusses how it might be used within the quadratic sieve factoring method, and gives a new application for CRT list decoding: finding smooth integers in short intervals.

Abstract: We present a new algorithm for CRT list decoding. An instance of the CRT list decoding problem consists of integers B, 〈p1, ..., pn〉 and 〈r1, ..., rn〉, where p1 [...] n/3. The bounds we obtain are similar to the bounds obtained by Guruswami and Sudan for Reed-Solomon list decoding. Hence, our algorithm reduces the gap between CRT list decoding and list decoding of Reed-Solomon codes. In addition, we give a new application for CRT list decoding: finding smooth integers in short intervals. Problems of this type come up in several algorithms for factoring large integers. We define and solve a generalized CRT list decoding problem and discuss how it might be used within the quadratic sieve factoring method.

Proceedings ArticleDOI
07 Feb 2000
TL;DR: The authors describe an implementation of such an analog decoder in a 0.25 μm BiCMOS process; decoders of this kind incorporate a closed loop, which allows the development of a highly parallel analog network.

Abstract: Digital decoders play a fundamental role in extracting signals from a noisy background. Mixed-signal decoders have recently been employed in applications to capture the potential of smaller size, higher speed, or lower power consumption compared to an equivalent digital implementation. Developing an all-analog decoder can further improve on these parameters. Furthermore, decoders for tailbiting convolutional codes and concatenated coding schemes (e.g., turbo codes) incorporate a closed loop, which allows the development of a highly parallel analog network. The authors describe an implementation of such an analog decoder using a 0.25 μm BiCMOS process.

01 Jan 2000
TL;DR: This work proposes mutual information transfer characteristics for soft in/soft out decoders to design serially concatenated codes based on the convergence behavior of iterative decoding.
Abstract: The design of serially concatenated codes has so far been dominated by optimizing the asymptotic slopes of error probability curves. We propose mutual information transfer characteristics for soft-in/soft-out decoders to design serially concatenated codes based on the convergence behavior of iterative decoding. The exchange of extrinsic information is visualized as a decoding trajectory in the extrinsic information transfer chart (EXIT chart).
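
Each point of such a transfer characteristic is the mutual information between the transmitted bits and the extrinsic LLRs a decoder emits, typically estimated by time averaging. A minimal sketch under the usual symmetry/consistency assumption on the LLRs (function name illustrative):

```python
import numpy as np

def mutual_information(bits, llrs):
    """Estimate I(X; L) in bits from transmitted bits (0/1) and the
    extrinsic LLRs produced for them, using the ergodic estimate
    I ~= 1 - E[log2(1 + exp(-x * L))], with x in {+1, -1} and
    L = log(P(b=0)/P(b=1))."""
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # map 0 -> +1, 1 -> -1
    l = np.asarray(llrs, dtype=float)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * l)))
```

Sweeping the amount of a priori information fed to the decoder and plotting the measured output information against it traces out the decoder's EXIT curve.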

Journal ArticleDOI
TL;DR: This work presents low complexity, suboptimal alternatives which are inspired by the classical Reed decoding algorithm for binary RM codes, and simulates these new algorithms along with the existing decoding algorithms using additive white Gaussian noise and two-path fading models for a particular choice of code.
Abstract: Previously, a class of generalized Reed-Muller (RM) codes has been suggested for use in orthogonal frequency-division multiplexing. These codes offer error correcting capability combined with substantially reduced peak-to-mean power ratios. A number of approaches to decoding these codes have already been developed. Here, we present low complexity, suboptimal alternatives which are inspired by the classical Reed decoding algorithm for binary RM codes. We simulate these new algorithms along with the existing decoding algorithms using additive white Gaussian noise and two-path fading models for a particular choice of code. The simulations show that one of our new algorithms outperforms all existing suboptimal algorithms and offers performance that is within 0.5 dB of maximum-likelihood decoding, yet has complexity comparable to or lower than existing decoding approaches.
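
As background, the classical Reed algorithm that inspires these decoders recovers each first-order coefficient by majority vote over pairs of positions differing in one coordinate. A hard-decision sketch for RM(1, m) follows; the paper's algorithms are soft-decision refinements of this idea, and the function name is illustrative.

```python
import numpy as np

def reed_decode_rm1(y, m):
    """Classical Reed (majority-logic) decoding of the first-order
    Reed-Muller code RM(1, m), length 2**m. `y` is a hard-decision binary
    vector; returns the m+1 information bits (a0, a1, ..., am) of the
    codeword c(v) = a0 XOR a1*v1 XOR ... XOR am*vm."""
    y = np.asarray(y, dtype=int).copy()
    n = 1 << m
    coeffs = []
    for i in range(m):
        # Each pair (v, v XOR e_i) casts one vote for coefficient a_i.
        votes = [y[v] ^ y[v ^ (1 << i)] for v in range(n) if not (v >> i) & 1]
        coeffs.append(int(sum(votes) > len(votes) // 2))
    # Peel off the first-order part; a0 is the majority of the residual.
    for v in range(n):
        for i in range(m):
            if (v >> i) & 1 and coeffs[i]:
                y[v] ^= 1
    a0 = int(np.sum(y) > n // 2)
    return [a0] + coeffs
```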

Book ChapterDOI
10 Apr 2000
TL;DR: The novel algorithm is a method for the fast correlation attack with significantly better performance than other reported methods at lower complexity for the same inputs; the underlying principles, performance, and complexity are compared.

Abstract: An algorithm for cryptanalysis of certain keystream generators is proposed. The developed algorithm has the following two advantages over other reported ones: (i) it is more powerful and (ii) it provides a high-speed software implementation, as well as a simple hardware one, suitable for highly parallel architectures. The novel algorithm is a method for the fast correlation attack with significantly better performance than other reported methods, at lower complexity and with the same inputs. The algorithm is based on decoding procedures for the corresponding binary block code with novel constructions of the parity checks, and the following two decoding approaches are employed: a posteriori probability based threshold decoding and belief propagation based bit-flipping iterative decoding. These decoding procedures offer good trade-offs between the required sample length, overall complexity, and performance. The novel algorithm is compared with recently proposed improved fast correlation attacks based on convolutional codes and turbo decoding. The underlying principles, performance, and complexity are compared, and the gain obtained with the novel approach is pointed out.
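
The second decoding approach is a Gallager-style bit-flipping iteration. A generic sketch of that iteration is shown below; the paper's parity-check constructions and stopping rules are more elaborate, and all names here are illustrative.

```python
import numpy as np

def bit_flip_decode(hard_bits, checks, max_iters=50):
    """Bit-flipping iteration of the kind used in fast correlation attacks:
    each parity check votes on the bits it touches, and the bits failing
    the most checks are flipped. `checks` is a list of index tuples, each
    of which XORs to zero for the true LFSR sequence."""
    z = np.asarray(hard_bits, dtype=int).copy()
    for _ in range(max_iters):
        unsat = [c for c in checks if np.bitwise_xor.reduce(z[list(c)]) != 0]
        if not unsat:
            break
        fail_count = np.zeros(len(z), dtype=int)
        for c in unsat:
            fail_count[list(c)] += 1
        # Flip every bit that fails the maximal number of checks.
        z[fail_count == fail_count.max()] ^= 1
    return z
```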

Journal ArticleDOI
25 Jun 2000
TL;DR: It is shown that the performance of Reed-Solomon codes, for certain parameter values, is limited by worst case codeword configurations, but that with randomly chosen codes over large alphabets, more errors can be corrected.
Abstract: We derive upper bounds on the number of errors that can be corrected by list decoding of maximum-distance separable (MDS) codes using small lists. We show that the performance of Reed-Solomon (RS) codes, for certain parameter values, is limited by worst case codeword configurations, but that with randomly chosen codes over large alphabets, more errors can be corrected.

Proceedings ArticleDOI
L. Guivarch, J.-C. Carlach, P. Siohan
28 Mar 2000
TL;DR: A joint source-channel decoding algorithm is obtained which takes advantage of the a priori knowledge derived from the symbol probabilities of Huffman codes, and achieves good performance when turbo code-like iterative decoding is carried out.

Abstract: We present a new method for using a priori information at the bit level in the decoding of Huffman codes. Its basic principle is the computation and use of the bit probabilities derived from the symbol probabilities of Huffman codes. Thus, we obtain a joint source-channel decoding algorithm which takes advantage of this a priori knowledge. Even better performance is obtained when turbo code-like iterative decoding is carried out. Compared to separate decoding systems, the gain in signal-to-noise ratio varies from 0.5 up to 1 dB, with an added complexity which is relatively limited.
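
One simple way to obtain bit-level probabilities of the kind the abstract mentions: for every code-tree node (codeword prefix), the probability that the next bit is 0 is the probability mass of the symbols under the 0-branch divided by the mass under the node. A sketch, with an assumed 3-symbol code for illustration:

```python
def bit_priors(codebook, probs):
    """Bit-level a priori probabilities derived from Huffman symbol
    probabilities: for each codeword prefix already decoded, the
    probability that the next bit is 0. `codebook` maps symbols to
    bit strings such as '010'."""
    priors = {}
    for prefix in {c[:i] for c in codebook.values() for i in range(len(c))}:
        mass0 = sum(p for s, p in probs.items()
                    if codebook[s].startswith(prefix + '0'))
        mass = sum(p for s, p in probs.items()
                   if codebook[s].startswith(prefix))
        priors[prefix] = mass0 / mass
    return priors

# Hypothetical code {a: 0, b: 10, c: 11} with P(a)=0.5, P(b)=0.3, P(c)=0.2:
print(bit_priors({'a': '0', 'b': '10', 'c': '11'},
                 {'a': 0.5, 'b': 0.3, 'c': 0.2}))
# -> {'': 0.5, '1': 0.6}
```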

Patent
23 Jun 2000
TL;DR: In this method, a soft interpolation is performed to find the non-trivial polynomial QM(X,Y) of the lowest (weighted) degree whose zeros and their multiplicities are as specified by the matrix M.

Abstract: An algorithmic soft-decision decoding method for Reed-Solomon codes proceeds as follows. Given the reliability matrix Π showing the probability that a code symbol of a particular value was transmitted at each position, a multiplicity matrix M is computed which determines the interpolation points and their multiplicities. Given this multiplicity matrix M, soft interpolation is performed to find the non-trivial polynomial QM(X,Y) of the lowest (weighted) degree whose zeros and their multiplicities are as specified by the matrix M. Given this non-trivial polynomial QM(X,Y), all factors of QM(X,Y) of type Y−ƒ(X) are found, where ƒ(X) is a polynomial in X whose degree is less than the dimension k of the Reed-Solomon code. Given these polynomials ƒ(X), a codeword is reconstructed from each of them, and the most likely of these codewords is selected as the output of the algorithm. The algorithmic method is algebraic, operates in polynomial time, and significantly outperforms conventional hard-decision decoding, generalized minimum distance decoding, and Guruswami-Sudan decoding of Reed-Solomon codes. By varying the total number of interpolation points recorded in the multiplicity matrix M, the complexity of decoding can be adjusted in real time to any feasible level of performance. The algorithmic method extends to algebraic soft-decision decoding of Bose-Chaudhuri-Hocquenghem codes and algebraic-geometry codes.

Journal ArticleDOI
25 Jun 2000
TL;DR: The statistical approach proposed by D. Agrawal and A. Vardy to evaluate the error performance of the generalized minimum distance (GMD) decoding is extended to other reliability based decoding algorithms for binary linear block codes, namely Chase-type, combined GMD and Chase- type, and ordered statistic decoding.
Abstract: The statistical approach proposed by D. Agrawal and A. Vardy (see IEEE Trans. Inform. Theory, vol.46, pp.60-83, Jan. 2000) to evaluate the error performance of the generalized minimum distance (GMD) decoding is extended to other reliability based decoding algorithms for binary linear block codes, namely Chase-type, combined GMD and Chase-type, and ordered statistic decoding. In all cases, tighter and simpler bounds than previously proposed ones have been obtained with this approach.

Proceedings ArticleDOI
Hamid R. Sadjadpour
26 Jul 2000
TL;DR: The symbol-by-symbol maximum a posteriori (MAP) algorithm, also known as the BCJR algorithm, is described, and a new hardware architecture for implementing MAP-based decoding algorithms suitable for chip design is presented.

Abstract: The symbol-by-symbol maximum a posteriori (MAP) algorithm, also known as the BCJR algorithm, is described. The logarithmic versions of the MAP algorithm, namely the Log-MAP and Max-Log-MAP decoding algorithms, along with a new Simplified-Log-MAP algorithm, are presented here. The bit error rate performance and computational complexity of these algorithms are compared. A new hardware architecture for implementing the MAP-based decoding algorithms, suitable for chip design, is also presented.
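
The relationship among these variants is easiest to see in the pairwise combining function each one uses. A small sketch follows; the constants in the simplified variant are illustrative, not the paper's.

```python
import math

def max_star(a, b):
    """Jacobian logarithm used by the Log-MAP algorithm:
    log(e**a + e**b) = max(a, b) + log(1 + e**(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP drops the correction term entirely, trading a few
    tenths of a dB for much lower complexity."""
    return max(a, b)

def simplified_max_star(a, b, threshold=1.5, correction=0.25):
    """A common simplification: replace the correction term by a constant
    when |a - b| is small, and by zero otherwise (constants assumed here
    for illustration)."""
    return max(a, b) + (correction if abs(a - b) < threshold else 0.0)
```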

Journal ArticleDOI
TL;DR: This paper describes a channel-tolerant approach identified as "telesonar type-B signaling," designed to accommodate network architectures requiring multiple access to the channel while simultaneously providing covertness and energy efficiency.
Abstract: Undersea acoustic channels can exhibit multipath propagation with impulse-response duration and coherence time both of the order of tens to hundreds of milliseconds. Signal reception is further impaired by the presence of time-varying nonwhite ambient-noise spectra having a dynamic range of 30 dB or more. Acoustic communication requires appropriate waveform design and associated signal processing to accommodate these adverse transmission characteristics while also providing desired performance features such as low probability of detection (LPD) and multi-access networking. Adaptive-equalization techniques provide good performance only for channels with stable multipaths and high signal-to-noise ratios (SNRs) accommodating the signaling rates needed to sample and compensate for rapid changes. An alternative approach is to design for robustness against channel fluctuations. This paper describes a channel-tolerant approach identified as "telesonar type-B signaling." The technique has been designed to accommodate network architectures requiring multiple access to the channel while simultaneously providing covertness and energy efficiency. Specialized frequency-hopped M-ary frequency-shift-keying (FH-MFSK) waveforms are combined with related signal processing, including nonlinear adaptive techniques to mitigate the effects of all types of interference. This effectively results in a channel that has uniformly distributed noise in both time and frequency. Powerful error-correction coding permits low-SNR transmissions. Nonbinary, long-constraint-length convolutional coding and the related sequential decoding constitute a classical solution for difficult low-rate channels. A probability of bit error below 10^-10 is obtainable, even in Rayleigh-faded channels near the computational cutoff rate, and the probability of failure to decode frames of data is extremely small. Both simulations and analyses of at-sea experiments demonstrate the performance of this noncoherent approach to reliable acoustic communications.

Proceedings ArticleDOI
07 Mar 2000
TL;DR: This paper combines techniques for early stopping and error detection in order to stop the iterative turbo code decoding process and detect if errors are present in the decoded bit sequence.
Abstract: In this paper we combine techniques for early stopping and error detection in order to stop the iterative turbo code decoding process and detect if errors are present in the decoded bit sequence. The simple and efficient approaches we introduce are based on monitoring the mean of the absolute values of the log-likelihood ratios at the output of the component decoders over each frame.
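
The criterion itself is a one-liner. A minimal sketch, assuming the extrinsic LLRs of one component decoder for a frame are available; the threshold is a tuning parameter assumed here, not a value from the paper.

```python
import numpy as np

def should_stop(extrinsic_llrs, threshold):
    """Early-stopping test of the kind described above: halt iterating once
    the mean absolute LLR over the frame exceeds a threshold, signalling
    that the component decoders have become confident."""
    return np.mean(np.abs(extrinsic_llrs)) > threshold
```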

Journal ArticleDOI
01 Apr 2000
TL;DR: The maximum a posteriori probability (MAP) algorithm and its variants are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings to reduce the decoding complexity and delay of the MAP algorithms.

Abstract: The maximum a posteriori probability (MAP) algorithm is a trellis-based MAP decoding algorithm. It is the heart of turbo (or iterative) decoding that achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in a long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variants, such as the log-MAP and max-log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bidirectional and parallel MAP decodings.

Journal ArticleDOI
25 Jun 2000
TL;DR: It is proved that for any 2×2 (information bits) product code there is a unique and stable fixed point, and the interpretation of the stability conditions provides insight into the behavior of the decoding algorithm.

Abstract: The geometric interpretation of turbo decoding has established an analytical basis and provided tools for the analysis of this algorithm. We focus on turbo decoding of product codes, and based on the geometric framework, we extend the analytical results and show how the analysis tools can be practically adapted for this case. Specifically, we investigate the algorithm's stability and its convergence rate. We present new results concerning the structure and properties of stability matrices of the algorithm, and develop upper bounds on the algorithm's convergence rate. We prove that for any 2×2 (information bits) product code, there is a unique and stable fixed point. For the general case, we present sufficient conditions for stability. The interpretation of these conditions provides insight into the behavior of the decoding algorithm. Simulation results, which support and extend the theoretical analysis, are presented for Hamming [7,4,3]^2 and Golay [24,12,8]^2 product codes.

Journal ArticleDOI
TL;DR: This article presents an algorithmic improvement to Sudan's list-decoding algorithm for Reed-Solomon codes and its generalization to algebraic-geometric codes from Shokrollahi and Wasserman by computing sufficiently many coefficients of a Hensel development to reconstruct the functions that correspond to codewords.
Abstract: This article presents an algorithmic improvement to Sudan's (see J. Complexity, vol.13, p.180-93, 1997) list-decoding algorithm for Reed-Solomon codes and its generalization to algebraic-geometric codes from Shokrollahi and Wasserman (see ibid., vol.45, p.432-37, 1999). Instead of completely factoring the interpolation polynomial over the function field of the curve, we compute sufficiently many coefficients of a Hensel development to reconstruct the functions that correspond to codewords. We prove that these Hensel developments can be found efficiently using Newton's method. We also describe the algorithm in the special case of Reed-Solomon codes.

Journal ArticleDOI
TL;DR: A new data structure is investigated which allows fast decoding of texts encoded by canonical Huffman codes, with storage requirements much lower than for conventional Huffman trees; decoding is also faster, because some of the bit comparisons needed for decoding can be skipped.

Abstract: A new data structure is investigated which allows fast decoding of texts encoded by canonical Huffman codes. The storage requirements are much lower than for conventional Huffman trees, O(log^2 n) for trees of depth O(log n), and decoding is faster, because some of the bit comparisons needed for decoding can be skipped. Empirical results on large real-life distributions show a reduction of up to 50% and more in the number of bit operations. The basic idea is then generalized, yielding further savings.
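
The savings start from the standard property of canonical codes: codewords of equal length are consecutive integers, so decoding needs only two small per-length arrays instead of a tree. The sketch below shows that baseline decoding loop (names illustrative); the paper's skeleton-tree structure refines it further.

```python
def decode_canonical(bits, first_code, count, symbols):
    """Decode a bit sequence with a canonical Huffman code.
      first_code[l] - integer value of the smallest code of length l
      count[l]      - number of codewords of length l
    Once the accumulated value v lands in [first_code[l], first_code[l]
    + count[l]), the symbol index is offset[l] + v - first_code[l]."""
    # Index of the first symbol at each code length.
    offset = [0] * len(count)
    for l in range(1, len(count)):
        offset[l] = offset[l - 1] + count[l - 1]
    out, v, l = [], 0, 0
    for bit in bits:
        v, l = (v << 1) | bit, l + 1
        if v >= first_code[l] and v - first_code[l] < count[l]:
            out.append(symbols[offset[l] + v - first_code[l]])
            v, l = 0, 0
    return out

# Hypothetical canonical code {a: 0, b: 10, c: 11}:
print(decode_canonical([0, 1, 0, 1, 1], [0, 0, 2], [0, 1, 2], list('abc')))
# -> ['a', 'b', 'c']
```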

Proceedings ArticleDOI
TL;DR: This paper shows to what extent error protection techniques extensively studied for digital communication through Gaussian channels can be used advantageously for watermarking.

Abstract: This paper presents another way of considering watermarking methods, which are analyzed from the point of view of information theory. Watermarking is thus a communication problem in which some information bits have to be transmitted through an additive noise channel subject to distortions and attacks. Designing watermarking methods in such a way that this channel is Gaussian can be profitable. This paper shows to what extent error protection techniques extensively studied for digital communication through Gaussian channels can be used advantageously for watermarking. Convolutional codes combined with soft-decision decoding are the best example. In particular, when soft-decision Viterbi decoding is employed, this kind of coding scheme can achieve much better performance than BCH codes, at comparable levels of complexity and redundancy, both for still and moving images.
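
As a concrete instance of the scheme the paper advocates, here is a compact soft-decision Viterbi decoder for the textbook rate-1/2 (7,5) convolutional code. The code parameters and the BPSK soft-value convention (positive means "bit 0 more likely") are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def viterbi_soft(received, g=(0o7, 0o5), m=2):
    """Soft-decision Viterbi decoding of a rate-1/2 binary convolutional
    code (generators in octal, constraint length m+1). `received` holds
    one soft value per coded bit; returns hard information-bit decisions."""
    n_steps = len(received) // 2
    n_states = 1 << m
    metric = np.full(n_states, -np.inf)
    metric[0] = 0.0                       # encoder starts in the zero state
    paths = [[] for _ in range(n_states)]
    for t in range(n_steps):
        r = received[2 * t: 2 * t + 2]
        new_metric = np.full(n_states, -np.inf)
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == -np.inf:
                continue
            for bit in (0, 1):
                reg = (bit << m) | s      # shift register: newest bit in MSB
                out = [bin(reg & gi).count('1') & 1 for gi in g]
                # Correlation metric: reward agreement of soft values with
                # the BPSK-mapped coded bits (0 -> +1, 1 -> -1).
                branch = sum(ri * (1 - 2 * oi) for ri, oi in zip(r, out))
                ns = reg >> 1             # drop the oldest register bit
                if metric[s] + branch > new_metric[ns]:
                    new_metric[ns] = metric[s] + branch
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmax(metric))]
```

Keeping full paths per state is memory-hungry but keeps the sketch short; a production decoder would use traceback over a bounded window.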