
Showing papers on "List decoding" published in 2003


Proceedings ArticleDOI
04 Aug 2003
TL;DR: A modification of belief propagation is presented that enables decoding of LDPC codes defined over high-order Galois fields with a complexity that scales as p log_2(p), p being the field order.
Abstract: We present a modification of belief propagation that enables us to decode LDPC codes defined over high-order Galois fields with a complexity that scales as p log_2(p), p being the field order. With this low-complexity algorithm, we are able to decode GF(2^q) LDPC codes up to a field order of 256. We show by simulation that ultra-sparse regular LDPC codes over GF(64) and GF(256) exhibit very good performance.
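
The p log_2(p) scaling comes from carrying out the check-node convolution in a transform domain. Below is a minimal Python sketch of that single step (illustrative only, and assuming for simplicity that all edge coefficients equal 1, so a check simply constrains the GF(2^q) symbols to XOR to zero):

    import numpy as np

    def fwht(v):
        # unnormalised fast Walsh-Hadamard transform of a length-2^q vector
        v = v.astype(float)
        h = 1
        while h < len(v):
            for i in range(0, len(v), 2 * h):
                a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
                v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
            h *= 2
        return v

    def check_to_variable(incoming, j):
        # outgoing message on edge j = XOR-convolution of all other incoming pmfs,
        # computed as a pointwise product in the Walsh-Hadamard domain
        q2 = len(incoming[0])
        spec = np.ones(q2)
        for i, msg in enumerate(incoming):
            if i != j:
                spec *= fwht(np.asarray(msg))
        out = fwht(spec) / q2            # inverse transform = forward transform / 2^q
        out = np.clip(out, 0.0, None)    # guard against tiny negative round-off
        return out / out.sum()

    # toy example over GF(4): pmfs over {0, 1, 2, 3} from two neighbouring variables
    m1 = [0.7, 0.1, 0.1, 0.1]
    m2 = [0.6, 0.2, 0.1, 0.1]
    print(check_to_variable([m1, m2, [0.25] * 4], 2))   # -> [0.46, 0.22, 0.16, 0.16]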

280 citations


Dissertation
01 Jan 2003
TL;DR: This thesis investigates the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code, and provides specific LP decoders for two major families of codes: turbo codes and low-density parity-check codes.
Abstract: Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new “LP decoders” have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the channel. We refer to this property as the ML certificate property. We provide specific LP decoders for two major families of codes: turbo codes and low-density parity-check (LDPC) codes. These codes have received a great deal of attention recently due to their unprecedented error-correcting performance. Our decoder is particularly attractive for analysis of these codes because the standard message-passing algorithms used for decoding are often difficult to analyze. For turbo codes, we give a relaxation very close to min-cost flow, and show that the success of the decoder depends on the costs in a certain residual graph. For the case of rate-1/2 repeat-accumulate codes (a certain type of turbo code), we give an inverse polynomial upper bound on the probability of decoding failure. For LDPC codes (or any binary linear code), we give a relaxation based on the factor graph representation of the code. We introduce the concept of fractional distance, which is a function of the relaxation, and show that LP decoding always corrects a number of errors up to half the fractional distance.
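
For binary linear codes, the relaxation described above can be written down in a few lines. The following Python/SciPy sketch (mine, not the thesis code) spells out the per-check inequalities and flags whether the LP optimum is integral, i.e., whether the ML certificate applies; it enumerates all odd-sized subsets of each check, so it is practical only for short codes with low-degree checks.

    import numpy as np
    from itertools import combinations
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        """H: binary parity-check matrix (m x n); llr[i] = log Pr(y_i|x_i=0)/Pr(y_i|x_i=1)."""
        m, n = H.shape
        A_ub, b_ub = [], []
        for c in range(m):
            nbrs = np.flatnonzero(H[c])
            for odd in range(1, len(nbrs) + 1, 2):        # every odd-sized subset S of N(c)
                for S in combinations(nbrs, odd):
                    row = np.zeros(n)
                    row[nbrs] = -1.0
                    row[list(S)] = 1.0                    # sum_S x - sum_{N(c)\S} x <= |S| - 1
                    A_ub.append(row)
                    b_ub.append(odd - 1)
        res = linprog(llr, A_ub=np.array(A_ub), b_ub=b_ub,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        x = res.x
        integral = bool(np.all((x < 1e-6) | (x > 1 - 1e-6)))   # integral optimum => ML certificate
        return np.round(x).astype(int), integral

    # (7,4) Hamming code, all-zero codeword sent over a BSC(0.05), one bit flipped
    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]])
    y = np.array([1, 0, 0, 0, 0, 0, 0])
    llr = np.where(y == 0, 1.0, -1.0) * np.log(0.95 / 0.05)
    print(lp_decode(H, llr))          # -> (array([0, 0, 0, 0, 0, 0, 0]), True)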

266 citations


Journal ArticleDOI
TL;DR: Linear codes with a low-density generator matrix, decoded by iterative message passing over the corresponding graph, achieve performance close to the Shannon theoretical limit.
Abstract: We propose the use of linear codes with a low-density generator matrix to achieve performance similar to that of turbo and standard low-density parity-check codes. The use of iterative decoding techniques (message passing) over the corresponding graph achieves performance close to the Shannon theoretical limit. As an advantage with respect to turbo and standard low-density parity-check codes, the complexity of the decoding and encoding procedures is very low.
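
The low encoding complexity comes directly from the sparsity of the generator matrix: encoding costs one XOR per nonzero entry. A minimal, hypothetical Python sketch of a systematic low-density generator-matrix encoder (the construction and parameters are illustrative, not the authors'):

    import numpy as np

    rng = np.random.default_rng(0)

    def sparse_generator(k, m, ones_per_row=3):
        # random sparse "parity part" P of a systematic generator matrix G = [I | P]
        P = np.zeros((k, m), dtype=np.uint8)
        for i in range(k):
            P[i, rng.choice(m, size=ones_per_row, replace=False)] = 1
        return P

    def encode(u, P):
        # systematic LDGM encoding: codeword = [ u | u P mod 2 ]
        parity = (u @ P) % 2
        return np.concatenate([u, parity])

    u = rng.integers(0, 2, size=8, dtype=np.uint8)
    print(encode(u, sparse_generator(8, 8)))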

219 citations


Journal ArticleDOI
TL;DR: An iterative decoding architecture based on stochastic computational elements is proposed that provides an alternative to analogue decoding for high-speed/low-power applications.
Abstract: An iterative decoding architecture based on stochastic computational elements is proposed. Simulation results for a simple low-density parity-check code demonstrate near-optimal performance with respect to a maximum likelihood decoder. The proposed method provides an alternative to analogue decoding for high-speed/low-power applications.
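
For intuition, the stochastic computational elements can be modelled in a few lines of Python (a toy software model, not the proposed hardware architecture): probabilities become Bernoulli bit streams, a parity-check node becomes an XOR gate, and an equality (variable) node passes a bit only when its inputs agree, holding its previous output otherwise.

    import random

    def to_stream(p, n):
        # encode a probability as a length-n Bernoulli bit stream
        return [1 if random.random() < p else 0 for _ in range(n)]

    def check_node(*streams):
        # parity-check node: bitwise XOR of the incoming streams
        return [sum(bits) % 2 for bits in zip(*streams)]

    def equality_node(*streams):
        # variable node: pass the common bit when all inputs agree, otherwise hold the last output
        out, last = [], 0
        for bits in zip(*streams):
            if all(b == bits[0] for b in bits):
                last = bits[0]
            out.append(last)
        return out

    n = 100_000
    a, b = to_stream(0.9, n), to_stream(0.2, n)
    # the XOR stream mean approximates p_a(1-p_b) + p_b(1-p_a) = 0.74
    print(sum(check_node(a, b)) / n)
    # the equality stream mean approximates p_a p_b / (p_a p_b + (1-p_a)(1-p_b)) ~ 0.69
    print(sum(equality_node(a, b)) / n)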

209 citations


Journal ArticleDOI
TL;DR: The basic structure of LDPC codes and the iterative algorithms that are used to decode them are reviewed and the state of the art is considered.
Abstract: LDPC codes were invented in 1960 by R. Gallager. They were largely ignored until the discovery of turbo codes in 1993. Since then, LDPC codes have experienced a renaissance and are now one of the most intensely studied areas in coding. In this article we review the basic structure of LDPC codes and the iterative algorithms that are used to decode them. We also briefly consider the state of the art of LDPC design.

199 citations


Book ChapterDOI
Shuhong Gao
01 Jan 2003
TL;DR: A new algorithm is developed for decoding Reed-Solomon codes that uses fast Fourier transforms and computes the message symbols directly without explicitly finding error locations or error magnitudes.
Abstract: A new algorithm is developed for decoding Reed-Solomon codes. It uses fast Fourier transforms and computes the message symbols directly without explicitly finding error locations or error magnitudes. Within the decoding radius (up to half of the minimum distance), the new method is easily adapted for error and erasure decoding. It can also detect all errors outside the decoding radius. Compared with the Berlekamp-Massey algorithm, discovered in the late 1960s, the new method seems simpler and more natural, yet it has a similar time complexity.
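
A compact Python sketch of the structure of such a decoder over a small prime field is given below (my own reading of the approach: plain Lagrange interpolation stands in for the fast Fourier transforms, and the field size and code parameters are arbitrary illustrative choices): interpolate the received word, run a partial extended Euclidean algorithm against prod(x - a_i), and recover the message polynomial by a single division.

    P = 929                                   # small prime field GF(P), arbitrary choice
    def inv(a): return pow(a, P - 2, P)
    def trim(f):
        while len(f) > 1 and f[-1] == 0: f = f[:-1]
        return f
    def deg(f): return -1 if f == [0] else len(f) - 1
    def padd(f, g):
        n = max(len(f), len(g))
        return trim([((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % P for i in range(n)])
    def pscale(f, c): return trim([c * a % P for a in f])
    def psub(f, g): return padd(f, pscale(g, P - 1))
    def pmul(f, g):
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % P
        return trim(out)
    def pdivmod(f, g):
        q, r, ginv = [0] * max(1, len(f) - len(g) + 1), trim(list(f)), inv(g[-1])
        while r != [0] and deg(r) >= deg(g):
            d, c = deg(r) - deg(g), r[-1] * ginv % P
            q[d] = c
            r = psub(r, pmul(g, [0] * d + [c]))
        return trim(q), r
    def interpolate(xs, ys):
        # Lagrange: the unique polynomial of degree < n through the points (xs[i], ys[i])
        f = [0]
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            li, denom = [1], 1
            for j, xj in enumerate(xs):
                if j != i:
                    li, denom = pmul(li, [(-xj) % P, 1]), denom * (xi - xj) % P
            f = padd(f, pscale(li, yi * inv(denom) % P))
        return f
    def decode(xs, ys, k):
        n = len(xs)
        g0 = [1]                              # g0(x) = prod (x - a_i)
        for a in xs: g0 = pmul(g0, [(-a) % P, 1])
        g1 = interpolate(xs, ys)              # g1(a_i) = y_i
        r_prev, r, v_prev, v = g0, g1, [0], [1]
        while 2 * deg(r) >= n + k:            # partial extended Euclid, tracking v with r = u*g0 + v*g1
            qq, rem = pdivmod(r_prev, r)
            r_prev, r = r, rem
            v_prev, v = v, psub(v_prev, pmul(qq, v))
        f, rem = pdivmod(r, v)                # message polynomial = r / v if the division is exact
        return None if rem != [0] or deg(f) >= k else f

    # toy RS code: n = 7, k = 3, evaluation points 1..7, two symbol errors (= (n-k)/2)
    msg = [3, 2, 1]                           # 3 + 2x + x^2
    xs = list(range(1, 8))
    ys = [sum(c * pow(a, i, P) for i, c in enumerate(msg)) % P for a in xs]
    ys[1], ys[5] = 100, 200                   # corrupt two positions
    print(decode(xs, ys, 3))                  # -> [3, 2, 1]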

181 citations


Proceedings ArticleDOI
11 Oct 2003
TL;DR: A unifying framework for proving that predicate P is hard-core for a one-way function f is introduced, and it is applied to a broad family of functions and predicates, reproving old results in an entirely different way as well as showing new hard-core predicates for well-known one-way function candidates.
Abstract: We introduce a unifying framework for proving that predicate P is hard-core for a one-way function f, and apply it to a broad family of functions and predicates, reproving old results in an entirely different way as well as showing new hard-core predicates for well-known one-way function candidates. Our framework extends the list-coding method of Goldreich and Levin for showing hard-core predicates. Namely, a predicate will correspond to some error correcting code, predicting a predicate will correspond to access to a corrupted codeword, and the task of inverting one-way functions will correspond to the task of list decoding a corrupted codeword. A characteristic of the error correcting codes which emerge and are addressed by our framework is that codewords can be approximated by a small number of heavy coefficients in their Fourier representation. Moreover, as long as corrupted words are close enough to legal codewords, they will share a heavy Fourier coefficient. We list decode by devising a learning algorithm that is applied to corrupted codewords to learn their heavy Fourier coefficients. For codes defined over the {0,1}^n domain, a learning algorithm by Kushilevitz and Mansour already exists. For codes defined over Z_N, which are the codes that emerge for predicates based on number-theoretic one-way functions such as RSA and exponentiation modulo primes, we develop a new learning algorithm. This latter algorithm may be of independent interest outside the realm of hard-core predicates.

148 citations


Journal ArticleDOI
TL;DR: Simulation results show that both proposed Viterbi decoding-based suboptimal algorithms effectively achieve practically optimum performance for tailbiting codes of any length.
Abstract: The paper presents two efficient Viterbi decoding-based suboptimal algorithms for tailbiting codes. The first algorithm, the wrap-around Viterbi algorithm (WAVA), falls into the circular decoding category. It processes the tailbiting trellis iteratively, explores the initial state of the transmitted sequence through continuous Viterbi decoding, and improves the decoding decision with iterations. A sufficient condition for the decision to be optimal is derived. For long tailbiting codes, the WAVA gives essentially optimal performance with about one round of Viterbi trial. For short- and medium-length tailbiting codes, simulations show that the WAVA achieves closer-to-optimum performance with fewer decoding stages compared with the other suboptimal circular decoding algorithms. The second algorithm, the bidirectional Viterbi algorithm (BVA), employs two wrap-around Viterbi decoders to process the tailbiting trellis from both ends in opposite directions. The surviving paths from the two decoders are combined to form composite paths once the decoders meet in the middle of the trellis. The composite paths at each stage thereafter serve as candidates for decision update. The bidirectional process improves the error performance and shortens the decoding latency of unidirectional decoding with additional storage and computation requirements. Simulation results show that both proposed algorithms effectively achieve practically optimum performance for tailbiting codes of any length.

121 citations


Proceedings ArticleDOI
01 Dec 2003
TL;DR: This work addresses the problem of finding the most suitable index assignments to arbitrary, high order signal constellations and proposes a new method based on the binary switching algorithm that finds optimized mappings outperforming previously known ones.
Abstract: We investigate bit-interleaved coded modulation with iterative decoding (BICM-ID) for bandwidth efficient transmission, where the bit error rate is reduced through iterations between a multilevel demapper and a simple channel decoder. In order to achieve a significant turbo-gain, the assignment strategy of the binary indices to signal points is crucial. We address the problem of finding the most suitable index assignments to arbitrary, high order signal constellations. A new method based on the binary switching algorithm is proposed that finds optimized mappings outperforming previously known ones.

107 citations


Journal ArticleDOI
15 Sep 2003
TL;DR: It is proved that codes for this problem can be constructed using low-density parity-check (LDPC) matrices with maximum-likelihood (or typical set) decoding, and a coding theorem for parity-check codes over general channels is proved.
Abstract: Linear codes for a coding problem of correlated sources are considered. It is proved that we can construct codes by using low-density parity-check (LDPC) matrices with maximum-likelihood (or typical set) decoding. As applications of the above coding problem, a construction of codes is presented for the multiple-access channel with correlated additive noises, and a coding theorem of parity-check codes for general channels is proved.
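
As a toy illustration of one standard way a parity-check matrix is used for correlated sources (my own sketch of the Slepian-Wolf setting, with a brute-force minimum-weight search standing in for maximum-likelihood or typical-set decoding; the matrix and parameters are arbitrary): the encoder transmits only the syndrome H x, and the decoder combines it with correlated side information y.

    import numpy as np
    from itertools import combinations

    H = np.array([[1, 1, 0, 1, 1, 0, 0],       # parity-check matrix of a small binary code
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

    def compress(x):
        # encoder: send only the syndrome of the source word (3 bits instead of 7)
        return H @ x % 2

    def decompress(s, y, max_weight=1):
        # decoder: find the lowest-weight e with H e = s + H y, then x_hat = y + e
        target = (s + H @ y) % 2
        n = H.shape[1]
        for w in range(max_weight + 1):
            for idx in combinations(range(n), w):
                e = np.zeros(n, dtype=np.uint8)
                e[list(idx)] = 1
                if np.array_equal(H @ e % 2, target):
                    return (y + e) % 2
        return None                            # correlation noise too strong for this code

    x = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)
    y = x.copy(); y[4] ^= 1                    # side information differs from x in one position
    print(np.array_equal(decompress(compress(x), y), x))   # -> True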

100 citations


Patent
19 Feb 2003
TL;DR: A novel solution is presented that completely eliminates or substantially reduces the oscillations often encountered with the various iterative decoding approaches employed to decode LDPC coded signals.
Abstract: Stopping or reducing oscillations in Low Density Parity Check (LDPC) codes. A novel solution is presented that completely eliminates and/or substantially reduces the oscillations that are oftentimes encountered with the various iterative decoding approaches that are employed to decode LDPC coded signals. This novel approach may be implemented in any one of the following three ways. One way involves combining the Sum-Product (SP) soft decision decoding approach with the Bit-Flip (BF) hard decision decoding approach in an intelligent manner that may adaptively select the number of iterations performed during the SP soft decoding process. The other two ways involve modification of the manner in which the SP soft decoding approach and the BF hard decision decoding approach are implemented. One modification involves changing the initialization of the SP soft decoding process, and another modification involves the updating procedure employed during the SP soft decoding approach process.

Proceedings ArticleDOI
11 Oct 2003
TL;DR: It is shown how to reduce advice in Impagliazzo's proof of the Direct Product Lemma for pairwise independent inputs, which leads to error-correcting codes with O(n^2) encoding length, Õ(n^2) encoding time, and probabilistic Õ(n) list-decoding time.
Abstract: We show that Yao's XOR Lemma, and its essentially equivalent rephrasing as a Direct Product Lemma, can be re-interpreted as a way of obtaining error-correcting codes with good list-decoding algorithms from error-correcting codes having weak unique-decoding algorithms. To get codes with good rate and efficient list decoding algorithms, one needs a proof of the Direct Product Lemma that, respectively, is strongly derandomized, and uses very small advice. We show how to reduce advice in Impagliazzo's proof of the Direct Product Lemma for pairwise independent inputs, which leads to error-correcting codes with O(n^2) encoding length, Õ(n^2) encoding time, and probabilistic Õ(n) list-decoding time. (Note that the decoding time is sub-linear in the length of the encoding.) Back to complexity theory, our advice-efficient proof of Impagliazzo's hard-core set results yields a (weak) uniform version of O'Donnell's results on amplification of hardness in NP. We show that if there is a problem in NP that cannot be solved by BPP algorithms on more than a 1 - 1/(log n)^c fraction of inputs, then there is a problem in NP that cannot be solved by BPP algorithms on more than a 3/4 + 1/(log n)^c fraction of inputs, where c > 0 is an absolute constant.

Proceedings ArticleDOI
04 Aug 2003
TL;DR: The main computational steps in algebraic soft decoding, as well as Sudan-type list decoding, of Reed-Solomon codes are interpolation and factorization, and a series of transformations are given for the interpolation problem that arises in these decoding algorithms.
Abstract: The main computational steps in algebraic soft decoding, as well as Sudan-type list decoding, of Reed-Solomon codes are interpolation and factorization. A series of transformations is given for the interpolation problem that arises in these decoding algorithms. These transformations reduce the space and time complexity to a small fraction of the complexity of the original interpolation problem. A factorization procedure that applies directly to the reduced interpolation problem is also presented.


Journal ArticleDOI
TL;DR: Three decoders for the QR codes with parameters (71, 36, 11), (79, 40, 15), and (97, 49, 15) are developed, which have not been decoded before.
Abstract: Recently, a new algebraic decoding algorithm for quadratic residue (QR) codes was proposed by Truong et al. Using that decoding scheme, we now develop three decoders for the QR codes with parameters (71, 36, 11), (79, 40, 15), and (97, 49, 15), which have not been decoded before. To confirm our results, an exhaustive computer simulation has been executed successfully.

Journal ArticleDOI
TL;DR: The models and algorithms are applied to context-based arithmetic coding, widely used in practical systems (e.g., JPEG-2000), and reveal very good error-resilience performance.
Abstract: The paper addresses the issue of robust and joint source-channel decoding of arithmetic codes. We first analyze dependencies between the variables involved in arithmetic coding by means of the Bayesian formalism. This provides a suitable framework for designing a soft decoding algorithm that provides high error resilience. It also provides a natural setting for "soft synchronization", i.e., to introduce anchors favoring the likelihood of "synchronized" paths. In order to maintain the complexity of the estimation within a realistic range, a simple, yet efficient, pruning method is described. The algorithm can be placed in an iterative source-channel decoding structure, in the spirit of serial turbo codes. Models and algorithms are then applied to context-based arithmetic coding widely used in practical systems (e.g., JPEG-2000). Experimental results with both theoretical sources and with real images coded with JPEG-2000 reveal very good error-resilience performance.

Journal ArticleDOI
TL;DR: It is demonstrated that list decoding techniques can be used to find all possible pirate coalitions; some related open questions about linear codes are raised, and uses for other decoding techniques in the presence of additional information about traitor behavior are suggested.
Abstract: We apply results from algebraic coding theory to solve problems in cryptography, by using recent results on list decoding of error-correcting codes to efficiently find traitors who collude to create pirates. We produce schemes for which the traceability (TA) traitor tracing algorithm is very fast. We compare the TA and identifiable parent property (IPP) traitor tracing algorithms, and give evidence that when using an algebraic structure, the ability to trace traitors with the IPP algorithm implies the ability to trace with the TA algorithm. We also demonstrate that list decoding techniques can be used to find all possible pirate coalitions. Finally, we raise some related open questions about linear codes, and suggest uses for other decoding techniques in the presence of additional information about traitor behavior.

Journal ArticleDOI
TL;DR: In this paper, the authors review factor graphs, which can be used to describe codes and the joint probability distributions that must be dealt with in decoding, and show how this algorithm leads to iterative decoding algorithms for codes defined on graphs.
Abstract: Low-density parity-check codes, turbo codes, and indeed most practically decodable capacity-approaching error correcting codes can all be understood as codes defined on graphs. Graphs not only describe the codes, but, more important, they structure the operation of the sum-product decoding algorithm (or one of many possible variations), which can be used for iterative decoding. Such coding schemes have the potential to approach channel capacity, while maintaining reasonable decoding complexity. In this tutorial article we review factor graphs, which can be used to describe codes and the joint probability distributions that must be dealt with in decoding. We also review the sum-product algorithm, and show how this algorithm leads to iterative decoding algorithms for codes defined on graphs.
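
In log-likelihood-ratio form, the two local computations of the sum-product algorithm on the factor graph of a binary code reduce to the familiar "tanh rule" at check nodes and a sum at variable nodes. A minimal Python sketch of just these two update rules (illustrative; message scheduling, channel LLRs and the iteration loop of a full decoder are omitted):

    import math

    def check_node_update(incoming_llrs):
        # check node: outgoing extrinsic LLR on each edge via the tanh rule
        out = []
        for j in range(len(incoming_llrs)):
            prod = 1.0
            for i, l in enumerate(incoming_llrs):
                if i != j:
                    prod *= math.tanh(l / 2.0)
            prod = min(max(prod, -0.999999), 0.999999)   # numerical guard
            out.append(2.0 * math.atanh(prod))
        return out

    def variable_node_update(channel_llr, incoming_llrs):
        # variable node: outgoing extrinsic LLR on each edge is the sum of all other inputs
        total = channel_llr + sum(incoming_llrs)
        return [total - l for l in incoming_llrs]

    # one edge update for a degree-3 check node and a degree-2 variable node
    print(check_node_update([1.2, -0.4, 2.0]))
    print(variable_node_update(0.8, [1.5, -0.3]))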

Journal ArticleDOI
TL;DR: Lower and upper bounds are established on the rate of a (binary linear) code that can be list decoded with list size L when up to a fraction p of its symbols are adversarially erased.
Abstract: We consider the problem of list decoding from erasures. We establish lower and upper bounds on the rate of a (binary linear) code that can be list decoded with list size L when up to a fraction p of its symbols are adversarially erased. Such bounds already exist in the literature, albeit under the label of generalized Hamming weights, and we make their connection to list decoding from erasures explicit. Our bounds show that in the limit of large L, the rate of such a code approaches the "capacity" (1 - p) of the erasure channel. Such nicely list decodable codes are then used as inner codes in a suitable concatenation scheme to give a uniformly constructive family of asymptotically good binary linear codes of rate Ω(ε^2/log(1/ε)) that can be efficiently list-decoded using lists of size O(1/ε) when an adversarially chosen (1 - ε) fraction of symbols are erased, for arbitrary ε > 0. This improves previous results in this vein, which achieved a rate of Ω(ε^3 log(1/ε)).
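
The combinatorial picture is easy to state in code: after the erasures, the list is exactly the set of codewords that agree with the received word on the surviving positions, and its size is governed by the rank of the surviving columns of the generator matrix. A brute-force Python sketch for a small binary linear code (illustrative only; it enumerates all 2^k messages, which real list decoders avoid):

    import numpy as np
    from itertools import product

    def erasure_list_decode(G, received):
        # received[i] is 0, 1, or None (erased); list all codewords consistent with the rest
        k, n = G.shape
        known = [i for i in range(n) if received[i] is not None]
        out = []
        for msg in product((0, 1), repeat=k):
            cw = np.array(msg) @ G % 2
            if all(cw[i] == received[i] for i in known):
                out.append(cw)
        return out

    # generator matrix of the (7,4) Hamming code in systematic form
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    received = [None, None, 1, 1, None, 1, None]     # 4 of 7 symbols erased
    for cw in erasure_list_decode(G, received):
        print(cw)          # -> a list of two codewords: [1 0 1 1 0 1 0] and [1 1 1 1 1 1 1]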

Proceedings ArticleDOI
09 Jun 2003
TL;DR: The first construction of error-correcting codes that can be encoded in linear time and list decoded in linear time from a fraction (1-ε) of errors, for arbitrary ε > 0, is presented.
Abstract: We present the first construction of error-correcting codes which can be (list) decoded from a noise fraction arbitrarily close to 1 in linear time. Specifically, we present an explicit construction of codes which can be encoded in linear time as well as list decoded in linear time from a fraction (1-ε) of errors for arbitrary ε > 0. The rate and alphabet size of the construction are constants that depend only on ε. Our construction involves devising a new combinatorial approach to list decoding, in contrast to all previous approaches which relied on the power of decoding algorithms for algebraic codes like Reed-Solomon codes. Our result implies that it is possible to have, and in fact explicitly specifies, a coding scheme for arbitrarily large noise thresholds with only constant redundancy in the encoding and constant amount of work (at both the sending and receiving ends) for each bit of information to be communicated. Such a result was known for certain probabilistic error models, and here we show that this is possible under the stronger adversarial noise model as well.

Journal ArticleDOI
TL;DR: It is necessary to consider efficient realizations of iterative decoders when area, power, and throughput of the decoding implementation are constrained by practical design issues of communications receivers.
Abstract: Implementation constraints on iterative decoders applying message-passing algorithms are investigated. Serial implementations similar to traditional microprocessor datapaths are compared against architectures with multiple processing elements that exploit the inherent parallelism in the decoding algorithm. Turbo codes and low-density parity check codes, in particular, are evaluated in terms of their suitability for VLSI implementation in addition to their bit error rate performance as a function of signal-to-noise ratio. It is necessary to consider efficient realizations of iterative decoders when area, power, and throughput of the decoding implementation are constrained by practical design issues of communications receivers.

Journal ArticleDOI
TL;DR: This paper constructs "good" binary linear block codes at any rate r<1 by serially concatenating an arbitrary outer code of rate r with a large number of rate-1 inner codes through uniform random interleavers and proves that long codes from this ensemble will achieve the Gilbert-Varshamov (1952) bound with high probability.
Abstract: Until the analysis of repeat-accumulate codes by Divsalar et al. (1998), few people would have guessed that simple rate-1 codes could play a crucial role in the construction of "good" binary codes. We construct "good" binary linear block codes at any rate r<1 by serially concatenating an arbitrary outer code of rate r with a large number of rate-1 inner codes through uniform random interleavers. We derive the average output weight enumerator (WE) for this ensemble in the limit as the number of inner codes goes to infinity. Using a probabilistic upper bound on the minimum distance, we prove that long codes from this ensemble will achieve the Gilbert-Varshamov (1952) bound with high probability. Numerical evaluation of the minimum distance shows that the asymptotic bound can be achieved with a small number of inner codes. In essence, this construction produces codes with good distance properties which are also compatible with iterative "turbo" style decoding. For selected codes, we also present bounds on the probability of maximum-likelihood decoding (MLD) error and simulation results for the probability of iterative decoding error.
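
The encoder side of this kind of construction is very simple. As an illustration only, the rate-1 inner code can be taken to be a 1/(1+D) accumulator applied repeatedly through independent random interleavers (a hypothetical instantiation for this sketch, with a trivial repetition outer code as placeholder):

    import numpy as np

    rng = np.random.default_rng(1)

    def accumulate(bits):
        # rate-1 "accumulator" 1/(1+D): running XOR of the input sequence
        return np.bitwise_xor.accumulate(bits)

    def serial_concatenate(outer_codeword, num_inner=4):
        # pass the outer codeword through num_inner (interleaver, accumulator) stages
        x = np.asarray(outer_codeword, dtype=np.uint8)
        for _ in range(num_inner):
            x = accumulate(x[rng.permutation(len(x))])
        return x

    # placeholder outer code: a rate-1/2 repetition of a random information word,
    # so the overall rate stays 1/2 because every inner stage is rate-1
    info = rng.integers(0, 2, size=16, dtype=np.uint8)
    outer = np.repeat(info, 2)
    print(serial_concatenate(outer))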

Proceedings ArticleDOI
15 Sep 2003
TL;DR: Upper and lower bounds are derived on the size of the smallest stopping set in LDPC codes arising from projective and Euclidean geometries, and examples of codes that achieve these bounds are provided.
Abstract: The size of the smallest stopping set in LDPC codes helps in analyzing their performance under iterative decoding, just as the minimum distance helps in analyzing the performance under maximum-likelihood decoding. We study stopping sets in LDPC codes arising from 2-designs, in particular LDPC codes derived from projective and Euclidean geometries. We derive upper and lower bounds on the size of the smallest stopping set in such codes, and provide examples of codes that achieve these bounds.
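
Recall that a stopping set is a nonempty set S of variable nodes such that every check node with a neighbour in S has at least two neighbours in S. For small matrices the smallest one can be found by brute force, as in the following Python sketch (illustrative only; the search is exponential in the block length, and the matrix shown is an arbitrary toy example):

    import numpy as np
    from itertools import combinations

    def is_stopping_set(H, S):
        # every check touching S must touch it at least twice
        counts = H[:, list(S)].sum(axis=1)
        return bool(np.all((counts == 0) | (counts >= 2)))

    def smallest_stopping_set(H):
        n = H.shape[1]
        for size in range(1, n + 1):
            for S in combinations(range(n), size):
                if is_stopping_set(H, S):
                    return set(S)
        return None

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    print(smallest_stopping_set(H))        # -> {0, 1, 2}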

Journal ArticleDOI
TL;DR: A novel density evolution approach to analyze the iterative decoding algorithms of low-density parity-check (LDPC) codes and product codes, based on Gaussian densities, is proposed, whose iterates directly represent the error probability both for the additive white Gaussian noise (AWGN) and the Rayleigh-fading channel.
Abstract: We propose a novel density evolution approach to analyze the iterative decoding algorithms of low-density parity-check (LDPC) codes and product codes, based on Gaussian densities. Namely, for these classes of codes we derive a one-dimensional (1D) map whose iterates directly represent the error probability both for the additive white Gaussian noise (AWGN) and the Rayleigh-fading channel. These simple models allow a qualitative analysis of the nonlinear dynamics of the decoding algorithm. As an application, we compute the decoding thresholds and show that they are consistent with the simulation results available in the literature.

Patent
28 Oct 2003
TL;DR: A two-dimensional code with superior decoding properties, in which the level of error correction can be controlled, is presented together with methods for encoding and decoding it; the code comprises a finder-pattern area, a timing-pattern area, and a data area.
Abstract: A two-dimensional code with superior decoding properties, in which the level of the error-correcting code can be controlled, is provided, together with methods for encoding and decoding the two-dimensional code. The code includes a finder-pattern area containing finder patterns for distinguishing the code area from the whole image, a timing-pattern area containing timing patterns for locating the data area within the whole code and the position of each cell of the data area, and a data area carrying the various kinds of input data together with decoding information for that data.

Journal ArticleDOI
TL;DR: The proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.
Abstract: We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows a finely graded up to lossless quality scalability on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the z axis. Even though we did not address the complexity issue, we expect a decoding time of the order of one second/image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.

Journal ArticleDOI
TL;DR: This article presents a tutorial overview of the class of concatenated convolutional codes with interleavers, also known as turbo-like codes, endowed with a decoding algorithm that splits the decoding burden into separate decoding of each individual code.
Abstract: This article presents a tutorial overview of the class of concatenated convolutional codes with interleavers, also known as turbo-like codes. They are powerful codes, formed by a number of encoders connected through interleavers, endowed with a decoding algorithm that splits the decoding burden into separate decoding of each individual code. Refinement of successive estimates of the information sequence is obtained by iterating the procedure of passing from one decoder to the other the likelihood information decorrelated by the interleaver action. The key issues of code analysis and design are covered at the level of broad comprehension, without paying attention to analytical details.

Journal ArticleDOI
TL;DR: A novel iterative error control technique based on the threshold decoding algorithm and new convolutional self-doubly orthogonal codes is proposed, providing good tradeoff between complexity, latency and error performance.
Abstract: A novel iterative error control technique based on the threshold decoding algorithm and new convolutional self-doubly orthogonal codes is proposed. It differs from parallel concatenated turbo decoding as it uses a single convolutional encoder, a single decoder and hence no interleaver, neither at encoding nor at decoding. Decoding is performed iteratively using a single threshold decoder at each iteration, thereby providing good tradeoff between complexity, latency and error performance.

Proceedings ArticleDOI
11 May 2003
TL;DR: This novel scheme overcomes an existing difficulty in the IS practice that requires codebook information and shows large IS gains for single parity-check codes and short-length block codes.
Abstract: We introduce an importance sampling scheme for linear block codes with message-passing decoding. This novel scheme overcomes an existing difficulty in the IS practice that requires codebook information. Experiments show large IS gains for single parity-check codes and short-length block codes. For medium-length block codes, IS gains in the order of 10/sup 3/ and higher are observed at high signal-to-noise ratio.
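
As a generic illustration of the mean-shifting idea behind such IS schemes (a toy hard-decision repetition decoder stands in for message passing here, and the bias value is an arbitrary choice, not the paper's method): noise is drawn from a shifted Gaussian and each observed error event is weighted by the likelihood ratio of the true and biased densities.

    import numpy as np

    rng = np.random.default_rng(2)
    SIGMA, N = 1.0, 7                          # AWGN standard deviation, repetition length

    def is_error(noise):
        # toy decoder: +1 sent N times over AWGN; error if the average received value is negative
        return np.mean(1.0 + noise) < 0.0

    def mc_estimate(trials):
        noise = rng.normal(0.0, SIGMA, size=(trials, N))
        return np.mean([is_error(n) for n in noise])

    def is_estimate(trials, bias=-1.0):
        # sample noise from N(bias, SIGMA^2) and reweight by the density ratio f_true / f_biased
        noise = rng.normal(bias, SIGMA, size=(trials, N))
        log_w = (-noise ** 2 + (noise - bias) ** 2).sum(axis=1) / (2 * SIGMA ** 2)
        w = np.exp(log_w)
        return np.mean([w[i] * is_error(noise[i]) for i in range(trials)])

    # both estimates approach Phi(-sqrt(7)), about 4e-3; at lower error rates the
    # biased estimator needs far fewer trials than plain Monte Carlo
    print(mc_estimate(200_000), is_estimate(20_000))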

Proceedings ArticleDOI
07 Jul 2003
TL;DR: It is proved that deterministic schemes, which guarantee correct recovery of the message, provide no savings, so essentially the entire message has to be sent as side information, whereas randomized schemes need only side information of length logarithmic in the message length.
Abstract: Under list decoding of error-correcting codes, the decoding algorithm is allowed to output a small list of codewords that are close to the noisy received word. This relaxation permits recovery even under very high noise thresholds. We consider one possible scenario that would permit disambiguating between the elements of the list, namely where the sender of the message provides some hopefully small amount of side information about the transmitted message on a separate auxiliary channel that is noise-free. This setting becomes meaningful and useful when the amount of side information that needs to be communicated is much smaller than the length of the message. We study what kind of side information is necessary and sufficient in the above context. The short, conceptual answer is that the side information must be randomized and the message recovery is with a small failure probability. Specifically, we prove that deterministic schemes, which guarantee correct recovery of the message, provide no savings and essentially the entire message has to be sent as side information. However, there exist randomized schemes which only need side information of length logarithmic in the message length. In fact, in the limit of repeated communication of several messages, the amortized amount of side information needed per message can be a constant independent of the message length or the failure probability. Concretely, we can correct up to a fraction (1/2 - γ) of errors for binary codes using only 2 log(1/γ) + O(1) amortized bits of side information per message, and this is in fact the best possible (up to additive constant terms).
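
A minimal Python sketch of the flavour of randomized side information involved (my own illustration, not the paper's exact scheme): the sender transmits a random evaluation point and a polynomial hash of the message, roughly 2 log P bits in total, and the receiver keeps the list element whose hash matches.

    import random

    P = 2_147_483_647                      # a prime much larger than the message length

    def poly_hash(bits, alpha):
        # evaluate the message, read as polynomial coefficients, at alpha over GF(P)
        value = 0
        for b in reversed(bits):           # Horner's rule
            value = (value * alpha + b) % P
        return value

    def side_info(message_bits):
        # the sender's ~2 log P bits of side information: a random point and the hash value
        alpha = random.randrange(1, P)
        return alpha, poly_hash(message_bits, alpha)

    def disambiguate(candidate_list, info):
        alpha, value = info
        matches = [m for m in candidate_list if poly_hash(m, alpha) == value]
        return matches[0] if len(matches) == 1 else None

    sent = [1, 0, 1, 1, 0, 0, 1, 0]
    decoy = [1, 1, 1, 0, 0, 0, 1, 0]
    # two distinct messages collide for at most len(sent) - 1 values of alpha,
    # so the failure probability is at most (len(sent) - 1) / (P - 1)
    print(disambiguate([decoy, sent], side_info(sent)))   # -> the sent message (w.h.p.)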