Topic

List decoding

About: List decoding is a research topic. Over the lifetime, 7251 publications have been published within this topic receiving 151182 citations.


Papers
Journal ArticleDOI
TL;DR: An algorithm is given that decodes the Leech lattice with not much more than twice the complexity of soft-decision decoding of the Golay code, and is readily generalized to lattices that can be expressed in terms of binary code formulas.
Abstract: An algorithm is given that decodes the Leech lattice with not much more than twice the complexity of soft-decision decoding of the Golay code. The algorithm has the same effective minimum distance as maximum-likelihood decoding and increases the effective error coefficient by less than a factor of two. The algorithm can be recognized as a member of the class of multistage algorithms that are applicable to hierarchical constructions. It is readily generalized to lattices that can be expressed in terms of binary code formulas, and in particular to construction B lattices.
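The soft-decision decoding that this multistage algorithm approximates can be illustrated on a much smaller code. The sketch below (not from the paper; the (7,4) Hamming code and BPSK mapping are stand-ins for the Golay/Leech setting) does exhaustive maximum-likelihood soft-decision decoding by correlation:

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code in one common systematic
# form; a stand-in for the much larger Golay/Leech constructions.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def codewords():
    """Enumerate all 2^4 codewords of the code."""
    for msg in product([0, 1], repeat=4):
        yield tuple(sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G))

def soft_ml_decode(r):
    """Exhaustive soft-decision ML decoding: pick the codeword whose
    BPSK image (0 -> +1, 1 -> -1) has maximal correlation with r."""
    def corr(c):
        return sum(ri * (1 - 2 * ci) for ri, ci in zip(r, c))
    return max(codewords(), key=corr)

# Noisy BPSK observation of the all-zero codeword; position 2 is
# unreliable but the soft decoder still recovers the codeword.
received = [0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 1.2]
print(soft_ml_decode(received))
```

Exhaustive search is exponential in the code dimension; the point of the paper's multistage approach is to get near-ML performance on the Leech lattice without this brute force.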

68 citations

Proceedings ArticleDOI
18 Nov 1996
TL;DR: A first approach to the iterative decoding of Reed-Solomon (RS) product codes, "turbo RS codes", which is very attractive for data storage applications where RS product codes are often used.
Abstract: Thanks to recent progress in the iterative decoding of concatenated codes, several new fields of investigation have appeared. In this paper, we present a first approach to the iterative decoding of Reed-Solomon (RS) product codes: "turbo RS codes". Two methods to construct RS product codes are given. The iterative decoding of the RS product codes is based on the soft decoding and the soft decision of the component codes. The performance of RS turbo codes has been evaluated on the Gaussian and Rayleigh channels using Monte Carlo simulation. Coding gains of up to 5.5 dB at a BER (bit error rate) of 10^-5 have been obtained on the Gaussian channel. This new coding scheme is very attractive for data storage applications where RS product codes are often used.
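The Monte Carlo evaluation methodology mentioned in the abstract can be sketched in a few lines. This is not the paper's turbo decoder; it is a minimal illustrative BER simulation for uncoded BPSK over an AWGN (Gaussian) channel, the baseline such coding gains are measured against:

```python
import math
import random

def ber_uncoded_bpsk(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo BER estimate for uncoded BPSK over an AWGN channel.
    A coded scheme's gain is the Eb/N0 saving at a target BER vs. this."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))    # noise std dev for Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        tx = 1 - 2 * bit                 # BPSK: 0 -> +1, 1 -> -1
        rx = tx + rng.gauss(0, sigma)
        errors += (rx < 0) != (bit == 1)  # hard decision vs. sent bit
    return errors / n_bits

print(ber_uncoded_bpsk(4.0))
```

Measuring down to a BER of 10^-5, as in the paper, requires on the order of 10^7 or more simulated bits per Eb/N0 point so that enough error events are observed.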

68 citations

Proceedings ArticleDOI
31 May 2009
TL;DR: A novel understanding of LP decoding is obtained, which allows us to establish a 0.05 fraction of correctable errors for rate-½ codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously noted correctable bit error rate for LP decoding.
Abstract: Linear programming decoding for low-density parity check codes (and related domains such as compressed sensing) has received increased attention over recent years because of its practical performance, coming close to that of iterative decoding algorithms, and its amenability to finite-blocklength analysis. Several works, starting with that of Feldman et al., showed how to analyze LP decoding using properties of expander graphs. This line of analysis works only for low error rates, about a couple of orders of magnitude lower than the empirically observed performance. It is possible to do better for the case of random noise, as shown by Daskalakis et al. and Koetter and Vontobel. Building on work of Koetter and Vontobel, we obtain a novel understanding of LP decoding, which allows us to establish a 0.05-fraction of correctable errors for rate-1/2 codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously noted correctable bit error rate for LP decoding. Unlike other techniques, our analysis works directly with the primal linear program and exploits an explicit connection between LP decoding and message passing algorithms. An interesting byproduct of our method is a notion of a "locally optimal" solution that we show to always be globally optimal (i.e., it is the nearest codeword). Such a solution can in fact be found in near-linear time by a "re-weighted" version of the min-sum algorithm, obviating the need for linear programming. Our analysis implies, in particular, that this re-weighted version of the min-sum decoder corrects up to a 0.05-fraction of errors.
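The min-sum message passing that the abstract relates to LP decoding can be sketched on a toy parity-check matrix. This is the plain (un-reweighted) min-sum algorithm, not the paper's variant, and the Hamming-code `H` and LLR values are illustrative choices, assuming every check touches at least two variables:

```python
def min_sum_decode(H, llr, iters=10):
    """Plain min-sum message passing on parity-check matrix H (list of
    0/1 rows) with channel LLRs (positive = bit 0 more likely)."""
    m, n = len(H), len(llr)
    edges = [(i, j) for i in range(m) for j in range(n) if H[i][j]]
    c2v = {e: 0.0 for e in edges}           # check-to-variable messages
    for _ in range(iters):
        # variable-to-check: channel LLR plus the other checks' messages
        v2c = {(i, j): llr[j] + sum(c2v[(k, j)] for k in range(m)
                                    if H[k][j] and k != i)
               for (i, j) in edges}
        # check-to-variable: product of signs times minimum magnitude
        for (i, j) in edges:
            others = [v2c[(i, k)] for k in range(n) if H[i][k] and k != j]
            sign = 1
            for x in others:
                sign *= 1 if x >= 0 else -1
            c2v[(i, j)] = sign * min(abs(x) for x in others)
        # tentative hard decision; stop once all parity checks hold
        total = [llr[j] + sum(c2v[(i, j)] for i in range(m) if H[i][j])
                 for j in range(n)]
        hard = [0 if t >= 0 else 1 for t in total]
        if all(sum(H[i][j] * hard[j] for j in range(n)) % 2 == 0
               for i in range(m)):
            break
    return hard

# Parity-check matrix of the (7,4) Hamming code; bit 2 received
# unreliably (negative LLR) but the decoder recovers the codeword.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
llr = [2.0] * 7
llr[2] = -0.5
print(min_sum_decode(H, llr))
```

The paper's "re-weighted" min-sum scales these messages so that a locally optimal fixed point is provably the nearest codeword; this plain version has no such guarantee.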

68 citations

Proceedings ArticleDOI
21 May 2006
TL;DR: In this paper, an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1-R-e) of errors is presented.
Abstract: For every 0 < R < 1 and e > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1-R-e) of errors. These codes achieve the "capacity" for decoding from adversarial errors, i.e., achieve the optimal trade-off between rate and error-correction radius. At least theoretically, this meets one of the central challenges in coding theory. Prior to this work, explicit codes achieving capacity were not known for any rate R. In fact, our codes are the first to beat the error-correction radius of 1-√R, that was achieved for Reed-Solomon (RS) codes in [9], for all rates R. (For rates R
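The size of the improvement over the Reed-Solomon radius is easy to tabulate. A quick numeric check (the value of the slack e is an arbitrary choice for illustration) comparing the capacity trade-off 1-R-e against the RS list-decoding radius 1-√R:

```python
import math

# Fraction of adversarial errors correctable by list decoding at rate R:
# capacity-achieving codes reach 1 - R - e, while Reed-Solomon list
# decoding reaches 1 - sqrt(R).
eps = 0.01  # the slack "e" from the abstract; chosen for illustration
for R in (0.1, 0.25, 0.5, 0.75):
    capacity = 1 - R - eps
    rs_radius = 1 - math.sqrt(R)
    print(f"R={R:.2f}  capacity-achieving={capacity:.3f}  RS={rs_radius:.3f}")
```

Since √R > R on (0, 1), the capacity curve 1-R-e dominates 1-√R for every rate once e < √R - R, which is why beating the RS radius at all rates is the significant step.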

68 citations

Proceedings ArticleDOI
16 Nov 2002
TL;DR: This paper gives another fast algorithm for the soft decoding of Reed-Solomon codes, different from the procedure proposed by Feng, which works in time (w/r)^O(1) n log^2 n, where r is the rate of the code and w is the maximal weight assigned to a vertical line.
Abstract: We generalize the classical Knuth-Schonhage algorithm computing the GCD of two polynomials to solving arbitrary linear Diophantine systems over polynomials in time quasi-linear in the maximal degree. As an application, we consider the following weighted curve fitting problem: given a set of points in the plane, find an algebraic curve (satisfying certain degree conditions) that goes through each point the prescribed number of times. The main motivation for this problem comes from coding theory, namely it is ultimately related to the list decoding of Reed-Solomon codes. We present a new fast algorithm for the weighted curve fitting problem, based on the explicit construction of a Groebner basis. This gives another fast algorithm for soft-decoding of Reed-Solomon codes different from the procedure proposed by Feng (1999), which works in time (w/r)^O(1) n log^2 n log log n, where r is the rate of the code, and w is the maximal weight assigned to a vertical line.

67 citations


Network Information
Related Topics (5)
Base station: 85.8K papers, 1M citations (89% related)
Fading: 55.4K papers, 1M citations (89% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Network packet: 159.7K papers, 2.2M citations (87% related)
Wireless: 133.4K papers, 1.9M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  84
2022  153
2021  79
2020  78
2019  82
2018  94