
List decoding

About: List decoding is a research topic. Over its lifetime, 7,251 publications have appeared on this topic, receiving 151,182 citations.


Papers
Book Chapter
04 Dec 2011
TL;DR: A new algorithm for decoding linear codes, inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset-sum algorithms, is presented; it admits a rigorous complexity analysis for random linear codes and lowers the time complexity.
Abstract: Decoding random linear codes is a fundamental problem in complexity theory and lies at the heart of almost all code-based cryptography. The best attacks on the most prominent code-based cryptosystems such as McEliece directly use decoding algorithms for linear codes. The asymptotically best decoding algorithm for random linear codes of length n was for a long time Stern's variant of information-set decoding running in time $\tilde{\mathcal{O}}\left(2^{0.05563n}\right)$ . Recently, Bernstein, Lange and Peters proposed a new technique called Ball-collision decoding which offers a speed-up over Stern's algorithm by improving the running time to $\tilde{\mathcal{O}}\left(2^{0.05558n}\right)$ . In this paper, we present a new algorithm for decoding linear codes that is inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset sum algorithms. Our decoding algorithm offers a rigorous complexity analysis for random linear codes and brings the time complexity down to $\tilde{\mathcal{O}}\left(2^{0.05363n}\right)$ .
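For context on the information-set decoding family this paper improves upon, here is a minimal sketch of Prange's original algorithm, the simplest member of that family (not the paper's representation-based algorithm). The function names and the toy Hamming-code setup are illustrative assumptions, not from the paper.

```python
import random
import numpy as np

def solve_gf2(A, b):
    """Solve the square system A x = b over GF(2) by Gaussian
    elimination; return None if A is singular."""
    A = A.copy() % 2
    b = b.copy() % 2
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]] = A[[pivot, col]]
        b[[col, pivot]] = b[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b

def prange_isd(H, s, t, max_iters=2000, rng=random):
    """Prange's information-set decoding over GF(2): find an error
    vector e with H @ e = s (mod 2) and Hamming weight <= t, by
    repeatedly guessing that all errors avoid a random column set."""
    r, n = H.shape
    for _ in range(max_iters):
        cols = rng.sample(range(n), r)
        e_sub = solve_gf2(H[:, cols] % 2, s)
        if e_sub is None:
            continue  # chosen submatrix was singular; retry
        e = np.zeros(n, dtype=int)
        e[cols] = e_sub
        if e.sum() <= t:
            return e
    return None
```

Stern's algorithm, ball-collision decoding, and the paper's algorithm all refine this same skeleton by allowing a few errors inside the guessed set and finding them with collision techniques.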

123 citations

Journal Article
TL;DR: Simulation results show that both proposed Viterbi decoding-based suboptimal algorithms effectively achieve practically optimum performance for tailbiting codes of any length.
Abstract: The paper presents two efficient Viterbi decoding-based suboptimal algorithms for tailbiting codes. The first algorithm, the wrap-around Viterbi algorithm (WAVA), falls into the circular decoding category. It processes the tailbiting trellis iteratively, explores the initial state of the transmitted sequence through continuous Viterbi decoding, and improves the decoding decision with iterations. A sufficient condition for the decision to be optimal is derived. For long tailbiting codes, the WAVA gives essentially optimal performance with about one round of Viterbi trial. For short- and medium-length tailbiting codes, simulations show that the WAVA achieves closer-to-optimum performance with fewer decoding stages compared with the other suboptimal circular decoding algorithms. The second algorithm, the bidirectional Viterbi algorithm (BVA), employs two wrap-around Viterbi decoders to process the tailbiting trellis from both ends in opposite directions. The surviving paths from the two decoders are combined to form composite paths once the decoders meet in the middle of the trellis. The composite paths at each stage thereafter serve as candidates for decision update. The bidirectional process improves the error performance and shortens the decoding latency of unidirectional decoding with additional storage and computation requirements. Simulation results show that both proposed algorithms effectively achieve practically optimum performance for tailbiting codes of any length.
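The WAVA and BVA described above approximate optimal tailbiting decoding, which requires the start and end trellis states to coincide. As a minimal illustration of that constraint (not the paper's algorithms), here is brute-force optimal decoding of a hypothetical two-state rate-1/2, memory-1 code: one fixed-state Viterbi pass per candidate state.

```python
def encode_tb(bits):
    """Tailbiting rate-1/2, memory-1 encoder: emit (u, u^prev), with
    the initial state set to the last input bit (tailbiting)."""
    s = bits[-1]
    out = []
    for u in bits:
        out += [u, u ^ s]
        s = u
    return out

def viterbi_fixed(received, state):
    """Hard-decision Viterbi on the 2-state trellis with a fixed start
    state; returns (Hamming metric, path) for paths ending in `state`."""
    n = len(received) // 2
    INF = float('inf')
    metric = {0: INF, 1: INF}
    metric[state] = 0
    paths = {state: []}
    for i in range(n):
        r = received[2 * i:2 * i + 2]
        new_metric = {0: INF, 1: INF}
        new_paths = {}
        for s in (0, 1):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                out = [u, u ^ s]
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[u]:       # next state equals u
                    new_metric[u] = m
                    new_paths[u] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return metric[state], paths.get(state)

def decode_tb(received):
    """Optimal tailbiting decoding: try every start state and keep the
    best path that returns to it (feasible here with only 2 states).
    WAVA avoids this exhaustive loop via iterated wrap-around passes."""
    return min((viterbi_fixed(received, s) for s in (0, 1)),
               key=lambda t: t[0])[1]
```

With 2^m states the exhaustive loop costs m extra factors of two per decode, which is exactly the overhead the circular and bidirectional algorithms in the paper are designed to avoid.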

121 citations

Proceedings Article
30 Sep 2010
TL;DR: It is proved that the projection of P in the original space is tighter than the fundamental polytope based on the parity check matrix, and that, on the binary erasure channel, the new LP decoder is equivalent to the belief propagation decoder operating on the sparse factor graph representation and hence achieves capacity.
Abstract: Polar codes are the first codes to provably achieve capacity on the symmetric binary-input discrete memoryless channel (B-DMC) with low encoding and decoding complexity. The parity check matrix of polar codes is high-density and we show that linear program (LP) decoding fails on the fundamental polytope of the parity check matrix. The recursive structure of the code permits a sparse factor graph representation. We define a new polytope based on the fundamental polytope of the sparse graph representation. This new polytope P is defined in a space of dimension O(N log N), where N is the block length. We prove that the projection of P in the original space is tighter than the fundamental polytope based on the parity check matrix. The LP decoder over P obtains the ML-certificate property. In the case of the binary erasure channel (BEC), the new LP decoder is equivalent to the belief propagation (BP) decoder operating on the sparse factor graph representation, and hence achieves capacity. Simulation results of SC (successive cancellation) decoding, LP decoding over tightened polytopes, and maximum likelihood (ML) decoding are provided. For channels other than the BEC, we discuss why LP decoding over P with a linear objective function is insufficient.
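The recursive structure mentioned above comes from the polar code's generator being a Kronecker power of Arıkan's 2x2 kernel; unrolling that recursion gives the butterfly factor graph with O(N log N) edges on which the paper's polytope is defined. A minimal sketch of the transform, in dense matrix form for clarity (bit-reversal permutation and frozen-bit selection omitted):

```python
import numpy as np

# Arikan's kernel; the code's generator is its n-fold Kronecker power.
F = np.array([[1, 0], [1, 1]])

def polar_transform(u):
    """Polar transform x = u * F^{(kron) n} over GF(2), len(u) = 2^n.
    A butterfly network evaluates the same product with O(N log N)
    XORs, which is the sparse representation the paper exploits."""
    n = len(u)
    assert n & (n - 1) == 0, "length must be a power of two"
    G = np.array([[1]])
    while G.shape[0] < n:
        G = np.kron(G, F)
    return (np.asarray(u) @ G) % 2
```

For example, `polar_transform([1, 0, 1, 1])` multiplies the input by the 4x4 Kronecker square of F.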

121 citations

Journal Article
TL;DR: A new class of random-error-correcting cyclic codes is defined, which has two very desirable features: the binary members of the class are nearly as powerful as the best-known codes in the range of interest, and they can be decoded with the simplest known decoding algorithm.
Abstract: Codes exist which are capable of correcting large numbers of random errors. Such codes are rarely used in practical data transmission systems, however, because the equipment necessary to realize their capabilities (that is, to actually correct the errors) is usually prohibitively complex and expensive. The problem of finding simply implemented decoding algorithms or, equivalently, codes which can be decoded simply with existing methods, is perhaps the outstanding unsolved problem in coding theory today. In this paper, a new class of random-error-correcting cyclic codes is defined. These codes have two very desirable features: the binary members of the class are nearly as powerful as the best-known codes in the range of interest, and they can be decoded with the simplest known decoding algorithm. Unfortunately, there are relatively few codes with useful parameters in this class, despite the fact that the class is infinite.

121 citations

Book
01 Jan 2007
TL;DR: This book introduces and motivates the problem of list decoding, and discusses the central algorithmic results of the subject, culminating with the recent results on achieving "list decoding capacity."
Abstract: Error-correcting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the original data even from a somewhat distorted version of the codeword. The central trade-off in coding theory is the one between the data rate (amount of non-redundant information per bit of codeword) and the error rate (the fraction of symbols that could be corrupted while still enabling data recovery). The traditional decoding algorithms did as badly at correcting any error pattern as they would do for the worst possible error pattern. This severely limited the maximum fraction of errors those algorithms could tolerate. In turn, this was the source of a big hiatus between the error-correction performance known for probabilistic noise models (pioneered by Shannon) and what was thought to be the limit for the more powerful, worst-case noise models (suggested by Hamming). In the last decade or so, there has been much algorithmic progress in coding theory that has bridged this gap (and in fact nearly eliminated it for codes over large alphabets). These developments rely on an error-recovery model called "list decoding," wherein for the pathological error patterns, the decoder is permitted to output a small list of candidates that will include the original message. This book introduces and motivates the problem of list decoding, and discusses the central algorithmic results of the subject, culminating with the recent results on achieving "list decoding capacity."
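The list-decoding model described above can be made concrete with a brute-force sketch: enumerate every codeword of a small code and return all of them within a chosen radius of the received word. The [7,4] Hamming code used here is an illustrative assumption, not an example from the book; the book's algorithmic results are precisely about doing this efficiently for large codes, where enumeration is infeasible.

```python
import itertools
import numpy as np

# Generator matrix of the [7,4] Hamming code (systematic form,
# minimum distance 3, so unique decoding handles only 1 error).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

def list_decode(received, radius):
    """Return every codeword within Hamming distance `radius` of
    `received`, by exhaustive search over all 2^4 messages."""
    out = []
    for msg in itertools.product((0, 1), repeat=4):
        cw = (np.array(msg) @ G) % 2
        if int((cw != received).sum()) <= radius:
            out.append(cw)
    return out
```

With the radius at the unique-decoding bound the list has at most one entry; pushing the radius beyond it can return several candidates, which is exactly the relaxation that lets list decoding tolerate more errors.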

120 citations


Network Information

Related Topics (5)
- Base station: 85.8K papers, 1M citations (89% related)
- Fading: 55.4K papers, 1M citations (89% related)
- Wireless network: 122.5K papers, 2.1M citations (87% related)
- Network packet: 159.7K papers, 2.2M citations (87% related)
- Wireless: 133.4K papers, 1.9M citations (86% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    84
2022    153
2021    79
2020    78
2019    82
2018    94