Topic

List decoding

About: List decoding is a research topic. Over its lifetime, 7,251 publications have been published within this topic, receiving 151,182 citations.


Papers
Journal ArticleDOI
TL;DR: This paper compares the speed and output quality of a traditional stack-based decoding algorithm with two new decoders: a fast but non-optimal greedy decoder and a slow but optimal decoder that treats decoding as an integer-programming optimization problem.

51 citations
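
For intuition about the stack-based search in this comparison, here is a minimal sketch of a generic stack (beam) decoder in Python. It is an illustrative sketch only: the hypothesis representation, the expand and is_complete callbacks, and the beam width are assumptions, not details from the paper.

import heapq

def stack_decode(start, expand, is_complete, beam_width=10):
    # Generic stack (beam) decoding sketch: keep the beam_width best
    # partial hypotheses, expand each one, and return the best complete
    # hypothesis found. Scores are log-probabilities (higher is better).
    stack = [(0.0, start)]                    # (score, hypothesis) pairs
    best_score, best_hyp = float("-inf"), None
    while stack:
        next_stack = []
        for score, hyp in stack:
            if is_complete(hyp):
                if score > best_score:
                    best_score, best_hyp = score, hyp
                continue
            for step_score, nxt in expand(hyp):
                next_stack.append((score + step_score, nxt))
        # Prune: keep only the beam_width highest-scoring hypotheses.
        stack = heapq.nlargest(beam_width, next_stack, key=lambda t: t[0])
    return best_hyp, best_score

The fast-but-non-optimal greedy decoder in the comparison corresponds roughly to the degenerate case of keeping a single hypothesis (beam_width=1), while the integer-programming decoder searches the hypothesis space exactly.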

01 Jan 2007
TL;DR: The first explicit error-correcting codes are presented, along with efficient list-decoding algorithms that can correct a number of errors approaching the information-theoretic limit, and an existing algorithm for a specific code family called Reed-Solomon codes is proved optimal for "list recovery," a generalization of list decoding.
Abstract: Error correcting codes systematically introduce redundancy into data so that the original information can be recovered when parts of the redundant data are corrupted. Error correcting codes are used ubiquitously in communication and data storage. The process of recovering the original information from corrupted data is called decoding. Given the limitations imposed by the amount of redundancy used by the error correcting code, an ideal decoder must efficiently recover from as many errors as is information-theoretically possible. In this thesis, we consider two relaxations of the usual decoding procedure: list decoding and property testing. A list decoding algorithm is allowed to output a small list of possibilities for the original information that could result in the given corrupted data. This relaxation allows for efficient correction of significantly more errors than is possible through the usual decoding procedure, which is constrained to output only the transmitted information. (1) We present the first explicit error correcting codes, along with efficient list-decoding algorithms, that can correct a number of errors approaching the information-theoretic limit. This meets one of the central challenges in the theory of error correcting codes. (2) We also present explicit codes defined over smaller symbols that, using efficient list-decoding algorithms, can correct significantly more errors than existing codes while using the same amount of redundancy. (3) We prove that an existing algorithm for a specific code family called Reed-Solomon codes is optimal for "list recovery," a generalization of list decoding. Property testing of error correcting codes entails "spot checking" corrupted data to quickly determine whether the data is heavily corrupted or has few errors. Such spot checkers are closely related to the beautiful theory of Probabilistically Checkable Proofs (PCPs). (1) We present spot checkers that access only a nearly optimal number of data symbols for an important family of codes called Reed-Muller codes. Our results are the first for certain classes of such codes. (2) We define a generalization of the "usual" testers for error correcting codes by endowing them with the very natural property of "tolerance," which allows slightly corrupted data to pass the test.

51 citations
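
As a concrete illustration of the list-decoding relaxation described in this abstract, here is a minimal brute-force sketch in Python: it returns every codeword within a given Hamming radius of the received word. The exhaustive scan over the codebook is purely illustrative; the point of the thesis is to achieve this efficiently for explicit codes such as Reed-Solomon codes, without enumerating the code.

def hamming(u, v):
    # Number of positions in which two equal-length words differ.
    return sum(a != b for a, b in zip(u, v))

def list_decode(received, codebook, radius):
    # Output the list of all codewords within the decoding radius.
    return [c for c in codebook if hamming(received, c) <= radius]

# Toy example: the length-4 binary repetition code.
codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(list_decode((0, 0, 1, 1), codebook, radius=2))
# Both codewords are returned: beyond half the minimum distance,
# unique decoding is impossible, but a short list can still be output.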

Proceedings ArticleDOI
01 Jan 2016
TL;DR: This paper starts from non-binary LDPC codes to analyze the principles and application of the parity-check matrix and the Tanner graph, and puts forward a Min-Max decoding algorithm for non-binary LDPC codes, which simplifies the computation of non-binary LDPC decoding and can effectively improve error correction in communication.
Abstract: With the increasing requirements placed on fiber-optic communication systems, transmission volume has grown. To guarantee fast and reliable fiber-optic communication, LDPC codes are used for error correction; among them, non-binary LDPC codes are stronger at correcting both burst errors and random errors and match well with high-order modulation, so they are well suited to ultra-high-speed, long-haul optical transmission systems and have become a key research focus. This paper starts from non-binary LDPC codes to analyze the principles and application of the parity-check matrix and the Tanner graph, describes the encoding and decoding of LDPC codes, and puts forward a Min-Max decoding algorithm for non-binary LDPC codes, which simplifies the computation of non-binary LDPC decoding and can effectively improve error correction in communication.

Introduction. As a widely studied soft-decision technique, LDPC codes have excellent error-correcting ability and a low error floor, and they can be decoded in parallel. Research on binary LDPC codes is relatively mature, covering binary encoding and decoding algorithms, capacity analysis methods, and the search for optimal degree distributions of irregular codes over the Gaussian white-noise channel and the Rayleigh fading channel. Compared with binary LDPC codes, however, non-binary LDPC codes correct burst and random errors more strongly and fit high-order modulation better, so they are more suitable for ultra-high-speed, long-haul optical transmission systems.

Overview of non-binary LDPC codes. LDPC codes were put forward by Gallager in 1960. They are linear block codes with good performance in data transmission and data storage, and an LDPC code is uniquely determined by its generator matrix G or its parity-check matrix H, so the code can be defined through H. H has four properties: each row contains p ones; each column contains r ones; any two rows share at most one position in which both contain a 1; and p and r are small compared with the code length and the number of rows of H. From these properties, the density of ones in H is very small, which is why H is called a low-density parity-check matrix, as indicated in Figure 1.

[Figure 1: Low-density parity-check matrix of an LDPC code]

Besides the check-matrix representation, an LDPC code can also be described by a bipartite (Tanner) graph model.

51 citations
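
To make the parity-check-matrix and Tanner-graph description above concrete, here is a minimal binary sketch in Python/NumPy. Note the hedge: the paper's codes are non-binary (over GF(q)), and the toy matrix H below is an assumption for illustration, not taken from the paper. Rows of H are check nodes, columns are variable nodes, and each 1 is an edge of the Tanner graph.

import numpy as np

# Toy low-density parity-check matrix: 3 checks on 6 variables.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, word):
    # A word is a codeword exactly when every check is satisfied,
    # i.e. H @ word = 0 over GF(2).
    return H.dot(word) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])
print(syndrome(H, codeword))   # [0 0 0]: all checks satisfied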

Journal ArticleDOI
TL;DR: It is shown how nonsystematic Reed-Solomon (RS) codes encoded by means of the Chinese remainder theorem can be decoded using the Berlekamp algorithm.
Abstract: It is shown how nonsystematic Reed-Solomon (RS) codes encoded by means of the Chinese remainder theorem can be decoded using the Berlekamp algorithm. The Chien search and the calculation of error values are not needed; they are replaced by a polynomial division and an additional calculation in determining the syndrome. It is shown that for certain cases of low-rate RS codes, the total decoding computation may be less than with the usual method used for cyclic codes. Encoding and decoding for shorter-length codes is also presented.

50 citations
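
The Chinese-remainder view of RS encoding can be sketched very compactly: the residue of the message polynomial f(x) modulo (x - a) is just f(a), so CRT encoding with moduli (x - a_i) reduces to evaluating f at the points a_i. A minimal Python sketch over the prime field GF(7) follows; the field size and evaluation points are illustrative choices, not parameters from the paper.

P = 7   # illustrative prime field GF(7)

def rs_encode(message, points, p=P):
    # message: coefficients of f, lowest degree first (degree < k).
    # Residue of f(x) mod (x - a) equals f(a); evaluate by Horner's rule.
    def evaluate(x):
        acc = 0
        for c in reversed(message):
            acc = (acc * x + c) % p
        return acc
    return [evaluate(a) for a in points]

# A [6, 3] nonsystematic RS code over GF(7).
print(rs_encode([2, 5, 1], points=[0, 1, 2, 3, 4, 5]))   # [2, 1, 2, 5, 3, 3]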

Proceedings ArticleDOI
16 Feb 2004
TL;DR: A check-node-centric reformulation of the decoding algorithm gives a huge memory reduction, which makes the serial approach possible; the routing congestion problem is avoided by using multiple independent sequential decoding machines.
Abstract: Low-density parity-check (LDPC) codes offer excellent error-correcting performance. However, current implementations are not capable of achieving the performance required by next-generation storage and telecom applications. Extrapolation of many of those designs is not possible because of routing congestion. This article proposes a new architecture based on a redefinition of a lesser-known LDPC decoding algorithm. As random LDPC codes are the most powerful, we abstain from making simplifying assumptions about the LDPC code which could ease the routing problem. We avoid the routing congestion problem by using multiple independent sequential decoding machines, each decoding separate received codewords. In this serial approach the required amount of memory must be multiplied by the large number of machines. Our key contribution is a check-node-centric reformulation of the algorithm which gives a huge memory reduction and thus makes the serial approach possible.

50 citations
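
To show what a check-node-centric decoder computes at each check node, here is the standard min-sum check node update in Python. This is a textbook step sketched for illustration; it is not the paper's exact reformulation, whose point is to recompute such messages on the fly rather than store every variable-to-check message.

def check_node_update(incoming):
    # Min-sum update: the message sent back over edge i has magnitude
    # equal to the smallest |LLR| among the *other* incoming messages,
    # and sign equal to the product of their signs.
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = -1 if sum(m < 0 for m in others) % 2 else 1
        out.append(sign * min(abs(m) for m in others))
    return out

print(check_node_update([2.5, -0.7, 1.2]))   # [-0.7, 1.2, -0.7]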


Network Information
Related Topics (5)
Base station
85.8K papers, 1M citations
89% related
Fading
55.4K papers, 1M citations
89% related
Wireless network
122.5K papers, 2.1M citations
87% related
Network packet
159.7K papers, 2.2M citations
87% related
Wireless
133.4K papers, 1.9M citations
86% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	84
2022	153
2021	79
2020	78
2019	82
2018	94