Topic
List decoding
About: List decoding is a research topic. Over its lifetime, 7,251 publications have been published within this topic, receiving 151,182 citations.
Papers published on a yearly basis
Papers
TL;DR: This correspondence shows that q-ary RM codes are subfield subcodes of RS codes over F_{q^m} and presents a list-decoding algorithm, applicable to codes of any rate, that achieves an error-correction bound of n(1 - √((n-d)/n)).
Abstract: The q-ary Reed-Muller (RM) codes RM_q(u,m) of length n = q^m are a generalization of Reed-Solomon (RS) codes; they use polynomials in m variables to encode messages through functional encoding. Using an idea that reduces the multivariate case to the univariate case, randomized list-decoding algorithms for RM codes were given in earlier work. The algorithm of Sudan et al. (1999) improves on one of these earlier algorithms; it is applicable to codes RM_q(u,m) with u
96 citations
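The functional (evaluation) encoding described in the abstract above can be illustrated with a short sketch. This is a toy under stated assumptions (prime q, message given as a coefficient dictionary), not code from the paper:

```python
from itertools import product

# Minimal sketch of RM_q(u, m) functional encoding: a message is a polynomial
# of total degree <= u in m variables over F_q (q prime here); the codeword is
# its evaluation at all q^m points, giving length n = q^m.

def rm_encode(coeffs, q, m, u):
    """coeffs maps exponent tuples (e1, ..., em), with sum <= u, to F_q coefficients."""
    assert all(sum(e) <= u for e in coeffs)
    codeword = []
    for point in product(range(q), repeat=m):  # every point of F_q^m
        val = 0
        for exps, c in coeffs.items():
            term = c
            for x, e in zip(point, exps):
                term = (term * pow(x, e, q)) % q
            val = (val + term) % q
        codeword.append(val)
    return codeword

# Example: f(x1, x2) = 1 + 2*x1 + x1*x2 over F_3 (u = 2, m = 2), length 9.
cw = rm_encode({(0, 0): 1, (1, 0): 2, (1, 1): 1}, q=3, m=2, u=2)
```

Setting m = 1 and u < q recovers the Reed-Solomon case, which is the sense in which RM codes generalize RS codes.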
19 Feb 2003
TL;DR: In this article, a novel solution is presented that eliminates or substantially reduces the oscillations that are often encountered with the various iterative decoding approaches employed to decode LDPC coded signals.
Abstract: Stopping or reducing oscillations in Low-Density Parity-Check (LDPC) codes. A novel solution is presented that eliminates or substantially reduces the oscillations often encountered with the various iterative decoding approaches employed to decode LDPC coded signals. This approach may be implemented in any one of three ways. The first combines the Sum-Product (SP) soft-decision decoding approach with the Bit-Flip (BF) hard-decision decoding approach in a manner that may adaptively select the number of iterations performed during the SP soft decoding process. The other two ways modify how the SP soft-decision and BF hard-decision decoding approaches are implemented: one changes the initialization of the SP soft decoding process, and the other changes the updating procedure employed during SP soft decoding.
96 citations
14 Jun 2013
TL;DR: In this article, early decoding termination detection for QC-LDPC decoders is discussed, where the controller terminates decoding of the data unit in response to determining that the decoded data units from more than one layer decoding operation satisfy a parity-check equation.
Abstract: Embodiments of decoders having early decoding termination detection are disclosed. The decoders can provide for flexible and scalable decoding and early termination detection, particularly when quasi-cyclic low-density parity-check code (QC-LDPC) decoding is used. In one embodiment, a controller iteratively decodes a data unit using a coding matrix comprising a plurality of layers. The controller terminates decoding the data unit in response to determining that the decoded data units from more than one layer decoding operation satisfy a parity check equation and that the decoded data units from more than one layer decoding operation are the same. Advantageously, the termination of decoding of the data unit can reduce a number of iterations performed to decode the data unit.
96 citations
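The termination rule this abstract describes, stop once hard decisions from successive layer operations both agree and satisfy the parity checks, can be sketched as follows. `layer_decode` and the majority-vote example are hypothetical stand-ins, not a real QC-LDPC layer update:

```python
import numpy as np

def satisfies_checks(H, bits):
    """True if the syndrome H·bits is zero over GF(2)."""
    return not np.any(H.dot(bits) % 2)

def decode_with_early_termination(H, noisy_bits, layer_decode, max_iters=50):
    """Iterate a layer decoding operation; terminate early when two
    consecutive hard-decision outputs match and pass all parity checks."""
    bits = noisy_bits.copy()
    prev = None
    for it in range(max_iters):
        bits = layer_decode(bits)
        if prev is not None and np.array_equal(bits, prev) and satisfies_checks(H, bits):
            return bits, it + 1  # early termination: fewer iterations spent
        prev = bits.copy()
    return bits, max_iters
```

As the abstract notes, the payoff is simply that fewer iterations are performed per data unit once the agreement-plus-parity condition is met.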
TL;DR: Computer simulations assuming a turbo-coded W-CDMA mobile radio reverse link under frequency-selective Rayleigh fading demonstrate that when the maximum number of iterations is 8, the average number of decoding iterations can be reduced to 1/4 at BER = 10^-6.
Abstract: The average number of decoding iterations in a turbo decoder is reduced by incorporating CRC error detection into the decoding iteration process. Turbo decoding iterations are stopped when CRC decoding determines that there is no error in the decoded data sequence. Computer simulations assuming a turbo-coded W-CDMA mobile radio reverse link under frequency-selective Rayleigh fading demonstrate that when the maximum number of iterations is 8, the average number of decoding iterations can be reduced to 1/4 at BER = 10^-6.
96 citations
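The CRC-gated stopping criterion above reduces to a simple loop. In this sketch `turbo_iteration` is a placeholder for one full turbo decoding pass, and CRC-32 stands in for whichever CRC the actual system appends; neither is from the paper:

```python
import zlib

def crc32(data: bytes) -> int:
    return zlib.crc32(data) & 0xFFFFFFFF

def decode_until_crc_ok(received, turbo_iteration, expected_crc, max_iters=8):
    """Run turbo decoding iterations; stop as soon as the CRC over the
    current decoded estimate matches, i.e. no error is detected."""
    estimate = received
    for it in range(1, max_iters + 1):
        estimate = turbo_iteration(estimate)
        if crc32(estimate) == expected_crc:
            return estimate, it  # early stop: CRC detects no error
    return estimate, max_iters
```

The average iteration count then depends on how quickly the channel conditions let the CRC pass, which is what the cited simulations measure.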
TL;DR: Multiplicity codes, introduced in this paper, are based on evaluating multivariate polynomials and their derivatives; they inherit the local decodability of traditional multivariate polynomial codes while achieving better tradeoffs and flexibility in rate and minimum distance.
Abstract: Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k^ε) were known only for codes of rate ε^Ω(1/ε), where k is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial locality had been achieved.

In this article, we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 - α, and is locally decodable from a constant fraction of errors using O(k^ε) queries and time.

These codes, which we call multiplicity codes, are based on evaluating multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial codes; they inherit the local decodability of these codes, and at the same time achieve better tradeoffs and flexibility in the rate and minimum distance.
96 citations
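The encoding idea named in this abstract can be illustrated in its simplest univariate, multiplicity-2 form. This toy sketch (prime field, formal derivative) is an illustration of the idea, not the authors' multivariate construction:

```python
# A univariate multiplicity code of multiplicity 2 over F_p: a low-degree
# polynomial f is encoded as the symbol (f(a), f'(a)) at each a in F_p, so
# every codeword position carries both the evaluation and the derivative.

def mult_encode(coeffs, p):
    """coeffs[i] is the coefficient of x^i in f, over F_p (p prime)."""
    deriv = [(i * c) % p for i, c in enumerate(coeffs)][1:]  # formal derivative

    def evaluate(cs, a):
        acc = 0
        for c in reversed(cs):  # Horner's rule mod p
            acc = (acc * a + c) % p
        return acc

    return [(evaluate(coeffs, a), evaluate(deriv, a)) for a in range(p)]

# f(x) = 1 + 2x + x^2 over F_5; f'(x) = 2 + 2x, so position a holds (f(a), f'(a)).
cw = mult_encode([1, 2, 1], p=5)
```

Packing the derivative into each symbol is what lets multiplicity codes carry higher-degree (hence higher-rate) polynomials than plain evaluation codes of the same length while keeping local decoding possible.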