Topic

List decoding

About: List decoding is a research topic. Over the lifetime, 7,251 publications have been published within this topic, receiving 151,182 citations.


Papers
Journal ArticleDOI
TL;DR: Under MAP decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size.
Abstract: Motivated by the significant performance gains which polar codes experience under successive cancellation list decoding, their scaling exponent is studied as a function of the list size. In particular, the error probability is fixed, and the tradeoff between the block length and back-off from capacity is analyzed. A lower bound is provided on the error probability under MAP decoding with list size L for any binary-input memoryless output-symmetric channel and for any class of linear codes such that their minimum distance is unbounded as the block length grows large. Then, it is shown that under MAP decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size. In particular, this result applies to polar codes, since their minimum distance tends to infinity as the block length increases. A similar result is proved for genie-aided successive cancellation decoding when transmission takes place over the binary erasure channel, namely, the scaling exponent remains constant for any fixed number of helps from the genie. Note that since genie-aided successive cancellation decoding might be strictly worse than successive cancellation list decoding, the problem of establishing the scaling exponent of the latter remains open.
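As background context (not stated in the abstract above), the scaling exponent μ is conventionally defined through the block length N required to operate at rate R below capacity C with a fixed target error probability:

```latex
% Scaling-exponent definition at fixed error probability:
% block length vs. gap to capacity
N = \Theta\!\left((C - R)^{-\mu}\right)
\quad\Longleftrightarrow\quad
C - R = \Theta\!\left(N^{-1/\mu}\right)
```

A smaller μ means capacity is approached faster in the block length (μ = 2 is the optimal value achieved by random codes). The result summarized above is that a finite list size L improves only the hidden constants, not μ itself.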

44 citations

Journal ArticleDOI
TL;DR: The impact of the interleaver embedded in the encoder of a parallel concatenated (turbo) code is studied, and it is shown that an increased minimum Hamming distance can be obtained by using a structured interleaver.
Abstract: The impact of the interleaver, embedded in the encoder for a parallel concatenated code, called the turbo code, is studied. Known turbo codes use long random interleavers, whose purpose is to reduce the value of the error coefficients. It is shown that an increased minimum Hamming distance can be obtained by using a structured interleaver. For low bit-error rates (BERs), we show that the performance of turbo codes with a structured interleaver is better than that obtained with a random interleaver. Another important advantage of the structured interleaver is the short length required, which yields a short decoding delay and reduced decoding complexity (in terms of memory). We also consider the use of turbo codes as component codes in multilevel codes. Powerful coding structures that consist of two component codes are suggested. Computer simulations are performed in order to evaluate the reduction in coding gain due to suboptimal iterative decoding. From the results of these simulations we deduce that the degradation in performance (due to suboptimal decoding) is very small.
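As an illustration of the distinction drawn in the abstract (not the specific construction proposed in the paper), the sketch below contrasts a simple structured interleaver, here a row-column block interleaver, with a pseudorandom one. All function names are hypothetical.

```python
import random

def block_interleaver(rows, cols):
    """Structured (row-column) interleaver: write row-wise, read column-wise.

    The permutation is fully determined by (rows, cols), so it needs no stored
    table and can be kept short, which keeps decoding delay and memory small,
    as the abstract notes for structured interleavers.
    """
    return [r * cols + c for c in range(cols) for r in range(rows)]

def random_interleaver(length, seed=0):
    """Random interleaver: a pseudorandom permutation stored as a table."""
    perm = list(range(length))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(bits, perm):
    """Apply a permutation: output position i carries input bit perm[i]."""
    return [bits[p] for p in perm]

# Example: interleave 12 bits with a 3x4 block interleaver and a random one.
bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
print(interleave(bits, block_interleaver(3, 4)))
print(interleave(bits, random_interleaver(12)))
```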

44 citations

Proceedings ArticleDOI
04 Aug 2003
TL;DR: In this article, the turbo principle is applied to joint source-channel decoding for continuous-amplitude source samples, where the source samples are quantized and their indexes are appropriately mapped onto bitvectors.
Abstract: The turbo principle (iterative decoding between component decoders) is a general scheme, which we apply to joint source-channel decoding. As a realistic example (e.g., speech parameter coding), we discuss joint source-channel decoding for auto-correlated continuous-amplitude source samples. At the transmitter, the source samples are quantized and their indexes are appropriately mapped onto bitvectors. Afterwards, the bits are interleaved and channel-encoded; an AWGN channel is assumed for transmission. The auto-correlations of the source samples act as implicit outer channel codes that are serially concatenated with the inner explicit channel code. Thus, by applying the turbo principle, we can perform iterative decoding at the receiver. As an example, we show that, with a proper bit mapping for a 5-bit quantizer, iterative source-channel decoding saves up to 2 dB in channel SNR or 8 dB in source SNR for an auto-correlated Gaussian source.
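To make the transmitter-side steps concrete, here is a minimal sketch of a 5-bit uniform scalar quantizer and two common index-to-bit mappings (natural binary and Gray). The paper's own "appropriate" mapping is not specified in this abstract, so these are only illustrative choices, and all names are hypothetical.

```python
def quantize(x, bits=5, xmin=-1.0, xmax=1.0):
    """Uniform scalar quantizer: map x to an index in {0, ..., 2**bits - 1}."""
    levels = 2 ** bits
    step = (xmax - xmin) / levels
    idx = int((x - xmin) / step)
    return min(max(idx, 0), levels - 1)

def natural_bits(index, bits=5):
    """Natural binary mapping of a quantizer index to a bit vector (MSB first)."""
    return [(index >> (bits - 1 - k)) & 1 for k in range(bits)]

def gray_bits(index, bits=5):
    """Gray mapping: adjacent quantizer indices differ in exactly one bit."""
    g = index ^ (index >> 1)
    return [(g >> (bits - 1 - k)) & 1 for k in range(bits)]

# Example: index 12 under the two mappings.
print(natural_bits(12))  # [0, 1, 1, 0, 0]
print(gray_bits(12))     # [0, 1, 0, 1, 0]
```

The choice of mapping matters because it determines how well the residual correlation between successive quantizer indices can be exploited by the iterative decoder, which is why the abstract stresses "a proper bit mapping".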

44 citations

Journal ArticleDOI
TL;DR: This paper compares the finite-length performance of protograph-based spatially coupled low-density parity-check (SC-LDPC) codes and LDPC block codes (LDPC-BCs) over GF(q); to reduce computational complexity and latency, the q-ary SC-LDPC codes are decoded with a sliding window decoder whose stopping rule is based on a soft belief propagation (BP) estimate.
Abstract: In this paper, we compare the finite-length performance of protograph-based spatially coupled low-density parity-check (SC-LDPC) codes and LDPC block codes (LDPC-BCs) over GF(q). To reduce computational complexity and latency, a sliding window decoder with a stopping rule based on a soft belief propagation (BP) estimate is used for the q-ary SC-LDPC codes. Two regimes are considered: one in which the constraint length of the q-ary SC-LDPC codes is equal to the block length of the q-ary LDPC-BCs, and the other in which the two decoding latencies are equal. Simulation results confirm that, in both regimes, (3,6)-, (3,9)-, and (3,12)-regular non-binary SC-LDPC codes can significantly outperform both binary and non-binary LDPC-BCs and binary SC-LDPC codes. Finally, we present a computational complexity comparison of q-ary SC-LDPC codes and q-ary LDPC-BCs under equal decoding latency and equal decoding performance assumptions.
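The following is only a structural sketch, under assumed names, of the sliding-window schedule with an early-stopping rule of the kind the abstract describes; the q-ary BP message passing itself is abstracted behind the callables bp_sweep, reliability, and decide, which are hypothetical placeholders rather than the paper's implementation.

```python
def window_decode(beliefs, num_positions, W, bp_sweep, reliability, decide,
                  threshold, max_iters):
    """Sliding-window decoding of a spatially coupled code.

    beliefs     -- mutable decoder state (per-symbol soft values), updated by bp_sweep
    W           -- window size in spatial positions
    bp_sweep    -- one BP iteration restricted to the positions currently in the window
    reliability -- soft BP estimate for the window's target (oldest) position
    decide      -- hard decision on the target position
    """
    decisions = []
    for t in range(num_positions):
        window = range(t, min(t + W, num_positions))
        for _ in range(max_iters):
            bp_sweep(beliefs, window)
            # Stopping rule: leave the iteration loop as soon as the soft
            # estimate of the target position is reliable enough.
            if reliability(beliefs, t) >= threshold:
                break
        decisions.append(decide(beliefs, t))
    return decisions
```

Early termination of this kind is what lets the window decoder trade a small performance loss for lower computational complexity and latency than decoding the full coupled chain at once.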

44 citations

Patent
Hiroo Okamoto, Masaharu Kobayashi, Hiroyuki Kimura, Takaharu Noguchi, Takao Arai
31 Oct 1984
TL;DR: In the second decoding, error detection for the first code blocks and error correction of S2 words at unknown locations together with E flagged word erasures, where d1 is a Hamming distance and S2 and E satisfy 2S2 + E ≤ d1 - 1, are performed in parallel or sequentially; a combination of S2 and E with high correction capability and low probability of miscorrection is then selected from the correction results and the numbers of flags added at the first decoding, and the word errors are corrected based on the selected combination.
Abstract: Error correction of digital signals is suited for codes having error detection and correction words, such as a doubly-encoded Reed-Solomon code. In a first decoding, at least error detection is effected and flags indicating the decoding conditions are added. In a second decoding, error detection for the first code blocks and error correction of S2 words at unknown locations and E flagged word erasures, where d1 is a Hamming distance and S2 and E satisfy 2S2 + E ≤ d1 - 1, are effected in parallel or sequentially. A combination of S2 and E having a high correction capability and a low probability of miscorrection is selected from a plurality of correction results, based on the error locations and the numbers of flags added at the first decoding, and the word errors are corrected based on the selected combination.
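The correction capability quoted in the abstract is governed by the bound 2S2 + E ≤ d1 - 1. The minimal sketch below (a hypothetical helper, not code from the patent) simply enumerates the admissible (S2, E) pairs for a given d1.

```python
def admissible_combinations(d1):
    """All (S2, E) pairs with 2*S2 + E <= d1 - 1:
    S2 errors at unknown locations plus E flagged erasures are correctable."""
    return [(s2, e)
            for s2 in range((d1 - 1) // 2 + 1)
            for e in range(d1 - 1 - 2 * s2 + 1)]

# Example with d1 = 5 (an illustrative value, not taken from the patent):
print(admissible_combinations(5))
# [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (2, 0)]
```

The second decoding described above then selects, among such admissible combinations, one with high correction capability and low miscorrection probability, guided by the erasure flags set in the first decoding.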

44 citations


Network Information
Related Topics (5)
Base station: 85.8K papers, 1M citations, 89% related
Fading: 55.4K papers, 1M citations, 89% related
Wireless network: 122.5K papers, 2.1M citations, 87% related
Network packet: 159.7K papers, 2.2M citations, 87% related
Wireless: 133.4K papers, 1.9M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    84
2022    153
2021    79
2020    78
2019    82
2018    94