
Sequential decoding

About: Sequential decoding is a research topic. Over its lifetime, 8,667 publications have appeared within this topic, receiving 204,271 citations.


Papers
Book ChapterDOI
26 Jun 2017
TL;DR: In this paper, the authors quantise other information set decoding algorithms by using quantum walk techniques which were devised for the subset-sum problem in [6], improving the worst-case complexity of Bernstein's algorithm from \(2^{0.06035n}\) to \(2^{0.05869n}\).
Abstract: The security of code-based cryptosystems such as the McEliece cryptosystem relies primarily on the difficulty of decoding random linear codes. The best decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoding techniques. It is also important to assess the security of such cryptosystems against a quantum computer. This research thread started in [23], and the best algorithm to date has been Bernstein’s quantisation [5] of the simplest information set decoding algorithm, namely Prange’s algorithm. It consists of applying Grover’s quantum search to obtain a quadratic speed-up of Prange’s algorithm. In this paper, we quantise other information set decoding algorithms by using quantum walk techniques which were devised for the subset-sum problem in [6]. This improves the worst-case complexity of Bernstein’s algorithm from \(2^{0.06035n}\) to \(2^{0.05869n}\) with the best algorithm presented here (where n is the code length).
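The exponent arithmetic in the abstract can be sketched directly: a Grover-type quadratic speed-up halves the exponent c in a 2^{cn} attack cost. A minimal Python illustration, assuming the commonly cited classical Prange worst-case exponent 0.1207 and an illustrative code length n (both assumptions, not taken from the abstract):

```python
# A quadratic (Grover-type) speed-up halves the exponent c in a 2^{c n} cost.
def grover_exponent(classical_exponent: float) -> float:
    """Exponent after a quadratic (Grover) quantum speed-up."""
    return classical_exponent / 2.0

prange_classical = 0.1207        # classical Prange worst-case exponent (assumed)
bernstein_quantum = grover_exponent(prange_classical)  # 0.06035, as in the abstract
quantum_walk = 0.05869           # improved exponent reported by this paper

def bits_of_security(exponent: float, n: int) -> float:
    """Approximate attack cost in bits for code length n."""
    return exponent * n

n = 6960                         # illustrative McEliece-style code length (assumed)
saving = bits_of_security(bernstein_quantum, n) - bits_of_security(quantum_walk, n)
print(saving)                    # security-margin reduction in bits at this n
```

Even a small change in the exponent matters because it is multiplied by the code length, so the 0.06035 to 0.05869 improvement shaves a fixed fraction off the attack's bit cost.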

44 citations

Journal ArticleDOI
TL;DR: Under MAP decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size.
Abstract: Motivated by the significant performance gains which polar codes experience under successive cancellation list decoding, their scaling exponent is studied as a function of the list size. In particular, the error probability is fixed, and the tradeoff between the block length and back-off from capacity is analyzed. A lower bound is provided on the error probability under MAP decoding with list size L for any binary-input memoryless output-symmetric channel and for any class of linear codes such that their minimum distance is unbounded as the block length grows large. Then, it is shown that under MAP decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size. In particular, this result applies to polar codes, since their minimum distance tends to infinity as the block length increases. A similar result is proved for genie-aided successive cancellation decoding when transmission takes place over the binary erasure channel, namely, the scaling exponent remains constant for any fixed number of helps from the genie. Note that since genie-aided successive cancellation decoding might be strictly worse than successive cancellation list decoding, the problem of establishing the scaling exponent of the latter remains open.
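The scaling exponent mu quantifies the tradeoff the abstract describes: at fixed error probability, the required block length grows roughly like n ~ (C - R)^(-mu), where C - R is the back-off from capacity. A short sketch of this relationship, using mu = 3.627 (the known value for polar codes on the BEC under successive cancellation) as an illustrative assumption:

```python
# At fixed error probability, blocklength scales as n ~ (C - R) ** (-mu),
# so shrinking the gap to capacity inflates n by a power of the gap ratio.
def blocklength_ratio(gap1: float, gap2: float, mu: float) -> float:
    """Factor by which blocklength grows when the capacity gap shrinks gap1 -> gap2."""
    return (gap1 / gap2) ** mu

mu_sc_bec = 3.627   # scaling exponent for SC decoding on the BEC (assumed here)
# Halving the gap to capacity costs a factor of 2 ** mu in blocklength:
print(blocklength_ratio(0.1, 0.05, mu_sc_bec))   # = 2 ** 3.627, about 12.4
```

The paper's negative result says a finite list can lower the constant in front of this law but not the exponent mu itself.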

44 citations

Proceedings ArticleDOI
04 Aug 2003
TL;DR: In this article, the turbo principle is applied to joint source-channel decoding for continuous-amplitude source samples, where the source samples are quantized and their indexes are appropriately mapped onto bitvectors.
Abstract: The turbo principle (iterative decoding between component decoders) is a general scheme, which we apply to joint source-channel decoding. As a realistic example (e.g., speech parameter coding), we discuss joint source-channel decoding for auto-correlated continuous-amplitude source samples. At the transmitter, the source samples are quantized and their indexes are appropriately mapped onto bitvectors. Afterwards, the bits are interleaved and channel-encoded; an AWGN channel is assumed for transmission. The auto-correlations of the source samples act as implicit outer channel codes that are serially concatenated with the inner explicit channel code. Thus, by applying the turbo principle, we can perform iterative decoding at the receiver. As an example, we show that, with a proper bit mapping for a 5-bit quantizer, iterative source-channel decoding saves up to 2 dB in channel SNR or 8 dB in source SNR for an auto-correlated Gaussian source.
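The transmitter-side chain described above (quantise, then map indexes to bitvectors) can be sketched in a few lines. The uniform quantiser range and the Gray mapping below are illustrative choices, not necessarily the optimised "proper bit mapping" the paper finds:

```python
# Sketch of the transmitter side: uniform 5-bit quantisation of a
# continuous-amplitude sample, then an index-to-bitvector mapping.
def quantize(x: float, levels: int = 32, lo: float = -4.0, hi: float = 4.0) -> int:
    """Map x to an index in [0, levels - 1]; levels = 32 gives a 5-bit quantizer."""
    step = (hi - lo) / levels
    x = min(max(x, lo), hi - 1e-12)      # clip to the quantizer range
    return int((x - lo) / step)

def gray_map(idx: int) -> int:
    """Binary-reflected Gray code of the quantizer index (an assumed mapping)."""
    return idx ^ (idx >> 1)

def to_bits(v: int, width: int = 5) -> list[int]:
    """Bitvector (MSB first) for one quantizer index."""
    return [(v >> i) & 1 for i in reversed(range(width))]

sample = 0.37
bits = to_bits(gray_map(quantize(sample)))
print(bits)                              # 5-bit vector handed to the interleaver
```

In the full scheme these bitvectors are interleaved and channel-encoded, and the receiver exploits the source autocorrelation as an implicit outer code during turbo iterations.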

44 citations

Journal ArticleDOI
TL;DR: Simulations for both the BSC and the AWGN channel show that the reliability-based decision-feedback scheme can surpass the random-coding lower bound on throughput for feedback codes at some blocklengths less than 100 symbols.
Abstract: This paper presents a variable-length decision-feedback coding scheme that achieves high rates at short blocklengths. This scheme uses the reliability-output Viterbi algorithm (ROVA) to determine when the receiver's decoding estimate satisfies a given error constraint. We evaluate the performance of both terminated and tail-biting convolutional codes at average blocklengths less than 300 symbols, using the ROVA and the tail-biting ROVA, respectively. Comparing with recent results from finite-blocklength information theory, simulations for both the BSC and the AWGN channel show that the reliability-based decision-feedback scheme can surpass the random-coding lower bound on throughput for feedback codes at some blocklengths less than 100 symbols. This is true both when decoding after every symbol is permitted and when decoding is limited to a small number of increments. Finally, the performance of the reliability-based stopping rule with the ROVA is compared with retransmission decisions based on CRCs. For short blocklengths where the latency overhead of the CRC bits is severe, the ROVA-based approach delivers superior rates.
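The decision-feedback loop described above can be sketched as control flow: decode after each increment of received symbols, use a reliability estimate to decide whether to stop. Here `rova_word_error_prob` is a stand-in for the ROVA's per-codeword error estimate (the decay model, target, and increment size are all assumptions for illustration):

```python
# Sketch of variable-length decoding with a reliability-based stopping rule.
def rova_word_error_prob(n: int) -> float:
    """Stand-in for the ROVA estimate: error probability decays with blocklength."""
    return 2.0 ** (-n / 20)

def decode_with_feedback(target_error: float = 1e-3,
                         increment: int = 10,
                         max_symbols: int = 300):
    """Receive symbols in increments; stop once the decoding estimate is reliable."""
    received = 0
    p_error = 1.0
    while received < max_symbols:
        received += increment                    # accept another increment
        p_error = rova_word_error_prob(received)  # reliability of current estimate
        if p_error <= target_error:              # stopping rule satisfied:
            break                                # ACK over the feedback link
    return received, p_error

n_used, p = decode_with_feedback()
print(n_used, p)    # symbols consumed and final word-error estimate
```

Limiting decoding to a small number of increments (larger `increment`) trades a little rate for far fewer decoder invocations, matching the paper's observation that both regimes can beat the random-coding bound.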

44 citations

Journal ArticleDOI
TL;DR: This paper compares the finite-length performance of protograph-based spatially coupled low-density parity-check (SC-LDPC) codes and LDPC block codes (LDPC-BCs) over GF(q) with a sliding window decoder whose stopping rule is based on a soft belief propagation (BP) estimate, reducing computational complexity and latency.
Abstract: In this paper, we compare the finite-length performance of protograph-based spatially coupled low-density parity-check (SC-LDPC) codes and LDPC block codes (LDPC-BCs) over GF(q). To reduce computational complexity and latency, a sliding window decoder with a stopping rule based on a soft belief propagation (BP) estimate is used for the q-ary SC-LDPC codes. Two regimes are considered: one when the constraint length of q-ary SC-LDPC codes is equal to the block length of q-ary LDPC-BCs and the other when the two decoding latencies are equal. Simulation results confirm that, in both regimes, (3,6)-, (3,9)-, and (3,12)-regular non-binary SC-LDPC codes can significantly outperform both binary and non-binary LDPC-BCs and binary SC-LDPC codes. Finally, we present a computational complexity comparison of q-ary SC-LDPC codes and q-ary LDPC-BCs under equal decoding latency and equal decoding performance assumptions.
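The sliding-window idea above can be sketched structurally: the decoder runs BP only over a window of consecutive coupled positions, stops early once a soft estimate of the oldest position is reliable, emits that position, and slides. The convergence model and threshold below are stand-ins, purely to show the control flow, not the paper's decoder:

```python
# Structural sketch of sliding-window decoding for a spatially coupled code.
def sliding_window_decode(chain_length: int = 20, window: int = 6,
                          max_iters: int = 50, threshold: float = 0.999):
    """Decode a coupled chain one window at a time with soft-estimate stopping."""
    reliability = [0.5] * chain_length          # stand-in soft BP estimates
    emitted = []
    for start in range(chain_length - window + 1):
        for _ in range(max_iters):
            # Stand-in for one BP iteration over the window: estimates improve.
            for pos in range(start, start + window):
                reliability[pos] = 1 - (1 - reliability[pos]) * 0.7
            # Stopping rule: soft estimate of the target (oldest) position.
            if reliability[start] >= threshold:
                break
        emitted.append(start)                   # emit oldest position, then slide
    return emitted

positions = sliding_window_decode()
print(len(positions))                           # number of emitted target positions
```

The latency advantage comes from the window: each symbol is decided after roughly `window` coupled blocks arrive, rather than after the whole chain, and the stopping rule avoids running all `max_iters` iterations when the target position converges early.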

44 citations


Network Information
Related Topics (5)
MIMO: 62.7K papers, 959.1K citations (90% related)
Fading: 55.4K papers, 1M citations (90% related)
Base station: 85.8K papers, 1M citations (89% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Wireless: 133.4K papers, 1.9M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    51
2022    112
2021    24
2020    26
2019    22
2018    32