Topic

Sequential decoding

About: Sequential decoding refers to tree-search decoding algorithms (such as the Fano and stack algorithms) for convolutional and tree codes, and more broadly to decoding strategies that process the received sequence step by step. Over the lifetime, 8667 publications have been published within this topic, receiving 204271 citations.


Papers
Journal ArticleDOI
TL;DR: This work introduces a simplified description of several iterative decoding algorithms in terms of the a posteriori average entropy, and studies them as a function of a single parameter that closely approximates the signal-to-noise ratio (SNR).
Abstract: Iterative decoding algorithms may be viewed as high-dimensional nonlinear dynamical systems, depending on a large number of parameters. In this work, we introduce a simplified description of several iterative decoding algorithms in terms of the a posteriori average entropy, and study them as a function of a single parameter that closely approximates the signal-to-noise ratio (SNR). Using this approach, we show that virtually all the iterative decoding schemes in use today exhibit similar qualitative dynamics. In particular, a whole range of phenomena known to occur in nonlinear systems, such as existence of multiple fixed points, oscillatory behavior, bifurcations, chaos, and transient chaos are found in iterative decoding algorithms. As an application, we develop an adaptive technique to control transient chaos in the turbo-decoding algorithm, leading to a substantial improvement in performance. We also propose a new stopping criterion for turbo codes that achieves the same performance with considerably fewer iterations.

37 citations
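
The paper above tracks the a posteriori average entropy of the decoder across iterations and builds a stopping criterion on it. The following is a minimal, illustrative sketch of that general idea in Python, not the authors' algorithm: the `decode_iteration` callback, the thresholds `eps` and `delta`, and the LLR-based entropy measure are assumptions introduced here.

```python
import numpy as np

def average_entropy(app_llrs):
    """Average binary entropy (bits) of a-posteriori LLRs.

    Each LLR l maps to p = 1 / (1 + exp(-l)); entropy is
    H(p) = -p*log2(p) - (1-p)*log2(1-p).
    """
    llrs = np.clip(np.asarray(app_llrs, dtype=float), -50.0, 50.0)
    p = 1.0 / (1.0 + np.exp(-llrs))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)          # avoid log(0)
    return float(np.mean(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)))

def decode_with_entropy_stop(decode_iteration, channel_llrs,
                             max_iters=20, eps=1e-3, delta=1e-4):
    """Run iterative decoding, stopping once the average a-posteriori
    entropy is near zero (confident decisions) or has stopped changing.

    `decode_iteration(channel_llrs, state)` is a hypothetical placeholder
    for one pass of the constituent decoders; it returns (app_llrs, state).
    """
    state, prev_h = None, None
    for it in range(1, max_iters + 1):
        app_llrs, state = decode_iteration(channel_llrs, state)
        h = average_entropy(app_llrs)
        if h < eps:                              # bits effectively decided
            break
        if prev_h is not None and abs(prev_h - h) < delta:
            break                                # entropy trajectory stalled
        prev_h = h
    return np.asarray(app_llrs) > 0, it
```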

Journal ArticleDOI
TL;DR: This paper presents three stochastic-computation-based algorithms that reduce the decoding complexity of non-binary low-density parity-check codes and studies their performance and complexity; the first matches the sum-product algorithm for codes over low-order Galois fields with a small variable node degree.
Abstract: Despite the outstanding performance of non-binary low-density parity-check (LDPC) codes over many communication channels, they are not in widespread use yet. This is due to the high implementation complexity of their decoding algorithms, even those that compromise performance for the sake of simplicity. In this paper, we present three algorithms based on stochastic computation to reduce the decoding complexity. The first is a purely stochastic algorithm with error-correcting performance matching that of the sum-product algorithm (SPA) for LDPC codes over Galois fields with low order and a small variable node degree. We also present a modified version which reduces the number of decoding iterations required while remaining purely stochastic and having a low per-iteration complexity. The second algorithm, relaxed half-stochastic (RHS) decoding, combines elements of the SPA and the stochastic decoder and uses successive relaxation to match the error-correcting performance of the SPA. Furthermore, it uses fewer iterations than the purely stochastic algorithm and does not have limitations on the field order and variable node degree of the codes it can decode. The third algorithm, NoX, is a fully stochastic specialization of RHS for codes with a variable node degree 2 that offers similar performance, but at a significantly lower computational complexity. We study the performance and complexity of the algorithms; noting that all have lower per-iteration complexity than SPA and that RHS can have comparable average per-codeword computational complexity, and NoX a lower one.

37 citations
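
The relaxed half-stochastic (RHS) decoder above relies on successive relaxation to extract soft values from stochastic bit streams. Below is a minimal sketch of the relaxation idea only, under assumptions introduced here (a simple exponential tracker with step size `beta`, fed by a toy Bernoulli stream); it is not the paper's decoder.

```python
import random

def relaxed_tracker(bit_stream, beta=0.05, init=0.5):
    """Successive-relaxation estimate of the probability carried by a
    stochastic bit stream: p <- (1 - beta) * p + beta * bit.

    A smaller beta averages over more bits (slower, but less noisy).
    """
    p, estimates = init, []
    for bit in bit_stream:
        p = (1.0 - beta) * p + beta * bit
        estimates.append(p)
    return estimates

# Toy usage: a Bernoulli(0.8) stream should drive the estimate toward 0.8.
random.seed(0)
stream = (1 if random.random() < 0.8 else 0 for _ in range(2000))
print(round(relaxed_tracker(stream, beta=0.05)[-1], 2))
```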

Proceedings ArticleDOI
18 Jun 2000
TL;DR: It is shown that maximum likelihood (ML) sequential decoding and maximum a posteriori (MAP) sequence estimation give significant decoding improvements over hard decisions alone, and that the inherent meaning of the codewords can be exploited, without additional transmission of side information, for a further gain.
Abstract: We present the results of two methods for soft decoding of variable-length codes. We first show that maximum likelihood (ML) sequential decoding and maximum a posteriori (MAP) sequence estimation give significant decoding improvements over hard decisions alone; we then show that further improvements can be gained by the additional transmission of the symbol lengths. Finally, we show that it is possible to make use of the inherent meaning of the codewords without additional transmission of side information, which results in a further gain.

37 citations
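
As a rough illustration of MAP sequence estimation for variable-length codes over a memoryless soft-output channel, the sketch below does dynamic programming over bit positions to find the most likely segmentation into codewords. The codebook, priors, and per-bit probability interface are assumptions for illustration; this is not the decoder from the paper above.

```python
import math

def map_vlc_decode(bit_probs, codebook, priors):
    """MAP sequence estimation for a variable-length code (sketch).

    bit_probs[i] : P(transmitted bit i == 1) from the channel soft output
    codebook[s]  : bit string (e.g. "01") for symbol s
    priors[s]    : a-priori probability of symbol s

    best[i] holds the best log-probability of any symbol sequence that
    consumes exactly the first i bits.
    """
    n = len(bit_probs)
    best = [-math.inf] * (n + 1)
    back = [None] * (n + 1)                  # (previous position, symbol)
    best[0] = 0.0
    for i in range(n):
        if best[i] == -math.inf:
            continue
        for sym, bits in codebook.items():
            j = i + len(bits)
            if j > n:
                continue
            metric = best[i] + math.log(priors[sym])
            for k, b in enumerate(bits):
                p1 = min(max(bit_probs[i + k], 1e-12), 1.0 - 1e-12)
                metric += math.log(p1 if b == "1" else 1.0 - p1)
            if metric > best[j]:
                best[j], back[j] = metric, (i, sym)
    # Trace back the best segmentation of the full bit sequence.
    symbols, pos = [], n
    while pos > 0 and back[pos] is not None:
        pos, sym = back[pos]
        symbols.append(sym)
    return symbols[::-1]

# Toy usage with an assumed prefix code {a: "0", b: "10", c: "11"}:
codebook = {"a": "0", "b": "10", "c": "11"}
priors = {"a": 0.5, "b": 0.25, "c": 0.25}
print(map_vlc_decode([0.9, 0.2, 0.8, 0.1], codebook, priors))  # ['b', 'b']
```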

Journal Article
TL;DR: In this paper, an improved method for the fast correlation attack on certain stream ciphers is presented. It employs two decoding approaches: list decoding, in which a candidate is assigned to the list based on the most reliable information sets, and minimum distance decoding based on Hamming distance. The performance and complexity of the proposed algorithm are considered.
Abstract: An improved method for the fast correlation attack on certain stream ciphers is presented. The proposed algorithm employs the following decoding approaches: list decoding, in which a candidate is assigned to the list based on the most reliable information sets, and minimum distance decoding based on Hamming distance. The performance and complexity of the proposed algorithm are considered. A desirable characteristic of the proposed algorithm is its theoretical analyzability, so that its performance can also be estimated in cases where the corresponding experiments are not feasible due to current technological limitations. The algorithm is compared with relevant recently reported algorithms, and its advantages are pointed out. Finally, the proposed algorithm is considered in the context of the security evaluation of a stream cipher proposal submitted to NESSIE.

37 citations
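
The two decoding ingredients named in the abstract above can be pictured with the minimal standalone helpers below. The candidate and reliability interfaces are assumptions introduced here, and the sketch deliberately omits the candidate generation and complexity analysis that the paper is actually about.

```python
def hamming_distance(a, b):
    """Number of positions where two equal-length bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance_select(candidates, observed_bits):
    """Pick the candidate whose keystream is closest, in Hamming distance,
    to the observed hard-decision sequence (minimum distance decoding step).

    `candidates` maps a hypothesis (e.g. an LFSR initial state) to the
    bit sequence it would generate.
    """
    return min(candidates,
               key=lambda h: hamming_distance(candidates[h], observed_bits))

def most_reliable_positions(reliabilities, k):
    """Indices of the k most reliable observed bits; a list-decoding step
    would build candidate information sets from these positions.
    """
    return sorted(range(len(reliabilities)),
                  key=lambda i: reliabilities[i], reverse=True)[:k]
```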

Journal ArticleDOI
Mark M. Wilde
TL;DR: It is demonstrated that a sequential decoding strategy works well even in the most general ‘one-shot’ regime, where the authors are given a single instance of a channel and wish to determine the maximal number of bits that can be communicated up to a small failure probability.
Abstract: Since a quantum measurement generally disturbs the state of a quantum system, one might think that it should not be possible for a sender and receiver to communicate reliably when the receiver performs a large number of sequential measurements to determine the message of the sender. We show here that this intuition is not true, by demonstrating that a sequential decoding strategy works well even in the most general "one-shot" regime, where we are given a single instance of a channel and wish to determine the maximal number of bits that can be communicated up to a small failure probability. This result follows by generalizing a non-commutative union bound to apply for a sequence of general measurements. We also demonstrate two ways in which a receiver can recover a state close to the original state after it has been decoded by a sequence of measurements that each succeed with high probability. The second of these methods will be useful in realizing an efficient decoder for fully quantum polar codes, should a method ever be found to realize an efficient decoder for classical-quantum polar codes.

37 citations
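
For orientation, a commonly cited projective form of the non-commutative union bound (due to Sen), which the result above generalizes to sequences of general measurements, can be written as follows; the statement here is background, not the paper's new bound.

```latex
% Non-commutative union bound, projective form: for a density operator
% \rho and projectors \Pi_1, ..., \Pi_N,
\[
  1 - \operatorname{Tr}\!\left(\Pi_N \cdots \Pi_1 \,\rho\, \Pi_1 \cdots \Pi_N\right)
  \;\le\; 2\,\sqrt{\sum_{i=1}^{N} \operatorname{Tr}\!\left[(I - \Pi_i)\,\rho\right]}.
\]
% Each term Tr[(I - \Pi_i)\rho] is the probability that the i-th measurement
% alone fails on \rho, so a sequence of individually reliable measurements
% disturbs the state only mildly.
```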


Network Information
Related Topics (5)
MIMO: 62.7K papers, 959.1K citations (90% related)
Fading: 55.4K papers, 1M citations (90% related)
Base station: 85.8K papers, 1M citations (89% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Wireless: 133.4K papers, 1.9M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    51
2022    112
2021    24
2020    26
2019    22
2018    32