Topic
Sequential decoding
About: Sequential decoding is a research topic. Over its lifetime, 8,667 publications have been published within this topic, receiving 204,271 citations.
Papers published on a yearly basis
Papers
25 Jun 2000
TL;DR: It is shown that the performance of Reed-Solomon codes, for certain parameter values, is limited by worst case codeword configurations, but that with randomly chosen codes over large alphabets, more errors can be corrected.
Abstract: We derive upper bounds on the number of errors that can be corrected by list decoding of maximum-distance separable (MDS) codes using small lists. We show that the performance of Reed-Solomon (RS) codes, for certain parameter values, is limited by worst case codeword configurations, but that with randomly chosen codes over large alphabets, more errors can be corrected.
62 citations
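To make the unique- versus list-decoding gap concrete, the sketch below (an illustration, not the paper's construction) compares the classical unique-decoding radius of an MDS code, ⌊(d−1)/2⌋ with d = n−k+1, against the Guruswami–Sudan list-decoding radius for Reed–Solomon codes; n − 1 − ⌊√(n(k−1))⌋ is the standard statement of that radius.

```python
import math

def unique_radius(n, k):
    # Unique decoding corrects up to floor((d-1)/2) errors;
    # for an MDS code the distance is d = n - k + 1.
    d = n - k + 1
    return (d - 1) // 2

def gs_list_radius(n, k):
    # Guruswami-Sudan list decoding of an [n, k] RS code corrects up to
    # n - 1 - floor(sqrt(n * (k - 1))) errors.
    return n - 1 - math.isqrt(n * (k - 1))

for n, k in [(255, 223), (255, 128), (64, 16)]:
    print(n, k, unique_radius(n, k), gs_list_radius(n, k))
```

For the common (255, 223) RS code the gap is small (16 vs. 17 errors), but at lower rates list decoding corrects noticeably more errors, consistent with the abstract's point that list size and parameter choice govern the achievable gain.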
01 Jul 2012
TL;DR: This paper analyzes a class of spatially-coupled generalized LDPC codes and observes that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding.
Abstract: A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we analyze a class of spatially-coupled generalized LDPC codes and observe that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding. These codes can be seen as generalized product codes and are closely related to braided block codes.
62 citations
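The codes above can be viewed as generalized product codes decoded by iterative hard-decision component decoding. As a toy illustration of that decoding style (the paper's codes are spatially coupled and far larger; this is only a sketch), the example below forms a 7×7 product of [7,4] Hamming codes and corrects scattered bit errors by alternately applying bounded-distance decoding to every row and every column:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column i (1-indexed) is the
# binary representation of i, so a single-error syndrome names the error position.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T

def hamming_correct(word):
    # Hard-decision bounded-distance decoding of one component word.
    s = H @ word % 2
    pos = int("".join(str(int(b)) for b in s), 2)
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

def decode_product(arr, max_iters=5):
    # Iteratively decode all rows, then all columns, until every
    # row and column is a valid Hamming codeword.
    arr = arr.copy()
    for _ in range(max_iters):
        for r in range(7):
            arr[r, :] = hamming_correct(arr[r, :])
        for c in range(7):
            arr[:, c] = hamming_correct(arr[:, c])
        if not (H @ arr % 2).any() and not (H @ arr.T % 2).any():
            break
    return arr

# The all-zeros array is a valid product codeword; add three scattered errors.
received = np.zeros((7, 7), dtype=int)
for r, c in [(0, 0), (2, 3), (5, 6)]:
    received[r, c] = 1

decoded = decode_product(received)
print(decoded.any())  # False: all errors corrected
```

Because each row here sees at most one error, a single row pass already suffices; the interesting regime for the paper's analysis is when errors cluster and the row/column passes must cooperate over many iterations.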
TL;DR: A recursive implementation of optimal soft decoding for vector quantization over noisy channels with finite memory is presented, together with a lower-complexity suboptimal decoder based on a generalization of the Viterbi algorithm.
Abstract: We provide a general treatment of optimal soft decoding for vector quantization over noisy channels with finite memory. The main result is a recursive implementation of optimal decoding. We also consider a lower-complexity suboptimal decoding approach based on a generalization of the Viterbi algorithm. Finally, we treat the problem of combined encoder-decoder design. Simulations compare the new decoders to a decision-based approach that uses Viterbi detection plus table lookup decoding. Optimal soft decoding significantly outperforms the benchmark decoder. The introduced suboptimal decoder performs close to the optimal decoder and outperforms the benchmark scheme at comparable complexity.
62 citations
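The suboptimal decoder above generalizes the Viterbi algorithm. As background, here is a plain textbook Viterbi sketch on a small hidden-state trellis; the two-state "good/bad channel" parameters are invented for illustration and are not the paper's channel model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s]: log-probability of the best state path ending in s after obs[:t+1].
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s] * emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace back from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-state channel with memory: the Good state rarely emits errors ('1'),
# the Bad state often does, and both states are sticky.
states = ("G", "B")
start_p = {"G": 0.9, "B": 0.1}
trans_p = {"G": {"G": 0.9, "B": 0.1}, "B": {"G": 0.2, "B": 0.8}}
emit_p = {"G": {"0": 0.95, "1": 0.05}, "B": {"0": 0.4, "1": 0.6}}

print(viterbi(["0", "0", "0"], states, start_p, trans_p, emit_p))  # ['G', 'G', 'G']
```

The channel memory enters through the sticky transition probabilities: an isolated observation is weighed against the cost of switching state, which is exactly the kind of trellis search the paper's suboptimal decoder extends to vector-quantized sources.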
09 Jul 2006
TL;DR: It is shown that any quantum convolutional code contains a subcode of finite index which has a non-catastrophic encoding circuit, and that the encoders and their inverses constructed by the method can naturally be applied online, i.e., qubits can be sent and received with constant delay.
Abstract: We present an algorithm to construct quantum circuits for encoding and inverse encoding of quantum convolutional codes. We show that any quantum convolutional code contains a subcode of finite index which has a non-catastrophic encoding circuit. Our work generalizes the conditions for non-catastrophic encoders derived in a paper by Ollivier and Tillich (quant-ph/0401134), which are applicable only to a restricted class of quantum convolutional codes. We also show that the encoders and their inverses constructed by our method can naturally be applied online, i.e., qubits can be sent and received with constant delay.
62 citations
TL;DR: The trapping sets of the asymptotically good protograph-based LDPC convolutional codes considered earlier are studied and it is shown that the size of the smallest non-empty trapping set grows linearly with the constraint length for these ensembles.
Abstract: Low-density parity-check (LDPC) convolutional codes have been shown to be capable of achieving capacity-approaching performance with iterative message-passing decoding. In the first part of this paper, using asymptotic methods to obtain lower bounds on the free distance to constraint length ratio, we show that several ensembles of regular and irregular LDPC convolutional codes derived from protograph-based LDPC block codes have the property that the free distance grows linearly with respect to the constraint length, i.e., the ensembles are asymptotically good. In particular, we show that the free distance to constraint length ratio of the LDPC convolutional code ensembles exceeds the minimum distance to block length ratio of the corresponding LDPC block code ensembles. A large free distance growth rate indicates that codes drawn from the ensemble should perform well at high signal-to-noise ratios under maximum-likelihood decoding.

When suboptimal decoding methods are employed, there are many factors that affect the performance of a code. Recently, it has been shown that so-called trapping sets are a significant factor affecting decoding failures of LDPC codes over the additive white Gaussian noise channel with iterative message-passing decoding. In the second part of this paper, we study the trapping sets of the asymptotically good protograph-based LDPC convolutional codes considered earlier. By extending the theory presented in part one and using similar bounding techniques, we show that the size of the smallest non-empty trapping set grows linearly with the constraint length for these ensembles.
61 citations
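The distance-to-blocklength ratio discussed above can be computed by brute force for a small linear block code. The sketch below is illustrative only (the paper's ensembles are convolutional and far too large for enumeration) and measures the ratio for the [7,4] Hamming code:

```python
import itertools
import numpy as np

# Systematic generator matrix of the [7,4] Hamming code (minimum distance 3),
# used here as a small stand-in for the much larger protograph-based ensembles.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def min_distance(G):
    # For a linear code, the minimum distance equals the minimum
    # Hamming weight over all nonzero codewords.
    k, n = G.shape
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg):
            w = int((np.array(msg) @ G % 2).sum())
            best = min(best, w)
    return best

d = min_distance(G)
print(d, d / G.shape[1])  # minimum distance and distance-to-blocklength ratio
```

This exhaustive search costs 2^k codeword evaluations, which is exactly why the paper relies on asymptotic bounding techniques rather than enumeration to establish linear distance growth for its ensembles.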