Sequential decoding

About: Sequential decoding is a research topic. Over its lifetime, 8,667 publications on this topic have been published, receiving 204,271 citations.


Papers
Journal ArticleDOI
TL;DR: The conjecture of Rujan on error-correcting codes is proven, and errors in the decoding of signals transmitted through noisy channels assume their smallest values when the signals are decoded at a particular finite temperature.
Abstract: The conjecture of Rujan on error-correcting codes is proven. Errors in the decoding of signals transmitted through noisy channels assume their smallest values when the signals are decoded at a particular finite temperature. This finite-temperature decoding is compared with conventional maximum-likelihood decoding, which corresponds to the T = 0 case. The gauge-transformation method from spin glass theory is useful in this argument.

37 citations
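
To make the temperature picture concrete, here is a minimal Python sketch contrasting maximum-likelihood decoding (the T -> 0 limit) with bitwise finite-temperature decoding over a binary symmetric channel. The toy [5,2] code, its generator matrix, and all parameter values are illustrative choices, not taken from the paper.

```python
import itertools
import math
import random

# Toy [5,2] binary linear code; G is an illustrative choice.
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

def encode(msg):
    """Encode a 2-bit message with G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

CODEBOOK = [encode(m) for m in itertools.product([0, 1], repeat=2)]

def decode(y, p, T):
    """Bitwise decoding of BSC(p) output y at temperature T.

    Each codeword x gets Boltzmann weight exp(-beta * d_H(x, y)) with
    beta = log((1 - p) / p) / T.  T -> 0 recovers maximum-likelihood
    (minimum-distance) decoding; T = 1 is the finite temperature at
    which the paper proves the bit error rate is smallest.
    """
    beta = math.log((1 - p) / p) / T
    dists = [sum(a != b for a, b in zip(x, y)) for x in CODEBOOK]
    dmin = min(dists)                      # shift to avoid underflow
    weights = [math.exp(-beta * (d - dmin)) for d in dists]
    total = sum(weights)
    # Decode each bit to the sign of its marginal posterior.
    return [int(sum(w for w, x in zip(weights, CODEBOOK) if x[i]) / total > 0.5)
            for i in range(len(y))]

random.seed(0)
p = 0.2
x = encode([1, 0])
y = [b ^ (random.random() < p) for b in x]   # pass x through BSC(p)
print("sent      :", x)
print("received  :", y)
print("ML (T~0)  :", decode(y, p, T=1e-6))
print("MAP (T=1) :", decode(y, p, T=1.0))
```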

Journal ArticleDOI
TL;DR: The optimum gear-shift decoder is proved to have a decoding threshold equal to or better than the best decoding threshold among those of the available algorithms.
Abstract: This paper considers a class of iterative message-passing decoders for low-density parity-check codes in which the decoder can choose its decoding rule from a set of decoding algorithms at each iteration. Each available decoding algorithm may have a different per-iteration computation time and performance. With an appropriate choice of algorithm at each iteration, overall decoding latency can be reduced significantly, compared with standard decoding methods. Such a decoder is called a gear-shift decoder because it changes its decoding rule (shifts gears) in order to guarantee both convergence and maximum decoding speed (minimum decoding latency). Using extrinsic information transfer charts, the problem of finding the optimum (minimum decoding latency) gear-shift decoder is formulated as a computationally tractable dynamic program. The optimum gear-shift decoder is proved to have a decoding threshold equal to or better than the best decoding threshold among those of the available algorithms. In addition to speeding up software decoder implementations, gear-shift decoding can be applied to optimize a pipelined hardware decoder, minimizing hardware cost for a given decoder throughput.

37 citations
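
The dynamic program at the heart of gear-shift decoding can be sketched in a few lines. The two algorithms below, with their made-up per-iteration costs and extrinsic-information transfer functions, are stand-ins for the real message-passing rules whose transfer curves the paper reads off EXIT charts.

```python
# Two hypothetical decoding algorithms: "fast" is cheap but gains little
# extrinsic information at first; "strong" costs 3x as much but starts well.
ALGS = {
    "fast":   (1.0, lambda I: min(1.0, 1.12 * I + 0.01)),
    "strong": (3.0, lambda I: min(1.0, 1.05 * I + 0.10)),
}

def gear_shift_schedule(target=0.999, levels=1000, max_rounds=200):
    """Forward dynamic program over quantized extrinsic information.

    State = extrinsic mutual information I in [0, 1], quantized to
    `levels` steps.  Each round relaxes every reachable state with every
    algorithm, keeping the cheapest (minimum-latency) way to reach it.
    """
    q = lambda I: min(levels, int(round(I * levels)))
    best = {0: (0.0, [])}                 # state -> (latency, schedule)
    for _ in range(max_rounds):
        updated = False
        for s, (cost, sched) in list(best.items()):
            for name, (c, f) in ALGS.items():
                s2 = q(f(s / levels))
                if s2 > s and (s2 not in best or cost + c < best[s2][0]):
                    best[s2] = (cost + c, sched + [name])
                    updated = True
        if not updated:
            break
    goal = [v for s, v in best.items() if s >= q(target)]
    return min(goal) if goal else None

latency, schedule = gear_shift_schedule()
print(f"latency {latency}: {' -> '.join(schedule)}")
```

With these numbers, the optimizer starts in the expensive "strong" gear to get off the ground, then shifts to the cheap "fast" gear once the extrinsic information is high enough, which is exactly the gear-shifting behavior described above.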

Journal ArticleDOI
TL;DR: An algorithm performing maximum-likelihood (ML) soft-decision syndrome decoding based on the LRB is presented, which is more conveniently implementable for codes whose codimension is not small and outperforms any algebraic decoding algorithm capable of correcting up to t+1 errors with an order of reprocessing of at most 2.
Abstract: In this correspondence, various aspects of reliability-based syndrome decoding of binary codes are investigated. First, it is shown that the least reliable basis (LRB) and the most reliable basis (MRB) are duals of each other. By exploiting this duality, an algorithm performing maximum-likelihood (ML) soft-decision syndrome decoding based on the LRB is presented. Contrary to previous LRB-based ML syndrome decoding algorithms, this algorithm is more conveniently implementable for codes whose codimension is not small. New sufficient conditions for optimality are derived. These conditions exploit both the ordering associated with the LRB and the structure of the code considered. With respect to MRB-based sufficient conditions, they present the advantage of requiring no soft information and thus can be preprocessed and stored. Based on these conditions, low-complexity soft-decision syndrome decoding algorithms for particular classes of codes are proposed. Finally, a simple algorithm is analyzed. After the construction of the LRB, this algorithm computes the syndrome of smallest Hamming weight among O(K^i) candidates, where K is the dimension of the code, for an order i of reprocessing. At practical bit-error rates, for codes of length N <= 128, this algorithm always outperforms any algebraic decoding algorithm capable of correcting up to t+1 errors with an order of reprocessing of at most 2, where t is the error-correcting capability of the code considered.

37 citations
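
The flavor of the reprocessing step can be seen in a short brute-force sketch. The [7,4] Hamming code, the exhaustive candidate enumeration, and the `order` parameter (playing the role of the reprocessing order) are simplifications; the paper's algorithm gets its efficiency from ordering positions by reliability and working in the least reliable basis.

```python
import itertools

# Parity-check matrix of the [7,4] Hamming code (column j is the binary
# expansion of j+1); the code choice is illustrative, not from the paper.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(bits):
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def syndrome_decode(llr, order=2):
    """ML decoding over all error patterns of weight <= order.

    Hard decisions come from the signs of the channel LLRs.  Candidate
    error patterns matching the syndrome are ranked by soft cost: the
    sum of |LLR| over the positions they flip, so flipping a reliable
    bit is expensive.
    """
    n = len(llr)
    hard = [1 if l < 0 else 0 for l in llr]
    s = syndrome(hard)
    best, best_cost = None, float("inf")
    for w in range(order + 1):
        for pos in itertools.combinations(range(n), w):
            e = [1 if i in pos else 0 for i in range(n)]
            if syndrome(e) == s:
                cost = sum(abs(llr[i]) for i in pos)
                if cost < best_cost:
                    best, best_cost = e, cost
    return [h ^ b for h, b in zip(hard, best)] if best else hard

llr = [2.1, -0.3, 1.7, 0.9, -1.2, 2.5, 0.4]   # channel LLRs
print(syndrome_decode(llr))                    # flips the cheapest bit(s)
```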

Journal ArticleDOI
TL;DR: It is shown that all such burst-correcting convolutional codes can be converted to a form called "doubly systematic" which simplifies the decoding circuitry, and that error propagation after a decoding mistake is terminated by the occurrence of a double guard space of error-free blocks.
Abstract: A general procedure is formulated for decoding any convolutional code with decoding delay N blocks that corrects all bursts confined to r or fewer consecutive blocks followed by a guard space of at least N-1 consecutive error-free blocks. It is shown that all such codes can be converted to a form called "doubly systematic" which simplifies the decoding circuitry. The decoding procedure can then be implemented with a circuit of the same order of complexity as a parity-checking circuit for a linear block code. A block diagram of a complete decoder is given for an optimal burst-correcting code. It is further shown that error propagation after a decoding mistake is always terminated by the occurrence of a double guard space of error-free blocks.

37 citations
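
As a rough illustration of the burst-plus-guard-space condition the decoder relies on, the sketch below tests whether an error pattern is of the correctable form. It summarizes each block by a single erroneous/error-free flag; the actual decoder, of course, works from parity checks of the doubly systematic code rather than from known error locations.

```python
def correctable(error_flags, r, N):
    """True if every burst fits within r consecutive blocks and is
    followed by a guard space of at least N-1 error-free blocks."""
    i, n = 0, len(error_flags)
    while i < n:
        if error_flags[i]:
            end = min(i + r, n)                  # burst window of r blocks
            for j in range(end, min(end + N - 1, n)):
                if error_flags[j]:               # error inside guard space
                    return False
            i = end + N - 1
        else:
            i += 1
    return True

# r = 2, N = 5: a 2-block burst, a 4-block guard space, then a second
# burst is fine; a 3-block burst is not.
print(correctable([0, 1, 1, 0, 0, 0, 0, 1, 0, 0], r=2, N=5))  # True
print(correctable([0, 1, 1, 1, 0, 0, 0, 0, 0, 0], r=2, N=5))  # False
```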

Journal ArticleDOI
TL;DR: It is proved analytically that all the LDPC convolutional codes of different rates in the family are capable of achieving the capacity of the binary erasure channel (BEC) and the decoding thresholds of the rate-compatible codes approach the corresponding Shannon limits over both channels.
Abstract: In this paper, we propose a new family of rate-compatible regular low-density parity-check (LDPC) convolutional codes. The construction is based on graph extension, i.e., the codes of lower rates are generated by successively extending the graph of the base code with the highest rate. Theoretically, the proposed rate-compatible family can cover all the rational rates from 0 to 1. In addition, the regularity of degree distributions simplifies the code optimization. We prove analytically that all the LDPC convolutional codes of different rates in the family are capable of achieving the capacity of the binary erasure channel (BEC). The analysis is extended to the general binary memoryless symmetric channel, for which a capacity-approaching performance can be achieved. Analytical thresholds and simulation results for finite check and variable node degrees are provided for both BECs and binary-input additive white Gaussian noise channels. The results confirm that the decoding thresholds of the rate-compatible codes approach the corresponding Shannon limits over both channels.

37 citations
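
The capacity claims rest on density evolution. The sketch below computes the BEC decoding threshold of a plain regular (dv, dc) LDPC ensemble by bisection; the paper applies the same style of analysis to its rate-compatible convolutional (spatially coupled) ensembles, for which the threshold provably reaches capacity.

```python
def bec_threshold(dv, dc, tol=1e-6):
    """Largest erasure probability eps for which density evolution
    x_{l+1} = eps * (1 - (1 - x_l)**(dc-1))**(dv-1) drives the erasure
    fraction x to zero."""
    def converges(eps):
        x = eps
        for _ in range(10_000):
            x_new = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x_new < 1e-12:
                return True               # erasures died out
            if abs(x_new - x) < 1e-15:
                return False              # stuck at a nonzero fixed point
            x = x_new
        return False
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

dv, dc = 3, 6                                         # regular rate-1/2 ensemble
print(f"threshold     ~= {bec_threshold(dv, dc):.4f}")  # ~0.4294
print(f"Shannon limit  = {dv / dc}")                    # eps* = 1 - rate = dv/dc
```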


Network Information

Related Topics (5)

MIMO: 62.7K papers, 959.1K citations, 90% related
Fading: 55.4K papers, 1M citations, 90% related
Base station: 85.8K papers, 1M citations, 89% related
Wireless network: 122.5K papers, 2.1M citations, 87% related
Wireless: 133.4K papers, 1.9M citations, 86% related
Performance

Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    51
2022    112
2021    24
2020    26
2019    22
2018    32