
Sequential decoding

About: Sequential decoding is a research topic. Over its lifetime, 8,667 publications have been published within this topic, receiving 204,271 citations.


Papers
Proceedings ArticleDOI
03 May 2017
TL;DR: This paper investigates the performance of Convolutional, Turbo, Low-Density Parity-Check (LDPC), and Polar codes in terms of the Bit-Error-Ratio for different information block lengths and code rates, spanning multiple scenarios of reliability and high throughput.
Abstract: Channel coding is a fundamental building block in any communications system. High-performance codes with low-complexity encoding and decoding are a must-have for future wireless systems, with requirements ranging from operation in highly reliable scenarios, using short information messages and low code rates, to high-throughput scenarios, working with long messages and high code rates. In this paper, we investigate the performance of Convolutional, Turbo, Low-Density Parity-Check (LDPC), and Polar codes in terms of the Bit-Error-Ratio (BER) for different information block lengths and code rates, spanning the multiple scenarios of reliability and high throughput. We further investigate their convergence behavior with respect to the number of iterations (turbo and LDPC) and list size (polar), as well as how their performance is impacted by approximate decoding algorithms.

69 citations
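
BER comparisons like the ones in this paper are typically produced with a Monte-Carlo simulation loop: draw random information bits, encode, transmit over a noisy channel, decode, and count bit errors. The sketch below illustrates that measurement loop under stated assumptions; a rate-1/3 repetition code with soft-decision decoding over BPSK/AWGN stands in for the far stronger convolutional, turbo, LDPC, and polar codes the paper actually evaluates.

```python
# Minimal Monte-Carlo BER measurement loop, assuming BPSK over AWGN.
# A rate-1/3 repetition code stands in for the convolutional, turbo,
# LDPC, and polar codes the paper evaluates (illustrative assumption).
import numpy as np

def ber_repetition(ebn0_db: float, n_bits: int = 100_000, reps: int = 3) -> float:
    rng = np.random.default_rng(0)
    rate = 1.0 / reps
    bits = rng.integers(0, 2, n_bits)
    coded = np.repeat(bits, reps)                 # encode: repeat each bit
    symbols = 1.0 - 2.0 * coded                   # BPSK map: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebn0))    # noise std for this Eb/N0
    received = symbols + sigma * rng.standard_normal(coded.size)
    # soft-decision decode: sum the channel outputs per bit, take the sign
    decided = (received.reshape(n_bits, reps).sum(axis=1) < 0).astype(int)
    return float(np.mean(decided != bits))

for snr_db in (0, 2, 4, 6):
    print(f"Eb/N0 = {snr_db} dB -> BER = {ber_repetition(snr_db):.2e}")
```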

Patent
29 Jul 1992
TL;DR: In this patent, the redundancy introduced by using convolutional codes for error correction is reduced by convolutional encoding at a single encoding rate followed by a puncturing process that applies different puncture rates to different classes of input bits, where the classes are determined by the error sensitivity of each bit.
Abstract: A reduction of the redundancy due to the use of convolutional codes as error correction codes is achieved by convolutional encoding at an identical encoding rate and a puncturing process using different puncture rates for different classes of the input signals, classified by the error sensitivity of each bit. A reduction of the decoding delay time without degrading the decoding error rate is achieved by updating the survivor path for each state with the remaining bits of the selected survivor path, other than the oldest bit, plus an additional bit indicating the state to which a transition is made at the present decoding stage. A reduction of the circuit size of a Viterbi decoder is achieved by using a single RAM in which each word stores, for each state, both the path metric and the survivor path from the immediately previous decoding stage. A reduction of the decoding error rate for a data block encoded by convolutional encoding is achieved by using (i+N+j) bits of decoder input signals containing the entire N bits of the received signals, preceded by the last i bits of the received signals and followed by the first j bits of the received signals.

68 citations
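
Puncturing itself is simple to illustrate: the encoder output is thinned by a repeating keep/drop pattern, and the decoder reinserts neutral values (erasures) at the dropped positions before running its usual algorithm. The sketch below is a minimal, hypothetical illustration of class-dependent puncture rates; the patterns and the two bit classes are assumptions for demonstration, not the patented design.

```python
# Hypothetical sketch of rate matching by puncturing: one mother code
# rate, different repeating keep/drop patterns per bit class. The
# patterns and classes below are illustrative assumptions only.
import numpy as np

def puncture(coded: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Transmit only the positions where the repeating pattern is 1."""
    mask = np.resize(pattern, coded.size).astype(bool)
    return coded[mask]

def depuncture(received: np.ndarray, pattern: np.ndarray, out_len: int) -> np.ndarray:
    """Reinsert neutral values (erasures) at the punctured positions."""
    mask = np.resize(pattern, out_len).astype(bool)
    out = np.zeros(out_len)
    out[mask] = received
    return out

rng = np.random.default_rng(1)
coded_a = rng.integers(0, 2, 12)                    # class A: error-sensitive bits
coded_b = rng.integers(0, 2, 12)                    # class B: less sensitive bits
kept_a = puncture(coded_a, np.array([1, 1, 1, 1]))  # class A: keep everything
kept_b = puncture(coded_b, np.array([1, 1, 1, 0]))  # class B: drop every 4th bit
print(len(kept_a), len(kept_b))                     # 12 vs 9 transmitted values
restored_b = depuncture(kept_b, np.array([1, 1, 1, 0]), coded_b.size)
print(restored_b)                                   # zeros mark erased positions
```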

Journal ArticleDOI
TL;DR: Upper and lower bounds are derived for the decoding complexity of a general lattice L in terms of the dimension n and the coding gain γ of L, obtained from an improved version of Kannan's (1983) method.
Abstract: Upper and lower bounds are derived for the decoding complexity of a general lattice L. The bounds are in terms of the dimension n and the coding gain γ of L, and are obtained based on a decoding algorithm which is an improved version of Kannan's (1983) method. The latter is currently the fastest known method for the decoding of a general lattice. For the decoding of a point x, the proposed algorithm recursively searches inside an n-dimensional rectangular parallelepiped (cube), centered at x, with its edges along the Gram-Schmidt vectors of a proper basis of L. We call algorithms of this type recursive cube search (RCS) algorithms. It is shown that Kannan's algorithm also belongs to this category. The complexity of RCS algorithms is measured in terms of the number of lattice points that need to be examined before a decision is made. To tighten the upper bound on the complexity, we select a lattice basis which is reduced in the sense of Korkin-Zolotarev (1873). It is shown that for any selected basis, the decoding complexity (using RCS algorithms) of any sequence of lattices with possible application in communications (γ ≥ 1) grows at least exponentially with n and γ. It is observed that the densest lattices, and almost all of the lattices used in communications, e.g., Barnes-Wall lattices and the Leech lattice, have equal successive minima (ESM). For the decoding complexity of ESM lattices, a tighter upper bound and a stronger lower bound are derived.

68 citations
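
To make the decoding problem concrete, the sketch below finds the nearest lattice point to a received vector by brute-force enumeration of integer coefficient vectors in a small box around the rounded (Babai) estimate. This is only an illustrative assumption standing in for the paper's RCS algorithm, which bounds its search region using the Gram-Schmidt vectors of a reduced basis; the naive box here scales badly with dimension.

```python
# Brute-force "cube search" sketch: enumerate integer coefficient
# vectors in a small box around the rounded (Babai) estimate and keep
# the closest lattice point. The paper's RCS algorithm instead bounds
# the search with Gram-Schmidt vectors of a reduced basis; this naive
# box is an illustrative assumption and scales badly with dimension.
import itertools
import numpy as np

def decode_lattice(basis: np.ndarray, x: np.ndarray, radius: int = 2) -> np.ndarray:
    """basis: n x n matrix whose columns generate the lattice."""
    center = np.rint(np.linalg.solve(basis, x)).astype(int)  # Babai rounding
    best, best_dist = None, np.inf
    for offset in itertools.product(range(-radius, radius + 1), repeat=x.size):
        z = center + np.array(offset)
        point = basis @ z
        dist = float(np.sum((point - x) ** 2))
        if dist < best_dist:
            best, best_dist = point, dist
    return best

A2 = np.array([[1.0, 0.5], [0.0, np.sqrt(3.0) / 2.0]])  # hexagonal A2 lattice
print(decode_lattice(A2, np.array([0.9, 0.7])))
```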

Proceedings ArticleDOI
17 Nov 2002
TL;DR: It is shown that min-sum is robust against quantization effects; in many cases, only four quantization bits suffice to obtain close-to-ideal performance.
Abstract: This paper is concerned with the implementation issues of the so-called min-sum algorithm (also referred to as max-sum or max-product) for the decoding of low-density parity-check (LDPC) codes. The effects of the clipping threshold and the number of quantization bits on the performance of the min-sum algorithm at short and intermediate block lengths are studied. It is shown that min-sum is robust against quantization effects, and in many cases only four quantization bits suffice to obtain close-to-ideal performance. We also propose modifications to the min-sum algorithm that improve the performance by a few tenths of a dB with only a small increase in decoding complexity.

68 citations
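
The core of the min-sum check-node update is easy to state: the outgoing message on each edge is the product of the signs of all other incoming LLRs times the minimum of their magnitudes. The sketch below shows that update together with a uniform quantizer with clipping, the two implementation knobs the paper studies; the 4-bit width and the clip threshold of 8.0 are illustrative assumptions, not the paper's chosen values.

```python
# Min-sum check-node update plus a uniform quantizer with clipping,
# the two implementation knobs studied in the paper. The 4-bit width
# and the clip threshold of 8.0 are illustrative assumptions.
import numpy as np

def quantize(llr: np.ndarray, clip: float = 8.0, bits: int = 4) -> np.ndarray:
    step = 2.0 * clip / (2 ** bits - 1)           # uniform quantizer step size
    return np.clip(np.round(llr / step) * step, -clip, clip)

def check_node_min_sum(llrs_in: np.ndarray) -> np.ndarray:
    """Outgoing LLR on edge i: sign product and min magnitude over all j != i."""
    signs = np.sign(llrs_in)
    total_sign = np.prod(signs)
    mags = np.abs(llrs_in)
    order = np.argsort(mags)
    m1, m2 = mags[order[0]], mags[order[1]]       # two smallest magnitudes
    # each edge sees m1, except the minimum edge itself, which sees m2
    out_mags = np.where(np.arange(llrs_in.size) == order[0], m2, m1)
    return total_sign * signs * out_mags          # divide out each edge's own sign

llrs = quantize(np.array([2.3, -0.7, 5.1, 1.2]))
print(check_node_min_sum(llrs))
```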

Journal ArticleDOI
TL;DR: An algorithm is given that decodes the Leech lattice with not much more than twice the complexity of soft-decision decoding of the Golay code, and is readily generalized to lattices that can be expressed in terms of binary code formulas.
Abstract: An algorithm is given that decodes the Leech lattice with not much more than twice the complexity of soft-decision decoding of the Golay code. The algorithm has the same effective minimum distance as maximum-likelihood decoding and increases the effective error coefficient by less than a factor of two. The algorithm can be recognized as a member of the class of multistage algorithms that are applicable to hierarchical constructions. It is readily generalized to lattices that can be expressed in terms of binary code formulas, and in particular to construction B lattices.

68 citations


Network Information

Related Topics (5)

MIMO: 62.7K papers, 959.1K citations (90% related)
Fading: 55.4K papers, 1M citations (90% related)
Base station: 85.8K papers, 1M citations (89% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Wireless: 133.4K papers, 1.9M citations (86% related)
Performance Metrics

No. of papers in the topic in previous years:

Year | Papers
2023 | 51
2022 | 112
2021 | 24
2020 | 26
2019 | 22
2018 | 32