Topic

Sequential decoding

About: Sequential decoding is a research topic. Over its lifetime, 8667 publications have been published within this topic, receiving 204271 citations.


Papers
Journal ArticleDOI
TL;DR: The loss in quantizing coded symbols in the additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) or quadrature phase-shift keying (QPSK) modulation is discussed, and a quantization scheme and branch metric calculation method are presented.
Abstract: The loss in quantizing coded symbols in the additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) or quadrature phase-shift keying (QPSK) modulation is discussed. A quantization scheme and branch metric calculation method are presented. For the uniformly quantized AWGN channel, cutoff rate is used to determine the step size and the smallest number of quantization bits needed for a given bit-signal-to-noise ratio (Eb/N0) loss. A nine-level quantizer is presented, along with 3-bit branch metrics for a rate-1/2 code, which causes an Eb/N0 loss of only 0.14 dB. These results also apply to soft-decision decoding of block codes. A tight upper bound is derived for the range of path metrics in a Viterbi decoder. The calculations are verified by simulations of several convolutional codes, including the memory-14, rate-1/4 or rate-1/6 codes used by the big Viterbi decoders at JPL.
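To illustrate the idea, here is a minimal sketch of a symmetric nine-level uniform quantizer for BPSK soft decisions over AWGN; the step size, noise level, and signal scaling are illustrative placeholders, not the cutoff-rate-optimized values derived in the paper.

```python
import numpy as np

def quantize_bpsk(y, step=0.5, levels=9):
    """Uniformly quantize soft BPSK samples into `levels` symmetric bins.
    With levels=9 the output indices run from -4 to +4."""
    half = levels // 2
    idx = np.round(y / step).astype(int)   # nearest quantizer bin
    return np.clip(idx, -half, half)       # saturate the two outer bins

# Toy usage: unit-energy BPSK (+1/-1) in AWGN, then 9-level quantization.
# A rate-1/2 branch metric would then sum two such quantized symbol
# metrics per trellis branch.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=8)
y = (1 - 2 * bits) + rng.normal(0.0, 0.8, size=8)
print(quantize_bpsk(y))
```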

40 citations

Proceedings Article
16 Feb 2007
TL;DR: It is shown that the larger the constraint length used in a convolutional encoding process, the more powerful the code produced.
Abstract: Convolutional encoding with Viterbi decoding is a powerful method for forward error correction, and it has been widely deployed in many wireless communication systems to improve the limited capacity of the communication channels. The Viterbi algorithm is the most extensively employed decoding algorithm for convolutional codes. In this paper, we present a field-programmable gate array implementation of a Viterbi decoder with a constraint length of 11 and a code rate of 1/3. The implementation shows that the larger the constraint length used in a convolutional encoding process, the more powerful the code produced.
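For readers unfamiliar with the algorithm, the sketch below implements hard-decision Viterbi decoding for a toy rate-1/2, constraint-length-3 code (generators 7 and 5 octal). It is illustrative only: the paper's FPGA design uses constraint length 11 and rate 1/3, but the add-compare-select trellis recursion is the same in principle.

```python
G = (0b111, 0b101)           # generator polynomials (7, 5 octal)
K = 3                        # constraint length
NSTATES = 1 << (K - 1)       # 4 encoder states

def parity(v):
    return bin(v).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state      # newest bit in the high position
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi(rx):
    INF = float("inf")
    pm = [0.0] + [INF] * (NSTATES - 1)    # path metrics; encoder starts in state 0
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(rx), 2):
        new_pm = [INF] * NSTATES
        new_paths = [[] for _ in range(NSTATES)]
        for s in range(NSTATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1
                # Hamming-distance branch metric against the received pair
                m = pm[s] + (parity(reg & G[0]) != rx[i]) \
                          + (parity(reg & G[1]) != rx[i + 1])
                if m < new_pm[ns]:        # add-compare-select
                    new_pm[ns], new_paths[ns] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(NSTATES), key=lambda s: pm[s])
    return paths[best]

msg = [1, 0, 1, 1, 0]
assert viterbi(encode(msg)) == msg        # error-free channel round trip
```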

39 citations

Proceedings ArticleDOI
01 Jan 2005
TL;DR: Simulations of very short codes over the BEC reveal the superiority of the proposed decoding algorithm over present improved decoding algorithms for a wide range of bit error rates.

Abstract: This paper presents a new improved decoding algorithm for low-density parity-check (LDPC) codes over the binary erasure channel (BEC). The proposed algorithm exploits the fact that a considerable fraction of unsatisfied check nodes have degree two, combining this with the concept of guessing bits to perform simple graph-theoretic manipulations on the Tanner graph. The proposed decoding algorithm has a complexity similar to present improved decoding algorithms [H. Pishro-Nik et al., 2004]. Simulations of very short codes over the BEC reveal the superiority of our algorithm over present improved decoding algorithms for a wide range of bit error rates.
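For context, the sketch below implements the standard peeling (iterative erasure) decoder that such improved BEC algorithms build on; it is not the paper's algorithm, which additionally guesses bits when peeling stalls and exploits checks with two erased neighbours. The small parity-check matrix is illustrative, far too small to be a real LDPC code.

```python
import numpy as np

def peel_decode(H, y):
    """Standard peeling decoder for erasure channels: whenever a check
    touches exactly one erased bit, parity determines that bit. The
    decoder stalls when every check sees zero or >= 2 erasures."""
    x = y.copy()
    progress = True
    while progress and (x == -1).any():
        progress = False
        for row in H:
            erased = np.flatnonzero((row == 1) & (x == -1))
            if len(erased) == 1:                  # solvable check
                known = (row == 1) & (x != -1)
                x[erased[0]] = int(x[known].sum()) % 2
                progress = True
    return x                                      # remaining -1s are stalled

# Toy usage: y is a valid codeword of H with two bits erased (marked -1).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
y = np.array([0, -1, 1, 1, -1, 0])
print(peel_decode(H, y))                          # -> [0 1 1 1 0 0]
```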

39 citations

Journal ArticleDOI
TL;DR: The fundamental limits of channels with mismatched decoding are addressed, and an identity is deduced between the Verdu–Han general channel capacity formula and the mismatch capacity formula applied to the maximum-likelihood decoding metric.

Abstract: The fundamental limits of channels with mismatched decoding are addressed. A general formula is established for the mismatch capacity of a general channel, defined as a sequence of conditional distributions, with a general sequence of decoding metrics. We deduce an identity between the Verdu–Han general channel capacity formula and the mismatch capacity formula applied to the maximum-likelihood decoding metric. Furthermore, several upper bounds on the capacity are provided, and a simpler expression for a lower bound is derived for the case of a non-negative decoding metric. The general formula is specialized to the case of finite input and output alphabet channels with a type-dependent metric. The closely related problem of threshold mismatched decoding is also studied, and a general expression for the threshold mismatch capacity is obtained. As an example of threshold mismatch capacity, we state a general expression for the erasures-only capacity of the finite input and output alphabet channel. We observe that for every channel there exists a (matched) threshold decoder which is capacity achieving. In addition, necessary and sufficient conditions are stated for a channel to have a strong converse.
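As a pointer to the setup, the mismatched decoding rule for a product-form metric q (the standard memoryless special case; the paper allows general metric sequences) can be written as:

```latex
\[
  \hat{m} \;=\; \arg\max_{m}\; q^{n}\!\bigl(\mathbf{y}\mid\mathbf{x}(m)\bigr),
  \qquad
  q^{n}(\mathbf{y}\mid\mathbf{x}) \;=\; \prod_{i=1}^{n} q(y_i \mid x_i).
\]
```

Setting q equal to the true channel law W turns this rule into maximum-likelihood decoding, which is the special case behind the stated identity with the Verdu–Han capacity formula.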

39 citations

Proceedings ArticleDOI
17 Nov 2002
TL;DR: It is shown how low-density parity-check (LDPC) codes can be used as an application of the Slepian-Wolf (1973) theorem for correlated binary sources, and the simulated performance results are better than most of the existing turbo code results available in the literature.
Abstract: It is shown how low-density parity-check (LDPC) codes can be used as an application of the Slepian-Wolf (1973) theorem for correlated binary sources. We focus on the asymmetric case of compression with side information. The approach is based on viewing the correlation as a channel and applying the syndrome concept. The encoding and decoding procedures, i.e. the compression and decompression, are explained in detail. The simulated performance results are better than most of the existing turbo code results available in the literature and very close to the Slepian-Wolf limit.
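A minimal sketch of the syndrome approach follows, assuming a brute-force decoder and the (7,4) Hamming parity-check matrix in place of an LDPC code; both choices are for brevity only, since the paper uses LDPC codes with message-passing decoding.

```python
import numpy as np
from itertools import product

# Syndrome-based Slepian-Wolf sketch: compress x to its syndrome
# s = Hx (mod 2) and let the decoder pick the word with that syndrome
# closest to the side information y, treating the correlation as a
# virtual channel. H is the (7,4) Hamming parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(x):
    return H @ x % 2                       # 7 source bits -> 3 syndrome bits

def decompress(s, y):
    best, best_d = None, len(y) + 1
    for cand in product((0, 1), repeat=H.shape[1]):
        cand = np.array(cand)
        if ((H @ cand % 2) == s).all():    # candidate must match the syndrome
            d = int((cand != y).sum())     # Hamming distance to side info
            if d < best_d:
                best, best_d = cand, d
    return best

x = np.array([1, 0, 1, 1, 0, 0, 1])        # source word
y = np.array([1, 0, 1, 0, 0, 0, 1])        # correlated side info (one flip)
print(decompress(compress(x), y))          # recovers x when the correlation
                                           # "channel" flips at most one bit
```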

39 citations


Network Information
Related Topics (5)
- MIMO: 62.7K papers, 959.1K citations (90% related)
- Fading: 55.4K papers, 1M citations (90% related)
- Base station: 85.8K papers, 1M citations (89% related)
- Wireless network: 122.5K papers, 2.1M citations (87% related)
- Wireless: 133.4K papers, 1.9M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    51
2022    112
2021    24
2020    26
2019    22
2018    32