scispace - formally typeset
Topic

Sequential decoding

About: Sequential decoding is a research topic. Over its lifetime, 8,667 publications have been published on this topic, receiving 204,271 citations.


Papers
Proceedings ArticleDOI
01 Sep 2012
TL;DR: Besides excellent performance near the capacity limit, the LDA lattice construction is conceptually simpler than previously proposed lattices based on multiple nested binary codes, and LDA decoding is less complex than real-valued message passing.
Abstract: We describe a new family of integer lattices built from construction A and non-binary LDPC codes. An iterative message-passing algorithm suitable for decoding in high dimensions is proposed. This family of lattices, referred to as LDA lattices, follows the recent transition of Euclidean codes from their classical theory to their modern approach as announced by the pioneering work of Loeliger (1997), Erez, Litsyn, and Zamir (2004–2005). Besides their excellent performance near the capacity limit, LDA lattice construction is conceptually simpler than previously proposed lattices based on multiple nested binary codes and LDA decoding is less complex than real-valued message passing.
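The abstract's starting point, Construction A, lifts a linear code over Z_q to an integer lattice. A minimal sketch of the idea is below; the parity-check matrix and modulus are hypothetical toy values, not the paper's non-binary LDPC construction.

```python
import numpy as np

# Construction A (sketch): given a linear code C over Z_q, the lattice is
#   Lambda_A = { x in Z^n : (x mod q) is a codeword of C }.
# Toy example: a length-3 code over Z_5 defined by a single parity check
# h . c = 0 (mod 5). Both H and q are hypothetical illustration values.

q = 5
H = np.array([[1, 2, 3]])  # hypothetical parity-check matrix over Z_5

def in_construction_a_lattice(x, H, q):
    """Check whether the integer vector x lies in the Construction A lattice."""
    x = np.asarray(x)
    # x is in Lambda_A iff its residue mod q satisfies every parity check of C
    return bool(np.all(H @ (x % q) % q == 0))

# q * Z^n is always a sublattice: the all-zero residue is a codeword
print(in_construction_a_lattice([5, -10, 15], H, q))  # True
# residue (1, 2, 0): 1*1 + 2*2 + 3*0 = 5 = 0 (mod 5), a codeword
print(in_construction_a_lattice([1, 2, 0], H, q))     # True
# residue (1, 0, 0): check value 1 != 0 (mod 5), not a codeword
print(in_construction_a_lattice([1, 0, 0], H, q))     # False
```

In the paper the code C is a non-binary LDPC code, so the same membership structure is what the iterative message-passing decoder exploits in high dimensions.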

70 citations

Journal ArticleDOI
TL;DR: Using analog, non-linear and highly parallel networks, this work attempts to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals.
Abstract: Using analog, non-linear and highly parallel networks, we attempt to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals. This is in contrast to common practice where these tasks are performed by sequentially operating processors. Our advantage is that we operate fully on soft values for input and output, similar to what is done in 'turbo' decoding. However, we do not have explicit iterations because the networks float freely in continuous time. The decoder has almost no latency in time because we are only restricted by the time constants from the parasitic RC values of integrated circuits. Simulation results for several simple examples are shown which, in some cases, achieve the performance of a conventional MAP detector. For more complicated codes we indicate promising solutions with more complex analog networks based on the simple ones. Furthermore, we discuss the principles of the analog VLSI implementation of these networks.

70 citations

Journal ArticleDOI
TL;DR: A new explicit error-correcting code based on Trevisan's extractor is proposed that can handle highly noisy channels; it yields a new proof of the Johnson bound for large alphabets and soft decoding, and underpins high-noise, almost-optimal-rate list-decodable codes over large alphabets.
Abstract: We study error-correcting codes for highly noisy channels. For example, every received signal in the channel may originate from some half of the symbols in the alphabet. Our main conceptual contribution is an equivalence between error-correcting codes for such channels and extractors. Our main technical contribution is a new explicit error-correcting code based on Trevisan's extractor that can handle such channels, and even noisier ones. Our new code has polynomial-time encoding and polynomial-time soft-decision decoding. We note that Reed-Solomon codes cannot handle such channels, and our study exposes some limitations on list decoding of Reed-Solomon codes. Another advantage of our equivalence is that when the Johnson bound is restated in terms of extractors, it becomes the well-known Leftover Hash Lemma. This yields a new proof of the Johnson bound which applies to large alphabets and soft decoding. Our explicit codes are useful in several applications. First, they yield algorithms to extract many hardcore bits using few auxiliary random bits. Second, they are the key tool in a recent scheme to compactly store a set of elements in a way that membership in the set can be determined by looking at only one bit of the representation. Finally, they are the basis for the recent construction of high-noise, almost-optimal rate list-decodable codes over large alphabets.
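For reference, the Johnson bound the abstract restates in extractor language is, in its standard q-ary form (a textbook statement, not the paper's extractor-based reformulation): a code of relative distance \(\delta\) over an alphabet of size \(q\) is list-decodable with polynomial list size up to the radius

```latex
\[
J_q(\delta) \;=\; \Bigl(1 - \tfrac{1}{q}\Bigr)\Bigl(1 - \sqrt{1 - \tfrac{q\,\delta}{q-1}}\Bigr)
\;\xrightarrow[q \to \infty]{}\; 1 - \sqrt{1 - \delta}.
\]
```

The large-alphabet limit \(1 - \sqrt{1-\delta}\) is the regime the abstract's extractor equivalence addresses.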

70 citations

Journal ArticleDOI
TL;DR: A new partial-sum updating algorithm and the corresponding partial-sum network (PSN) architecture are introduced that achieve a delay independent of the code length while reducing area complexity, enabling a high-performance, area-efficient semi-parallel successive cancellation decoder (SCD) implementation.
Abstract: Polar codes have recently received a lot of attention because of their capacity-achieving performance and low encoding and decoding complexity. The performance of the successive cancellation decoder (SCD) of polar codes depends strongly on that of the partial-sum network (PSN) implementation. Hence, in this work, an efficient PSN architecture is proposed, based on the properties of polar codes. First, a new partial-sum updating algorithm and the corresponding PSN architecture are introduced which achieve a delay performance independent of the code length; the area complexity is also reduced. Second, for a high-performance and area-efficient semi-parallel SCD implementation, a folded PSN architecture is presented that integrates seamlessly with the folded processing element architecture. This is achieved by using a novel folded decoding schedule. As a result, both the critical path delay and the area (excluding the memory for folding) of the semi-parallel SCD are approximately constant over a large range of code lengths. The proposed designs are implemented in both FPGA and ASIC and compared with existing designs. Experimental results show that for polar codes with large code length, the decoding throughput is improved by more than 1.05 times and the area is reduced by as much as 50.4%, compared with the state-of-the-art designs.
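To make the role of partial sums concrete, here is a minimal software model of successive cancellation decoding (a textbook sketch, not the paper's hardware architecture). The XOR combination step at the end of the recursion computes exactly the partial sums that a hardware PSN must keep up to date; the code length, frozen-bit pattern, and LLR values in the example are hypothetical.

```python
def f(a, b):
    """Min-sum approximation of the check-node (upper-branch) LLR update."""
    s = 1 if a * b >= 0 else -1
    return s * min(abs(a), abs(b))

def g(a, b, u):
    """Variable-node (lower-branch) LLR update, conditioned on partial sum u."""
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen, u_hat):
    """Return the partial sums (re-encoded bits) of this subtree; decoded
    bits are appended to u_hat in natural order."""
    n = len(llr)
    if n == 1:
        u = 0 if (frozen[0] or llr[0] >= 0) else 1
        u_hat.append(u)
        return [u]
    h = n // 2
    left = sc_decode([f(llr[i], llr[i + h]) for i in range(h)],
                     frozen[:h], u_hat)
    right = sc_decode([g(llr[i], llr[i + h], left[i]) for i in range(h)],
                      frozen[h:], u_hat)
    # Partial-sum combination: this XOR network is what a PSN implements.
    return [left[i] ^ right[i] for i in range(h)] + right

# Noiseless toy example: codeword (0, 1, 0, 1) corresponds to u = (0, 0, 1, 1)
# with the first two bits frozen; LLR +4 encodes bit 0, -4 encodes bit 1.
u_hat = []
sc_decode([4, -4, 4, -4], [True, True, False, False], u_hat)
print(u_hat)  # [0, 0, 1, 1]
```

Because each decoded bit feeds the partial sums consumed by all later g-updates, the PSN sits on the decoder's critical path, which is why the paper's length-independent update delay matters.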

70 citations

Journal ArticleDOI
TL;DR: A new class of decoders is obtained by applying the alternating direction method of multipliers (ADMM) to a set of non-convex optimization problems, constructed by adding a penalty term to the LP decoding objective that makes pseudocodewords (the non-integer vertices of the LP relaxation) more costly.
Abstract: Linear programming (LP) decoding for low-density parity-check codes was introduced by Feldman et al. and has been shown to have theoretical guarantees in several regimes. Furthermore, it has been reported in the literature, via simulation and via instanton analysis, that LP decoding displays better error-rate performance at high signal-to-noise ratios (SNR) than does belief propagation (BP) decoding. However, at low SNRs, LP decoding is observed to have worse performance than BP. In this paper, we seek to improve LP decoding at low SNRs while maintaining its high-SNR performance. Our main contribution is a new class of decoders obtained by applying the alternating direction method of multipliers (ADMM) algorithm to a set of non-convex optimization problems. These non-convex problems are constructed by adding a penalty term to the objective of LP decoding. The goal of the penalty is to make pseudocodewords, which are non-integer vertices of the LP relaxation, more costly. We name this class of decoders ADMM penalized decoders. For low and moderate SNRs, we simulate ADMM penalized decoding with $\ell _{1}$ and $\ell _{2}$ penalties. We find that these decoders can outperform both BP and LP decoding. For high SNRs, where it is difficult to obtain data via simulation, we use an instanton analysis and find that, asymptotically, ADMM penalized decoding performs better than BP but not as well as LP. Unfortunately, since ADMM penalized decoding is not a convex program, we have not been able to develop theoretical guarantees. However, the non-convex program can be approximated using a sequence of linear programs, an approach that yields a reweighted LP decoder. We show that a two-round reweighted LP decoder has an improved theoretical recovery threshold compared with LP decoding. In addition, we find via simulation that reweighted LP decoding attains significantly lower error rates than LP decoding at low SNRs.
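One consistent way to write the penalized objective described above (the exact penalty functions are an assumption here, following the common form of the $\ell_1$ and $\ell_2$ penalties rather than a quotation from the paper) is

```latex
\[
\min_{x \in \mathcal{P}} \;\; \gamma^{\mathsf T} x \;+\; \alpha \sum_{i=1}^{n} g(x_i),
\qquad
g(x_i) = -\Bigl|x_i - \tfrac{1}{2}\Bigr| \;\;(\ell_1)
\quad\text{or}\quad
g(x_i) = -\Bigl(x_i - \tfrac{1}{2}\Bigr)^{2} \;\;(\ell_2),
\]
```

where $\mathcal{P}$ is the LP relaxation's fundamental polytope, $\gamma$ the channel log-likelihood vector, and $\alpha > 0$ the penalty weight. The penalty is minimized at the integral points $x_i \in \{0, 1\}$, so fractional pseudocodeword coordinates become more costly, which is the effect the abstract describes.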

70 citations


Network Information
Related Topics (5)
- MIMO: 62.7K papers, 959.1K citations (90% related)
- Fading: 55.4K papers, 1M citations (90% related)
- Base station: 85.8K papers, 1M citations (89% related)
- Wireless network: 122.5K papers, 2.1M citations (87% related)
- Wireless: 133.4K papers, 1.9M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   51
2022   112
2021   24
2020   26
2019   22
2018   32