Topic

Sequential decoding

About: Sequential decoding is a research topic. Over its lifetime, 8,667 publications have been published on this topic, receiving 204,271 citations.


Papers
01 Jan 2004
TL;DR: A suboptimal algorithm named the λ-min algorithm is proposed, which significantly reduces the complexity of the decoder without any significant performance loss compared with the belief propagation (BP) algorithm.
Abstract: Low-Density Parity-Check (LDPC) codes are among the most powerful forward error correcting codes, since they make it possible to operate within a fraction of a dB of the Shannon limit. This astonishing performance, combined with their relatively simple decoding algorithm, makes these codes very attractive for the next generations of digital transmission systems. It is already the case for the next digital satellite broadcasting standard (DVB-S2), where an irregular LDPC code has been chosen to protect the downlink information. In this thesis, we focused our research on iterative decoding algorithms and their hardware implementations. We first proposed a suboptimal algorithm named the λ-min algorithm, which significantly reduces the complexity of the decoder without any significant performance loss compared with the belief propagation (BP) algorithm. We then studied and designed a generic LDPC decoder architecture, which has been implemented on an FPGA-based platform. This hardware decoder accelerates simulations by a factor of more than 500 compared with software simulations. Moreover, thanks to an all-tunable design, our decoder offers many facilities: it can be configured for a very wide family of codes, so that the search for good codes proceeds faster; thanks to the genericity of the processing components, it is also possible to optimize the internal coding format and even to compare various decoding algorithms and processing schedules. Finally, our experience in the area of LDPC decoders led us to propose a formal framework for analysing LDPC decoder architectures. This framework encompasses both the datapath (parallelism, node-processor architectures) and the control mode associated with the various decoding schedules. Within this framework, a classification of the different state-of-the-art LDPC decoders is proposed, and syntheses of efficient, previously unpublished architectures are also proposed.

46 citations
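
For concreteness, here is a minimal Python sketch of a λ-min check-node update, following the usual description of the algorithm (output magnitudes computed from the λ least reliable incoming messages, signs from all other inputs) rather than the thesis's exact formulation or its fixed-point details; the function names and the degree-5 example are illustrative only.

import math

def box_plus_mag(mags):
    """Exact box-plus of magnitudes: 2 * atanh(prod tanh(m / 2))."""
    p = 1.0
    for m in mags:
        p *= math.tanh(m / 2.0)
    p = min(p, 1.0 - 1e-12)          # clamp to keep atanh finite
    return 2.0 * math.atanh(p)

def lambda_min_check_update(llrs, lam=3):
    """One check-node update using only the `lam` least reliable inputs."""
    n = len(llrs)
    # Indices of the lam smallest-magnitude incoming messages
    weakest = sorted(range(n), key=lambda j: abs(llrs[j]))[:lam]
    out = []
    for i in range(n):
        # Sign of the outgoing message: product of the other input signs
        sign = 1.0
        for j in range(n):
            if j != i:
                sign *= 1.0 if llrs[j] >= 0 else -1.0
        # Magnitude: box-plus over the selected weak inputs, excluding edge i
        mags = [abs(llrs[j]) for j in weakest if j != i]
        out.append(sign * box_plus_mag(mags))
    return out

# Example: a degree-5 check node, keeping the 3 least reliable inputs
print(lambda_min_check_update([0.8, -1.5, 2.3, -0.4, 3.1], lam=3))
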

Journal ArticleDOI
TL;DR: A new systolic priority queue is described that allows each decoding step, including retrieval, reordering and storage of the nodes, to take place in a single clock period; a decoder built around it appears to be faster, affordable, and compatible with convolutional codes having long memory and high coding rates.
Abstract: The troublesome operation of reordering the stack in stack sequential decoders is alleviated by storing the nodes in a systolic priority queue that delivers the true top node in a short and constant amount of time. A new systolic priority queue is described that allows each decoding step, including retrieval, reordering and storage of the nodes, to take place in a single clock period. A complete decoder architecture designed around this queue is compared to a conventional stack-bucket architecture from both the speed and cost points of view. The proposed decoder architecture appears to be faster, affordable, and compatible with convolutional codes having long memory and high coding rates.

46 citations
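
The role of the priority queue in stack decoding can be illustrated with a small software analogue. The sketch below uses Python's heapq to model the pop/reorder/push step that the paper's systolic hardware queue performs in a single clock period; the node interface (metric, is_leaf) and expand_fn are assumed placeholders, not taken from the paper.

import heapq

def stack_decode(root, expand_fn, max_steps=10_000):
    """Software analogue of stack sequential decoding.

    The stack holds explored tree nodes ordered by path metric; each step
    pops the best node and pushes its one-branch extensions.  The paper's
    contribution is a hardware systolic priority queue that performs this
    pop/reorder/push in a single clock period; the heap here only models
    the behaviour.  `root`, `expand_fn` and the node fields (`metric`,
    `is_leaf`) are assumed interfaces.
    """
    counter = 0                              # tie-breaker for equal metrics
    heap = [(-root.metric, counter, root)]   # best (largest) metric pops first
    for _ in range(max_steps):
        if not heap:
            break
        _, _, node = heapq.heappop(heap)     # retrieve the true top node
        if node.is_leaf():                   # reached the end of the code tree
            return node
        for child in expand_fn(node):        # extend the top path by one branch
            counter += 1
            heapq.heappush(heap, (-child.metric, counter, child))
    return None                              # no decision within the step budget
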

Journal ArticleDOI
TL;DR: This letter uses the evolution of messages, i.e., log-likelihood ratios, of unfrozen bits during iterative BP decoding of polar codes to identify weak bit-channels; the modified codes show improved performance not only under BP decoding but also under SCL decoding.
Abstract: Polar code constructions based on mutual information or Bhattacharyya parameters of bit-channels are intended for hard-output successive cancellation (SC) decoders, and thus might not be well designed for use with other decoders, such as soft-output belief propagation (BP) decoders or successive cancellation list (SCL) decoders. In this letter, we use the evolution of messages, i.e., log-likelihood ratios, of unfrozen bits during iterative BP decoding of polar codes to identify weak bit-channels, and then modify the conventional polar code construction by swapping these bit-channels with strong frozen bit-channels. The modified codes show improved performance not only under BP decoding, but also under SCL decoding. The code modification is shown to reduce the number of low-weight codewords, with and without CRC concatenation.

46 citations
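
A minimal sketch of the swap step described above, assuming a per-bit-channel reliability score has already been collected (e.g. an average LLR magnitude of unfrozen bits observed during BP simulation); the metric, the set handling, and the example values are illustrative assumptions, not the letter's exact procedure.

def swap_weak_bit_channels(info_set, frozen_set, strength, num_swaps):
    """Swap the weakest unfrozen bit-channels with the strongest frozen ones."""
    # Weakest bit-channels currently carrying information bits
    weak_info = sorted(info_set, key=lambda i: strength[i])[:num_swaps]
    # Strongest bit-channels currently frozen
    strong_frozen = sorted(frozen_set, key=lambda i: strength[i], reverse=True)[:num_swaps]

    new_info = (set(info_set) - set(weak_info)) | set(strong_frozen)
    new_frozen = (set(frozen_set) - set(strong_frozen)) | set(weak_info)
    return sorted(new_info), sorted(new_frozen)

# Example with an 8-bit toy code: swap the 2 weakest information positions
info, frozen = [3, 5, 6, 7], [0, 1, 2, 4]
strength = [0.1, 0.3, 0.5, 0.4, 0.9, 0.6, 1.2, 1.5]
print(swap_weak_bit_channels(info, frozen, strength, num_swaps=2))
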

Journal ArticleDOI
01 Mar 2001
TL;DR: The results show that iterative detection/decoding schemes using LDPC codes can outperform hard-decision decoding of Reed-Solomon codes by over 2 dB at a sector error rate of 10^-3.
Abstract: The performance of low-density parity-check (LDPC) codes serially concatenated with generalized partial response channels is investigated. Various soft-input/soft-output detection schemes suitable for use in iterative detection/decoding systems are described. A low-complexity near-optimal detection algorithm that incorporates soft-input reliability information and generates soft-output reliability information is presented. A reduced-complexity algorithm for decoding LDPC codes is described. Simulation results on the performance of high-rate LDPC codes on generalized PR channels at various recording densities are presented. These results indicate that a judicious selection of the inner detector target polynomial and the choice of a good LDPC code are important in optimizing the performance of the overall recording system. Furthermore, the results also show that iterative detection/decoding schemes using LDPC codes can outperform hard-decision decoding of Reed-Solomon codes by over 2 dB at a sector error rate of 10^-3.

46 citations
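
The overall iterative detection/decoding loop can be sketched at a high level as follows. Both callables are placeholders: siso_detector stands for a soft-input/soft-output detector matched to the partial-response target (e.g. a BCJR run over the ISI trellis) and ldpc_decoder for an LDPC decoder returning extrinsic LLRs and hard decisions; the scheduling shown is a generic turbo-equalization pattern, not the paper's specific algorithms.

def iterative_detect_decode(siso_detector, ldpc_decoder, n_iterations=5):
    """High-level sketch of iterative detection/decoding on a PR channel."""
    prior = None                      # no a-priori information on the first pass
    hard = None
    for _ in range(n_iterations):
        detector_llrs = siso_detector(prior)           # soft-output detection
        extrinsic, hard = ldpc_decoder(detector_llrs)  # soft-input LDPC decoding
        prior = extrinsic                              # feed extrinsic info back
    return hard
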

Proceedings ArticleDOI
01 Jan 2004
TL;DR: This paper presents the application to the H.264 standard of a soft-input VLC decoding algorithm based on MAP sequence estimation techniques and using residual source redundancy information to provide channel error protection and correction.
Abstract: This paper presents the application to the H.264 standard of a soft-input VLC decoding algorithm based on MAP sequence estimation techniques and using residual source redundancy information to provide channel error protection and correction. This algorithm relies on the soft values and contextual information available at the input of the source decoder, and is fully compatible with the existing H.264 standard. Numerical results obtained by applying this soft-input decoding algorithm to an H.264-encoded video sequence under an unequal error protection scheme are presented. Results for the 'Foreman' ITU reference sequence show that the proposed algorithm provides gains of up to 12 to 15 dB in terms of PSNR compared with classical hard-input decoding methods.

46 citations
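
To make the MAP sequence-estimation idea concrete, here is a toy Python sketch that scores candidate symbol sequences of a variable-length code by combining channel LLRs with a source prior (the residual redundancy). A practical decoder would organise this as a trellis/Viterbi search; the brute-force enumeration, the tiny codebook, and the prior below are illustrative assumptions, not the paper's H.264-specific procedure.

import math

def map_vlc_decode(bit_llrs, codebook, prior):
    """Toy MAP sequence estimation for a variable-length code.

    Scores every symbol sequence whose concatenated codewords use exactly
    the received number of bits.  `codebook` maps symbols to bit strings
    and `prior` maps symbols to probabilities; both are illustrative.
    """
    n = len(bit_llrs)

    def bit_loglik(pos, bit):
        # log P(bit | channel), with LLR defined as log P(0)/P(1)
        llr = bit_llrs[pos]
        return -math.log1p(math.exp(-llr)) if bit == 0 else -math.log1p(math.exp(llr))

    best = {"score": -math.inf, "seq": None}

    def search(pos, seq, score):
        if pos == n:                         # used exactly all received bits
            if score > best["score"]:
                best["score"], best["seq"] = score, list(seq)
            return
        for sym, bits in codebook.items():
            if pos + len(bits) > n:
                continue
            s = score + math.log(prior[sym])      # source prior term
            for k, b in enumerate(bits):
                s += bit_loglik(pos + k, int(b))  # channel soft-value term
            search(pos + len(bits), seq + [sym], s)

    search(0, [], 0.0)
    return best["seq"]

# Example: 3 received soft bits, a tiny prefix code and a skewed source prior
codebook = {"a": "0", "b": "10", "c": "11"}
prior = {"a": 0.5, "b": 0.3, "c": 0.2}
print(map_vlc_decode([2.1, -0.4, 1.2], codebook, prior))
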


Network Information
Related Topics (5)
MIMO: 62.7K papers, 959.1K citations (90% related)
Fading: 55.4K papers, 1M citations (90% related)
Base station: 85.8K papers, 1M citations (89% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Wireless: 133.4K papers, 1.9M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 51
2022: 112
2021: 24
2020: 26
2019: 22
2018: 32