
Showing papers on "Sequential decoding published in 2019"


Posted Content
TL;DR: An account of the original ideas that motivated the development of polar coding is given and some new ideas for exploiting channel polarization more effectively in order to improve the performance of polar codes are discussed.
Abstract: This note is a written and extended version of the Shannon Lecture I gave at the 2019 International Symposium on Information Theory. It gives an account of the original ideas that motivated the development of polar coding and discusses some new ideas for exploiting channel polarization more effectively in order to improve the performance of polar codes.
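
The basic transform behind channel polarization can be sketched in a few lines. The following is an illustrative implementation (not code from the lecture) of x = uF^⊗n over GF(2) with the kernel F = [[1,0],[1,1]], in plain Kronecker-power ordering without bit-reversal:

```python
def polar_encode(u):
    """Apply the kernel F = [[1,0],[1,1]] recursively; len(u) must be a power of two."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # one polarization step: (u1 xor u2, u2), then recurse on each half
    left = [a ^ b for a, b in zip(u[:half], u[half:])]
    return polar_encode(left) + polar_encode(u[half:])
```

Since F² = I over GF(2), the transform is an involution: applying `polar_encode` twice returns the original block, a handy sanity check.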

102 citations


Proceedings ArticleDOI
01 Nov 2019
TL;DR: This paper proposes a parallel iterative edit (PIE) model for the problem of local sequence transduction arising in tasks like grammatical error correction (GEC) and OCR correction.
Abstract: We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like Grammatical error correction (GEC). Recent approaches are based on the popular encoder-decoder (ED) model for sequence to sequence learning. The ED model auto-regressively captures full dependency among output tokens but is slow due to sequential decoding. The PIE model does parallel decoding, giving up the advantage of modeling full dependency in the output, yet it achieves accuracy competitive with the ED model for four reasons: 1. predicting edits instead of tokens, 2. labeling sequences instead of generating sequences, 3. iteratively refining predictions to capture dependencies, and 4. factorizing logits over edits and their token argument to harness pre-trained language models like BERT. Experiments on tasks spanning GEC, OCR correction and spell correction demonstrate that the PIE model is an accurate and significantly faster alternative for local sequence transduction.
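
The first of the four ingredients, predicting edits instead of tokens, can be illustrated with a small sketch. This uses difflib rather than the paper's edit space; the label names (KEEP, DEL, SUB, INS) are hypothetical, and the replace branch assumes a one-to-one token alignment for simplicity:

```python
import difflib

def edit_labels(src, tgt):
    """Cast transduction as labeling: map each source token to an edit operation."""
    labels = []
    sm = difflib.SequenceMatcher(a=src, b=tgt)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            labels += [("KEEP",)] * (i2 - i1)
        elif op == "delete":
            labels += [("DEL",)] * (i2 - i1)
        elif op == "replace":
            labels += [("SUB", w) for w in tgt[j1:j2]]  # simplification: 1-1 alignment
        elif op == "insert":
            labels += [("INS", w) for w in tgt[j1:j2]]
    return labels
```

Because most tokens in local transduction are copied, most labels are KEEP, which is why labeling in parallel can compete with full sequential generation.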

71 citations


Posted Content
TL;DR: Experiments on tasks spanning GEC, OCR correction and spell correction demonstrate that the PIE model is an accurate and significantly faster alternative for local sequence transduction.
Abstract: We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like Grammatical error correction (GEC). Recent approaches are based on the popular encoder-decoder (ED) model for sequence to sequence learning. The ED model auto-regressively captures full dependency among output tokens but is slow due to sequential decoding. The PIE model does parallel decoding, giving up the advantage of modelling full dependency in the output, yet it achieves accuracy competitive with the ED model for four reasons: 1. predicting edits instead of tokens, 2. labeling sequences instead of generating sequences, 3. iteratively refining predictions to capture dependencies, and 4. factorizing logits over edits and their token argument to harness pre-trained language models like BERT. Experiments on tasks spanning GEC, OCR correction and spell correction demonstrate that the PIE model is an accurate and significantly faster alternative for local sequence transduction.

55 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that, compared with the conventional BP decoder, the proposed bit-flip decoder can achieve a significant signal-to-noise ratio (SNR) gain in BLER, comparable to that of a cyclic redundancy check-aided SC list decoder with a moderate list size.
Abstract: The bit-flip method has been successfully applied to the successive cancellation (SC) decoder to improve the block error rate (BLER) performance of polar codes in the finite-code-length regime. However, due to its sequential decoding, the SC decoder inherently suffers from longer decoding latency than the belief propagation (BP) decoder with an efficient early stopping criterion. It is natural to ask how to perform bit-flipping in a polar BP decoder. In this paper, bit-flipping is introduced into the BP decoder for polar codes. The idea of the critical set (CS), originally proposed by Zhang et al. for identifying unreliable bits in an SC bit-flip decoder, is extended to the BP decoder here. After revealing the relationship between the CS and incorrect BP decoding results, a critical set of order ω (CS-ω) is constructed to identify unreliable bit decisions in polar BP decoding. Simulation results demonstrate that, compared with the conventional BP decoder, the proposed bit-flip decoder can achieve a significant signal-to-noise ratio (SNR) gain in BLER, comparable to that of a cyclic redundancy check-aided SC list decoder with a moderate list size. In addition, the decoding latency of the proposed BP bit-flip decoder is only slightly higher than that of the conventional BP decoder in the medium- and high-SNR regions.
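
The bit-flip principle shared by the SC and BP variants can be sketched generically: rerun the decoder with one unreliable decision from the critical set forced to its opposite value until a check passes. The `decode` and `crc_ok` callables below are stand-ins, not the construction from the paper:

```python
def bitflip_decode(decode, crc_ok, critical_set):
    """Generic bit-flip skeleton.

    decode(flip=None) -> (bits, llrs); `flip` forces the decision at one index.
    crc_ok(bits) -> bool; stand-in for the CRC (or other) validity check.
    """
    bits, llrs = decode()
    if crc_ok(bits):
        return bits
    # retry the critical-set positions from least to most reliable decision
    for i in sorted(critical_set, key=lambda i: abs(llrs[i])):
        bits, _ = decode(flip=i)
        if crc_ok(bits):
            return bits
    return bits  # all single-flip attempts failed; return the last try
```
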

38 citations


Journal ArticleDOI
TL;DR: In this article, an alternative improvement of SC decoding, referred to as SC-Fano decoding, is presented by incorporating Fano sequential decoding into SC decoding; it addresses the major drawback of SC decoding by enabling a backward move when the reliability of the ongoing path is not good enough.
Abstract: For finite-length polar codes, the standard successive cancellation (SC) decoding has been improved by variants such as SC-List (SCL), SC-Stack (SCS), and SC-Flip (SCF) decoding. In this paper, we present an alternative improvement of SC decoding that incorporates Fano sequential decoding into SC decoding, referred to as SC-Fano decoding. Specifically, it addresses the major drawback of SC decoding by enabling a backward move when the reliability of the ongoing path is not good enough. The SCS and SC-Fano decodings can be viewed as sequential decoding for polar codes. In addition, for cyclic-redundancy-check (CRC) concatenated polar codes, we enhance SC-Fano decoding by leveraging the bit-flipping idea of SCF decoding. Simulation results demonstrate that the proposed SC-Fano decoding provides a better performance-complexity tradeoff than existing decoding methods.
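
The forward/backward idea can be illustrated with a toy backtracking search. The real Fano algorithm maintains a dynamically adjusted threshold and revisit rules, which are omitted here; `metric` and `threshold` are hypothetical placeholders:

```python
def fano_search(metric, depth, threshold, path=()):
    """Return a length-`depth` bit path every prefix of which scores >= threshold.

    metric(path) -> float; higher means the partial path looks more reliable.
    """
    if len(path) == depth:
        return list(path)
    # forward move: try the likelier extension first
    for b in sorted((0, 1), key=lambda b: -metric(path + (b,))):
        if metric(path + (b,)) >= threshold:      # is the path still credible?
            found = fano_search(metric, depth, threshold, path + (b,))
            if found is not None:
                return found
    return None  # backward move: the caller falls through to the sibling branch
```
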

26 citations


Proceedings ArticleDOI
01 May 2019
TL;DR: Experiments demonstrate that the proposed hierarchical VRAE is able to learn the complementary representation as well as tackle the posterior collapse in stochastic sequential learning, and that the performance of the recurrent autoencoder is substantially improved in terms of perplexity.
Abstract: Despite great success in learning representations for image data, it is challenging to learn stochastic latent features from natural language based on variational inference. The difficulty in stochastic sequential learning is due to the posterior collapse caused by an autoregressive decoder, which is prone to being too strong to learn sufficient latent information during optimization. To compensate for this weakness in the learning procedure, a sophisticated latent structure is required to assure good convergence so that random features are sufficiently captured for sequential decoding. This study presents a new variational recurrent autoencoder (VRAE) for sequence reconstruction. There are two complementary encoders, consisting of a long short-term memory (LSTM) and a pyramid bidirectional LSTM, which are merged to discover the global and local dependencies in a hierarchical latent variable model, respectively. Experiments on Penn Treebank and Yelp 2013 demonstrate that the proposed hierarchical VRAE is able to learn the complementary representation as well as tackle the posterior collapse in stochastic sequential learning. The performance of the recurrent autoencoder is substantially improved in terms of perplexity.

20 citations


Posted Content
TL;DR: In this paper, a residual neural network decoder (RNND) was proposed for polar codes, which introduces a denoising module based on residual learning before NND to remove a remarkable amount of noise from received signals.
Abstract: Polar codes have been adopted as the control channel coding scheme in the fifth generation new radio (5G NR) standard due to their capacity-achieving property. Traditional polar decoding algorithms such as successive cancellation (SC) suffer from a high-latency problem because of their sequential decoding nature. The neural network decoder (NND) has been shown to be a candidate polar decoder since it is capable of one-shot decoding and parallel computing. However, the bit-error-rate (BER) performance of NND is still inferior to that of the SC algorithm. In this paper, we propose a residual neural network decoder (RNND) for polar codes. Different from previous works which directly use a neural network to decode symbols received from the channel, the proposed RNND introduces a denoising module based on residual learning before the NND. The proposed residual learning denoiser is able to remove a remarkable amount of noise from received signals. Numerical results show that our proposed RNND outperforms the traditional NND with regard to BER performance under comparable latency.

10 citations


Journal ArticleDOI
TL;DR: The main benefit of recovering the sum-of-codewords instead of the codeword itself is that it makes it possible to attain points on the dominant face of the multiple-access channel capacity region without the need for rate-splitting or time-sharing, while maintaining a low complexity on the order of a standard point-to-point decoder.
Abstract: We present a practical strategy that aims to attain rate points on the dominant face of the multiple-access channel capacity region using a standard low-complexity decoder. This technique is built upon recent theoretical developments of Zhu and Gastpar on compute-forward multiple access, which achieves the capacity of the multiple access channel using a sequential decoder. We illustrate this strategy with off-the-shelf LDPC codes. In the first stage of decoding, the receiver recovers a linear combination of the transmitted codewords using the sum-product algorithm (SPA). In the second stage, using the recovered sum-of-codewords as side information, the receiver recovers one of the two codewords using a modified SPA, ultimately recovering both codewords. The main benefit of recovering the sum-of-codewords instead of a codeword itself is that it makes it possible to attain points on the dominant face of the multiple-access channel capacity region without the need for rate-splitting or time-sharing, while maintaining a low complexity on the order of a standard point-to-point decoder. This property is also shown to be crucial for some applications, e.g., interference channels. For all the simulations with single-layer binary codes, our proposed practical strategy is shown to be within 1.7 dB of the theoretical limits, without explicit optimization of the off-the-shelf LDPC codes.
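
The reason a standard single-user decoder can target the sum-of-codewords is linearity: for any code that is linear over GF(2), the XOR of two codewords is again a codeword. A toy single-parity-check code (not the off-the-shelf LDPC codes from the paper) makes the point:

```python
import itertools

def parity_ok(word):
    """Toy single-parity-check code of length 4: codewords have even weight."""
    return sum(word) % 2 == 0

# enumerate the full codebook of the toy code
codebook = [w for w in itertools.product((0, 1), repeat=4) if parity_ok(w)]

def xor_words(c1, c2):
    """Superposition over GF(2): the 'sum-of-codewords' seen by the receiver."""
    return tuple(a ^ b for a, b in zip(c1, c2))
```

Every pairwise XOR of codebook entries satisfies the same parity check, so the first decoding stage faces an ordinary codeword of the same code.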

9 citations


Posted Content
TL;DR: This paper considers the problem of recovering K linearly independent combinations over a K-user MAC, i.e., recovering the messages in their entirety via nested linear codes, to handle the statistical dependencies between competing codeword K-tuples that occur in nested linear code ensembles.
Abstract: Consider a receiver in a multi-user network that wishes to decode several messages. Simultaneous joint typicality decoding is one of the most powerful techniques for determining the fundamental limits at which reliable decoding is possible. This technique has historically been used in conjunction with random i.i.d. codebooks to establish achievable rate regions for networks. Recently, it has been shown that, in certain scenarios, nested linear codebooks in conjunction with "single-user" or sequential decoding can yield better achievable rates. For instance, the compute-forward problem examines the scenario of recovering $L \le K$ linear combinations of transmitted codewords over a $K$-user multiple-access channel (MAC), and it is well established that linear codebooks can yield higher rates. Here, we develop bounds for simultaneous joint typicality decoding used in conjunction with nested linear codebooks, and apply them to obtain a larger achievable region for compute-forward over a $K$-user discrete memoryless MAC. The key technical challenge is that competing codeword tuples that are linearly dependent on the true codeword tuple introduce statistical dependencies, which requires careful partitioning of the associated error events.

6 citations


Dissertation
01 Nov 2019
TL;DR: Simulation results show that polar codes have a beneficial performance-complexity trade-off for moderate block lengths at or above 512 bits, but at shorter lengths sequentially decoded codes can have a better trade-off.
Abstract: Performance and Decoding Complexity Analysis of Short Binary Codes. Bo Lian, Master of Applied Science, Graduate Department of Electrical and Computer Engineering, University of Toronto, 2019. Motivated by emerging 5G wireless systems supporting ultra-reliable low-latency applications, this work studies performance-complexity trade-offs for short block length codes. While well-established tools exist for code optimization of long block length codes, there is no universal approach to the code design problem for short block lengths. Three candidate approaches for short block length designs are considered: 1) tail-biting convolutional codes decoded with the wrap-around Viterbi algorithm (WAVA), 2) polar codes decoded with successive-cancellation (SC) and an SC-list algorithm aided with error detection, and 3) tail-biting convolutional codes and a class of random linear codes with a particular index profile decoded with a sequential decoding algorithm. Simulation results show that polar codes have a beneficial performance-complexity trade-off for moderate block lengths at or above 512 bits, but at shorter lengths sequentially decoded codes can have a better trade-off. WAVA decoding is competitive only at short lengths and for very low error rates.

5 citations


Posted Content
TL;DR: In this article, a path metric aided bit-flipping decoding algorithm was proposed to identify and correct more errors efficiently in polar codes, where the bit-flipping list is generated based on both a log likelihood ratio (LLR) based path metric and a bit-flipping metric.
Abstract: Polar codes have attracted increasing attention from researchers in recent years due to their capacity-achieving property. However, their error-correction performance under successive cancellation (SC) decoding is inferior to that of other modern channel codes at short or moderate blocklengths. The SC-Flip (SCF) decoding algorithm achieves higher performance than SC decoding by identifying possibly erroneous decisions made in the initial SC decoding and flipping them in subsequent decoding attempts. However, it does not perform well when there is more than one erroneous decision in a codeword. In this paper, we propose a path metric aided bit-flipping decoding algorithm to identify and correct more errors efficiently. In this algorithm, the bit-flipping list is generated based on both a log likelihood ratio (LLR) based path metric and a bit-flipping metric. The path metric is used to verify the effectiveness of each bit flip. To reduce the decoding latency and computational complexity, a corresponding pipeline architecture is designed. By applying this decoding algorithm and pipeline architecture, an improvement in error-correction performance of up to 0.25 dB over SCF decoding is obtained at a frame error rate of $10^{-4}$, with low average decoding latency.
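
The construction of the flip list can be sketched as ranking bit positions by a combined reliability score; the way the LLR magnitude is weighted against a path-metric penalty below is hypothetical, not the paper's formula:

```python
def build_flip_list(llrs, path_metric_deltas, max_flips, alpha=1.0):
    """Rank positions for flipping: small |LLR| (unreliable decision) and a small
    path-metric penalty are tried first. `alpha` weights the two metrics."""
    score = lambda i: abs(llrs[i]) + alpha * path_metric_deltas[i]
    return sorted(range(len(llrs)), key=score)[:max_flips]
```

Each entry of the returned list would seed one additional decoding attempt, with the path metric of the retry used to validate whether the flip helped.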

Journal ArticleDOI
TL;DR: This paper achieves substantial reductions in the number of density convolutions necessary for the construction of punctured polar codes and obtains tight upper bounds on the block error rates.
Abstract: Polar codes asymptotically achieve the symmetric capacity of arbitrary binary-input discrete memoryless channels under low-complexity sequential decoding algorithms such as successive cancellation decoding. However, in their original formulation, the block length of polar codes is limited to integer powers of the dimension of the underlying polarization kernel, thus imposing strict constraints on possible application scenarios. While leeway in the choice of kernel or concatenation with other codes mitigates this drawback to a certain extent, puncturing presents a promising approach to specify the target length of a polar code with much greater flexibility. In this paper, we present an efficient implementation of the construction of punctured polar codes based on density evolution, a crucial tool in the construction of both regular (i.e., unpunctured) and punctured polar codes. Our implementation of density evolution covers both code classes and treats their construction in a unified framework. Using our implementation, we achieve substantial reductions in the number of density convolutions necessary for the construction of punctured polar codes and obtain tight upper bounds on the block error rates.
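
On the binary erasure channel, the densities tracked by density evolution collapse to scalar erasure probabilities, which gives a compact illustration of the construction principle (the paper's implementation handles general densities and puncturing): one polarization step turns a channel with erasure rate e into a worse channel with rate 2e − e² and a better one with rate e²:

```python
def bec_polarize(e, n):
    """Erasure probabilities of the 2**n synthetic channels after n polarization steps."""
    probs = [e]
    for _ in range(n):
        # each channel q splits into a worse (2q - q^2) and a better (q^2) channel
        probs = [p for q in probs for p in (2*q - q*q, q*q)]
    return probs
```

Selecting the k indices with the smallest erasure probabilities then yields the information set of an (N, k) polar code for the BEC. Note that each step preserves the average erasure rate while spreading the individual channels toward 0 and 1.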

Proceedings ArticleDOI
15 Apr 2019
TL;DR: A path metric aided bit-flipping decoding algorithm is proposed to identify and correct more errors efficiently, and its corresponding pipeline architecture is designed to reduce decoding latency and computational complexity.
Abstract: Polar codes have attracted increasing attention from researchers in recent years due to their capacity-achieving property. However, their error-correction performance under successive cancellation (SC) decoding is inferior to that of other modern channel codes at short or moderate blocklengths. The SC-Flip (SCF) decoding algorithm achieves higher performance than SC decoding by identifying possibly erroneous decisions made in the initial SC decoding and flipping them in subsequent decoding attempts. However, it does not perform well when there is more than one erroneous decision in a codeword. In this paper, we propose a path metric aided bit-flipping decoding algorithm to identify and correct more errors efficiently. In this algorithm, the bit-flipping list is generated based on both a log likelihood ratio (LLR) based path metric and a bit-flipping metric. The path metric is used to verify the effectiveness of each bit flip. To reduce the decoding latency and computational complexity, a corresponding pipeline architecture is designed. By applying this decoding algorithm and pipeline architecture, an improvement in error-correction performance of up to 0.25 dB over SCF decoding is obtained at a frame error rate of 10^{-4}, with low average decoding latency.

Patent
11 Feb 2019
TL;DR: An image coding method and apparatus and an image decoding method are provided by the present invention as discussed by the authors, which includes: implementing matching coding for pixels of an input video image to obtain one or more matching relationship parameters, wherein the matching relationship parameters are parameters used in a process of constructing predicted values and/or reconstructed values of the pixels in the image.
Abstract: An image coding method and apparatus and an image decoding method and apparatus are provided by the present invention. The image coding method includes: implementing matching coding for pixels of an input video image to obtain one or more matching relationship parameters, wherein the matching relationship parameters are parameters used in a process of constructing predicted values and/or reconstructed values of the pixels in the image; mapping the matching relationship parameters to obtain mapped values of the matching relationship parameters; and performing entropy coding on the mapped values of the matching relationship parameters. The present invention addresses the problem existing in the conventional art which is caused by the direct implementation of entropy coding for matching relationship parameters and achieves a good data compression effect through entropy coding.

Posted Content
TL;DR: The statistical learning aided list decoding outperforms sequential decoding in the high signal-to-noise ratio (SNR) region, and under the constraint of equivalent decoding delay, the SRBO-CCs have performance comparable to that of polar codes.
Abstract: In this paper, we propose a statistical learning aided list decoding algorithm, which integrates a serial list Viterbi algorithm (SLVA) with a soft check instead of the conventional cyclic redundancy check (CRC), for semi-random block oriented convolutional codes (SRBO-CCs). The basic idea is that, compared with an erroneous candidate codeword, the correct candidate codeword for the first sub-frame has less effect on the output of the Viterbi algorithm (VA) for the second sub-frame. The threshold for testing the correctness of a candidate codeword is then determined by learning the statistical behavior of the introduced empirical divergence function (EDF). With statistical learning aided list decoding, the performance-complexity tradeoff and the performance-delay tradeoff can be achieved by adjusting the statistical threshold and extending the decoding window, respectively. To analyze the performance, a closed-form upper bound and a simulated lower bound are derived. Simulation results verify our analysis and show that: 1) the statistical learning aided list decoding outperforms sequential decoding in the high signal-to-noise ratio (SNR) region; 2) under the constraint of equivalent decoding delay, the SRBO-CCs have performance comparable to that of polar codes.
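
The off-line learning of the acceptance threshold can be sketched as a simple rule fitted to EDF statistics collected on known-correct decodes; the mean-plus-margin rule below is an assumption for illustration, not the paper's procedure:

```python
import statistics

def learn_threshold(edf_correct_samples, margin=3.0):
    """Fit an acceptance threshold off-line from EDF values observed on
    known-correct candidate codewords; a candidate passing the soft check
    would need an EDF below the returned value."""
    mu = statistics.mean(edf_correct_samples)
    sd = statistics.stdev(edf_correct_samples)
    return mu + margin * sd  # mean-plus-margin rule; the paper's rule may differ
```

Raising the margin trades more decoding attempts (complexity) for fewer false rejections, which is one way to realize the performance-complexity tradeoff the abstract describes.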

Book ChapterDOI
01 Jan 2019
TL;DR: Variations of the multilevel MIMO technique such as multilevel spatial modulation (MLSM) and the Hybrid Multilevel (HML) modulation scheme are investigated and analyzed with a low-complexity sequential decoding algorithm, and their performance over varied channel conditions is examined.
Abstract: Future wireless communication systems focus on high data rates and reliability to cater to next-generation technologies such as the internet of things (IoT) and real-time voice calls, which produce enormous amounts of real-time multimedia data. To overcome these challenges, next-generation wireless communication systems rely on MIMO techniques, which provide improved capacity without sacrificing power and bandwidth. Many novel signal processing techniques have evolved over the past two decades for MIMO systems, such as Spatial Multiplexing (SML), Space Time Coding (STC), Antenna Beamforming, Spatial Modulation (SM), and hybridizations of the above-mentioned techniques. In all these techniques, computational complexity is the major problem when high data rates are considered. This chapter focuses on the computational complexity of the multilevel MIMO system and also investigates its performance over varied channel conditions. Variations of the multilevel MIMO technique such as multilevel spatial modulation (MLSM) and the Hybrid Multilevel (HML) modulation scheme have been investigated and analyzed with a low-complexity sequential decoding algorithm. Further, the multilevel MIMO-OFDM systems MLSTTC-OFDM, MLSM-OFDM and HML-OFDM have also been compared.

Patent
31 May 2019
TL;DR: In this paper, a method of recursive sequential list decoding of a codeword of a polar code was proposed, in which an ordered sequence of constituent codes was obtained for the sequential decoding of the polar code, representable by a layered graph.
Abstract: There is provided a method of recursive sequential list decoding of a codeword of a polar code comprising: obtaining an ordered sequence of constituent codes usable for the sequential decoding of the polar code, representable by a layered graph; generating a first candidate codeword (CCW) of a first constituent code, the first CCW being computed from an input model informative of a CCW of a second constituent code, the first constituent code and second constituent code being children of a third constituent code; using the first CCW and the second CCW to compute, by the decoder, a CCW of the third constituent code; using the CCW of the third constituent code to compute a group of symbol likelihoods indicating probabilities of symbols of a fourth (higher-layer) constituent code having been transmitted with a particular symbol value, and using the group of symbol likelihoods to decode the fourth constituent code.

Proceedings ArticleDOI
01 Aug 2019
TL;DR: The proposed hybrid parallelism decoder, which decodes pixels and applies deblocking filters in parallel, achieved up to a 2.17× speedup at UHD resolution compared with the sequential decoding time of the HEVC reference software.
Abstract: Continuous advancements in display devices have increased demands for video coding support for videos of over 2K resolution. Although the latest video coding standard, High Efficiency Video Coding (HEVC), has nearly twice the coding efficiency of the previous standard, H.264/AVC, the increased computational complexity makes real-time decoding of ultra-high definition (UHD) video a challenging issue. In this paper, a data-level parallel processing method for deblocking filters is presented to improve HEVC decoder speed. In addition, data-level parallel processing methods for pixel decoding are introduced and their performance with respect to slices and tiles is compared. The proposed hybrid parallelism decoder, which decodes pixels and applies deblocking filters in parallel, achieved up to a 2.17× speedup at UHD resolution compared with the sequential decoding time of the HEVC reference software.
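
The data-level parallelism over tiles can be sketched with a thread pool: tiles are independently decodable regions, so they can be dispatched concurrently and reassembled in order. `decode_tile` below is a trivial stand-in for real entropy decoding and reconstruction:

```python
from concurrent.futures import ThreadPoolExecutor

def decode_tile(tile):
    """Placeholder for decoding one tile (entropy decoding + reconstruction)."""
    return bytes(b ^ 0xFF for b in tile)  # stand-in transform on the tile bytes

def decode_frame_parallel(tiles):
    """Dispatch independently decodable tiles to worker threads; pool.map
    preserves tile order, so the frame reassembles correctly."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(decode_tile, tiles))
```

In a real decoder the speedup is bounded by the serial portions (bitstream parsing, cross-tile filtering), which is why the paper reports 2.17× rather than the tile count.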

Posted Content
TL;DR: In this article, a novel decoding algorithm for polar codes, named SC-Fano decoding, is presented by appropriately incorporating Fano sequential decoding into the standard successive-cancellation (SC) decoding.
Abstract: In this paper, we present a novel decoding algorithm for polar codes, named SC-Fano decoding, which appropriately incorporates Fano sequential decoding into standard successive-cancellation (SC) decoding. The proposed SC-Fano decoding follows the basic procedure of SC decoding with an additional operation to evaluate the reliability (or belief) of the current partial path. Specifically, at every decoding stage, it decides whether to move forward along the current path or move backward to find a more likely path. In this way, SC-Fano decoding can address the inherent drawback of SC decoding, namely that a single wrong decision inevitably leads to a wrong codeword. Compared with other improvements of SC decoding such as SC-List (SCL) and SC-Stack (SCS) decoding, SC-Fano decoding has a much lower memory requirement and is thus more suitable for hardware implementation. Also, SC-Fano decoding can be viewed as an efficient implementation of SC-Flip (SCF) decoding without the cost of a cyclic redundancy check (CRC). Simulation results show that the proposed SC-Fano decoding significantly enhances the performance of SC decoding at similar complexity and achieves the performance of SCL decoding at lower complexity.

Patent
07 Mar 2019
TL;DR: In this article, a method of sequential list decoding of an error correction code (ECC) utilizing a decoder comprising a plurality of processors is presented, where decoding candidate words (DCWs) are generated for decoding a subsequent constituent code, each DCW associated with a ranking.
Abstract: There is provided a method of sequential list decoding of an error correction code (ECC) utilizing a decoder comprising a plurality of processors. The method comprises: a) obtaining an ordered sequence of constituent codes usable for the sequential decoding of the ECC; b) executing, by a first processor, a task of decoding a first constituent code, the executing comprising: a. generating decoding candidate words (DCWs) usable to be selected for decoding a subsequent constituent code, each DCW associated with a ranking; b. for the first constituent code, upon occurrence of a sufficiency criterion, and prior to completion of the generating all DCWs and rankings, selecting, in accordance with a selection criterion, at least one DCW; c) executing, by a second processor, a task of decoding a subsequent constituent code, the executing comprising processing data derived from the selected DCWs to generate data usable for decoding a next subsequent constituent code.

Posted Content
TL;DR: A successive cancellation list decoding algorithm, in which a list of candidate codewords is generated serially until one passes an empirical divergence test, is proposed; it outperforms sequential decoding in the high signal-to-noise ratio (SNR) region.
Abstract: We present in this paper a special class of unit memory convolutional codes (UMCCs), called semi-random UMCCs (SRUMCCs), where the information block is first encoded by a short block code and then transmitted in a block Markov (random) superposition manner. We propose a successive cancellation list decoding algorithm, by which a list of candidate codewords is generated serially until one passes an empirical divergence test instead of the conventional cyclic redundancy check (CRC). The threshold for testing the correctness of candidate codewords can be learned off-line based on the statistical behavior of the introduced empirical divergence function (EDF). The performance-complexity tradeoff and the performance-delay tradeoff can be achieved by adjusting the statistical threshold and the decoding window size. To analyze the performance, a closed-form upper bound and a simulated lower bound are derived. Simulation results verify our analysis and show that: 1) the proposed list decoding algorithm with the empirical divergence test outperforms sequential decoding in the high signal-to-noise ratio (SNR) region; 2) taking tail-biting convolutional codes (TBCC) as the basic codes, the proposed list decoding of SRUMCCs has performance comparable to that of polar codes under the constraint of equivalent decoding delay.

Proceedings ArticleDOI
20 May 2019
TL;DR: A modified Fast-SSCL decoder, combining the local sorter from a single parity check (SPC) node with a shift-based path memory, is proposed to provide high throughput with a low-complexity implementation.
Abstract: Achieving a high decoding throughput with a successive cancellation list (SCL) decoder for polar codes is difficult due to its sequential decoding architecture. In this work, combining the local sorter from a single parity check (SPC) node with a shift-based path memory, a modified fast simplified successive cancellation list (Fast-SSCL) decoder is proposed in order to provide high throughput with a low-complexity implementation. The proposed modified Fast-SSCL decoder can be operated at 470 MHz and was synthesized with an area of 526 mm² using a TSMC 90 nm CMOS process. The decoder presented in this work improves the throughput-to-area ratio (TAR) by more than 30% compared with previous designs.