
Showing papers on "Sequential decoding published in 2021"


Journal ArticleDOI
TL;DR: In this article, an adaptive heuristic metric, tree search constraints for backtracking to avoid exploration of unlikely sub-paths, and tree search strategies consistent with the pattern of error occurrence in polar codes are proposed to reduce the complexity of sequential decoding of PAC/polar codes.
Abstract: In the Shannon lecture at the 2019 International Symposium on Information Theory (ISIT), Arikan proposed employing a one-to-one convolutional transform as a precoding step before the polar transform. The codes resulting from this concatenation are called polarization-adjusted convolutional (PAC) codes. In this scheme, a polar mapper and demapper are deployed as pre- and post-processing devices around a memoryless channel, providing polarized information to an outer decoder and thereby improving the error-correction performance of the outer code. In this paper, list decoding and sequential decoding (including Fano decoding and stack decoding) are first adapted to decode PAC codes. Then, to reduce the complexity of sequential decoding of PAC/polar codes, we propose (i) an adaptive heuristic metric, (ii) tree search constraints for backtracking to avoid exploration of unlikely sub-paths, and (iii) tree search strategies consistent with the pattern of error occurrence in polar codes. These strategies reduce the average decoding time complexity by 50% to 80%, at the cost of 0.05 to 0.3 dB of error-correction performance in the FER = $10^{-3}$ range, relative to not applying them. Additionally, as an important ingredient in Fano decoding of PAC/polar codes, an efficient computation method for the intermediate LLRs and partial sums is provided. This method is effective in backtracking and avoids storing the intermediate information or restarting the decoding process. Finally, all three decoding algorithms are compared in terms of performance, complexity, and resource requirements.
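The path metric is the engine of any such sequential decoder. As a minimal illustration (the classic Fano metric on a binary symmetric channel, not the adaptive heuristic metric proposed in the paper), each decoded bit is penalized by the code rate, so the metric drifts upward along the correct path and downward along incorrect ones:

```python
import math

def fano_branch_metric(y_bit, x_bit, p, rate):
    """Classic Fano metric for one branch on a BSC with crossover
    probability p: log2 P(y|x) - log2 P(y) - R, with P(y) = 1/2."""
    likelihood = 1.0 - p if y_bit == x_bit else p
    return math.log2(likelihood) - math.log2(0.5) - rate

def path_metric(received, candidate, p, rate):
    """Accumulated Fano metric of a candidate path against the received bits."""
    return sum(fano_branch_metric(y, x, p, rate)
               for y, x in zip(received, candidate))
```

A Fano or stack decoder extends the path with the largest accumulated metric and backtracks when the metric falls below a running threshold.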

44 citations


Proceedings Article
18 May 2021
TL;DR: A simple yet efficient recurrent network ensemble called Recurrent Autoencoder with Multiresolution Ensemble Decoding (RAMED) is proposed, which uses decoders with different decoding lengths and a new coarse-to-fine fusion mechanism.
Abstract: Recurrent autoencoder is a popular model for time series anomaly detection, in which outliers or abnormal segments are identified by their high reconstruction errors. However, existing recurrent autoencoders can easily suffer from overfitting and error accumulation due to sequential decoding. In this paper, we propose a simple yet efficient recurrent network ensemble called Recurrent Autoencoder with Multiresolution Ensemble Decoding (RAMED). By using decoders with different decoding lengths and a new coarse-to-fine fusion mechanism, lower-resolution information can help long-range decoding for decoders with higher-resolution outputs. A multiresolution shape-forcing loss is further introduced to encourage decoders' outputs at multiple resolutions to match the input's global temporal shape. Finally, the output from the decoder with the highest resolution is used to obtain an anomaly score at each time step. Extensive empirical studies on real-world benchmark data sets demonstrate that the proposed RAMED model outperforms recent strong baselines on time series anomaly detection.
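The scoring step shared by all such reconstruction-based detectors can be sketched in a few lines (a generic illustration, not the RAMED architecture itself): time steps whose reconstruction error is large are flagged as anomalous.

```python
import numpy as np

def anomaly_scores(x, x_hat):
    """Per-time-step anomaly score: squared reconstruction error,
    averaged over the feature dimension."""
    return np.mean((x - x_hat) ** 2, axis=-1)

# toy univariate series: the autoencoder reconstructs every step well
# except the abnormal spike at index 3
x = np.array([[0.0], [1.0], [0.0], [5.0], [0.0]])
x_hat = np.array([[0.1], [0.9], [0.1], [0.5], [0.1]])
scores = anomaly_scores(x, x_hat)
flagged = int(np.argmax(scores))  # most anomalous time step
```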

10 citations


Posted Content
11 May 2021
TL;DR: In this paper, polarization-adjusted convolutional (PAC) codes are studied as a way to improve the performance of polar codes at short blocklengths, replacing CRC precoding with convolutional precoding and successive-cancellation list decoding with sequential decoding.
Abstract: Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arikan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily upon the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding by sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L⩾128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. 
We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν=2.
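The union-bound approximation mentioned above is easy to reproduce once estimates of the low-weight codeword counts are available. The sketch below assumes BPSK over AWGN, where each codeword at distance d contributes A_d * Q(sqrt(2*d*R*Eb/N0)) to the bound; the weight enumerator used here is hypothetical:

```python
import math

def qfunc(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_fer(weight_enum, rate, ebn0_db):
    """Truncated union bound on frame error rate for BPSK over AWGN:
    FER <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum(a_d * qfunc(math.sqrt(2.0 * d * rate * ebn0))
               for d, a_d in weight_enum.items())

# hypothetical partial weight enumerator {distance: multiplicity}
weight_enum = {8: 48, 12: 1024}
bound_2db = union_bound_fer(weight_enum, 0.5, 2.0)
bound_4db = union_bound_fer(weight_enum, 0.5, 4.0)
```

As the abstract notes, such truncated bounds track the ML performance closely at moderate-to-high SNR, where the low-weight terms dominate.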

9 citations


Journal ArticleDOI
TL;DR: In this paper, an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistics combining concept was proposed for 5G massive MIMO receivers.
Abstract: The stringent requirements on reliability and processing delay in the fifth-generation (5G) cellular networks introduce considerable challenges in the design of massive multiple-input-multiple-output (M-MIMO) receivers. The two main components of an M-MIMO receiver are a detector and a decoder. To improve the trade-off between reliability and complexity, a Bayesian concept has been considered as a promising approach that enhances classical detectors, e.g., the minimum-mean-square-error detector. This work proposes an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistics combining concept. We then develop a high-performance M-MIMO receiver, integrating the proposed detector with low-complexity sequential decoding for polar codes. Simulation results of the proposed detector show a significant performance gain compared to other low-complexity detectors. Furthermore, the proposed M-MIMO receiver with sequential decoding ensures one order of magnitude lower complexity compared to a receiver with stack successive cancellation decoding for polar codes from the 5G New Radio standard.
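The parallel interference cancellation ingredient can be illustrated with a toy hard-decision sketch (not the paper's Bayesian detector or its decision statistics combining): every stream subtracts the interference reconstructed from the other streams' current decisions, with all streams updated in parallel.

```python
import numpy as np

def pic_detect(H, y, n_iter=5):
    """Hard-decision parallel interference cancellation for y = H s + n
    with BPSK symbols s in {-1, +1}: each stream subtracts the
    interference estimated from the other streams' current decisions."""
    n_tx = H.shape[1]
    s_hat = np.sign(np.linalg.pinv(H) @ y)  # zero-forcing initialization
    s_hat[s_hat == 0] = 1.0
    for _ in range(n_iter):
        updated = np.empty_like(s_hat)
        for k in range(n_tx):
            interference = H @ s_hat - H[:, k] * s_hat[k]
            updated[k] = np.sign(H[:, k] @ (y - interference))
        s_hat = updated  # parallel update of all streams
    return s_hat

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))   # 8 receive, 4 transmit antennas
s = np.array([1.0, -1.0, 1.0, -1.0])
y = H @ s                          # noiseless toy channel
```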

9 citations


Journal ArticleDOI
16 Apr 2021
TL;DR: A novel graph-induced CN clustering approach to partition the state space of the MDP in such a way that dependencies between clusters are minimized and some of the proposed RL schemes not only improve the decoding performance, but also reduce the decoding complexity dramatically once the scheduling policy is learned.
Abstract: In this work, we consider the decoding of short sparse graph-based channel codes via reinforcement learning (RL). Specifically, we focus on low-density parity-check (LDPC) codes, which for example have been standardized in the context of 5G cellular communication systems due to their excellent error correcting performance. LDPC codes are typically decoded via belief propagation on the corresponding bipartite (Tanner) graph of the code via flooding, i.e., all check and variable nodes in the Tanner graph are updated at once. We model the node-wise sequential LDPC scheduling scheme as a Markov decision process (MDP), and obtain optimized check node (CN) scheduling policies via RL to improve sequential decoding performance as compared to flooding. In each RL step, an agent decides which CN to schedule next by observing a reward associated with each choice. Repeated scheduling enables the agent to discover the optimized CN scheduling policy which is later incorporated in our RL-based sequential LDPC decoder. In order to reduce RL complexity, we propose a novel graph-induced CN clustering approach to partition the state space of the MDP in such a way that dependencies between clusters are minimized. Compared to standard decoding approaches from the literature, some of our proposed RL schemes not only improve the decoding performance, but also reduce the decoding complexity dramatically once the scheduling policy is learned. By concatenating an outer Hamming code with an inner LDPC code which is decoded based on our learned policy, we demonstrate significant improvements in the decoding performance compared to other LDPC decoding policies.
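The RL ingredient can be reduced to a toy: an epsilon-greedy agent that learns, from stochastic rewards, which check node is most useful to schedule. This is only the bandit-style core of the idea; the paper's full scheme models states, clusters, and decoder feedback as an MDP. The per-CN success probabilities below are hypothetical:

```python
import random

def train_scheduler(success_prob, n_steps=5000, eps=0.1, seed=1):
    """Epsilon-greedy action-value learning: action a = 'schedule CN a',
    reward ~ Bernoulli(success_prob[a]). Returns the learned values."""
    rng = random.Random(seed)
    q = [0.0] * len(success_prob)
    counts = [0] * len(success_prob)
    for _ in range(n_steps):
        if rng.random() < eps:                        # explore
            a = rng.randrange(len(success_prob))
        else:                                         # exploit
            a = max(range(len(success_prob)), key=lambda i: q[i])
        r = 1.0 if rng.random() < success_prob[a] else 0.0
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]                # incremental mean
    return q

# three hypothetical check nodes; CN 1 is the most rewarding to schedule
q = train_scheduler([0.2, 0.8, 0.5])
best_cn = max(range(len(q)), key=lambda i: q[i])
```

Once the values converge, the greedy policy amounts to a fixed CN scheduling order, which is why the learned decoder incurs no RL overhead at decoding time.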

8 citations


Posted Content
TL;DR: In this article, it was shown that although invariance under a large automorphism group is valuable in a theoretical sense, it also ensures that the list size needed for good performance grows exponentially.
Abstract: The recently introduced polar codes constitute a breakthrough in coding theory due to their capacity-achieving property. This goes hand in hand with quasilinear construction, encoding, and successive cancellation list decoding procedures based on the Plotkin construction. The decoding algorithm can be applied with slight modifications to Reed-Muller or eBCH codes, which both achieve the capacity of erasure channels, although the list size needed for good performance grows too fast to make the decoding practical even for moderate block lengths. The key ingredient for proving the capacity-achieving property of Reed-Muller and eBCH codes is their group of symmetries. It can be plugged into the concept of Plotkin decomposition to design various permutation decoding algorithms. Although such techniques make it possible to outperform straightforward polar-like decoding, the complexity remains impractical. In this paper, we show that although invariance under a large automorphism group is valuable in a theoretical sense, it also ensures that the list size needed for good performance grows exponentially. We further establish the bounds that arise if we sacrifice some of the symmetries. Although the theoretical analysis of the list decoding algorithm remains an open problem, our result provides insight into the factors that impact the decoding complexity.

8 citations


Journal ArticleDOI
30 Jun 2021-Entropy
TL;DR: In this paper, polarization-adjusted convolutional (PAC) codes are studied as a way to improve the performance of polar codes at short blocklengths, replacing CRC precoding with convolutional precoding and successive-cancellation list decoding with sequential decoding.
Abstract: Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arikan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily upon the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding by sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L⩾128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. 
We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν=2.

7 citations


Journal ArticleDOI
TL;DR: A novel memory decoder for visual narrating is devised, consisting of multiple memory layers; it alleviates the dilution of long-term information and leverages the latent information of each layer, which helps generate accurate descriptions.
Abstract: Visual narrating focuses on generating semantic descriptions to summarize visual content of images or videos, e.g., visual captioning and visual storytelling. The challenge mainly lies in how to design a decoder that generates accurate descriptions matching visual content. Recent advances often employ a recurrent neural network (RNN), e.g., Long Short-Term Memory (LSTM), as the decoder. However, RNNs are prone to diluting long-term information, which weakens their ability to capture long-term dependencies. Recent work has demonstrated that the memory network (MemNet) has the advantage of storing long-term information. However, as a decoder, it has not been well exploited for visual narrating, partially because of the difficulty of multi-modal sequential decoding with MemNet. In this article, we devise a novel memory decoder for visual narrating. Concretely, to obtain a better multi-modal representation, we first design a new multi-modal fusion method to fully merge visual and lexical information. Then, based on the fusion result, we construct a MemNet-based decoder consisting of multiple memory layers. In each layer, we employ a memory set to store previous decoding information and utilize an attention mechanism to adaptively select the information related to the current output. Meanwhile, we also employ a memory set to store the decoding output of each memory layer at the current time step and again utilize an attention mechanism to select the related information. This decoder thus alleviates the dilution of long-term information, while the hierarchical architecture leverages the latent information of each layer, which helps generate accurate descriptions. Experimental results on two visual narrating tasks, i.e., video captioning and visual storytelling, show that our decoder obtains superior results and outperforms conventional RNN-based decoders.

6 citations


Posted Content
TL;DR: This paper proposes a concept-guided non-autoregressive model (CG-nAR) for open-domain dialogue generation, which comprises a multi-concept planning module that learns to identify multiple associated concepts from a concept graph and a customized Insertion Transformer that performs concept-guided non-autoregressive generation to complete a response.
Abstract: Human dialogue contains evolving concepts, and speakers naturally associate multiple concepts to compose a response. However, current dialogue models with the seq2seq framework lack the ability to effectively manage concept transitions and can hardly introduce multiple concepts to responses in a sequential decoding manner. To facilitate a controllable and coherent dialogue, in this work, we devise a concept-guided non-autoregressive model (CG-nAR) for open-domain dialogue generation. The proposed model comprises a multi-concept planning module that learns to identify multiple associated concepts from a concept graph and a customized Insertion Transformer that performs concept-guided non-autoregressive generation to complete a response. The experimental results on two public datasets show that CG-nAR can produce diverse and coherent responses, outperforming state-of-the-art baselines in both automatic and human evaluations with substantially faster inference speed.

4 citations


Journal ArticleDOI
TL;DR: It is found that sequential decoding with the improved VBT metric has a better performance–complexity tradeoff than tail-biting codes under WAVA decoding but a worse performance–complexity tradeoff than polar codes under SCL decoding (except at high complexities).
Abstract: Sequential decoding of short length binary codes for the additive white Gaussian noise channel is considered. A variant of the variable-bias term (VBT) metric is introduced, producing useful trade-offs between performance and computational complexity. Comparisons are made with tail-biting convolutional codes decoded with a wrap-around Viterbi algorithm (WAVA) and with polar codes under successive-cancellation list (SCL) decoding. It is found that sequential decoding with the improved VBT metric has a better performance–complexity tradeoff than tail-biting codes under WAVA decoding (except at low complexities) but a worse performance–complexity tradeoff than polar codes under SCL decoding (except at high complexities).

3 citations


Posted Content
TL;DR: In this paper, a rate-profile construction method for polarization-adjusted convolutional (PAC) codes of any code length and rate is proposed, which is capable of maintaining a trade-off between the error-correction performance and decoding complexity of PAC codes.
Abstract: This paper proposes a rate-profile construction method for polarization-adjusted convolutional (PAC) codes of any code length and rate, which is capable of maintaining a trade-off between the error-correction performance and decoding complexity of PAC codes. The proposed method can improve the error-correction performance of PAC codes while guaranteeing a low mean sequential decoding complexity for signal-to-noise ratio (SNR) values beyond a target SNR value.

Proceedings ArticleDOI
Mikhail Kamenev
12 Jul 2021
TL;DR: In this article, a soft-input sequential decoder for Reed-Muller (RM) codes of length $2^{m}$ and order $m-3$ is proposed, with permutations being selected on-the-fly from the RM codes' automorphism group based on soft information from a channel.
Abstract: A soft-input sequential decoder for Reed-Muller (RM) codes of length $2^{m}$ and order $m-3$ is proposed. The considered algorithm sequentially processes different permuted versions of the received vector using a decoder of an extended Hamming code, with permutations being selected on-the-fly from the RM codes' automorphism group based on soft information from a channel. It is shown that the proposed algorithm outperforms the recursive list decoder with similar computational complexity and achieves near maximum-likelihood decoding performance with reasonable computational complexity for RM codes of length 512 and 1024.

Journal ArticleDOI
TL;DR: A simplified decoding method for polar codes with large kernels consists in decoder-side substitution of some submatrices of the polarizing transformation with matrices, which admit simpler evaluation of log-likelihood ratios.
Abstract: A simplified decoding method for polar codes with large kernels is presented. The proposed approach consists in decoder-side substitution of some submatrices of the polarizing transformation with matrices, which admit simpler evaluation of log-likelihood ratios. This approach enables complexity reduction for the successive cancellation, list and sequential decoding algorithms.

Posted Content
TL;DR: In this paper, the polarization lemma is reconstructed based on the bit error probability of successive-cancellation decoding; techniques to compute this probability are then introduced and applied to the construction of binary polar source/channel codes and to lower and upper bounds on the block decoding error probability.
Abstract: This paper introduces techniques to construct binary polar source/channel codes based on the bit error probability of successive-cancellation decoding. The polarization lemma is reconstructed based on the bit error probability and then techniques to compute the bit error probability are introduced. These techniques can be applied to the construction of polar codes and the computation of lower and upper bounds of the block decoding error probability.
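A closely related, widely used construction (sketched here for the binary erasure channel with the standard Bhattacharyya recursion, rather than the bit-error-probability technique of this paper) recursively tracks a reliability parameter per synthesized bit-channel and freezes the least reliable indices:

```python
def bhattacharyya(n_levels, z0=0.5):
    """Bhattacharyya parameters of the 2^n polarized bit-channels of a
    BEC with erasure probability z0: each polarization step maps
    z -> 2z - z^2 (degraded channel) and z -> z^2 (upgraded channel)."""
    z = [z0]
    for _ in range(n_levels):
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    return z

def frozen_set(n_levels, k, z0=0.5):
    """Indices of the N - k least reliable (largest-Z) bit-channels,
    which are frozen so that the k best channels carry information."""
    z = bhattacharyya(n_levels, z0)
    order = sorted(range(len(z)), key=lambda i: z[i], reverse=True)
    return sorted(order[:len(z) - k])
```

For the BEC the Bhattacharyya parameter equals the erasure probability of the bit-channel, which is why this two-branch recursion is exact there and only an approximation elsewhere.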

Posted Content
TL;DR: In this paper, a bit-flipping strategy tailored to the state-of-the-art fast successive-cancellation list (FSCL) decoding was proposed to avoid tree-traversal in the binary tree representation of SCLF, thus reducing the latency of the decoding process.
Abstract: This work presents a fast successive-cancellation list flip (Fast-SCLF) decoding algorithm for polar codes that addresses the high latency issue associated with the successive-cancellation list flip (SCLF) decoding algorithm. We first propose a bit-flipping strategy tailored to the state-of-the-art fast successive-cancellation list (FSCL) decoding that avoids tree-traversal in the binary tree representation of SCLF, thus reducing the latency of the decoding process. We then derive a parameterized path-selection error model to accurately estimate the bit index at which the correct decoding path is eliminated from the initial FSCL decoding. The trainable parameter is optimized online based on an efficient supervised learning framework. Simulation results show that for a polar code of length 512 with 256 information bits, with similar error-correction performance and memory consumption, the proposed Fast-SCLF decoder reduces up to $73.4\%$ of the average decoding latency of the SCLF decoder with the same list size at the frame error rate of $10^{-4}$, while incurring a maximum computational overhead of $36.2\%$. For the same polar code of length 512 with 256 information bits and at practical signal-to-noise ratios, the proposed decoder with list size 4 reduces $89.1\%$ and $43.7\%$ of the average complexity and decoding latency of the FSCL decoder with list size 32 (FSCL-32), respectively, while also reducing $83.3\%$ of the memory consumption of FSCL-32. The significant improvements of the proposed decoder come at the cost of only $0.07$ dB error-correction performance degradation compared with FSCL-32.

Posted Content
TL;DR: In this article, the authors compare different BCH codes of the same coderate with different weight distributions, thus, are not equivalent by using different choices of cyclotomic cosets.
Abstract: Cyclic codes have the advantage that only one polynomial needs to be stored. The binary primitive BCH codes are cyclic and are created by choosing a subset of the cyclotomic cosets, which can be done in various ways. Using different choices of cyclotomic cosets, we compare BCH codes of the same code rate that have different weight distributions and are therefore not equivalent. We recall an old result from the sixties that any Reed-Muller code is equivalent to a particular BCH code extended by a parity bit. The motivation for decoding BCH codes is that they possibly have better parameters than Reed-Muller codes, which recent publications relate to polar codes. We present several hard- and soft-decision decoding schemes based on minimal-weight codewords of the dual code, including information set decoding in the case of a channel without reliability information. Different BCH codes of the same rate are compared and show different decoding performance and complexity. Some examples of hard-decision decoding of BCH codes achieve the same decoding performance as maximum likelihood decoding. All presented decoding methods can be extended to include reliability information of a Gaussian channel for soft-decision decoding. We show various simulation results for soft-decision list information set decoding and analyze the influence of different aspects on the performance.
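The cyclotomic cosets mentioned above are straightforward to enumerate; choosing which cosets contribute their roots to the generator polynomial is exactly the design freedom the paper exploits. A minimal sketch:

```python
def cyclotomic_cosets(m):
    """Cyclotomic cosets of 2 modulo n = 2^m - 1. A binary primitive
    BCH code is defined by a union of such cosets: the chosen cosets
    fix the zero set, and hence the generator polynomial, of the code."""
    n = (1 << m) - 1
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:          # follow s, 2s, 4s, ... mod n
            coset.append(x)
            x = (x * 2) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets
```

For m = 4 (n = 15) this yields {0}, {1, 2, 4, 8}, {3, 6, 12, 9}, {5, 10}, and {7, 14, 13, 11}; different unions of equal total size give the inequivalent same-rate BCH codes the paper compares.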

Proceedings ArticleDOI
06 Sep 2021
TL;DR: In this article, the RL-based sequential decoding of LDPC codes is considered and an optimized decoding policy is subsequently obtained via RL, where the agent learns to schedule all check nodes in a cluster and all clusters in every iteration.
Abstract: In this work, we consider the sequential decoding of moderate length low-density parity-check (LDPC) codes via reinforcement learning (RL). The sequential decoding scheme is modeled as a Markov decision process (MDP), and an optimized decoding policy is subsequently obtained via RL. In contrast to our previous works, where an agent learns to schedule only a single check node (CN) within a group (cluster) of CNs per iteration, in this work we train the agent to schedule all CNs in a cluster, and all clusters in every iteration. That is, in each RL step, an agent learns to schedule CN clusters sequentially depending on the reward associated with the outcome of scheduling a particular cluster. We also propose a modified MDP and a uniform sequential decoding policy, enabling the RL-based decoder to be suitable for much longer LDPC codes than the ones studied in our previous work. The proposed RL-based decoder exhibits an SNR gain of almost 0.8 dB for fixed bit error probability over the standard flooding approach.

Proceedings ArticleDOI
27 Jan 2021
TL;DR: In this article, a logarithmic non-uniform quantization of intermediate log-likelihood ratios (LLRs) is presented, which can provide an error correction performance close to floating-point precision for a wide range of code-lengths.
Abstract: The quantization of intermediate log-likelihood ratios (LLRs) is a concern in hardware implementations of LLR-based tree search algorithms for polar codes, such as successive cancellation list (SCL) and sequential (SCS) decoding, particularly for large block-lengths: comparing tree paths requires precise path metrics, and uniform quantization demands a large memory space due to their wide dynamic range. As a consequence of the low accuracy of uniform quantization (with a large step size) for small LLR values, the error-correction performance degrades. In this work, we present a logarithmic non-uniform quantization (based on a lookup table, logarithm functions, and piecewise linear functions) that provides error-correction performance close to floating-point precision for a wide range of code-lengths.
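A minimal companding sketch conveys the idea (assuming a log1p-based compression curve; the paper's actual design uses a lookup table and piecewise linear functions): magnitudes are quantized uniformly in a logarithmic domain, giving fine resolution near zero and coarse steps at the top of the dynamic range.

```python
import math

def log_quantize(llr, n_bits=5, max_llr=64.0):
    """Logarithmic non-uniform quantization of an LLR: compress the
    magnitude with log1p, quantize uniformly in the compressed domain,
    and expand back. One bit is reserved for the sign."""
    levels = (1 << (n_bits - 1)) - 1
    mag = min(abs(llr), max_llr)
    compressed = math.log1p(mag) / math.log1p(max_llr)
    idx = round(compressed * levels)
    recon = math.expm1(idx / levels * math.log1p(max_llr))
    return math.copysign(recon, llr)
```

With 5 bits the reconstruction levels cluster near zero, where path-metric comparisons are most sensitive, whereas a uniform quantizer over the same range would spend most of its levels on rarely used large magnitudes.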

Proceedings Article
09 Sep 2021
TL;DR: This article proposes a concept-guided non-autoregressive model (CG-nAR) for open-domain dialogue generation, which comprises a multi-concept planning module that learns to identify multiple associated concepts from a concept graph and a customized Insertion Transformer that performs concept-guided non-autoregressive generation to complete a response.
Abstract: Human dialogue contains evolving concepts, and speakers naturally associate multiple concepts to compose a response. However, current dialogue models with the seq2seq framework lack the ability to effectively manage concept transitions and can hardly introduce multiple concepts to responses in a sequential decoding manner. To facilitate a controllable and coherent dialogue, in this work, we devise a concept-guided non-autoregressive model (CG-nAR) for open-domain dialogue generation. The proposed model comprises a multi-concept planning module that learns to identify multiple associated concepts from a concept graph and a customized Insertion Transformer that performs concept-guided non-autoregressive generation to complete a response. The experimental results on two public datasets show that CG-nAR can produce diverse and coherent responses, outperforming state-of-the-art baselines in both automatic and human evaluations with substantially faster inference speed.

Journal ArticleDOI
TL;DR: In this article, a decoding algorithm for 2D convolutional codes over an erasure channel is presented, which reduces the decoding process to several decoding steps applied to 1D convolutions.
Abstract: Two-dimensional (2D) convolutional codes are a generalization of (1D) convolutional codes, which are suitable for transmission over an erasure channel. In this paper, we present a decoding algorithm for 2D convolutional codes over such a channel by reducing the decoding process to several decoding steps applied to 1D convolutional codes. Moreover, we provide constructions of 2D convolutional codes that are specially tailored to our decoding algorithm.

Proceedings ArticleDOI
04 May 2021
TL;DR: An open-loop sequential decoding algorithm under a synchronized brain-machine interface (BMI) framework is proposed to achieve high-performance sequential movement decoding, providing a potential solution for BMIs in complex movement reconstruction.
Abstract: Serial movement is crucial for interacting with complex environments, and invasive brain-machine interfaces (BMIs) are currently being developed to restore motor function by decoding neural responses. However, coherent sequential movement decoding in complex behavior is still poorly understood. Although current closed-loop BMI decoders enable subjects to drive equipment step by step to approach multiple targets and complete serial actions, such a control strategy is inflexible and unnatural. Here, we propose an open-loop sequential decoding algorithm under a synchronized BMI framework to achieve high-performance sequential movement decoding, providing a potential solution for BMIs in complex movement reconstruction.

Posted Content
TL;DR: In this paper, an improved version of the list decoding algorithm for BCH codes and punctured Reed-Muller (RM) codes is proposed, which significantly reduces the BER while maintaining the same (in some cases even smaller) FER.
Abstract: The cyclically equivariant neural decoder was recently proposed in [Chen-Ye, International Conference on Machine Learning, 2021] to decode cyclic codes. In the same paper, a list decoding procedure was also introduced for two widely used classes of cyclic codes -- BCH codes and punctured Reed-Muller (RM) codes. While the list decoding procedure significantly improves the Frame Error Rate (FER) of the cyclically equivariant neural decoder, the Bit Error Rate (BER) of the list decoding procedure is even worse than the unique decoding algorithm when the list size is small. In this paper, we propose an improved version of the list decoding algorithm for BCH codes and punctured RM codes. Our new proposal significantly reduces the BER while maintaining the same (in some cases even smaller) FER. More specifically, our new decoder provides up to $2$dB gain over the previous list decoder when measured by BER, and the running time of our new decoder is $15\%$ smaller. Code available at this https URL

Posted Content
TL;DR: In this paper, an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistics combining concept is proposed, and the proposed detector is integrated with a low complexity sequential decoding for polar codes.
Abstract: The stringent requirements on reliability and processing delay in the fifth-generation (5G) cellular networks introduce considerable challenges in the design of massive multiple-input-multiple-output (M-MIMO) receivers. The two main components of an M-MIMO receiver are a detector and a decoder. To improve the trade-off between reliability and complexity, a Bayesian concept has been considered as a promising approach that enhances classical detectors, e.g., the minimum-mean-square-error detector. This work proposes an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistics combining concept. We then develop a high-performance M-MIMO receiver, integrating the proposed detector with low-complexity sequential decoding for polar codes. Simulation results of the proposed detector show a significant performance gain compared to other low-complexity detectors. Furthermore, the proposed M-MIMO receiver with sequential decoding ensures one order of magnitude lower complexity compared to a receiver with stack successive cancellation decoding for polar codes from the 5G New Radio standard.

Posted Content
TL;DR: This article proposed a variational auto-encoder based non-autoregressive text-to-speech (VAENAR-TTS) model, which does not require phoneme-level durations.
Abstract: This paper describes a variational auto-encoder based non-autoregressive text-to-speech (VAENAR-TTS) model. The autoregressive TTS (AR-TTS) models based on the sequence-to-sequence architecture can generate high-quality speech, but their sequential decoding process can be time-consuming. Recently, non-autoregressive TTS (NAR-TTS) models have been shown to be more efficient with the parallel decoding process. However, these NAR-TTS models rely on phoneme-level durations to generate a hard alignment between the text and the spectrogram. Obtaining duration labels, either through forced alignment or knowledge distillation, is cumbersome. Furthermore, hard alignment based on phoneme expansion can degrade the naturalness of the synthesized speech. In contrast, the proposed model of VAENAR-TTS is an end-to-end approach that does not require phoneme-level durations. The VAENAR-TTS model does not contain recurrent structures and is completely non-autoregressive in both the training and inference phases. Based on the VAE architecture, the alignment information is encoded in the latent variable, and attention-based soft alignment between the text and the latent variable is used in the decoder to reconstruct the spectrogram. Experiments show that VAENAR-TTS achieves state-of-the-art synthesis quality, while the synthesis speed is comparable with other NAR-TTS models.