
Showing papers on "Sequential decoding published in 2017"


Proceedings ArticleDOI
22 Mar 2017
TL;DR: In this paper, the authors revisited the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes, and showed that neural networks can learn a form of decoding algorithm, rather than only a simple classifier.
Abstract: We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
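
The NVE metric admits a compact description: for a network trained at a given SNR, it averages the ratio of the neural decoder's BER to the MAP BER over a set of validation SNR points. The following Python sketch illustrates the computation under that reading of the definition; the function and variable names are ours, not from the paper.

import numpy as np

def nve(ber_nn, ber_map):
    # Normalized validation error: mean ratio of the neural decoder's BER
    # to the MAP decoder's BER, taken over S validation SNR points.
    # ber_nn[s] and ber_map[s] are BERs measured at the same validation SNR s.
    ber_nn = np.asarray(ber_nn, dtype=float)
    ber_map = np.asarray(ber_map, dtype=float)
    return float(np.mean(ber_nn / ber_map))

# NVE = 1.0 would mean the network matches MAP at every validation SNR;
# larger values quantify the gap.
print(nve([1.2e-2, 3.3e-3, 4.2e-4], [1.0e-2, 3.0e-3, 3.8e-4]))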

234 citations


Journal ArticleDOI
TL;DR: In this paper, the authors improved SSCL and SSCL-SPC by proving that the list size imposes a specific number of path splittings required to decode rate-one and single parity-check codes, so the number of splittings can be limited while guaranteeing exactly the same error-correction performance as if the paths were forked at each bit estimation.
Abstract: Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable tradeoff between the error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity-check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of path splittings required to decode rate-one and single parity-check codes. Thus, the number of splittings can be limited while guaranteeing exactly the same error-correction performance as if the paths were forked at each bit estimation. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of path forks in a practical application can be tuned to achieve the desired speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve a throughput of 1.86 Gb/s, higher than the best state-of-the-art decoders.

149 citations


Journal ArticleDOI
TL;DR: By exploring the lattice structure of SCMA code words, a low-complexity decoding algorithm based on list sphere decoding (LSD) is proposed that can reduce the decoding complexity substantially while the performance loss compared with the existing algorithm is negligible.
Abstract: Sparse code multiple access (SCMA) is one of the most promising methods among all the non-orthogonal multiple access techniques in the future 5G communication. Compared with some other non-orthogonal multiple access techniques, such as low-density signature, SCMA can achieve better performance due to the shaping gain of the SCMA code words. However, despite the sparsity of the code words, the decoding complexity of the current message passing algorithm utilized by SCMA is still prohibitively high. In this paper, by exploring the lattice structure of SCMA code words, we propose a low-complexity decoding algorithm based on list sphere decoding (LSD). The LSD avoids the exhaustive search over all possible hypotheses and only considers signals within a hypersphere. As LSD can be viewed as a depth-first tree search algorithm, we further propose several methods to prune redundantly visited nodes in order to reduce the size of the search tree. Simulation results show that the proposed algorithm can reduce the decoding complexity substantially while the performance loss compared with the existing algorithm is negligible.
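
As a rough illustration of the list-sphere-decoding idea described above (a depth-first tree search that keeps the best L leaves and prunes branches whose partial distance already exceeds the worst metric in the list), here is a generic sketch for a real-valued system y ≈ Hx with a finite symbol alphabet. It is not the SCMA-specific factor-graph decoder of the paper; all names are ours.

import heapq
import numpy as np

def list_sphere_decode(y, H, alphabet, list_size):
    # QR-decompose H so the search runs on an upper-triangular system,
    # then do a depth-first search from the last symbol level upwards.
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = []            # heap of (-metric, tiebreak, candidate): best `list_size` leaves
    x = np.zeros(n)
    tiebreak = [0]

    def dfs(level, ped):
        # A full list defines the sphere radius (the worst kept metric).
        radius = -best[0][0] if len(best) == list_size else np.inf
        if ped >= radius:
            return                       # prune: branch cannot enter the list
        if level < 0:
            tiebreak[0] += 1
            heapq.heappush(best, (-ped, tiebreak[0], x.copy()))
            if len(best) > list_size:
                heapq.heappop(best)      # drop the current worst leaf
            return
        interference = R[level, level + 1:] @ x[level + 1:]
        for s in alphabet:
            x[level] = s
            inc = (z[level] - interference - R[level, level] * s) ** 2
            dfs(level - 1, ped + inc)

    dfs(n - 1, 0.0)
    return sorted(((-negm, cand) for negm, _, cand in best), key=lambda t: t[0])

# Toy usage: 4 BPSK symbols through a random channel.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x_true = rng.choice([-1.0, 1.0], 4)
y = H @ x_true + 0.1 * rng.standard_normal(4)
for metric, cand in list_sphere_decode(y, H, [-1.0, 1.0], list_size=4):
    print(metric, cand)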

123 citations


Journal ArticleDOI
TL;DR: Five fast parallel decoders corresponding to different frozen-bit sequences are presented to improve the decoding speed of polar codes; implementing them achieves significant latency reduction without tangibly altering the bit-error-rate performance of the code.
Abstract: The decoding latency of polar codes can be reduced by implementing fast parallel decoders in the last stages of decoding. In this letter, we present five such decoders corresponding to different frozen-bit sequences to improve the decoding speed of polar codes. Implementing them achieves significant latency reduction without tangibly altering the bit-error-rate performance of the code.

84 citations


Posted Content
TL;DR: A recurrent neural network architecture for decoding linear block codes, which shows bit error rate results comparable to the feed-forward neural network with significantly fewer parameters and demonstrates improved performance over belief propagation on sparser Tanner graph representations of the codes.
Abstract: Designing a practical, low complexity, close to optimal, channel decoder for powerful algebraic codes with short to moderate block length is an open research problem. Recently it has been shown that a feed-forward neural network architecture can improve on standard belief propagation decoding, despite the large example space. In this paper we introduce a recurrent neural network architecture for decoding linear block codes. Our method shows bit error rate results comparable to the feed-forward neural network with significantly fewer parameters. We also demonstrate improved performance over belief propagation on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the RNN decoder can be used to improve the performance or alternatively reduce the computational complexity of the mRRD algorithm for low complexity, close to optimal, decoding of short BCH codes.

78 citations


Proceedings ArticleDOI
19 Mar 2017
TL;DR: In this paper, the authors proposed a fast decoder for the Rate-1 constituent node to reduce the number of time-steps needed to decode polar codes, while exactly matching the error-correction performance of SCL and SSCL decoding.
Abstract: Polar codes are capacity-achieving error correcting codes that can be decoded through the successive-cancellation algorithm. To improve its error-correction performance, a list-based version called successive-cancellation list (SCL) has been proposed in the past, which however substantially increases the number of time-steps in the decoding process. The simplified SCL (SSCL) decoding algorithm exploits constituent codes within the polar code structure to greatly reduce the required number of time-steps without introducing any error-correction performance loss. In this paper, we propose a faster decoding approach for one of these constituent codes, the Rate-1 node. We use this Rate-1 node decoder to develop Fast-SSCL. We demonstrate that only a list-size-bound number of bits needs to be estimated in Rate-1 nodes, and that Fast-SSCL exactly matches the error-correction performance of SCL and SSCL. This technique can greatly reduce the total number of time-steps needed for polar code decoding: analysis on a set of case studies shows that Fast-SSCL requires up to 66.6% fewer time-steps than SSCL and 88.1% fewer than SCL.
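
The key observation is that a list of size L can change its composition through at most L-1 path splits within a Rate-1 (all-information) node, so only a list-size-bound number of bit estimations need to fork paths; the remaining bits can be taken by hard decision on each surviving path. A tiny bookkeeping sketch under our reading of the stated result (min(L-1, node length) splits; names are ours):

def rate1_splitting_steps(node_len, list_size):
    # Fast-SSCL: only min(L-1, node length) bit estimations in a Rate-1
    # node require path splitting; the rest are per-path hard decisions.
    return min(list_size - 1, node_len)

# SSCL forks at every bit of the node; Fast-SSCL only where it can matter.
L = 4
for nv in (8, 32, 128):
    print(f"node length {nv:4d}: SSCL splits {nv}, Fast-SSCL splits {rate1_splitting_steps(nv, L)}")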

69 citations


Proceedings ArticleDOI
03 May 2017
TL;DR: This paper investigates the performance of Convolutional, Turbo, Low-Density Parity-Check (LDPC), and Polar codes in terms of the Bit-Error-Ratio for different information block lengths and code rates, spanning multiple scenarios of reliability and high throughput.
Abstract: Channel coding is a fundamental building block in any communications system. High performance codes, with low complexity encoding and decoding are a must-have for future wireless systems, with requirements ranging from the operation in highly reliable scenarios, utilizing short information messages and low code rates, to high throughput scenarios, working with long messages, and high code rates. We investigate in this paper the performance of Convolutional, Turbo, Low-Density Parity-Check (LDPC), and Polar codes, in terms of the Bit-Error-Ratio (BER) for different information block lengths and code rates, spanning the multiple scenarios of reliability and high throughput. We further investigate their convergence behavior with respect to the number of iterations (turbo and LDPC), and list size (polar), as well as how their performance is impacted by the approximate decoding algorithms.

69 citations


Proceedings ArticleDOI
21 May 2017
TL;DR: In this article, a generalized construction for binary polar codes based on mixing multiple kernels of different sizes was proposed, yielding polar codes whose block lengths are not restricted to powers of a single integer.
Abstract: We propose a generalized construction for binary polar codes based on mixing multiple kernels of different sizes, in order to construct polar codes whose block lengths are not restricted to powers of a single integer. This results in a multi-kernel polar code with very good performance, while the encoding complexity remains low and the decoding follows the same general structure as for the original Arikan polar codes. The construction provides numerous practical advantages, as more code lengths can be achieved without puncturing or shortening. We observe numerically that the error-rate performance of our construction outperforms state-of-the-art constructions using puncturing methods.
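
Concretely, the transformation matrix of a multi-kernel polar code is a Kronecker product of small kernels, so a length-6 code can be built from one 2x2 and one 3x3 kernel. A minimal encoding sketch follows; the 3x3 kernel below is one illustrative polarizing choice and may differ from the kernel used in the paper.

import numpy as np

T2 = np.array([[1, 0],
               [1, 1]])                  # Arikan's 2x2 kernel
T3 = np.array([[1, 1, 1],
               [1, 0, 1],
               [0, 1, 1]])               # an illustrative 3x3 binary kernel

G = np.kron(T2, T3) % 2                  # transform for block length N = 2 * 3 = 6
u = np.array([0, 1, 0, 1, 1, 0])         # input vector (frozen + information bits)
x = u @ G % 2                            # multi-kernel polar encoding
print(x)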

62 citations


Journal ArticleDOI
Yingquan Wu
TL;DR: A novel generalized integrated interleaving scheme over binary Bose–Chaudhuri–Hocquenghem codes is proposed, a lower bound on the minimum distance is revealed, and encoding and decoding algorithms similar to those for Reed-Solomon codes are derived.
Abstract: Generalized integrated interleaved codes refer to two-level Reed-Solomon codes, such that each code of the nested layer belongs to a different subcode of the first-layer code. In this paper, we first devise an efficient decoding algorithm by ignoring first-layer miscorrection and by intelligently reusing preceding results during each iteration of a decoding attempt. Neglecting first-layer miscorrection also enables us to explicitly and neatly formulate the decoding failure probability. We next derive an erasure correcting algorithm for redundant arrays of independent disks systems. We further construct an algebraic systematic encoding algorithm, which had been an open problem. Analogously, we propose a novel generalized integrated interleaving scheme over binary Bose–Chaudhuri–Hocquenghem codes, reveal a lower bound on the minimum distance, and derive encoding and decoding algorithms similar to those for Reed-Solomon codes.

49 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: In this paper, a method for constructing polar subcodes is presented, which aims at minimizing the number of low-weight codewords in the obtained codes as well as at improving performance under list or sequential decoding.
Abstract: A method for the construction of polar subcodes is presented, which aims at minimizing the number of low-weight codewords in the obtained codes, as well as at improving performance under list or sequential decoding. Simulation results are provided, which show that the obtained codes outperform LDPC and turbo codes.

48 citations


Journal ArticleDOI
TL;DR: This letter uses the evolution of messages, i.e., log-likelihood ratios, of unfrozen bits during iterative BP decoding of polar codes to identify weak bit-channels; the modified codes show improved performance not only under BP decoding, but also under SCL decoding.
Abstract: Polar code constructions based on mutual information or Bhattacharyya parameters of bit-channels are intended for hard-output successive cancellation (SC) decoders, and thus might not be well designed for use with other decoders, such as soft-output belief propagation (BP) decoders or successive cancellation list (SCL) decoders. In this letter, we use the evolution of messages, i.e., log-likelihood ratios, of unfrozen bits during iterative BP decoding of polar codes to identify weak bit-channels, and then modify the conventional polar code construction by swapping these bit-channels with strong frozen bit-channels. The modified codes show improved performance not only under BP decoding, but also under SCL decoding. The code modification is shown to reduce the number of low-weight codewords, with and without CRC concatenation.
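
A schematic of the modification step under our reading of the letter: given a per-bit-channel reliability metric extracted from the BP message evolution, the weakest unfrozen channels are swapped with the strongest frozen ones. The sketch below is generic; the metric itself (LLR evolution of unfrozen bits) is what the letter actually tracks, and all names here are ours.

def swap_weak_channels(info_set, frozen_set, reliability, n_swap):
    # reliability: dict mapping bit-channel index -> estimated reliability
    # (e.g., derived from LLR evolution during BP decoding).
    weak_info = sorted(info_set, key=lambda i: reliability[i])[:n_swap]
    strong_frozen = sorted(frozen_set, key=lambda i: reliability[i],
                           reverse=True)[:n_swap]
    new_info = (set(info_set) - set(weak_info)) | set(strong_frozen)
    new_frozen = (set(frozen_set) - set(strong_frozen)) | set(weak_info)
    return sorted(new_info), sorted(new_frozen)

# Toy usage on 8 bit-channels with 4 information positions.
rel = dict(enumerate([0.1, 0.9, 0.4, 0.95, 0.2, 0.85, 0.6, 0.99]))
print(swap_weak_channels([3, 5, 6, 7], [0, 1, 2, 4], rel, n_swap=1))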

Book ChapterDOI
26 Jun 2017
TL;DR: In this paper, the authors quantise other information set decoding algorithms by using quantum walk techniques which were devised for the subset-sum problem in [6], improving the worst-case complexity 2^(0.06035n) of Bernstein's algorithm to 2^(0.05869n).
Abstract: The security of code-based cryptosystems such as the McEliece cryptosystem relies primarily on the difficulty of decoding random linear codes. The best decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoding techniques. It is also important to assess the security of such cryptosystems against a quantum computer. This research thread started in [23] and the best algorithm to date has been Bernstein's quantising [5] of the simplest information set decoding algorithm, namely Prange's algorithm. It consists in applying Grover's quantum search to obtain a quadratic speed-up of Prange's algorithm. In this paper, we quantise other information set decoding algorithms by using quantum walk techniques which were devised for the subset-sum problem in [6]. This results in improving the worst-case complexity of 2^(0.06035n) of Bernstein's algorithm to 2^(0.05869n) with the best algorithm presented here (where n is the code length).

Journal ArticleDOI
TL;DR: It is shown that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes, a class of spatially coupled generalized LDPC codes with Bose–Chaudhuri–Hocquenghem component codes.
Abstract: A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability decoding of their component codes. In this paper, we show that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes. Specifically, a class of spatially coupled generalized LDPC codes with Bose–Chaudhuri–Hocquenghem component codes is considered, and it is observed that, in the high-rate regime, they can approach capacity under the proposed iterative HDD. These codes can be seen as generalized product codes and are closely related to braided block codes. An iterative HDD algorithm is proposed that enables one to analyze the performance of these codes via density evolution.
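
To make the iterative HDD idea concrete, the sketch below alternates bounded-distance decoding of the rows and columns of a product code, here with a (7,4) Hamming component code that corrects a single error per row or column. This is a toy illustration of the decoding principle, not the spatially coupled construction analyzed in the paper.

import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# expansion of j+1, so the syndrome directly addresses the error position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming_decode(r):
    # Bounded-distance decoding: correct at most one bit error.
    s = H @ r % 2
    pos = s[0] + 2 * s[1] + 4 * s[2]
    c = r.copy()
    if pos:
        c[pos - 1] ^= 1
    return c

def iterative_hdd(received, max_iters=10):
    # Alternate hard-decision decoding of all rows, then all columns,
    # until a full sweep changes nothing.
    C = received.copy()
    for _ in range(max_iters):
        before = C.copy()
        for i in range(C.shape[0]):
            C[i, :] = hamming_decode(C[i, :])
        for j in range(C.shape[1]):
            C[:, j] = hamming_decode(C[:, j])
        if np.array_equal(C, before):
            break
    return C

# Toy usage: corrupt the all-zero 7x7 product codeword with three errors
# (two of them in the same row, which row decoding alone cannot handle).
R = np.zeros((7, 7), dtype=int)
R[0, 0] = R[0, 3] = R[4, 3] = 1
print(iterative_hdd(R))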

Journal ArticleDOI
TL;DR: This paper proposes a set of novel techniques that aim at reducing the high memory cost of SC-based decoders and can be applied on top of existing memory reduction techniques.
Abstract: Polar codes have gained a great amount of attention in the past few years, since they can provably achieve the capacity of a symmetric channel with a low-complexity encoding and decoding algorithm. As a result, polar codes have been selected as a coding scheme in the 5th generation wireless communication standard. Among different decoding schemes, successive-cancellation (SC) and SC list decoding yield a good trade-off between error-correction performance and hardware implementation cost. However, both families of algorithms have large memory requirements. In this paper, we propose a set of novel techniques that aim at reducing the high memory cost of SC-based decoders. These techniques are orthogonal to the specific decoder architecture considered, and can be applied on top of existing memory reduction techniques. We have designed and implemented different polar decoders on FPGA and also synthesized them in 65 nm TSMC CMOS technology to verify the effectiveness of the proposed memory reduction techniques. The benchmark decoders yield comparable or lower area occupation than the state of the art: the results show that the proposed methods can save up to 46% memory area occupation and 42% total area occupation compared with benchmark SC-based decoders.

Journal ArticleDOI
01 Mar 2017
TL;DR: The effects of decoding and processing costs in an energy harvesting two-way channel are studied, and optimal offline power scheduling policies that maximize the sum throughput by a given deadline are designed.
Abstract: We study the effects of decoding and processing costs in an energy harvesting two-way channel. We design the optimal offline power scheduling policies that maximize the sum throughput by a given deadline, subject to energy causality constraints, decoding causality constraints, and processing costs at both users. In this system, each user spends energy to transmit data to the other user, and also to decode data coming from the other user; that is, each user divides its harvested energy for transmission and reception. Further, each user incurs a processing cost per unit time as long as it communicates. The power needed for decoding the incoming data is modeled as an increasing convex function of the incoming data rate; and the power needed to be on, i.e., the processing cost, is modeled to be a constant per unit time. We solve this problem by first considering the cases with decoding costs only and processing costs only individually. In each case, we solve the single energy arrival scenario, and then use the solution's insights to provide an iterative algorithm that solves the multiple energy arrivals scenario. Then, we consider the general case with both decoding and processing costs in a single setting, and solve it for the most general scenario of multiple energy arrivals.

Posted Content
TL;DR: The authors propose a trainable decoding algorithm in which an actor is trained to find a translation that maximizes an arbitrary decoding objective, using a variant of deterministic policy gradient (DPG).
Abstract: Recent research in neural machine translation has largely focused on two aspects: neural network architectures and end-to-end learning algorithms. The problem of decoding, however, has received relatively little attention from the research community. In this paper, we solely focus on the problem of decoding given a trained neural machine translation model. Instead of trying to build a new decoding algorithm for any specific decoding objective, we propose the idea of a trainable decoding algorithm in which we train a decoding algorithm to find a translation that maximizes an arbitrary decoding objective. More specifically, we design an actor that observes and manipulates the hidden state of the neural machine translation decoder and propose to train it using a variant of deterministic policy gradient. We extensively evaluate the proposed algorithm using four language pairs and two decoding objectives, and show that we can indeed train a trainable greedy decoder that generates a better translation (in terms of a target decoding objective) with minimal computational overhead.

Journal ArticleDOI
TL;DR: Simulation results show that for both the regular and irregular LDPC codes, the ADMM decoding using LUT-based projection can substantially reduce the decoding time while maintaining the error rate performance at a comparatively large memory cost.
Abstract: Linear programming decoding with the alternating direction method of multipliers (ADMM) is a promising decoding technique for low-density parity-check (LDPC) codes, where the computational complexity of Euclidean projections onto check polytopes becomes a prominent problem. In this paper, the problem is circumvented by building lookup tables (LUTs) and quantizing the inputs to obtain approximate Euclidean projections at low computational complexity. To tackle the huge memory cost of LUTs, we first propose two commutative compositions of Euclidean projection and self-map, and show the existence of a small quantization range which does not alter the Euclidean projection. Then, we investigate the design and simplification of the LUTs by exploiting the commutative compositions and check node decomposition techniques. An efficient algorithm for the LUT-based projection is demonstrated by using one simplification method. Simulation results show that for both the regular and irregular LDPC codes, the ADMM decoding using LUT-based projection can substantially reduce the decoding time while maintaining the error rate performance at a comparatively large memory cost.

Journal ArticleDOI
TL;DR: BICM PNC systems exhibit different decoding behavior from conventional BICM point-to-point systems, and Gray mapping gives rise to the best PNC rate for MUD-XOR and XOR-CD systems among several bits-to-symbol mappings under study.
Abstract: This paper presents several soft decision iterative decoding schemes for physical-layer network coding (PNC) operated with coded modulation (CM) and bit-interleaved coded modulation (BICM). With respect to PNC operated with CM, we consider network coding-based channel decoding (NC-CD) and multi-user complete decoding (MUD-NC) for PNC decoding at the relay. Their BICM counterparts are XOR-based channel decoding (XOR-CD) and MUD-XOR, respectively. First, we show that, when the decoding is non-iterative, there is a gap between the BICM capacities of both XOR-CD and MUD-XOR under Gray mapping and the capacities of their CM counterparts, NC-CD and MUD-NC. This is in contrast to the conventional point-to-point communication system, for which the BICM capacity with Gray mapping is known to be very close to the CM capacity, without the need for iterative decoding. Second, we investigate the error performance of iteratively decoded BICM XOR-CD and MUD-XOR. Extrinsic information transfer chart analysis and simulation results indicate that for these Gray-mapped BICM PNC systems, iterative decoding can achieve considerable gains over non-iterative decoding. Again, this is in contrast to the Gray-mapped BICM point-to-point communication system, for which iterative decoding provides little gain over non-iterative decoding. We further show that Gray mapping gives rise to the best PNC rate for MUD-XOR and XOR-CD systems among several bits-to-symbol mappings under study. Overall, our results indicate that BICM PNC systems exhibit different decoding behavior from conventional BICM point-to-point systems. This paper serves as a first foray into the investigation of this issue.

Journal ArticleDOI
TL;DR: Simulation results show that, with nonuniform message passing and periodic puncturing, near capacity performance can be maintained throughout a wide range of rates with reasonable decoding complexity and no visible error floors.
Abstract: In this paper, we present a novel sliding window decoding scheme based on iterative Bahl–Cocke–Jelinek–Raviv decoding for braided convolutional codes, a class of turbo-like codes with short constraint length component convolutional codes. The tradeoff between performance and decoding latency is examined and, to reduce decoding complexity, both uniform and nonuniform message passing schedules within the decoding window, along with early stopping rules, are proposed. We also perform a density evolution analysis of sliding window decoding to guide the selection of the window size and message passing schedule. Periodic puncturing is employed to obtain rate-compatible code rates of 1/2 and 2/3 starting from a rate 1/3 mother code and a code rate of 3/4 starting from a rate 1/2 mother code. Simulation results show that, with nonuniform message passing and periodic puncturing, near capacity performance can be maintained throughout a wide range of rates with reasonable decoding complexity and no visible error floors.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: BCH codes and their analogues in the Lee space are utilized to construct explicit and systematic codes that are immune to up to r sticky insertions; the ratio of the number of redundancy bits in the construction to a certain upper bound approaches one as the block length grows large, which implies asymptotic optimality of the construction.
Abstract: The problem of constructing sticky-insertion-correcting codes with efficient encoding and decoding is considered. An (n, M, r) sticky-insertion-correcting code consists of M codewords of length n such that any pattern of up to r sticky insertions can be corrected. We utilize BCH codes and their analogues in the Lee space to construct explicit and systematic codes that are immune to up to r sticky insertions. It is shown that the ratio of the number of redundancy bits in the construction to a certain upper bound approaches one as the block length grows large, which implies asymptotic optimality of the construction.
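
The connection to the Lee space is easiest to see through run lengths: a sticky insertion duplicates an already-present symbol, which increments exactly one entry of the run-length profile by one. Correcting r sticky insertions therefore reduces to correcting r magnitude-one errors on the run-length vector, which is where BCH-like codes in the Lee metric come in. A tiny demonstration of the reduction (our illustration, not the paper's construction):

def run_lengths(bits):
    # Run-length profile of a binary string, e.g. "110100" -> [2, 1, 1, 2].
    runs, prev = [], None
    for b in bits:
        if b == prev:
            runs[-1] += 1
        else:
            runs.append(1)
            prev = b
    return runs

print(run_lengths("110100"))    # [2, 1, 1, 2]
# A sticky insertion duplicates a symbol: "110100" -> "1110100".
# Only one run length changes, and only by +1.
print(run_lengths("1110100"))   # [3, 1, 1, 2]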

Journal ArticleDOI
TL;DR: A first-order analysis is introduced, which provides the expected number of inactivations for an LT code, as a function of the output distribution, the number of input symbols, and the decoding overhead.
Abstract: In this paper, we analyze Luby transform (LT) and Raptor codes under inactivation decoding. A first-order analysis is introduced, which provides the expected number of inactivations for an LT code, as a function of the output distribution, the number of input symbols, and the decoding overhead. The analysis is then extended to the calculation of the distribution of the number of inactivations. In both cases, random inactivation is assumed. The developed analytical tools are then exploited to design LT and Raptor codes, enabling a tight control on the decoding complexity versus failure probability tradeoff. The accuracy of the approach is confirmed by numerical simulations.
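
The quantity being analyzed can be reproduced with a short Monte Carlo sketch: run a peeling decoder and, whenever no output symbol of residual degree one exists, inactivate a randomly chosen unresolved input symbol and continue (the paper likewise assumes random inactivation). Averaging the inactivation counts over many trials estimates the expectation studied analytically in the paper. Everything below, including the toy degree distribution, is our own illustration.

import random

def count_inactivations(k, equations, rng):
    # equations: list of sets of input-symbol indices (output-symbol neighbors).
    # Returns the number of random inactivations needed to resolve all k inputs.
    active = set(range(k))
    inactivations = 0
    while active:
        pivot = None
        for eq in equations:
            live = eq & active
            if len(live) == 1:          # residual degree-1 symbol: peel it
                pivot = next(iter(live))
                break
        if pivot is None:               # decoder is stuck: inactivate at random
            pivot = rng.choice(sorted(active))
            inactivations += 1
        active.discard(pivot)
    return inactivations

def lt_equations(k, m, degree_dist, rng):
    # Draw m output symbols with degrees sampled from degree_dist (degree -> prob).
    degs, probs = zip(*degree_dist.items())
    return [set(rng.sample(range(k), rng.choices(degs, probs)[0]))
            for _ in range(m)]

rng = random.Random(1)
k, overhead = 100, 0.1
dist = {1: 0.05, 2: 0.5, 3: 0.25, 4: 0.2}   # toy output degree distribution
trials = [count_inactivations(k, lt_equations(k, int(k * (1 + overhead)), dist, rng), rng)
          for _ in range(200)]
print(sum(trials) / len(trials))            # estimated expected number of inactivations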

Journal ArticleDOI
TL;DR: A new reliable physical-layer network coding and cascade-computation decoding scheme is proposed that significantly outperforms the iterative detection and decoding scheme with a single iteration, by 1.7 dB in the two-user case.
Abstract: This paper studies non-orthogonal transmission over a K-user fading multiple access channel. We propose a new reliable physical-layer network coding and cascade-computation decoding scheme. In the proposed scheme, K single-antenna users encode their messages with the same practical channel code and QAM modulation, and transmit simultaneously. The receiver chooses K linear coefficient vectors and computes the associated K layers of finite-field linear message combinations in a cascade manner. Finally, the K users' messages are recovered by solving the K linear equations. The proposed scheme can be regarded as a generalized onion peeling. We study the optimal network coding coefficient vectors used in the cascade computation. Numerical results show that the performance of the proposed scheme approaches that of the iterative maximum a posteriori probability detection and decoding scheme, but without receiver iteration. This results in considerable reductions in complexity and processing delay, and easier implementation. Our proposed scheme significantly outperforms the iterative detection and decoding scheme with a single iteration, for example, by 1.7 dB in the two-user case. The proposed scheme provides a competitive solution for non-orthogonal multiple access.

Journal ArticleDOI
TL;DR: In this article, the authors investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms.
Abstract: Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: A new partial decoding algorithm for m-interleaved Reed-Solomon (IRS) codes is proposed that can decode, with high probability, a random error of relative weight 1 − R^(m/(m+1)) at all code rates R, in time polynomial in the code length n.
Abstract: We propose a new partial decoding algorithm for m-interleaved Reed-Solomon (IRS) codes that can decode, with high probability, a random error of relative weight 1 − R^(m/(m+1)) at all code rates R, in time polynomial in the code length n. For m > 2, this is an asymptotic improvement over the previous state-of-the-art for all rates, and the first improvement for R > 1/3 in the last 20 years. The method combines collaborative decoding of IRS codes with power decoding up to the Johnson radius.
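
For concreteness, the new radius 1 − R^(m/(m+1)) can be compared numerically with the Guruswami–Sudan radius 1 − √R (the previous best at high rates) and the commonly cited collaborative IRS radius (m/(m+1))(1 − R); the latter two expressions are standard results quoted here for context, not taken from this abstract.

# Relative decoding radii at rate R with interleaving degree m.
R, m = 0.5, 3
new_radius = 1 - R ** (m / (m + 1))        # this paper
gs_radius = 1 - R ** 0.5                   # Guruswami-Sudan list decoding
collab_radius = (m / (m + 1)) * (1 - R)    # classical collaborative IRS decoding
print(f"new {new_radius:.3f}  vs  GS {gs_radius:.3f}  vs  collaborative {collab_radius:.3f}")
# For R = 0.5, m = 3: new ~0.405 exceeds both GS ~0.293 and collaborative 0.375.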

Journal ArticleDOI
TL;DR: Simulation results demonstrate the effectiveness of the proposed hybrid decoding algorithm in improving the reliability of STT-MRAM in the presence of both write errors and read errors, supporting its potential applications such as replacing DRAM.
Abstract: Spin-torque transfer magnetic random access memory (STT-MRAM) is a promising non-volatile memory technology widely considered to replace dynamic random access memory (DRAM). However, there still exist critical technical challenges to be tackled. For example, process variation and thermal fluctuation may lead to both write errors and read errors, severely affecting the reliability of the memory array. In this paper, we first propose a novel cascaded channel model for STT-MRAM that facilitates fast error rate simulations and more importantly the theoretical design and analysis of memory sensing and channel coding schemes. We analyze the raw bit error rate and probabilities of dominant error events, and derive the maximum likelihood decision criterion and the log-likelihood ratio of the cascaded channel. Based on these works, we further propose a two-stage hybrid decoding algorithm for extended Hamming codes for STT-MRAM. Simulation results demonstrate the effectiveness of the proposed hybrid decoding algorithm for improving the reliability of STT-MRAM in the presence of both write errors and read errors and supporting its potential applications, such as replacing DRAM.
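
One piece of this pipeline is easy to illustrate generically: when the write process flips the stored bit with probability p_w (a binary symmetric stage) and the read process yields an LLR for the stored bit, the LLR of the originally written bit follows from the standard rule for passing soft information through a BSC, tanh(L/2) = (1 − 2 p_w) tanh(λ/2). This is a generic sketch of that cascade step, not the paper's full channel model.

import math

def cascaded_llr(read_llr, p_write):
    # LLR of the written bit, given the read LLR of the stored bit and a
    # write stage that flips the bit with probability p_write:
    # tanh(L/2) = (1 - 2*p_write) * tanh(read_llr/2).
    return 2 * math.atanh((1 - 2 * p_write) * math.tanh(read_llr / 2))

# Write errors attenuate the reliability delivered by the read stage.
print(cascaded_llr(4.0, 0.0))    # no write errors: LLR passes through unchanged
print(cascaded_llr(4.0, 1e-3))   # slight attenuation
print(cascaded_llr(4.0, 0.1))    # strong attenuation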

Journal ArticleDOI
TL;DR: A robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes; the method has high decoding accuracy and strong robustness to surface color and complex textures.

Journal ArticleDOI
TL;DR: This paper presents two low-complexity edge-based scheduling schemes, referred to as the e-Flooding and e-Shuffled schedules, for the belief-propagation (BP) decoding of low-density parity-check and Reed–Solomon codes, and shows that these schemes reduce the BP decoding complexity by more than 90% compared with the prior-art BP schedules.
Abstract: This paper presents two low-complexity edge-based scheduling schemes, referred to as the e-Flooding and e-Shuffled schedules, for the belief-propagation (BP) decoding of low-density parity-check and Reed–Solomon codes. The proposed schedules selectively update the edges of the code graph based on the run-time reliability of variable and check nodes. Specifically, new message updates are propagated exclusively along the unreliable edges of the code graph. This reduces the decoding complexity of the BP algorithm as only a partial set of message updates is computed per decoding iteration. Besides, restricting the flow of message updates may also preclude the occurrence of some short graph cycles, which helps to preserve the BP message independence at certain variable and check nodes. Using numerical simulations, it is shown that the proposed edge-based schedules reduce the BP decoding complexity by more than 90% compared with the prior-art BP schedules, while simultaneously improving the error-rate performance, at medium-to-high signal-to-noise ratio over the additive white Gaussian noise channel.

Journal ArticleDOI
TL;DR: This work proposes a non-uniform pragmatic decoding schedule (parallel and serial) that does not require any additional calculations (e.g., BER estimates) within the decoding process and results in a significant reduction in complexity without any loss in performance.
Abstract: Spatially coupled low-density parity-check codes can be decoded using a graph-based message passing algorithm applied across the total length of the coupled graph. However, considering practical constraints on decoding latency and complexity, a sliding window decoding approach is normally preferred. In order to reduce decoding complexity compared with standard parallel decoding schedules, serial schedules can be applied within a decoding window. However, uniform serial schedules within a window do not provide the expected reduction in complexity. Hence, we propose non-uniform schedules (parallel and serial) based on measured improvements in the estimated bit error rate (BER). We show that these non-uniform schedules result in a significant reduction in complexity without any loss in performance. Furthermore, based on observations made using density evolution, we propose a non-uniform pragmatic decoding schedule (parallel and serial) that does not require any additional calculations (e.g., BER estimates) within the decoding process.

Journal ArticleDOI
TL;DR: This letter investigates node-wise scheduling for ADMM decoders, named NS-ADMM, and proposes a reduced-complexity method that avoids the Euclidean projections involved in the calculation of message residuals; it converges much faster than flooding and layered scheduling while keeping a lower complexity.
Abstract: Similar to the belief propagation decoder, linear programming decoding based on the alternating direction method of multipliers (ADMM) can also be seen as an iterative message-passing decoding algorithm. How to schedule messages efficiently is an important aspect since it will influence the convergence rate of iterative decoders. In this letter, we investigate the node-wise scheduling for ADMM decoders, named NS-ADMM. In particular, we propose a reduced-complexity method for the NS-ADMM decoder by avoiding Euclidean projections involved in the calculation of message residuals. Simulation results show that the proposed method converges much faster than the flooding and layered scheduling while keeping a lower complexity when compared with the NS-ADMM decoder.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: There is a trade-off between CRC length and FER performance, and for a given target FER there is a minimum CRC length that satisfies the FER constraint at high signal-to-noise ratio (SNR).
Abstract: Concatenation of polar codes with cyclic redundancy check (CRC) codes, together with successive cancellation list (SCL) decoding, is known to be an effective approach that can significantly enhance the performance of the original polar codes. Most studies on the concatenation of CRC and polar codes, however, pay little attention to the structure of the CRC codes themselves, even though a longer CRC may lead to a loss in information rate. In this work, we investigate the effect of the CRC length on CRC-concatenated polar code performance by developing an analytical bound for the frame error rate (FER) after CRC-assisted list decoding. As a result, we reveal that there is a trade-off between the CRC length and FER performance: for a given target FER, there is a minimum CRC length that satisfies the FER constraint at high signal-to-noise ratio (SNR). The validity of our analytical framework is confirmed by extensive simulations over an additive white Gaussian noise (AWGN) channel. The results thus offer a useful guideline when designing CRC codes for polar codes with SCL decoding.
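
The flavor of the trade-off can be seen from a crude high-SNR floor: if each of the L − 1 wrong surviving paths passes a c-bit CRC independently with probability 2^(−c), the undetected-error floor is about (L − 1) · 2^(−c), and a minimum CRC length for a target FER follows by inverting this. This back-of-the-envelope sketch is our simplification, not the analytical bound derived in the paper.

import math

def min_crc_length(list_size, target_fer):
    # Smallest c with (L - 1) * 2^(-c) <= target_fer, i.e. the crude
    # high-SNR floor where each wrong surviving path passes a random
    # c-bit CRC with probability 2^(-c).
    return math.ceil(math.log2((list_size - 1) / target_fer))

for L in (2, 8, 32):
    print(f"list size {L:2d}: need roughly a {min_crc_length(L, 1e-5)}-bit CRC for FER 1e-5")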