
Showing papers on "Sequential decoding published in 2016"


Posted Content
TL;DR: A simple, fast decoding algorithm that fosters diversity in neural generation by adding an inter-sibling ranking penalty and is capable of automatically adjusting its diversity decoding rates for different inputs using reinforcement learning (RL).
Abstract: In this paper, we propose a simple, fast decoding algorithm that fosters diversity in neural generation. The algorithm modifies the standard beam search algorithm by adding an inter-sibling ranking penalty, favoring choosing hypotheses from diverse parents. We evaluate the proposed model on the tasks of dialogue response generation, abstractive summarization and machine translation. We find that diverse decoding helps across all tasks, especially those for which reranking is needed. We further propose a variation that is capable of automatically adjusting its diversity decoding rates for different inputs using reinforcement learning (RL). We observe a further performance boost from this RL technique. This paper includes material from the unpublished script "Mutual Information and Diverse Decoding Improve Neural Machine Translation" (Li and Jurafsky, 2016).
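To make the inter-sibling ranking penalty concrete, here is a minimal sketch of one step of diversity-promoting beam search. The toy `step_logprobs` distribution and the zero-indexed rank penalty are illustrative assumptions, not the authors' implementation.

```python
import math

def diverse_beam_step(beams, step_logprobs, beam_size, gamma=1.0):
    """One step of beam search with an inter-sibling ranking penalty.

    beams: list of (prefix_tokens, cumulative_logprob) pairs.
    step_logprobs: prefix -> {token: log P(token | prefix)} (toy stand-in for a NN).
    gamma: penalty per sibling rank, discouraging many children of one parent.
    """
    candidates = []
    for prefix, score in beams:
        # Rank this parent's children best-first; the k-th sibling (k = 0, 1, ...)
        # pays a penalty of gamma * k, so lower-ranked siblings are demoted and
        # hypotheses from diverse parents are favored.
        ranked = sorted(step_logprobs(prefix).items(), key=lambda kv: -kv[1])
        for rank, (tok, lp) in enumerate(ranked):
            candidates.append((prefix + [tok], score + lp - gamma * rank))
    candidates.sort(key=lambda c: -c[1])
    return candidates[:beam_size]

# Toy usage with a fixed next-token distribution (purely illustrative).
def step_logprobs(prefix):
    return {t: math.log(p) for t, p in {"a": 0.6, "b": 0.3, "c": 0.1}.items()}

beams = [(["<s>"], 0.0)]
for _ in range(3):
    beams = diverse_beam_step(beams, step_logprobs, beam_size=2)
print(beams)
```

With gamma = 0 this reduces to standard beam search; the paper's RL variant learns to adjust the diversity rate, represented here by gamma, per input.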

253 citations


Journal ArticleDOI
TL;DR: A speed-up technique for successive-cancellation list decoding of polar codes that is exact for a list size of 2, while its approximations bring negligible error-correction performance degradation (<0.05 dB) for other list sizes.
Abstract: Polar codes are a recently discovered family of capacity-achieving error-correcting codes. Among the proposed decoding algorithms, successive-cancellation list decoding guarantees the best error-correction performance with codes of moderate lengths, but it yields low throughput. Speed-up techniques have been proposed in the past: most of them rely on approximations that degrade the error-correction capability of the algorithm. We propose a speed-up technique for successive-cancellation list decoding of polar codes that is exact for a list size of 2, while its approximations bring negligible error-correction performance degradation (<0.05 dB) for other list sizes. The resulting decoder achieves a speed-up of $3.16\times$, at the cost of 14.2% in area occupation.

110 citations


Proceedings ArticleDOI
Huayi Zhou, Chuan Zhang, Wenqing Song, Shugong Xu, Xiaohu You
15 May 2016
TL;DR: The segmented CRC-aided successive cancellation list (SCA-SCL) polar decoding scheme is proposed for a better tradeoff between performance and complexity; at an SNR of 0.5 dB, this approach provides as much as 41.65% complexity reduction and similar decoding performance compared to state-of-the-art schemes.
Abstract: Because of channel noise, channel coding is an indispensable part of a mobile communication system and the essential guarantee of reliable, accurate, and effective transmission of information. As one of the most competitive channel-code candidates for 5th generation (5G) mobile communication, polar codes are the first codes that provably achieve the symmetric capacity of binary-input discrete memoryless channels (B-DMCs). In this paper, the segmented CRC-aided successive cancellation list (SCA-SCL) polar decoding scheme is proposed for a better tradeoff between performance and complexity. Numerical results on the binary-input additive white Gaussian noise channel (BI-AWGNC) show that, at an SNR of 0.5 dB, this approach provides as much as 41.65% complexity reduction with decoding performance similar to that of state-of-the-art schemes.
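The segmentation idea can be sketched as follows: the information block is split into segments, each protected by its own short CRC, and at every segment boundary the list decoder drops paths whose segment fails its check, so fewer paths are extended afterwards. The CRC-8 polynomial and helper names below are illustrative assumptions, not the paper's exact construction.

```python
def crc8(bits, poly=0x07):
    """Bitwise CRC-8 over a list of 0/1 bits (illustrative polynomial x^8 + x^2 + x + 1)."""
    reg = 0
    for b in bits:
        reg ^= b << 7
        reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return [(reg >> i) & 1 for i in range(7, -1, -1)]

def prune_at_segment_boundary(paths, seg_start, seg_end, crc_len=8):
    """Drop SCL list paths whose just-decoded segment fails its own CRC.

    paths: candidate bit sequences decoded so far; the current segment occupies
    [seg_start, seg_end) with its CRC in the last crc_len positions. Pruning here
    means fewer paths survive into the next segment, which is where the
    complexity reduction comes from.
    """
    ok = [p for p in paths
          if crc8(p[seg_start:seg_end - crc_len]) == p[seg_end - crc_len:seg_end]]
    return ok if ok else paths  # keep the list alive if every segment check fails

# Toy usage: two candidate paths; only the first carries a consistent segment CRC.
good = [1, 0, 1, 1] + crc8([1, 0, 1, 1])
bad = [1, 1, 1, 1] + crc8([1, 0, 1, 1])
print(prune_at_segment_boundary([good, bad], 0, len(good)))
```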

80 citations


Journal ArticleDOI
TL;DR: The error floor performance of the short block length codes is improved by the use of a novel candidate selection metric in code graph construction, and an extended class of the diversity-achieving codes on the challenging block fading channel is proposed and considered with the multipath EMD design.
Abstract: Low-density parity-check (LDPC) codes are capable of achieving excellent performance and provide a useful alternative for high-performance applications. However, at medium to high signal-to-noise ratios, an observable error floor arises from the loss of independence of messages passed under iterative graph-based decoding. In this paper, the error floor performance of short block length codes is improved by the use of a novel candidate selection metric in code graph construction. The proposed multipath extrinsic message degree (EMD) approach avoids harmful structures in the graph by evaluating certain properties of the cycles introduced in each edge placement. We present multipath EMD-based designs for several structured LDPC codes, including quasi-cyclic and irregular repeat accumulate codes. In addition, an extended class of diversity-achieving codes on the challenging block-fading channel is proposed and considered with the multipath EMD design. This combined approach is demonstrated to provide gains in decoder convergence and error-rate performance. A simulation study evaluates the performance of the proposed and existing state-of-the-art methods.

75 citations


Journal ArticleDOI
TL;DR: This work proposes a novel coding scheme called multi-CRC polar code for significant reduction of memory size and decoding delay but with negligible performance loss and applies this scheme to hybrid automatic repeat request (HARQ) system to aid retransmission.
Abstract: Polar codes under successive cancellation list (SCL) decoding are capable of achieving almost the same or better performance than turbo codes or low-density parity-check codes with the help of a single cyclic redundancy check (CRC). This decoding scheme, however, suffers from very high complexity, with long delay and large memory requirements. Motivated by this research problem, we propose a novel coding scheme called the multi-CRC polar code for significant reduction of memory size and decoding delay with negligible performance loss. Our analysis and simulation show that the memory size and decoding delay of SCL decoding can be roughly halved. We also apply this scheme to a hybrid automatic repeat request (HARQ) system to aid retransmission and show that the throughput of the multi-CRC polar code is higher than that of the single-CRC one.

75 citations


Journal ArticleDOI
TL;DR: A new class of decoders obtained by applying the alternating direction method of multipliers (ADMM) algorithm to a set of non-convex optimization problems are constructed by adding a penalty term to the objective of LP decoding to make pseudocodewords, which are non-integer vertices of the LP relaxation, more costly.
Abstract: Linear programming (LP) decoding for low-density parity-check codes was introduced by Feldman et al. and has been shown to have theoretical guarantees in several regimes. Furthermore, it has been reported in the literature—via simulation and via instanton analysis—that LP decoding displays better error rate performance at high signal-to-noise ratios (SNR) than does belief propagation (BP) decoding. However, at low SNRs, LP decoding is observed to have worse performance than BP. In this paper, we seek to improve LP decoding at low SNRs while maintaining LP decoding’s high SNR performance. Our main contribution is a new class of decoders obtained by applying the alternating direction method of multipliers (ADMM) algorithm to a set of non-convex optimization problems. These non-convex problems are constructed by adding a penalty term to the objective of LP decoding. The goal of the penalty is to make pseudocodewords, which are non-integer vertices of the LP relaxation, more costly. We name this class of decoders—ADMM penalized decoders. For low and moderate SNRs, we simulate ADMM penalized decoding with $\ell _{1}$ and $\ell _{2}$ penalties. We find that these decoders can outperform both BP and LP decoding. For high SNRs, where it is difficult to obtain data via simulation, we use an instanton analysis and find that, asymptotically, ADMM penalized decoding performs better than BP but not as well as LP. Unfortunately, since ADMM penalized decoding is not a convex program, we have not been successful in developing theoretical guarantees. However, the non-convex program can be approximated using a sequence of linear programs; an approach that yields a reweighted LP decoder. We show that a two-round reweighted LP decoder has an improved theoretical recovery threshold when compared with LP decoding. In addition, we find via simulation that reweighted LP decoding attains significantly lower error rates than LP decoding at low SNRs.
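The effect of the penalty term is easy to see numerically: for any 0/1 vector the ℓ2 term α·Σ(x_i − 0.5)² equals the constant α·n/4, so subtracting it leaves the relative costs of integral codewords unchanged, while fractional pseudocodeword coordinates earn less of the reward and become comparatively more expensive. A small self-contained illustration with a toy cost vector (not from the paper):

```python
import numpy as np

def penalized_cost(x, gamma, alpha):
    # LP objective gamma.x minus an l2 reward for distance from 0.5; for 0/1
    # vectors the reward is the constant alpha * n/4, so codewords keep their
    # relative ordering while fractional pseudocodewords become costlier.
    return gamma @ x - alpha * np.sum((x - 0.5) ** 2)

gamma = np.array([-1.2, 0.8, -0.3, 0.5, -0.9, 0.2, 0.1])  # toy LLR costs
codeword = np.array([1, 0, 1, 0, 1, 0, 0], dtype=float)   # an integral point
pseudo = np.full(7, 0.5)                                  # an extreme fractional vertex

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha}: codeword {penalized_cost(codeword, gamma, alpha):+.2f}, "
          f"pseudocodeword {penalized_cost(pseudo, gamma, alpha):+.2f}")
```

As alpha grows, the pseudocodeword's cost stays put while every integral point's cost drops by the same constant, which is exactly the intended bias.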

70 citations


Proceedings ArticleDOI
23 Mar 2016
TL;DR: It is shown that with careful selection of list sizes and number of partitions, the proposed algorithm can outperform conventional SCL while requiring less memory.
Abstract: Successive-cancellation list (SCL) decoding is an algorithm that provides very good error-correction performance for polar codes. However, its hardware implementation requires a large amount of memory, mainly to store intermediate results. In this paper, a partitioned SCL algorithm is proposed to reduce the large memory requirements of the conventional SCL algorithm. The decoder tree is broken into partitions that are decoded separately. We show that with careful selection of list sizes and number of partitions, the proposed algorithm can outperform conventional SCL while requiring less memory.

70 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: In this article, an optimized metric is proposed to determine the flipping positions within the SCFlip decoder, which improves its ability to find the first error that occurred during the initial SC decoding attempt.
Abstract: This paper focuses on the recently introduced Successive Cancellation Flip (SCFlip) decoder of polar codes. Our contribution is twofold. First, we propose the use of an optimized metric to determine the flipping positions within the SCFlip decoder, which improves its ability to find the first error that occurred during the initial SC decoding attempt. We also show that the proposed metric allows closely approaching the performance of an ideal SCFlip decoder. Second, we introduce a generalisation of the SCFlip decoder to a number ω of nested flips, denoted SCFlip-ω, using a similar optimized metric to determine the positions of the nested flips. We show that the SCFlip-2 decoder yields significant gains in terms of decoding performance and competes with the performance of the CRC-aided SC-List decoder with list size L=4, while having an average decoding complexity similar to that of standard SC decoding at medium to high signal-to-noise ratios.
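The control flow of an SCFlip-style decoder is easy to sketch: run SC once and, if the CRC fails, re-run SC with single candidate positions flipped, trying candidates in the order given by a flip metric. The sketch below uses the baseline metric (ascending |LLR| of the first pass) together with toy stand-ins for the SC decoder and the CRC; the paper's contribution is precisely a better metric than this baseline.

```python
def sc_decode(chan_llrs, flips=frozenset()):
    """Toy stand-in for SC decoding: hard-threshold the channel LLRs and flip the
    requested decision indices (a real SC decoder re-propagates each flip)."""
    bits = [0 if l >= 0 else 1 for l in chan_llrs]
    for i in flips:
        bits[i] ^= 1
    return bits, [abs(l) for l in chan_llrs]

def crc_ok(bits):
    return sum(bits) % 2 == 0  # toy single parity bit standing in for a real CRC

def scflip_decode(chan_llrs, max_flips=8):
    bits, reliab = sc_decode(chan_llrs)
    if crc_ok(bits):
        return bits  # first SC pass already satisfies the check
    # Baseline flip metric: retry the least reliable first-pass decisions first.
    # The paper's optimized metric reorders these candidates so the first error
    # of the initial attempt is found earlier; SCFlip-2 would also try pairs of
    # nested flips.
    for i in sorted(range(len(bits)), key=lambda i: reliab[i])[:max_flips]:
        retry, _ = sc_decode(chan_llrs, frozenset({i}))
        if crc_ok(retry):
            return retry
    return bits  # every extra attempt failed; keep the first-pass output

print(scflip_decode([1.0, -0.3, 0.8]))  # toy LLRs whose first pass fails the check
```

Because extra SC passes happen only on CRC failure, the average complexity stays near that of a single SC decode at medium-to-high SNR, matching the behavior described in the abstract.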

66 citations


Journal ArticleDOI
TL;DR: Numerical results show, for high-rate codes suitable for flash memories, that 4 bits per message and a few iterations are sufficient to approach full belief-propagation decoding, fewer than the 5-7 bits per message typically needed.
Abstract: For low-density parity-check (LDPC) codes widely used in NAND flash memories, the bit-error rate performance is closely tied to the number of bits per message used by the message-passing decoder. This paper describes a technique to generate message-passing decoding mapping functions for LDPC codes using 3 and 4 bits per message. These maps are not derived from belief-propagation decoding or one of its approximations; instead, the maps are based on a channel quantizer that maximizes mutual information. More precisely, the construction technique is a systematic method which uses an optimal quantizer at each step of density evolution to generate message-passing decoding mappings. Numerical results show, for high-rate codes suitable for flash memories, that 4 bits per message and a few iterations (10–20) are sufficient to approach full belief-propagation decoding, fewer than the 5–7 bits per message typically needed. The construction technique is flexible, since it can generate maps for an arbitrary number of bits per message, and can be applied to arbitrary memoryless channels.
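The underlying objective, picking quantizer thresholds that maximize the mutual information between the coded bit and the quantized observation, can be illustrated directly at the channel output. The brute-force search below, over a symmetric 2-bit quantizer for BPSK on an AWGN channel, is a simplified sketch; the paper applies an optimal quantizer at every density-evolution step rather than once at the channel.

```python
import numpy as np
from math import erf, sqrt, log2

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def mutual_info(boundaries, sigma):
    """I(X;Q) for X uniform on {-1,+1} over AWGN(sigma), Q = quantizer cell of y."""
    edges = [-np.inf] + list(boundaries) + [np.inf]
    mi = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # cell probability under each input, and averaged over the two inputs
        p_plus = Phi((hi - 1) / sigma) - Phi((lo - 1) / sigma)
        p_minus = Phi((hi + 1) / sigma) - Phi((lo + 1) / sigma)
        q = 0.5 * (p_plus + p_minus)
        for p in (p_plus, p_minus):
            if p > 0:
                mi += 0.5 * p * log2(p / q)
    return mi

# Brute-force the outer threshold t of a 2-bit symmetric quantizer {-t, 0, +t}.
sigma = 0.8
best = max(((t, mutual_info((-t, 0.0, t), sigma)) for t in np.linspace(0.05, 3, 60)),
           key=lambda tv: tv[1])
print("best outer threshold %.2f gives I(X;Q) = %.4f bits" % best)
```

The same maximization, applied to the message densities at each iteration of density evolution, yields the lookup-table decoding maps the paper constructs.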

61 citations


Journal ArticleDOI
TL;DR: A reduced latency list decoding (RLLD) algorithm for polar codes is proposed, which significantly reduces the decoding latency and, hence, improves throughput, while introducing little performance degradation.
Abstract: While long polar codes can achieve the capacity of arbitrary binary-input discrete memoryless channels when decoded by a low complexity successive-cancellation (SC) algorithm, the error performance of the SC algorithm is inferior for polar codes with finite block lengths. The cyclic redundancy check (CRC)-aided SC list (SCL) decoding algorithm has better error performance than the SC algorithm. However, current CRC-aided SCL decoders still suffer from long decoding latency and limited throughput. In this paper, a reduced latency list decoding (RLLD) algorithm for polar codes is proposed. Our RLLD algorithm performs the list decoding on a binary tree, whose leaves correspond to the bits of a polar code. In existing SCL decoding algorithms, all the nodes in the tree are traversed, and all possibilities of the information bits are considered. Instead, our RLLD algorithm visits much fewer nodes in the tree and considers fewer possibilities of the information bits. When configured properly, our RLLD algorithm significantly reduces the decoding latency and, hence, improves throughput, while introducing little performance degradation. Based on our RLLD algorithm, we also propose a high throughput list decoder architecture, which is suitable for larger block lengths due to its scalable partial sum computation unit. Our decoder architecture has been implemented for different block lengths and list sizes using the TSMC 90-nm CMOS technology. The implementation results demonstrate that our decoders achieve significant latency reduction and area efficiency improvement compared with the other list polar decoders in the literature.

60 citations


Journal ArticleDOI
TL;DR: It is shown that the proposed functional-repair BASIC regenerating codes can asymptotically achieve the fundamental tradeoff curve between storage and repair bandwidth of functional-repair regenerating codes with less computational complexity.
Abstract: In distributed storage systems, regenerating codes can achieve the optimal tradeoff between storage capacity and repair bandwidth. However, a critical drawback of existing regenerating codes, in general, is the high coding and repair complexity, since the coding and repair processes involve expensive multiplication operations in a finite field. In this paper, we present a design framework of regenerating codes, which employ binary addition and bitwise cyclic shift as the elemental operations, named BASIC regenerating codes. The proposed BASIC regenerating codes can be regarded as a concatenated code with the outer code being a binary parity-check code, and the inner code being a regenerating code utilizing the binary parity-check code as the alphabet. We show that the proposed functional-repair BASIC regenerating codes can asymptotically achieve the fundamental tradeoff curve between storage and repair bandwidth of functional-repair regenerating codes, with less computational complexity. Furthermore, we demonstrate that the existing exact-repair product-matrix construction of regenerating codes can be modified to exact-repair BASIC product-matrix regenerating codes with much less encoding, repair, and decoding complexity from the theoretical analysis, and with less encoding time, repair time, and decoding time from the implementation results.
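The arithmetic flavor is captured by a toy encoder that builds parity blocks exclusively from XORs and bitwise cyclic shifts, with Vandermonde-like shift amounts. This is a minimal sketch in the spirit of BASIC codes, not the paper's product-matrix construction.

```python
def cshift(block, s):
    """Cyclic left shift of a tuple of bits by s positions."""
    s %= len(block)
    return block[s:] + block[:s]

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def basic_style_parities(data_blocks, n_parity):
    """Build parity blocks using only the elemental operations of BASIC codes:
    binary addition (XOR) and bitwise cyclic shift. Parity j accumulates the
    i-th data block shifted by i*j, a Vandermonde-like pattern; no finite-field
    multiplications are needed, which is the source of the complexity savings."""
    L = len(data_blocks[0])
    parities = []
    for j in range(n_parity):
        p = (0,) * L
        for i, d in enumerate(data_blocks):
            p = xor(p, cshift(d, i * j))
        parities.append(p)
    return parities

data = [(1, 0, 1, 1, 0), (0, 1, 1, 0, 0), (1, 1, 0, 0, 1)]
print(basic_style_parities(data, n_parity=2))
```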

Journal ArticleDOI
TL;DR: Simulation results show that Conv-polar codes, when decoded with the proposed soft-output multistage iterative decoding algorithm, can outperform stand-alone polar codes decoded with successive cancellation or belief propagation decoding, and the proposed concatenation scheme requires lower memory and decoding complexity than existing methods in the finite-length regime.
Abstract: We analyze interleaved concatenation schemes of polar codes with outer binary BCH codes and convolutional codes. We show that both BCH-polar and Conv-polar codes can have a frame error rate that decays exponentially with the code length for all rates up to capacity, which is a substantial improvement in the error exponent over stand-alone polar codes. Interleaved concatenation with long constraint length convolutional codes is an effective way to leverage the fact that polarization increases the cutoff rate of the channel. Simulation results show that Conv-polar codes when decoded with the proposed soft-output multistage iterative decoding algorithm can outperform stand-alone polar codes decoded with successive cancellation or belief propagation decoding. It may be comparable to stand-alone polar codes with list decoding in the high SNR regime. In addition to this, we show that the proposed concatenation scheme requires lower memory and decoding complexity in comparison to belief propagation and list decoding of polar codes. Practically, the scheme enables rate compatible outer codes which ease hardware implementation. Our results suggest that the proposed method may strike a better balance between performance and complexity compared to existing methods in the finite-length regime.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: This paper starts from non-binary LDPC codes to analyze the principle application of correcting matrix and Tanner diagram and puts forward Min-Max non binary LD PC codes algorithm, which simplifies calculation of non binaryLDPC codes and can effectively promote enhancement of communication in error correction.
Abstract: With the growth of demands on fiber-optical communication systems, transmission volumes have increased, and LDPC codes are used for error-correction coding to guarantee fast and reliable fiber-optical communication. Non-binary LDPC codes correct burst errors and random errors more effectively than binary LDPC codes and match naturally with high-order modulation, which makes them well suited to ultra-high-speed, long-distance optical transmission systems and has made them a focus of research. This paper starts from non-binary LDPC codes, analyzes the principles and application of the parity-check matrix and the Tanner graph, illustrates the encoding and decoding of LDPC codes, and puts forward a Min-Max decoding algorithm for non-binary LDPC codes, which simplifies computation and can effectively enhance error correction in communication. LDPC codes, introduced by Gallager in 1960, are linear block codes that perform well in data transmission and storage; a code is uniquely determined by its generator matrix G or its parity check matrix H, so LDPC codes are commonly defined through H. The matrix H has four characteristic properties: each row contains a small fixed number p of ones; each column contains a small fixed number r of ones; any two rows (or columns) have at most one position in common where both contain a one; and p and r are very small compared with the code length and the number of rows of H. Because the density of ones in H is very low, H is called a low-density parity-check matrix (Figure 1 of the original paper shows an example). Besides the parity-check matrix representation, an LDPC code can also be described by a bipartite (Tanner) graph.
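The Min-Max check-node update mentioned above replaces the sum-product rule's products of probabilities with a min over check configurations of the max incoming reliability, which is what makes non-binary decoding cheaper. A brute-force sketch is shown below; the small prime field is an assumption made here to keep the arithmetic plain (practical decoders use GF(2^m) and fast recursive implementations).

```python
from itertools import product

Q = 5  # illustrative prime field GF(5); real designs typically use GF(2^m)

def minmax_check_to_var(h, msgs, target):
    """Min-Max check-node update, brute force over the check's configurations.

    h: check coefficients; msgs[i][a]: cost of symbol a at variable i (0 = most
    likely); target: index receiving the message. out[a] is the smallest, over
    all assignments of the other variables satisfying sum_i h_i*x_i = 0 (mod Q)
    with x_target = a, of the largest incoming cost in that assignment.
    """
    others = [i for i in range(len(h)) if i != target]
    out = [float("inf")] * Q
    for a in range(Q):
        for combo in product(range(Q), repeat=len(others)):
            s = (h[target] * a + sum(h[i] * x for i, x in zip(others, combo))) % Q
            if s == 0:
                out[a] = min(out[a], max(msgs[i][x] for i, x in zip(others, combo)))
    return out

# Degree-3 check over GF(5): coefficients and two incoming variable-to-check messages.
h = [1, 2, 3]
msgs = [[0, 1.2, 3.0, 2.2, 4.1], [0.5, 0, 1.7, 2.8, 3.3], [0] * 5]  # msgs[2] unused
print(minmax_check_to_var(h, msgs, target=2))
```

The brute force costs O(Q^(d-1)) per check of degree d; the appeal of Min-Max is that, unlike sum-product, it needs only comparisons once implemented with the usual forward-backward recursion.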

Journal ArticleDOI
TL;DR: This brief presents an efficient sorting architecture for successive-cancellation-list decoding of polar codes that requires less than 50% of the compare-and-swap units demanded by the area-efficient sorting networks in the literature.
Abstract: This brief presents an efficient sorting architecture for successive-cancellation-list decoding of polar codes. In order to avoid performing redundant sorting operations on the metrics that are already sorted in the previous step of decoding, the proposed architecture separately processes the sorted metrics and unsorted ones. In addition, the odd–even sort network is adopted as a basic building block to further reduce the hardware complexity while sustaining low latencies for various list sizes. On average, the proposed architecture requires less than 50% of the compare-and-swap units demanded by the area-efficient sorting networks in the literature.
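The saving comes from not re-sorting what is already in order: the L surviving path metrics remain sorted from the previous bit, so only the L newly penalized candidates need a sorting network, followed by one odd-even merge of the two sorted halves. The comparator counts below, generated with Batcher's odd-even networks, illustrate this separate-processing idea; the brief's architecture reaches its <50% figure with further structural reuse.

```python
def oe_merge(lo, hi, r, ops):
    """Batcher odd-even merge network for index range [lo, hi] with stride r."""
    step = r * 2
    if step < hi - lo:
        oe_merge(lo, hi, step, ops)
        oe_merge(lo + r, hi, step, ops)
        ops.extend((i, i + r) for i in range(lo + r, hi - r, step))
    else:
        ops.append((lo, lo + r))

def oe_sort(lo, hi, ops):
    """Odd-even mergesort network for [lo, hi] (size must be a power of two)."""
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        oe_sort(lo, mid, ops)
        oe_sort(mid + 1, hi, ops)
        oe_merge(lo, hi, 1, ops)

def cas_count(build):
    ops = []
    build(ops)
    return len(ops)  # each (i, j) pair is one compare-and-swap unit in hardware

L = 8  # list size: each decoded bit yields 2L candidate path metrics
full = cas_count(lambda ops: oe_sort(0, 2 * L - 1, ops))
# Only the L unsorted (penalized) metrics are sorted; one merge then combines
# them with the L metrics kept sorted from the previous step.
partial = (cas_count(lambda ops: oe_sort(0, L - 1, ops))
           + cas_count(lambda ops: oe_merge(0, 2 * L - 1, 1, ops)))
print(f"CAS units, full sort of {2 * L}: {full}; sort {L} + merge: {partial}")
```

For L = 8 this prints 63 versus 44 compare-and-swap units, before any of the additional sharing the brief exploits.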

Journal ArticleDOI
TL;DR: Simulations show that, for rate 1/2 convolutional codes, this new method notably improves performance; an optimum decoding-based blind recognition method is also described.
Abstract: Existing schemes for blind recognition of channel codes make use of the average log-likelihood ratio (LLR) of each code’s parity checks. There are difficulties in setting necessary thresholds and in theoretical analysis of the method. This letter proposes to use the average likelihood difference (LD) of the parity checks. The computational complexity is reduced while the recognition performance is kept comparable to that of the LLR method. Moreover, an optimum decoding-based blind recognition is also described. Simulations show that, for rate 1/2 convolutional codes, this new method notably improves the performance.
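The statistic rests on a standard soft-decision identity: a parity check over bits with LLRs L_i is satisfied with probability 1/2 + (1/2)·∏ tanh(L_i/2), so the likelihood difference P(satisfied) − P(unsatisfied) is simply ∏ tanh(L_i/2), with no log-domain corrections. A minimal sketch of the resulting statistic (the letter's threshold selection and analysis are omitted):

```python
import numpy as np

def avg_likelihood_difference(llrs, checks):
    """Average likelihood difference (LD) of a candidate code's parity checks.

    llrs: (n_blocks, n) array of channel LLRs; checks: index tuples, one per
    parity check. A valid check averages a clearly positive LD; a check that
    does not hold for the true code averages near zero.
    """
    lds = [np.prod(np.tanh(llrs[:, list(c)] / 2.0), axis=1) for c in checks]
    return float(np.mean(lds))

# Toy demo: BPSK over AWGN with one genuine parity relation among the bits.
rng = np.random.default_rng(1)
n, blocks, sigma = 6, 2000, 0.9
bits = rng.integers(0, 2, (blocks, n))
bits[:, 5] = bits[:, 0] ^ bits[:, 1]  # bit 5 really is the parity of bits 0 and 1
y = (1 - 2 * bits) + rng.normal(0, sigma, (blocks, n))
llrs = 2 * y / sigma**2
print("true check  :", avg_likelihood_difference(llrs, [(0, 1, 5)]))
print("wrong check :", avg_likelihood_difference(llrs, [(0, 2, 5)]))
```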

Journal ArticleDOI
TL;DR: In this paper, it is shown that the same technique can be used to construct polar codes for arbitrary multiple access channels by using an appropriate Abelian group structure, and sufficient conditions for having maximal loss in the dominant face are provided.
Abstract: Polar codes are constructed for arbitrary channels by imposing an arbitrary quasi-group structure on the input alphabet. Just as with usual polar codes, the block error probability under successive cancellation decoding is $o(2^{-N^{1/2-\epsilon }})$ , where $N$ is the block length. Encoding and decoding for these codes can be implemented with a complexity of $O(N\log N)$ . It is shown that the same technique can be used to construct polar codes for arbitrary multiple access channels by using an appropriate Abelian group structure. Although the symmetric sum capacity is achieved by this coding scheme, some points in the symmetric capacity region may not be achieved. In the case where the channel is a combination of linear channels, we provide a necessary and sufficient condition characterizing the channels whose symmetric capacity region is preserved by the polarization process. We also provide a sufficient condition for having a maximal loss in the dominant face.

Journal ArticleDOI
TL;DR: This paper proposes a new queuing study of networking systems that use sequential decoders; the proposed closed-form expression for the average buffer occupancy is fully generic, parameterized not only by the channel condition and packet arrival rate but also by lower and upper decoding-complexity limits that adapt automatically to the channel conditions.
Abstract: Recent rapid progress in wireless networks and mobile communications has made the constraints on the underlying links clearly apparent. Wireless links are characterized by limited bandwidth and high latencies. Moreover, the bit-error rate (BER) is very high in such environments for various reasons, among them weather conditions, cross-link interference, and mobility. A high BER corrupts the data being transmitted over these channels, and convolutional coding has therefore become an established means of communication over noisy environments. Sequential decoding, a decoding technique for convolutional codes, is an efficient error detection and correction mechanism that attracts the attention of current researchers because its complexity depends on the channel condition. In this paper, we propose a new queuing study of networking systems that make use of sequential decoders, with flow and error control provided by stop-and-wait hybrid automatic repeat request. Our queuing study is a novel extension of our prior work, in which the lowest decoding complexity was fixed and did not account for the channel state. In other words, our proposed closed-form expression for the average buffer occupancy is fully generic and parameterized not only by the channel condition and packet arrival rate, but also by lower and upper bound decoding limits that are automatically adapted to the channel conditions.

Journal ArticleDOI
TL;DR: An analysis of spatially coupled low-density parity-check (SC-LDPC) codes constructed from protographs reveals significant differences in their finite-length scaling behavior, which is corroborated by simulation.
Abstract: An analysis of spatially coupled low-density parity-check (SC-LDPC) codes constructed from protographs is proposed. Given the protograph used to generate the SC-LDPC code ensemble, a set of scaling parameters to characterize the average finite-length performance in the waterfall region is computed. The error performance of structured SC-LDPC code ensembles is shown to follow a scaling law similar to that of unstructured randomly constructed SC-LDPC codes. Under a finite-length perspective, some of the most relevant SC-LDPC protograph structures proposed to date are compared. The analysis reveals significant differences in their finite-length scaling behavior, which is corroborated by simulation. Spatially coupled repeat-accumulate codes present excellent finite-length performance, as in the waterfall region they outperform SC-LDPC codes of the same rate that have better asymptotic thresholds.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: A low complexity decoding algorithm based on list sphere decoding that can reduce the computational complexity substantially while achieving near maximum-likelihood (ML) performance is proposed.
Abstract: Non-orthogonal multiple access is one of the key techniques developed for future 5G communication systems; among these, the recently proposed sparse code multiple access (SCMA) has attracted considerable interest from researchers. By exploiting the shaping gain of multi-dimensional complex codewords, SCMA is shown to have better performance than other non-orthogonal schemes such as low density signature (LDS). However, although the sparsity of the codewords makes the near-optimal message passing algorithm (MPA) feasible, the decoding complexity is still very high. In this paper, we propose a low complexity decoding algorithm based on list sphere decoding. Complexity analysis and simulation results show that the proposed algorithm can reduce the computational complexity substantially while achieving near maximum-likelihood (ML) performance.

Journal ArticleDOI
Erdal Arikan1
TL;DR: Polar coding was originally designed to be a low-complexity recursive channel combining and splitting operation of this type, to be used as the inner code in a concatenated scheme with outer convolutional coding and sequential decoding as discussed by the authors.
Abstract: Polar coding was conceived originally as a technique for boosting the cutoff rate of sequential decoding, along the lines of earlier schemes of Pinsker and Massey. The key idea in boosting the cutoff rate is to take a vector channel (either given or artificially built), split it into multiple correlated subchannels, and employ a separate sequential decoder on each subchannel. Polar coding was originally designed to be a low-complexity recursive channel combining and splitting operation of this type, to be used as the inner code in a concatenated scheme with outer convolutional coding and sequential decoding. However, the polar inner code turned out to be so effective that no outer code was actually needed to achieve the original aim of boosting the cutoff rate to channel capacity. This paper explains the cutoff rate considerations that motivated the development of polar coding.

Journal ArticleDOI
TL;DR: A formulation of the ADMM decoding algorithm with modified computation scheduling is proposed that increases the error correction performance of the decoding algorithm and reduces the average computation complexity of the decode process thanks to a faster convergence.
Abstract: The alternating direction method of multipliers (ADMM) approach has recently been considered for LDPC decoding. It has been shown to enhance the error rate performance compared with conventional message passing (MP) techniques in both the waterfall and error floor regions, at the cost of a higher computation complexity. In this letter, a formulation of the ADMM decoding algorithm with modified computation scheduling is proposed. It increases the error correction performance of the decoding algorithm and reduces the average computation complexity of the decoding process thanks to a faster convergence. Simulation results show that this modified scheduling speeds up the decoding procedure with respect to the initial ADMM formulation while enhancing the error correction performance. This decoding speed-up is further improved when the proposed scheduling is teamed with a recent complexity reduction method detailed in Wei et al., IEEE Commun. Lett., 2015.

Proceedings ArticleDOI
10 Jul 2016
TL;DR: This paper shows that in the conventional formulation of SCL, there are redundant calculations which do not need to be performed in the course of the algorithm, and simplifies SCL by removing these redundant calculations and proves that the proposed simplified SCL and the conventional SCL algorithms are equivalent.
Abstract: The Successive-Cancellation List (SCL) decoding algorithm is one of the most promising approaches towards practical polar code decoding. It is able to provide a good trade-off between error-correction performance and complexity, tunable through the size of the list. In this paper, we show that in the conventional formulation of SCL, there are redundant calculations which do not need to be performed in the course of the algorithm. We simplify SCL by removing these redundant calculations and prove that the proposed simplified SCL and the conventional SCL algorithms are equivalent. The simplified SCL algorithm is valid for any code and can reduce the time-complexity of SCL without affecting the space complexity.

Proceedings ArticleDOI
02 May 2016
TL;DR: The developed REAL scheme incorporates the numerical-correlation characteristic of retention errors into the process of LDPC decoding, and leverages the characteristic as additional bits decision information to improve its error correction capabilities and decrease the decoding latency.
Abstract: Continuous technology scaling makes NAND flash cells much denser. As a result, NAND flash is becoming more prone to various interference errors. Due to the hardware circuit design mechanisms of NAND flash, retention errors have been recognized as the most dominant errors, which affect the data reliability and flash lifetime. Furthermore, after experiencing a large number of program/erase (P/E) cycles, flash memory suffers a much higher error rate, rendering traditional ECC codes (typically BCH codes) insufficient to ensure data reliability. Therefore, low density parity check (LDPC) codes with stronger error correction capability are used in NAND flash-based storage devices. However, directly using LDPC codes with the belief propagation (BP) decoding algorithm introduces non-trivial decoding latency overhead and hence significantly degrades the read performance of NAND flash. It has been observed that flash retention errors show a so-called numerical-correlation characteristic (i.e., the 0–1 bits stored in a flash cell affect each other as charge leaks) in each flash cell. In this paper, motivated by the observed characteristic, we propose REAL: a retention error aware LDPC decoding scheme to improve NAND flash read performance. The developed REAL scheme incorporates the numerical-correlation characteristic of retention errors into the process of LDPC decoding, and leverages the characteristic as additional bit-decision information to improve its error correction capabilities and decrease the decoding latency. Our simulation results show that the proposed REAL scheme can reduce the LDPC decoding latency by 26.44% and 33.05%, compared with the Logarithm Domain Min-Sum (LD-MS) and Probability Domain BP (PD-BP) schemes, respectively.

Journal ArticleDOI
TL;DR: This paper shows by construction the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.
Abstract: This paper revisits strongly-MDS convolutional codes with maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimum sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make these convolutional codes applicable over the erasure channel, since they are able to correct a large number of erasures per time interval. The existence of these codes has been shown only for some specific cases. This paper shows by construction the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.
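To make the 'optimum sequence of column distances' concrete: the j-th column distance d_j is the minimum Hamming weight of the first j + 1 output blocks over all input sequences starting with a nonzero block, and MDP codes maximize this profile at every j. The brute-force computation below uses a small binary (2,1,2) code with octal generators (7,5) purely as an illustration; the paper's constructions are non-binary.

```python
from itertools import product

G = [0b111, 0b101]  # binary (2,1,2) convolutional code, generators 7 and 5 (octal)

def column_distance(j):
    """d_j: minimum weight of the first j+1 output pairs over inputs with u_0 = 1."""
    m = max(g.bit_length() for g in G) - 1  # encoder memory
    best = None
    for tail in product((0, 1), repeat=j):
        u = (1,) + tail  # u_0 = 1 by definition of the column distance
        w = 0
        for t in range(j + 1):
            for g in G:  # output bit at time t: sum_k g_k * u_{t-k} over GF(2)
                bit = 0
                for k in range(m + 1):
                    if t - k >= 0 and (g >> (m - k)) & 1:
                        bit ^= u[t - k]
                w += bit
        best = w if best is None else min(best, w)
    return best

print([column_distance(j) for j in range(6)])  # the code's distance profile
```

Over the erasure channel, a large d_j means any window of j + 1 output blocks tolerates more erasures before the window becomes unresolvable, which is why the MDP property matters there.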

Journal ArticleDOI
TL;DR: Under fixed channel gains, the newly optimized codes are shown to perform close to the capacity region boundary outperforming the existing designs and the off-the-shelf point-to-point (P2P) codes.
Abstract: We study code design for two-user Gaussian multiple access channels (GMACs) under fixed channel gains and under quasi-static fading. We employ low-density parity-check (LDPC) codes with BPSK modulation and utilize an iterative joint decoder. Adopting a belief propagation (BP) algorithm, we derive the PDF of the log-likelihood-ratios (LLRs) fed to the component LDPC decoders. Via examples, it is illustrated that the characterized PDF resembles a Gaussian mixture (GM) distribution, which is exploited in predicting the decoding performance of LDPC codes over GMACs. Based on the GM assumption, we propose variants of existing analysis methods, named modified density evolution (DE) and modified extrinsic information transfer (EXIT). We derive a stability condition on the degree distributions of the LDPC code ensembles and utilize it in the code optimization. Under fixed channel gains, the newly optimized codes are shown to perform close to the capacity region boundary outperforming the existing designs and the off-the-shelf point-to-point (P2P) codes. Under quasi-static fading, optimized codes exhibit consistent improvements upon the P2P codes as well. Finite block length simulations of specific codes picked from the designed ensembles are also carried out and it is shown that optimized codes perform close to the outage limits.

Journal ArticleDOI
TL;DR: An ensemble of random causal linear codes with a time invariant structure and it is shown that they are tree codes with probability one, and novel sufficient conditions on the rate and reliability required of the tree codes to stabilize vector plants are given and are argued to be asymptotically tight.
Abstract: The problem of stabilizing an unstable plant over a noisy communication link is an increasingly important one that arises in problems of distributed control and networked control systems. Although the work of Schulman, and Sahai and Mitter over the past two decades, and their development of the notions of “tree codes” and “anytime capacity” respectively, provides the theoretical framework for studying such problems, there has been scant practical progress in this area because explicit constructions of tree codes with efficient encoding and decoding did not exist. To stabilize an unstable plant driven by bounded noise over a noisy channel one often needs real-time encoding and real-time decoding and a reliability which increases exponentially with delay, which is what tree codes guarantee. We propose an ensemble of random causal linear codes with a time invariant structure and show that they are tree codes with probability one. For erasure channels, we show that the average complexity of maximum likelihood decoding is bounded by a constant for all time if the code rate is smaller than the computational cutoff rate. For rates larger than the computational cutoff rate, we present an alternate way to perform maximum likelihood decoding with a complexity that grows linearly with time. We give novel sufficient conditions on the rate and reliability required of the tree codes to stabilize vector plants and argue that they are asymptotically tight.

Proceedings ArticleDOI
22 May 2016
TL;DR: The state of the art in polar decoders implementing the successive-cancellation, belief propagation, and list decoding algorithms are reviewed, illustrating their advantages.
Abstract: Polar codes are an exciting new class of error correcting codes that achieve the symmetric capacity of memoryless channels. Many decoding algorithms were developed and implemented, addressing various application requirements: from error-correction performance rivaling that of LDPC codes to very high throughput or low-complexity decoders. In this work, we review the state of the art in polar decoders implementing the successive-cancellation, belief propagation, and list decoding algorithms, illustrating their advantages.

Journal ArticleDOI
TL;DR: A reduced complexity early stopping method for BP-based polar code decoders based on the hypothesis that observing only a small cluster of information bits polarized to the highest error probabilities is enough to detect successful decoding.
Abstract: As the first theoretically proven capacity-achieving error correction codes, polar codes have become a milestone in the information theory field and have drawn considerable attention owing to their low-complexity encoding and decoding structures. On the decoding side, studies focus on efficient, low-complexity algorithms based on successive cancellation and belief-propagation (BP) decoders. To further reduce the computational complexity of BP-based decoders, early stopping methods that avoid unnecessary iterations can be used. In this letter, we propose a reduced-complexity early stopping method for BP-based polar code decoders. The proposed method is based on the hypothesis that observing only a small cluster of information bits polarized to the highest error probabilities is enough to detect successful decoding. Simulation results show that the proposed early stopping criterion significantly reduces the computational complexity of successful-decoding detection as well as the total number of operations of the whole decoding process compared with previous methods in the literature.
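One plausible instantiation of the cluster-monitoring hypothesis is sketched below: pick the few information indices with the largest Bhattacharyya parameters (the bits 'polarized to the highest error probabilities') and stop BP once their hard decisions stabilize. The BEC-style Z-parameter recursion and the stability rule are illustrative assumptions, not the letter's exact criterion.

```python
def bhattacharyya_profile(n_log2, z0=0.5):
    """Bhattacharyya parameters of the N = 2**n_log2 polar subchannels, using the
    BEC-style recursion: a channel with parameter z splits into 2z - z^2 and z^2."""
    z = [z0]
    for _ in range(n_log2):
        z = [v for x in z for v in (2 * x - x * x, x * x)]
    return z

def monitored_cluster(info_set, z, k=8):
    """The k information indices polarized to the highest error probabilities."""
    return sorted(info_set, key=lambda i: -z[i])[:k]

def decisions_stable(prev, cur):
    """Candidate stopping rule: halt BP iterations once the monitored hard
    decisions repeat from one iteration to the next."""
    return prev is not None and prev == cur

# Toy setup: N = 64, rate 1/2, watch the 8 least reliable information bits.
z = bhattacharyya_profile(6)
info = sorted(range(64), key=lambda i: z[i])[:32]
print(monitored_cluster(info, z))
```

Checking k bits per iteration instead of all N is where the detection-complexity saving claimed in the abstract comes from.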

Journal ArticleDOI
TL;DR: In this paper, the maximum likelihood decoding performance of Raptor codes with a systematic low-density generator-matrix code as the pre-code was analyzed and upper and lower bounds on the decoding failure probability were derived by investigating the rank of the product of two random coefficient matrices.
Abstract: In this paper, we analyze the maximum likelihood decoding performance of Raptor codes with a systematic low-density generator-matrix code as the pre-code. By investigating the rank of the product of two random coefficient matrices, we derive upper and lower bounds on the decoding failure probability. The accuracy of our analysis is validated through simulations. Results of extensive Monte Carlo simulations demonstrate that for Raptor codes with different degree distributions and pre-codes, the bounds obtained in this paper are of high accuracy. The derived bounds can be used to design near-optimum Raptor codes with short and moderate lengths.
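The rank argument can be reproduced in miniature: maximum likelihood (inactivation) decoding of a fountain code succeeds exactly when the binary coefficient matrix collected at the receiver has full column rank, so the failure probability is P(rank < k). The Monte Carlo sketch below uses a plain dense random matrix rather than the paper's LT-plus-LDGM structure.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.astype(np.uint8).copy()
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue  # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]
        mask = M[:, c].astype(bool)
        mask[r] = False
        M[mask] ^= M[r]  # clear column c in every other row
        r += 1
        if r == M.shape[0]:
            break
    return r

def ml_failure_prob(k, n_received, trials=2000, seed=0):
    """Monte Carlo estimate of P(ML decoding fails), i.e. the probability that
    the n_received x k binary coefficient matrix is rank deficient."""
    rng = np.random.default_rng(seed)
    fails = sum(gf2_rank(rng.integers(0, 2, (n_received, k))) < k
                for _ in range(trials))
    return fails / trials

for extra in range(4):  # failure probability vs. number of extra received symbols
    print(extra, ml_failure_prob(k=32, n_received=32 + extra))
```

The failure probability drops roughly geometrically with each extra received symbol, the behavior the paper's upper and lower bounds capture analytically for actual Raptor ensembles.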

Proceedings ArticleDOI
22 May 2016
TL;DR: This paper proposes a novel statistic which depicts the number of errors contained in the ordered received noisy codeword and incorporates the properties of this new statistic to derive the simplified error performance bound of the OSD algorithm for all order-I reprocessing.
Abstract: In this paper, a novel simplified statistical approach to evaluate the error performance bound of Ordered Statistics Decoding (OSD) of Linear Block Codes (LBC) is investigated. First, we propose a novel statistic which depicts the number of errors contained in the ordered received noisy codeword. Then, simplified expressions for the probability mass function and cumulative distribution function are derived, exploiting the implicit statistical independence of the samples of the received noisy codeword before reordering. Second, we incorporate the properties of this new statistic to derive the simplified error performance bound of the OSD algorithm for all order-I reprocessing. Finally, with the proposed approach, we obtain computationally simpler error performance bounds for the OSD than those proposed in the literature, for LBCs of all lengths.