
Showing papers on "Sequential decoding published in 2013"


Journal ArticleDOI
TL;DR: Unified descriptions of the SC, SCL, and SCS decoding algorithms are given as path search procedures on the code tree of polar codes and a new decoding algorithm called the successive cancellation hybrid (SCH) is proposed to provide a flexible configuration when the time and space complexities are limited.
Abstract: As improved versions of the successive cancellation (SC) decoding algorithm, successive cancellation list (SCL) decoding and successive cancellation stack (SCS) decoding are used to improve the finite-length performance of polar codes. In this paper, unified descriptions of the SC, SCL, and SCS decoding algorithms are given as path search procedures on the code tree of polar codes. Combining the principles of SCL and SCS, a new decoding algorithm called successive cancellation hybrid (SCH) is proposed. This algorithm offers a flexible configuration when the time and space complexities are limited. Furthermore, a pruning technique is also proposed to lower the complexity by reducing unnecessary path-searching operations. Performance and complexity analysis based on simulations shows that, under proper configurations, all three improved successive cancellation (ISC) decoding algorithms can approach the performance of maximum likelihood (ML) decoding with acceptable complexity. With the help of the proposed pruning technique, the time and space complexities of ISC decoders can be significantly reduced, becoming very close to those of the SC decoder in the high signal-to-noise ratio regime.

212 citations
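
All three ISC algorithms build on the basic SC decoder, which traverses the code tree one bit at a time. As a point of reference, the following is a minimal recursive SC decoder sketch, assuming the natural-order Arikan transform x = u F^{⊗n}, the min-sum approximation of the LLR updates, and the convention that frozen bits are zero; it is an illustration, not the paper's implementation.

```python
import numpy as np

def f(a, b):
    # check-node LLR combination (min-sum approximation)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):
    # bit-node LLR combination, given the partial-sum bits u
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen):
    """Successive cancellation decoding.
    llr    : channel LLRs, length N (a power of two)
    frozen : boolean mask, True where u_i is frozen to 0
    Returns (decoded u, re-encoded codeword of this subtree)."""
    N = len(llr)
    if N == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    a, b = llr[:N // 2], llr[N // 2:]
    u1, x1 = sc_decode(f(a, b), frozen[:N // 2])      # decode upper half first
    u2, x2 = sc_decode(g(a, b, x1), frozen[N // 2:])  # then the lower half
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])
```

SCL keeps L such traversals alive in parallel, SCS keeps partially explored paths in a stack ordered by path metric, and the SCH algorithm described above blends the two under given time and space budgets.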


Proceedings ArticleDOI
09 Jun 2013
TL;DR: A simple quasi-uniform puncturing algorithm is proposed to generate the puncturing table, and this method is proved to have a better row-weight property than random puncturing.
Abstract: CRC (cyclic redundancy check) concatenated polar codes are superior to turbo codes under the successive cancellation list (SCL) or successive cancellation stack (SCS) decoding algorithms, but the code length of polar codes is limited to powers of two. In this paper, a family of rate-compatible punctured polar (RCPP) codes is proposed to support construction with arbitrary code lengths. We propose a simple quasi-uniform puncturing algorithm to generate the puncturing table and prove that this method has a better row-weight property than random puncturing. Simulation results over binary-input additive white Gaussian noise channels (BI-AWGNs) show that these RCPP codes outperform the turbo codes used in WCDMA (Wideband Code Division Multiple Access) and LTE (Long Term Evolution) wireless communication systems over a large range of code lengths. In particular, with the CRC-aided SCL/SCS algorithm, an RCPP code with short code length M = 512 and code rate R = 0.5 provides more than 0.7 dB of performance gain at a block error rate (BLER) of 10^-4.

199 citations
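
As a rough sketch of the quasi-uniform puncturing idea: zero out the first N - M entries of an all-ones table and spread them by the bit-reversal permutation, so the punctured positions are quasi-uniformly distributed over the codeword. This is one plausible reading of the algorithm; the paper's exact ordering conventions may differ.

```python
def qup_table(N, M):
    """Quasi-uniform puncturing table: keep M of the N mother-code bits.
    N must be a power of two and M <= N."""
    n = N.bit_length() - 1
    def bit_reverse(i):
        return int(format(i, '0{}b'.format(n))[::-1], 2)
    table = [1] * N
    for i in range(N - M):         # mark the first N - M positions,
        table[bit_reverse(i)] = 0  # taken in bit-reversed order, as punctured
    return table                   # table[j] == 0 -> coded bit j is not sent

# Example: qup_table(8, 6) -> [0, 1, 1, 1, 0, 1, 1, 1]
```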


Patent
03 Dec 2013
TL;DR: In this article, a method for decoding block and concatenated codes based on belief propagation algorithms, with particular advantages when applied to codes having higher density parity check matrices, is presented.
Abstract: Systems and methods for decoding block and concatenated codes are provided. These include advanced iterative decoding techniques based on belief propagation algorithms, with particular advantages when applied to codes having higher density parity check matrices. Improvements are also provided for performing channel state information estimation including the use of optimum filter lengths based on channel selectivity and adaptive decision-directed channel estimation. These improvements enhance the performance of various communication systems and consumer electronics. Particular improvements are also provided for decoding HD Radio signals, including enhanced decoding of reference subcarriers based on soft-diversity combining, joint enhanced channel state information estimation, as well as iterative soft-input soft-output and list decoding of convolutional codes and Reed-Solomon codes. These and other improvements enhance the decoding of different logical channels in HD Radio systems.

132 citations


Journal ArticleDOI
TL;DR: An improved version of the simplified successive-cancellation decoding algorithm that increases decoding throughput without degrading the error-correction performance is presented.
Abstract: The serial nature of successive-cancellation decoding results in low polar decoder throughput. In this letter, we present an improved version of the simplified successive-cancellation decoding algorithm that increases decoding throughput without degrading the error-correction performance. We show that the proposed algorithm has up to three times the throughput of the simplified successive-cancellation decoding algorithm and up to twenty-nine times the throughput of a standard successive-cancellation decoder while using the same number of processing elements.

119 citations
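
The simplification in question exploits constituent codes: a subtree whose leaves are all frozen (a rate-0 node) or all information bits (a rate-1 node) need not be traversed bit by bit. The letter's improvement goes further, but the core pruning idea can be sketched as below, grafted onto a recursive SC decoder and using the fact that F^{⊗n} is its own inverse over GF(2); names and structure are illustrative.

```python
import numpy as np

def polar_transform(x):
    # x -> x * F^{⊗n} over GF(2); this transform is its own inverse
    N = len(x)
    if N == 1:
        return x.copy()
    a, b = polar_transform(x[:N // 2]), polar_transform(x[N // 2:])
    return np.concatenate([a ^ b, b])

def ssc_decode(llr, frozen):
    """Simplified SC: prune rate-0 and rate-1 subtrees."""
    N = len(llr)
    if all(frozen):                        # rate-0 node: the output is known
        z = np.zeros(N, dtype=int)
        return z, z
    if not any(frozen):                    # rate-1 node: hard-decide all at once
        x = (np.asarray(llr) < 0).astype(int)
        return polar_transform(x), x
    a, b = np.asarray(llr[:N // 2]), np.asarray(llr[N // 2:])
    f = np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
    u1, x1 = ssc_decode(f, frozen[:N // 2])
    u2, x2 = ssc_decode(b + (1 - 2 * x1) * a, frozen[N // 2:])
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])
```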


Proceedings ArticleDOI
26 May 2013
TL;DR: This paper begins with a review of the min-sum (MS) approximation of the BP algorithm, then proposes a scaled MS (SMS) algorithm with improved decoding performance, and presents an efficient critical-path reduction approach that can be applied to both the SMS and MS algorithms.
Abstract: Polar codes have emerged as important channel codes because of their capacity-achieving property. For low-complexity polar decoding, hardware architectures for the successive cancellation (SC) algorithm have been investigated in prior works. However, belief propagation (BP)-based architectures have not been explored in detail. This paper begins with a review of the min-sum (MS) approximation of the BP algorithm and then proposes a scaled MS (SMS) algorithm with improved decoding performance. Then, to address the long critical path in the SMS algorithm, we propose an efficient critical-path reduction approach. Owing to its generality, this optimization method can be applied to both the SMS and MS algorithms. Compared with the state-of-the-art MS decoder, the proposed (1024, 512) SMS design achieves 0.5 dB of extra decoding gain with the same hardware performance. In addition, the proposed optimized MS architecture achieves more than 30% higher throughput and 80% higher hardware efficiency.

109 citations
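
The scaled MS check-node update attenuates the min-sum magnitude, which otherwise overestimates the exact BP message. A minimal sketch follows; the scale factor 0.9375 (15/16) is a typical hardware-friendly choice, not necessarily the value used in the paper, and the check degree is assumed to be at least two.

```python
import numpy as np

def sms_check_update(in_msgs, s=0.9375):
    """Scaled min-sum check-node update: the outgoing message on each edge
    combines the signs and the minimum magnitude of all *other* incoming
    messages, scaled by s (s = 1 recovers plain min-sum)."""
    m = np.asarray(in_msgs, dtype=float)
    out = np.empty_like(m)
    for i in range(len(m)):
        rest = np.delete(m, i)
        out[i] = s * np.prod(np.sign(rest)) * np.min(np.abs(rest))
    return out
```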


Journal ArticleDOI
TL;DR: An improvement and a new implementation of a simplified decoding algorithm for non-binary low-density parity-check (NB-LDPC) codes over Galois fields GF(q) are presented, based on the Extended Min-Sum algorithm.
Abstract: In this paper, we present an improvement and a new implementation of a simplified decoding algorithm for non-binary low-density parity-check (NB-LDPC) codes over Galois fields GF(q). The base algorithm that we use is the Extended Min-Sum (EMS) algorithm, which has been widely studied in the recent literature and has been shown to approach the performance of the belief propagation (BP) algorithm with limited complexity. In our work, we propose a new way to compute modified configuration sets, using a trellis representation of the messages arriving at the check nodes. We call our modification of the EMS algorithm trellis-EMS (T-EMS). In the T-EMS, the algorithm operates directly on the deviation space by considering a trellis built from differential messages, which serve as a new reliability measure for sorting the configurations. We show that this new trellis representation reduces the computational complexity without any performance degradation. In addition, we show that our modifications of the algorithm make it possible to greatly reduce the decoding latency by using a larger degree of hardware parallelization.

101 citations


Journal ArticleDOI
TL;DR: Numerical results show that length-compatible polar codes designed by the proposed method provide a performance gain of about 1.0 - 5.0 dB over those obtained by random puncturing when successive cancellation decoding is employed.
Abstract: Length-compatible polar codes are a class of polar codes that can support a wide range of lengths with a single pair of encoder and decoder. In this paper, we propose a method to construct length-compatible polar codes by employing the reduction of the 2^n × 2^n polarizing matrix proposed by Arikan. The conditions under which a reduced matrix becomes a polarizing matrix supporting a polar code of a given length are first analyzed. Based on these conditions, length-compatible polar codes are constructed in a suboptimal way by codeword-puncturing and information-refreezing processes. They have low encoding and decoding complexity since they can be encoded and decoded in a similar way to a polar code of length 2^n. Numerical results show that length-compatible polar codes designed by the proposed method provide a performance gain of about 1.0-5.0 dB over those obtained by random puncturing when successive cancellation decoding is employed.

99 citations


Journal ArticleDOI
TL;DR: In this paper, a two-slice characterization of the parity polytope is presented, which simplifies the representation of points in the polytope and allows large-scale error-correcting codes to be decoded efficiently.
Abstract: When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error-correcting codes. In this paper, we draw on decomposition methods from optimization theory, specifically the alternating direction method of multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the parity polytope, the polytope formed by taking the convex hull of all codewords of the single parity check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by the ADMM decoder, and its solution allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes of lengths greater than 1000. The waterfall region of LP decoding is seen to start at a slightly higher SNR than for sum-product BP; however, unlike BP, LP decoding exhibits no error floor. Our implementation of LP decoding using the ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message-passing with a particularly simple schedule.

98 citations
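
Schematically, the ADMM decoder alternates a per-bit averaging step with a per-check projection onto the parity polytope, which is exactly where the two-slice characterization is used. The sketch below follows one standard ADMM-LP formulation; `project_pp` stands in for the paper's projection routine (passing `lambda v: np.clip(v, 0.0, 1.0)` would give only a box-relaxation placeholder), and details such as over-relaxation are omitted.

```python
import numpy as np

def admm_lp_decode(checks, gamma, project_pp, rho=1.0, iters=200):
    """checks     : list of index arrays, one per parity check
    gamma      : per-bit LLR costs (the LP minimizes gamma . x)
    project_pp : Euclidean projection onto the parity polytope"""
    gamma = np.asarray(gamma, dtype=float)
    n = len(gamma)
    deg = np.zeros(n)
    for c in checks:
        deg[c] += 1.0
    z = [np.full(len(c), 0.5) for c in checks]   # per-check replica variables
    lam = [np.zeros(len(c)) for c in checks]     # dual variables
    x = np.full(n, 0.5)
    for _ in range(iters):
        acc = np.zeros(n)
        for j, c in enumerate(checks):           # x-update: average the
            acc[c] += z[j] - lam[j] / rho        # replicas minus scaled duals
        x = np.clip((acc - gamma / rho) / deg, 0.0, 1.0)
        for j, c in enumerate(checks):           # z-update: project replicas
            z[j] = project_pp(x[c] + lam[j] / rho)
            lam[j] += rho * (x[c] - z[j])        # dual ascent step
    return (x > 0.5).astype(int)
```

Because the z-update decomposes check by check, the whole iteration is parallelizable across checks, which is what lets the decoder keep pace with sum-product BP.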


Patent
14 Jun 2013
TL;DR: In this article, an early decoding termination detection for QC-LDPC decoders is discussed, where the controller terminates decoding the data unit in response to determining that the decoded data units from more than one layer decoding operation satisfy a parity check equation.
Abstract: Embodiments of decoders having early decoding termination detection are disclosed. The decoders can provide for flexible and scalable decoding and early termination detection, particularly when quasi-cyclic low-density parity-check code (QC-LDPC) decoding is used. In one embodiment, a controller iteratively decodes a data unit using a coding matrix comprising a plurality of layers. The controller terminates decoding the data unit in response to determining that the decoded data units from more than one layer decoding operation satisfy a parity check equation and that the decoded data units from more than one layer decoding operation are the same. Advantageously, the termination of decoding of the data unit can reduce a number of iterations performed to decode the data unit.

96 citations
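
Stripped of the patent language, the termination rule is a predicate evaluated between layer updates: stop as soon as the hard decisions produced by consecutive layer operations agree and satisfy every parity check. A minimal sketch of that predicate, with illustrative names:

```python
import numpy as np

def can_terminate(H, x_now, x_prev):
    """Early-termination test for a layered QC-LDPC decoder: the decoded
    words from successive layer operations must be identical and must
    satisfy all parity-check equations, i.e., H x = 0 over GF(2)."""
    return (x_prev is not None
            and np.array_equal(x_now, x_prev)
            and not np.any(H.dot(x_now) % 2))
```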


Journal ArticleDOI
TL;DR: It is demonstrated that in the case of multiple relays there is no improvement in the achievable rate from joint decoding either, and it is discovered that any compressions not supporting successive decoding actually lead to strictly lower achievable rates for the original message.
Abstract: In the classical compress-and-forward relay scheme developed by Cover and El Gamal, the decoding process operates in a successive way: the destination first decodes the compression of the relay's observation and then decodes the original message of the source. Recently, several modified compress-and-forward relay schemes were proposed, in which the destination decodes the compression and the message jointly instead of successively. This modification of the decoding process was motivated by the observation that it is generally easier to decode the compression jointly with the original message; more importantly, the original message can be decoded even without completely decoding the compression. Thus, joint decoding provides more freedom in choosing the compression at the relay. However, a question remains for these modified compress-and-forward relay schemes: does this freedom in selecting the compression necessarily improve the achievable rate of the original message? It was shown by El Gamal and Kim in 2010 that the answer is negative in the single-relay case. In this paper, it is further demonstrated that in the case of multiple relays, there is no improvement in the achievable rate from joint decoding either. More interestingly, it is discovered that any compressions not supporting successive decoding will actually lead to strictly lower achievable rates for the original message. Therefore, to maximize the achievable rate for the original message, the compressions should always be chosen to support successive decoding. Furthermore, it is shown that any compressions not completely decodable even with joint decoding will not provide any contribution to the decoding of the original message. The above phenomenon is also shown to exist under the repetitive encoding framework recently proposed by Lim et al., which improved the achievable rate in the case of multiple relays. Here, another interesting discovery is that the improvement is not a result of repetitive encoding, but rather the benefit of delayed decoding after all the blocks have been finished. The same rate is shown to be achievable with the simpler classical encoding process of Cover and El Gamal combined with a block-by-block backward decoding process.

94 citations


Patent
10 Jul 2013
TL;DR: In this paper, the decoding paths are successively duplicated and selectively pruned to generate a list of potential decoding paths and a single decoding path among the list of possible decoding paths is selected as the output and a candidate codeword is thereby identified.
Abstract: A method of decoding data encoded with a polar code and devices that encode data with a polar code. A received word of polar encoded data is decoded following several distinct decoding paths to generate a list of codeword candidates. The decoding paths are successively duplicated and selectively pruned to generate a list of potential decoding paths. A single decoding path among the list of potential decoding paths is selected as the output and a single candidate codeword is thereby identified. In another preferred embodiment, the polar encoded data includes redundancy values in its unfrozen bits. The redundancy values aid the selection of the single decoding path. A preferred device of the invention is a cellular network device, (e.g., a handset) that conducts decoding in accordance with the methods of the invention.
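
The duplicate-and-prune mechanism can be sketched independently of the polar-specific LLR computations. In the sketch below, `bit_llr` is a hypothetical callback returning the LLR of the next bit given the already-decided prefix; each path accumulates the usual path-metric penalty and only the L lowest-metric paths survive each step.

```python
import heapq

def list_decode(n_bits, L, frozen, bit_llr):
    """Schematic list decoding: duplicate every path at each information
    bit, penalize the hypothesis the channel disagrees with, and prune
    back to the L best paths."""
    paths = [([], 0.0)]                            # (decided bits, path metric)
    for i in range(n_bits):
        nxt = []
        for bits, pm in paths:
            llr = bit_llr(bits, i)
            pen0 = abs(llr) if llr < 0 else 0.0    # cost of deciding bit = 0
            pen1 = abs(llr) if llr > 0 else 0.0    # cost of deciding bit = 1
            nxt.append((bits + [0], pm + pen0))    # frozen bits are always 0
            if not frozen[i]:
                nxt.append((bits + [1], pm + pen1))  # duplicate the path
        paths = heapq.nsmallest(L, nxt, key=lambda p: p[1])  # prune to L best
    return min(paths, key=lambda p: p[1])[0]
```

In the redundancy-aided variant described above, the final selection would prefer the lowest-metric path whose unfrozen bits pass the embedded redundancy check, rather than the raw metric minimum.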

Proceedings ArticleDOI
01 Dec 2013
TL;DR: Analysis and simulation of the iterative HDD of tightly-braided block codes with BCH component codes for high-speed optical communication shows that these codes are competitive with the best schemes based on HDD.
Abstract: Designing error-correcting codes for optical communication is challenging mainly because of the high data rates (e.g., 100 Gbps) required and the expectation of low latency, low overhead (e.g., 7% redundancy), and large coding gain (e.g., >9dB). Although soft-decision decoding (SDD) of low-density parity-check (LDPC) codes is an active area of research, the mainstay of optical transport systems is still the iterative hard-decision decoding (HDD) of generalized product codes with algebraic syndrome decoding of the component codes. This is because iterative HDD allows many simplifications and SDD of LDPC codes results in much higher implementation complexity. In this paper, we use analysis and simulation to evaluate tightly-braided block codes with BCH component codes for high-speed optical communication. Simulation of the iterative HDD shows that these codes are competitive with the best schemes based on HDD. Finally, we suggest a specific design that is compatible with the G.709 framing structure and exhibits a coding gain of >9.35 dB at 7% redundancy under iterative HDD with a latency of approximately 1 million bits.

Journal ArticleDOI
TL;DR: This work considers the decoding of spatially coupled codes through a windowed decoder that aims to retain many of the attractive features of belief propagation while reducing complexity further; the scheme is characterized by thresholds on the channel erasure rate that guarantee a target erasure rate.
Abstract: Spatially coupled codes have been of interest recently owing to their superior performance over memoryless binary-input channels. The performance is good both asymptotically, since the belief propagation thresholds approach the Shannon limit, and at finite lengths, since degree-2 variable nodes that result in high error floors can be completely avoided. However, to realize the promised good performance, one needs large blocklengths, which in turn implies large latency and decoding complexity. For the memoryless binary erasure channel, we consider the decoding of spatially coupled codes through a windowed decoder that aims to retain many of the attractive features of belief propagation while reducing complexity further. We characterize the performance of this scheme by defining thresholds on the channel erasure rate that guarantee a target erasure rate. We give analytical lower bounds on these thresholds and show that the performance approaches that of belief propagation exponentially fast in the window size. We give numerical results, including thresholds computed using density evolution and erasure rate curves for finite-length spatially coupled codes.

Journal ArticleDOI
TL;DR: A new family of channel codes, called ISI-free codes, is introduced; these codes improve communication reliability while keeping the decoding complexity fairly low in a diffusion environment modeled by Brownian motion.
Abstract: Molecular communication is emerging as a promising scheme for communication between nanoscale devices. In diffusion-based molecular communication, molecules acting as information symbols diffuse through a fluid environment and suffer from crossovers, i.e., the arrival order of molecules differs from their transmission order, leading to intersymbol interference (ISI). In this paper, we introduce a new family of channel codes, called ISI-free codes, which improve communication reliability while keeping the decoding complexity fairly low in a diffusion environment modeled by Brownian motion. We propose general encoding/decoding schemes for the ISI-free codes, built upon modulation schemes that transmit a fixed number of identical molecules at a time. In addition, a bit error rate (BER) approximation for the ISI-free codes is derived mathematically as an analytical tool for identifying the key factors in BER performance. Compared with uncoded systems, the proposed ISI-free codes offer good performance with reasonably low complexity for diffusion-based molecular communication systems.

Journal ArticleDOI
TL;DR: This paper proposes a linear-complexity algorithm for the projection onto a parity polytope (with a computational complexity of O(d), where d is the check-node degree), as compared to recent work.
Abstract: Linear program (LP) decoding has become increasingly popular for error-correcting codes due to its simplicity and promising performance. Low-complexity and efficient iterative algorithms for LP decoding are of great importance for practical applications. In this paper, we focus on solving the binary LP decoding problem using the alternating direction method of multipliers (ADMM). Our main contribution is a linear-complexity algorithm for the projection onto a parity polytope, with a computational complexity of O(d), where d is the check-node degree, as compared to recent work with a computational complexity of O(d log d). In particular, we show that the projection onto the parity polytope can be transformed into a projection onto a simplex.
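
The reduction means the whole check-node projection is no harder than projecting onto a probability simplex. For reference, this is the standard sort-based simplex projection, which costs O(d log d); the paper's O(d) figure comes from replacing the sort with linear-time selection (for instance, median-of-medians).

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    by the classic sort-and-threshold method."""
    w = np.asarray(v, dtype=float)
    u = np.sort(w)[::-1]                            # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]    # last active coordinate
    theta = (css[rho] - 1.0) / (rho + 1.0)          # water-filling threshold
    return np.maximum(w - theta, 0.0)
```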

Journal ArticleDOI
TL;DR: It is shown that the coding scheme achieves the capacity region of noiseless WOMs when an arbitrary number of multiple writes is permitted and the results can be generalized from binary to generalized WOMs, described by an arbitrary directed acyclic graph.
Abstract: A coding scheme for write once memory (WOM) using polar codes is presented. It is shown that the scheme achieves the capacity region of noiseless WOMs when an arbitrary number of multiple writes is permitted. The encoding and decoding complexities scale as O(N log N), where N is the blocklength. For N sufficiently large, the error probability decreases subexponentially in N. The results can be generalized from binary to generalized WOMs, described by an arbitrary directed acyclic graph, using nonbinary polar codes. In the derivation, we also obtain results on the typical distortion of polar codes for lossy source coding. Some simulation results with finite length codes are presented.

Journal ArticleDOI
TL;DR: The trapping sets of the asymptotically good protograph-based LDPC convolutional codes considered earlier are studied and it is shown that the size of the smallest non-empty trapping set grows linearly with the constraint length for these ensembles.
Abstract: Low-density parity-check (LDPC) convolutional codes have been shown to be capable of achieving capacity-approaching performance with iterative message-passing decoding. In the first part of this paper, using asymptotic methods to obtain lower bounds on the free distance to constraint length ratio, we show that several ensembles of regular and irregular LDPC convolutional codes derived from protograph-based LDPC block codes have the property that the free distance grows linearly with respect to the constraint length, i.e., the ensembles are asymptotically good. In particular, we show that the free distance to constraint length ratio of the LDPC convolutional code ensembles exceeds the minimum distance to block length ratio of the corresponding LDPC block code ensembles. A large free distance growth rate indicates that codes drawn from the ensemble should perform well at high signal-to-noise ratios under maximum-likelihood decoding. When suboptimal decoding methods are employed, there are many factors that affect the performance of a code. Recently, it has been shown that so-called trapping sets are a significant factor affecting decoding failures of LDPC codes over the additive white Gaussian noise channel with iterative message-passing decoding. In the second part of this paper, we study the trapping sets of the asymptotically good protograph-based LDPC convolutional codes considered earlier. By extending the theory presented in part one and using similar bounding techniques, we show that the size of the smallest non-empty trapping set grows linearly with the constraint length for these ensembles.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: The results show that the proposed two-phase successive cancellation decoder architecture for polar codes has lower complexity and lower memory utilization with higher throughput, and a clock frequency that is less sensitive to code length.
Abstract: We propose a two-phase successive cancellation (TPSC) decoder architecture for polar codes that exploits the array-code property of polar codes by breaking the decoding of a length-N polar code into a series of length-√N decoding cycles. Each decoding cycle consists of two phases: a first phase for decoding along the columns and a second phase for decoding along the rows of the code array. The reduced decoder size makes it more affordable to implement the core decoder logic using distributed memory elements consisting of flip-flops (FFs), as opposed to slower random access memory (RAM), leading to a speed-up in clock frequency. To minimize the circuit complexity, a single decoder unit is used in both phases with minor modifications. The re-use of the same decoder module makes it necessary to recall certain internal decoder state variables between decoding cycles. Instead of storing the decoder state variables in RAM, the decoder discards them and recalculates them when needed. Overall, the decoder has O(√N) circuit complexity excluding RAM, and a latency of approximately 2.5N. A RAM of size O(N) is needed for storing the channel log-likelihood variables and the decoder decision variables. As an example of the proposed method, a length N = 2^14 polar code is implemented in an FPGA and the synthesis results are compared with a previously reported FPGA implementation. The results show that the proposed architecture has lower complexity and lower memory utilization with higher throughput, and a clock frequency that is less sensitive to code length.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: In this article, the theory of rank metric and Gabidulin codes is transposed to the case of fields of characteristic zero and the Frobenius automorphism is replaced by any element of the Galois group.
Abstract: We transpose the theory of rank metric and Gabidulin codes to the case of fields of characteristic zero. The Frobenius automorphism is then replaced by any element of the Galois group. We derive some conditions on the automorphism under which the results obtained by Gabidulin, as well as a classical polynomial-time decoding algorithm, can be easily transposed. We also provide various definitions for the rank metric.

Journal ArticleDOI
TL;DR: Simulation results show that the performance degradation caused by the iterative multistage decoding algorithms depends on the code structure, and these decoders can be used to trade off performance against complexity.
Abstract: This letter is concerned with a class of nonbinary low-density parity-check (LDPC) codes, referred to as column-scaled LDPC (CS-LDPC) codes, whose parity-check matrices have the property that each column is a scaled binary vector. The CS-LDPC codes, which include algebraically constructed nonbinary LDPC codes as subclasses, admit fast encoding and decoding algorithms. Specifically, for a code over the finite field GF(2^p), the encoder can be implemented with p parallel binary LDPC encoders followed by a series of bijective mappers, while the decoder can be implemented as an iterative decoder in which no message permutations are required during the iterations. In addition, there exist low-complexity iterative multistage decoders that can be used to trade off performance against complexity. Simulation results show that the performance degradation caused by the iterative multistage decoding algorithms depends on the code structure.

Journal ArticleDOI
TL;DR: This brief studies the application of a similar technique to a class of Euclidean geometry low-density parity-check (EG-LDPC) codes that are one-step majority-logic decodable and shows that the method is also effective for EG-LDPC codes.
Abstract: In a recent paper, a method was proposed to accelerate the majority-logic decoding of difference-set low-density parity-check codes. This is useful because majority-logic decoding can be implemented serially with simple hardware but requires a long decoding time. For memory applications, this increases the memory access time. The method detects whether a word has errors in the first iterations of majority-logic decoding; when there are no errors, decoding ends without completing the remaining iterations. Since most words in a memory will be error-free, the average decoding time is greatly reduced. In this brief, we study the application of a similar technique to a class of Euclidean geometry low-density parity-check (EG-LDPC) codes that are one-step majority-logic decodable. The results obtained show that the method is also effective for EG-LDPC codes. Extensive simulation results are given to accurately estimate the probability of error detection for different code sizes and numbers of errors.
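
One-step majority-logic decoding with this early detection rule can be sketched behaviorally as follows (the hardware the brief targets is serial, which this sketch does not model). Here `orthogonal_checks[b]` is a hypothetical precomputed list of the parity checks orthogonal on bit b, each given as an array of bit positions.

```python
import numpy as np

def mlg_decode(orthogonal_checks, r, max_iters=5):
    """One-step majority-logic decoding with early detection: if no
    orthogonal check fails during an iteration, the word is declared
    error-free and decoding stops immediately."""
    x = np.array(r, dtype=int)
    for _ in range(max_iters):
        total_failures = 0
        flips = []
        for b, checks in enumerate(orthogonal_checks):
            fails = sum(int(np.sum(x[c]) % 2) for c in checks)
            total_failures += fails
            if 2 * fails > len(checks):   # a majority of checks vote "flip"
                flips.append(b)
        if total_failures == 0:           # early termination: word is clean
            return x, True
        for b in flips:
            x[b] ^= 1
    return x, False
```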

Journal ArticleDOI
TL;DR: Based on combinatorial optimization, an approximation method for check node processing is presented that has small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel while providing significant savings in hardware.
Abstract: Non-binary low-density parity-check codes are robust to various channel impairments. However, with the existing decoding algorithms, decoder implementations are expensive because of their excessive computational complexity and memory usage. Based on combinatorial optimization, we present an approximation method for the check node processing. The simulation results demonstrate that our scheme has small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel. Furthermore, the proposed reduced-complexity realization provides significant savings in hardware, so it yields a good performance-complexity tradeoff and can be efficiently implemented.

Journal ArticleDOI
TL;DR: In this article, a particle swarm optimization (PSO) algorithm was combined with the cocktail decoding method to solve the problem of multiprocessor task-scheduling in a hybrid flow shop (HFS) problem.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: In this article, a scheme for concatenating binary polar codes with interleaved Reed-Solomon codes is proposed, which achieves the capacity-achieving property of polar codes, while having a significantly better errordecay rate.
Abstract: A scheme for concatenating the recently invented polar codes with interleaved block codes is considered. By concatenating binary polar codes with interleaved Reed-Solomon codes, we prove that the proposed concatenation scheme captures the capacity-achieving property of polar codes, while having a significantly better error-decay rate. We show that for any e > 0, and total frame length N, the parameters of the scheme can be set such that the frame error probability is less than 2-N 1-e, while the scheme is still capacity achieving. This improves upon 2-N 0.5-e, the frame error probability of Arikan's polar codes. We also propose decoding algorithms for concatenated polar codes, which significantly improve the error-rate performance at finite block lengths while preserving the low decoding complexity.

Journal ArticleDOI
TL;DR: The Silent-Variable-Node-Free RBP (SVNF-RBP) schedule is proposed, which can force all variable nodes to contribute their intrinsic messages to the decoding process equally and provide appealing convergence speed and convergence error-rate performance compared to previous IDS decoders for both dedicated and punctured LDPC codes.
Abstract: When residual belief-propagation (RBP), which is a kind of informed dynamic scheduling (IDS), is applied to low-density parity-check (LDPC) codes, the convergence speed in error-rate performance can be significantly improved. However, the RBP decoders presented in previous literature suffer from poor convergence error-rate performance due to the two phenomena explored in this paper. The first is the greedy-group phenomenon, which results in a small part of the decoding graph occupying most of the decoding resources. By limiting the number of updates for each edge message in the decoding graph, the proposed Quota-based RBP (Q-RBP) schedule can reduce the probability of greedy groups forming. The other phenomenon is the silent-variable-nodes issue, which is a condition where some variable nodes have no chance of contributing their intrinsic messages to the decoding process. As a result, we propose the Silent-Variable-Node-Free RBP (SVNF-RBP) schedule, which can force all variable nodes to contribute their intrinsic messages to the decoding process equally. Both the Q-RBP and the SVNF-RBP provide appealing convergence speed and convergence error-rate performance compared to previous IDS decoders for both dedicated and punctured LDPC codes.
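
The quota rule slots naturally into the usual RBP scheduling loop: edges are served in order of residual, but an edge that has exhausted its quota is retired, which starves the greedy group. A schematic skeleton follows, with `compute_msg` as a hypothetical callback producing the candidate message for an edge; residuals are recomputed naively here rather than incrementally as a real decoder would.

```python
def q_rbp(edges, compute_msg, quota=5, max_updates=1000):
    """Schematic Quota-based RBP scheduling."""
    msgs = {e: 0.0 for e in edges}      # current message on each edge
    counts = {e: 0 for e in edges}      # updates consumed per edge
    for _ in range(max_updates):
        live = [e for e in edges if counts[e] < quota]
        if not live:
            break
        # serve the live edge with the largest residual
        e = max(live, key=lambda d: abs(compute_msg(d, msgs) - msgs[d]))
        msgs[e] = compute_msg(e, msgs)  # propagate its new message
        counts[e] += 1
    return msgs
```

SVNF-RBP replaces the selection rule so that every variable node gets to contribute its intrinsic message in turn; both schedules fit this skeleton.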

Proceedings ArticleDOI
07 Jul 2013
TL;DR: A new class of exact-repair regenerating codes is constructed by combining two layers of erasure correction codes together with combinatorial block designs that have the “uncoded repair” property where the nodes participating in the repair simply transfer part of the stored data directly, without performing any computation.
Abstract: A new class of exact-repair regenerating codes is constructed by combining two layers of erasure correction codes together with combinatorial block designs. The proposed codes have the “uncoded repair” property where the nodes participating in the repair simply transfer part of the stored data directly, without performing any computation. The layered error correction structure results in a low-complexity decoding process. An analysis of our coding scheme is presented. This construction is able to achieve better performance than timesharing between the minimum storage regenerating codes and the minimum repair-bandwidth regenerating codes.

Proceedings ArticleDOI
07 Jun 2013
TL;DR: In this paper, the performance of finite-length batched sparse (BATS) codes with belief propagation (BP) decoding is analyzed, and a recursive formula is obtained to calculate the exact probability distribution of the stopping time of the BP decoder.
Abstract: In this paper, the performance of finite-length batched sparse (BATS) codes with belief propagation (BP) decoding is analyzed. For a fixed number of input symbols and a fixed number of batches, a recursive formula is obtained to calculate the exact probability distribution of the stopping time of the BP decoder. When the number of batches follows a Poisson distribution, a recursive formula with lower computational complexity is derived. Inactivation decoding can be applied to reduce the receiving overhead of the BP decoder, where the number of inactive symbols determines the extra computation cost of inactivation decoding. Two more recursive formulas are derived to calculate the expected number of inactive symbols, for a fixed number of batches and for a Poisson-distributed number of batches, respectively. Since LT/Raptor codes are BATS codes with unit batch size, our results also provide new analytical tools for LT/Raptor codes.

Journal ArticleDOI
TL;DR: A novel relaxed check node processing scheme is proposed for the min-max NB-LDPC decoding algorithm, and the complexity of the check node processing can be substantially reduced using the proposed scheme.
Abstract: Compared to binary low-density parity-check (LDPC) codes, nonbinary (NB) LDPC codes can achieve higher coding gain when the codeword length is moderate, but at the cost of higher decoding complexity. One major bottleneck of NB-LDPC decoding is the complicated check node processing. In this paper, a novel relaxed check node processing scheme is proposed for the min-max NB-LDPC decoding algorithm. Each finite field element of GF(2^p) can be uniquely represented by a linear combination of p independent field elements. Making use of this property, an innovative method is developed in this paper to first find a set of the p most reliable variable-to-check messages with independent field elements, called the minimum basis. Then, the check-to-variable messages are efficiently computed from the minimum basis. With very small performance loss, the complexity of the check node processing can be substantially reduced using the proposed scheme. In addition, efficient VLSI architectures are developed to implement the proposed check node processing and the overall NB-LDPC decoder. Compared to the most efficient prior design, the proposed decoder for an (837, 726) NB-LDPC code over GF(2^5) can achieve 52% higher efficiency in terms of throughput-over-area ratio.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: A construction of 2-parity MDS array codes is presented that allows for optimal repair of a failed information node using XOR operations only; the reduction of the field order is achieved by allowing more parity bits to be updated when a single information bit is changed by the user.
Abstract: Maximum-distance separable (MDS) array codes with high rate and an optimal repair property were introduced recently. These codes could be applied in distributed storage systems, where they minimize the communication and disk access required for the recovery of failed nodes. However, the encoding and decoding algorithms of the proposed codes use arithmetic over finite fields of order greater than 2, which can result in a complex implementation. In this work, we present a construction of 2-parity MDS array codes that allow for optimal repair of a failed information node using XOR operations only. The reduction of the field order is achieved by allowing more parity bits to be updated when a single information bit is changed by the user.

Journal ArticleDOI
TL;DR: This paper presents three algorithms based on stochastic computation to reduce the decoding complexity of non-binary low-density parity-check codes over Galois fields with low order and a small variable node degree and studies the performance and complexity of the algorithms.
Abstract: Despite the outstanding performance of non-binary low-density parity-check (LDPC) codes over many communication channels, they are not in widespread use yet. This is due to the high implementation complexity of their decoding algorithms, even those that compromise performance for the sake of simplicity. In this paper, we present three algorithms based on stochastic computation to reduce the decoding complexity. The first is a purely stochastic algorithm with error-correcting performance matching that of the sum-product algorithm (SPA) for LDPC codes over Galois fields with low order and a small variable node degree. We also present a modified version which reduces the number of decoding iterations required while remaining purely stochastic and having a low per-iteration complexity. The second algorithm, relaxed half-stochastic (RHS) decoding, combines elements of the SPA and the stochastic decoder and uses successive relaxation to match the error-correcting performance of the SPA. Furthermore, it uses fewer iterations than the purely stochastic algorithm and does not have limitations on the field order and variable node degree of the codes it can decode. The third algorithm, NoX, is a fully stochastic specialization of RHS for codes with a variable node degree 2 that offers similar performance, but at a significantly lower computational complexity. We study the performance and complexity of the algorithms; noting that all have lower per-iteration complexity than SPA and that RHS can have comparable average per-codeword computational complexity, and NoX a lower one.