
Showing papers on "List decoding published in 2014"


Journal ArticleDOI
TL;DR: This work aims to increase the throughput of polar decoding hardware by an order of magnitude relative to successive-cancellation decoders; the resulting decoder is more than 8 times faster than the current fastest polar decoder.
Abstract: Polar codes provably achieve the symmetric capacity of a memoryless channel while having an explicit construction. The adoption of polar codes, however, has been hampered by the low throughput of their decoding algorithm. This work increases the throughput of polar decoding hardware by an order of magnitude relative to successive-cancellation decoders, yielding a decoder more than 8 times faster than the current fastest polar decoder. We present an algorithm, architecture, and FPGA implementation of a flexible, gigabit-per-second polar decoder.

391 citations


Journal ArticleDOI
TL;DR: In this paper, a random access scheme is introduced which relies on the combination of packet erasure correcting codes and successive interference cancellation (SIC), and a bipartite graph representation of the SIC process, resembling iterative decoding of generalized low-density parity-check codes over the erasure channel, is exploited to optimize the selection probabilities of the component erasure correcting codes via density evolution analysis.
Abstract: In this paper, a random access scheme is introduced which relies on the combination of packet erasure correcting codes and successive interference cancellation (SIC). The scheme is named coded slotted ALOHA. A bipartite graph representation of the SIC process, resembling iterative decoding of generalized low-density parity-check codes over the erasure channel, is exploited to optimize the selection probabilities of the component erasure correcting codes via density evolution analysis. The capacity (in packets per slot) of the scheme is then analyzed in the context of the collision channel without feedback. Moreover, a capacity bound is developed and component code distributions tightly approaching the bound are derived.

241 citations
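
The SIC process described in this abstract behaves like peeling decoding on the bipartite graph. Below is a toy Python simulation of that peeling, assuming the simplest component code, a 2-replica repetition (a CRDSA-like special case); the paper optimizes general component-code distributions via density evolution, which this sketch does not attempt.

```python
import random

def sic_throughput(n_users, n_slots, reps=2, trials=200):
    resolved_total = 0
    for _ in range(trials):
        slots = [[] for _ in range(n_slots)]
        for u in range(n_users):
            for s in random.sample(range(n_slots), reps):
                slots[s].append(u)  # user u places a replica in slot s
        resolved = set()
        progress = True
        while progress:  # iteratively peel singleton slots and cancel their replicas
            progress = False
            for s in range(n_slots):
                live = [u for u in slots[s] if u not in resolved]
                if len(live) == 1:
                    resolved.add(live[0])
                    progress = True
        resolved_total += len(resolved)
    return resolved_total / (trials * n_slots)  # throughput in packets per slot

print(sic_throughput(n_users=60, n_slots=100))  # offered load G = 0.6 packets/slot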


Journal ArticleDOI
TL;DR: The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the SC algorithm for efficient decoding, and state-of-the-art decoding algorithms, such as the BP and some generalized SC decoding, are explained in a broad framework.
Abstract: Polar codes represent an emerging class of error-correcting codes with the power to approach the capacity of a discrete memoryless channel. This overview article aims to illustrate their principles, generation, and decoding techniques. Unlike the traditional capacity-approaching coding strategy that tries to make codes as random as possible, polar codes follow a different philosophy, also originated by Shannon, by creating a jointly typical set. Channel polarization, a concept central to polar codes, is intuitively elaborated by a Matthew effect in the digital world, followed by a detailed overview of construction methods for polar encoding. The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the SC algorithm for efficient decoding. The SC decoding technique is investigated from the conceptual and practical viewpoints. State-of-the-art decoding algorithms, such as BP and some generalized SC decoding, are also explained in a broad framework. Simulation results show that the performance of polar codes concatenated with CRC codes can outperform that of turbo or LDPC codes. Some promising research directions in practical scenarios are also discussed at the end.

207 citations
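
The "butterfly structure" referred to above is the n-fold Kronecker power of the 2x2 kernel F = [[1, 0], [1, 1]]. A minimal sketch of this transform follows (bit-reversal permutation omitted, so indexing conventions may differ from any given paper):

```python
import numpy as np

def polar_transform(u):
    """Apply the n-fold Kronecker power of F = [[1, 0], [1, 1]] over GF(2)."""
    x = np.array(u, dtype=int) % 2
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            # Butterfly: upper half absorbs the XOR, lower half passes through.
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

# Example: transform an 8-bit input vector.
print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
```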


Journal ArticleDOI
TL;DR: A low-complexity alternative for soft-output decoding of polar codes that offers better performance but with significantly reduced processing and storage requirements is proposed.
Abstract: The state-of-the-art soft-output decoder for polar codes is a message-passing algorithm based on belief propagation, which performs well at the cost of high processing and storage requirements. In this paper, we propose a low-complexity alternative for soft-output decoding of polar codes that offers better performance but with significantly reduced processing and storage requirements. In particular, we show that the complexity of the proposed decoder is only 4% of the total complexity of the belief propagation decoder for a rate one-half polar code of dimension 4096 on the dicode channel, while achieving comparable error-rate performance. Furthermore, we show that the proposed decoder requires about 39% of the memory required by the belief propagation decoder for a block length of 32768.

136 citations


Journal ArticleDOI
TL;DR: This brief presents a hardware architecture and algorithmic improvements for list successive cancellation (SC) decoding of polar codes and shows how to completely avoid copying of the likelihoods, which is algorithmically the most cumbersome part of list SC decoding.
Abstract: This brief presents a hardware architecture and algorithmic improvements for list successive cancellation (SC) decoding of polar codes. More specifically, we show how to completely avoid copying of the likelihoods, which is algorithmically the most cumbersome part of list SC decoding. The hardware architecture was synthesized for a blocklength of N = 1024 bits and list sizes L = 2, 4 using a UMC 90 nm VLSI technology. The resulting decoder can achieve a coded throughput of 181 Mb/s at a frequency of 459 MHz.

135 citations


Journal ArticleDOI
01 Aug 2014
TL;DR: Empirically, the performance of polar codes at finite block lengths is boosted by moving along the family C_inter even under low-complexity decoding schemes such as belief propagation or successive-cancellation list decoding.
Abstract: We explore the relationship between polar and RM codes and we describe a coding scheme which improves upon the performance of the standard polar code at practical block lengths. Our starting point is the experimental observation that RM codes have a smaller error probability than polar codes under MAP decoding. This motivates us to introduce a family of codes that "interpolates" between RM and polar codes; call this family C_inter = {C_α : α ∈ [0, 1]}, where C_α at α = 1 is the original polar code and C_α at α = 0 is an RM code. Based on numerical observations, we remark that the error probability under MAP decoding is an increasing function of α. MAP decoding has in general exponential complexity, but empirically the performance of polar codes at finite block lengths is boosted by moving along the family C_inter even under low-complexity decoding schemes such as belief propagation or successive-cancellation list decoding. We demonstrate the performance gain via numerical simulations for transmission over the erasure channel as well as the Gaussian channel.

133 citations
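
One way to picture an interpolating family is as a mixture of two row-selection rules: reliability order (polar) and row-weight order (RM). The sketch below is a hypothetical realization for a BEC design, not the paper's exact definition of C_α; the mixing rule and the Bhattacharyya recursion are standard, but treating α as a rank-blending weight is purely illustrative.

```python
def bhattacharyya(n_levels, z0=0.5):
    """Bhattacharyya parameters of the 2^n_levels synthetic channels of a BEC(z0)."""
    z = [z0]
    for _ in range(n_levels):
        z = [v for w in z for v in (2 * w - w * w, w * w)]
    return z

def info_set(n_levels, k, alpha):
    """Pick k rows by mixing the polar (reliability) and RM (row weight) orders."""
    n = 2 ** n_levels
    z = bhattacharyya(n_levels)
    polar_order = sorted(range(n), key=lambda i: z[i])             # most reliable first
    rm_order = sorted(range(n), key=lambda i: -bin(i).count("1"))  # heaviest rows first
    score = {i: alpha * polar_order.index(i) + (1 - alpha) * rm_order.index(i)
             for i in range(n)}
    return sorted(sorted(range(n), key=lambda i: score[i])[:k])

print(info_set(n_levels=4, k=8, alpha=0.5))  # alpha = 1: polar; alpha = 0: RM-like
```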


Journal ArticleDOI
TL;DR: This tutorial elaborates on the concept of EXIT charts using three iteratively decoded prototype systems as design examples, and illustrates further applications of EXIT charts, including near-capacity designs, the idea of irregular codes, and the design of modulation schemes.
Abstract: Near-capacity performance may be achieved with the aid of iterative decoding, where extrinsic soft information is exchanged between the constituent decoders in order to improve the attainable system performance. Extrinsic Information Transfer (EXIT) charts constitute a powerful semi-analytical tool used for analysing and designing iteratively decoded systems. In this tutorial, we commence by providing a rudimentary overview of the iterative decoding principle and the concept of soft information exchange. We then elaborate on the concept of EXIT charts using three iteratively decoded prototype systems as design examples. We conclude by illustrating further applications of EXIT charts, including near-capacity designs, the concept of irregular codes and the design of modulation schemes.

117 citations
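
An EXIT chart point is a measurement of extrinsic mutual information. A minimal sketch, assuming consistent Gaussian LLRs for the all-zero codeword (the usual J-function model); a real chart would feed actual decoder output through the same estimator.

```python
import numpy as np

def mutual_info_from_llrs(llrs):
    # I(X; L) = 1 - E[log2(1 + exp(-L))], valid for LLRs conditioned on x = 0.
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llrs)))

def j_function(sigma, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Consistent Gaussian model: mean sigma^2 / 2, standard deviation sigma.
    llrs = rng.normal(sigma ** 2 / 2.0, sigma, size=n)
    return mutual_info_from_llrs(llrs)

for s in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {s}: I = {j_function(s):.3f}")
```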


Journal ArticleDOI
TL;DR: In this article, a cost-constrained random coding ensemble with multiple auxiliary costs is introduced, and is shown to achieve error exponents and second-order coding rates matching those of constant-composition random coding, while being directly applicable to channels with infinite or continuous alphabets.
Abstract: This paper considers the problem of channel coding with a given (possibly suboptimal) maximum-metric decoding rule. A cost-constrained random-coding ensemble with multiple auxiliary costs is introduced, and is shown to achieve error exponents and second-order coding rates matching those of constant-composition random coding, while being directly applicable to channels with infinite or continuous alphabets. The number of auxiliary costs required to match the error exponents and second-order rates of constant-composition coding is studied, and is shown to be at most two. For independent identically distributed random coding, asymptotic estimates of two well-known non-asymptotic bounds are given using saddlepoint approximations. Each expression is shown to characterize the asymptotic behavior of the corresponding random-coding bound at both fixed and varying rates, thus unifying the regimes characterized by error exponents, second-order rates, and moderate deviations. For fixed rates, novel exact asymptotic expressions are obtained to within a multiplicative 1+o(1) term. Using numerical examples, it is shown that the saddlepoint approximations are highly accurate even at short block lengths.

106 citations


Journal ArticleDOI
TL;DR: Here, a fast decoding algorithm, called the adaptive successive decoder, is developed, and for any rate R less than the capacity C, communication is shown to be reliable with nearly exponentially small error probability.
Abstract: For the additive white Gaussian noise channel with average codeword power constraint, sparse superposition codes are developed. These codes are based on the statistical high-dimensional regression framework. In a previous paper, we investigated decoding using the optimal maximum-likelihood decoding scheme. Here, a fast decoding algorithm, called the adaptive successive decoder, is developed. For any rate R less than the capacity C, communication is shown to be reliable with nearly exponentially small error probability. Specifically, for blocklength n, it is shown that the error probability is exponentially small in n/log n.

100 citations
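
The regression view mentioned in the abstract: the codeword is X·β for a known dictionary X and a section-sparse coefficient vector β. A toy encoder sketch, assuming a flat power allocation (the actual construction uses a specific power allocation across sections, and the adaptive successive decoder is not shown):

```python
import numpy as np

def sparse_superposition_encode(bits, n, L, M, rng):
    """Map L * log2(M) bits to X @ beta with one active column per section."""
    bpsec = int(np.log2(M))
    assert len(bits) == L * bpsec
    X = rng.normal(0.0, 1.0 / np.sqrt(L), size=(n, L * M))  # i.i.d. Gaussian dictionary
    beta = np.zeros(L * M)
    for sec in range(L):
        chunk = bits[sec * bpsec:(sec + 1) * bpsec]
        idx = int("".join(map(str, chunk)), 2)  # section-local column index
        beta[sec * M + idx] = 1.0
    return X @ beta

rng = np.random.default_rng(0)
cw = sparse_superposition_encode([1, 0, 1, 1, 0, 0, 1, 0], n=32, L=4, M=4, rng=rng)
print(cw.shape)  # (32,)
```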


Journal ArticleDOI
TL;DR: Multiplicity codes, as introduced in this paper, are based on evaluating multivariate polynomials and their derivatives; they inherit the local decodability of traditional multivariate polynomial codes, and at the same time achieve better tradeoffs and flexibility in the rate and minimum distance.
Abstract: Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k^ε) were known only for codes of rate ε^Ω(1/ε), where k is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial locality had been achieved. In this article, we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 − α, and is locally decodable from a constant fraction of errors using O(k^ε) queries and time. These codes, which we call multiplicity codes, are based on evaluating multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial codes; they inherit the local decodability of these codes, and at the same time achieve better tradeoffs and flexibility in the rate and minimum distance.

96 citations
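
A univariate toy version of the encoding idea, where each codeword symbol is the pair (f(a), f'(a)) over GF(101); the article's codes are multivariate, which is what enables rate approaching 1, so this sketch only illustrates the use of derivative evaluations as extra symbols.

```python
P = 101  # field size

def poly_eval(coeffs, x):
    """Horner evaluation over GF(P); coefficients are lowest degree first."""
    r = 0
    for c in reversed(coeffs):
        r = (r * x + c) % P
    return r

def derivative(coeffs):
    """Formal derivative of the polynomial, coefficient-wise."""
    return [(i * c) % P for i, c in enumerate(coeffs)][1:]

def encode(coeffs):
    """Codeword symbol at point a is the pair (f(a), f'(a))."""
    d = derivative(coeffs)
    return [(poly_eval(coeffs, a), poly_eval(d, a)) for a in range(P)]

msg = [3, 1, 4, 1, 5, 9, 2, 6]  # a degree-7 message polynomial
print(encode(msg)[:3])
```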


Journal ArticleDOI
TL;DR: This work proposes a new strategy to decode color codes, which is based on the projection of the error onto three surface codes, and establishes a general lower bound on the error threshold of a family of color codes depending on the threshold of the three corresponding surface codes.
Abstract: We propose a general strategy to decode color codes, which is based on the projection of the error onto three surface codes. This provides a method to transform every decoding algorithm of surface codes into a decoding algorithm of color codes. Applying this idea to a family of hexagonal color codes, with the perfect matching decoding algorithm for the three corresponding surface codes, we find a phase error threshold of approximately 8.7%. Finally, our approach enables us to establish a general lower bound on the error threshold of a family of color codes depending on the threshold of the three corresponding surface codes. These results are based on a chain complex interpretation of surface codes and color codes.

Journal ArticleDOI
TL;DR: A low-complexity sequential soft decision decoding algorithm is proposed, based on the successive cancellation approach, and it employs most likely codeword probability estimates for selection of a path within the code tree to be extended.
Abstract: The problem of efficient decoding of polar codes is considered. A low-complexity sequential soft-decision decoding algorithm is proposed. It is based on the successive cancellation approach, and it employs probability estimates of the most likely codeword to select which path within the code tree to extend.

Journal ArticleDOI
TL;DR: A new partial-sum updating algorithm and the corresponding PSN architecture are introduced which achieve a delay performance independent of the code length and the area complexity is reduced, for a high-performance and area-efficient semi-parallel SCD implementation.
Abstract: Polar codes have recently received a lot of attention because of their capacity-achieving performance and low encoding and decoding complexity. The performance of the successive cancellation decoder (SCD) of polar codes highly depends on that of the partial-sum network (PSN) implementation. Hence, in this work, an efficient PSN architecture is proposed, based on the properties of polar codes. First, a new partial-sum updating algorithm and the corresponding PSN architecture are introduced which achieve a delay performance independent of the code length. Moreover, the area complexity is also reduced. Second, for a high-performance and area-efficient semi-parallel SCD implementation, a folded PSN architecture is presented to integrate seamlessly with the folded processing element architecture. This is achieved by using a novel folded decoding schedule. As a result, both the critical path delay and the area (excluding the memory for folding) of the semi-parallel SCD are approximately constant for a large range of code lengths. The proposed designs are implemented in both FPGA and ASIC and compared with the existing designs. Experimental results show that for polar codes with large code length, the decoding throughput is improved by more than 1.05 times and the area is reduced by as much as 50.4%, compared with the state-of-the-art designs.

Book
01 Jan 2014
TL;DR: A special class of convolutional codes in rank metric is introduced and an efficient decoding algorithm for these codes is proposed; these convolutional codes are (partial) unit memory codes, built upon rank-metric block codes.
Abstract: Rank-metric codes have recently attracted a lot of attention due to their possible applications in network coding, cryptography, space-time coding and distributed storage. An optimal-cardinality algebraic code construction in rank metric was introduced some decades ago by Delsarte, Gabidulin and Roth. This Reed–Solomon-like code class is based on the evaluation of linearized polynomials and is nowadays called Gabidulin codes. This dissertation considers block and convolutional codes in rank metric with the objective of designing and investigating efficient decoding algorithms for both code classes. After giving a brief introduction to codes in rank metric and their properties, we first derive sub-quadratic-time algorithms for operations with linearized polynomials and state a new bounded minimum distance decoding algorithm for Gabidulin codes. This algorithm directly outputs the linearized evaluation polynomial of the estimated codeword by means of the (fast) linearized Euclidean algorithm. Second, we present a new interpolation-based algorithm for unique and (not necessarily polynomial-time) list decoding of interleaved Gabidulin codes. This algorithm decodes most error patterns of rank greater than half the minimum rank distance by efficiently solving two linear systems of equations. As a third topic, we investigate the possibilities of polynomial-time list decoding of rank-metric codes in general and Gabidulin codes in particular. For this purpose, we derive three bounds on the list size. These bounds show that the behavior of the list size for both Gabidulin codes and rank-metric block codes in general is significantly different from that of Reed–Solomon codes and of block codes in Hamming metric, respectively. The bounds imply, among other things, that there exists no polynomial upper bound on the list size in rank metric analogous to the Johnson bound in Hamming metric, which depends only on the length and the minimum rank distance of the code. Finally, we introduce a special class of convolutional codes in rank metric and propose an efficient decoding algorithm for these codes. These convolutional codes are (partial) unit memory codes, built upon rank-metric block codes. This structure is crucial in the decoding process since we exploit the efficient decoders of the underlying block codes in order to decode the convolutional code.
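
Gabidulin codewords are evaluations of linearized polynomials f(x) = Σ a_i · x^(2^i), which are GF(2)-linear maps. A toy evaluation over GF(2^4) with modulus x^4 + x + 1 (field size chosen only for brevity), checking the linearity f(x + y) = f(x) + f(y):

```python
MOD = 0b10011  # GF(2^4) modulus x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^4) with reduction by MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def gf_pow2(x, i):
    """Compute x^(2^i) by squaring i times (the Frobenius map, iterated)."""
    for _ in range(i):
        x = gf_mul(x, x)
    return x

def lin_eval(coeffs, x):
    """Evaluate the linearized polynomial f(x) = sum_i coeffs[i] * x^(2^i)."""
    acc = 0
    for i, a in enumerate(coeffs):
        acc ^= gf_mul(a, gf_pow2(x, i))
    return acc

f_coeffs = [3, 1, 7]  # arbitrary coefficients in GF(2^4)
x, y = 5, 9
# GF(2)-linearity check: addition in the field is XOR.
print(lin_eval(f_coeffs, x ^ y) == lin_eval(f_coeffs, x) ^ lin_eval(f_coeffs, y))
```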

Journal ArticleDOI
TL;DR: This paper proposes a low-complexity min-sum algorithm for decoding low-density parity-check codes in which the two-minimum calculation is replaced by one minimum calculation and a second-minimum emulation, thereby reducing the decoder complexity.
Abstract: This paper proposes a low-complexity min-sum algorithm for decoding low-density parity-check codes. It is an improved version of the single-minimum algorithm, in which the two-minimum calculation is replaced by one minimum calculation and a second-minimum emulation. In the proposed algorithm, variable correction factors that depend on the iteration number are introduced and the second-minimum emulation is simplified, thereby reducing the decoder complexity. This proposal improves the performance of the single-minimum algorithm, approaching the normalized min-sum performance in the waterfall region. Also, the error-floor region is analyzed for the code of the IEEE 802.3an standard, showing that the trapping sets are decoded due to a slowdown in the convergence of the algorithm. Error-floor-free operation below BER = 10^-15 is shown for this code by means of a field-programmable gate array (FPGA)-based hardware emulator. A layered decoder is implemented in a 90-nm CMOS technology, achieving 12.8 Gbps with an area of 3.84 mm^2.
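
The core trick is easy to state in code: a standard min-sum check node needs the two smallest input magnitudes, and the single-minimum variant replaces the true second minimum with an emulated value. The sketch below uses a constant offset for the emulation; the paper's iteration-dependent correction factors are not reproduced here.

```python
def check_node_update(llrs_in, offset=0.5):
    """Min-sum check node with an emulated second minimum."""
    mags = [abs(v) for v in llrs_in]
    min1 = min(mags)
    pos = mags.index(min1)
    min2_emulated = min1 + offset  # stands in for a true second-minimum search
    total_sign = 1
    for v in llrs_in:
        total_sign *= 1 if v >= 0 else -1
    out = []
    for i, v in enumerate(llrs_in):
        # The edge holding min1 gets the (emulated) second minimum; others get min1.
        mag = min2_emulated if i == pos else min1
        sign = total_sign * (1 if v >= 0 else -1)  # product of the other signs
        out.append(sign * mag)
    return out

print(check_node_update([1.2, -0.4, 2.5, -3.1]))
```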

Journal ArticleDOI
TL;DR: In this article, the authors gave an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, which have polynomial time encoding and decoding algorithms.
Abstract: In this paper, we give an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, which have polynomial-time encoding and decoding algorithms. The block length of our construction is N = (t/ε)^O(t/(δε)), where ε is the gap to capacity, and encoding and decoding run in time N^(1+δ). This is the first deterministic construction achieving these parameters. Our techniques also apply to larger alphabets.

Proceedings ArticleDOI
18 Oct 2014
TL;DR: In this paper, the authors proposed a coding scheme for the standard setting which performs optimally in all three measures: maximum tolerable error rate, communication complexity, and computational complexity.
Abstract: We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any n-round interactive protocol using N rounds over an adversarial channel that corrupts up to ρN transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate ρ, communication complexity N, and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: Our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate ρ < 1/4 with a linear N = Θ(n) communication complexity. This improves over prior results [1]–[4], which each performed well in two of these measures. We also give results for other settings of interest, namely, the first computationally and communication-efficient schemes that tolerate ρ < 2/7 adaptively, ρ < 1/3 if only one party is required to decode, and ρ < 1/2 if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near-linear computational and communication complexity. These results are obtained via two techniques: We give a general black-box reduction which reduces unique decoding, in various settings, to list decoding. We also show how to boost the computational and communication efficiency of any list decoder to become near-linear.

Journal ArticleDOI
TL;DR: This paper presents an improved architecture for successive-cancellation decoding of polar codes, making use of a novel semi-parallel, encoder-based partial-sum computation module, and explores various optimization techniques such as a chained processing element and a variable quantization scheme.
Abstract: Polar codes are the first error-correcting codes to provably achieve channel capacity, asymptotically in code length, with an explicit construction. However, under successive-cancellation decoding, polar codes require very long code lengths to compete with existing modern codes. Nonetheless, the successive cancellation algorithm enables very-low-complexity implementations in hardware, due to the regular structure exhibited by polar codes. In this paper, we present an improved architecture for successive-cancellation decoding of polar codes, making use of a novel semi-parallel, encoder-based partial-sum computation module. We also provide quantization results for a realistic code length N = 2^15, and explore various optimization techniques such as a chained processing element and a variable quantization scheme. This design is shown to scale to code lengths of up to N = 2^21, enabled by its low logic use, low register use and simple datapaths, limited almost exclusively by the amount of available SRAM. It also supports an overlapped loading of frames, allowing full-throughput decoding with a single set of input buffers.
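
The processing elements in such architectures evaluate two LLR update rules, with the partial sums (the subject of the paper's computation module) feeding the second rule. A minimal software sketch of the standard hardware-friendly min-sum forms (the paper's datapath optimizations are not modeled):

```python
def f(a, b):
    """f(a, b) ≈ sign(a) · sign(b) · min(|a|, |b|)."""
    s = (1 if a >= 0 else -1) * (1 if b >= 0 else -1)
    return s * min(abs(a), abs(b))

def g(a, b, u):
    """g(a, b, u) = b + (1 - 2u) · a, where u is the partial sum fed back."""
    return b + (1 - 2 * u) * a

print(f(1.5, -0.7), g(1.5, -0.7, 1))
```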

Journal ArticleDOI
TL;DR: In this article, a stack SD (SSD) algorithm with an efficient enumeration is proposed, based on a novel path metric, which can effectively narrow the search range when enumerating the candidates within a sphere.
Abstract: Sphere decoding (SD) of polar codes is an efficient method to achieve the error performance of maximum likelihood (ML) decoding. But the complexity of the conventional sphere decoder is still high, where the candidates in a target sphere are enumerated and the radius is decreased gradually until no available candidate is in the sphere. In order to reduce the complexity of SD, a stack SD (SSD) algorithm with an efficient enumeration is proposed in this paper. Based on a novel path metric, SSD can effectively narrow the search range when enumerating the candidates within a sphere. The proposed metric follows an exact ML rule and makes full use of the whole received sequence. Furthermore, another very simple metric is provided as an approximation of the ML metric in the high signal-to-noise ratio regime. For short polar codes, simulation results over the additive white Gaussian noise channels show that the complexity of SSD based on the proposed metrics is up to 100 times lower than that of the conventional SD.

Journal ArticleDOI
TL;DR: The proposed error-control systems achieve good tradeoffs between error performance and complexity compared with traditional schemes and are also very favorable for implementation.
Abstract: In this work, we consider high-rate error-control systems for storage devices using multi-level per cell (MLC) NAND flash memories. Aiming at achieving a strong error-correcting capability, we propose error-control systems using block-wise parallel/serial concatenations of short Bose-Chaudhuri-Hocquenghem (BCH) codes with two iterative decoding strategies, namely, iterative hard-decision decoding (IHDD) and iterative reliability based decoding (IRBD). It will be shown that a simple but very efficient IRBD is possible by taking advantage of a unique feature of the block-wise concatenation. For tractable performance analysis and design of IHDD and IRBD at very low error rates, we derive semi-analytic approaches. The proposed error-control systems are compared with various error-control systems with well-known coding schemes such as a product code, multiple BCH codes, a single long BCH code, and low-density parity-check codes in terms of page error rates, which confirms our claim: the proposed error-control systems achieve good tradeoffs between error performance and complexity compared with the traditional schemes and are also very favorable for implementation.

Proceedings ArticleDOI
18 Oct 2014
TL;DR: In this paper, the authors extend the notion of list decoding to the setting of interactive communication and study its limits, showing that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to 1/2 − ε, and that this is tight.
Abstract: In this paper we extend the notion of list decoding to the setting of interactive communication and study its limits. In particular, we show that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to 1/2 − ε, and that this is tight. Using our list-decodable construction, we study a more nuanced model of noise where the adversary can corrupt up to a fraction α of Alice's communication and up to a fraction β of Bob's communication. We use list decoding in order to fully characterize the region R_U of pairs (α, β) for which unique decoding with a constant rate is possible. The region R_U turns out to be quite unusual in its shape. In particular, it is bounded by a piecewise-differentiable curve with infinitely many pieces. We show that outside this region, the rate must be exponential. This suggests that in some error regimes, list decoding is necessary for optimal unique decoding. We also consider the setting where only one party of the communication must output the correct answer. We precisely characterize the region of all pairs (α, β) for which one-sided unique decoding is possible in such a way that Alice will output the correct answer.

Proceedings ArticleDOI
06 Apr 2014
TL;DR: The union bounds as well as the simulation results illustrate that, under ML decoding, SPCs are superior to NSPCs in BER performance while NSPCs and SPCs have the same FER performance.
Abstract: The distance spectrum is used to estimate the maximum likelihood (ML) performance of block codes. A practical method which can run on a memory-constrained computer is proposed to calculate the distance spectrum of polar codes. Utilizing the distance spectrum, the frame error rate (FER) and bit error rate (BER) performances of non-systematic polar codes (NSPCs) and systematic polar codes (SPCs) are analyzed. The union bounds as well as the simulation results illustrate that, under ML decoding, SPCs are superior to NSPCs in BER performance while NSPCs and SPCs have the same FER performance.
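
Once a distance spectrum {d: A_d} is known, the ML union bound for BPSK over AWGN is FER ≤ Σ_d A_d · Q(√(2·d·R·Eb/N0)). A sketch of that computation follows; the spectrum values below are made up purely for illustration, not taken from the paper.

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def union_bound_fer(spectrum, rate, ebno_db):
    """Union bound on FER from a distance spectrum {d: A_d}, BPSK over AWGN."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_d * Q(sqrt(2.0 * d * rate * ebno)) for d, a_d in spectrum.items())

toy_spectrum = {8: 12, 12: 88, 16: 512}  # hypothetical multiplicities A_d
print(union_bound_fer(toy_spectrum, rate=0.5, ebno_db=2.0))
```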

Journal ArticleDOI
TL;DR: A simple bit-level error model is introduced, decoder symmetry is shown to be preserved under this model, and the corresponding density evolution equations are formulated to predict the average bit error probability in the limit of infinite block length.
Abstract: We analyze the performance of quantized min-sum decoding of low-density parity-check codes under unreliable message storage. To this end, we introduce a simple bit-level error model and show that decoder symmetry is preserved under this model. Subsequently, we formulate the corresponding density evolution equations to predict the average bit error probability in the limit of infinite blocklength. We present numerical threshold results and we show that using more quantization bits is not always beneficial in the context of faulty decoders.

Proceedings ArticleDOI
31 May 2014
TL;DR: It is shown that any q-ary code with sufficiently good distance can be randomly punctured to obtain, with high probability, a code that is list decodable up to radius 1 − 1/q − ε with near-optimal rate and list sizes.
Abstract: We show that any q-ary code with sufficiently good distance can be randomly punctured to obtain, with high probability, a code that is list decodable up to radius 1 − 1/q − ε with near-optimal rate and list sizes. Our results imply that "most" Reed-Solomon codes are list decodable beyond the Johnson bound, settling the longstanding open question of whether any Reed-Solomon codes meet this criterion. More precisely, we show that a Reed-Solomon code with random evaluation points is, with high probability, list decodable up to radius 1 − ε with list sizes O(1/ε) and rate Ω(ε). As a second corollary of our argument, we obtain improved bounds on the list decodability of random linear codes over large fields. Our approach exploits techniques from high-dimensional probability. Previous work used similar tools to obtain bounds on the list decodability of random linear codes, but the bounds did not scale with the size of the alphabet. In this paper, we use a chaining argument to deal with large alphabet sizes.


Journal Article
TL;DR: This paper precisely characterizes the region of all pairs (α, β) for which one-sided unique decoding is possible in such a way that Alice will output the correct answer, and suggests that in some error regimes, list decoding is necessary for optimal unique decoding.
Abstract: In this paper, we extend the notion of list decoding to the setting of interactive communication and study its limits. In particular, we show that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to $\frac{1}{2}-\varepsilon$, and that this is tight. Using our list-decodable construction, we study a more nuanced model of noise where the adversary can corrupt up to a fraction $\alpha$ of Alice's communication and up to a fraction $\beta$ of Bob's communication. We use list decoding to characterize fully the region $\mathcal{R}_U$ of pairs $(\alpha,\beta)$ for which unique decoding with a constant rate is possible. The region $\mathcal{R}_U$ turns out to be quite unusual in its shape. In particular, it is bounded by a piecewise-differentiable curve with infinitely many pieces. We show that outside this region the rate must be exponential. This suggests that in some error regimes, list decoding is necessary for optimal unique decoding. ...

Proceedings ArticleDOI
11 Aug 2014
TL;DR: A new family of protograph-based codes with no punctured variable nodes is presented, constructed by using differential evolution, partial brute-force search, and the lengthening method introduced by Nguyen et al.
Abstract: A new family of protograph-based codes with no punctured variable nodes is presented. The codes are constructed by using differential evolution, partial brute-force search, and the lengthening method introduced by Nguyen et al. The protograph ensembles satisfy the linear minimum distance growth property and have the lowest iterative decoding thresholds yet reported in the literature among protograph codes without punctured variable nodes. Simulation results show that the new codes perform better than state-of-the-art protograph codes when the number of decoding iterations is small.

Journal ArticleDOI
TL;DR: A new probabilistic decoding algorithm for low-rate interleaved Reed–Solomon (IRS) codes is presented, which increases the error correcting capability of IRS codes compared to other known approaches with high probability.
Abstract: A new probabilistic decoding algorithm for low-rate interleaved Reed–Solomon (IRS) codes is presented. This approach increases the error-correcting capability of IRS codes compared to other known approaches (e.g., joint decoding) with high probability. It is a generalization of well-known decoding approaches, and its complexity is quadratic in the length of the code. Asymptotic parameters of the new approach are calculated, and simulation results are shown to illustrate its performance. Moreover, an upper bound on the failure probability is derived.

Proceedings ArticleDOI
01 Nov 2014
TL;DR: In this article, a log-likelihood-ratio (LLR)-based SCL decoding algorithm is proposed, which only needs half the computation and storage complexity than the conventional one.
Abstract: The successive cancellation list (SCL) decoding algorithm is a powerful method that can help polar codes achieve excellent error-correcting performance. However, current SCL algorithms and decoders are based on likelihood or log-likelihood forms, which lead to high hardware complexity. In this paper, we propose a log-likelihood-ratio (LLR)-based SCL (LLR-SCL) decoding algorithm, which needs only half the computation and storage complexity of the conventional one. Then, based on the proposed algorithm, we develop low-complexity VLSI architectures for LLR-SCL decoders. Analysis results show that the proposed LLR-SCL decoder achieves a 50% reduction in hardware and a 98% improvement in hardware efficiency.
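
The heart of LLR-based SCL is the path-metric update. Below is a sketch of the exact LLR-domain rule and its common hardware approximation, which penalizes a path only when its bit decision contradicts the sign of the LLR; the notation is generic, not taken from the paper.

```python
from math import exp, log

def pm_update_exact(pm, llr, u):
    """Exact LLR-domain path-metric update for decision bit u in {0, 1}."""
    return pm + log(1.0 + exp(-(1 - 2 * u) * llr))

def pm_update_approx(pm, llr, u):
    """Hardware-friendly approximation: add |llr| only on a sign mismatch."""
    return pm + (abs(llr) if (llr >= 0) != (u == 0) else 0.0)

# u = 1 against a positive LLR (which favors u = 0) incurs a penalty.
print(pm_update_exact(0.0, 2.3, 1), pm_update_approx(0.0, 2.3, 1))
```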

Proceedings ArticleDOI
01 Dec 2014
TL;DR: In this paper, the effect of controlling the decoding delay to reduce the completion time below its currently best known solution is studied, and a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays is proposed.
Abstract: For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and the overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and a lower mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.