
Showing papers on "Sequential decoding" published in 2014


Journal Article
TL;DR: This work increases the throughput of polar decoding hardware by an order of magnitude relative to successive-cancellation decoders; the resulting decoder is more than 8 times faster than the current fastest polar decoder.
Abstract: Polar codes provably achieve the symmetric capacity of a memoryless channel while having an explicit construction. The adoption of polar codes, however, has been hampered by the low throughput of their decoding algorithm. This work aims to increase the throughput of polar decoding hardware by an order of magnitude relative to successive-cancellation decoders; the result is more than 8 times faster than the current fastest polar decoder. We present an algorithm, architecture, and FPGA implementation of a flexible, gigabit-per-second polar decoder.
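
To make concrete what a successive-cancellation decoder computes, here is a minimal recursive sketch in Python. It is a textbook formulation using min-sum approximations of the f and g updates, assuming the encoder convention G = F^(kron n) with F = [[1,0],[1,1]]; the function names are ours, and it reflects nothing of this paper's hardware architecture.

```python
import numpy as np

def sc_decode(llr, frozen):
    """Textbook successive-cancellation decoding of a polar code.
    llr: channel LLRs (log P(y|0)/P(y|1)) for a length-2^n codeword;
    frozen: boolean mask, True where u_i is frozen to zero.
    Returns (decided u bits, re-encoded partial sums x)."""
    if len(llr) == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return [u], [u]
    half = len(llr) // 2
    a, b = llr[:half], llr[half:]
    # f-update: min-sum approximation of the box-plus of the two halves
    f = np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
    u_left, x_left = sc_decode(f, frozen[:half])
    # g-update: fold the re-encoded left-half decisions into the right half
    g = b + (1 - 2 * np.array(x_left)) * a
    u_right, x_right = sc_decode(g, frozen[half:])
    x = [xl ^ xr for xl, xr in zip(x_left, x_right)] + x_right
    return u_left + u_right, x

# N = 4 toy run with bits 0 and 1 frozen
u_hat, x_hat = sc_decode(np.array([2.1, -0.9, 1.3, -3.2]),
                         [True, True, False, False])
print(u_hat, x_hat)
```

The strictly serial, bit-by-bit recursion visible here is exactly the throughput bottleneck that fast decoder architectures such as this one attack.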

391 citations


Journal Article
TL;DR: In this paper, a random access scheme is introduced which relies on the combination of packet erasure-correcting codes and successive interference cancellation (SIC); a bipartite graph representation of the SIC process, resembling iterative decoding of generalized low-density parity-check codes over the erasure channel, is exploited to optimize the selection probabilities of the component erasure-correcting codes via density evolution analysis.
Abstract: In this paper, a random access scheme is introduced which relies on the combination of packet erasure-correcting codes and successive interference cancellation (SIC). The scheme is named coded slotted ALOHA. A bipartite graph representation of the SIC process, resembling iterative decoding of generalized low-density parity-check codes over the erasure channel, is exploited to optimize the selection probabilities of the component erasure-correcting codes via density evolution analysis. The capacity (in packets per slot) of the scheme is then analyzed in the context of the collision channel without feedback. Moreover, a capacity bound is developed, and component code distributions tightly approaching the bound are derived.
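
The SIC process on the bipartite graph is, in effect, iterative peeling: any slot containing a single packet is decoded, and that packet's replicas are cancelled elsewhere. A toy simulation of the mechanism (with a fixed repetition degree per user; the paper instead optimizes the degree distribution):

```python
import random

def sic_peeling(n_users, n_slots, degree=2, seed=1):
    """Toy coded slotted ALOHA: each user repeats its packet in `degree`
    random slots; the receiver iteratively resolves singleton slots and
    cancels the resolved user's replicas (SIC peeling)."""
    rng = random.Random(seed)
    slots = [set() for _ in range(n_slots)]
    for u in range(n_users):
        for s in rng.sample(range(n_slots), degree):
            slots[s].add(u)
    resolved = set()
    progress = True
    while progress:
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:           # singleton slot: decode this user
                u = slots[s].pop()
                resolved.add(u)
                for t in range(n_slots):     # cancel its other replicas
                    slots[t].discard(u)
                progress = True
    return len(resolved) / n_users

print(sic_peeling(n_users=60, n_slots=100))  # fraction of users resolved
```

A fixed degree-2 scheme like this one caps well below what the optimized irregular distributions analyzed in the paper achieve.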

241 citations


Journal Article
TL;DR: The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the successive cancellation (SC) algorithm for efficient decoding; state-of-the-art decoding algorithms, such as BP and generalized SC decoding, are explained in a broad framework.
Abstract: Polar codes represent an emerging class of error-correcting codes with the power to approach the capacity of a discrete memoryless channel. This overview article aims to illustrate their principles, generation, and decoding techniques. Unlike the traditional capacity-approaching coding strategy, which tries to make codes as random as possible, polar codes follow a different philosophy, also originated by Shannon, by creating a jointly typical set. Channel polarization, a concept central to polar codes, is intuitively elaborated through a Matthew effect in the digital world, followed by a detailed overview of construction methods for polar encoding. The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the SC algorithm for efficient decoding. The SC decoding technique is investigated from conceptual and practical viewpoints. State-of-the-art decoding algorithms, such as BP and some generalized SC decoding, are also explained in a broad framework. Simulation results show that the performance of polar codes concatenated with CRC codes can outperform that of turbo or LDPC codes. Some promising research directions for practical scenarios are discussed at the end.
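
The "butterfly structure" has a very small computational core: the n-fold Kronecker power of the 2x2 kernel, computable in place with n * 2^(n-1) XORs. A sketch of ours, purely for illustration:

```python
import numpy as np

def polar_encode(u):
    """Butterfly (Kronecker-power) polar transform x = u * F^(kron n),
    F = [[1, 0], [1, 1]], computed in place."""
    x = np.array(u, dtype=np.uint8)
    n = len(x)
    step = n // 2
    while step >= 1:
        for i in range(0, n, 2 * step):
            # each butterfly XORs the lower wing into the upper wing
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step //= 2
    return x

# example: N = 8, indices 0, 1, 2, 4 frozen to zero
u = np.zeros(8, dtype=np.uint8)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]      # information bits
print(polar_encode(u))
```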

207 citations


Journal Article
TL;DR: A low-complexity alternative for soft-output decoding of polar codes that offers better performance but with significantly reduced processing and storage requirements is proposed.
Abstract: The state-of-the-art soft-output decoder for polar codes is a message-passing algorithm based on belief propagation, which performs well at the cost of high processing and storage requirements. In this paper, we propose a low-complexity alternative for soft-output decoding of polar codes that offers better performance but with significantly reduced processing and storage requirements. In particular, we show that the complexity of the proposed decoder is only 4% of the total complexity of the belief propagation decoder for a rate one-half polar code of dimension 4096 on the dicode channel, while achieving comparable error-rate performance. Furthermore, we show that the proposed decoder requires about 39% of the memory required by the belief propagation decoder for a block length of 32768.

136 citations


Journal Article
TL;DR: This brief presents a hardware architecture and algorithmic improvements for list successive cancellation (SC) decoding of polar codes and shows how to completely avoid copying of the likelihoods, which is algorithmically the most cumbersome part of list SC decoding.
Abstract: This brief presents a hardware architecture and algorithmic improvements for list successive cancellation (SC) decoding of polar codes. More specifically, we show how to completely avoid copying of the likelihoods, which is algorithmically the most cumbersome part of list SC decoding. The hardware architecture was synthesized for a blocklength of N = 1024 bits and list sizes L = 2, 4 using a UMC 90 nm VLSI technology. The resulting decoder can achieve a coded throughput of 181 Mb/s at a frequency of 459 MHz.

135 citations


Journal Article
01 Aug 2014
TL;DR: Empirically, the performance of polar codes at finite block lengths is boosted by moving along the family C_inter, even under low-complexity decoding schemes such as belief propagation or successive cancellation list decoding.
Abstract: We explore the relationship between polar and RM codes, and we describe a coding scheme which improves upon the performance of the standard polar code at practical block lengths. Our starting point is the experimental observation that RM codes have a smaller error probability than polar codes under MAP decoding. This motivates us to introduce a family of codes that "interpolates" between RM and polar codes; call this family C_inter = {C_α : α ∈ [0,1]}, where C_α with α = 1 is the original polar code and C_α with α = 0 is an RM code. Based on numerical observations, we remark that the error probability under MAP decoding is an increasing function of α. MAP decoding has in general exponential complexity, but empirically the performance of polar codes at finite block lengths is boosted by moving along the family C_inter even under low-complexity decoding schemes such as, for instance, belief propagation or successive cancellation list decoding. We demonstrate the performance gain via numerical simulations for transmission over the erasure channel as well as the Gaussian channel.
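
One way to picture an RM-polar interpolation, purely as a hypothetical illustration (the paper's family C_inter is defined differently): polar codes keep the rows of F^(kron n) indexing the most reliable bit-channels, RM codes keep the rows of largest Hamming weight, and a mixed ranking slides between the two. A sketch for the BEC, with all names and the mixing rule our own:

```python
import numpy as np

def bec_bhattacharyya(n, eps):
    """Bhattacharyya parameters of the 2^n polarized bit-channels of a BEC(eps)."""
    z = np.array([eps])
    for _ in range(n):
        z = np.concatenate([2 * z - z**2, z**2])   # 'minus' and 'plus' channels
    return z

def info_set(n, k, eps, alpha):
    """Pick k rows of F^(kron n) by a convex mix of channel reliability
    (polar criterion, alpha = 1) and row Hamming weight (RM criterion,
    alpha = 0). Illustrative only; not the paper's construction."""
    N = 2 ** n
    z = bec_bhattacharyya(n, eps)                         # small z = reliable
    wt = np.array([bin(i).count("1") for i in range(N)])  # row weight is 2^wt
    score = alpha * (z / z.max()) + (1 - alpha) * (1 - wt / n)
    return np.sort(np.argsort(score)[:k])                 # k smallest scores win

print(info_set(n=4, k=8, eps=0.5, alpha=1.0))   # polar-like selection
print(info_set(n=4, k=8, eps=0.5, alpha=0.0))   # RM-like (highest-weight rows)
```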

133 citations


Journal Article
TL;DR: A simple and very flexible method for constructing quasi-cyclic (QC) low-density parity-check (LDPC) codes based on finite fields is presented, along with a reduced-complexity iterative decoding scheme based on the section-wise cyclic structure of their parity-check matrices.
Abstract: This paper presents a simple and very flexible method for constructing quasi-cyclic (QC) low-density parity-check (LDPC) codes based on finite fields. The code construction is based on two arbitrary subsets of elements from a given field. Some well-known constructions of QC-LDPC codes based on finite fields and combinatorial designs are special cases of the proposed construction. The proposed construction, in conjunction with a technique known as masking, results in codes whose Tanner graphs have girth 8 or larger. Experimental results show that codes constructed using the proposed construction perform well and have low error floors. Also presented in the paper is a reduced-complexity iterative decoding scheme for QC-LDPC codes based on the section-wise cyclic structure of their parity-check matrices. The proposed decoding scheme is an improvement of an earlier proposed reduced-complexity iterative decoding scheme.
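
For readers unfamiliar with the QC structure: the parity-check matrix is an array of L x L circulant permutation matrices (CPMs), and masking replaces selected blocks with zero matrices. The expansion step is mechanical, as in this sketch (the paper's actual contribution, choosing the shift exponents from two subsets of a finite field, is not reproduced here):

```python
import numpy as np

def cpm(shift, L):
    """L x L circulant permutation matrix: identity cyclically shifted."""
    return np.roll(np.eye(L, dtype=np.uint8), shift, axis=1)

def qc_ldpc_H(exponents, L):
    """Expand an exponent (base) matrix into a QC-LDPC parity-check matrix.
    exponents[i][j] = -1 means an all-zero block (masking), else a shift."""
    rows = []
    for row in exponents:
        blocks = [np.zeros((L, L), np.uint8) if e < 0 else cpm(e, L)
                  for e in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# toy example: 2 x 3 base matrix, circulant size 7, one masked block
H = qc_ldpc_H([[0, 1, 3], [2, -1, 5]], L=7)
print(H.shape)  # (14, 21)
```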

122 citations


Journal Article
TL;DR: This tutorial elaborates on the concept of EXIT charts using three iteratively decoded prototype systems as design examples, and illustrates further applications of EXIT charts, including near-capacity designs, the idea of irregular codes, and the design of modulation schemes.
Abstract: Near-capacity performance may be achieved with the aid of iterative decoding, where extrinsic soft information is exchanged between the constituent decoders in order to improve the attainable system performance. Extrinsic Information Transfer (EXIT) charts constitute a powerful semi-analytical tool used for analysing and designing iteratively decoded systems. In this tutorial, we commence by providing a rudimentary overview of the iterative decoding principle and the concept of soft information exchange. We then elaborate on the concept of EXIT charts using three iteratively decoded prototype systems as design examples. We conclude by illustrating further applications of EXIT charts, including near-capacity designs, the concept of irregular codes and the design of modulation schemes.
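
The workhorse of EXIT analysis is the mutual information of the exchanged LLRs under a consistent-Gaussian assumption, the so-called J function. Below is a Monte Carlo sketch of it, together with the transfer curve of a degree-d_v variable node on a binary-input AWGN channel; this is a standard textbook example, not one of the paper's three prototype systems:

```python
import numpy as np

def J(sigma, n=100_000, seed=0):
    """Mutual information between a BPSK bit and a consistent Gaussian
    LLR, L ~ N(sigma^2/2, sigma^2): the J function of EXIT analysis,
    estimated here by Monte Carlo."""
    L = np.random.default_rng(seed).normal(sigma**2 / 2, sigma, n)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

def vnd_exit_curve(d_v, sigma_ch):
    """EXIT transfer of a degree-d_v variable node on a BIAWGN channel:
    the extrinsic LLR is the sum of d_v - 1 a-priori LLRs plus the
    channel LLR, so its Gaussian parameter is
    sqrt((d_v - 1) * s_a^2 + s_ch^2)."""
    for s_a in np.linspace(0.1, 5.0, 8):
        I_A = J(s_a)
        I_E = J(np.sqrt((d_v - 1) * s_a**2 + sigma_ch**2))
        print(f"I_A = {I_A:.3f} -> I_E = {I_E:.3f}")

vnd_exit_curve(d_v=3, sigma_ch=1.5)
```

Plotting such a curve against the mirrored check-node curve gives the familiar EXIT tunnel whose opening predicts decoding convergence.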

117 citations


Proceedings Article
01 Jan 2014
TL;DR: This work uses an outer low-density parity-check code for intermediate channels of finite-length polar codes to show how the performance of belief propagation decoding of the overall concatenated polar code can be improved.
Abstract: The bit-channels of finite-length polar codes are not fully polarized, and a proportion of such bit-channels are neither completely 'noiseless' nor completely 'noisy'. By using an outer low-density parity-check code for these intermediate channels, we show how the performance of belief propagation (BP) decoding of the overall concatenated polar code can be improved. A simple example reports an improvement in Eb/N0 of 0.3 dB with respect to the conventional BP decoder.

117 citations


Journal Article
TL;DR: Here, a fast decoding algorithm, called the adaptive successive decoder, is developed, and for any rate R less than the capacity C, communication is shown to be reliable with nearly exponentially small error probability.
Abstract: For the additive white Gaussian noise channel with an average codeword power constraint, sparse superposition codes are developed. These codes are based on the statistical high-dimensional regression framework. In a previous paper, we investigated decoding using the optimal maximum-likelihood decoding scheme. Here, a fast decoding algorithm, called the adaptive successive decoder, is developed. For any rate R less than the capacity C, communication is shown to be reliable with nearly exponentially small error probability. Specifically, for blocklength n, it is shown that the error probability is exponentially small in n/log n.

100 citations


Journal Article
TL;DR: A comparison of how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks when receiving input from various sorting schemes indicates that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here.
Abstract: Objective. Brain–computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert sorting, discarding the noise; expert sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.

Journal Article
TL;DR: This work proposes a new strategy to decode color codes, which is based on the projection of the error onto three surface codes, and establishes a general lower bound on the error threshold of a family of color codes depending on the threshold of the three corresponding surface codes.
Abstract: We propose a general strategy to decode color codes, which is based on the projection of the error onto three surface codes. This provides a method to transform every decoding algorithm of surface codes into a decoding algorithm of color codes. Applying this idea to a family of hexagonal color codes, with the perfect matching decoding algorithm for the three corresponding surface codes, we find a phase error threshold of approximately 8.7%. Finally, our approach enables us to establish a general lower bound on the error threshold of a family of color codes depending on the threshold of the three corresponding surface codes. These results are based on a chain complex interpretation of surface codes and color codes.

Journal Article
TL;DR: A low-complexity sequential soft-decision decoding algorithm is proposed; it is based on the successive cancellation approach and employs most-likely-codeword probability estimates to select which path within the code tree to extend.
Abstract: The problem of efficient decoding of polar codes is considered, and a low-complexity sequential soft-decision decoding algorithm is proposed. It is based on the successive cancellation approach and employs most-likely-codeword probability estimates to select the path within the code tree to be extended.
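
The control structure of such a sequential decoder is a best-first (stack) search over the binary code tree: repeatedly pop the most promising partial path and push its two one-bit extensions. A generic sketch, with the paper's probability-based path metric replaced by a toy metric of our own:

```python
import heapq

def stack_decode(path_metric, n_bits, max_pops=10_000):
    """Generic stack (best-first) sequential decoder sketch: pop the best
    partial path, extend it by one bit, push both children. path_metric
    scores a partial path (higher is better). This is only the control
    structure; the paper's contribution is the metric itself."""
    heap = [(-path_metric(()), ())]           # max-heap via negation
    for _ in range(max_pops):
        neg, path = heapq.heappop(heap)
        if len(path) == n_bits:
            return path                        # first full path popped wins
        for bit in (0, 1):
            child = path + (bit,)
            heapq.heappush(heap, (-path_metric(child), child))
    return None

# toy metric: prefer paths that agree with a target sequence
target = (1, 0, 1, 1, 0, 1)
metric = lambda p: -sum(a != b for a, b in zip(p, target))
print(stack_decode(metric, len(target)))       # recovers the target
```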

Journal Article
TL;DR: The authors report the surprising discovery that, for a broad range of gate failure probabilities, decoders actually benefit from faults in logic gates, which serve as an inherent source of randomness and help the decoding algorithm escape from local minima associated with trapping sets.
Abstract: We propose a gradient descent type bit-flipping algorithm for decoding low-density parity-check codes on the binary symmetric channel. Randomness introduced in the bit-flipping rule makes this class of decoders not only superior to other decoding algorithms of this type, but also robust to logic-gate failures. We report the surprising discovery that, for a broad range of gate failure probabilities, our decoders actually benefit from faults in logic gates, which serve as an inherent source of randomness and help the decoding algorithm escape from local minima associated with trapping sets.
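
The gradient-descent bit-flipping rule can be written as a per-bit inversion function, the channel agreement term plus the sum of adjacent check values, with the least-reliable bit flipped each iteration. Below is a toy bipolar-domain sketch in which optional Gaussian noise in the metric stands in for the gate faults; here the randomness is injected deliberately, whereas in the paper it comes from the faulty hardware itself:

```python
import numpy as np

def gdbf_decode(H, y, max_iters=100, noise_scale=0.0, rng=None):
    """Toy gradient-descent bit-flipping decoder on the BSC, bipolar form.
    H: (m, n) parity-check matrix over {0,1}; y: received vector in {+1,-1}.
    Textbook GDBF inversion function, not the paper's exact variant."""
    rng = rng or np.random.default_rng(0)
    x = y.astype(float).copy()
    for _ in range(max_iters):
        # bipolar syndromes: +1 iff the check is satisfied
        s = np.array([np.prod(x[np.nonzero(row)[0]]) for row in H])
        if np.all(s > 0):
            break
        # inversion function: channel agreement + sum of adjacent checks
        delta = x * y + H.T @ s
        if noise_scale:
            delta = delta + rng.normal(0, noise_scale, delta.shape)
        x[np.argmin(delta)] *= -1              # flip the least reliable bit
    return (x < 0).astype(np.uint8)            # back to {0,1}

# toy length-6 code, one channel error on bit 0
H = np.array([[1,1,0,1,0,0],
              [0,1,1,0,1,0],
              [1,0,1,0,0,1],
              [0,0,0,1,1,1]], dtype=np.uint8)
y = np.ones(6); y[0] = -1
print(gdbf_decode(H, y))                       # all-zero codeword recovered
```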

Journal Article
TL;DR: A new partial-sum updating algorithm and the corresponding PSN architecture are introduced which achieve a delay independent of the code length while also reducing area complexity, enabling a high-performance and area-efficient semi-parallel SCD implementation.
Abstract: Polar codes have recently received a lot of attention because of their capacity-achieving performance and low encoding and decoding complexity. The performance of the successive cancellation decoder (SCD) of polar codes depends highly on that of the partial-sum network (PSN) implementation. Hence, in this work, an efficient PSN architecture is proposed, based on the properties of polar codes. First, a new partial-sum updating algorithm and the corresponding PSN architecture are introduced which achieve a delay performance independent of the code length. Moreover, the area complexity is also reduced. Second, for a high-performance and area-efficient semi-parallel SCD implementation, a folded PSN architecture is presented to integrate seamlessly with the folded processing element architecture. This is achieved by using a novel folded decoding schedule. As a result, both the critical path delay and the area (excluding the memory for folding) of the semi-parallel SCD are approximately constant for a large range of code lengths. The proposed designs are implemented in both FPGA and ASIC and compared with the existing designs. Experimental results show that for polar codes with large code length, the decoding throughput is improved by more than 1.05 times and the area is reduced by as much as 50.4%, compared with the state-of-the-art designs.

Journal Article
TL;DR: This paper proposes a low-complexity min-sum algorithm for decoding low-density parity-check codes in which the two-minimum calculation is replaced by one minimum calculation and a second-minimum emulation, thereby reducing decoder complexity.
Abstract: This paper proposes a low-complexity min-sum algorithm for decoding low-density parity-check codes. It is an improved version of the single-minimum algorithm, in which the two-minimum calculation is replaced by one minimum calculation and a second-minimum emulation. In the proposed algorithm, variable correction factors that depend on the iteration number are introduced, and the second-minimum emulation is simplified, thereby reducing decoder complexity. This proposal improves on the performance of the single-minimum algorithm, approaching the normalized min-sum performance in the waterfall region. The error-floor region is also analyzed for the code of the IEEE 802.3an standard, showing that the trapping sets are decoded thanks to a slowdown of the convergence of the algorithm. Error-floor-free operation below BER = 10^-15 is shown for this code by means of a field-programmable gate array (FPGA)-based hardware emulator. A layered decoder implemented in a 90-nm CMOS technology achieves 12.8 Gbps with an area of 3.84 mm².
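
The single-minimum idea is easiest to see in the check-node update: find only the true first minimum and emulate the second one as min1 plus an iteration-dependent correction. A sketch with illustrative correction values (the paper derives optimized, iteration-dependent factors):

```python
import numpy as np

def check_node_single_min(llrs, corrections, iteration):
    """Check-node update for a single-minimum min-sum variant: only the
    first minimum is computed exactly; the second minimum is emulated as
    min1 plus a correction factor indexed by the iteration number
    (correction values here are illustrative, not the paper's)."""
    mags = np.abs(llrs)
    k = int(np.argmin(mags))
    min1 = mags[k]
    min2_emulated = min1 + corrections[min(iteration, len(corrections) - 1)]
    out = np.full(len(llrs), min1)
    out[k] = min2_emulated        # the edge holding min1 receives the 2nd min
    # extrinsic sign on each edge = product of the signs of the other edges
    sign_prod = np.prod(np.sign(llrs))
    return sign_prod * np.sign(llrs) * out

print(check_node_single_min(np.array([-1.2, 0.4, 2.5, -3.0]),
                            corrections=[0.6, 0.4, 0.2], iteration=0))
```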

Journal Article
TL;DR: In this article, the authors give an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, with polynomial-time encoding and decoding algorithms.
Abstract: In this paper, we give an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, which have polynomial-time encoding and decoding algorithms. The block length of our construction is N = (t/ε)^O(t/(δε)), where ε is the gap to capacity, and encoding and decoding run in time N^(1+δ). This is the first deterministic construction achieving these parameters. Our techniques also apply to larger alphabets.

Journal Article
TL;DR: This paper presents an improved architecture for successive-cancellation decoding of polar codes, making use of a novel semi-parallel, encoder-based partial-sum computation module, and explores various optimization techniques such as a chained processing element and a variable quantization scheme.
Abstract: Polar codes are the first error-correcting codes to provably achieve channel capacity, asymptotically in code length, with an explicit construction. However, under successive-cancellation decoding, polar codes require very long code lengths to compete with existing modern codes. Nonetheless, the successive cancellation algorithm enables very-low-complexity implementations in hardware, due to the regular structure exhibited by polar codes. In this paper, we present an improved architecture for successive-cancellation decoding of polar codes, making use of a novel semi-parallel, encoder-based partial-sum computation module. We also provide quantization results for the realistic code length N = 2^15, and explore various optimization techniques such as a chained processing element and a variable quantization scheme. This design is shown to scale to code lengths of up to N = 2^21, enabled by its low logic use, low register use, and simple datapaths, limited almost exclusively by the amount of available SRAM. It also supports an overlapped loading of frames, allowing full-throughput decoding with a single set of input buffers.

Journal Article
TL;DR: In this article, a stack SD (SSD) algorithm with an efficient enumeration is proposed, based on a novel path metric, which can effectively narrow the search range when enumerating the candidates within a sphere.
Abstract: Sphere decoding (SD) of polar codes is an efficient method to achieve the error performance of maximum likelihood (ML) decoding. But the complexity of the conventional sphere decoder is still high: the candidates in a target sphere are enumerated and the radius is decreased gradually until no available candidate remains in the sphere. In order to reduce the complexity of SD, a stack SD (SSD) algorithm with an efficient enumeration is proposed in this paper. Based on a novel path metric, SSD can effectively narrow the search range when enumerating the candidates within a sphere. The proposed metric follows an exact ML rule and makes full use of the whole received sequence. Furthermore, another very simple metric is provided as an approximation of the ML metric in the high signal-to-noise ratio regime. For short polar codes, simulation results over additive white Gaussian noise channels show that the complexity of SSD based on the proposed metrics is up to 100 times lower than that of the conventional SD.

Journal Article
TL;DR: The proposed error-control systems achieve good tradeoffs between error performance and complexity compared with traditional schemes and are also very favorable for implementation.
Abstract: In this work, we consider high-rate error-control systems for storage devices using multi-level per cell (MLC) NAND flash memories. Aiming at achieving a strong error-correcting capability, we propose error-control systems using block-wise parallel/serial concatenations of short Bose-Chaudhuri-Hocquenghem (BCH) codes with two iterative decoding strategies, namely, iterative hard-decision decoding (IHDD) and iterative reliability-based decoding (IRBD). It will be shown that a simple but very efficient IRBD is possible by taking advantage of a unique feature of the block-wise concatenation. For tractable performance analysis and design of IHDD and IRBD at very low error rates, we derive semi-analytic approaches. The proposed error-control systems are compared with various error-control systems with well-known coding schemes, such as a product code, multiple BCH codes, a single long BCH code, and low-density parity-check codes, in terms of page error rates, which confirms our claim: the proposed error-control systems achieve good tradeoffs between error performance and complexity compared with the traditional schemes and are also very favorable for implementation.

Proceedings Article
18 Oct 2014
TL;DR: In this paper, the authors extend the notion of list-decoding to the setting of interactive communication and study its limits, showing that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to 1/2 − ε, and that this is tight.
Abstract: In this paper we extend the notion of list-decoding to the setting of interactive communication and study its limits. In particular, we show that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to 1/2 − ε, and that this is tight. Using our list-decodable construction, we study a more nuanced model of noise where the adversary can corrupt up to a fraction α of Alice's communication and up to a fraction β of Bob's communication. We use list-decoding in order to fully characterize the region R_U of pairs (α, β) for which unique decoding with a constant rate is possible. The region R_U turns out to be quite unusual in its shape. In particular, it is bounded by a piecewise-differentiable curve with infinitely many pieces. We show that outside this region, the rate must be exponential. This suggests that in some error regimes, list-decoding is necessary for optimal unique decoding. We also consider the setting where only one party of the communication must output the correct answer. We precisely characterize the region of all pairs (α, β) for which one-sided unique decoding is possible in a way that Alice will output the correct answer.

Journal Article
TL;DR: This brief presents the first systematic approach to formally derive the SSC decoding latency for any given polar code, together with the architecture of a precomputation SSC polar decoder that can further reduce the decoding latency.
Abstract: Recently, a low-latency decoding scheme called the simplified successive cancellation (SSC) algorithm has been proposed for polar codes. In this brief, we present the first systematic approach to formally derive the SSC decoding latency for any given polar code. The method to derive the SSC polar decoder architecture for any specific code is also presented. Moreover, the architecture of the precomputation SSC polar decoder is also proposed, which can further reduce the decoding latency. Compared with their SC decoder counterparts, the proposed SSC and precomputation SSC polar decoders can save up to 39.6% decoding latency with the same hardware cost.
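
The flavor of such a latency derivation can be captured by a recursion over the decoding tree: rate-0 (all-frozen) and rate-1 (all-information) subtrees collapse to single operations, while every other node recurses into both halves. This is a toy cycle-count model under our own unit-cost assumptions, not the brief's exact formulas:

```python
def ssc_latency(frozen):
    """Hypothetical latency model for simplified successive-cancellation
    (SSC) decoding: rate-0 and rate-1 subtrees are handled in one step
    instead of being traversed; internal nodes cost one f- and one
    g-computation plus both subtrees. Real cycle counts depend on the
    architecture."""
    if all(frozen):            # rate-0 node: output known, one step
        return 1
    if not any(frozen):        # rate-1 node: hard decision, one step
        return 1
    half = len(frozen) // 2
    return 2 + ssc_latency(frozen[:half]) + ssc_latency(frozen[half:])

# example: N = 8 polar code, True = frozen bit
pattern = [True, True, True, False, True, False, False, False]
print(ssc_latency(pattern))
```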

Proceedings Article
06 Apr 2014
TL;DR: The union bounds as well as the simulation results illustrate that, under ML decoding, SPCs are superior to NSPCs in BER performance while NSPCs and SPCs have the same FER performance.
Abstract: The distance spectrum is used to estimate the maximum likelihood (ML) performance of block codes. A practical method which can run on a memory-constrained computer is proposed to calculate the distance spectrum of polar codes. Utilizing the distance spectrum, the frame error rate (FER) and bit error rate (BER) performances of non-systematic polar codes (NSPCs) and systematic polar codes (SPCs) are analyzed. The union bounds as well as the simulation results illustrate that, under ML decoding, SPCs are superior to NSPCs in BER performance while NSPCs and SPCs have the same FER performance.

Journal Article
TL;DR: A simple bit-level error model is introduced and it is shown that decoder symmetry is preserved under this model and the corresponding density evolution equations are formulated to predict the average bit error probability in the limit of infinite block length.
Abstract: We analyze the performance of quantized min-sum decoding of low-density parity-check codes under unreliable message storage. To this end, we introduce a simple bit-level error model and show that decoder symmetry is preserved under this model. Subsequently, we formulate the corresponding density evolution equations to predict the average bit error probability in the limit of infinite blocklength. We present numerical threshold results and we show that using more quantization bits is not always beneficial in the context of faulty decoders.
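
Density evolution under faulty processing is easiest to demonstrate on a decoder much simpler than quantized min-sum. The sketch below runs the classic Gallager-A recursion for a regular LDPC ensemble on the BSC, flipping every message with probability eps as a crude stand-in for unreliable message storage; this is our simplification, not the paper's model:

```python
def de_gallager_a(p0, dv=3, dc=6, eps=0.0, iters=200):
    """Density evolution for Gallager-A decoding of a regular (dv, dc)
    LDPC ensemble on a BSC(p0), with each message independently flipped
    with probability eps (a toy model of faulty message storage)."""
    p = p0
    for _ in range(iters):
        p = p * (1 - eps) + (1 - p) * eps             # faulty v-to-c messages
        q = (1 - (1 - 2 * p) ** (dc - 1)) / 2         # check-to-variable error
        q = q * (1 - eps) + (1 - q) * eps             # faulty c-to-v messages
        p = p0 * (1 - (1 - q) ** (dv - 1)) + (1 - p0) * q ** (dv - 1)
    return p

# crude threshold scan; the fault-free (3,6) Gallager-A threshold is near 0.04
for p0 in [0.02, 0.03, 0.04, 0.05]:
    print(p0, de_gallager_a(p0) < 1e-6)
print(de_gallager_a(0.02, eps=0.005))   # faults leave a residual error floor
```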

Proceedings Article
11 Aug 2014
TL;DR: A new family of protograph-based codes with no punctured variable nodes is presented, constructed by using differential evolution, partial brute force search, and the lengthening method introduced by Nguyen et al.
Abstract: A new family of protograph-based codes with no punctured variable nodes is presented. The codes are constructed by using differential evolution, partial brute-force search, and the lengthening method introduced by Nguyen et al. The protograph ensembles satisfy the linear minimum distance growth property and have the lowest iterative decoding thresholds yet reported in the literature among protograph codes without punctured variable nodes. Simulation results show that the new codes perform better than state-of-the-art protograph codes when the number of decoding iterations is small.

Journal Article
TL;DR: A new probabilistic decoding algorithm for low-rate interleaved Reed–Solomon (IRS) codes is presented which, with high probability, increases the error-correcting capability of IRS codes compared to other known approaches.
Abstract: A new probabilistic decoding algorithm for low-rate interleaved Reed–Solomon (IRS) codes is presented. With high probability, this approach increases the error-correcting capability of IRS codes compared to other known approaches (e.g., joint decoding). It is a generalization of well-known decoding approaches, and its complexity is quadratic in the length of the code. Asymptotic parameters of the new approach are calculated, and simulation results are shown to illustrate its performance. Moreover, an upper bound on the failure probability is derived.

Journal Article
TL;DR: This letter shows that: 1) a layered schedule can be efficiently implemented on a GPU device; and 2) this approach, implemented on a low-cost GPU device, provides higher throughputs with identical correction performance (BER) compared to previously published results.
Abstract: The low-density parity-check (LDPC) decoding process is known to be compute-intensive. Such digital communication applications have recently been implemented on graphics processing unit (GPU) devices for LDPC code performance estimation and/or real-time measurements. Previous studies of LDPC decoding on GPUs were generally based on implementations of the flooding-based decoding algorithm, which provides massive computation parallelism. More efficient layered schedules have been proposed in the literature, because a decoder iteration can be split into sublayer iterations. These schedules seem to fit badly onto GPU devices due to restricted computation parallelism and complex memory access patterns. However, layered schedules speed up decoding convergence by a factor of two. In this letter, we show that: 1) a layered schedule can be efficiently implemented on a GPU device; and 2) this approach, implemented on a low-cost GPU device, provides higher throughputs with identical correction performance (BER) compared to previously published results.
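
For reference, a layered schedule processes one check row at a time and commits the updated posterior LLRs immediately, which is what halves the iteration count relative to flooding, at the price of the serialized, GPU-unfriendly access pattern discussed above. A scalar, unoptimized NumPy sketch of the schedule itself:

```python
import numpy as np

def layered_min_sum(H, llr_ch, iters=10):
    """Layered (row-by-row) min-sum LDPC decoding sketch: each check row
    is a 'layer', and posterior LLRs are updated immediately after it."""
    m, n = H.shape
    L = llr_ch.astype(float).copy()          # posterior LLRs
    R = np.zeros((m, n))                     # stored check-to-variable msgs
    for _ in range(iters):
        for i in range(m):                   # one layer per check row
            idx = np.nonzero(H[i])[0]
            T = L[idx] - R[i, idx]           # variable-to-check (extrinsic)
            sgn = np.prod(np.sign(T)) * np.sign(T)
            mags = np.abs(T)
            m1_pos = int(np.argmin(mags))
            m1 = mags[m1_pos]
            m2 = np.min(np.delete(mags, m1_pos))
            new = sgn * np.where(np.arange(len(idx)) == m1_pos, m2, m1)
            L[idx] = T + new                 # immediate posterior update
            R[i, idx] = new
    return (L < 0).astype(np.uint8)

H = np.array([[1,1,0,1,0,0],
              [0,1,1,0,1,0],
              [1,0,1,0,0,1]], dtype=np.uint8)
llr = np.array([-0.5, 2.0, 1.5, 1.0, 0.8, 1.2])   # one unreliable bit
print(layered_min_sum(H, llr))                     # decodes to all zeros
```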

Proceedings Article
01 Sep 2014
TL;DR: Numerical results show that both BCH-polar codes and Convolutional-polar codes can outperform stand-alone polar codes for some lengths and choices of decoding algorithms used to decode the outer codes.
Abstract: We analyze concatenation schemes of polar codes with outer binary BCH codes and convolutional codes. We show that both BCH-polar and Convolutional-polar (Conv-polar) codes can have a frame error rate that decays exponentially with the frame length, which is a substantial improvement over stand-alone polar codes. With the increase in the cutoff rate of the channel after polarization, long constraint-length convolutional codes with sequential decoding suffice to achieve a frame error rate that decays exponentially with the frame length, while the average decoding complexity remains low. Numerical results show that both BCH-polar and Conv-polar codes can outperform stand-alone polar codes for some lengths and choices of decoding algorithms used to decode the outer codes. For the additive white Gaussian noise channel, Conv-polar codes substantially outperform concatenated Reed–Solomon-polar codes with a careful choice of the lengths of the inner and outer codes.

Patent
Bin Li, Hui Shen
09 Oct 2014
TL;DR: In this paper, a reliable subset is extracted from the information bit set of the Polar codes, where the reliability of the information bits in the reliable subset is higher than that of the other information bits.
Abstract: Embodiments of the present invention provide a method and a device for decoding Polar codes. A reliable subset is extracted from an information bit set of the Polar codes, where reliability of information bits in the reliable subset is higher than reliability of other information bits. The method includes: obtaining a probability value or an LLR of a current decoding bit of the Polar codes; when the current decoding bit belongs to the reliable subset, performing judgment according to the probability value or the LLR of the current decoding bit to determine a decoding value of the current decoding bit, keeping the number of decoding paths of the Polar codes unchanged, and modifying probability values of all the decoding paths by using the probability value or the LLR of the current decoding bit.

Proceedings Article
01 Sep 2014
TL;DR: In this article, a random linear code construction for the erasure packet channel is introduced and its in-order delivery delay behavior is analyzed; it is shown that for rates below the capacity, the mean in-order delivery delay of the scheme is better than the mean delay of the scheme implementing random linear block coding.
Abstract: We introduce a random linear code construction for the erasure packet channel. We then analyze its in-order delivery delay behavior. We show that for rates below the capacity, the mean in-order delivery delay of our scheme is better than the mean delay introduced by the scheme which implements random linear block coding. We also compute the decoding failure probability and the encoding and decoding complexity of our scheme.
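
The building blocks, random GF(2) coding vectors at the sender and Gaussian elimination at the receiver once enough unerased packets arrive, can be sketched as follows. This is a plain block-coding toy of our own; the paper's construction differs precisely in how it improves in-order delivery delay over such block schemes:

```python
import numpy as np

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gaussian elimination; A is (r, k),
    B is (r, payload). Returns X, or None if A is rank-deficient."""
    A, B = A.copy() % 2, B.copy() % 2
    r, k = A.shape
    row = 0
    for col in range(k):
        piv = next((i for i in range(row, r) if A[i, col]), None)
        if piv is None:
            return None
        A[[row, piv]] = A[[piv, row]]
        B[[row, piv]] = B[[piv, row]]
        for i in range(r):
            if i != row and A[i, col]:
                A[i] ^= A[row]               # eliminate over GF(2)
                B[i] ^= B[row]
        row += 1
    return B[:k]

def rlc_erasure_demo(k=8, n=16, erasure_p=0.3, payload=32, seed=3):
    """Toy random linear code on a packet erasure channel: send n random
    combinations of k source packets; decode from the survivors."""
    rng = np.random.default_rng(seed)
    src = rng.integers(0, 2, (k, payload), dtype=np.uint8)
    A, Y = [], []
    for _ in range(n):
        c = rng.integers(0, 2, k, dtype=np.uint8)   # random coding vector
        if rng.random() > erasure_p:                # packet not erased
            A.append(c)
            Y.append(c @ src % 2)
    decoded = gf2_solve(np.array(A), np.array(Y)) if len(A) >= k else None
    return decoded is not None and np.array_equal(decoded, src)

print(rlc_erasure_demo())   # True once enough independent packets arrive
```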