
Showing papers on "Sequential decoding published in 2005"


Journal ArticleDOI
TL;DR: The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
Abstract: Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.

989 citations
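The two simplified check-node updates described above (min-sum with a normalization term or with an additive offset) can be illustrated with a short Python sketch; the scaling factor and offset used here are arbitrary placeholders rather than the density-evolution-optimized values reported in the paper.

import numpy as np

def check_node_update(llrs_in, mode="normalized", alpha=0.8, beta=0.15):
    """Simplified check-node update for a single check node (min-sum based).

    llrs_in: incoming LLR messages from the neighboring symbol nodes.
    Returns the outgoing LLR message on each edge.
    """
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        others = np.delete(llrs_in, i)        # exclude the target edge
        sign = np.prod(np.sign(others))       # product of incoming signs
        mag = np.min(np.abs(others))          # min-sum magnitude approximation
        if mode == "normalized":
            mag *= alpha                      # normalization (scaling) term
        elif mode == "offset":
            mag = max(mag - beta, 0.0)        # additive offset term
        out[i] = sign * mag
    return out

# Example: four incoming messages at one check node
print(check_node_update([1.2, -0.4, 2.5, -3.1], mode="offset"))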


Journal ArticleDOI
TL;DR: It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.
Abstract: The effects of clipping and quantization on the performance of the min-sum algorithm for the decoding of low-density parity-check (LDPC) codes at short and intermediate block lengths are studied. It is shown that in many cases, only four quantization bits suffice to obtain close to ideal performance over a wide range of signal-to-noise ratios. Moreover, we propose modifications to the min-sum algorithm that improve the performance by a few tenths of a decibel with just a small increase in decoding complexity. A quantized version of these modified algorithms is also studied. It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.

280 citations
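The clipping-and-quantization step studied in this paper can be pictured with a minimal sketch: a uniform quantizer applied to LLR messages, with a hypothetical 4-bit resolution and clipping threshold (the thresholds actually optimized in the paper are not reproduced here).

import numpy as np

def quantize_llr(llr, n_bits=4, clip=8.0):
    """Clip an LLR to [-clip, clip] and round it onto a uniform n_bits grid."""
    levels = 2 ** n_bits - 1              # symmetric grid with an odd number of levels
    step = 2.0 * clip / (levels - 1)      # uniform quantization step
    clipped = np.clip(llr, -clip, clip)
    return np.round(clipped / step) * step

print(quantize_llr(np.array([-12.3, -0.7, 0.2, 5.9])))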


Posted Content
TL;DR: In this article, the authors introduce the concept of graph-cover decoding, which is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding.
Abstract: The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding the so-called fundamental polytope. Therefore, we give some characterizations of this polytope and explain its relation to earlier concepts that were introduced to understand the behavior of message-passing iterative decoding for finite-length codes.

260 citations
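For readers less familiar with the linear-programming side of this equivalence, the generic LP decoding problem over the fundamental polytope can be stated as follows (standard notation, not taken verbatim from the paper): given per-bit log-likelihood ratios \lambda_i and a parity-check matrix H,

\[
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}\,\in\,\mathcal{P}(H)} \; \sum_{i} \lambda_i x_i ,
\qquad
\mathcal{P}(H) \;=\; \bigcap_{j} \operatorname{conv}\bigl(C_j\bigr),
\]

where C_j is the set of binary words satisfying the j-th parity check and \mathcal{P}(H) is the fundamental polytope; its rational points correspond to (scaled) codewords in finite covers of the Tanner graph, which is the link between graph-cover decoding and LP decoding exploited in the paper.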


Journal ArticleDOI
TL;DR: In this article, the effects of imperfect estimation of the channel parameters on error probability when known pilot symbols are transmitted among information data were examined under the assumption of a frequency-flat slow Rayleigh fading channel with multiple transmit and receive antennas.
Abstract: Under the assumption of a frequency-flat slow Rayleigh fading channel with multiple transmit and receive antennas, we examine the effects of imperfect estimation of the channel parameters on error probability when known pilot symbols are transmitted among information data. Three different receivers are considered. The first one derives an estimate of the channel [by using either a maximum-likelihood (ML) or a minimum mean square error (MMSE) criterion], and then uses this estimate in the same metric that would be applied if the channel were perfectly known. The second receiver derives again an estimate of the channel, but uses the ML metric conditioned on the channel estimate. Our last receiver simultaneously processes the pilot and data symbols received. Simulation results are exhibited, showing that only a relatively small percentage of the transmitted frame need be allocated to pilot symbols in order to experience an acceptable degradation of error probability due to imperfect channel knowledge. Algorithms for the recursive calculation of the decision metric of the last two receivers are also developed for application to sequential decoding of trellis space-time codes.

232 citations
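A minimal numpy sketch of the channel-estimation step used by the first receiver, under an assumed setup (a known pilot matrix P, a single receive antenna for brevity, i.i.d. unit-variance channel taps, and noise variance N0); the ML estimate reduces to least squares, and the MMSE estimate adds the usual regularization:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: nt transmit antennas, n_pilots pilot symbols, one receive
# antenna, frequency-flat Rayleigh fading.
nt, n_pilots, N0 = 2, 8, 0.1
P = (rng.standard_normal((n_pilots, nt)) + 1j * rng.standard_normal((n_pilots, nt))) / np.sqrt(2)
h = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
noise = np.sqrt(N0 / 2) * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
y = P @ h + noise                         # received pilot observations

# Maximum-likelihood (least-squares) estimate of the channel vector
h_ml = np.linalg.lstsq(P, y, rcond=None)[0]

# MMSE estimate, assuming i.i.d. unit-variance channel taps
h_mmse = np.linalg.solve(P.conj().T @ P + N0 * np.eye(nt), P.conj().T @ y)

print(h, h_ml, h_mmse, sep="\n")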


Journal ArticleDOI
TL;DR: It is explained how replacing rate-1/2 binary component codes by rate-m/(m+1) binary RSC codes can lead to better global performance, and the encoding scheme can be designed so that decoding can be achieved closer to the theoretical limit.
Abstract: The original turbo codes (TCs), presented in 1993 by Berrou et al., consist of the parallel concatenation of two rate-1/2 binary recursive systematic convolutional (RSC) codes. This paper explains how replacing rate-1/2 binary component codes by rate-m/(m+1) binary RSC codes can lead to better global performance. The encoding scheme can be designed so that decoding can be achieved closer to the theoretical limit, while showing better performance in the region of low error rates. These results are illustrated with some examples based on double-binary (m=2) 8-state and 16-state TCs, easily adaptable to a large range of data block sizes and coding rates. The double-binary 8-state code has already been adopted in several telecommunication standards.

186 citations


Proceedings ArticleDOI
31 Oct 2005
TL;DR: A new construction of rank codes is presented, which defines new codes and includes the known codes, and it is argued that the new codes differ from subcodes of the known rank codes.
Abstract: The only known construction of error-correcting codes in rank metric was proposed in 1985. These were codes with a fast decoding algorithm. We present a new construction of rank codes, which defines new codes and includes the known codes. This is a generalization of the construction of E.M. Gabidulin (1985). Though the new codes seem to be very similar to subcodes of the known rank codes, we argue that these are different codes. A fast decoding algorithm is described.

155 citations


Journal ArticleDOI
TL;DR: In this paper, turbo-based coding schemes for relay systems together with iterative decoding algorithms are designed and it is shown that a remarkable advantage can be achieved over the direct and multihop transmission alternatives.
Abstract: In this paper, we design turbo-based coding schemes for relay systems together with iterative decoding algorithms. In the proposed schemes, the source node sends coded information bits to both the relay and the destination nodes, while the relay simultaneously forwards its estimate for the previous coded block to the destination after decoding and re-encoding. The destination observes a superposition of the codewords and uses an iterative decoding algorithm to estimate the transmitted messages. Different from the block-by-block decoding techniques used in the literature, this decoding scheme operates over all the transmitted blocks jointly. Various encoding and decoding approaches are proposed for both single-input single-output and multi-input multi-output systems over several different channel models. Capacity bounds and information-rate bounds with binary inputs are also provided, and it is shown that the performance of the proposed practical scheme is typically about 1.0-1.5 dB away from the theoretical limits, and a remarkable advantage can be achieved over the direct and multihop transmission alternatives.

146 citations


Patent
28 Apr 2005
TL;DR: In this paper, a method and apparatus for decoding a coded data stream of bits using an inner decoder, deinterleaver and an outer decoder is presented. But the decoding is terminated and a decoded word is outputted if the syndromes of the corrected word of the first decoding are all zeros.
Abstract: A method and apparatus for decoding a coded data stream of bits using an inner decoder, a deinterleaver, and an outer decoder. The outer decoder first decodes by error-correction decoding for r errors per word. The decoding is terminated and a decoded word is output if the syndromes of the corrected word of the first decoding are all zeros. If they are not all zeros, a second decoding is performed by error-and-erasure decoding, with the number of errors reduced by one and the number of erasures increased to two. The decoding is terminated and a decoded word is output if the syndromes of the corrected word of the second decoding are all zeros. If they are not all zeros, the second (error-and-erasure) decoding is repeated, with the number of errors reduced by one and the number of erasures increased by two on each iteration.

141 citations
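The decoding schedule claimed above can be sketched as a loop around a generic error-and-erasure decoder; decode_errors_erasures, syndromes_all_zero, and pick_erasure_positions below are hypothetical hooks standing in for a concrete outer (e.g., Reed-Solomon) decoder, not functions named in the patent.

def outer_decode(word, r, decode_errors_erasures, syndromes_all_zero,
                 pick_erasure_positions):
    """Sketch of the patent's outer-decoder schedule for one word."""
    # First pass: plain error correction for up to r errors, no erasures.
    candidate = decode_errors_erasures(word, r, [])
    if syndromes_all_zero(candidate):
        return candidate

    # Subsequent passes: trade one error for two erasures on each iteration.
    errors, n_erasures = r, 0
    while errors > 0:
        errors -= 1
        n_erasures += 2
        erasures = pick_erasure_positions(word, n_erasures)
        candidate = decode_errors_erasures(word, errors, erasures)
        if syndromes_all_zero(candidate):
            return candidate
    return None   # decoding failure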


Book ChapterDOI
14 Mar 2005
TL;DR: In this paper, the decoding of Gabidulin codes is viewed as an instance of the reconstruction of linearized polynomials, and two efficient decoding algorithms inspired by the Welch-Berlekamp decoding algorithm for Reed-Solomon codes are presented.
Abstract: In this paper, we present a new approach to the decoding of Gabidulin codes. We show that, in the same way as decoding Reed-Solomon codes is an instance of the problem called polynomial reconstruction, the decoding of Gabidulin codes can be seen as an instance of the problem of reconstruction of linearized polynomials. This approach leads to the design of two efficient decoding algorithms inspired by the Welch–Berlekamp decoding algorithm for Reed–Solomon codes. The first algorithm has the same complexity as the existing ones, that is, cubic in the number of errors, whereas the second has quadratic complexity of 2.5n² − 1.5k².

133 citations
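For context, a linearized polynomial over \mathbb{F}_{q^m} has the form

\[
L(x) \;=\; \sum_{i=0}^{t} \ell_i \, x^{q^{i}}, \qquad \ell_i \in \mathbb{F}_{q^m},
\]

which is \mathbb{F}_q-linear in its argument, i.e. L(ax + by) = aL(x) + bL(y) for all a, b \in \mathbb{F}_q. Reconstructing such a polynomial from noisy evaluations plays the role for Gabidulin codes that ordinary polynomial reconstruction plays for Reed-Solomon codes, which is the analogy the two algorithms build on.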


Proceedings ArticleDOI
11 Sep 2005
TL;DR: A unique technique is introduced that improves the performance of BP decoding in the waterfall and error-floor regions by reversing decoder failures; it provides performance improvements for short-length LDPC codes and lowers or avoids the error floors of longer codes.
Abstract: In this work, we introduce a unique technique that improves the performance of BP decoding in the waterfall and error-floor regions by reversing the decoder failures. Based on the short cycles existing in the bipartite graph, an importance-sampling simulation technique is used to identify the bit- and check-node combinations that are the dominant sources of error events, called trapping sets. The identified trapping sets are then used in the decoding process to avoid these known failures and to converge to the transmitted codeword. With minimal additional decoding complexity, the proposed technique is able to provide performance improvements for short-length LDPC codes and to lower or avoid the error floors of longer codes.

101 citations


Journal ArticleDOI
TL;DR: It is proved that there exist codes among Gallager's regular low-density parity-check codes, Tanner's generalized LDPC codes, and the turbo codes due to Berrou et al., together with iterative decoding algorithms, for which not only the bit error probability but also the block error probability goes to zero as the block length N and the number of iterations I go to infinity.
Abstract: Asymptotic iterative decoding performance is analyzed for several classes of iteratively decodable codes when the block length of the codes N and the number of iterations I go to infinity. Three classes of codes are considered. These are Gallager's regular low-density parity-check (LDPC) codes, Tanner's generalized LDPC (GLDPC) codes, and the turbo codes due to Berrou et al. It is proved that there exist codes in these classes and iterative decoding algorithms for these codes for which not only the bit error probability P_b, but also the block (frame) error probability P_B, goes to zero as N and I go to infinity.

Patent
02 Aug 2005
TL;DR: In this paper, the decoder state may be stored at the time of reception of an erroneous packet or at identification of a lost packet, and decoding continued, and after FEC repair, the last known state of decoder is restored after the lost/damaged packet(s) is (re) resurrected through FEC, and accelerated decoding accordingly is used.
Abstract: Accelerated video decoding makes use of FEC-repaired media packets that become available through FEC decoding later than their intended decoding time, so to re-establish the integrity of the prediction chain between predicted pictures. The decoder state may be stored at the time of reception of an erroneous packet or at the time of identification of a lost packet, and decoding continued. After FEC repair, the last known state of the decoder is restored after the lost/damaged packet(s) is (are) resurrected through FEC, and accelerated decoding accordingly is used. Cycles “reserved” for decoding of a sub-sequence may be utilized. By freezing the decoded frame at the begin of a sub-sequence and decoding coded pictures of the main sequence that are part of the previous FEC block the integrity of the main prediction chain may be established again. Alternatively, cycles from enhancement layer decoding may be used.

Journal ArticleDOI
TL;DR: In this article, it was shown that maximum likelihood decoding of Reed-Solomon codes is NP-hard even with unlimited preprocessing, thus strengthening a result of Bruck and Naor.
Abstract: Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: Two improved min-sum algorithms, the normalized and offset min-sum algorithms, are applied to the decoding of irregular LDPC codes, and it is shown that their behavior in decoding irregular LDPC codes differs from that in decoding regular LDPC codes.
Abstract: In this paper, we apply two improved min-sum algorithms, the normalized and offset min-sum algorithms, to the decoding of irregular LDPC codes. We show that the behavior of the two algorithms in decoding irregular LDPC codes is different from that in decoding regular LDPC codes, due to the existence of bit nodes of degree two. We analyze and explain the difference, and propose approaches to improve the performance of the two algorithms.

Journal ArticleDOI
TL;DR: It is concluded that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
Abstract: Density evolution analysis of low-density parity-check (LDPC) codes in memoryless channels is extended to the Gilbert-Elliott (GE) channel, which is a special case of a large class of channels with hidden Markov memory. In a procedure referred to as estimation decoding, the sum-product algorithm (SPA) is used to perform LDPC decoding jointly with channel-state detection. Density evolution results show (and simulation results confirm) that such decoders provide a significantly enlarged region of successful decoding within the GE parameter space, compared with decoders that do not exploit the channel memory. By considering a variety of ways in which a GE channel may be degraded, it is shown how knowledge of the decoding behavior at a single point of the GE parameter space may be extended to a larger region within the space, thereby mitigating the large complexity needed in using density evolution to explore the parameter space point-by-point. Using the GE channel as a straightforward example, we conclude that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
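A minimal simulation of the Gilbert-Elliott channel underlying this analysis (the parameter values here are arbitrary examples): the hidden state follows a two-state Markov chain, and each state acts as a binary symmetric channel with its own crossover probability.

import numpy as np

def gilbert_elliott(bits, p_gb=0.01, p_bg=0.1, eps_good=0.001, eps_bad=0.1, seed=0):
    """Pass a bit sequence through a Gilbert-Elliott channel.

    p_gb / p_bg: transition probabilities good->bad and bad->good.
    eps_good / eps_bad: crossover probability in the good / bad state.
    Returns the received bits and the hidden state sequence.
    """
    rng = np.random.default_rng(seed)
    state = 0                                   # 0 = good, 1 = bad
    out, states = [], []
    for b in bits:
        eps = eps_bad if state else eps_good
        out.append(b ^ int(rng.random() < eps)) # flip with state-dependent probability
        states.append(state)
        p_switch = p_gb if state == 0 else p_bg
        if rng.random() < p_switch:             # Markov state transition
            state ^= 1
    return out, states

tx = [0, 1, 1, 0, 1] * 200
rx, hidden = gilbert_elliott(tx)
print(sum(r != b for r, b in zip(rx, tx)), "bit flips in", len(tx), "bits")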

Journal ArticleDOI
TL;DR: A new low-complexity algorithm for decoding low-density parity-check (LDPC) codes is developed that achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
Abstract: In this paper, we develop a new low-complexity algorithm to decode low-density parity-check (LDPC) codes. The developments are oriented specifically toward low-cost, yet effective, decoding of (high-rate) finite-geometry (FG) LDPC codes. The decoding procedure updates iteratively the hard-decision received vector in search of a valid codeword in the vector space. Only one bit is changed in each iteration, and the bit-selection criterion combines the number of failed checks and the reliability of the received bits. Prior knowledge of the signal amplitude and noise power is not required. An optional mechanism to avoid infinite loops in the search is also proposed. Our studies show that the algorithm achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
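The single-bit-per-iteration flipping rule described above can be sketched as follows; the way the failed-check count and the bit reliability are combined here is an illustrative choice, not the exact metric of the paper.

import numpy as np

def bit_flip_decode(H, hard_bits, reliability, max_iter=50):
    """Hard-decision bit-flipping decoder sketch (one flip per iteration).

    H: parity-check matrix as a 0/1 numpy array.
    hard_bits: hard-decision received vector (0/1 integers).
    reliability: per-bit reliability, e.g. the received amplitude magnitude.
    """
    x = hard_bits.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                            # all checks satisfied: valid codeword
        failed = H.T.dot(syndrome)              # number of failed checks touching each bit
        # Prefer bits involved in many failed checks and with low reliability
        # (the relative weighting is illustrative only).
        score = failed - reliability
        x[np.argmax(score)] ^= 1                # flip exactly one bit
    return x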

Journal ArticleDOI
TL;DR: Experimental calculations indicate that the use of dynamic reconfiguration leads to a 69% reduction in decoder power consumption over a nonreconfigurable field-programmable gate array implementation with no loss of decode accuracy.
Abstract: Error-correcting convolutional codes provide a proven mechanism to limit the effects of noise in digital data transmission. Although hardware implementations of decoding algorithms, such as the Viterbi algorithm, have shown good noise tolerance for error-correcting codes, these implementations require an exponential increase in very large scale integration area and power consumption to achieve increased decoding accuracy. To achieve reduced decoder power consumption, we have examined and implemented decoders based on the reduced-complexity adaptive Viterbi algorithm (AVA). Run-time dynamic reconfiguration is performed in response to varying communication channel-noise conditions to match minimized power consumption to required error-correction capabilities. Experimental calculations indicate that the use of dynamic reconfiguration leads to a 69% reduction in decoder power consumption over a nonreconfigurable field-programmable gate array implementation with no loss of decode accuracy.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: This paper presents optimal linear transformations of information symbols for quasi-orthogonal space-time block codes (QOSTBC) with minimum ML decoding complexity and shows that the diversity product is maximized when the mean transmission power is fixed.
Abstract: In this paper, we first present a necessary and sufficient condition on linear transformations for a QOSTBC to possess the minimum ML decoding complexity, i.e., real-symbol pair-wise decoding. We then present optimal linear transformations of information symbols for quasi-orthogonal space-time block codes (QOSTBC) with minimum ML decoding complexity. The optimality is in the sense that the diversity product (or product distance) is maximized when the mean transmission power is fixed.

Journal ArticleDOI
TL;DR: A novel parallel interleaver and an algorithm for its design are presented, achieving the same error correction performance as the standard architecture and achieving a very high coding gain.
Abstract: Standard VLSI implementations of turbo decoding require substantial memory and incur a long latency, which cannot be tolerated in some applications. A parallel VLSI architecture for low-latency turbo decoding, comprising multiple single-input single-output (SISO) elements, operating jointly on one turbo-coded block, is presented and compared to sequential architectures. A parallel interleaver is essential to process multiple concurrent SISO outputs. A novel parallel interleaver and an algorithm for its design are presented, achieving the same error correction performance as the standard architecture. Latency is reduced up to 20 times and throughput for large blocks is increased up to six-fold relative to sequential decoders, using the same silicon area, and achieving a very high coding gain. The parallel architecture scales favorably: latency and throughput are improved with increased block size and chip area.

Journal ArticleDOI
TL;DR: This work introduces and analyzes verification-based decoding for low-density parity-check (LDPC) codes, an approach specifically designed to manipulate data in packet-sized units, and describes how to utilize code scrambling to extend results to channels with errors controlled by an oblivious adversary.
Abstract: We introduce and analyze verification-based decoding for low-density parity-check (LDPC) codes, an approach specifically designed to manipulate data in packet-sized units. Verification-based decoding requires only linear time for both encoding and decoding and succeeds with high probability under random errors. We describe how to utilize code scrambling to extend our results to channels with errors controlled by an oblivious adversary.

Journal ArticleDOI
TL;DR: An in-depth analysis of a low-complexity method recently proposed by Guivarch et al., where the redundancy left by a Huffman encoder is used at a bit level in the channel decoder to improve its performance.
Abstract: Several recent publications have shown that joint source-channel decoding could be a powerful technique to take advantage of residual source redundancy for fixed- and variable-length source codes. This letter gives an in-depth analysis of a low-complexity method recently proposed by Guivarch et al., where the redundancy left by a Huffman encoder is used at a bit level in the channel decoder to improve its performance. Several simulation results are presented, showing for two first-order Markov sources of different sizes that using a priori knowledge of the source statistics yields a significant improvement, either with a Viterbi channel decoder or with a turbo decoder.

Patent
15 Mar 2005
TL;DR: An improved and extended Reed-Solomon-like method for providing a redundancy of m≧3 is described in this article, where a general expression of the codes is described, as well as a systematic criterion for proving correctness and finding decoding algorithms for values of m ≥ 3.
Abstract: An improved and extended Reed-Solomon-like method for providing a redundancy of m≧3 is disclosed. A general expression of the codes is described, as well as a systematic criterion for proving correctness and finding decoding algorithms for values of m≧3. Examples of codes are given for m=3, 4, 5, based on primitive elements of a finite field of dimension N where N is 8, 16 or 32. A Horner's method and accumulator apparatus are described for XOR-efficient evaluation of polynomials with variable vector coefficients and constant sparse square matrix abscissa. A power balancing technique is described to further improve the XOR efficiency of the algorithms. XOR-efficient decoding methods are also described. A tower coordinate technique to efficiently carry out finite field multiplication or inversion for large dimension N forms a basis for one decoding method. Another decoding method uses a stored one-dimensional table of powers of α and Schur expressions to efficiently calculate the inverse of the square submatrices of the encoding matrix.
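The Horner-style evaluation mentioned above (a polynomial whose coefficients are data vectors, whose abscissa is a constant sparse binary matrix, and whose addition is XOR) can be sketched in a few lines of Python; the matrix and coefficient values are arbitrary placeholders, and no attempt is made to reproduce the patent's XOR-efficiency optimizations.

import numpy as np

def horner_vector_poly(coeffs, A):
    """Evaluate c_0 + A*c_1 + A^2*c_2 + ... over GF(2) with Horner's rule.

    coeffs: list of 0/1 vectors, coeffs[i] being the degree-i coefficient.
    A: constant 0/1 square matrix (the abscissa).
    """
    acc = np.zeros(A.shape[0], dtype=np.uint8)
    for c in reversed(coeffs):
        acc = (A.dot(acc) % 2) ^ c              # one multiply-accumulate step per coefficient
    return acc

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]], dtype=np.uint8)       # placeholder abscissa matrix
coeffs = [np.array(v, dtype=np.uint8) for v in ([1, 0, 1], [0, 1, 1], [1, 1, 0])]
print(horner_vector_poly(coeffs, A))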

Journal ArticleDOI
TL;DR: A new version of the RRWBF decoding algorithm is proposed such that decoding time is significantly reduced, especially when the iteration number is small and the code length is large, and provides a more intuitive way of interpreting its superior performance over other bit-flipping-based algorithms.
Abstract: It was recently shown that the reliability-ratio-based bit-flipping (RRWBF) decoding algorithm for low-density parity-check (LDPC) codes performs best among existing bit-flipping-based algorithms. A new version of this algorithm is proposed such that decoding time is significantly reduced, especially when the iteration number is small and the code length is large. Simulation results showed that the proposed version achieves speedups of up to 2322.39%, 823.90%, 511.79%, and 261.92% over the original algorithm on a UNIX workstation for 10, 30, 50, and 100 iterations, respectively. It is thus much more efficient to adopt this version for simulation and hardware implementation. Moreover, this version of the RRWBF algorithm provides a more intuitive way of interpreting its superior performance over other bit-flipping-based algorithms.

Journal ArticleDOI
TL;DR: This approach bridges the gap between the error performance achieved by lower-order reliability-based decoding algorithms, which remain suboptimal, and maximum-likelihood decoding, which is too complex to implement for most codes employed in practice.
Abstract: In this letter, an iterative decoding algorithm for linear block codes combining reliability-based decoding with adaptive belief-propagation decoding is proposed. At each iteration, the soft output values delivered by the adaptive belief-propagation algorithm are used as reliability values to perform reduced-order reliability-based decoding of the code considered. This approach bridges the gap between the error performance achieved by lower-order reliability-based decoding algorithms, which remain suboptimal, and maximum-likelihood decoding, which is too complex to implement for most codes employed in practice. Simulation results for various linear block codes are given and elaborated.

Patent
13 Apr 2005
TL;DR: In this paper, a decoding scheme for LDPC (Low-Density Parity-Check) codes using sequential decoding has been proposed, where the nodes are divided according to a parity-check matrix into check nodes for a parity check message and variable nodes for bit messages.
Abstract: Disclosed is a decoding apparatus for LDPC (Low-Density Parity-Check) codes when receiving data encoded with LDPC codes on a channel having continuous output values, and a method thereof. The decoding method for LDPC codes uses sequential decoding and includes the following steps: (a) the nodes are divided according to a parity-check matrix into check nodes for a parity-check message and variable nodes for a bit message; (b) the check nodes are divided into a predetermined number of subsets; (c) the LDPC codeword of each subset for all the check nodes is sequentially decoded; (d) an output message is generated for verifying the validity of the decoding result; and (e) steps (b), (c), and (d) are performed iteratively for a predetermined number of iterations.
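Steps (b) through (e) above amount to a layered schedule in which the check nodes are processed subset by subset rather than all at once; a structural sketch is given below, with check_node_update, variable_node_update, and is_valid_codeword as hypothetical hooks for the concrete message computations (they are not names used in the patent).

def sequential_decode(check_subsets, check_node_update, variable_node_update,
                      is_valid_codeword, max_iter=20):
    """Skeleton of the subset-wise (sequential) decoding schedule."""
    for _ in range(max_iter):                   # step (e): repeat a fixed number of times
        for subset in check_subsets:            # steps (b)/(c): one check-node subset at a time
            check_node_update(subset)           # parity-check messages for this subset
            variable_node_update(subset)        # bit messages refreshed immediately
        if is_valid_codeword():                 # step (d): verify the decoding result
            return True
    return False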

Journal ArticleDOI
TL;DR: A new family of turbo codes called Multiple Slice Turbo Codes is proposed, based on two ideas: the encoding of each dimension with P independent tail-biting codes and a constrained interleaver structure that allows the parallel decoding of the P independent codewords in each dimension.
Abstract: The main problem with the hardware implementation of turbo codes is the lack of parallelism in the MAP-based decoding algorithm. This paper proposes to overcome this problem by using a new family of turbo codes called Multiple Slice Turbo Codes. This family is based on two ideas: the encoding of each dimension with P independent tail-biting codes and a constrained interleaver structure that allows the parallel decoding of the P independent codewords in each dimension. The optimization of the interleaver is described. A high degree of parallelism is obtained with equivalent or better performance than the DVB-RCS turbo code. For very high throughput applications, the parallel architecture decreases both decoding latency and hardware complexity compared to the classical serial architecture, which requires memory duplication.

Proceedings ArticleDOI
E. Erez1, Meir Feder1
31 Oct 2005
TL;DR: This work shows that network codes can be constructed for cyclic networks as long as at least one edge in each cycle has a delay; it is not required that every edge have a delay.
Abstract: In this work we address the problem of network codes for cyclic networks. We show that network codes can be constructed for cyclic networks as long as at least one edge in each cycle has a delay, but it is not required that every edge have a delay. We then present the algorithm for constructing an optimal multicast network code, developed in our previous work, and analyze its computational complexity, showing that it is polynomial in the graph size. We discuss the properties of the resulting codes, and show the ability to modify the code in a localized manner when sinks are added or removed. This property is also applicable to acyclic networks. Finally, we propose the sequential decoding algorithm we developed in an earlier work for decoding the resulting codes. For this we analyze its decoding delay, for both acyclic and cyclic networks.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: It is shown for a large class of LDPC ensembles, including RA and IRA codes, that the bit iterative decoding threshold is essentially identical to the block iterative decoding threshold.
Abstract: We show for a large class of LDPC ensembles, including RA and IRA codes, that the bit iterative decoding threshold is essentially identical to the block iterative decoding threshold.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: Simulations of codes of very short lengths over BEC reveal the superiority of the proposed decoding algorithm over present improved decoding algorithms for a wide range of bit error rates.
Abstract: This paper presents a new improved decoding algorithm for low-density parity-check (LDPC) codes over the binary erasure channel (BEC). The proposed algorithm combines the fact that a considerable fraction of unsatisfied check nodes are of degree two with the concept of guessing bits to perform simple graph-theoretic manipulations on the Tanner graph. The proposed decoding algorithm has a complexity similar to present improved decoding algorithms [H. Pishro-Nik et al., 2004]. Simulations of codes of very short lengths over BEC reveal the superiority of our algorithm over present improved decoding algorithms for a wide range of bit error rates.
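For reference, the baseline iterative decoder over the BEC that such improved algorithms build on is the peeling decoder: repeatedly find a check node with exactly one erased neighbor and recover that bit as the XOR of the others. A minimal sketch follows (the degree-two manipulations and bit-guessing refinements of the paper are not shown).

import numpy as np

def peel_bec(H, received):
    """Iterative BEC decoder; received entries are 0, 1, or None (erased)."""
    x = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:                                    # scan every check node
            idx = [i for i, h in enumerate(row) if h]
            erased = [i for i in idx if x[i] is None]
            if len(erased) == 1:                         # exactly one erased neighbor
                i = erased[0]
                x[i] = sum(x[k] for k in idx if k != i) % 2
                progress = True
    return x

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
print(peel_bec(H, [None, 0, None, 0]))    # recovers the all-zero codeword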

Proceedings ArticleDOI
01 Jan 2005
TL;DR: Simulation results indicate that the bounds are tight for small error rates, and the proposed codes have strong UEP property.
Abstract: A generalization of rateless codes (LT and Raptor codes) to provide unequal error protection (UEP) property is proposed in this paper. The proposed codes (UEP-LT and UEP-Raptor codes) are analyzed for the best possible performance over the binary erasure channel (BEC) in finite-length cases. We derive upper and lower bounds on the bit error probabilities under the maximum-likelihood (ML) decoding. We further verify our work with simulations. Simulation results indicate that the bounds are tight for small error rates, and the proposed codes have strong UEP property.