
Showing papers on "List decoding published in 2010"


Journal ArticleDOI
TL;DR: A taxonomy is presented that embeds all binary and ternary ECOC decoding strategies into four groups and shows that the zero symbol introduces two kinds of biases that require redefinition of the decoding design.
Abstract: A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-correcting output codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI machine learning repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
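To see the zero-symbol bias concretely, here is a toy numpy sketch (the coding matrix, the classifier outputs, and the simple zero-aware measure are our illustration, not the decoding measures proposed in the paper):

    import numpy as np

    # Hypothetical ternary coding matrix (3 classes x 4 binary classifiers):
    # +1/-1 = positive/negative meta-class, 0 = "do not care".
    M = np.array([[+1,  0,  0, -1],
                  [+1, +1, -1, +1],
                  [ 0, -1, +1,  0]])

    x = np.array([+1, +1, -1, -1])      # outputs of the 4 binary classifiers

    # Plain Hamming decoding: sign(0) = 0, so every "do not care" position
    # silently contributes a constant 1/2 -- one of the biases discussed above.
    plain = ((1 - np.sign(M * x)) / 2).sum(axis=1)

    # A zero-aware measure: score only the positions each row actually uses,
    # normalized by how many there are.
    used = M != 0
    aware = np.where(used, (1 - M * x) / 2, 0).sum(axis=1) / used.sum(axis=1)

    print("plain Hamming:", plain)   # [1. 1. 3.]   -> classes 0 and 1 tie
    print("zero-aware  :", aware)    # [0. 0.25 1.] -> class 0 wins

Class 0 matches perfectly on every classifier that actually trains on it, yet under plain Hamming decoding its two zeros cost as much as class 1's genuine disagreement; the zero-aware measure removes that constant charge.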

273 citations


Book ChapterDOI
25 May 2010
TL;DR: This paper presents a generalization of Stern's information-set decoding algorithm for decoding linear codes over arbitrary finite fields Fq and analyzes its complexity, making it possible to compute the security of recently proposed code-based systems over non-binary fields.
Abstract: The best known non-structural attacks against code-based cryptosystems are based on information-set decoding. Stern's algorithm and its improvements are well optimized and their complexity is reasonably well understood. However, these algorithms only handle codes over F2. This paper presents a generalization of Stern's information-set-decoding algorithm for decoding linear codes over arbitrary finite fields Fq and analyzes its complexity. This result makes it possible to compute the security of recently proposed code-based systems over non-binary fields. As an illustration, ranges of parameters for generalized McEliece cryptosystems using classical Goppa codes over F31 are suggested for which the new information-set-decoding algorithm needs 2^128 bit operations.
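For orientation, the following is a minimal sketch of the underlying idea in the binary case: plain Prange-style information-set decoding over F2 (the paper's subject is the sharper Stern variant, generalized to Fq); the function names and the toy [20,10] code are ours.

    import numpy as np

    def gf2_inv(A):
        """Invert a square 0/1 matrix over GF(2); return None if singular."""
        n = A.shape[0]
        M = np.concatenate([A % 2, np.eye(n, dtype=np.uint8)], axis=1)
        for col in range(n):
            piv = next((r for r in range(col, n) if M[r, col]), None)
            if piv is None:
                return None
            M[[col, piv]] = M[[piv, col]]
            for r in range(n):
                if r != col and M[r, col]:
                    M[r] ^= M[col]
        return M[:, n:]

    def prange_isd(H, s, t, iters=10000, seed=0):
        """Search for e with H e = s (mod 2) and wt(e) <= t via random
        information sets -- the basic step that Stern's algorithm refines."""
        rng = np.random.default_rng(seed)
        r, n = H.shape
        for _ in range(iters):
            cols = rng.choice(n, size=r, replace=False)
            Hinv = gf2_inv(H[:, cols])
            if Hinv is None:
                continue              # chosen columns not invertible; redraw
            e_cols = (Hinv @ s) % 2
            if e_cols.sum() <= t:     # success: error fits inside the chosen set
                e = np.zeros(n, dtype=np.uint8)
                e[cols] = e_cols
                return e
        return None

    # Demo on a random [20, 10] code with a weight-2 error.
    rng = np.random.default_rng(42)
    H = rng.integers(0, 2, size=(10, 20), dtype=np.uint8)
    e_true = np.zeros(20, dtype=np.uint8); e_true[[3, 17]] = 1
    s = (H @ e_true) % 2
    e_hat = prange_isd(H, s, t=2)
    print(e_hat is not None and ((H @ e_hat) % 2 == s).all() and e_hat.sum() <= 2)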

177 citations


Journal ArticleDOI
TL;DR: A new family of full-diversity LDPC codes is introduced that exhibit near-outage-limit performance under iterative decoding for all block-lengths and competes favorably with multiplexed parallel turbo codes for nonergodic channels.
Abstract: We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block-lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.

132 citations


Proceedings ArticleDOI
30 Sep 2010
TL;DR: It is proved that the projection of P in the original space is tighter than the fundamental polytope based on the parity check matrix, and the new LP decoder is equivalent to the belief propagation decoder operating on the sparse factor graph representation, and hence achieves capacity.
Abstract: Polar codes are the first codes to provably achieve capacity on the symmetric binary-input discrete memoryless channel (B-DMC) with low encoding and decoding complexity. The parity check matrix of polar codes is high-density and we show that linear program (LP) decoding fails on the fundamental polytope of the parity check matrix. The recursive structure of the code permits a sparse factor graph representation. We define a new polytope based on the fundamental polytope of the sparse graph representation. This new polytope P is defined in a space of dimension O(N logN) where N is the block length. We prove that the projection of P in the original space is tighter than the fundamental polytope based on the parity check matrix. The LP decoder over P obtains the ML-certificate property. In the case of the binary erasure channel (BEC), the new LP decoder is equivalent to the belief propagation (BP) decoder operating on the sparse factor graph representation, and hence achieves capacity. Simulation results of successive cancellation (SC) decoding, LP decoding over tightened polytopes, and maximum likelihood (ML) decoding are provided. For channels other than the BEC, we discuss why LP decoding over P with a linear objective function is insufficient.
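For reference, the fundamental polytope of a parity-check matrix H, over which Feldman's LP decoder minimizes the channel cost, is the intersection of the convex hulls of the single-check local codes:

    \[
    \mathcal{P}(H) \;=\; \bigcap_{j=1}^{m} \operatorname{conv}
    \bigl\{\, x \in \{0,1\}^{n} : \textstyle\sum_{i \in N(j)} x_i \equiv 0 \ (\mathrm{mod}\ 2) \,\bigr\},
    \]

where N(j) is the support of the j-th row of H. With a high-density H each check involves many bits, the local codes are large, and the relaxation is loose, which is what motivates moving to the sparse factor-graph representation.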

121 citations


Journal ArticleDOI
TL;DR: This paper presents two low-complexity reliability-based message-passing algorithms for decoding LDPC codes over non-binary finite fields that provide an effective trade-off between error performance and decoding complexity compared to the non-binary sum-product algorithm.
Abstract: This paper presents two low-complexity reliability-based message-passing algorithms for decoding LDPC codes over non-binary finite fields. These two decoding algorithms require only finite-field and integer operations, and they provide an effective trade-off between error performance and decoding complexity compared to the non-binary sum-product algorithm. They are particularly effective for decoding LDPC codes constructed based on finite geometries and finite fields.

90 citations


Journal ArticleDOI
Jingyu Kang, Qin Huang, Li Zhang, Bo Zhou, Shu Lin
TL;DR: Two new large classes of QC-LDPC codes, one binary and one non-binary, are presented, which have the potential to replace Reed-Solomon codes in some communication or storage systems where combinations of random errors and bursts of errors (or erasures) occur.
Abstract: This paper presents two new large classes of QC-LDPC codes, one binary and one non-binary. Codes in these two classes are constructed by array dispersions of row-distance-constrained matrices formed based on additive subgroups of finite fields. Experimental results show that the constructed codes perform very well over the AWGN channel with iterative decoding based on belief propagation. Codes in a subclass of the binary class have large minimum distances, comparable to finite-geometry LDPC codes, and they offer an effective tradeoff between error performance and decoding complexity when decoded with low-complexity reliability-based iterative decoding algorithms such as binary message-passing decoding algorithms. Non-binary codes decoded with a fast-Fourier-transform-based sum-product algorithm achieve significantly large coding gains over Reed-Solomon codes of the same lengths and rates decoded with either the hard-decision Berlekamp-Massey algorithm or the algebraic soft-decision Kotter-Vardy algorithm. They have the potential to replace Reed-Solomon codes in some communication or storage systems where combinations of random errors and bursts of errors (or erasures) occur.

88 citations


Journal ArticleDOI
Duc To, Jinho Choi
TL;DR: A low-complexity decoding scheme is proposed using a reduced-state trellis that can achieve the same diversity gain as full-state decoding over fading channels; the Viterbi algorithm can be used by approximating ML decoding of the XORed messages as two-user decoding.
Abstract: We study the application of convolutional codes to two-way relay networks (TWRNs) with physical-layer network coding (PNC). When a relay node decodes coded signals transmitted by two source nodes simultaneously, we show that the Viterbi algorithm (VA) can be used by approximating the maximum likelihood (ML) decoding for XORed messages as two-user decoding. In this setup, for a given memory-length constraint, the two source nodes can choose the same convolutional code with the largest free distance in order to maximize performance. Motivated by the fact that the relay node only needs to decode XORed messages, a low-complexity decoding scheme is proposed using a reduced-state trellis. We show that the reduced-state decoding can achieve the same diversity gain as full-state decoding for fading channels.

81 citations


Proceedings ArticleDOI
23 Oct 2010
TL;DR: In this paper, the authors considered coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as the fraction of errors is bounded with high probability by a parameter p and the process which adds the errors can be described by a sufficiently "simple" circuit.
Abstract: In this paper, we consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process which adds the errors can be described by a sufficiently "simple" circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For three classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. Unique decoding for additive errors: We give the first construction of a poly-time encodable/decodable code for additive (a.k.a. oblivious) channels that achieves the Shannon capacity 1-H(p). List-decoding for online log-space channels: A space-S(N) bounded channel reads and modifies the transmitted codeword as a stream, using at most S(N) bits of workspace on transmissions of N bits. For constant S, this captures many models from the literature, including "discrete channels with finite memory" and "arbitrarily varying channels". We give an efficient code with optimal rate (arbitrarily close to 1-H(p)) that, for channels using at most O(log N) bits of workspace, recovers a short list containing the correct message with high probability. List-decoding for poly-time channels: For any constant c we give a similar list-decoding result for channels describable by circuits of size at most N^c, assuming the existence of pseudorandom generators.

74 citations


Journal ArticleDOI
TL;DR: This paper shows that the proposed complexity measure can be accurately estimated from a density-evolution and extrinsic-information-transfer chart analysis of the code, and that, when the decoding complexity is constrained, complexity-optimized codes significantly outperform threshold-optimized codes at long block lengths, within the ensemble of irregular codes.
Abstract: The optimal performance-complexity tradeoff for error-correcting codes at rates strictly below the Shannon limit is a central question in coding theory. This paper proposes a numerical approach for the minimization of decoding complexity for long-block-length irregular low-density parity-check (LDPC) codes. The proposed design methodology is applicable to any binary-input memoryless symmetric channel and any iterative message-passing decoding algorithm with a parallel-update schedule. A key feature of the proposed optimization method is a new complexity measure that incorporates both the number of operations required to carry out a single decoding iteration and the number of iterations required for convergence. This paper shows that the proposed complexity measure can be accurately estimated from a density-evolution and extrinsic-information transfer chart analysis of the code. A sufficient condition is presented for convexity of the complexity measure in the variable edge-degree distribution; when it is not satisfied, numerical experiments nevertheless suggest that the local minimum is unique. The results presented herein show that when the decoding complexity is constrained, the complexity-optimized codes significantly outperform threshold-optimized codes at long block lengths, within the ensemble of irregular codes.
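In rough terms (our paraphrase, not the paper's exact expression), a measure of this kind multiplies the per-iteration cost by the predicted number of iterations:

    \[
    \chi(\lambda,\rho) \;\approx\; \ell^{*}(\lambda,\rho)\cdot\frac{E}{nR},
    \qquad
    \frac{E}{nR} \;=\; \frac{1}{R\,\sum_i \lambda_i/i},
    \]

where \(\ell^{*}\) is the iteration count that density evolution predicts is needed to reach the target error probability, \(E\) is the number of edges in the Tanner graph, \(R\) the code rate, and \(\lambda_i\) the edge-perspective variable-degree distribution. Minimizing \(\chi\) rather than the decoding threshold alone penalizes ensembles that converge slowly or use many edges per information bit.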

70 citations


Proceedings Article
12 Aug 2010
TL;DR: A generalized cryptosystem is presented that uses length-n codes over small finite fields Fq with dimension ≥ n-m(q-1)t, efficiently correcting ⌊qt/2⌋ errors where q^m ≥ n, and allowing considerably smaller keys to achieve the same security level against all known attacks.
Abstract: The original McEliece cryptosystem uses length-n codes over F2 with dimension ≥ n-mt efficiently correcting t errors where 2^m ≥ n. This paper presents a generalized cryptosystem that uses length-n codes over small finite fields Fq with dimension ≥ n-m(q-1)t efficiently correcting ⌊qt/2⌋ errors where q^m ≥ n. Previously proposed cryptosystems with the same length and dimension corrected only ⌊(q - 1)t/2⌋ errors for q ≥ 3. This paper also presents list-decoding algorithms that efficiently correct even more errors for the same codes over Fq. Finally, this paper shows that the increase from ⌊(q - 1)t/2⌋ errors to more than ⌊qt/2⌋ errors allows considerably smaller keys to achieve the same security level against all known attacks.
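A quick arithmetic sketch of these parameter trade-offs (illustrative values, not parameters recommended by the paper):

    from math import floor

    def wild_mceliece_params(q, m, t):
        """Dimension bound and error-correction counts for the generalized
        system described above, using length n = q^m codes over F_q."""
        n = q ** m
        k_lower = n - m * (q - 1) * t                      # dimension >= n - m(q-1)t
        return {
            "n": n,
            "k_lower_bound": k_lower,
            "errors_generalized": floor(q * t / 2),        # floor(qt/2)
            "errors_previous": floor((q - 1) * t / 2),     # floor((q-1)t/2)
        }

    # Illustrative only: the q = 31 Goppa-code setting mentioned in the abstract.
    print(wild_mceliece_params(q=31, m=2, t=4))
    # {'n': 961, 'k_lower_bound': 721, 'errors_generalized': 62, 'errors_previous': 60}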

68 citations


Proceedings ArticleDOI
Erdal Arikan
08 Jul 2010
TL;DR: A survey of Reed-Muller (RM) coding is given with the goal of establishing a continuity between RM codes and polar codes.
Abstract: A survey of Reed-Muller (RM) coding is given with the goal of establishing a continuity between RM codes and polar codes. The focus is mainly on recursive decoding methods for RM codes and other ideas that are most relevant to polar coding.

Patent
Thomas J. Kolze
26 Nov 2010
TL;DR: In this paper, a test error pattern is identified which covers the affected bits (or symbols) among at least two respective signals (e.g., all of the respective signals or any subset thereof).
Abstract: Modified error distance decoding. In certain communication systems, multiple signals (e.g., which may be viewed as being codewords, groups/sets of bits or symbols, etc.) can be commonly affected by such deleterious phenomenon as burst noise when traversing a communication channel (e.g., from a transmitter communication device to a receiver communication device). In such instances, a test error pattern may be identified which covers those affected bits (or symbols) among at least two respective signals (e.g., all of the respective signals or any subset thereof). Various respective test error patterns may be employed, each having a different respective weight, to the desired group of signals (e.g., codewords, groups/sets of bits or symbols, etc.). As such, more than one possible estimate of each respective signal may be generated. A variety of selection operations may be employed when more than one possible estimate exists (e.g., random selection, that estimate with minimum distance, etc.).

Journal ArticleDOI
TL;DR: It is shown that the inherent code property that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns, leads to decoding algorithms with significantly better performance compared to standard belief-propagation decoding.
Abstract: We introduce a new method for decoding short and moderate-length linear block codes with dense parity check matrix representations of cyclic form. This approach is termed multiple-bases belief-propagation. The proposed iterative scheme makes use of the fact that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns. We show that this inherent code property leads to decoding algorithms with significantly better performance when compared to standard belief-propagation decoding. Furthermore, we describe how to choose sets of parity-check matrices of cyclic form amenable for multiple-bases decoding, based on analytical studies performed for the binary erasure channel. For several cyclic and extended cyclic codes, the multiple-bases belief propagation decoding performance can be shown to closely follow that of the maximum-likelihood decoder.

Posted Content
TL;DR: In this paper, the decoding capabilities of convolutional codes over the erasure channel are studied and two subclasses of MDP codes are defined: reverse-MDP and complete-MDP convolutional codes.
Abstract: In this paper we study the decoding capabilities of convolutional codes over the erasure channel. Of special interest are maximum distance profile (MDP) convolutional codes; these are codes which have a maximum possible column-distance increase. We show how this strong minimum-distance condition of MDP convolutional codes helps us to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, we define two subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes can recover a maximum number of erasures using an algorithm that runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes, and they can recover the state of the decoder under the mildest condition. We show that complete-MDP convolutional codes perform in a certain sense better than MDS block codes of the same rate over the erasure channel.
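For context, the property behind these definitions is the standard column-distance bound from the MDP literature (our summary, not a line from this abstract): for an (n, k, δ) convolutional code the j-th column distance satisfies

    \[
    d_j^{c} \;\le\; (n-k)(j+1) + 1,
    \]

and MDP codes attain this bound for all j up to L = ⌊δ/k⌋ + ⌊δ/(n-k)⌋, i.e., their column distances grow as fast as algebraically possible. Over the erasure channel this maximal growth is what lets a sliding-window decoder resolve the longest possible bursts of erasures per window.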

Proceedings ArticleDOI
08 Jul 2010
TL;DR: Some key observations are made about regular LDPC convolutional code ensembles under windowed decoding, and modified constructions of these codes are given that allow us to efficiently trade off performance for gains in latency.
Abstract: We consider windowed decoding of LDPC convolutional codes on the binary erasure channel (BEC) to study the trade-off between decoding latency and code performance. We make some key observations about regular LDPC convolutional code ensembles under windowed decoding and give modified constructions of these codes that allow us to efficiently trade off performance for gains in latency.

Journal ArticleDOI
TL;DR: This paper combines ideas from Alekhnovich (2005) and the concept of key equations to get an algorithm with complexity O(s l^4 n log^2 n log log n).

Journal ArticleDOI
TL;DR: To decode errors beyond half the minimum distance, the new decoder is allowed to fail for some high-weight error patterns with a very small probability, like classical algebraic decoding algorithms.
Abstract: In this paper, a new approach for decoding low-rate Reed-Solomon codes beyond half the minimum distance is considered and analyzed. The maximum error correcting radius coincides with the error correcting radius of the Sudan algorithm published in 1997. However, unlike the Sudan Algorithm, the approach described here is not a list decoding algorithm, and is not based on polynomial interpolation. The algorithm in this paper is rather syndrome based, like classical algebraic decoding algorithms. The computational complexity of the new algorithm is of the same order as the complexity of the well-known Berlekamp-Massey algorithm. To decode errors beyond half the minimum distance, the new decoder is allowed to fail for some high-weight error patterns with a very small probability.

Journal ArticleDOI
TL;DR: It is proved that there exists a constant c_q > 0 and a function f_q such that for small enough ε > 0, if C is list-decodable to radius (1-1/q)(1-ε)n with list size c_q/ε^2, then C has at most f_q(ε) codewords, independent of n.
Abstract: A q-ary error-correcting code C ⊆ {1,2,...,q}^n is said to be list decodable to radius ρ with list size L if every Hamming ball of radius ρ contains at most L codewords of C. We prove that in order for a q-ary code to be list-decodable up to radius (1-1/q)(1-ε)n, we must have L = Ω(1/ε^2). Specifically, we prove that there exists a constant c_q > 0 and a function f_q such that for small enough ε > 0, if C is list-decodable to radius (1-1/q)(1-ε)n with list size c_q/ε^2, then C has at most f_q(ε) codewords, independent of n. This result is asymptotically tight (treating q as a constant), since such codes with an exponential (in n) number of codewords are known for list size L = O(1/ε^2). A result similar to ours is implicit in Blinovsky (Problems of Information Transmission, 1986) for the binary (q=2) case. Our proof is simpler and works for all alphabet sizes, and provides more intuition for why the lower bound arises.

Journal ArticleDOI
TL;DR: This work shows combinatorial limitations on efficient list decoding of Reed-Solomon codes beyond the Johnson-Guruswami-Sudan bounds, and presents a family of low-rate codes that are efficiently list-decodable beyond the Johnson bound, which leads to an optimal list-decoding algorithm for the family of matrix-codes.
Abstract: We show combinatorial limitations on efficient list decoding of Reed-Solomon codes beyond the Johnson-Guruswami-Sudan bounds. In particular, we show that for arbitrarily large fields F_N, |F_N| = N, for any δ ∈ (0,1), and K = N^δ: (1) Existence: there exists a received word w_N : F_N → F_N that agrees with a super-polynomial number of distinct degree-K polynomials on ≈ N^√δ points each; (2) Explicit: there exists a polynomial-time constructible received word w'_N : F_N → F_N that agrees with a super-polynomial number of distinct degree-K polynomials, on ≈ 2^√(log N) K points each. In both cases, our results improve upon the previous state of the art, which was ≈ N^δ/δ points of agreement for the existence case (proved by Justesen and Hoholdt), and ≈ 2N^δ points of agreement for the explicit case (proved by Guruswami and Rudra). Furthermore, for δ close to 1 our bound approaches the Guruswami-Sudan bound (which is √(NK)) and implies limitations on extending their efficient Reed-Solomon list decoding algorithm to larger decoding radius. Our proof is based on some remarkable properties of subspace polynomials. Using similar ideas, we then present a family of low-rate codes that are efficiently list-decodable beyond the Johnson bound. This leads to an optimal list-decoding algorithm for the family of matrix-codes.

Journal ArticleDOI
TL;DR: A new code is presented that tests commonly accepted design principles and for which decoding by conditional optimization is both fast and ML, and shows that it is possible to give up on cubic shaping without compromising code performance or decoding complexity.
Abstract: This paper focuses on conditional optimization as a decoding primitive for high rate space-time codes that are obtained by multiplexing in the spatial and code domains. The approach is a crystallization of the work of Hottinen which applies to space-time codes that are assisted by quasi-orthogonality. It is independent of implementation and is more general in that it can be applied to space-time codes such as the Golden Code and perfect space-time block codes, that are not assisted by quasi-orthogonality, to derive fast decoders with essentially maximum likelihood (ML) performance. The conditions under which conditional optimization leads to reduced complexity ML decoding are captured in terms of the induced channel at the receiver. These conditions are then translated back to the transmission domain leading to codes that are constructed by multiplexing orthogonal designs. The methods are applied to several block space-time codes obtained by multiplexing Alamouti blocks where it leads to ML decoding with complexity O(N^2), where N is the size of the underlying QAM signal constellation. A new code is presented that tests commonly accepted design principles and for which decoding by conditional optimization is both fast and ML. The two design principles for perfect space-time codes are nonvanishing determinant of pairwise differences and cubic shaping, and it is cubic shaping that restricts the possible multiplexing structures. The new code shows that it is possible to give up on cubic shaping without compromising code performance or decoding complexity.

Patent
Jong-Seon No, Beomkyu Shin, Hosung Park, Yongjune Kim, Jaehong Kim, Young-Hwan Lee, Junjin Kong
06 Jan 2010
TL;DR: In this article, a decoding method is proposed that performs a first decoding method and, when the first decoding method fails, a second decoding method, where the first decoding method includes updating multiple variable nodes and multiple check nodes using probability values of received data.
Abstract: A decoding method includes performing a first decoding method and performing a second decoding method when decoding of the first decoding method fails. The first decoding method includes updating multiple variable nodes and multiple check nodes using probability values of received data. The second decoding method includes selecting at least one variable node from among the multiple variable nodes; correcting probability values of data received in the selected at least one variable node; updating the variable nodes and the check nodes using the corrected probability values; and determining whether decoding of the second decoding method is successful.

Proceedings ArticleDOI
28 Oct 2010
TL;DR: A method to improve the finite-length performance of polar codes together with successive cancellation (SC) decoding by means of simple and short inner block codes is presented.
Abstract: Polar coding is a recently introduced capacity-achieving code construction method for binary-input discrete memoryless channels. We present a method to improve the finite-length performance of polar codes together with successive cancellation (SC) decoding by means of simple and short inner block codes. Example simulations show an improvement of about 0.3 dB.

Journal ArticleDOI
TL;DR: This work considers a new class of random codes which have the following advantages: (i) the overhead is constant (in the range of 5 to 10), independent of the number of data symbols being encoded; (ii) the probability of completing decoding for such an overhead is essentially one; (iii) the codes are effective for a number of information symbols as low as a few tens; (iv) the only probability distribution required is the uniform distribution.
Abstract: The design of erasure correcting codes and their decoding algorithms is now at the point where capacity achieving codes are available with decoding algorithms that have complexity that is linear in the number of information symbols. One aspect of these codes is that the overhead (number of coded symbols beyond the number of information symbols required to achieve decoding completion with high probability) is linear in k. This work considers a new class of random codes which have the following advantages: (i) the overhead is constant (in the range of 5 to 10), independent of the number of data symbols being encoded; (ii) the probability of completing decoding for such an overhead is essentially one; (iii) the codes are effective for a number of information symbols as low as a few tens; (iv) the only probability distribution required is the uniform distribution. The price for these properties is that the decoding complexity is greater, on the order of k^(3/2). However, for the lower values of k where these codes are of particular interest, this increase in complexity might be outweighed by their advantages. The parity check matrices of these codes are chosen at random as windowed matrices, i.e. for each column an initial starting position of a window of length w is chosen and the succeeding w positions are chosen at random as zero or one. It can be shown that it is necessary that w = O(k^(1/2)) for the probabilistic matrix rank properties to behave as a non-windowed random matrix. The sufficiency of the condition has so far been established by extensive simulation, although other arguments strongly support this conclusion. The properties of the codes described depend heavily on the rank properties of random matrices over finite fields. Known results on such matrices are briefly reviewed and several conjectures needed in the discussion of the code properties are stated. The likelihood of the validity of the conjectures is supported through extensive experimentation. Mathematical proof of the conjectures would be of great value for their own interest as well as for the particular coding application described here.
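A small experiment along these lines (our sketch: windows are truncated at the edge rather than wrapped, and the parameters are illustrative) estimates how often such a windowed random matrix reaches full rank, i.e., how often decoding completes:

    import numpy as np

    def gf2_rank(M):
        """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
        M = (M % 2).astype(np.uint8)
        rank = 0
        for c in range(M.shape[1]):
            piv = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
            if piv is None:
                continue
            M[[rank, piv]] = M[[piv, rank]]
            for r in range(M.shape[0]):
                if r != rank and M[r, c]:
                    M[r] ^= M[rank]
            rank += 1
            if rank == M.shape[0]:
                break
        return rank

    def windowed_matrix(n_rows, n_cols, w, rng):
        """Per column: pick a window start, fill the next w entries at random."""
        M = np.zeros((n_rows, n_cols), dtype=np.uint8)
        for c in range(n_cols):
            start = rng.integers(0, n_rows - w + 1)
            M[start:start + w, c] = rng.integers(0, 2, size=w)
        return M

    # k unknowns, k + overhead received combinations; decodable iff rank = k.
    k, overhead, trials = 100, 8, 50
    rng = np.random.default_rng(0)
    for w in (3, 10, 40):                      # 10 ~ sqrt(k), the claimed threshold
        ok = sum(gf2_rank(windowed_matrix(k, k + overhead, w, rng)) == k
                 for _ in range(trials))
        print(f"w = {w:2d}: full-rank (decodable) fraction ~ {ok / trials:.2f}")

Narrow windows leave some rows uncovered and the matrix is almost never full rank; around w ~ sqrt(k) the behavior approaches that of an unstructured random matrix, matching the necessity condition quoted above.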

Journal ArticleDOI
TL;DR: A new separation algorithm to improve the error-correcting performance of LP decoding for binary linear block codes using an IP formulation with indicator variables that help in detecting the violated parity checks and an efficient method of finding cuts induced by redundant parity checks.
Abstract: Maximum likelihood (ML) decoding is the optimal decoding algorithm for arbitrary linear block codes and can be written as an integer programming (IP) problem. Feldman relaxed this IP problem and presented linear programming (LP) based decoding. In this paper, we propose a new separation algorithm to improve the error-correcting performance of LP decoding for binary linear block codes. We use an IP formulation with indicator variables that help in detecting the violated parity checks. We derive Gomory cuts from the IP and use them in our separation algorithm. An efficient method of finding cuts induced by redundant parity checks (RPC) is also proposed. Under certain circumstances we can guarantee that these RPC cuts are valid and cut off the fractional optimal solutions of LP decoding. It is demonstrated on three LDPC codes and two BCH codes that our separation algorithm performs significantly better than LP decoding and belief propagation (BP) decoding.
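For orientation, the baseline that the separation algorithm improves on is Feldman's LP with the forbidden-set inequalities written out explicitly, which is only practical for very small codes. The sketch below is our construction (a (7,4) Hamming code and a BSC cost vector), not the paper's cut-generation procedure:

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def lp_decode(H, gamma):
        """Feldman LP decoding: minimize gamma.x over the fundamental polytope,
        using every odd-subset (forbidden-set) inequality of every check."""
        m, n = H.shape
        A_ub, b_ub = [], []
        for j in range(m):
            N = np.flatnonzero(H[j])
            for size in range(1, len(N) + 1, 2):          # odd-sized subsets S
                for S in itertools.combinations(N, size):
                    row = np.zeros(n)
                    row[list(N)] = -1.0
                    row[list(S)] = 1.0                    # sum_S x - sum_{N\S} x <= |S|-1
                    A_ub.append(row)
                    b_ub.append(len(S) - 1)
        res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, 1)] * n, method="highs")
        return res.x

    # (7,4) Hamming code parity-check matrix.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
    # BSC log-likelihood costs for a received word with one flipped bit.
    y = np.array([0, 0, 0, 0, 0, 0, 1])
    p = 0.1
    gamma = np.where(y == 0, 1.0, -1.0) * np.log((1 - p) / p)
    print(np.round(lp_decode(H, gamma), 3))   # expect the all-zero codeword

Since a degree-d check contributes 2^(d-1) inequalities, writing them all out explodes for dense or high-degree codes; the separation approach instead adds inequalities (and the Gomory and RPC cuts described above) only when they are violated.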

Journal ArticleDOI
TL;DR: It is shown that the proposed LDPC-GM codes can achieve capacity on any memoryless binary-input output-symmetric (MBIOS) channel under maximum-likelihood (ML) decoding and on any BEC under belief propagation (BP) decoding, both with bounded graphical complexity.
Abstract: In this paper, the existence of capacity-achieving codes for memoryless binary-input output-symmetric (MBIOS) channels under maximum-likelihood (ML) decoding with bounded graphical complexity is investigated. Graphical complexity of a code is defined as the number of edges in the graphical representation of the code per information bit and is proportional to the decoding complexity per information bit per iteration under iterative decoding. Irregular repeat-accumulate (IRA) codes are studied first. Utilizing the asymptotic average weight distribution (AAWD) of these codes and invoking Divsalar's bound on the binary-input additive white Gaussian noise (BIAWGN) channel, it is shown that simple nonsystematic IRA ensembles outperform systematic IRA and regular low-density parity-check (LDPC) ensembles with the same graphical complexity, and are at most 0.124 dB away from the Shannon limit. However, a conclusive result as to whether these nonsystematic IRA codes can really achieve capacity cannot be reached. Motivated by this inconclusive result, a new family of codes is proposed, called low-density parity-check and generator matrix (LDPC-GM) codes, which are serially concatenated codes with an outer LDPC code and an inner low-density generator matrix (LDGM) code. It is shown that these codes can achieve capacity on any MBIOS channel using ML decoding and also achieve capacity on any BEC using belief propagation (BP) decoding, both with bounded graphical complexity. Moreover, it is shown that, under certain conditions, these capacity-achieving codes have linearly increasing minimum distances and achieve the asymptotic Gilbert-Varshamov bound for all rates.

Proceedings ArticleDOI
21 Jun 2010
TL;DR: This paper proposes a novel network coding approach based on LT codes, called LTNC, and evaluates it against random linear network codes in an epidemic content-dissemination application; LTNC consistently outperforms dissemination protocols without codes, thus preserving the benefit of coding.
Abstract: Network coding has been successfully applied in large-scale content dissemination systems. While network codes provide optimal throughput, their current forms suffer from a high decoding complexity. This is an issue when applied to systems composed of nodes with low processing capabilities, such as sensor networks. In this paper, we propose a novel network coding approach based on LT codes, initially introduced in the context of erasure coding. Our coding scheme, called LTNC, fully benefits from the low complexity of belief propagation decoding. Yet, such decoding schemes are extremely sensitive to statistical properties of the code. Maintaining such properties in a fully decentralized way with only a subset of encoded data is challenging. This is precisely what the recoding algorithms of LTNC achieve. We evaluate LTNC against random linear network codes in an epidemic content-dissemination application. Results show that LTNC increases communication overhead (20%) and convergence time (30%) but greatly reduces the decoding complexity (99%) when compared to random linear network codes. In addition, LTNC consistently outperforms dissemination protocols without codes, thus preserving the benefit of coding.
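As background, the LT building blocks that LTNC reuses look roughly as follows (our sketch: ideal-soliton degrees, XOR over byte values, and no LTNC recoding rules):

    import random

    rng = random.Random(7)

    def soliton(k):
        """Ideal soliton degree distribution: rho(1)=1/k, rho(d)=1/(d(d-1))."""
        return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

    def lt_encode(data, n_out, rng):
        """Each coded symbol XORs a random degree-d subset of source symbols."""
        k, weights, out = len(data), soliton(len(data)), []
        for _ in range(n_out):
            d = rng.choices(range(1, k + 1), weights=weights)[0]
            idx = rng.sample(range(k), d)
            val = 0
            for i in idx:
                val ^= data[i]
            out.append((set(idx), val))
        return out

    def peel_decode(symbols, k):
        """Belief-propagation (peeling) decoding: release degree-1 symbols,
        substitute them into the remaining equations, repeat."""
        eqs = [[set(s), v] for s, v in symbols]
        decoded = {}
        changed = True
        while changed:
            changed = False
            for eq in eqs:
                s = eq[0]
                for i in [i for i in s if i in decoded]:
                    s.discard(i)
                    eq[1] ^= decoded[i]
                if len(s) == 1:
                    decoded[s.pop()] = eq[1]
                    changed = True
        return [decoded.get(i) for i in range(k)]

    k = 32
    data = [rng.randrange(256) for _ in range(k)]
    rec = peel_decode(lt_encode(data, 2 * k, rng), k)
    print(sum(r == d for r, d in zip(rec, data)), "of", k, "source symbols recovered")

The peeling loop is what makes decoding cheap, and it is also why the degree statistics matter so much: without enough degree-1 and degree-2 symbols the ripple dies and decoding stalls, which is the property LTNC's decentralized recoding is designed to preserve.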

Proceedings ArticleDOI
13 Jun 2010
TL;DR: For any integer L, the list-L decoder guarantees successful recovery of the message subspace provided the normalized dimension of the error is at most L − L^2(L+1)R/2, where R is the normalized rate of the code.
Abstract: The operator channel was introduced by Koetter and Kschischang as a model of errors and erasures for randomized network coding, in the case where network topology is unknown (the noncoherent case). The input and output of the operator channel are vector subspaces of the ambient space; thus error-correcting codes for this channel are collections of such subspaces. Koetter and Kschischang also constructed a remarkable family of codes for the operator channel. The Koetter-Kschischang codes are similar to Reed-Solomon codes in that codewords are obtained by evaluating certain (linearized) polynomials. In this paper, we consider the problem of list-decoding the Koetter-Kschischang codes on the operator channel. In a sense, we are able to achieve for these codes what Sudan was able to achieve for Reed-Solomon codes. In order to do so, we have to modify and generalize the original Koetter-Kschischang construction in many important respects. The end result is this: for any integer L, our list-L decoder guarantees successful recovery of the message subspace provided the normalized dimension of the error is at most L − L^2(L+1)R/2, where R is the normalized rate of the code. Just as in the case of Sudan's list-decoding algorithm, this exceeds the previously best-known error-correction radius 1 − R, demonstrated by Koetter and Kschischang, for low rates R.
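Evaluating this radius for small list sizes makes the low-rate advantage explicit:

    \[
    \tau_L \;=\; L - \frac{L^{2}(L+1)}{2}\,R:
    \qquad \tau_1 = 1 - R, \quad \tau_2 = 2 - 6R, \quad \tau_3 = 3 - 18R,
    \]

so the list-2 radius already exceeds the unique-decoding radius 1 − R whenever R < 1/5, consistent with the abstract's remark that the gain appears at low rates.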

Posted Content
TL;DR: In this article, a technique based on re-encoding and coordinate transformation is proposed that significantly reduces the complexity of the bivariate interpolation procedure in algebraic soft-decoding and Sudan-type list decoding of Reed-Solomon codes.
Abstract: The main computational steps in algebraic soft-decoding, as well as Sudan-type list-decoding, of Reed-Solomon codes are bivariate polynomial interpolation and factorization. We introduce a computational technique, based upon re-encoding and coordinate transformation, that significantly reduces the complexity of the bivariate interpolation procedure. This re-encoding and coordinate transformation converts the original interpolation problem into another reduced interpolation problem, which is orders of magnitude smaller than the original one. A rigorous proof is presented to show that the two interpolation problems are indeed equivalent. An efficient factorization procedure that applies directly to the reduced interpolation problem is also given.

Journal ArticleDOI
TL;DR: An improved syndrome shift-register decoding algorithm, called the syndrome-weight decoding algorithm, is proposed for correcting up to three errors and detecting four errors in the (24,12,8) Golay code; it significantly reduces the memory required for the lookup table, thereby yielding a faster decoding algorithm.

Journal ArticleDOI
TL;DR: A message independence property and some new performance upper bounds are derived in this work for erasure, list, and decision-feedback schemes with linear block codes transmitted over memoryless symmetric channels.
Abstract: A message independence property and some new performance upper bounds are derived in this work for erasure, list, and decision-feedback schemes with linear block codes transmitted over memoryless symmetric channels. Similar to the classical work of Forney, this work is focused on the derivation of some Gallager-type bounds on the achievable tradeoffs for these coding schemes, where the main novelty is the suitability of the bounds for both random and structured linear block codes (or code ensembles). The bounds are applicable to finite-length codes and to the asymptotic case of infinite block length, and they are applied to low-density parity-check code ensembles.