
Showing papers on "Concatenated error correction code published in 2013"


Journal ArticleDOI
TL;DR: A method for efficiently constructing polar codes is presented and analyzed, proving that for any fixed ε > 0 and all sufficiently large code lengths n, polar codes whose rate is within ε of channel capacity can be constructed in time and space that are both linear in n.
Abstract: A method for efficiently constructing polar codes is presented and analyzed. Although polar codes are explicitly defined, straightforward construction is intractable since the resulting polar bit-channels have an output alphabet that grows exponentially with the code length. Thus, the core problem that needs to be solved is that of faithfully approximating a bit-channel with an intractably large alphabet by another channel having a manageable alphabet size. We devise two approximation methods which “sandwich” the original bit-channel between a degraded and an upgraded version thereof. Both approximations can be efficiently computed and turn out to be extremely close in practice. We also provide theoretical analysis of our construction algorithms, proving that for any fixed ε > 0 and all sufficiently large code lengths n, polar codes whose rate is within ε of channel capacity can be constructed in time and space that are both linear in n.
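Over the binary erasure channel, the construction problem the paper addresses for general channels has an exact closed form: the erasure probability of each bit-channel follows a simple one-step recursion, with no alphabet growth to approximate. A minimal illustrative sketch (function names are mine, not from the paper):

```python
# Exact erasure probabilities of the 2**n polar bit-channels over a BEC.
# For general channels these quantities must be approximated (the paper's
# degrading/upgrading quantization); over the BEC the recursion is exact.

def bec_bit_channels(n, eps):
    """Erasure probability of each of the 2**n bit-channels for a
    BEC with erasure probability eps."""
    z = [eps]
    for _ in range(n):
        # Each channel with erasure probability p splits into a
        # degraded (2p - p^2) and an upgraded (p^2) bit-channel.
        z = [w for p in z for w in (2 * p - p * p, p * p)]
    return z

# Rate-1/2 polar code of length 8: the 4 most reliable bit-channels
# (lowest erasure probability) carry information.
zs = bec_bit_channels(3, 0.5)
info_set = sorted(range(8), key=lambda i: zs[i])[:4]
```

Polarization is visible already at length 8: for a channel of erasure rate 0.5, the extreme bit-channels come out near 0.996 and 0.004.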

755 citations


Journal ArticleDOI
TL;DR: The derived architecture has a very low processing complexity while the memory complexity remains similar to that of previous architectures, which allows very large polar code decoders to be implemented in hardware.
Abstract: Polar codes are a recently discovered family of capacity-achieving codes that are seen as a major breakthrough in coding theory. Motivated by the recent rapid progress in the theory of polar codes, we propose a semi-parallel architecture for the implementation of successive cancellation decoding. We take advantage of the recursive structure of polar codes to make efficient use of processing resources. The derived architecture has a very low processing complexity while the memory complexity remains similar to that of previous architectures. This drastic reduction in processing complexity allows very large polar code decoders to be implemented in hardware. An N = 2^17 polar code successive cancellation decoder is implemented on an FPGA. We also report synthesis results for an ASIC.
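A successive cancellation decoder is built from many evaluations of just two LLR update rules; the semi-parallel idea is to instantiate only a small number of processing elements that compute them and time-share those across the recursion. The hardware-friendly min-sum forms of the two rules are sketched below (function names are illustrative, not from the paper):

```python
import math

# The two LLR update rules evaluated by each processing element of a
# successive cancellation decoder, in min-sum form (names are mine).

def f_minsum(a, b):
    """Upper-branch (check-node style) LLR combination."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g_update(a, b, u):
    """Lower-branch (variable-node style) combination, given the
    already-decoded partial-sum bit u in {0, 1}."""
    return b + (1 - 2 * u) * a
```

These two functions are just the arithmetic a processing element implements; the architecture's contribution is in how few of them are instantiated and how the memory is organized around them.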

355 citations


Proceedings ArticleDOI
09 Jun 2013
TL;DR: A simple quasi-uniform puncturing algorithm to generate the puncturing table is proposed, and it is proved that this method has a better row-weight property than random puncturing.
Abstract: CRC (cyclic redundancy check) concatenated polar codes are superior to turbo codes under the successive cancellation list (SCL) or successive cancellation stack (SCS) decoding algorithms. However, the length of a polar code is limited to a power of two. In this paper, a family of rate-compatible punctured polar (RCPP) codes is proposed to support constructions with arbitrary code length. We propose a simple quasi-uniform puncturing algorithm to generate the puncturing table, and we prove that this method has a better row-weight property than random puncturing. Simulation results over binary-input additive white Gaussian noise channels (BI-AWGNs) show that these RCPP codes outperform the turbo codes used in WCDMA (Wideband Code Division Multiple Access) and LTE (Long Term Evolution) wireless communication systems over a large range of code lengths. In particular, the RCPP code under the CRC-aided SCL/SCS algorithm provides more than 0.7 dB of performance gain at a block error rate (BLER) of 10^-4 with a short code length M = 512 and code rate R = 0.5.
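The quasi-uniform puncturing idea can be sketched as follows: mark the first N - M positions of the length-N mother code as punctured, then spread those marks with the bit-reversal permutation. This is an illustrative reconstruction from the abstract, not the authors' exact pseudocode:

```python
# Sketch of quasi-uniform puncturing (QUP), reconstructed from the
# abstract: puncture the first N - M positions, then apply the
# bit-reversal permutation to spread them across the block.

def bit_reverse(i, n_bits):
    """Reverse the n_bits-bit binary representation of i."""
    return int(format(i, '0{}b'.format(n_bits))[::-1], 2)

def qup_pattern(N, M):
    """Puncturing table for shortening a length-N polar code to M
    transmitted symbols (1 = transmitted, 0 = punctured)."""
    n_bits = N.bit_length() - 1
    table = [0] * (N - M) + [1] * M
    return [table[bit_reverse(i, n_bits)] for i in range(N)]

# Shorten a length-1024 mother code to the paper's M = 512 example.
pattern = qup_pattern(1024, 512)
```

At N = 8, M = 6 the punctured positions come out as {0, 4}, i.e. spread quasi-uniformly rather than clustered at the front.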

199 citations


Patent
03 Dec 2013
TL;DR: In this article, a method for decoding block and concatenated codes based on belief propagation algorithms, with particular advantages when applied to codes having higher density parity check matrices, is presented.
Abstract: Systems and methods for decoding block and concatenated codes are provided. These include advanced iterative decoding techniques based on belief propagation algorithms, with particular advantages when applied to codes having higher density parity check matrices. Improvements are also provided for performing channel state information estimation including the use of optimum filter lengths based on channel selectivity and adaptive decision-directed channel estimation. These improvements enhance the performance of various communication systems and consumer electronics. Particular improvements are also provided for decoding HD Radio signals, including enhanced decoding of reference subcarriers based on soft-diversity combining, joint enhanced channel state information estimation, as well as iterative soft-input soft-output and list decoding of convolutional codes and Reed-Solomon codes. These and other improvements enhance the decoding of different logical channels in HD Radio systems.

132 citations


Proceedings ArticleDOI
17 Mar 2013
TL;DR: A novel SD-FEC employing the concatenation of a spatially-coupled type irregular LDPC code with a BCH code is proposed, showing an NCG of 12.0 dB at a BER of 10^-15 with 25.5% redundancy.
Abstract: We propose a novel SD-FEC employing the concatenation of a spatially-coupled type irregular LDPC code with a BCH code. Numerical simulations show an NCG of 12.0 dB at a BER of 10^-15 with 25.5% redundancy.

131 citations


Journal ArticleDOI
TL;DR: It is shown that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design, which can be used to derive a new family of linear codes in the Hamming space.
Abstract: Lifted maximum rank distance (MRD) codes, which are constant dimension codes, are considered. It is shown that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design. A slightly different representation of this design makes it similar to a q-analog of a transversal design. The structure of these designs is used to obtain upper bounds on the sizes of constant dimension codes which contain a lifted MRD code. Codes that attain these bounds are constructed. These codes are the largest known constant dimension codes for the given parameters. These transversal designs can also be used to derive a new family of linear codes in the Hamming space. Bounds on the minimum distance and the dimension of such codes are given.

122 citations


Journal ArticleDOI
TL;DR: It is shown that any family of LDPC codes, quantum or classical, where distance scales as a positive power of the block length, can correct all errors with certainty if the error rate per (qu)bit is sufficiently small.
Abstract: We discuss error-correction properties for families of quantum low-density parity check (LDPC) codes with relative distance that tends to zero in the limit of large blocklength. In particular, we show that any family of LDPC codes, quantum or classical, where distance scales as a positive power of the block length, d ∝ n^α with α > 0, can correct all errors with certainty if the error rate per (qu)bit is sufficiently small. We specifically analyze the case of the LDPC version of the quantum hypergraph-product codes recently suggested by Tillich and Zémor. These codes are a finite-rate generalization of the toric codes and, for sufficiently large quantum computers, offer an advantage over the toric codes.

100 citations


Journal ArticleDOI
TL;DR: Numerical results show that length-compatible polar codes designed by the proposed method provide a performance gain of about 1.0 - 5.0 dB over those obtained by random puncturing when successive cancellation decoding is employed.
Abstract: Length-compatible polar codes are a class of polar codes which can support a wide range of lengths with a single pair of encoder and decoder. In this paper we propose a method to construct length-compatible polar codes by employing the reduction of the 2^n × 2^n polarizing matrix proposed by Arikan. The conditions under which a reduced matrix becomes a polarizing matrix supporting a polar code of a given length are first analyzed. Based on these conditions, length-compatible polar codes are constructed in a suboptimal way by codeword-puncturing and information-refreezing processes. They have low encoding and decoding complexity since they can be encoded and decoded in a similar way as a polar code of length 2^n. Numerical results show that length-compatible polar codes designed by the proposed method provide a performance gain of about 1.0 - 5.0 dB over those obtained by random puncturing when successive cancellation decoding is employed.

99 citations


Journal ArticleDOI
TL;DR: In this paper, a two-slice characterization of the parity polytope is presented, which simplifies the representation of points in the polytope and enables efficient decoding of large-scale error-correcting codes.
Abstract: When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error-rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding when implemented with standard LP solvers does not easily scale to the block lengths of modern error correcting codes. In this paper, we draw on decomposition methods from optimization theory, specifically the alternating direction method of multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a “two-slice” characterization of the parity polytope, the polytope formed by taking the convex hull of all codewords of the single parity check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by the ADMM decoder and its solution allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error correcting codes efficiently. We present numerical results for LDPC codes of lengths greater than 1000. The waterfall region of LP decoding is seen to initiate at a slightly higher SNR than for sum-product BP; however, no error floor is observed for LP decoding, unlike for BP. Our implementation of LP decoding using the ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message-passing with a particularly simple schedule.

98 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the performance of polar codes over the binary erasure channel (BEC) while assuming belief propagation as the decoding method and provided a stopping set analysis for the factor graph of the polar codes.
Abstract: This paper investigates properties of polar codes that can be potentially useful in real-world applications. We start with analyzing the performance of finite-length polar codes over the binary erasure channel (BEC), while assuming belief propagation as the decoding method. We provide a stopping set analysis for the factor graph of polar codes, where we find the size of the minimum stopping set. We also find the girth of the graph for polar codes. Our analysis along with bit error rate (BER) simulations demonstrates that finite-length polar codes show superior error floor performance compared to the conventional capacity-approaching coding techniques. In order to take advantage of this property while avoiding the shortcomings of polar codes, we consider the idea of combining polar codes with other coding schemes. We propose a polar code-based concatenated scheme to be used in Optical Transport Networks (OTNs) as a potential real-world application. Comparing against conventional concatenation techniques for OTNs, we show that the proposed scheme outperforms the existing methods by closing the gap to the capacity while avoiding an error floor and maintaining a low complexity at the same time.

95 citations


Proceedings ArticleDOI
09 Jan 2013
TL;DR: In this paper, the authors explore error-correcting codes derived from the lifting of affine-invariant codes, i.e., linear codes whose coordinates are a vector space over a field and which are invariant under affine transformations of the coordinate space.
Abstract: In this work we explore error-correcting codes derived from the "lifting" of "affine-invariant" codes. Affine-invariant codes are simply linear codes whose coordinates are a vector space over a field and which are invariant under affine transformations of the coordinate space. Lifting takes codes defined over a vector space of small dimension and lifts them to higher dimensions by requiring their restriction to every subspace of the original dimension to be a codeword of the code being lifted. While the operation is of interest on its own, this work focuses on new ranges of parameters that can be obtained by such codes, in the context of local correction and testing. In particular we present four interesting ranges of parameters that can be achieved by such lifts, all of which are new in the context of affine-invariance and some may be new even in general. The main highlight is a construction of high-rate codes with sublinear time decoding. The only prior construction of such codes is due to Kopparty, Saraf and Yekhanin [33]. All our codes are extremely simple, being just lifts of various parity check codes (codes with one symbol of redundancy), and in the final case, the lift of a Reed-Solomon code. We also present a simple connection between certain lifted codes and lower bounds on the size of "Nikodym sets". Roughly, a Nikodym set in F_q^m is a set S with the property that every point has a line passing through it which is almost entirely contained in S. While previous lower bounds on Nikodym sets were roughly growing as q^m/2^m, we use our lifted codes to prove a lower bound of (1 - o(1))q^m for fields of constant characteristic.

Proceedings ArticleDOI
17 Apr 2013
TL;DR: An efficient method to calculate the Bhattacharyya parameters and construct polar codes based on Gaussian approximation is introduced, and it is shown that the constructed codes can have comparable error performance with a low computation complexity.
Abstract: Polar coding, introduced by Arikan, is the first code construction method that could be proved to construct capacity-achieving codes for any symmetric binary-input discrete memoryless channel (B-DMC). However, this construction method is not explicit or efficient if the channel is not the binary erasure channel (BEC). Density evolution has been proposed to solve this problem for any B-DMC, but its implementation is not tractable and requires a high computation complexity. Here, we introduce an efficient method to calculate the Bhattacharyya parameters and construct polar codes based on Gaussian approximation. Then we evaluate the code performance and compare it with the existing methods. It is shown that the codes constructed using Gaussian approximation can have a comparable error performance with a low computation complexity.
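A common way to realize such a Gaussian-approximation construction tracks only the mean LLR of each bit-channel (BPSK over AWGN), using the curve-fit φ function of Chung et al.; the constants below are the widely used literature values, and the code is an illustrative sketch, not the authors' implementation:

```python
import math

# Illustrative Gaussian-approximation construction: track only the mean
# LLR of each bit-channel.  phi() is the common curve-fit of Chung et al.

def phi(x):
    if x < 1e-12:
        return 1.0
    if x <= 10.0:
        return math.exp(-0.4527 * x ** 0.86 + 0.0218)
    return math.sqrt(math.pi / x) * math.exp(-x / 4) * (1 - 10 / (7 * x))

def phi_inv(y):
    """Invert the decreasing function phi by bisection."""
    lo, hi = 0.0, 1e4
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def ga_means(n, m0):
    """Mean LLRs of the 2**n bit-channels, starting from m0 = 2/sigma^2."""
    m = [m0]
    for _ in range(n):
        # Minus branch degrades the mean, plus branch doubles it.
        m = [w for x in m for w in (phi_inv(1 - (1 - phi(x)) ** 2), 2 * x)]
    return m

means = ga_means(3, 2.0)   # 8 bit-channels at 2/sigma^2 = 2.0
```

Information positions are then the indices with the largest means; ranking by mean LLR plays the same role here as ranking by Bhattacharyya parameter.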

Proceedings ArticleDOI
01 Dec 2013
TL;DR: Analysis and simulation of the iterative HDD of tightly-braided block codes with BCH component codes for high-speed optical communication shows that these codes are competitive with the best schemes based on HDD.
Abstract: Designing error-correcting codes for optical communication is challenging mainly because of the high data rates (e.g., 100 Gbps) required and the expectation of low latency, low overhead (e.g., 7% redundancy), and large coding gain (e.g., > 9 dB). Although soft-decision decoding (SDD) of low-density parity-check (LDPC) codes is an active area of research, the mainstay of optical transport systems is still the iterative hard-decision decoding (HDD) of generalized product codes with algebraic syndrome decoding of the component codes. This is because iterative HDD allows many simplifications and SDD of LDPC codes results in much higher implementation complexity. In this paper, we use analysis and simulation to evaluate tightly-braided block codes with BCH component codes for high-speed optical communication. Simulation of the iterative HDD shows that these codes are competitive with the best schemes based on HDD. Finally, we suggest a specific design that is compatible with the G.709 framing structure and exhibits a coding gain of > 9.35 dB at 7% redundancy under iterative HDD with a latency of approximately 1 million bits.

Journal ArticleDOI
TL;DR: Almost perfect nonlinear monomials, and a number of other monomials over GF(3^m), are used to construct optimal ternary cyclic codes with the same parameters.
Abstract: Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. Perfect nonlinear monomials were employed to construct optimal ternary cyclic codes with parameters [3^m-1, 3^m-1-2m, 4] by Carlet, Ding, and Yuan in 2005. In this paper, almost perfect nonlinear monomials, and a number of other monomials over GF(3^m), are used to construct optimal ternary cyclic codes with the same parameters. Nine open problems on such codes are also presented.

Journal ArticleDOI
TL;DR: In this article, the problem of finding the shortest length index code with a prescribed error-correcting capability was studied, and the Singleton bound and two other bounds, referred to as the α-bound and the κ-bound, were established.
Abstract: A problem of index coding with side information was first considered by Birk and Kol in 1998. In this study, a generalization of the index coding scheme, where transmitted symbols are subject to errors, is studied. Error-correcting methods for such a scheme, and their parameters, are investigated. In particular, the following question is discussed: given the side information hypergraph of an index coding scheme and the maximal number of erroneous symbols δ, what is the shortest length of a linear index code such that every receiver is able to recover the required information? This question turns out to be a generalization of the problem of finding a shortest length error-correcting code with a prescribed error-correcting capability in the classical coding theory. The Singleton bound and two other bounds, referred to as the α-bound and the κ-bound, for the optimal length of a linear error-correcting index code (ECIC) are established. For large alphabets, a construction based on concatenation of an optimal index code with a maximum distance separable classical code is shown to attain the Singleton bound. For smaller alphabets, however, this construction may not be optimal. A random construction is also analyzed. It yields another inexplicit bound on the length of an optimal linear ECIC. Further, the problem of error-correcting decoding by a linear ECIC is studied. It is shown that in order to decode correctly the desired symbol, the decoder is required to find one of the vectors belonging to an affine space containing the actual error vector. The syndrome decoding is shown to produce the correct output if the weight of the error pattern is less than or equal to the error-correcting capability of the corresponding ECIC. Finally, the notion of static ECIC, which is suitable for use with a family of instances of an index coding problem, is introduced. Several bounds on the length of static ECICs are derived, and constructions for static ECICs are discussed.
Connections of these codes to weakly resilient Boolean functions are established.

Journal ArticleDOI
TL;DR: In this article, Gabidulin codes, the rank-metric equivalents of Reed-Solomon codes, are studied; three bounds on the list size of rank-metric codes are derived, showing in particular that no polynomial-time list decoding beyond the Johnson radius exists for these codes.
Abstract: So far, there is no polynomial-time list decoding algorithm (beyond half the minimum distance) for Gabidulin codes. These codes can be seen as the rank-metric equivalent of Reed-Solomon codes. In this paper, we provide bounds on the list size of rank-metric codes in order to understand whether polynomial-time list decoding is possible or whether it works only with exponential time complexity. Three bounds on the list size are proven. The first one is a lower exponential bound for Gabidulin codes and shows that for these codes no polynomial-time list decoding beyond the Johnson radius exists. Second, an exponential upper bound is derived, which holds for any rank-metric code of length n and minimum rank distance d. The third bound proves that there exists a rank-metric code over GF(q^m) of length n ≤ m such that the list size is exponential in the length for any radius greater than half the minimum rank distance. This implies that there cannot exist a polynomial upper bound depending only on n and d similar to the Johnson bound in Hamming metric. All three rank-metric bounds reveal significant differences to bounds for codes in Hamming metric.

Journal ArticleDOI
TL;DR: A new family of channel codes, called ISI-free codes, are introduced, which improve the communication reliability while keeping the decoding complexity fairly low in the diffusion environment modeled by the Brownian motion.
Abstract: Molecular communications emerges as a promising scheme for communications between nanoscale devices. In diffusion-based molecular communications, molecules as information symbols diffusing in the fluid environments suffer from molecule crossovers, i.e., the arriving order of molecules is different from their transmission order, leading to intersymbol interference (ISI). In this paper, we introduce a new family of channel codes, called ISI-free codes, which improve the communication reliability while keeping the decoding complexity fairly low in the diffusion environment modeled by the Brownian motion. We propose general encoding/decoding schemes for the ISI-free codes, working upon the modulation schemes of transmitting a fixed number of identical molecules at a time. In addition, the bit error rate (BER) approximation function of the ISI-free codes is derived mathematically as an analytical tool to decide key factors in the BER performance. Compared with the uncoded systems, the proposed ISI-free codes offer good performance with reasonably low complexity for diffusion-based molecular communication systems.

Journal ArticleDOI
TL;DR: A new ECC scheme is introduced that provides not only the basic SEC-DED coverage but also both DAEC and scalable adjacent error detection (xAED) with a reduction in miscorrection probability as well.
Abstract: The reliability concern associated with radiation-induced soft errors in embedded memories increases as semiconductor technology scales deep into the sub-40-nm regime. As the memory bit-cell area is reduced, single event upsets (SEUs) that would have once corrupted only a single bit-cell are now capable of upsetting multiple adjacent memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the commonly used single error correction double error detection (SEC-DED) error correction codes (ECCs) in embedded memories, the overhead associated with moving to more sophisticated double error correction (DEC) codes is considered to be too costly. To address this, designers have begun leveraging selective bit placement to design SEC-DED codes capable of double adjacent error correction (DAEC) or triple adjacent error detection (TAED). These codes can be implemented for the same check-bit overhead as the conventional SEC-DED codes; however, no codes have been developed that use both DAEC and TAED together. In this paper, a new ECC scheme is introduced that provides not only the basic SEC-DED coverage but also both DAEC and scalable adjacent error detection (xAED) with a reduction in miscorrection probability as well. Codes capable of up to 11-bit AED have been developed for both 16- and 32-bit standard memory word sizes, and a (39, 32) SEC-DED-DAEC-TAED code implementation that uses the same number of check-bits as a conventional 32-data-bit SEC-DED code is presented.
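As background for the SEC-DED baseline discussed in the abstract, the mechanism can be sketched with an extended Hamming (8,4) code: the syndrome locates a single error and the overall parity bit exposes double errors. The paper's SEC-DED-DAEC-TAED codes additionally constrain the H-matrix columns so adjacent double errors become correctable; that part is not shown here, and the helper names are mine:

```python
# Baseline SEC-DED sketch on an extended Hamming (8,4) code.

def encode(d):
    """Encode 4 data bits into an 8-bit SEC-DED word."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    code = [p1, p2, d1, p3, d2, d3, d4]
    overall = 0
    for b in code:
        overall ^= b
    return code + [overall]    # extended (overall) parity bit

def decode(r):
    """Return (status, 7-bit word); status is 'ok', 'corrected',
    or 'double' (detected but uncorrectable)."""
    syndrome = 0
    for i, b in enumerate(r[:7], start=1):
        if b:
            syndrome ^= i      # H columns are the binary position indices
    parity = 0
    for b in r:
        parity ^= b
    if syndrome == 0 and parity == 0:
        return 'ok', r[:7]
    if parity == 1:            # odd error count: correct a single error
        word = list(r[:7])
        if syndrome:
            word[syndrome - 1] ^= 1
        return 'corrected', word
    return 'double', r[:7]     # even parity, nonzero syndrome
```

A double error leaves the overall parity even but the syndrome nonzero, which is exactly the uncorrectable case the scheme must flag rather than miscorrect.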

Journal ArticleDOI
TL;DR: In this paper, a simple linear-algebra-based analysis of folded Reed-Solomon (RS) codes is presented, which eliminates the need for the computationally expensive root-finding step over extension fields.
Abstract: Folded Reed-Solomon (RS) codes are an explicit family of codes that achieve the optimal tradeoff between rate and list error-correction capability: specifically, for any ε > 0, Guruswami and Rudra presented an n^{O(1/ε)} time algorithm to list decode appropriate folded RS codes of rate R from a fraction 1 - R - ε of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices for a statement of the above form. Here, we give a simple linear-algebra-based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list-decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in quadratic time. We also consider a closely related family of codes, called (order m) derivative codes and defined over fields of large characteristic, which consist of the evaluations of f as well as its first m-1 formal derivatives at N distinct field elements. We show how our linear-algebraic methods for folded RS codes can be used to show that derivative codes can also achieve the above optimal tradeoff. The theoretical drawback of our analysis for folded RS codes and derivative codes is that both the decoding complexity and proven worst-case list-size bound are n^{Ω(1/ε)}. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list-size bound of O(1/ε^2) which is quite close to the existential O(1/ε) bound (however, the decoding complexity remains n^{Ω(1/ε)}).
Our work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces (an interesting problem in pseudorandomness in its own right) could lead to explicit codes with better list-decoding guarantees.

Journal ArticleDOI
TL;DR: This work shows a number of ways in which conventional error-correcting codes can be modified to correct errors in the Kendall space and presents several general constructions of codes in permutations that cover a broad range of code parameters.
Abstract: Rank modulation is a way of encoding information to correct errors in flash memory devices as well as impulse noise in transmission lines. Modeling rank modulation involves construction of packings of the space of permutations equipped with the Kendall tau distance. As our main set of results, we present several general constructions of codes in permutations that cover a broad range of code parameters. In particular, we show a number of ways in which conventional error-correcting codes can be modified to correct errors in the Kendall space. Our constructions are nonasymptotic and afford simple encoding and decoding algorithms of essentially the same complexity as required to correct errors in the Hamming metric. As an example, from binary Bose-Chaudhuri-Hocquenghem codes, we obtain codes correcting t Kendall errors in n memory cells that support the order of n!/(log_2 n!)^t messages, for any constant t = 1, 2, .... We give many examples of rank modulation codes with specific parameters. Turning to asymptotic analysis, we construct families of rank modulation codes that correct a number of errors that grows with n at varying rates, from Θ(n) to Θ(n^2). One of our constructions gives rise to a family of rank modulation codes for which the tradeoff between the number of messages and the number of correctable Kendall errors approaches the optimal scaling rate.
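The Kendall tau distance underlying these codes counts the pairs ranked in opposite order by two permutations, which equals the minimum number of adjacent transpositions turning one into the other. A small illustrative implementation (O(n^2); the function name is mine):

```python
from itertools import combinations

# Kendall tau distance between two permutations of the same elements:
# the number of pairwise inversions of the relative ranking.

def kendall_tau(p, q):
    pos_q = {v: i for i, v in enumerate(q)}
    r = [pos_q[v] for v in p]          # p expressed in q's ordering
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

assert kendall_tau([2, 1, 3], [1, 2, 3]) == 1   # one adjacent swap apart
assert kendall_tau([3, 2, 1], [1, 2, 3]) == 3   # full reversal
```

A code correcting t Kendall errors is then a set of permutations with pairwise distance at least 2t + 1 under this metric.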

Journal ArticleDOI
TL;DR: Euclidean Geometry (EG) LDPC codes are constructed deterministically using the points and lines of a Euclidean geometry; their minimum distances are reasonably good and can be derived analytically.
Abstract: In this paper, we focus on a class of LDPC codes known as Euclidean Geometry (EG) LDPC codes, which are constructed deterministically using the points and lines of a Euclidean geometry. Minimum distances for EG codes are reasonably good and can be derived analytically. A memory error correction code has been implemented using a pipelined cyclic corrector in which a majority-logic gate determines the error. LDPC soft-decision decoding is also implemented for the same memory, and the error detection and correction results of the two approaches are compared. Since the majority gate can detect only up to two errors, extending it with LDPC soft decoding can decrease the bit error rate.

Journal ArticleDOI
TL;DR: It turns out that the proposed cyclic codes have five nonzero weights.
Abstract: Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, a family of p-ary cyclic codes whose duals have three pairwise nonconjugate zeros is proposed. The weight distribution of this family of cyclic codes is determined. It turns out that the proposed cyclic codes have five nonzero weights.

Journal ArticleDOI
TL;DR: Seven classes of three-weight cyclic codes over GF(p) whose duals have two zeros are presented, where p is an odd prime, and the weight distributions of the seven classes of cyclic codes are settled.
Abstract: Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms, compared with linear block codes. In this paper, seven classes of three-weight cyclic codes over GF(p) whose duals have two zeros are presented, where p is an odd prime. The weight distributions of the seven classes of cyclic codes are settled. Some of the cyclic codes are optimal in the sense that they meet certain bounds on linear codes. The application of these cyclic codes in secret sharing is also considered.

Journal ArticleDOI
TL;DR: This work considers rank modulation codes for flash memories that allow for handling arbitrary charge-drop errors and highlights the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations, and permutation codes in the Hamming distance.
Abstract: We consider rank modulation codes for flash memories that allow for handling arbitrary charge-drop errors. Unlike classical rank modulation codes used for correcting errors that manifest themselves as swaps of two adjacently ranked elements, the proposed translocation rank codes account for more general forms of errors that arise in storage systems. Translocations represent a natural extension of the notion of adjacent transpositions and as such may be analyzed using related concepts in combinatorics and rank modulation coding. Our results include derivation of the asymptotic capacity of translocation rank codes, construction techniques for asymptotically good codes, as well as simple decoding methods for one class of constructed codes. As part of our exposition, we also highlight the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations, and permutation codes in the Hamming distance.

Journal ArticleDOI
TL;DR: In this article, a Singleton-type bound on symbol-pair codes was established and infinite families of optimal symbol-pair codes were constructed, which are maximum distance separable (MDS) in the sense that they meet the Singleton-type bound.
Abstract: We study (symbol-pair) codes for symbol-pair read channels introduced recently by Cassuto and Blaum (2010). A Singleton-type bound on symbol-pair codes is established and infinite families of optimal symbol-pair codes are constructed. These codes are maximum distance separable (MDS) in the sense that they meet the Singleton-type bound. In contrast to classical codes, where all known q-ary MDS codes have length O(q), we show that q-ary MDS symbol-pair codes can have length Ω(q²). In addition, we completely determine the existence of MDS symbol-pair codes for certain parameters.
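In the symbol-pair read channel, each read returns a pair of adjacent symbols, and the pair distance between two words is the Hamming distance between their pair-read vectors. A small sketch under the usual cyclic-pair convention of Cassuto and Blaum (the function names are illustrative):

```python
def pair_read(x):
    """Pair-read vector: overlapping cyclic pairs of consecutive symbols."""
    n = len(x)
    return [(x[i], x[(i + 1) % n]) for i in range(n)]

def pair_distance(x, y):
    """Number of positions where the pair reads of x and y differ."""
    return sum(a != b for a, b in zip(pair_read(x), pair_read(y)))

x = [0, 0, 0, 0]
y = [0, 1, 0, 0]
print(pair_distance(x, y))  # the pairs at positions 0 and 1 differ -> 2
```

The example shows why pair distance exceeds Hamming distance: a single symbol error perturbs two consecutive pair reads, which is what makes a longer MDS length achievable in the pair metric.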

Proceedings ArticleDOI
14 Apr 2013
TL;DR: In this article, a simple yet effective class of low-delay error correction codes for streaming recovery over packet-erasure channels that introduce both burst erasures and isolated erasures is proposed.
Abstract: We study low-delay error correction codes for streaming recovery over a class of packet-erasure channels that introduce both burst erasures and isolated erasures. We propose a simple, yet effective class of codes whose parameters can be tuned to obtain a tradeoff between the capability to correct burst and isolated erasures. Our construction generalizes previously proposed low-delay codes which are effective only against burst erasures. We establish an information theoretic upper bound on the capability of any code to simultaneously correct burst and isolated erasures and show that our proposed constructions meet the upper bound in some special cases. We discuss the operational significance of column-distance and column-span metrics and establish that the rate 1/2 codes discovered by Martinian and Sundberg [IT Trans. 2004] through a computer search indeed attain the optimal column-distance and column-span tradeoff. Numerical simulations over a Gilbert-Elliott channel model and a Fritchman model show significant performance gains over previously proposed low-delay codes and random linear codes for a certain range of channel parameters.
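The Gilbert-Elliott model used in the simulations is a two-state Markov chain in which a "bad" state produces bursts of packet erasures and a "good" state produces (at most) isolated ones. A minimal simulator sketch; all parameter names and default values here are my own choices, not the paper's:

```python
import random

def gilbert_elliott(n, p_gb, p_bg, e_good=0.0, e_bad=1.0, seed=0):
    """Simulate per-packet erasures from a two-state Gilbert-Elliott model.
    p_gb: P(good -> bad); p_bg: P(bad -> good);
    e_good / e_bad: erasure probability in each state."""
    rng = random.Random(seed)
    state = 'good'
    erasures = []
    for _ in range(n):
        e = e_good if state == 'good' else e_bad
        erasures.append(rng.random() < e)
        # Markov state transition after each packet
        if state == 'good' and rng.random() < p_gb:
            state = 'bad'
        elif state == 'bad' and rng.random() < p_bg:
            state = 'good'
    return erasures

pattern = gilbert_elliott(20, p_gb=0.1, p_bg=0.5)
print(''.join('X' if e else '.' for e in pattern))  # X = erased packet
```

With e_bad close to 1 and a small p_bg, erasures cluster into bursts, which is exactly the regime where the tunable burst/isolated tradeoff of the proposed codes matters.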

Journal ArticleDOI
TL;DR: It is shown that the coding scheme achieves the capacity region of noiseless WOMs when an arbitrary number of multiple writes is permitted and the results can be generalized from binary to generalized WOMs, described by an arbitrary directed acyclic graph.
Abstract: A coding scheme for write once memory (WOM) using polar codes is presented. It is shown that the scheme achieves the capacity region of noiseless WOMs when an arbitrary number of multiple writes is permitted. The encoding and decoding complexities scale as O(N log N), where N is the blocklength. For N sufficiently large, the error probability decreases subexponentially in N. The results can be generalized from binary to generalized WOMs, described by an arbitrary directed acyclic graph, using nonbinary polar codes. In the derivation, we also obtain results on the typical distortion of polar codes for lossy source coding. Some simulation results with finite length codes are presented.
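A write-once memory permits only irreversible 0 → 1 cell transitions between block erases, and this is the constraint the polar coding scheme must respect on every write. A one-line validity check makes the model concrete (names are illustrative, not from the paper):

```python
def wom_write_ok(before, after):
    """A binary WOM allows only 0 -> 1 transitions: a cell once
    programmed cannot be reset until a full block erase."""
    return all(b <= a for b, a in zip(before, after))

print(wom_write_ok([0, 1, 0, 0], [1, 1, 0, 1]))  # True: only 0->1 changes
print(wom_write_ok([0, 1, 0, 0], [0, 0, 1, 0]))  # False: a 1 was reset to 0
```

The coding problem is to choose, on each write, a codeword representing the new message that dominates the current cell state in this partial order; the capacity region referenced in the abstract bounds how much total information t such writes can store.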

Journal ArticleDOI
TL;DR: A girth-maximizing algorithm is presented that optimizes the degrees of freedom within the family of codes to yield a high-girth HQC LDPC code, subject to bounds imposed by the fact that HQC codes are still quasi-cyclic.
Abstract: We present an approach to designing capacity-approaching high-girth low-density parity-check (LDPC) codes that are friendly to hardware implementation, and compatible with some desired input code structure defined using a protograph. The approach is based on a mapping of any class of codes defined using a protograph into a family of hierarchical quasi-cyclic (HQC) LDPC codes. Whereas the parity check matrices of standard quasi-cyclic (QC) LDPC codes are composed of circulant submatrices, those of HQC LDPC codes are composed of a hierarchy of circulant submatrices that are, in turn, constructed from circulant submatrices, and so on, through some number of levels. Next, we present a girth-maximizing algorithm that optimizes the degrees of freedom within the family of codes to yield a high-girth HQC LDPC code, subject to bounds imposed by the fact that HQC codes are still quasi-cyclic. Finally, we discuss how certain characteristics of a code protograph will lead to inevitable short cycles and show that these short cycles can be eliminated using a "squashing" procedure that results in a high-girth QC LDPC code, although not a hierarchical one. We illustrate our approach with three design examples of QC LDPC codes: two girth-10 codes of rates 1/3 and 0.45, and one girth-8 code of rate 0.7, all of which are obtained from protographs of one-sided spatially coupled codes.
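The basic building block of any QC-LDPC parity-check matrix, hierarchical or not, is the circulant: a p×p identity matrix cyclically shifted by a given exponent, with −1 conventionally marking an all-zero block. A sketch of expanding an exponent ("base") matrix into the full binary parity-check matrix, using a made-up 2×3 exponent matrix for illustration:

```python
def circulant(p, shift):
    """p x p identity cyclically shifted right by `shift` columns;
    shift == -1 denotes the all-zero block (a common QC-LDPC convention)."""
    if shift < 0:
        return [[0] * p for _ in range(p)]
    return [[1 if c == (r + shift) % p else 0 for c in range(p)]
            for r in range(p)]

def qc_parity_check(exponents, p):
    """Expand a matrix of shift exponents into the full QC-LDPC
    parity-check matrix built from circulant blocks."""
    H = []
    for row in exponents:
        blocks = [circulant(p, s) for s in row]
        for r in range(p):
            # concatenate row r of each block side by side
            H.append([b[r][c] for b in blocks for c in range(p)])
    return H

# Toy 2x3 exponent matrix, illustrative only
E = [[0, 1, -1],
     [2, 0, 1]]
for row in qc_parity_check(E, 3):
    print(row)
```

In an HQC code the entries of E would themselves be expanded recursively through further levels of circulants; the flat expansion above is the one-level (standard QC) case.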

Proceedings ArticleDOI
07 Jul 2013
TL;DR: The results prove that the best asymptotic minimum distance of LDPC surface codes and color codes with non-zero rate is logarithmic in the length.
Abstract: The family of hyperbolic surface codes is one of the rare families of quantum LDPC codes with non-zero rate and unbounded minimum distance. First, we introduce a family of hyperbolic color codes. This produces a new family of quantum LDPC codes with non-zero rate and with minimum distance logarithmic in the blocklength. Second, we show that the parameters [[n, k, d]] of surface codes and color codes satisfy kd² ≤ C(log k)²n, where C is a constant that depends only on the row weight of the parity-check matrix. Our results prove that the best asymptotic minimum distance of LDPC surface codes and color codes with non-zero rate is logarithmic in the length.

Journal ArticleDOI
TL;DR: The trapping sets of the asymptotically good protograph-based LDPC convolutional codes considered earlier are studied and it is shown that the size of the smallest non-empty trapping set grows linearly with the constraint length for these ensembles.
Abstract: Low-density parity-check (LDPC) convolutional codes have been shown to be capable of achieving capacity-approaching performance with iterative message-passing decoding. In the first part of this paper, using asymptotic methods to obtain lower bounds on the free distance to constraint length ratio, we show that several ensembles of regular and irregular LDPC convolutional codes derived from protograph-based LDPC block codes have the property that the free distance grows linearly with respect to the constraint length, i.e., the ensembles are asymptotically good. In particular, we show that the free distance to constraint length ratio of the LDPC convolutional code ensembles exceeds the minimum distance to block length ratio of the corresponding LDPC block code ensembles. A large free distance growth rate indicates that codes drawn from the ensemble should perform well at high signal-to-noise ratios under maximum-likelihood decoding. When suboptimal decoding methods are employed, there are many factors that affect the performance of a code. Recently, it has been shown that so-called trapping sets are a significant factor affecting decoding failures of LDPC codes over the additive white Gaussian noise channel with iterative message-passing decoding. In the second part of this paper, we study the trapping sets of the asymptotically good protograph-based LDPC convolutional codes considered earlier. By extending the theory presented in part one and using similar bounding techniques, we show that the size of the smallest non-empty trapping set grows linearly with the constraint length for these ensembles.