
Showing papers on "Sequential decoding published in 2011"


Proceedings ArticleDOI
03 Oct 2011
TL;DR: It appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes; an efficient, numerically stable implementation is devised, taking only O(L · n log n) time and O(L · n) space.
Abstract: We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n²) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.
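The path-doubling and pruning step is easy to sketch. The following toy Python is not the paper's implementation: it assumes, for brevity, a single scalar LLR per bit shared by all paths (in the real decoder each path computes its own LLRs through the polar recursions) and uses the common metric that penalizes a path by |LLR| whenever its decision disagrees with the sign of the LLR.

```python
import heapq

def extend_and_prune(paths, llr, L):
    """One list-decoding step at an information bit.

    `paths` holds (metric, bits) pairs; higher metric = more likely.
    Every path is doubled with a 0- and a 1-extension, and a path is
    penalized by |llr| when its decision disagrees with the LLR's sign
    (llr > 0 favors bit 0).  Only the L best paths survive.
    """
    doubled = []
    for metric, bits in paths:
        for u in (0, 1):
            penalty = abs(llr) if (llr > 0) == (u == 1) else 0.0
            doubled.append((metric - penalty, bits + [u]))
    return heapq.nlargest(L, doubled, key=lambda p: p[0])

# Toy run over three information bits whose LLRs favor 0, 1, 0.
paths = [(0.0, [])]
for llr in (2.5, -1.0, 0.5):
    paths = extend_and_prune(paths, llr, L=4)
print(paths[0])  # -> (0.0, [0, 1, 0]): the most likely path survives
```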

1,338 citations


Journal ArticleDOI
TL;DR: It is shown that encoding and decoding operations can both be used to investigate some of the most common questions about how information is represented in the brain, and a systematic modeling approach is proposed that begins by estimating an encoding model for every voxel in a scan and ends by using the estimated encoding models to perform decoding.

785 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: It is shown that successive cancellation decoding can be implemented in the logarithmic domain, thereby eliminating the multiplication and division operations and greatly reducing the complexity of each processing element.
Abstract: The recently-discovered polar codes are widely seen as a major breakthrough in coding theory. These codes achieve the capacity of many important channels under successive cancellation decoding. Motivated by the rapid progress in the theory of polar codes, we propose a family of architectures for efficient hardware implementation of successive cancellation decoders. We show that such decoders can be implemented with O(n) processing elements and O(n) memory elements, while providing constant throughput. We also propose a technique for overlapping the decoding of several consecutive codewords, thereby achieving a significant speed-up factor. We furthermore show that successive cancellation decoding can be implemented in the logarithmic domain, thereby eliminating the multiplication and division operations and greatly reducing the complexity of each processing element.
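The log-domain simplification described above boils down to two processing-element updates. A minimal sketch follows, using the standard min-sum approximation of the 'f' function (the exact update uses hyperbolic tangents, which the approximation removes along with all multiplications and divisions):

```python
import math

def f(a, b):
    """Min-sum approximation of the log-domain 'f' update:
    sign(a) * sign(b) * min(|a|, |b|) in place of
    2 * atanh(tanh(a / 2) * tanh(b / 2))."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):
    """Log-domain 'g' update: an add or subtract selected by the
    already-decoded partial-sum bit u."""
    return b + (1 - 2 * u) * a

print(f(1.5, -0.7))     # -> -0.7
print(g(1.5, -0.7, 0))  # -> 0.8
print(g(1.5, -0.7, 1))  # -> -2.2
```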

246 citations


Journal ArticleDOI
TL;DR: A method for Minimum Bayes Risk (MBR) decoding for speech recognition that has similar functionality to the widely used Consensus method, but has a clearer theoretical basis and appears to give better results both for MBR decoding and system combination.

167 citations


Journal ArticleDOI
TL;DR: A new stochastic decoding algorithm, called Delayed Stochastic (DS) decoding, is introduced to implement low-density parity-check (LDPC) decoders, suitable for fully-parallel implementation of long LDPC codes with applications in optical communications.
Abstract: A new stochastic decoding algorithm, called Delayed Stochastic (DS) decoding, is introduced to implement low-density parity-check (LDPC) decoders. The delayed stochastic decoding uses an alternative method to track probability values, which results in reduction of hardware complexity and memory requirement of the stochastic decoders. It is therefore suitable for fully-parallel implementation of long LDPC codes with applications in optical communications. Two decoders are implemented using the DS algorithm for medium (2048, 1723) and long (32768, 26624) LDPC codes. The decoders occupy 3.93 mm² and 56.5 mm² of silicon area using 90-nm CMOS technology and provide maximum core throughputs of 172.4 and 477.7 Gb/s at Eb/N0 = 5.5 and 4.8 dB, respectively.
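For background, stochastic decoders turn probabilities into Bernoulli bit streams so that node computations become single gates. The sketch below shows the generic stochastic primitives (XOR check node, agree-else-hold variable node); it does not implement the DS algorithm's delayed probability tracking, which is the paper's contribution:

```python
import random

def to_stream(p, n, rng):
    """Encode probability p as a length-n Bernoulli bit stream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def check_node(a, b):
    """Stochastic check node: bitwise XOR of the incoming streams."""
    return [x ^ y for x, y in zip(a, b)]

def variable_node(a, b):
    """Stochastic variable node: output the common bit when inputs agree,
    otherwise hold the previous output (the classic agree-else-hold rule)."""
    out, prev = [], 0
    for x, y in zip(a, b):
        prev = x if x == y else prev
        out.append(prev)
    return out

rng = random.Random(1)
a, b = to_stream(0.9, 10000, rng), to_stream(0.8, 10000, rng)
# Stream mean approximates p_a*p_b / (p_a*p_b + (1-p_a)*(1-p_b)) ~= 0.973.
print(sum(variable_node(a, b)) / 10000)
```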

142 citations


Book ChapterDOI
04 Dec 2011
TL;DR: A new algorithm for decoding linear codes, inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset-sum algorithms, is presented; it offers a rigorous complexity analysis for random linear codes and brings the time complexity down.
Abstract: Decoding random linear codes is a fundamental problem in complexity theory and lies at the heart of almost all code-based cryptography. The best attacks on the most prominent code-based cryptosystems such as McEliece directly use decoding algorithms for linear codes. The asymptotically best decoding algorithm for random linear codes of length n was for a long time Stern's variant of information-set decoding running in time $\tilde{\mathcal{O}}\left(2^{0.05563n}\right)$. Recently, Bernstein, Lange and Peters proposed a new technique called Ball-collision decoding which offers a speed-up over Stern's algorithm by improving the running time to $\tilde{\mathcal{O}}\left(2^{0.05558n}\right)$. In this paper, we present a new algorithm for decoding linear codes that is inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset sum algorithms. Our decoding algorithm offers a rigorous complexity analysis for random linear codes and brings the time complexity down to $\tilde{\mathcal{O}}\left(2^{0.05363n}\right)$.
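The paper's algorithm refines a long line of information-set decoding (ISD) methods. For orientation only, here is the simplest ISD variant (Prange's), not the representation-technique decoder itself: repeatedly guess an information set, Gaussian-eliminate over GF(2), and test whether the implied error is light enough.

```python
import numpy as np

def isd_iteration(H, s, rng):
    """One Prange iteration: permute columns, reduce the first r columns of
    H to the identity, and read off an error candidate supported there.
    Returns None when the chosen columns are singular."""
    r, n = H.shape
    cols = rng.permutation(n)
    A, b = H[:, cols] % 2, s % 2
    for i in range(r):
        pivot = next((j for j in range(i, r) if A[j, i]), None)
        if pivot is None:
            return None
        A[[i, pivot]], b[[i, pivot]] = A[[pivot, i]], b[[pivot, i]]
        for j in range(r):
            if j != i and A[j, i]:
                A[j] ^= A[i]
                b[j] ^= b[i]
    e = np.zeros(n, dtype=int)
    e[cols[:r]] = b  # error confined to the eliminated columns
    return e

def prange_decode(H, s, w, max_iter=10_000, seed=0):
    """Retry random information sets until a weight-<= w error matches s."""
    rng = np.random.default_rng(seed)
    for _ in range(max_iter):
        e = isd_iteration(H, s, rng)
        if e is not None and e.sum() <= w and ((H @ e) % 2 == s).all():
            return e
    return None

# [7,4] Hamming code: recover a single-bit error from its syndrome.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
e_true = np.array([0, 0, 0, 0, 1, 0, 0])
print(prange_decode(H, (H @ e_true) % 2, w=1))
```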

123 citations


Book ChapterDOI
29 Nov 2011
TL;DR: In this article, the authors consider the possibility that an attacker has access to many cryptograms and is satisfied by decrypting (i.e. decoding) only one of them, and they show that, for the parameter range corresponding to the McEliece encryption scheme, a variant of Stern's collision decoding can be adapted to gain a factor almost $\sqrt{N}$ when N instances are given.
Abstract: Generic decoding of linear codes is the best known attack against most code-based cryptosystems. Understanding and measuring the complexity of the best decoding techniques is thus necessary to select secure parameters. We consider here the possibility that an attacker has access to many cryptograms and is satisfied by decrypting (i.e. decoding) only one of them. We show that, for the parameter range corresponding to the McEliece encryption scheme, a variant of Stern's collision decoding can be adapted to gain a factor almost $\sqrt{N}$ when N instances are given. If the attacker has access to an unlimited number of instances, we show that the attack complexity is significantly lower, in fact the number of security bits is divided by a number slightly smaller than 3/2 (but larger than 1). Finally we give indications on how to counter those attacks.

110 citations


Proceedings ArticleDOI
08 Jun 2011
TL;DR: This work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces -- an interesting problem in pseudorandomness in its own right -- could lead to explicit codes with better list-decoding guarantees.
Abstract: Folded Reed-Solomon codes are an explicit family of codes that achieve the optimal trade-off between rate and error-correction capability: specifically, for any ε > 0, the author and Rudra (2006, 08) presented an n^{O(1/ε)} time algorithm to list decode appropriate folded RS codes of rate R from a fraction 1 − R − ε of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices if one settles for a smaller decoding radius (but still enough for a statement of the above form). Here we give a simple linear-algebra based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in quadratic time. The theoretical drawback of folded RS codes is that both the decoding complexity and proven worst-case list-size bound are n^{Ω(1/ε)}. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list size bound of O(1/ε²), which is quite close to the existential O(1/ε) bound (however, the decoding complexity remains n^{Ω(1/ε)}). Our work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces -- an interesting problem in pseudorandomness in its own right -- could lead to explicit codes with better list-decoding guarantees.

86 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: It is verified theoretically for certain cases and demonstrated numerically for the general cases that BATS codes achieve rates very close to the capacity of linear operator channels.
Abstract: Batched sparse (BATS) codes are proposed for transmitting a collection of packets through communication networks employing linear network coding. BATS codes generalize fountain codes and preserve properties such as ratelessness and low encoding/decoding complexity. Moreover, the buffer size and the computation capability of the intermediate network nodes required to apply BATS codes are independent of the number of packets for transmission. It is verified theoretically for certain cases and demonstrated numerically for the general cases that BATS codes achieve rates very close to the capacity of linear operator channels.
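A toy of the batch-encoding step, assuming GF(2) arithmetic and a fixed batch degree for readability (practical BATS codes typically use a larger field and a carefully optimized degree distribution):

```python
import numpy as np

def encode_batch(packets, batch_size, degree, rng):
    """Form one BATS batch: pick `degree` source packets and emit
    `batch_size` random GF(2) linear combinations of them.  Intermediate
    nodes may further mix packets within a batch (linear network coding)
    without growing the decoding problem."""
    subset = rng.choice(len(packets), size=degree, replace=False)
    G = rng.integers(0, 2, size=(degree, batch_size))  # batch generator
    return subset, G, (G.T @ packets[subset]) % 2

rng = np.random.default_rng(0)
packets = rng.integers(0, 2, size=(16, 64))  # 16 source packets, 64 bits each
subset, G, batch = encode_batch(packets, batch_size=4, degree=3, rng=rng)
print(subset, batch.shape)  # chosen packets and a (4, 64) batch
```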

82 citations


01 Jan 2011
TL;DR: It is shown that the performance of modified product codes exhibits a threshold that can be estimated from a result about random graphs, and that the performance curve can be extrapolated until the error floor is reached.
Abstract: Several modifications of product codes have been suggested as standards for optical networks. We show that the performance exhibits a threshold that can be estimated from a result about random graphs. For moderate input bit error probabilities, the output error rates for codes of finite length can be found by easy simulations. The analysis indicates that the performance curve can be extrapolated until the error floor is reached. The analysis allows us to calculate the error floors and avoid time-consuming simulations. Index Terms—Product codes, iterative decoding, optical networks.

79 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: In this paper, message passing schedules that reduce the decoding complexity of terminated LDPC convolutional code ensembles are compared by means of density evolution, and the results of the analysis together with computer simulations for some (3,6)-regular codes confirm that sliding window decoding is an attractive practical solution for low-latency and low-complexity decoding.
Abstract: Message passing schedules that reduce the decoding complexity of terminated LDPC convolutional code ensembles are analyzed. Considering the AWGN channel, various schedules are compared by means of density evolution. The results of the analysis together with computer simulations for some (3,6)-regular codes confirm that sliding window decoding is an attractive practical solution for low-latency and low-complexity decoding.

Patent
17 Mar 2011
TL;DR: In this article, low-density parity-check (LDPC) codes are proposed to provide error correction at rates approaching the link channel capacity and reliable and efficient information transfer over bandwidth or return channel constrained links with data-corrupting noise present.
Abstract: Low-Density Parity-Check (LDPC) codes offer error correction at rates approaching the link channel capacity and reliable and efficient information transfer over bandwidth or return-channel constrained links with data-corrupting noise present. They also offer performance approaching channel capacity exponentially fast in terms of the code length, linear processing complexity, and parallelism that scales with code length. They also offer challenges relating to decoding complexity and error floors limiting achievable bit-error rates. Accordingly, encoders with reduced complexity, reduced power consumption and improved performance are disclosed, with various improvements including: simplifying communications linking multiple processing nodes by passing messages where pulse widths are modulated with the corresponding message magnitude; delaying a check operation in dependence upon variable node states; running the decoder multiple times with different random number generator seeds for a constant channel value set; and employing a second decoder with a randomizing component when the attempt with the first decoder fails.

Journal ArticleDOI
TL;DR: In this paper, a randomized lattice decoding based on Klein's sampling technique is presented, which is a randomized version of Babai's nearest plane algorithm [i.e., successive interference cancellation (SIC)] and samples lattice points from a Gaussian-like distribution over the lattice.
Abstract: Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm [i.e., successive interference cancellation (SIC)] and samples lattice points from a Gaussian-like distribution over the lattice. To find the closest lattice point, Klein's algorithm is used to sample some lattice points and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point, and only needs to be run once during preprocessing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is twofold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive, while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate near-ML performance is achieved by a moderate number of samples, even if the dimension is as high as 32.
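The sampling step can be sketched as a randomized Babai nearest-plane pass. The version below is a simplification (a truncated Gaussian-like rounding window, no optimization of the decoding radius or of the rounding implementation, which are the paper's contributions):

```python
import numpy as np

def randomized_round(c, s, rng):
    """Sample an integer near c with weights exp(-(z - c)^2 / (2 s^2)),
    truncated to a 6-integer window; s -> 0 recovers plain rounding.
    (Assumes s is not vanishingly small, or the weights underflow.)"""
    zs = np.arange(np.floor(c) - 2, np.floor(c) + 4)
    w = np.exp(-((zs - c) ** 2) / (2 * s ** 2))
    return rng.choice(zs, p=w / w.sum())

def klein_sample(B, t, sigma, rng):
    """Randomized nearest-plane: B has basis vectors as rows, t is the
    target; returns a nearby lattice point, different on each call."""
    Q, R = np.linalg.qr(B.T)          # Gram-Schmidt data for the basis
    tp, n = Q.T @ t, B.shape[0]
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):    # back-substitute, randomizing each step
        c = (tp[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
        z[i] = randomized_round(c, sigma / abs(R[i, i]), rng)
    return B.T @ z

def sampling_decode(B, t, sigma, num_samples, seed=0):
    """Draw several samples and keep the lattice point closest to t."""
    rng = np.random.default_rng(seed)
    cands = [klein_sample(B, t, sigma, rng) for _ in range(num_samples)]
    return min(cands, key=lambda x: np.linalg.norm(x - t))

B = np.array([[4.0, 1.0], [1.0, 3.0]])  # toy 2-D basis (rows)
print(sampling_decode(B, np.array([2.3, 3.7]), sigma=1.0, num_samples=20))
```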

Proceedings ArticleDOI
06 Jun 2011
TL;DR: A new family of locally decodable codes, called multiplicity codes, is constructed; these codes have very efficient local decoding algorithms while achieving rate approaching 1, and are based on evaluating high-degree multivariate polynomials and their derivatives.
Abstract: Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k^ε) were known only for codes of rate exp(1/ε), where k is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial locality has been achieved. In this paper we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 - α, and is locally decodable from a constant fraction of errors using O(k^ε) queries and time. The high rate and local decodability are evident even in concrete settings (and not just in asymptotic behavior), giving hope that local decoding techniques may have practical implications. These codes, which we call multiplicity codes, are based on evaluating high degree multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial based codes; they inherit the local-decodability of these codes, and at the same time achieve better tradeoffs and flexibility in their rate and distance.
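A univariate toy conveys the encoding idea, although the paper's codes are multivariate: each codeword position carries a polynomial's value together with its formal derivative, which is the extra information local decoders exploit.

```python
P = 13  # work over the prime field GF(13)

def poly_eval(coeffs, x, p=P):
    """Horner evaluation; coefficients are listed lowest degree first."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def poly_derivative(coeffs, p=P):
    """Formal derivative over GF(p)."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

def encode_multiplicity(coeffs, p=P):
    """Toy order-2 univariate multiplicity code: the symbol at each field
    element a is the pair (f(a), f'(a))."""
    d = poly_derivative(coeffs, p)
    return [(poly_eval(coeffs, a, p), poly_eval(d, a, p)) for a in range(p)]

# A length-4 message becomes a degree-3 polynomial, encoded as 13 pairs.
print(encode_multiplicity([1, 2, 0, 5]))
```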

Posted Content
TL;DR: In this article, the authors studied the power of sequential decoding strategies for several channels with classical input and quantum output and showed that even a conceptually simple strategy can be used to achieve rates up to the mutual information for a single-sender single-receiver channel, called a cq-channel henceforth, as well as the standard inner bound for a two-sender single-receiver multiple access channel, called a ccq-MAC in this paper.
Abstract: In this paper, we study the power of sequential decoding strategies for several channels with classical input and quantum output. In our sequential decoding strategies, the receiver loops through all candidate messages trying to project the received state onto a 'typical' subspace for the candidate message under consideration, stopping if the projection succeeds for a message, which is then declared as the guess of the receiver for the sent message. We show that even such a conceptually simple strategy can be used to achieve rates up to the mutual information for a single-sender single-receiver channel, called a cq-channel henceforth, as well as the standard inner bound for a two-sender single-receiver multiple access channel, called a ccq-MAC in this paper. Our decoding scheme for the ccq-MAC uses a new kind of conditionally typical projector which is constructed using a geometric result about how two subspaces interact structurally. As the main application of our methods, we construct an encoding and decoding scheme achieving the Chong-Motani-Garg inner bound for a two-sender two-receiver interference channel with classical input and quantum output, called a ccqq-IC henceforth. This matches the best known inner bound for the interference channel in the classical setting. Achieving the Chong-Motani-Garg inner bound, which is known to be equivalent to the Han-Kobayashi inner bound, answers an open question raised recently by Fawzi et al. (arXiv:1102.2624). Our encoding scheme is the same as that of Chong-Motani-Garg, and our decoding scheme is sequential.

01 Jun 2011
TL;DR: The effects of error propagation in relay networks are investigated and more suitable distributed coding schemes are presented for soft-reencoding, as the often-used assumption of Gaussian-distributed disturbance at the destination is not valid for the considered system setup.
Abstract: Relays in wireless networks can be used to decrease transmit power while additionally increasing diversity. Distributed turbo coding, as a special case of decode-and-forward, is very powerful in relay networks when assuming error-free decoding in the relay. In practical wireless networks, however, this assumption is only justifiable if an ARQ protocol is applied, which leads to lower throughput. Soft-reencoding and transmission of the reliability of reencoded bits helps the destination to decode the message. Reencoding in the relay with a recursive convolutional code, as used for turbo codes, can lead to error propagation. In this paper the effects of error propagation in relay networks are investigated and more suitable distributed coding schemes are presented for soft-reencoding. As the often-used assumption of Gaussian-distributed disturbance at the destination is not valid for the considered system setup, the calculation of Log-Likelihood-Ratios (LLR) for the received noisy reliability information is derived analytically.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the modified EKE algorithm in list decoding of a GRS code provides low complexity, particularly at high signal-to-noise ratios.
Abstract: This work presents a modified extended key equation algorithm in list decoding of generalized Reed-Solomon (GRS) codes. A list decoding algorithm of generalized Reed-Solomon codes has two steps: interpolation and factorization. The extended key equation algorithm (EKE) is an interpolation-based approach with a lower complexity than Sudan's algorithm. To increase the decoding speed, this work proposes a modified EKE algorithm to perform codeword checking prior to such an interpolation process. Since the evaluation mapping is engaged in encoding, a codeword is not generated systematically. Thus, the transmission information is not directly obtained from a received codeword. Therefore, the proposed algorithm undertakes a matrix operation to obtain the transmission information once a received vector has been checked to be error-free. Simulation results demonstrate that the modified EKE algorithm in list decoding of a GRS code provides low complexity, particularly at high signal-to-noise ratios.
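The check-before-interpolation idea can be illustrated for a plain Reed-Solomon evaluation code (ignoring the GRS column multipliers and the paper's matrix formulation): interpolate through all n received points and test whether the result has degree below k; if it does, the low-order coefficients already are the message.

```python
p, xs, k = 13, list(range(7)), 3   # GF(13), 7 evaluation points, dimension 3

def inv(a):
    return pow(a, p - 2, p)        # inverse via Fermat's little theorem

def mul_linear(poly, a):
    """Multiply poly (lowest degree first) by (x - a) over GF(p)."""
    out = [0] * (len(poly) + 1)
    for m, c in enumerate(poly):
        out[m] = (out[m] - a * c) % p
        out[m + 1] = (out[m + 1] + c) % p
    return out

def interpolate(ys):
    """Lagrange interpolation through (xs[i], ys[i]) over GF(p)."""
    n, coeffs = len(xs), [0] * len(xs)
    for i in range(n):
        basis, denom = [1], 1
        for j in range(n):
            if j != i:
                basis = mul_linear(basis, xs[j])
                denom = denom * (xs[i] - xs[j]) % p
        scale = ys[i] * inv(denom) % p
        for m, c in enumerate(basis):
            coeffs[m] = (coeffs[m] + scale * c) % p
    return coeffs

def check_codeword(received):
    """Return (is_codeword, message): a received word is a codeword exactly
    when its interpolant has degree < k, and then the message is read off
    directly, skipping interpolation-based list decoding entirely."""
    f = interpolate(received)
    return all(c == 0 for c in f[k:]), f[:k]

def horner(coeffs, x):
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

cw = [horner([3, 1, 4], x) for x in xs]
print(check_codeword(cw))          # (True, [3, 1, 4])
cw[2] = (cw[2] + 1) % p
print(check_codeword(cw)[0])       # False -> proceed to list decoding
```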

Book
01 Jan 2011
TL;DR: A textbook treatment of channel coding, covering information-theoretic fundamentals, algebraic block codes, convolutional codes and Viterbi decoding, turbo and LDPC codes, and space-time codes.
Abstract: Preface.
1 Introduction. 1.1 Communication Systems. 1.2 Information Theory. 1.2.1 Entropy. 1.2.2 Channel Capacity. 1.2.3 Binary Symmetric Channel. 1.2.4 AWGN Channel. 1.3 A Simple Channel Code.
2 Algebraic Coding Theory. 2.1 Fundamentals of Block Codes. 2.1.1 Code Parameters. 2.1.2 Maximum Likelihood Decoding. 2.1.3 Binary Symmetric Channel. 2.1.4 Error Detection and Error Correction. 2.2 Linear Block Codes. 2.2.1 Definition of Linear Block Codes. 2.2.2 Generator Matrix. 2.2.3 Parity Check Matrix. 2.2.4 Syndrome and Cosets. 2.2.5 Dual Code. 2.2.6 Bounds for Linear Block Codes. 2.2.7 Code Constructions. 2.2.8 Examples of Linear Block Codes. 2.3 Cyclic Codes. 2.3.1 Definition of Cyclic Codes. 2.3.2 Generator Polynomial. 2.3.3 Parity Check Polynomial. 2.3.4 Dual Codes. 2.3.5 Linear Feedback Shift Registers. 2.3.6 BCH Codes. 2.3.7 Reed-Solomon Codes. 2.3.8 Algebraic Decoding Algorithm. 2.4 Summary.
3 Convolutional Codes. 3.1 Encoding of Convolutional Codes. 3.1.1 Convolutional Encoder. 3.1.2 Generator Matrix in Time-Domain. 3.1.3 State Diagram of a Convolutional Encoder. 3.1.4 Code Termination. 3.1.5 Puncturing. 3.1.6 Generator Matrix in D-Domain. 3.1.7 Encoder Properties. 3.2 Trellis Diagram and Viterbi's Algorithm. 3.2.1 Minimum Distance Decoding. 3.2.2 Trellises. 3.2.3 Viterbi Algorithm. 3.3 Distance Properties and Error Bounds. 3.3.1 Free Distance. 3.3.2 Active Distances. 3.3.3 Weight Enumerators for Terminated Codes. 3.3.4 Path Enumerators. 3.3.5 Pairwise Error Probability. 3.3.6 Viterbi Bound. 3.4 Soft Input Decoding. 3.4.1 Euclidean Metric. 3.4.2 Support of Punctured Codes. 3.4.3 Implementation Issues. 3.5 Soft Output Decoding. 3.5.1 Derivation of APP Decoding. 3.5.2 APP Decoding in the Log-Domain. 3.6 Convolutional Coding in Mobile Communications. 3.6.1 Coding of Speech Data. 3.6.2 Hybrid ARQ. 3.6.3 EGPRS Modulation and Coding. 3.6.4 Retransmission Mechanism. 3.6.5 Link Adaptation. 3.6.6 Incremental Redundancy. 3.7 Summary.
4 Turbo Codes. 4.1 LDPC Codes. 4.1.1 Codes Based on Sparse Graphs. 4.1.2 Decoding for the Binary Erasure Channel. 4.1.3 Log-Likelihood Algebra. 4.1.4 Belief Propagation. 4.2 A First Encounter with Code Concatenation. 4.2.1 Product Codes. 4.2.2 Iterative Decoding of Product Codes. 4.3 Concatenated Convolutional Codes. 4.3.1 Parallel Concatenation. 4.3.2 The UMTS Turbo Code. 4.3.3 Serial Concatenation. 4.3.4 Partial Concatenation. 4.3.5 Turbo Decoding. 4.4 EXIT Charts. 4.4.1 Calculating an EXIT Chart. 4.4.2 Interpretation. 4.5 Weight Distribution. 4.5.1 Partial Weights. 4.5.2 Expected Weight Distribution. 4.6 Woven Convolutional Codes. 4.6.1 Encoding Schemes. 4.6.2 Distance Properties of Woven Codes. 4.6.3 Woven Turbo Codes. 4.6.4 Interleaver Design. 4.7 Summary.
5 Space-Time Codes. 5.1 Introduction. 5.1.1 Digital Modulation Schemes. 5.1.2 Diversity. 5.2 Spatial Channels. 5.2.1 Basic Description. 5.2.2 Spatial Channel Models. 5.2.3 Channel Estimation. 5.3 Performance Measures. 5.3.1 Channel Capacity. 5.3.2 Outage Probability and Outage Capacity. 5.3.3 Ergodic Error Probability. 5.4 Orthogonal Space-Time Block Codes. 5.4.1 Alamouti's Scheme. 5.4.2 Extension to more than two Transmit Antennas. 5.4.3 Simulation Results. 5.5 Spatial Multiplexing. 5.5.1 General Concept. 5.5.2 Iterative APP Preprocessing and Per-Layer Decoding. 5.5.3 Linear Multi-Layer Detection. 5.5.4 Original Bell Labs Layered Space Time (BLAST) Detection. 5.5.5 QL Decomposition and Interference Cancellation. 5.5.6 Performance of Multi-Layer Detection Schemes. 5.5.7 Unified Description by Linear Dispersion Codes. 5.6 Summary.
A. Algebraic Structures. A.1 Groups, Rings and Finite Fields. A.1.1 Groups. A.1.2 Rings. A.1.3 Finite Fields. A.2 Vector Spaces. A.3 Polynomials and Extension Fields. A.4 Discrete Fourier Transform.
B. Linear Algebra. C. Acronyms. Bibliography. Index.

Journal ArticleDOI
TL;DR: Error analysis and simulation results indicate that for the additive white Gaussian noise (AWGN) channel, convolutional lattice codes with computationally reasonable decoders can achieve low error rate close to the channel capacity.
Abstract: The coded modulation scheme proposed in this paper has a simple construction: an integer sequence, representing the information, is convolved with a fixed, continuous-valued, finite impulse response (FIR) filter to generate the codeword - a lattice point. Due to power constraints, the code construction includes a shaping mechanism inspired by precoding techniques such as the Tomlinson-Harashima filter. We naturally term these codes “convolutional lattice codes” or, alternatively, “signal codes” due to the signal processing interpretation of the code construction. Surprisingly, properly chosen short FIR filters can generate good codes with large minimal distance. Decoding can be done efficiently by sequential decoding or, for better performance, by bidirectional sequential decoding. Error analysis and simulation results indicate that for the additive white Gaussian noise (AWGN) channel, convolutional lattice codes with computationally reasonable decoders can achieve low error rates close to the channel capacity.
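The encoder is literally a convolution. A minimal sketch follows, with a modulo-fold standing in for shaping (the paper's Tomlinson-Harashima-style shaping acts inside the encoder on the integer stream; folding the codeword afterwards is only an illustration):

```python
import numpy as np

def encode_signal_code(integers, fir):
    """Convolutional lattice ('signal') code: convolve an integer message
    sequence with a fixed real-valued FIR filter.  The codeword is a point
    of the lattice generated by the filter's Toeplitz matrix."""
    return np.convolve(integers, fir)

def fold(x, M):
    """Illustrative power shaping: fold samples into [-M/2, M/2)."""
    return (x + M / 2) % M - M / 2

msg = np.array([2, -1, 0, 3, 1, -2])
fir = np.array([1.0, 0.7, 0.2])   # short filters can already give good codes
print(fold(encode_signal_code(msg, fir), M=4.0))
```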

Proceedings ArticleDOI
03 Oct 2011
TL;DR: This work considers the decoding of spatially coupled codes through a windowed decoder that aims to retain many of the attractive features of belief propagation while reducing complexity further; performance is characterized by thresholds on channel erasure rates that guarantee a target erasure rate.
Abstract: We study windowed decoding of spatially coupled codes when the transmission occurs over the binary erasure channel. We characterize the performance of this scheme by defining thresholds on channel erasure rates that guarantee a target bit erasure rate. We give analytical lower bounds on these thresholds and show that the performance approaches that of belief propagation exponentially fast in the window size. We give numerical results including the thresholds computed using density evolution and the erasure rate curves for finite-length spatially coupled codes.
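The erasure-channel analysis is simple enough to reproduce in miniature. Below is a hedged density-evolution sketch of windowed decoding for a (dv, dc, w) spatially coupled ensemble; the paper defines window thresholds more carefully, and the constants here (chain length, window size, iteration caps, target rate) are arbitrary illustrative choices:

```python
import numpy as np

def de_step(x, eps, dv, dc, w, active):
    """One BEC density-evolution update of a (dv, dc, w) coupled ensemble,
    applied only at the `active` (windowed) positions; x[i] is the erasure
    probability at spatial position i, with zeros beyond the chain."""
    xp = np.concatenate([np.zeros(w - 1), x, np.zeros(w - 1)])
    new = x.copy()
    for i in active:
        acc = 0.0
        for k in range(w):
            avg = np.mean(xp[i + k : i + k + w])       # check at position i+k
            acc += 1.0 - (1.0 - avg) ** (dc - 1)
        new[i] = eps * (acc / w) ** (dv - 1)
    return new

def windowed_run(eps, L=100, W=12, dv=3, dc=6, w=3,
                 target=1e-6, max_iters=2000):
    """Slide a W-position window along the chain; within each window,
    iterate until the left edge's erasure rate falls below `target`."""
    x = np.full(L, eps)
    for s in range(L):
        for _ in range(max_iters):
            x = de_step(x, eps, dv, dc, w, range(s, min(s + W, L)))
            if x[s] < target:
                break
        if x[s] >= target:
            return False, s        # the window stalled at position s
    return True, L

print(windowed_run(0.45))  # expected to succeed: 0.45 is below the
                           # coupled (3,6) BP threshold (~0.488)
```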

Journal ArticleDOI
TL;DR: It is shown that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate, which implies that no considerably fast decoding algorithm exists for general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Abstract: Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

Journal ArticleDOI
TL;DR: In this article, the min-sum algorithm is incorporated into column-layered decoding and then algorithmic transformations and judicious approximations are explored to minimize the overall computation complexity.
Abstract: Layered decoding is well appreciated in low-density parity-check (LDPC) decoder implementation since it can achieve effectively high decoding throughput with low computation complexity. This work, for the first time, addresses low-complexity column-layered decoding schemes and very-large-scale integration (VLSI) architectures for multi-Gb/s applications. At first, the min-sum algorithm is incorporated into the column-layered decoding. Then algorithmic transformations and judicious approximations are explored to minimise the overall computation complexity. Compared to the original column-layered decoding, the new approach can reduce the computation complexity in check node processing for high-rate LDPC codes by up to 90% while maintaining the fast convergence speed of layered decoding. Furthermore, a relaxed pipelining scheme is presented to enable very high clock speed for VLSI implementation. Equipped with these new techniques, an efficient decoder architecture for quasi-cyclic LDPC codes is developed and implemented with 0.13-µm VLSI technology. It is shown that a decoding throughput of nearly 4 Gb/s at a maximum of 10 iterations can be achieved for a (4096, 3584) LDPC code. Hence, this work has facilitated practical applications of column-layered decoding and particularly made it very attractive in high-speed, high-rate LDPC decoder implementation.
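The min-sum check-node update at the heart of such decoders needs only the smallest and second-smallest input magnitudes, which is what makes it so much cheaper than the sum-product rule; a generic sketch (not the paper's column-layered datapath):

```python
def min_sum_check_update(llrs):
    """For each edge, the outgoing magnitude is the minimum |LLR| over the
    other edges (so: overall min, or second min at the argmin edge) and the
    outgoing sign is the product of the other edges' signs.
    Assumes a check node of degree >= 2."""
    signs = [1 if v >= 0 else -1 for v in llrs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    mags = [abs(v) for v in llrs]
    m1 = min(mags)
    i1 = mags.index(m1)
    m2 = min(mags[:i1] + mags[i1 + 1:])
    return [total_sign * signs[i] * (m2 if i == i1 else m1)
            for i in range(len(llrs))]

print(min_sum_check_update([2.0, -0.5, 1.2, -3.1]))
# -> [0.5, -1.2, 0.5, -0.5]
```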

Journal ArticleDOI
TL;DR: A technique is proposed to break trapping sets while decoding to lower the error-floor, which has moderate complexity overhead and is applicable to any code without requiring a prior knowledge of the structure of its trapping sets.
Abstract: Error-floors are the main reason for excluding LDPC codes from applications requiring very low bit-error rates. They are attributed to a particular structure in the codes' Tanner graphs, known as trapping sets, which traps the message-passing algorithms commonly used to decode LDPC codes and prevents decoding from converging to the correct codeword. A technique is proposed to break trapping sets while decoding. Based on the results of a run leading to a decoding failure, some bits are identified in a previous iteration and flipped, and decoding is restarted. This backtracking may enable the decoder to get out of the trapped state. A semi-analytical method is also proposed to predict the error-floor after backtracking. Simulation results indicate the effectiveness of the proposed technique in lowering the error-floor. The technique, which has moderate complexity overhead, is applicable to any code without requiring a prior knowledge of the structure of its trapping sets.
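In outline, the backtracking loop looks as follows; `decoder` and `suspect_bits` are hypothetical placeholders for a message-passing decoder and for the paper's rule that picks bits from a previous iteration of the failed run:

```python
def decode_with_backtracking(decoder, llrs, suspect_bits, max_restarts=3):
    """Sketch of trapping-set backtracking: on failure, flip the channel
    LLRs of a few suspect bits identified from the failed run, restart,
    and repeat up to `max_restarts` times."""
    for _ in range(max_restarts + 1):
        ok, word, state = decoder(llrs)   # state: e.g. unsatisfied checks
        if ok:
            return word
        for i in suspect_bits(state):
            llrs[i] = -llrs[i]            # force the opposite decision
    return None                           # failure after all restarts
```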

Journal ArticleDOI
TL;DR: An overview of typical LDPC code structures and commonly-used LDPC decoding algorithms is provided, and efficient VLSI architectures for random-like codes and structured LDPC codes are discussed.
Abstract: Low-Density Parity-check (LDPC) codes, being among the most promising near-Shannon-limit error correction codes (ECCs) in practice, have attracted tremendous attention in both academia and industry since their rediscovery in the mid-1990s. Possessing excellent coding gain, LDPC codes also have very low error floors and inherently parallelizable decoding schemes. Compared to other ECCs such as Turbo codes, BCH codes and RS codes, LDPC codes have many more varieties in code construction, which result in various optimum decoding architectures associated with different structures of the parity-check matrix. In this work, we first provide an overview of typical LDPC code structures and commonly-used LDPC decoding algorithms. We then discuss efficient VLSI architectures for random-like codes and structured LDPC codes. We further present layered decoding schemes and corresponding VLSI architectures. Finally we briefly address non-binary LDPC decoding and multi-rate LDPC decoder design.

Proceedings ArticleDOI
15 May 2011
TL;DR: A multi-layer parallel decoding algorithm and VLSI architecture for decoding of structured quasi-cyclic low-density parity-check codes and a double-layer parallel LDPC decoder for the IEEE 802.11n standard are proposed.
Abstract: We propose a multi-layer parallel decoding algorithm and VLSI architecture for decoding of structured quasi-cyclic low-density parity-check codes. In the conventional layered decoding algorithm, the block-rows of the parity check matrix are processed sequentially, or layer after layer. The maximum number of rows that can be simultaneously processed by the conventional layered decoder is limited to the sub-matrix size. To remove this limitation and support layer-level parallelism, we extend the conventional layered decoding algorithm and architecture to enable simultaneous processing of multiple (K) layers of a parity check matrix, which will lead to a roughly K-fold throughput increase. As a case study, we have designed a double-layer parallel LDPC decoder for the IEEE 802.11n standard. The decoder was synthesized for a TSMC 45-nm CMOS technology. With a synthesis area of 0.81 mm² and a maximum clock frequency of 815 MHz, the decoder achieves a maximum throughput of 3.0 Gbps at 15 iterations.

Journal ArticleDOI
TL;DR: For video sources which are spatially stationary memoryless and temporally Gauss-Markov, MSE frame distortions, and a sum-rate constraint, the results expose the optimality of idealized differential predictive coding among all causal sequential coders, when the encoder uses a positive rate to describe each frame.
Abstract: Motivated by video coding applications, the problem of sequential coding of correlated sources with encoding and/or decoding frame-delays is studied. The fundamental tradeoffs between individual frame rates, individual frame distortions, and encoding/decoding frame-delays are derived in terms of a single-letter information-theoretic characterization of the rate-distortion region for general interframe source correlations and certain types of potentially frame specific and coupled single-letter fidelity criteria. The sum-rate-distortion region is characterized in terms of generalized directed information measures highlighting their role in delayed sequential source coding problems. For video sources which are spatially stationary memoryless and temporally Gauss-Markov, MSE frame distortions, and a sum-rate constraint, our results expose the optimality of idealized differential predictive coding among all causal sequential coders, when the encoder uses a positive rate to describe each frame. Somewhat surprisingly, causal sequential encoding with one-frame-delayed noncausal sequential decoding can exactly match the sum-rate-MSE performance of joint coding for all nontrivial MSE-tuples satisfying certain positive semidefiniteness conditions. Thus, even a single frame-delay holds potential for yielding significant performance improvements. Generalizations to higher order Markov sources are also presented and discussed. A rate-distortion performance equivalence between, causal sequential encoding with delayed noncausal sequential decoding, and delayed noncausal sequential encoding with causal sequential decoding, is also established.
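To make the Gauss-Markov claim concrete, here is a hedged sketch of the standard first-order model and the innovation that idealized DPCM encodes; the symbols (ρ, z_t, and the reconstruction x̂_t) are this sketch's notation, not necessarily the paper's:

```latex
% First-order Gauss-Markov frame process (per spatial location):
x_t = \rho\, x_{t-1} + z_t, \qquad z_t \sim \mathcal{N}(0, \sigma_z^2), \quad |\rho| < 1.
% Idealized DPCM encodes only the innovation against the previous reconstruction:
e_t = x_t - \rho\, \hat{x}_{t-1}, \qquad \hat{x}_t = \rho\, \hat{x}_{t-1} + \hat{e}_t.
```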

Book
30 Sep 2011
TL;DR: A number of techniques are presented which promise to make the soft-decision trellis decoding of block codes as powerful and cost effective as that of convolutional codes.
Abstract: We present a number of techniques which promise to make the soft-decision trellis decoding of block codes as powerful and cost effective as that of convolutional codes. The techniques are based on the concept of generalised array codes (GACs) and enable sufficient simplification in the decoding complexity.

Journal ArticleDOI
TL;DR: A one-time signature scheme based on the hardness of the syndrome decoding problem is described and proven secure in the random oracle model; it is instantiated on general linear error-correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist.

Journal ArticleDOI
TL;DR: A link is provided between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes, capable of decoding beyond half the minimum distance.
Abstract: The key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. The original interpolation conditions of Guruswami and Sudan for Reed-Solomon codes are reformulated in terms of a set of Key Equations. These equations provide a structured homogeneous linear system of equations of Block-Hankel form that can be solved by an adaptation of the Fundamental Iterative Algorithm. For an (n,k) Reed-Solomon code, a multiplicity s and a list size l, our algorithm has time complexity O(ls⁴n²).
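For reference, the classical Key Equation being generalized is the following relation for decoding up to half the minimum distance (standard notation: syndrome polynomial S(x), error-locator Λ(x), error-evaluator Ω(x), and t = ⌊(n−k)/2⌋):

```latex
\Lambda(x)\, S(x) \equiv \Omega(x) \pmod{x^{2t}}, \qquad \deg \Omega(x) < \deg \Lambda(x) \le t.
```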

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This work considers some problems not addressed in already published works on the compute-and-forward physical-layer network coding strategy, mainly ML decoding.
Abstract: In a recent work, Nazer and Gastpar proposed the compute-and-forward strategy as a physical-layer network coding scheme. They described a code structure based on nested lattices whose algebraic structure makes the scheme reliable and efficient. In a more recent paper, Feng et al. introduced an algebraic approach to the lattice implementation. In this work, we consider some problems not addressed in already published works, mainly ML decoding.