
Showing papers on "Turbo code published in 2012"


Journal ArticleDOI
TL;DR: Simulation results show that CA-SCL/SCS can provide significant gain over the turbo codes used in 3GPP standard with code rate 1/2 and code length 1024 at the block error probability (BLER) of 10^-4.
Abstract: CRC (cyclic redundancy check)-aided decoding schemes are proposed to improve the performance of polar codes. A unified description of successive cancellation decoding and its improved version with list or stack is provided and the CRC-aided successive cancellation list/stack (CA-SCL/SCS) decoding schemes are proposed. Simulation results in binary-input additive white Gaussian noise channel (BI-AWGNC) show that CA-SCL/SCS can provide significant gain of 0.5 dB over the turbo codes used in 3GPP standard with code rate 1/2 and code length 1024 at the block error probability (BLER) of 10^-4. Moreover, the time complexity of CA-SCS decoder is much lower than that of turbo decoder and can be close to that of successive cancellation (SC) decoder in the high SNR regime.
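The CRC-aided selection step can be illustrated in a few lines: among the surviving list paths, output the most reliable one whose appended CRC checks. The following is a minimal sketch, assuming hypothetical candidate paths and path metrics (a real list would come from an SCL/SCS polar decoder, and the CRC-8 polynomial here is only illustrative):

```python
def crc8(bits, poly=(1, 0, 0, 0, 0, 0, 1, 1, 1)):
    """Remainder of bits * x^8 modulo the CRC polynomial over GF(2)."""
    buf = list(bits) + [0] * 8
    for i in range(len(bits)):
        if buf[i]:
            for j, p in enumerate(poly):
                buf[i + j] ^= p
    return buf[-8:]

def ca_scl_select(candidates):
    """CRC-aided selection: pick the most reliable path whose CRC checks;
    fall back to the best raw path if none passes.
    candidates are (bits_with_crc, metric) pairs, higher metric = better."""
    passing = [c for c in candidates if not any(crc8(c[0]))]
    pool = passing or candidates
    return max(pool, key=lambda c: c[1])[0]

# A valid codeword is the message followed by its CRC-8 remainder.
msg = [1, 0, 1, 1, 0, 0, 1, 0]
good = msg + crc8(msg)
bad = good[:]   # same word with one flipped bit ...
bad[0] ^= 1     # ... which a raw metric comparison would prefer
chosen = ca_scl_select([(bad, -1.0), (good, -2.5)])
```

The CRC acts as a genie that weeds out the wrong-but-likely paths, which is where the reported gain over pure SCL decoding comes from.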

722 citations


Journal ArticleDOI
TL;DR: It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.
Abstract: Polar codes are shown to be instances of both generalized concatenated codes and multilevel codes. It is shown that the performance of a polar code can be improved by representing it as a multilevel code and applying the multistage decoding algorithm with maximum likelihood decoding of outer codes. Additional performance improvement is obtained by replacing polar outer codes with other ones with better error correction performance. In some cases this also results in complexity reduction. It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.

664 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble.
Abstract: We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a-priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble/code which is good for all channels if we assume that the channel is known at the receiver. The key technical result is a proof that under belief propagation decoding spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.

321 citations


Journal ArticleDOI
TL;DR: Staircase codes, a new class of forward-error-correction (FEC) codes suitable for high-speed optical communications, are introduced; an ITU-T G.709-compatible staircase code with rate R = 239/255 is proposed, exhibiting a net coding gain of 9.41 dB at an output error rate of 10^-15, and an error floor analysis technique is presented.
Abstract: Staircase codes, a new class of forward-error-correction (FEC) codes suitable for high-speed optical communications, are introduced. An ITU-T G.709-compatible staircase code with rate R = 239/255 is proposed, and field-programmable-gate-array-based simulation results are presented, exhibiting a net coding gain of 9.41 dB at an output error rate of 10^-15, an improvement of 0.42 dB relative to the best code from the ITU-T G.975.1 recommendation. An error floor analysis technique is presented, and the proposed code is shown to have an error floor at 4.0 × 10^-21.

315 citations


Journal ArticleDOI
TL;DR: The constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound, and Interference alignment is a theme that runs throughout the paper.
Abstract: Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 ≥ 2k - 1, termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) a proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k + 1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA, and IA is also a crucial component of the nonexistence proof for d < 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound.

262 citations


Journal ArticleDOI
TL;DR: New M-algorithm BCJR (M-BCJR) algorithms for low-complexity turbo equalization and application to severe intersymbol interference (ISI) introduced by faster than Nyquist signaling are proposed and compared to reduced-trellis VA and BCJR benchmarks.
Abstract: We propose new M-algorithm BCJR (M-BCJR) algorithms for low-complexity turbo equalization and apply them to severe intersymbol interference (ISI) introduced by faster-than-Nyquist (FTN) signaling. These reduced-search detectors are evaluated in simple detection over the ISI channel and in iterative decoding of coded FTN transmissions. In the second case, accurate log likelihood ratios are essential and we introduce a 3-recursion M-BCJR that provides this. Focusing signal energy by a minimum phase conversion before the M-BCJR is also essential; we propose an improvement to this older idea. The new M-BCJRs are compared to reduced-trellis VA and BCJR benchmarks. The FTN signals carry 4-8 bits/Hz-s in a fixed spectrum, with severe ISI models as long as 32 taps. The combination of coded FTN and the reduced-complexity BCJR is an attractive narrowband coding method.

170 citations


Proceedings ArticleDOI
13 Aug 2012
TL;DR: An approximate maximum-likelihood decoder is developed, called the bubble decoder, which runs in time polynomial in the message size and achieves the Shannon capacity over both additive white Gaussian noise (AWGN) and binary symmetric channel (BSC) models.
Abstract: Spinal codes are a new class of rateless codes that enable wireless networks to cope with time-varying channel conditions in a natural way, without requiring any explicit bit rate selection. The key idea in the code is the sequential application of a pseudo-random hash function to the message bits to produce a sequence of coded symbols for transmission. This encoding ensures that two input messages that differ in even one bit lead to very different coded sequences after the point at which they differ, providing good resilience to noise and bit errors. To decode spinal codes, this paper develops an approximate maximum-likelihood decoder, called the bubble decoder, which runs in time polynomial in the message size and achieves the Shannon capacity over both additive white Gaussian noise (AWGN) and binary symmetric channel (BSC) models. Experimental results obtained from a software implementation of a linear-time decoder show that spinal codes achieve higher throughput than fixed-rate LDPC codes, rateless Raptor codes, and the layered rateless coding approach of Strider, across a range of channel conditions and message sizes. An early hardware prototype that can decode at 10 Mbits/s in FPGA demonstrates that spinal codes are a practical construction.
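The key encoding idea, sequentially hashing the message into a chain of states (the "spine") and deriving coded symbols from each state, can be sketched as follows. This is a toy stand-in only: SHA-256 replaces the paper's hash function, and the symbol mapping and sizes here are illustrative assumptions, not the actual construction:

```python
import hashlib

def spine(message_bits, k=4):
    """Sequentially absorb k-bit chunks of the message into a chain of
    hash states; messages that differ in one bit get identical states
    up to the differing chunk and divergent states afterward."""
    state, states = b"\x00" * 4, []
    for i in range(0, len(message_bits), k):
        chunk = bytes(message_bits[i:i + k])
        state = hashlib.sha256(state + chunk).digest()[:4]
        states.append(state)
    return states

def symbols(states, passes=3):
    """Derive one toy coded symbol per state per pass by re-hashing the
    state with the pass index; the stream is rateless, since more passes
    simply yield more symbols."""
    out = []
    for p in range(passes):
        for s in states:
            out.append(hashlib.sha256(s + bytes([p])).digest()[0])
    return out

# Two messages differing in a single bit of the second chunk:
a = [0, 1, 1, 0, 1, 0, 0, 1]
b = a[:]
b[5] ^= 1
sa, sb = spine(a), spine(b)
```

Because the states diverge after the first differing chunk, all later symbols differ too, which is the resilience property the abstract describes.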

159 citations


Proceedings ArticleDOI
04 Mar 2012
TL;DR: The first layered decoding for LDPC convolutional codes designed for application in high speed optical transmission systems was successfully realized.
Abstract: We successfully realized layered decoding for LDPC convolutional codes designed for application in high speed optical transmission systems. A relatively short code with 20% redundancy was FPGA-emulated with a Q-factor of 5.7 dB at a BER of 10^-15.

150 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper provides explicit regenerating codes that are resilient to errors and erasures, and shows that these codes are optimal with respect to storage and bandwidth requirements.
Abstract: Regenerating codes are a class of codes proposed for providing reliability of data and efficient repair of failed nodes in distributed storage systems. In this paper, we address the fundamental problem of handling errors and erasures at the nodes or links, during the data-reconstruction and node-repair operations. We provide explicit regenerating codes that are resilient to errors and erasures, and show that these codes are optimal with respect to storage and bandwidth requirements. As a special case, we also establish the capacity of a class of distributed storage systems in the presence of malicious adversaries. While our code constructions are based on previously constructed Product-Matrix codes, we also provide necessary and sufficient conditions for introducing resilience in any regenerating code.

116 citations


Journal ArticleDOI
TL;DR: Several classes of finite-geometry and finite-field cyclic and quasi-cyclic LDPC codes with large minimum distances are shown to have no harmful trapping sets of size smaller than their minimum distances; consequently, their error-floor performances are dominated by their minimum distances.
Abstract: This paper is concerned with construction and structural analysis of both cyclic and quasi-cyclic codes, particularly low-density parity-check (LDPC) codes. It consists of three parts. The first part shows that a cyclic code given by a parity-check matrix in circulant form can be decomposed into descendant cyclic and quasi-cyclic codes of various lengths and rates. Some fundamental structural properties of these descendant codes are developed, including the characterization of the roots of the generator polynomial of a cyclic descendant code. The second part of the paper shows that cyclic and quasi-cyclic descendant LDPC codes can be derived from cyclic finite-geometry LDPC codes using the results developed in the first part of the paper. This enlarges the repertoire of cyclic LDPC codes. The third part of the paper analyzes the trapping set structure of regular LDPC codes whose parity-check matrices satisfy a certain constraint on their rows and columns. Several classes of finite-geometry and finite-field cyclic and quasi-cyclic LDPC codes with large minimum distances are shown to have no harmful trapping sets of size smaller than their minimum distances. Consequently, their error-floor performances are dominated by their minimum distances.

102 citations


Journal ArticleDOI
TL;DR: In this article, an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code is presented. The algorithm can also be tailored to find a variety of related graphical objects, such as absorbing sets and Zyablov-Pinsker trapping sets.
Abstract: This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or as a tool to design LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes, and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find a variety of graphical objects, such as absorbing sets and Zyablov-Pinsker trapping sets, known to dominate the performance of LDPC codes in the error floor region over different channels and for different iterative decoding algorithms. Simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithm. In particular, the algorithm is significantly faster than the existing search algorithms for dominant trapping sets.
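The core expansion step can be illustrated with a greedy sketch: start from the variable nodes of a short cycle and grow the set one node at a time, preferring nodes that leave the fewest unsatisfied (odd-degree) check nodes. This is a simplified stand-in for the paper's recursive expansion, and the parity-check matrix below is a hypothetical toy example:

```python
def unsatisfied_checks(H, vs):
    """Check nodes with an odd number of neighbors inside variable set vs
    (the 'b' in an (a, b) trapping set)."""
    return [r for r, row in enumerate(H)
            if sum(row[v] for v in vs) % 2 == 1]

def expand_trapping_set(H, seed, size):
    """Greedy expansion sketch: repeatedly add the variable node that
    minimizes the number of unsatisfied checks of the enlarged set."""
    vs = set(seed)
    n = len(H[0])
    while len(vs) < size:
        best = min((v for v in range(n) if v not in vs),
                   key=lambda v: len(unsatisfied_checks(H, vs | {v})))
        vs.add(best)
    return vs

# Toy 3x4 parity-check matrix; seed the search with variable node 0.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
ts = expand_trapping_set(H, {0}, 2)
```

On this toy matrix the expansion adds node 1, since {0, 1} satisfies check 0 and leaves only check 1 unsatisfied, mirroring how real searches grow candidate sets toward low-b trapping sets.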

Journal ArticleDOI
TL;DR: This paper presents a simple yet powerful method for designing embedded rate-compatible families of LDPC codes based on successively extending a high-rate protograph, which not only inherit the advantages of protograph codes, but also cover a wide range of rates and have very good performance with thresholds that are all within 0.2 dB of their capacity limits.
Abstract: This paper presents a simple yet effective method for designing nested families of LDPC codes. Rate compatible codes are essential for many communication applications, e.g. hybrid automatic repeat request (HARQ) systems, and their design is nontrivial due to the difficulty of simultaneously guaranteeing the quality of several related codes. Puncturing can be used to generate rate-compatible LDPC codes, but it produces a gap to capacity that, in practice, often significantly exceeds the gap of the mother code. We propose an alternative method based on successively extending a high-rate protograph. The resulting codes not only inherit the advantages of protograph codes, namely low encoding complexity and efficient decoding algorithms, but also cover a wide range of rates and have very good performance with iterative decoding thresholds that are within 0.2 dB of their capacity limits.

Journal ArticleDOI
TL;DR: This paper introduces an effective receiver for the LDS-OFDM scheme, and proposes a framework to analyze and design this iterative receiver using extrinsic information transfer (EXIT) charts, and shows how the turbo MUDD is tuned using EXIT charts analysis.
Abstract: Low density signature orthogonal frequency division multiplexing (LDS-OFDM) is an uplink multi-carrier multiple access scheme that uses low density signatures (LDS) for spreading the symbols in the frequency domain. In this paper, we introduce an effective receiver for the LDS-OFDM scheme. We propose a framework to analyze and design this iterative receiver using extrinsic information transfer (EXIT) charts. Furthermore, a turbo multi-user detector/decoder (MUDD) is proposed for the LDS-OFDM receiver. We show how the turbo MUDD is tuned using EXIT charts analysis. By tuning the turbo-style processing, the turbo MUDD can approach the performance of optimum MUDD with a smaller number of inner iterations. Using the suggested design guidelines in this paper, we show that the proposed structure brings about 2.3 dB performance improvement at a bit error rate (BER) equal to 10^-5 over conventional LDS-OFDM while keeping the complexity affordable. Simulations for different scenarios also show that the LDS-OFDM outperforms similar well-known multiple access techniques such as multi-carrier code division multiple access (MC-CDMA) and group-orthogonal MC-CDMA.

Journal ArticleDOI
TL;DR: This letter proposes a very simple iterative decoding technique, accumulator-assisted distributed turbo code (ACC-DTC) using 2-state (memory-1) convolutional codes (CC), where the correlation knowledge between the source and the relay is estimated and exploited at the destination.
Abstract: In relay systems, the probability of errors occurring in the source-relay (S-R) link can be viewed as representing correlation between the source and the relay. This letter proposes a very simple iterative decoding technique, accumulator-assisted distributed turbo code (ACC-DTC) using 2-state (memory-1) convolutional codes (CC), where the correlation knowledge between the source and the relay is estimated and exploited at the destination.

Journal ArticleDOI
TL;DR: Using the principle of tailbiting, compact representations of bipartite graphs based on convolutional codes can be found and bounds on the girth and the minimum distance of LDPC block codes constructed in such a way are discussed.
Abstract: The relation between parity-check matrices of quasi-cyclic (QC) low-density parity-check (LDPC) codes and biadjacency matrices of bipartite graphs supports searching for powerful LDPC block codes. Using the principle of tailbiting, compact representations of bipartite graphs based on convolutional codes can be found. Bounds on the girth and the minimum distance of LDPC block codes constructed in such a way are discussed. Algorithms for searching iteratively for LDPC block codes with large girth and for determining their minimum distance are presented. Constructions based on all-one matrices, Steiner Triple Systems, and QC block codes are introduced. Finally, new QC regular LDPC block codes with girth up to 24 are given.
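The girth referred to above is the length of the shortest cycle in the Tanner graph defined by the parity-check matrix. A minimal sketch of computing it, by running a BFS from every node of the bipartite graph and recording the shortest closed walk found through a non-tree edge (the matrices below are small hypothetical examples):

```python
from collections import deque

def tanner_girth(H):
    """Girth of the Tanner graph of parity-check matrix H (list of rows).
    Variable nodes are 0..n-1, check nodes are n..n+m-1.
    Returns inf if the graph is cycle-free."""
    m, n = len(H), len(H[0])
    adj = [[] for _ in range(n + m)]
    for r in range(m):
        for c in range(n):
            if H[r][c]:
                adj[c].append(n + r)
                adj[n + r].append(c)
    best = float("inf")
    for src in range(n + m):
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v and parent[v] != u:
                    # Non-tree edge closes a cycle through src.
                    best = min(best, dist[u] + dist[v] + 1)
    return best
```

Two rows sharing two columns create a length-4 cycle, the shortest possible in a bipartite graph, which is why 4-cycle avoidance is the first design constraint for LDPC matrices.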

Journal ArticleDOI
TL;DR: This paper proposes product codes which use Reed-Solomon codes along rows and Hamming codes along columns and have reduced hardware overhead which provide an easy mechanism to increase the lifetime of the Flash memory devices.
Abstract: Error control coding (ECC) is essential for correcting soft errors in Flash memories. In this paper we propose use of product code based schemes to support higher error correction capability. Specifically, we propose product codes which use Reed-Solomon (RS) codes along rows and Hamming codes along columns and have reduced hardware overhead. Simulation results show that product codes can achieve better performance compared to both Bose-Chaudhuri-Hocquenghem codes and plain RS codes with less area and low latency. We also propose a flexible product code based ECC scheme that migrates to a stronger ECC scheme when the numbers of errors due to increased program/erase cycles increases. While these schemes have slightly larger latency and require additional parity bit storage, they provide an easy mechanism to increase the lifetime of the Flash memory devices.
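The product arrangement itself, data in a 2-D array with row codes and column codes, including checks on checks, can be sketched compactly. As a stand-in for the paper's RS-rows/Hamming-columns pair, this toy uses single-parity codes in both directions, which already lets one failing row and one failing column pinpoint a single bit error:

```python
def encode_product(data):
    """Toy product code: append a parity bit to each row, then a parity
    row over all columns (including the row-parity column)."""
    rows = [row + [sum(row) % 2] for row in data]
    cols = [sum(r[c] for r in rows) % 2 for c in range(len(rows[0]))]
    return rows + [cols]

def correct_single(block):
    """Locate a single bit error at the intersection of the failing row
    parity and failing column parity, and flip it."""
    bad_r = [i for i, row in enumerate(block) if sum(row) % 2]
    bad_c = [j for j in range(len(block[0]))
             if sum(row[j] for row in block) % 2]
    if bad_r and bad_c:
        block[bad_r[0]][bad_c[0]] ^= 1
    return block

# Encode a toy data array, inject one bit error, and repair it.
data = [[1, 0, 1], [0, 1, 1]]
blk = encode_product(data)
ref = [row[:] for row in blk]
blk[1][2] ^= 1
```

With RS along rows and Hamming along columns the same geometry applies, but each component decoder corrects multiple symbols instead of merely flagging parity.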

Posted Content
TL;DR: In this paper, a simple proof of threshold saturation that applies to a broad class of coupled scalar recursions is presented, which is based on potential functions and was motivated mainly by the ideas of Takeuchi et al.
Abstract: Low-density parity-check (LDPC) convolutional codes (or spatially-coupled codes) have been shown to approach capacity on the binary erasure channel (BEC) and binary-input memoryless symmetric channels. The mechanism behind this spectacular performance is the threshold saturation phenomenon, which is characterized by the belief-propagation threshold of the spatially-coupled ensemble increasing to an intrinsic noise threshold defined by the uncoupled system. In this paper, we present a simple proof of threshold saturation that applies to a broad class of coupled scalar recursions. The conditions of the theorem are verified for the density-evolution (DE) equations of irregular LDPC codes on the BEC, a class of generalized LDPC codes, and the joint iterative decoding of LDPC codes on intersymbol-interference channels with erasure noise. Our approach is based on potential functions and was motivated mainly by the ideas of Takeuchi et al. The resulting proof is surprisingly simple when compared to previous methods.

Journal ArticleDOI
TL;DR: Several algebraic and geometric constructions of quasi-cyclic codes are presented as applications along with simulation results showing their performance over additive white Gaussian noise channels decoded with iterative message-passing algorithms.
Abstract: A matrix-theoretic approach for studying quasi-cyclic codes based on matrix transformations via Fourier transforms and row and column permutations is developed. These transformations put a parity-check matrix in the form of an array of circulant matrices into a diagonal array of matrices of the same size over an extension field. The approach is amenable to the analysis and construction of quasi-cyclic low-density parity-check codes since it takes into account the specific parity-check matrix used for decoding with iterative message-passing algorithms. Based on this approach, the dimension of the codes and parity-check matrices for the dual codes can be determined. Several algebraic and geometric constructions of quasi-cyclic codes are presented as applications along with simulation results showing their performance over additive white Gaussian noise channels decoded with iterative message-passing algorithms.

Journal ArticleDOI
TL;DR: To provide dimming control of on-off keying, the proposed coding scheme yields a codeword with the codeword weight adapted to the dimming requirement, unlike existing coding schemes which have only limited support of this feature.
Abstract: This letter presents an error correction scheme for dimmable visible light communication systems. To provide dimming control of on-off keying, the proposed coding scheme yields a codeword with the codeword weight adapted to the dimming requirement. It also allows an arbitrary value of the dimming requirement unlike existing coding schemes which have a limited support of this feature. To this end, turbo codes are employed together with puncturing and scrambling techniques to match the Hamming weight of codewords with the desired dimming rate. We demonstrate the decoding performance of the proposed coding scheme under iterative decoding and compare it with other existing error correction schemes. The simulation results prove that the proposed scheme has superior decoding performance, arbitrary dimming rate support, and diverse code rate options.

Journal ArticleDOI
TL;DR: A range of powerful novel MIMO detectors are introduced, such as Markov Chain assisted Minimum Bit-Error Rate (MC-MBER) detectors, which are capable of reliably operating in the challenging high-importance rank-deficient scenarios, where there are more transmitters than receivers and hence the resultant channel-matrix becomes non-invertible.
Abstract: In this treatise, we firstly review the associated Multiple-Input Multiple-Output (MIMO) system theory and review the family of hard-decision and soft-decision based detection algorithms in the context of Spatial Division Multiplexing (SDM) systems. Our discussions culminate in the introduction of a range of powerful novel MIMO detectors, such as Markov Chain assisted Minimum Bit-Error Rate (MC-MBER) detectors, which are capable of reliably operating in the challenging high-importance rank-deficient scenarios, where there are more transmitters than receivers and hence the resultant channel-matrix becomes non-invertible. As a result, conventional detectors would exhibit a high residual error floor. We then invoke the Soft-Input Soft-Output (SISO) MIMO detectors for creating turbo-detected two- or three-stage concatenated SDM schemes and investigate their attainable performance in the light of their computational complexity. Finally, we introduce the powerful design tools of EXtrinsic Information Transfer (EXIT)-charts and characterize the achievable performance of the diverse near-capacity SISO detectors with the aid of EXIT charts.

Journal ArticleDOI
TL;DR: It is shown how the strong minimum distance condition of MDP convolutional codes helps to solve error situations that maximum distance separable (MDS) block codes fail to solve.
Abstract: In this paper the decoding capabilities of convolutional codes over the erasure channel are studied. Of special interest are maximum distance profile (MDP) convolutional codes. These are codes which have a maximum possible column distance increase. It is shown how the strong minimum distance condition of MDP convolutional codes helps to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, two subclasses of MDP codes are defined: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest conditions. It is shown that complete-MDP convolutional codes in many cases perform better than comparable MDS block codes of the same rate over the erasure channel.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper analyzes a class of spatially-coupled generalized LDPC codes and observes that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding.
Abstract: A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we analyze a class of spatially-coupled generalized LDPC codes and observe that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding. These codes can be seen as generalized product codes and are closely related to braided block codes.

Book
12 Jul 2012
TL;DR: Constrained coding and error-control coding are considered in a combined framework that allows the ECC decoder to gain direct access to the probabilities from the channel decoder.
Abstract: Constrained coding and error-control coding (ECC) are considered in a combined framework. In the context of soft iterative decoding, this allows the ECC decoder (e.g., for a turbo or LDPC code) to gain direct access to the probabilities from the channel decoder. In addition, a soft decoder for the constraint can be introduced to yield additional coding gain. Practical methods for combining the constraint and ECC include a modified concatenation scheme and a bit insertion scheme.

Patent
26 Sep 2012
TL;DR: In this article, a polarization code decoding method with cyclic redundancy check assistance was proposed, whose operation complexity is equivalent to or even lower than that of the Turbo code coding and decoding method used in a WCDMA (wideband code division multiple access) system.
Abstract: The invention relates to a polarization code decoding method with cyclic redundancy check assistance. When a polarization code is decoded, among all the routes from the root node to the leaf nodes on the code tree corresponding to the polarization code whose corresponding bit-estimation sequences have a cyclic redundancy check value of zero, the route with the maximum reliability metric value is searched for, with a list or stack used to assist the route search, and the bit-estimation sequence corresponding to that route is output as the decoding result. The method comprises the following operation steps: determining parameters according to the search assistance method, constructing an auxiliary structure for the decoding method, searching for a candidate bit-estimation sequence, and executing the cyclic redundancy check. By adopting the disclosed method, the error-correcting capability of a communication system which adopts the polarization code as its channel coding is greatly improved, the operation steps are simpler, and the operation complexity is equivalent to or even lower than that of the Turbo code coding and decoding method used in a WCDMA (wideband code division multiple access) system; thus the method has good practical prospects.

Proceedings ArticleDOI
08 Oct 2012
TL;DR: It has been observed that LDPC convolutional codes perform better than the block codes from which they are derived even at low latency; the two code families are also compared in terms of their complexity as a function of Eb/N0.
Abstract: We compare LDPC block and LDPC convolutional codes with respect to their decoding performance under low decoding latencies. Protograph-based regular LDPC codes are considered with rather small lifting factors. LDPC block and convolutional codes are decoded using belief propagation. For LDPC convolutional codes, a sliding window decoder with different window sizes is applied to continuously decode the input symbols. We show the required Eb/N0 to achieve a bit error rate of 10^-5 for the LDPC block and LDPC convolutional codes for decoding latencies of up to approximately 550 information bits. It has been observed that LDPC convolutional codes perform better than the block codes from which they are derived even at low latency. We demonstrate the trade-off between complexity and performance in terms of lifting factor and window size for a fixed value of latency. Furthermore, the two codes are also compared in terms of their complexity as a function of Eb/N0. Convolutional codes with Viterbi decoding are also compared with the two above-mentioned codes.

Journal ArticleDOI
TL;DR: In this article, the authors considered quantum error correction over depolarizing channels with nonbinary low-density parity-check codes defined over Galois field of size 2p, and proposed quantum error correcting codes are based on the binary quasi-cyclic Calderbank, Shor, and Steane (CSS) codes.
Abstract: In this paper, we consider quantum error correction over depolarizing channels with nonbinary low-density parity-check codes defined over a Galois field of size 2^p. The proposed quantum error correcting codes are based on the binary quasi-cyclic Calderbank, Shor, and Steane (CSS) codes. The resulting quantum codes outperform the best known quantum codes and surpass the performance limit of the bounded distance decoder. By increasing the size of the underlying Galois field, i.e., 2^p, the error floors are considerably improved.

Journal ArticleDOI
TL;DR: This brief proposes higher- and mixed-radix implementations that reduce the architecture latency; post place-and-route results show that the proposed architectures achieve lower latency than radix-2 solutions with a moderate area increase.
Abstract: High speed architectures for finding the first two maximum/minimum values are of paramount importance in several applications, including iterative (e.g., turbo and low-density-parity-check) decoders. In this brief, stemming from a previous work based on radix-2 solutions, we propose higher- and mixed-radix implementations that reduce the architecture latency. Post place-and-route results on a 180-nm CMOS standard cell technology show that the proposed architectures achieve lower latency than radix-2 solutions with a moderate area increase.

Proceedings ArticleDOI
08 Oct 2012
TL;DR: This work presents the first LTE Advanced compliant turbo code decoder, with a throughput of 2.15 GBit/s at a frequency of 450 MHz and an area of 7.7 mm^2 in a 65 nm process node with worst-case P&R constraints.
Abstract: The LTE standard [1] will soon be upgraded to LTE Advanced, which will add new techniques like multi-user MIMO using iterative demodulation, cooperative multi-point reception (CoMP), and beam forming to increase the system throughput in the uplink of one cell. An eNodeB supporting multiple cells will require a throughput of multiple GBit/s. Thus a turbo code decoder with a very high throughput target, while maintaining excellent communications performance, is required. The major challenge is the support of very high code rates and the stringent latency requirements. We present the first LTE Advanced compliant turbo code decoder with a throughput of 2.15 GBit/s at a frequency of 450 MHz and an area of 7.7 mm^2 in a 65 nm process node with worst-case P&R constraints. The decoder can perform 6 full iterations at a large window size of 192 at full throughput, which results in highly competitive communications performance.

Journal ArticleDOI
TL;DR: In this paper, the authors consider Gray codes capable of detecting a single error, also known as snake-in-the-box codes, and study two error metrics: Kendall's τ-metric, which applies to charge-constrained errors, and the l∞-metric, which is useful in the case of limited-magnitude errors.
Abstract: Motivated by the rank-modulation scheme with applications to flash memory, we consider Gray codes capable of detecting a single error, also known as snake-in-the-box codes. We study two error metrics: Kendall's τ-metric, which applies to charge-constrained errors, and the l∞-metric, which is useful in the case of limited-magnitude errors. In both cases, we construct snake-in-the-box codes with rate asymptotically tending to 1. We also provide efficient successor-calculation functions, as well as ranking and unranking functions. Finally, we also study bounds on the parameters of such codes.
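Kendall's τ-metric, used above for charge-constrained errors in rank modulation, counts the minimum number of adjacent transpositions needed to turn one permutation into another, equivalently, the number of pairwise order inversions. A minimal sketch of the distance computation (the O(n^2) pair scan is chosen for clarity over the O(n log n) merge-sort variant):

```python
def kendall_tau(p, q):
    """Kendall tau distance between permutations p and q: the number of
    element pairs ordered differently in the two permutations."""
    pos = {v: i for i, v in enumerate(q)}   # position of each value in q
    r = [pos[v] for v in p]                 # p re-expressed in q's order
    # Counting inversions of r counts the discordant pairs.
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if r[i] > r[j])
```

In the rank-modulation setting a single charge-constrained error moves a codeword by τ-distance 1, so a code whose codewords are pairwise at distance ≥ 2 detects any single such error.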

Journal ArticleDOI
TL;DR: The two-prime sequence is employed to construct several classes of cyclic codes over GF(q); some of the codes obtained are optimal or almost optimal.
Abstract: Cyclic codes are a subclass of linear codes and have wide applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, the two-prime sequence is employed to construct several classes of cyclic codes over GF(q). Lower bounds on the minimum weight of these cyclic codes are developed. Some of the codes obtained are optimal or almost optimal. The p-ranks of the twin-prime difference sets and a class of almost difference sets are computed.