
Showing papers on "Sequential decoding published in 2004"


Journal ArticleDOI
TL;DR: A class of algebraically structured quasi-cyclic low-density parity-check (LDPC) codes and their convolutional counterparts is presented and bounds on the girth and minimum distance of the codes are found, and several possible encoding techniques are described.
Abstract: A class of algebraically structured quasi-cyclic (QC) low-density parity-check (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse parity-check matrices comprised of blocks of circulant matrices. The sparse parity-check representation allows for practical graph-based iterative message-passing decoding. Based on the algebraic structure, bounds on the girth and minimum distance of the codes are found, and several possible encoding techniques are described. The performance of the QC LDPC block codes compares favorably with that of randomly constructed LDPC codes for short to moderate block lengths. The performance of the LDPC convolutional codes is superior to that of the QC codes on which they are based; this performance is the limiting performance obtained by increasing the circulant size of the base QC code. Finally, a continuous decoding procedure for the LDPC convolutional codes is described.

695 citations
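
A minimal sketch of the kind of circulant-block parity-check matrix the abstract describes: H is assembled from cyclically shifted identity matrices. The block size and shift values below are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch: build a quasi-cyclic LDPC parity-check matrix from circulant
# blocks (cyclically shifted identity matrices). Shift values are illustrative.
import numpy as np

def circulant(size, shift):
    """Return a size x size identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def qc_parity_check(shifts, size):
    """Assemble H from an array of circulant shift values (-1 marks an all-zero block)."""
    rows = []
    for shift_row in shifts:
        blocks = [np.zeros((size, size), dtype=np.uint8) if s < 0 else circulant(size, s)
                  for s in shift_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Example with an assumed 3 x 4 base array of shifts and circulant size 16.
H = qc_parity_check([[0, 1, 2, 4],
                     [0, 2, 4, 8],
                     [0, 3, 6, 12]], size=16)
print(H.shape)  # (48, 64)
```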


Proceedings ArticleDOI
D.E. Hocevar
06 Dec 2004
TL;DR: The previously devised irregular partitioned permutation LDPC codes have a construction that easily accommodates a layered decoding and it is shown that the decoding performance is improved by a factor of two in the number of iterations required.
Abstract: We apply layered belief propagation decoding to our previously devised irregular partitioned permutation LDPC codes. These codes have a construction that easily accommodates layered decoding, and we show that the decoding performance is improved by a factor of two in the number of iterations required. We show how our previous flexible decoding architecture can be adapted to facilitate layered decoding. This results in a significant reduction in the number of memory bits and memory instances required, in the range of 45-50%. The faster decoding speed means the decoder logic can also be reduced by nearly 50% to achieve the same throughput and error performance. In total, the overall decoder architecture can be reduced by nearly 50%.

628 citations
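
The layered schedule updates posterior beliefs check-row by check-row within a single iteration, which is what roughly halves the required iteration count. Below is a hedged sketch of layered decoding using the common min-sum approximation; the min-sum simplification and the variable names are assumptions, not the paper's decoder architecture.

```python
# Hedged sketch of layered (row-by-row) min-sum LDPC decoding. Each layer
# reuses posteriors already refreshed by earlier layers in the same iteration.
import numpy as np

def layered_min_sum(H, llr_channel, max_iters=20):
    m, n = H.shape
    L = llr_channel.astype(float).copy()        # running posterior LLRs
    R = np.zeros((m, n))                        # stored check-to-variable messages
    hard = (L < 0).astype(np.uint8)
    for _ in range(max_iters):
        for row in range(m):                    # one "layer" per check row here
            cols = np.flatnonzero(H[row])
            q = L[cols] - R[row, cols]          # variable-to-check messages
            q = np.where(q == 0.0, 1e-12, q)    # avoid zero-sign corner case
            sign = np.prod(np.sign(q))
            mags = np.abs(q)
            for i, c in enumerate(cols):
                others = np.delete(mags, i)
                R[row, c] = sign * np.sign(q[i]) * others.min()
            L[cols] = q + R[row, cols]          # immediately update posteriors
        hard = (L < 0).astype(np.uint8)
        if not np.any(H @ hard % 2):            # all checks satisfied: stop early
            break
    return hard
```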


Proceedings ArticleDOI
20 Jun 2004
TL;DR: A log-domain decoding scheme for LDPC codes over GF(q) is introduced, which is mathematically equivalent to the conventional sum-product decoder but has advantages in terms of implementation, computational complexity and numerical stability.
Abstract: This paper introduces a log-domain decoding scheme for LDPC codes over GF(q). While this scheme is mathematically equivalent to the conventional sum-product decoder, log-domain decoding has advantages in terms of implementation, computational complexity and numerical stability. Further, a suboptimal variant of the log-domain decoding algorithm is proposed, yielding a lower computational complexity. The proposed algorithms and the sum-product algorithm are compared both in terms of simulated BER performance and computational complexity.

329 citations
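
Log-domain decoding replaces products of probabilities with sums of log-likelihoods and sums of probabilities with the Jacobian logarithm ("max*"); dropping the correction term gives a lower-complexity suboptimal variant of the kind the abstract mentions. A small illustrative helper, not the paper's GF(q) decoder:

```python
# Hedged sketch: the Jacobian logarithm ("max*") underlying log-domain message
# passing. Working with log-probabilities avoids the underflow/renormalization
# issues of probability-domain sum-product decoding.
import math

def max_star(a, b):
    """log(exp(a) + exp(b)) computed stably."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """Lower-complexity approximation: drop the correction term."""
    return max(a, b)

# Example: combining two log-domain likelihoods.
print(max_star(-1.2, -0.3), max_star_approx(-1.2, -0.3))
```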


Journal ArticleDOI
TL;DR: A modification of the Fincke-Pohst (sphere decoding) algorithm to estimate the maximum a posteriori probability of the received symbol sequence is proposed and, over a wide range of rates and signal-to-noise ratios, has polynomial-time complexity.
Abstract: In recent years, soft iterative decoding techniques have been shown to greatly improve the bit error rate performance of various communication systems. For multiantenna systems employing space-time codes, however, it is not clear what is the best way to obtain the soft information required of the iterative scheme with low complexity. In this paper, we propose a modification of the Fincke-Pohst (sphere decoding) algorithm to estimate the maximum a posteriori probability of the received symbol sequence. The new algorithm solves a nonlinear integer least squares problem and, over a wide range of rates and signal-to-noise ratios, has polynomial-time complexity. Performance of the algorithm, combined with convolutional, turbo, and low-density parity check codes, is demonstrated on several multiantenna channels. The results for systems that employ space-time modulation schemes seem to indicate that the best performing schemes are those that support the highest mutual information between the transmitted and received signals, rather than the best diversity gain.

298 citations
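
For orientation, here is a hedged sketch of the plain Fincke-Pohst depth-first search for the hard-decision integer least-squares problem min_s ||y - Hs||^2. The paper's contribution, a MAP/soft-output modification that adds a prior term to the metric, is not reproduced; the binary alphabet and radius handling below are assumptions.

```python
# Hedged sketch of a basic Fincke-Pohst / sphere-decoding tree search.
import numpy as np

def sphere_decode(H, y, alphabet=(-1, 1), radius=np.inf):
    Q, R = np.linalg.qr(H)                      # upper-triangularize the problem
    z = Q.T @ y
    n = H.shape[1]
    best = {"metric": radius, "s": None}

    def search(level, s_partial, metric):
        if metric >= best["metric"]:
            return                              # prune: outside the current sphere
        if level < 0:
            best["metric"], best["s"] = metric, list(s_partial)
            return
        for sym in alphabet:
            upper = sum(R[level, j] * s_partial[j - level - 1]
                        for j in range(level + 1, n))
            inc = (z[level] - R[level, level] * sym - upper) ** 2
            search(level - 1, [sym] + s_partial, metric + inc)

    search(n - 1, [], 0.0)
    return np.array(best["s"]), best["metric"]
```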


Book
01 Jan 2004
TL;DR: This thesis presents a detailed investigation of list decoding, and proves its potential, feasibility, and importance as a combinatorial and algorithmic concept and presents the first polynomial time algorithm to decode Reed-Solomon codes beyond d/2 errors for every value of the rate.
Abstract: Error-correcting codes are combinatorial objects designed to cope with the problem of reliable transmission of information on a noisy channel. A fundamental algorithmic challenge in coding theory and practice is to efficiently decode the original transmitted message even when a few symbols of the received word are in error. The naive search algorithm runs in exponential time, and several classical polynomial time decoding algorithms are known for specific code families. Traditionally, however, these algorithms have been constrained to output a unique codeword. Thus they faced a “combinatorial barrier” and could only correct up to d/2 errors, where d is the minimum distance of the code. An alternate notion of decoding called list decoding, proposed independently by Elias and Wozencraft in the late 50s, allows the decoder to output a list of all codewords that differ from the received word in a certain number of positions. Even when constrained to output a relatively small number of answers, list decoding permits recovery from errors well beyond the d/2 barrier, and opens up the possibility of meaningful error-correction from large amounts of noise. However, for nearly four decades after its conception, this potential of list decoding was largely untapped due to the lack of efficient algorithms to list decode beyond d/2 errors for useful families of codes. This thesis presents a detailed investigation of list decoding, and proves its potential, feasibility, and importance as a combinatorial and algorithmic concept. We prove several combinatorial results that sharpen our understanding of the potential and limits of list decoding, and its relation to more classical parameters like the rate and minimum distance. The crux of the thesis is its algorithmic results, which were lacking in the early works on list decoding. Our algorithmic results include: (1) Efficient list decoding algorithms for classically studied codes such as Reed-Solomon codes and algebraic-geometric codes. In particular, building upon an earlier algorithm due to Sudan, we present the first polynomial time algorithm to decode Reed-Solomon codes beyond d/2 errors for every value of the rate. (2) A new soft list decoding algorithm for Reed-Solomon and algebraic-geometric codes and novel decoding algorithms for concatenated codes based on it. (3) New code constructions using concatenation and/or expander graphs that have good (and sometimes near-optimal) rate and are efficiently list decodable from extremely large amounts of noise. (4) Expander-based constructions of linear time encodable and decodable codes that can correct up to the maximum possible fraction of errors, using unique (not list) decoding.

276 citations


Journal ArticleDOI
TL;DR: A modified weighted bit-flipping decoding algorithm for low-density parity-check codes is proposed, and improvement in performance is observed by considering both the check constraint messages and the intrinsic message for each bit.
Abstract: In this letter, a modified weighted bit-flipping decoding algorithm for low-density parity-check codes is proposed. Improvement in performance is observed by considering both the check constraint messages and the intrinsic message for each bit.

232 citations
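
A hedged sketch of generic weighted bit-flipping decoding, with the intrinsic (channel) term the letter adds shown as a weighting factor alpha; the exact metric and weighting used in the letter may differ from this illustration.

```python
# Hedged sketch of weighted bit-flipping (WBF). For each bit, a flipping metric
# sums reliability weights over its checks (failed checks push toward flipping),
# minus alpha times the bit's own channel reliability (the intrinsic message).
import numpy as np

def weighted_bit_flipping(H, y, max_iters=50, alpha=0.2):
    m, n = H.shape
    hard = (y < 0).astype(np.uint8)
    # reliability of each check = smallest |channel value| among its bits
    w = np.array([np.min(np.abs(y)[np.flatnonzero(H[i])]) for i in range(m)])
    for _ in range(max_iters):
        syndrome = H @ hard % 2
        if not syndrome.any():
            break                               # valid codeword found
        E = np.array([np.sum((2 * syndrome[np.flatnonzero(H[:, j])] - 1)
                             * w[np.flatnonzero(H[:, j])])
                      - alpha * abs(y[j]) for j in range(n)])
        hard[np.argmax(E)] ^= 1                 # flip the least trustworthy bit
    return hard
```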


Journal ArticleDOI
TL;DR: The proposed algorithm has almost the same complexity as the standard iterative decoding but better performance; simulations show that the error rate can be decreased by several orders of magnitude using the proposed algorithm.
Abstract: This paper investigates decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We study the iterative and maximum-likelihood (ML) decoding of LDPC codes on this channel. We derive bounds on the ML decoding of LDPC codes on the BEC. We then present an improved decoding algorithm. The proposed algorithm has almost the same complexity as the standard iterative decoding. However, it has better performance. Simulations show that we can decrease the error rate by several orders of magnitude using the proposed algorithm. We also provide some graph-theoretic properties of different decoding algorithms of LDPC codes over the BEC which we think are useful to better understand the LDPC decoding methods, in particular, for finite-length codes.

184 citations
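
The standard iterative decoder on the BEC is a peeling process: any check with exactly one erased bit resolves that bit. A minimal sketch of that baseline follows; the paper's improvement for the case where peeling stalls is not shown.

```python
# Hedged sketch of standard iterative (peeling) erasure decoding of an LDPC code.
import numpy as np

def peel_bec(H, received):
    """received: integer array with values 0, 1, or -1 for an erasure."""
    x = received.copy()
    progress = True
    while progress and (x == -1).any():
        progress = False
        for row in H:
            cols = np.flatnonzero(row)
            erased = [c for c in cols if x[c] == -1]
            if len(erased) == 1:                # this check pins down one erasure
                known = [c for c in cols if x[c] != -1]
                x[erased[0]] = np.bitwise_xor.reduce(x[known]) if known else 0
                progress = True
    return x  # may still contain -1 entries if peeling stalled
```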


Journal ArticleDOI
TL;DR: A systematic approach is proposed to develop a high throughput decoder for quasi-cyclic low-density parity check (LDPC) codes, whose parity check matrix is constructed by circularly shifted identity matrices, and the maximum concurrency of the two stages is explored by a novel scheduling algorithm.
Abstract: In this paper, a systematic approach is proposed to develop a high throughput decoder for quasi-cyclic low-density parity check (LDPC) codes, whose parity check matrix is constructed by circularly shifted identity matrices. Based on the properties of quasi-cyclic LDPC codes, the two stages of the belief propagation decoding algorithm, namely, check node update and variable node update, can be overlapped, thus reducing the overall decoding latency. To avoid memory access conflicts, the maximum concurrency of the two stages is explored by a novel scheduling algorithm. Consequently, the decoding throughput can be approximately doubled, assuming dual-port memory is available.

173 citations


Journal ArticleDOI
TL;DR: A general recursive filter decoding algorithm based on a point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal is presented and an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted is suggested.
Abstract: Neural spike train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and compute mutual information are therefore important problems in computational neuroscience. We present a general recursive filter decoding algorithm based on a point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We derive from the algorithm new instantaneous estimates of the entropy, entropy rate, and the mutual information between the signal and the ensemble spiking activity. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by reanalyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from two rats foraging in an open circular environment. We compare the performance of this algorithm with a linear filter constructed by the widely used reverse correlation method. The median decoding error for Animal 1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and the true coverage probability for 0.95 confidence regions was 0.67 (0.75) using 34 (32) neurons. These findings improve significantly on our previous results and suggest an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted.

169 citations


Proceedings ArticleDOI
30 Nov 2004
TL;DR: An efficient decoding schedule for low-density parity-check (LDPC) codes that outperforms the conventional approach, in terms of both complexity and performance, is presented.
Abstract: An efficient decoding schedule for low-density parity-check (LDPC) codes that outperforms the conventional approach, in terms of both complexity and performance, is presented. Conventionally, in each iteration, all symbol nodes and, subsequently, all the check nodes, send messages to their neighbors ("flooding schedule"). In contrast, in the proposed method, the updating of nodes is performed according to a serial schedule which propagates the information twice as fast. A density evolution (DE) algorithm for asymptotic analysis of the new schedule is derived, showing that, when working near the code's capacity, the decoder converges in approximately half the number of iterations. In addition, a concentration theorem is proved, showing that, for a randomly chosen serial schedule, code graph, and decoder input, the decoder's performance approaches its expected one as predicted by the DE algorithm, when the code length increases.

163 citations


Journal ArticleDOI
TL;DR: Simulation results show that this method provides significant gain over hard decision decoding and is superior to some other popular soft decision methods for short RS codes.
Abstract: This letter presents an iterative decoding method for Reed-Solomon (RS) codes. The proposed algorithm is a stochastic shifting based iterative decoding (SSID) algorithm which takes advantage of the cyclic structure of RS codes. The performances of different updating schemes are compared. Simulation results show that this method provides significant gain over hard decision decoding and is superior to some other popular soft decision methods for short RS codes.

Journal ArticleDOI
TL;DR: This work proposes PA codes as a class of prospective codes with good performance, low decoding complexity, regular structure, and flexible rate adaptivity for all rates above 1/2, and shows that these codes provide performance similar to turbo codes but with significantly less decoding complexity and with a lower error floor.
Abstract: We propose a novel class of provably good codes which are a serial concatenation of a single-parity-check (SPC)-based product code, an interleaver, and a rate-1 recursive convolutional code. The proposed codes, termed product accumulate (PA) codes, are linear time encodable and linear time decodable. We show that the product code by itself does not have a positive threshold, but a PA code can provide arbitrarily low bit-error rate (BER) under both maximum-likelihood (ML) decoding and iterative decoding. Two message-passing decoding algorithms are proposed and it is shown that a particular update schedule for these message-passing algorithms is equivalent to conventional turbo decoding of the serial concatenated code, but with significantly lower complexity. Tight upper bounds on the ML performance using Divsalar's (1999) simple bound and thresholds under density evolution (DE) show that these codes are capable of performance within a few tenths of a decibel away from the Shannon limit. Simulation results confirm these claims and show that these codes provide performance similar to turbo codes but with significantly less decoding complexity and with a lower error floor. Hence, we propose PA codes as a class of prospective codes with good performance, low decoding complexity, regular structure, and flexible rate adaptivity for all rates above 1/2.
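
A hedged sketch of the encoder chain the abstract describes, simplified to a single single-parity-check stage followed by an interleaver and a rate-1 accumulator. The actual PA construction uses an SPC-based product code as the outer stage; the block sizes and interleaver below are illustrative assumptions.

```python
# Hedged sketch of a product-accumulate style encoder: SPC outer stage ->
# random interleaver -> rate-1 accumulator 1/(1+D). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def spc_encode(bits, k):
    """Append one even-parity bit to every group of k information bits."""
    groups = bits.reshape(-1, k)
    parity = groups.sum(axis=1) % 2
    return np.hstack([groups, parity[:, None]]).ravel()

def accumulate(bits):
    """Rate-1 recursive 1/(1+D): running XOR of the input stream."""
    return np.cumsum(bits) % 2

def pa_encode(info_bits, k=4):
    outer = spc_encode(info_bits, k)            # one SPC stage here, for brevity
    interleaved = outer[rng.permutation(outer.size)]
    return accumulate(interleaved)

codeword = pa_encode(rng.integers(0, 2, 32), k=4)
print(codeword.size)  # 40 coded bits for 32 information bits (rate 4/5 in this toy setup)
```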

Journal ArticleDOI
TL;DR: An efficient maximum-likelihood decoding algorithm for decoding low-density parity-check codes over the binary-erasure channel (BEC) and the computational complexity of the proposed algorithm is analyzed.
Abstract: We propose an efficient maximum-likelihood (ML) decoding algorithm for decoding low-density parity-check (LDPC) codes over the binary-erasure channel (BEC). We also analyze the computational complexity of the proposed algorithm.
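
On the BEC, ML decoding reduces to solving a linear system over GF(2) for the erased positions. A naive Gaussian-elimination sketch follows; the paper's point is doing this with low complexity, which this cubic-time illustration does not attempt.

```python
# Hedged sketch of ML erasure decoding: restrict H to the erased positions and
# solve H_e x_e = H_k x_k over GF(2). Full column rank => unique recovery.
import numpy as np

def ml_bec_decode(H, received):
    """received: 0/1 values with -1 marking erasures; returns None if ambiguous."""
    x = received.copy()
    e = np.flatnonzero(x == -1)                  # erased positions
    kpos = np.flatnonzero(x != -1)               # known positions
    A = H[:, e] % 2
    b = (H[:, kpos] @ x[kpos]) % 2               # syndrome contributed by known bits
    A, b, row = A.copy(), b.copy(), 0
    for col in range(len(e)):                    # Gaussian elimination over GF(2)
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            return None                          # rank deficient: ML solution ambiguous
        A[[row, pivot]], b[[row, pivot]] = A[[pivot, row]], b[[pivot, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        row += 1
    x[e] = b[:len(e)]
    return x
```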

Journal ArticleDOI
TL;DR: This correspondence shows that q-ary RM codes are subfield subcodes of RS codes over F_(q^m) and presents a list-decoding algorithm, applicable to codes of any rate, that achieves an error-correction bound of n(1 - sqrt((n-d)/n)).
Abstract: The q-ary Reed-Muller (RM) codes RM_q(u,m) of length n = q^m are a generalization of Reed-Solomon (RS) codes, which use polynomials in m variables to encode messages through functional encoding. Using an idea of reducing the multivariate case to the univariate case, randomized list-decoding algorithms for RM codes were given in earlier work. The algorithm of Sudan et al. (1999) is an improvement of the earlier algorithm; it is applicable to codes RM_q(u,m) with u

Journal ArticleDOI
TL;DR: To evaluate decoding capability, a probabilistic technique is developed that disintegrates decoding into a sequence of recursive steps and subsequent outputs can be tightly evaluated under the assumption that all preceding decodings are correct.
Abstract: Recursive decoding techniques are considered for Reed-Muller (RM) codes of growing length n and fixed order r. An algorithm is designed that has complexity of order n log n and corrects most error patterns of weight up to n(1/2 - e) given that e exceeds n^(-1/2r). This improves the asymptotic bounds known for decoding RM codes with nonexponential complexity. To evaluate decoding capability, we develop a probabilistic technique that disintegrates decoding into a sequence of recursive steps. Although dependent, subsequent outputs can be tightly evaluated under the assumption that all preceding decodings are correct. In turn, this allows us to employ second-order analysis and find the error weights for which the decoding error probability vanishes on the entire sequence of decoding steps as the code length n grows.
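
For context, here is a hedged sketch of the classical hard-decision recursive decoder built on the Plotkin (u, u+v) decomposition that this line of work refines. The paper's algorithm and its probabilistic analysis operate on soft information and are not reproduced; the crude combining rule below is an assumption made only to keep the sketch short.

```python
# Hedged sketch of hard-decision recursive (Plotkin (u, u+v)) decoding of RM(r, m).
# Returns a codeword estimate of length 2^m, not a message; textbook baseline only.
import numpy as np

def rm_recursive_decode(y, r, m):
    y = np.asarray(y, dtype=np.uint8)
    if r == 0:                                   # repetition code: majority vote
        bit = 1 if y.sum() * 2 > y.size else 0
        return np.full(y.size, bit, dtype=np.uint8)
    if r == m:                                   # full space: every word is a codeword
        return y.copy()
    half = y.size // 2
    y_left, y_right = y[:half], y[half:]
    v_hat = rm_recursive_decode(y_left ^ y_right, r - 1, m - 1)
    # two noisy looks at u: y_left and y_right ^ v_hat; combine them crudely,
    # breaking ties toward the left half (an assumption of this sketch)
    u_votes = y_left.astype(int) + (y_right ^ v_hat)
    u_hard = np.where(u_votes == 1, y_left, u_votes // 2).astype(np.uint8)
    u_hat = rm_recursive_decode(u_hard, r, m - 1)
    return np.concatenate([u_hat, u_hat ^ v_hat])
```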

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A new modified Berlekamp-Massey algorithm for correcting rank errors and column erasures is described, which is about half as complex as the known algorithms.
Abstract: This paper describes the decoding of Rank-Codes with different decoding algorithms. A new modified Berlekamp-Massey algorithm for correcting rank errors and column erasures is described. These algorithms consist of two decoding steps. The first step is the puncturing of the code and decoding in the punctured code. The second step is the column erasure decoding in the original code. This decoding approach is about half as complex as the known algorithms.

Patent
15 Apr 2004
TL;DR: In this paper, a video encoder uses two-layer run level coding to reduce bitrate for frequency transform coefficients in a quick and efficient manner, and a video decoder uses corresponding two layer decoding.
Abstract: Entropy coding and decoding techniques are described, which may be implemented separately or in combination. For example, a video encoder uses two-layer run level coding to reduce bitrate for frequency transform coefficients in a quick and efficient manner, and a video decoder uses corresponding two-layer run level decoding. This two-layer coding/decoding can be generalized to more than two layers of run level coding/decoding. The video encoder and decoder exploit common patterns in run level information to reduce code table size and create opportunities for early termination of decoding. Using zoned Huffman code tables helps limit overall table size while still providing a level of adaptivity in encoding and decoding. Using embedded Huffman code tables allows the encoder and decoder to reuse codes for 8×8, 8×4, 4×8, and 4×4 blocks.
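
A hedged sketch of plain (single-layer) run-level coding of quantized transform coefficients, only to illustrate the run/level idea; the patent's two-layer scheme and its zoned and embedded Huffman tables are not reproduced.

```python
# Hedged sketch: each nonzero coefficient ("level") is paired with the count of
# zeros ("run") preceding it; trailing zeros are kept as a separate count.
def run_level_encode(coeffs):
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))               # (zero run length, level)
            run = 0
    return pairs, run

def run_level_decode(pairs, trailing_zeros):
    out = []
    for run, level in pairs:
        out.extend([0] * run)
        out.append(level)
    out.extend([0] * trailing_zeros)
    return out

coeffs = [7, 0, 0, -2, 0, 1, 0, 0, 0, 0]
pairs, tz = run_level_encode(coeffs)
assert run_level_decode(pairs, tz) == coeffs
print(pairs)  # [(0, 7), (2, -2), (1, 1)]
```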

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A method is presented for constructing LDPC codes with excellent performance, simple hardware implementation, low encoder complexity, and which can be concisely documented.
Abstract: A method is presented for constructing LDPC codes with excellent performance, simple hardware implementation, low encoder complexity, and which can be concisely documented. The simple code structure is achieved by using a base graph, expanded with circulants. The base graph is chosen by computer search using simulated annealing, driven by density evolution's decoding threshold as determined by the reciprocal channel approximation. To build a full parity check matrix, each edge of the base graph is replaced by a circulant permutation, chosen to maximize loop length by using a Viterbi-like algorithm.

Journal ArticleDOI
TL;DR: This paper develops codes suitable for iterative decoding using the sum-product algorithm that can achieve improved error-correction performance over randomly constructed LDPC codes and, in some cases, achieve this with a significant decrease in decoding complexity.
Abstract: This paper develops codes suitable for iterative decoding using the sum-product algorithm. By considering a large class of combinatorial structures, known as partial geometries, we are able to define classes of low-density parity-check (LDPC) codes, which include several previously known families of codes as special cases. The existing range of algebraic LDPC codes is limited, so the new families of codes obtained by generalizing to partial geometries significantly increase the range of choice of available code lengths and rates. We derive bounds on minimum distance, rank, and girth for all the codes from partial geometries, and present constructions and performance results for the classes of partial geometries which have not previously been proposed for use with iterative decoding. We show that these new codes can achieve improved error-correction performance over randomly constructed LDPC codes and, in some cases, achieve this with a significant decrease in decoding complexity.

Journal ArticleDOI
TL;DR: A novel reliability ratio based weighted bit-flipping decoding scheme is proposed for low-density parity-check codes, achieving a coding gain of 1 dB when communicating over an AWGN channel, while maintaining the same decoding complexity.
Abstract: A novel reliability ratio based weighted bit-flipping decoding scheme is proposed for low-density parity-check codes. A coding gain of 1 dB is achieved in comparison to the weighted bit-flipping scheme, when communicating over an AWGN channel, while maintaining the same decoding complexity.

Journal ArticleDOI
Zhan Guo, P. Nilsson
TL;DR: Simulation results show that modifications to the Schnorr-Euchner decoding algorithm reduce the algorithm complexity efficiently, with only a small degradation in bit error rate at high signal to noise ratios.
Abstract: A new reduced-complexity Schnorr-Euchner decoding algorithm is proposed in this letter for uncoded multi-input multi-output systems with q-QAM (q=4,16,...) modulation. Furthermore, a Fano-like metric bias is introduced to the algorithm from the perspective of sequential decoding, as well as an early termination technique. Simulation results show that these modifications reduce the algorithm complexity efficiently, with only a small degradation in bit error rate at high signal to noise ratios.

Journal ArticleDOI
TL;DR: A new explicit error-correcting code based on Trevisan's extractor that can handle high-noise, almost-optimal rate list-decodable codes over large alphabets and soft decoding is proposed.
Abstract: We study error-correcting codes for highly noisy channels. For example, every received signal in the channel may originate from some half of the symbols in the alphabet. Our main conceptual contribution is an equivalence between error-correcting codes for such channels and extractors. Our main technical contribution is a new explicit error-correcting code based on Trevisan's extractor that can handle such channels, and even noisier ones. Our new code has polynomial-time encoding and polynomial-time soft-decision decoding. We note that Reed-Solomon codes cannot handle such channels, and our study exposes some limitations on list decoding of Reed-Solomon codes. Another advantage of our equivalence is that when the Johnson bound is restated in terms of extractors, it becomes the well-known Leftover Hash Lemma. This yields a new proof of the Johnson bound which applies to large alphabets and soft decoding. Our explicit codes are useful in several applications. First, they yield algorithms to extract many hardcore bits using few auxiliary random bits. Second, they are the key tool in a recent scheme to compactly store a set of elements in a way that membership in the set can be determined by looking at only one bit of the representation. Finally, they are the basis for the recent construction of high-noise, almost-optimal rate list-decodable codes over large alphabets.

Patent
09 Feb 2004
TL;DR: In this article, various modifications to conventional information coding schemes that result in an improvement in one or more performance measures for a given coding scheme are presented. But, these modifications are not applicable to the LDPC encoder/decoder pairs.
Abstract: Various modifications to conventional information coding schemes that result in an improvement in one or more performance measures for a given coding scheme. Some examples are directed to improved decoding techniques for linear block codes, such as low-density parity-check (LDPC) codes. In one example, modifications to a conventional belief-propagation (BP) decoding algorithm for LDPC codes significantly improve the performance of the decoding algorithm so as to more closely approximate that of the theoretically optimal maximum-likelihood (ML) decoding scheme. BP decoder performance generally is improved for lower code block lengths, and significant error floor reduction or elimination may be achieved for higher code block lengths. In one aspect, significantly improved performance of a modified BP algorithm is achieved while at the same time essentially maintaining the benefits of relative computational simplicity and execution speed of a conventional BP algorithm as compared to an ML decoding scheme. In another aspect, modifications for improving the performance of conventional BP decoders are universally applicable to “off the shelf” LDPC encoder/decoder pairs. Furthermore, the concepts underlying the various methods and apparatus disclosed herein may be more generally applied to various decoding schemes involving iterative decoding algorithms and message-passing on graphs, as well as coding schemes other than LDPC codes to similarly improve their performance. Exemplary applications for improved coding schemes include wireless (mobile) networks, satellite communication systems, optical communication systems, and data recording and storage systems (e.g., CDs, DVDs, hard drives, etc.).

Journal ArticleDOI
TL;DR: A modified algorithm for decoding of low-density parity-check codes over finite-state binary Markov channels that outperforms systems in which the channel statistics are not exploited in the decoding, even when the channel parameters are not known a priori at the decoder.
Abstract: We propose a modified algorithm for decoding of low-density parity-check codes over finite-state binary Markov channels. The proposed approach clearly outperforms systems in which the channel statistics are not exploited in the decoding, even when the channel parameters are not known a priori at the decoder.

Proceedings ArticleDOI
23 Mar 2004
TL;DR: A general iterative Slepian-Wolf decoding algorithm that incorporates the graphical structure of all the encoders and operates in a 'turbo-like' fashion, and a linear programming relaxation to maximum-likelihood sequence decoding that exhibits the ML-certificate property.
Abstract: We introduce three new innovations for compression using LDPCs for the Slepian-Wolf problem. The first is a general iterative Slepian-Wolf decoding algorithm that incorporates the graphical structure of all the encoders and operates in a 'turbo-like' fashion. The second innovation introduces source-splitting to enable low-complexity pipelined implementations of Slepian-Wolf decoding at rates besides corner points of the Slepian-Wolf region. This innovation can also be applied to single-source block coding for reduced decoder complexity. The third approach is a linear programming relaxation to maximum-likelihood sequence decoding that exhibits the ML-certificate property. This can be used for decoding a single binary block-compressed source as well as decoding at vertex points for the binary Slepian-Wolf problem. All three of these innovations were motivated by recent analogous results in the channel coding domain.
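
Syndrome-based Slepian-Wolf coding with an LDPC parity-check matrix H transmits only s = Hx and lets the decoder pick the sequence consistent with s that best matches the side information. The brute-force decoder below is exponential and stands in for the paper's iterative and LP decoders purely for illustration.

```python
# Hedged sketch of syndrome-based Slepian-Wolf compression with an LDPC matrix.
import itertools
import numpy as np

def sw_encode(H, x):
    return H @ x % 2                              # compressed representation (syndrome)

def sw_decode_bruteforce(H, s, side_info):
    """Find the sequence with syndrome s closest (in Hamming distance) to the side info."""
    n = H.shape[1]
    best, best_dist = None, n + 1
    for cand in itertools.product([0, 1], repeat=n):
        cand = np.array(cand, dtype=np.uint8)
        if np.array_equal(H @ cand % 2, s):
            dist = int(np.sum(cand ^ side_info))
            if dist < best_dist:
                best, best_dist = cand, dist
    return best
```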

Proceedings ArticleDOI
27 Jun 2004
TL;DR: Simulation results show that the new method provides significant gain over hard decision decoding (HDD) and compares favorably with other popular soft decision decoding methods.
Abstract: We present a soft decision decoding algorithm for Reed Solomon (RS) codes using their binary image representations. The novelty of the proposed decoding algorithm is in reducing the submatrix corresponding to the less reliable bits to a sparse nature prior to each decoding iteration and in adapting the parity check matrix from one iteration to another. Simulation results show that the new method provides significant gain over hard decision decoding (HDD) and compares favorably with other popular soft decision decoding methods [R. Koetter et al., 2003].
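
A hedged sketch of the matrix-adaptation step suggested by the abstract: before each iteration, Gaussian elimination over GF(2) makes the columns of the binary-image parity-check matrix at the least reliable positions sparse (unit weight), so iterative decoding concentrates on the reliable part. The reliability ordering and tie handling are assumptions, and the message-passing step itself is omitted.

```python
# Hedged sketch of adapting the parity-check matrix to the least reliable bits.
import numpy as np

def adapt_parity_check(H, reliabilities):
    """Sparsify the columns of H at the least reliable positions (up to m of them)."""
    H = H.copy() % 2
    m, n = H.shape
    order = np.argsort(reliabilities)             # least reliable positions first
    row = 0
    for col in order:
        if row >= m:
            break
        pivot = next((r for r in range(row, m) if H[r, col]), None)
        if pivot is None:
            continue                              # dependent column, move to the next one
        H[[row, pivot]] = H[[pivot, row]]
        for r in range(m):
            if r != row and H[r, col]:
                H[r] ^= H[row]                    # clear the column elsewhere
        row += 1
    return H
```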

Proceedings ArticleDOI
E. Erez, Meir Feder
27 Jun 2004
TL;DR: Convolutional network codes are considered for cyclic graphs, and coefficients of lower polynomial degree are preferred in order to minimize the memory and overhead.
Abstract: Convolutional network codes (CNCs) are considered for cyclic graphs. In a CNC, each node receives several streams and generates output streams whose current symbols depend on the current input symbols and on previous input symbols held in the node's memory. A multicast CNC can be constructed using an algorithm; in order to minimize the memory and overhead, coefficients of lower polynomial degree are preferred. For a CNC, the overhead is the initial delay before the sinks start receiving symbols. A CNC with a sequential decoder achieves good performance for some networks.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: It is shown that the error probability for decoding interleaved Reed-Solomon Codes with the decoder found by Bleichenbacher et al. is upper bounded by O(1/q), independently of n.
Abstract: We show that the error probability for decoding interleaved Reed-Solomon codes with the decoder found by Bleichenbacher et al. (Ref. 1) is upper bounded by O(1/q), independently of n. The decoding algorithm presented here is similar to that of standard RS codes. It involves computing the error-locator polynomials, which are found by computing the right kernel of an associated matrix. The correct solution is always in the right kernel, and so we can decode correctly if the right kernel is one-dimensional.

Journal ArticleDOI
TL;DR: An efficient decoding algorithm for turbo product codes as introduced by Pyndiah has no performance degradation and reduces the complexity of the original decoder by an order of magnitude.
Abstract: In this letter, we propose an efficient decoding algorithm for turbo product codes as introduced by Pyndiah. The proposed decoder has no performance degradation and reduces the complexity of the original decoder by an order of magnitude. We concentrate on extended Bose-Chaudhuri-Hocquengem codes as the constituent row and column codes because of their already low implementation complexity. For these component codes, we observe that the weight and reliability factors can be fixed, and that there is no need for normalization. Furthermore, as opposed to previous efficient decoders, the newly proposed decoder naturally scales with a test-pattern parameter p that can change as a function of iteration number, i.e., the efficient Chase algorithm presented here uses conventionally ordered test patterns, and the syndromes, even parities, and extrinsic metrics are obtained with a minimum number of operations.
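
A hedged sketch of Chase-style test-pattern decoding, the component-decoder engine inside Pyndiah-type turbo product decoding: flip the p least reliable bits in all combinations, run the hard-decision component decoder on each pattern, and keep the most likely candidate. Here `hard_decoder` is a hypothetical stand-in for an extended BCH decoder, and the metric is a generic correlation-style choice rather than the letter's exact formulation.

```python
# Hedged sketch of Chase test-pattern decoding around a hard-decision decoder.
import itertools
import numpy as np

def chase_decode(llr, hard_decoder, p=4):
    """llr: channel LLRs; hard_decoder: returns a codeword array or None on failure."""
    hard = (llr < 0).astype(np.uint8)
    weak = np.argsort(np.abs(llr))[:p]             # p least reliable positions
    best, best_metric = None, np.inf
    for flips in itertools.product([0, 1], repeat=p):
        test = hard.copy()
        test[weak] ^= np.array(flips, dtype=np.uint8)
        cand = hard_decoder(test)                   # hard-decision component decoding
        if cand is None:
            continue                                # decoding failure for this pattern
        metric = np.sum(np.abs(llr)[cand != hard])  # penalty for disagreeing with reliable bits
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```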

Proceedings ArticleDOI
23 May 2004
TL;DR: This paper explores the design spaces of both serial and parallel MAP decoders using graphical analysis and several existing designs are compared, and three new parallel decoding schemes are presented.
Abstract: Turbo codes are one of the most powerful error correcting codes. The VLSI implementation of Turbo codes for higher decoding speed requires use of parallel architectures. This paper explores the design spaces of both serial and parallel MAP decoders using graphical analysis. Several existing designs are compared, and three new parallel decoding schemes are presented.