
Showing papers on "Sequential decoding published in 2006"


Journal ArticleDOI
TL;DR: The excellent performance-complexity tradeoff achieved by the proposed MMSE-DFE Fano decoder is established via simulation results and analytical arguments in several multiple-input multiple-output (MIMO) and intersymbol interference (ISI) scenarios.
Abstract: We consider receiver design for coded transmission over linear Gaussian channels. We restrict ourselves to the class of lattice codes and formulate the joint detection and decoding problem as a closest lattice point search (CLPS). Here, a tree search framework for solving the CLPS is adopted. In our framework, the CLPS algorithm is decomposed into the preprocessing and tree search stages. The role of the preprocessing stage is to expose the tree structure in a form matched to the search stage. We argue that the forward and feedback (matrix) filters of the minimum mean-square error decision feedback equalizer (MMSE-DFE) are instrumental for solving the joint detection and decoding problem in a single search stage. It is further shown that MMSE-DFE filtering allows for solving underdetermined linear systems and using lattice reduction methods to diminish complexity, at the expense of a marginal performance loss. For the search stage, we present a generic method, based on the branch and bound (BB) algorithm, and show that it encompasses all existing sphere decoders as special cases. The proposed generic algorithm further allows for an interesting classification of tree search decoders, sheds more light on the structural properties of all known sphere decoders, and inspires the design of more efficient decoders. In particular, an efficient decoding algorithm that resembles the well-known Fano sequential decoder is identified. The excellent performance-complexity tradeoff achieved by the proposed MMSE-DFE Fano decoder is established via simulation results and analytical arguments in several multiple-input multiple-output (MIMO) and intersymbol interference (ISI) scenarios.

334 citations
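The closest lattice point search at the heart of this framework can be illustrated with a minimal depth-first sphere decoder over a finite alphabet after QR preprocessing. This is a generic sketch (the function name and toy setup are ours), not the paper's MMSE-DFE Fano decoder:

```python
import numpy as np

def sphere_decode(R, y, alphabet):
    """Depth-first closest-point search: minimize ||y - R s||^2 over s in alphabet^n,
    with R upper triangular (e.g., from a QR decomposition of the channel matrix).
    Branches are pruned as soon as the partial metric exceeds the best metric found."""
    n = R.shape[0]
    best = {"metric": float("inf"), "s": None}

    def search(level, partial, metric):
        if metric >= best["metric"]:
            return  # prune this branch of the tree
        if level < 0:
            best["metric"], best["s"] = metric, partial.copy()
            return
        for sym in alphabet:
            partial[level] = sym
            # Residual of equation `level`, which depends only on symbols level..n-1
            # because R is upper triangular.
            r = y[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, metric + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"]
```

Because pruning never discards a branch that could beat the incumbent, the search returns the exact closest point, matching an exhaustive search over the alphabet.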


Book
20 Jun 2006
TL;DR: Upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs, establishing the goodness of linear codes under optimal maximum-likelihood (ML) decoding.
Abstract: This article is focused on the performance evaluation of linear codes under optimal maximum-likelihood (ML) decoding. Though the ML decoding algorithm is prohibitively complex for most practical codes, performance analysis under ML decoding makes it possible to predict a code's performance without resorting to computer simulations. It also provides a benchmark for testing the sub-optimality of iterative (or other practical) decoding algorithms. This analysis also establishes the goodness of linear codes (or ensembles), determined by the gap between their achievable rates under optimal ML decoding and information-theoretic limits. In this article, upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds, with the focus on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we address bounds based on de Caen's inequality and their improvements, and also consider sphere-packing bounds with their recent improvements targeting codes of moderate block lengths.

190 citations


Journal ArticleDOI
TL;DR: An iterative algorithm is presented for soft-input soft-output (SISO) decoding of Reed-Solomon (RS) codes that uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix of the RS code.
Abstract: An iterative algorithm is presented for soft-input soft-output (SISO) decoding of Reed-Solomon (RS) codes. The proposed iterative algorithm uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix of the RS code. The novelty is in reducing a submatrix of the binary parity-check matrix that corresponds to the less reliable bits to a sparse form before the SPA is applied at each iteration. The proposed algorithm can be geometrically interpreted as a two-stage gradient descent with an adaptive potential function. This adaptive procedure is crucial to the convergence behavior of the gradient descent algorithm and, therefore, significantly improves the performance. Simulation results show that the proposed decoding algorithm and its variations provide significant gain over hard-decision decoding (HDD) and compare favorably with other popular soft-decision decoding methods.

184 citations
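The matrix-adaptation step described above can be sketched as Gaussian elimination over GF(2) that turns the columns of the parity-check matrix at the least reliable bit positions into unit-weight columns. A toy illustration; the function name and the Hamming-code `H` used as a running example are ours, not from the paper:

```python
import numpy as np

def adapt_parity_check(H, llr):
    """Row-reduce H over GF(2) so that columns at the least reliable positions
    (smallest |LLR|) become unit-weight, in the spirit of adaptive-BP decoding.
    Row operations preserve the row space, so the code is unchanged."""
    H = H.copy() % 2
    m = H.shape[0]
    used_rows = []
    for col in np.argsort(np.abs(llr)):          # least reliable positions first
        pivot = next((r for r in range(m)
                      if r not in used_rows and H[r, col] == 1), None)
        if pivot is None:
            continue                              # column depends on earlier pivots
        for r in range(m):                        # clear the column everywhere else
            if r != pivot and H[r, col] == 1:
                H[r] ^= H[pivot]
        used_rows.append(pivot)
        if len(used_rows) == m:
            break
    return H

# Hamming (7,4) parity-check matrix, used here purely as a small example.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
```

After adaptation, the unreliable bits each appear in a single check, which keeps the SPA from reinforcing errors through short cycles among low-reliability positions.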


Journal ArticleDOI
TL;DR: This letter presents the first successful method for iterative stochastic decoding of state-of-the-art low-density parity-check (LDPC) codes and has a significant potential for high-throughput and/or low complexity iterative decoding.
Abstract: This letter presents the first successful method for iterative stochastic decoding of state-of-the-art low-density parity-check (LDPC) codes. The proposed method shows the viability of the stochastic approach for decoding LDPC codes on factor graphs. In addition, simulation results for LDPC codes of length 200 and 1024 demonstrate the near-optimal performance of this method with respect to sum-product decoding. The proposed method has significant potential for high-throughput and/or low-complexity iterative decoding.

184 citations


Journal ArticleDOI
TL;DR: The Viterbi algorithm is now used in most digital cellular phones and digital satellite receivers as well as in such diverse fields as magnetic recording, voice recognition, and DNA sequence analysis.
Abstract: This paper describes how Andrew J. Viterbi developed a non-sequential decoding algorithm which proved useful in showing the superiority of convolutional codes over block codes for a given degree of decoding complexity. The Viterbi algorithm is now used in most digital cellular phones and digital satellite receivers as well as in such diverse fields as magnetic recording, voice recognition, and DNA sequence analysis.

151 citations
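As an illustration of the algorithm's core idea, here is a minimal hard-decision Viterbi decoder for a textbook rate-1/2, constraint-length-3 convolutional code (generators 7, 5 in octal). This is a toy sketch, not code from the paper:

```python
def conv_encode(bits, state=0):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal, constraint length 3."""
    out = []
    for b in bits:
        s = (b << 2) | state                      # 3-bit window: new bit + 2-bit state
        out += [(s ^ (s >> 1) ^ (s >> 2)) & 1,    # generator 111
                (s ^ (s >> 2)) & 1]               # generator 101
        state = s >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding: minimum-Hamming-distance path in the trellis."""
    paths = {0: (0, [])}                          # state -> (metric, decoded bits)
    for t in range(n_bits):
        nxt = {}
        for state, (metric, bits) in paths.items():
            for b in (0, 1):
                s = (b << 2) | state
                expect = [(s ^ (s >> 1) ^ (s >> 2)) & 1, (s ^ (s >> 2)) & 1]
                m = metric + sum(e != r for e, r in zip(expect, received[2*t:2*t+2]))
                ns = s >> 1
                if ns not in nxt or m < nxt[ns][0]:   # keep the survivor per state
                    nxt[ns] = (m, bits + [b])
        paths = nxt
    return min(paths.values(), key=lambda v: v[0])[1]
```

Because only one survivor per state is kept at each depth, the complexity is linear in the message length, unlike the exponential tree search that sequential decoders prune heuristically.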


Journal ArticleDOI
TL;DR: This paper introduces a method to construct high coding gain lattices with low decoding complexity based on LDPC codes and applies Construction D', due to Bos, Conway, and Sloane, to a set of parity checks defining a family of nestedLDPC codes to construct such lattices.
Abstract: Low-density parity-check (LDPC) codes can have impressive performance under iterative decoding algorithms. In this paper we introduce a method to construct high-coding-gain lattices with low decoding complexity based on LDPC codes. To construct such lattices we apply Construction D', due to Bos, Conway, and Sloane, to a set of parity checks defining a family of nested LDPC codes. For the decoding algorithm, we generalize the application of the max-sum algorithm to the Tanner graph of lattices. Bounds on the decoding complexity are derived, and our analysis shows that using LDPC codes results in low decoding complexity for the proposed lattices. The progressive edge growth (PEG) algorithm is then extended to construct a class of nested regular LDPC codes which are in turn used to generate low-density parity-check lattices. Using this approach, a class of two-level lattices is constructed. The performance of this class improves as the dimension increases and is within 3 dB of the Shannon limit for error probabilities of about 10^-6, while the decoding complexity remains quite manageable even for dimensions of a few thousand.

119 citations


Journal ArticleDOI
TL;DR: Analysis using EXIT charts shows that the TDMP algorithm offers a better performance-complexity tradeoff when the number of decoding iterations is small, which is attractive for high-speed applications.
Abstract: A turbo-decoding message-passing (TDMP) algorithm for sparse parity-check matrix (SPCM) codes such as low-density parity-check, repeat-accumulate, and turbo-like codes is presented. The main advantages of the proposed algorithm over the standard decoding algorithm are 1) its faster convergence speed by a factor of two in terms of decoding iterations, 2) improvement in coding gain by an order of magnitude at high signal-to-noise ratio (SNR), 3) reduced memory requirements, and 4) reduced decoder complexity. In addition, an efficient algorithm for message computation using simple "max" operations is also presented. Analysis using EXIT charts shows that the TDMP algorithm offers a better performance-complexity tradeoff when the number of decoding iterations is small, which is attractive for high-speed applications. A parallel version of the TDMP algorithm, in conjunction with architecture-aware (AA) SPCM codes, which have embedded structure that enables efficient high-throughput decoder implementation, is also presented. Design examples of AA-SPCM codes based on graphs with large girth demonstrate that AA-SPCM codes have very good error-correcting capability using the TDMP algorithm.

100 citations


Patent
Thierry Lestable1, Sung-Eun Park1
26 Oct 2006
TL;DR: An apparatus and a method for receiving a signal in a communication system using a Low Density Parity Check (LDPC) code is described in this paper.
Abstract: An apparatus and a method for receiving a signal in a communication system using a Low Density Parity Check (LDPC) code. The apparatus and the method includes decoding a received signal according to a hybrid decoding scheme, wherein the hybrid decoding scheme is generated by combining two of a first decoding scheme, a second decoding scheme, and a third decoding scheme.

96 citations


Journal ArticleDOI
TL;DR: This paper presents an iterative soft-decision decoding algorithm for Reed-Solomon (RS) codes offering both complexity and performance advantages over previously known decoding algorithms, and introduces the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft- decoder.
Abstract: In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon (RS) codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm which combines two powerful soft-decision decoding techniques which were previously regarded in the literature as competitive, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief-propagation based on adaptive parity-check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation-based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of RS codes.

96 citations


Journal ArticleDOI
TL;DR: These Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel.
Abstract: We consider the problem of optimally decoding a quantum error correction code—that is, to find the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead.

96 citations


Proceedings ArticleDOI
21 May 2006
TL;DR: A 650-Mbps bit-serial (480, 355) RS-based LDPC decoder implemented on a single Altera Stratix EP1S80 FPGA device is presented; to the authors' knowledge, this is the fastest FPGA-based LDPC decoder reported in the literature.
Abstract: We propose a bit-serial LDPC decoding scheme to reduce interconnect complexity in fully-parallel low-density parity-check decoders. Bit-serial decoding also facilitates efficient implementation of wordlength-programmable LDPC decoding which is essential for gear shift decoding. To simplify the implementation of bit-serial decoding we propose a new approximation to the check update function in the min-sum decoding algorithm. The new check update rule computes only the absolute minimum and applies a correction to outgoing messages if required. We present a 650-Mbps bit-serial (480, 355) RS-based LDPC decoder implemented on a single Altera Stratix EP1S80 FPGA device. To our knowledge, this is the fastest FPGA-based LDPC decoder reported in the literature.
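The check-node update that the approximation targets can be stated concretely. Below is the standard two-minimum min-sum check update for reference; the bit-serial decoder described above replaces the second minimum with the single absolute minimum plus a correction, whose exact form is not reproduced here. The function name is ours:

```python
def check_update(msgs):
    """Exact min-sum check-node update: the outgoing message on edge i has
    magnitude min_{j != i} |msgs[j]| and sign equal to the product of the other
    signs. Only the two smallest magnitudes (min1, min2) are needed: every edge
    gets min1 except the edge that achieves it, which gets min2. (The cited
    bit-serial decoder approximates min2 by a corrected min1.)"""
    signs = [1 if m >= 0 else -1 for m in msgs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    mags = [abs(m) for m in msgs]
    order = sorted(range(len(msgs)), key=lambda i: mags[i])
    min1_idx, min1, min2 = order[0], mags[order[0]], mags[order[1]]
    # total_sign * signs[i] equals the product of all signs except sign i.
    return [total_sign * signs[i] * (min2 if i == min1_idx else min1)
            for i in range(len(msgs))]
```

Keeping only one minimum instead of two is what makes the update attractive for bit-serial hardware, at the cost of the correction step on the outgoing messages.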

Journal ArticleDOI
TL;DR: Two-dimensional (2-D) correction schemes are proposed to improve the performance of conventional min-sum (MS) decoding of irregular low-density parity-check codes; the proposed method provides performance comparable to that of belief propagation (BP) decoding while requiring less complexity.
Abstract: Two-dimensional (2-D) correction schemes are proposed to improve the performance of conventional min-sum (MS) decoding of irregular low-density parity-check codes. An iterative procedure based on parallel differential optimization is presented to obtain the optimal 2-D factors. Both density evolution analysis and simulation show that the proposed method provides performance comparable to that of belief propagation (BP) decoding while requiring less complexity. Interestingly, the new method exhibits a lower error floor than that of BP decoding. With respect to conventional MS and 1-D normalized MS decoding, 2-D normalized MS offers better performance. The 2-D offset MS decoding exhibits similar behavior.

Journal ArticleDOI
TL;DR: A new early stopping criterion is proposed for low density parity check (LDPC) codes to reduce the number of decoding iterations while preserving a good decoding performance.
Abstract: A new early stopping criterion is proposed for low-density parity-check (LDPC) codes to reduce the number of decoding iterations while preserving good decoding performance. The new criterion is based on the convergence of the mean magnitude (CMM) of the log-likelihood ratio messages at the output of each decoding iteration. Information-theoretic support and extensive simulations are provided to demonstrate the efficiency of the criterion.
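The CMM rule can be sketched as follows: track the mean magnitude of the output LLRs at each iteration and stop once its relative change stays small. The threshold and window below are illustrative choices of ours, not values from the letter:

```python
def cmm_should_stop(mean_mags, eps=0.01, window=2):
    """Early stopping on the Convergence of the Mean Magnitude (CMM) of LLRs:
    stop once the relative change of the per-iteration mean |LLR| has been
    below `eps` for `window` consecutive iterations. `mean_mags` is the history
    of mean(|LLR|) values, one entry per decoding iteration."""
    if len(mean_mags) <= window:
        return False
    recent = mean_mags[-(window + 1):]
    return all(abs(recent[i + 1] - recent[i]) <= eps * abs(recent[i])
               for i in range(window))
```

In a decoder loop one would append `mean(abs(llr))` after each iteration and break out of the loop as soon as the predicate returns True, saving the remaining iterations on frames that have already converged.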

Proceedings ArticleDOI
22 Mar 2006
TL;DR: This paper proposes a new approach to channel code design for wireless network applications that provides flexibility in the design of error protection schemes for multi-terminal wireless networks.
Abstract: This paper proposes a new approach to channel code design for wireless network applications. The resulting nested codes can be decoded at different effective rates by different receivers, rates that depend on the prior knowledge possessed by each receiver; we say these codes have multiple interpretations. We have identified several applications in wireless networks where this property is useful. Specific nested code constructions as well as efficient soft and hard decision decoding algorithms are described. The concept of a nested code with multiple interpretations provides flexibility in the design of error protection schemes for multi-terminal wireless networks.

PatentDOI
Jung-Hoe Kim1, Miao Lei1, Oh Eun-Mi1
TL;DR: In this article, a decoding level generation unit produces decoding-level information that helps a bitstream including a number of audio channel signals and space information to be decoded into multiple channels, wherein the space information includes information about magnitude differences and/or similarities between channels.
Abstract: A system and method of encoding/decoding a multi-channel audio signal, including a decoding level generation unit producing decoding-level information that helps a bitstream including a number of audio channel signals and space information to be decoded into a number of audio channel signals, wherein the space information includes information about magnitude differences and/or similarities between channels, and an audio decoder decoding the bitstream according to the decoding-level information. Accordingly, even a single input bitstream can be decoded into a suitable number of channels depending on the type of speaker configuration used. Scalable channel decoding can be achieved by partially decoding an input bitstream. In scalable channel decoding, a decoder may set decoding levels and output audio channel signals according to the decoding levels, thereby reducing decoding complexity.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This paper addresses the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks; the system model considered admits code constructions with lower decoding complexity than codes based on some earlier system models.
Abstract: We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model wherein the source transmits a vector in the first hop and in the second hop the relays transmit a vector, which is a transformation of the received vector by a relay-specific unitary transformation. Design criteria are derived for this system model and codes are proposed that achieve full diversity. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity compared to codes based on some earlier system models.

Patent
14 Nov 2006
TL;DR: In this paper, a method, medium, and apparatus encoding and decoding an image in order to increase the decoding efficiency by performing binary-arithmetic coding/decoding on a binary value of a syntax element using a probability model having the same syntax element probability value for respective context index information of each of at least two image components.
Abstract: A method, medium, and apparatus encoding and/or decoding an image in order to increase encoding and decoding efficiency by performing binary-arithmetic coding/decoding on a binary value of a syntax element using a probability model having the same syntax element probability value for respective context index information of each of at least two image components.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the dynamics of a continuous-time analog implementation of iterative decoding, and showed that it can be approximated as the application of the well-known successive relaxation (SR) method for solving the fixed-point problem.
Abstract: Conventional iterative decoding with flooding or parallel schedule can be formulated as a fixed-point problem solved iteratively by a successive substitution (SS) method. In this paper, we investigate the dynamics of a continuous-time (asynchronous) analog implementation of iterative decoding, and show that it can be approximated as the application of the well-known successive relaxation (SR) method for solving the fixed-point problem. We observe that SR with the optimal relaxation factor can considerably improve the error-rate performance of iterative decoding for short low-density parity-check (LDPC) codes, compared with SS. Our simulation results for the application of SR to belief propagation (sum-product) and min-sum algorithms demonstrate improvements of up to about 0.7 dB over the standard SS for randomly constructed LDPC codes. The improvement in performance increases with the maximum number of iterations, and by accordingly reducing the relaxation factor. The asymptotic result, corresponding to an infinite maximum number of iterations and infinitesimal relaxation factor, represents the steady-state performance of analog iterative decoding. This means that under ideal circumstances, continuous-time (asynchronous) analog decoders can outperform their discrete-time (synchronous) digital counterparts by a large margin. Our results also indicate that with the assumption of a truncated Gaussian distribution for the random delays among computational modules, the error-rate performance of the analog decoder, particularly in steady state, is rather independent of the variance of the distribution. The proposed simple model for analog decoding, and the associated performance curves, can be used as an "ideal analog decoder" benchmark for performance evaluation of analog decoding circuits.
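The distinction between successive substitution and successive relaxation can be demonstrated on a scalar fixed-point problem. A minimal sketch; the toy map `f` below is ours, chosen so that plain substitution diverges while relaxation converges:

```python
def successive_relaxation(f, x0, beta, n_iter):
    """Solve the fixed-point problem x = f(x) by successive relaxation:
    x <- (1 - beta) * x + beta * f(x).  Setting beta = 1 recovers plain
    successive substitution (the flooding-schedule analogue in the paper)."""
    x = x0
    for _ in range(n_iter):
        x = (1 - beta) * x + beta * f(x)
    return x

# Toy map with fixed point x* = 1 whose slope (-1.5) makes successive
# substitution diverge; relaxation with beta = 0.3 gives effective slope
# 0.7 + 0.3 * (-1.5) = 0.25, so the iteration contracts toward x*.
f = lambda x: -1.5 * x + 2.5
```

This mirrors the paper's observation: a small relaxation factor damps the update enough to stabilize (and, for decoding, improve) the iteration, at the cost of needing more iterations to converge.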

Journal ArticleDOI
TL;DR: This paper approaches the soft-decision KV algorithm from the point of view of a communications systems designer who wants to know what benefits the algorithm can give, and how the extra complexity introduced by soft decoding can be managed at the systems level.
Abstract: Efficient soft-decision decoding of Reed–Solomon codes is made possible by the Koetter–Vardy (KV) algorithm which consists of a front-end to the interpolation-based Guruswami–Sudan list decoding algorithm. This paper approaches the soft-decision KV algorithm from the point of view of a communications systems designer who wants to know what benefits the algorithm can give, and how the extra complexity introduced by soft decoding can be managed at the systems level. We show how to reduce the computational complexity and memory requirements of the soft-decision front-end. Applications to wireless communications over Rayleigh fading channels and magnetic recording channels are proposed. For a high-rate (255, 239) Reed–Solomon code, 2–3 dB of soft-decision gain is possible over a Rayleigh fading channel using 16-ary quadrature amplitude modulation. For shorter codes and at lower rates, the gain can be as large as 9 dB. To lower the complexity of decoding on the systems level, the redecoding architecture is explored which uses only the appropriate amount of complexity to decode each packet. An error-detection criterion based on the properties of the KV decoder is proposed for the redecoding architecture. Queuing analysis verifies the practicality of the redecoding architecture by showing that only a modestly sized RAM buffer is required.

Journal ArticleDOI
TL;DR: It is shown that selecting easily constructable "expander"-style low-density parity check codes (LDPCs) as syndrome-formers admits a positive error exponent and therefore provably good performance for the Slepian-Wolf problem.
Abstract: This paper discusses the Slepian-Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple "source-splitting" strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian-Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the "min-sum" iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructable "expander"-style low-density parity check codes (LDPCs) as syndrome-formers admits a positive error exponent and therefore provably good performance.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: It is shown that any quantum convolutional code contains a subcode of finite index which has a non-catastrophic encoding circuit, and that the encoders and their inverses constructed by the method can naturally be applied online, i.e., qubits can be sent and received with constant delay.
Abstract: We present an algorithm to construct quantum circuits for encoding and inverse encoding of quantum convolutional codes. We show that any quantum convolutional code contains a subcode of finite index which has a non-catastrophic encoding circuit. Our work generalizes the conditions for non-catastrophic encoders derived in a paper by Ollivier and Tillich (quant-ph/0401134) which are applicable only for a restricted class of quantum convolutional codes. We also show that the encoders and their inverses constructed by our method can naturally be applied online, i.e., qubits can be sent and received with constant delay.


Proceedings ArticleDOI
09 Jul 2006
TL;DR: An efficient algorithm that computes the minimal polynomial of the ideal of interpolating polynomials with respect to a certain monomial order is presented, based on the theory of Gröbner bases of modules.
Abstract: A central problem of algebraic soft-decision decoding of Reed-Solomon codes is to find the minimal polynomial of the ideal of interpolating polynomials with respect to a certain monomial order. An efficient algorithm that solves the problem is presented based on the theory of Gröbner bases of modules.

Journal ArticleDOI
TL;DR: It is shown that exploiting extrinsic information transfer functions of single parity-check and repetition codes over the binary-input additive white Gaussian noise (biAWGN) channel allows more accurate prediction of the decoding threshold than earlier known GA methods.
Abstract: We exploit extrinsic information transfer functions of single parity-check and repetition codes over the binary input additive white Gaussian noise (biAWGN) channel, derived by the authors, for asymptotic performance analysis of belief propagation decoding of low-density parity-check codes. The approach is based on a Gaussian approximation (GA) of the density evolution algorithm using the mutual information measure. We show that this method allows more accurate prediction of the decoding threshold in the biAWGN channel than the earlier known GA methods.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work presents iterative soft-in soft-out (SISO) decoding algorithms in a common framework and presents a related algorithm - random redundant iterative decoding - that is both practically realizable and applicable to arbitrary linear block codes.
Abstract: A number of authors have recently considered iterative soft-in soft-out (SISO) decoding algorithms for classical linear block codes that utilize redundant Tanner graphs. Jiang and Narayanan presented a practically realizable algorithm that applies only to cyclic codes while Kothiyal et al. presented an algorithm that, while applicable to arbitrary linear block codes, does not imply a low-complexity implementation. This work first presents the aforementioned algorithms in a common framework and then presents a related algorithm - random redundant iterative decoding - that is both practically realizable and applicable to arbitrary linear block codes. Simulation results illustrate the successful application of the random redundant iterative decoding algorithm to the extended binary Golay code. Additionally, the proposed algorithm is shown to outperform Jiang and Narayanan's algorithm for a number of Bose-Chaudhuri-Hocquenghem (BCH) codes.

Patent
31 Mar 2006
TL;DR: In this article, a low-complexity MIMO detector that combines sphere decoding and m-algorithm approaches, while accounting for the effect of channel condition on the decoding operation, is provided.
Abstract: A method and system for detecting and decoding multiple signals. A low-complexity MIMO detector that combines sphere decoding and m-algorithm approaches, while accounting for the effect of channel condition on the decoding operation, is provided. Taking into account the channel condition effectively controls the size of the search tree, and consequently the search complexity, in an adaptive manner. The channel condition is exploited in the construction of the tree to manage the number of branches in the tree and to avoid undesirable growth.

Journal ArticleDOI
TL;DR: A recursive decoding algorithm is designed and its decoding threshold is derived for long RM codes; it corrects most error patterns of Euclidean weight of order √n/ln n, instead of the threshold √d/2 of bounded-distance decoding.
Abstract: Soft-decision decoding is considered for general Reed-Muller (RM) codes of length n and distance d used over a memoryless channel. A recursive decoding algorithm is designed and its decoding threshold is derived for long RM codes. The algorithm has complexity of order n ln n and corrects most error patterns of Euclidean weight of order √n/ln n, instead of the threshold √d/2 of bounded-distance decoding. Also, for long RM codes of fixed rate R, the new algorithm increases the decoding threshold of its hard-decision counterpart by a factor of 4/π.
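The recursive structure can be illustrated with a simplified successive-cancellation decoder built on the Plotkin (u, u+v) decomposition of RM codes, using a min-sum approximation for the v-channel. This is a sketch in the spirit of recursive RM decoding, not the paper's exact algorithm:

```python
def decode_rm(r, m, llr):
    """Recursive soft-decision decoding of RM(r, m) via the Plotkin (u, u+v)
    decomposition. llr has length 2**m (positive LLR favors bit 0); returns a
    codeword as a 0/1 list. Simplified sketch: the v-channel combination uses
    the min-sum approximation of the box-plus rule."""
    n = len(llr)
    if r == 0:                       # repetition code: decide by the LLR sum
        bit = 1 if sum(llr) < 0 else 0
        return [bit] * n
    if r == m:                       # full space: bitwise hard decision
        return [1 if l < 0 else 0 for l in llr]
    half = n // 2
    l1, l2 = llr[:half], llr[half:]
    # v-channel: LLR of v_i from the pair (u_i, u_i + v_i), min-sum box-plus.
    lv = [min(abs(a), abs(b)) * (1 if a * b >= 0 else -1)
          for a, b in zip(l1, l2)]
    v = decode_rm(r - 1, m - 1, lv)
    # u-channel: combine both halves given the decided v.
    lu = [a + (b if vi == 0 else -b) for a, b, vi in zip(l1, l2, v)]
    u = decode_rm(r, m - 1, lu)
    return u + [(ui + vi) % 2 for ui, vi in zip(u, v)]
```

The recursion bottoms out at repetition codes and the full space, exactly the two base cases where optimal soft decisions are trivial, which is what keeps the overall complexity near n ln n.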

Patent
01 Feb 2006
TL;DR: In this article, a decoding apparatus and method is described by which the decoder error occurrence probability is suppressed and a high decoding performance can be achieved, where the decoding apparatus diagonalizes a parity check matrix, updates LLR values and then adds a decoded word obtained by the decoding to a decoding word list.
Abstract: A decoding apparatus and method is disclosed by which the decoder error occurrence probability is suppressed and a high decoding performance can be achieved. An ABP decoding apparatus diagonalizes a parity check matrix, updates LLR values, decodes the LLR values and then adds a decoded word obtained by the decoding to a decoded word list. The ABP decoding apparatus repeats the decoding procedure as inner repetitive decoding by a predetermined number of times. Further, as the ABP decoding apparatus successively changes initial values for priority ranks of the LLR values, it repeats the inner repetitive decoding as outer repetitive decoding by a predetermined number of times. Then, the ABP decoding apparatus selects an optimum one of the decoded words from within the decoded word list obtained by the repeated inner repetitive decoding. The invention is applied to an error correction system.

Patent
Kai Yang1, Lin Wang1, Yi Lin1, Wei Yu1
12 Oct 2006
TL;DR: In this paper, a CABAC decoding system with at least a decoding unit group is proposed, where each decoding unit group includes N decoding units connected with each other, and each group receives parameter information for decoding bins and bit streams to be decoded.
Abstract: A CABAC decoding system includes at least a decoding unit group. Each decoding unit group includes N decoding units connected with each other. The M-th decoding unit receives parameter information for decoding bins and bit streams to be decoded, decodes the bins of the bit streams to be decoded, obtains the decoding result of the current decoding unit bin, and sends the updated parameter information to the (M+1)-th decoding unit and an output unit. The CABAC decoding system achieves a high decoding rate and keeps a reasonable cost of hardware resources, thereby providing a highly efficient and reasonable decoding solution.

K.K.Y. Wong1
01 Jan 2006
TL;DR: The Soft-Output M-Algorithm (SOMA) as mentioned in this paper is a reduced-complexity trellis decoder based on a sequential decoding technique known as the M-algorithm.
Abstract: The Soft-Output M-Algorithm (SOMA) is a reduced-complexity trellis decoder based on a sequential decoding technique known as the M-algorithm. Instead of extending all survivors from one trellis depth to the next, it extends only from the best M. The remaining survivors are terminated. The novelty of the SOMA is the use of terminated paths to obtain reliable soft-information. Soft-information is extracted from terminated paths through a simple update-and-discard procedure. It is found that using the M best fully-extended survivors alone is inadequate to deliver soft-information due to their similarity. On the other hand, terminated paths of a pruned trellis carry a significant amount of soft-information which should not be ignored for reliable bit-detection. The SOMA is particularly useful in reducing the complexity of an iterative receiver. Applications include turbo equalization of single-input single-output Inter-Symbol Interference (ISI) channels and Multiple-Input Multiple-Output (MIMO) frequency selective fading channels, iterative decoding of MIMO flat fading channels, and multi-user detection and equalization in coded Direct-Sequence Code-Division Multiple-Access (DS-CDMA) systems. For turbo equalization of a 16-tap ISI channel with convolutional coding, near-optimal bit-error-rate is achieved by retaining only 128 (out of 32,768) states. The SOMA is also applicable to reduce the complexity of tree decoding. For a coded MIMO system with 8 antennae transmitting 64-QAM symbols over a flat fading channel, the respective code-tree with 64^8 hypotheses can be decoded with the SOMA by retaining only 32 paths per tree depth. The complexity of multi-user detection for coded DS-CDMA systems for flat-fading channels can also be significantly reduced using the SOMA. High-order trellises with millions of states can also be decoded with modest complexity using the SOMA. The trellis is first expanded into an equivalent trellis/tree structure.
Then, the M-algorithm is applied twice: once to reduce the number of states, and again to reduce the number of branches emanating from each state. The proposed method gives promising results in equalizing a high-order modulated MIMO system with bit-interleaved coded modulation undergoing frequency selective fading. The method is also applicable to reduce the complexity of multi-user equalization of a coded DS-CDMA system undergoing frequency selective fading.
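The M-algorithm kernel underlying the SOMA (without the soft-output bookkeeping) can be sketched for bit detection over a toy 2-tap ISI channel. The function and channel below are illustrative, not from the thesis:

```python
def m_algorithm(received, channel, M, n_bits):
    """M-algorithm tree search: extend every survivor by each bit hypothesis,
    then keep only the M paths with the smallest accumulated squared error.
    `channel` is the ISI impulse response; the metric assumes BPSK mapping
    (bit 0 -> +1, bit 1 -> -1)."""
    survivors = [(0.0, [])]                    # (metric, bit path)
    for t in range(n_bits):
        extended = []
        for metric, path in survivors:
            for b in (0, 1):
                bits = path + [b]
                # Hypothesized channel output at time t (convolution with taps).
                y = sum(channel[k] * (1 - 2 * bits[t - k])
                        for k in range(len(channel)) if t - k >= 0)
                extended.append((metric + (received[t] - y) ** 2, bits))
        survivors = sorted(extended, key=lambda e: e[0])[:M]  # prune to best M
    return survivors[0][1]
```

Unlike Viterbi, the search is breadth-first over a tree with no state merging; M controls the complexity/performance tradeoff, and the terminated (pruned) paths are exactly what the SOMA additionally mines for soft information.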