
Showing papers on "Sequential decoding published in 1995"


Book
01 Feb 1995
TL;DR: A textbook treatment of error control coding for digital communication systems, covering Galois fields, linear block, cyclic, BCH and Reed-Solomon codes, convolutional codes with Viterbi and sequential decoding, and trellis coded modulation.
Abstract: 1. Error Control Coding for Digital Communication Systems. 2. Galois Fields. 3. Polynomials over Galois Fields. 4. Linear Block Codes. 5. Cyclic Codes. 6. Hadamard, Quadratic Residue, and Golay Codes. 7. Reed-Muller Codes. 8. BCH and Reed-Solomon Codes. 9. Decoding BCH and Reed-Solomon Codes. 10. The Analysis of the Performance of Block Codes. 11. Convolutional Codes. 12. The Viterbi Decoding Algorithm. 13. The Sequential Decoding Algorithms. 14. Trellis Coded Modulation. 15. Error Control for Channels with Feedback. 16. Applications. Appendices: A. Binary Primitive Polynomials. B. Add-on Tables and Vector Space Representations for GF(8) Through GF(1024). C. Cyclotomic Cosets Modulo 2^m-1. D. Minimal Polynomials for Elements in GF(2^m). E. Generator Polynomials of Binary BCH Codes of Lengths Through 511. Bibliography.

1,944 citations


Journal ArticleDOI
TL;DR: A modification of the Viterbi decoding algorithm (VA) for binary trellises which uses a priori or a posteriori information about the source bit probability for better decoding in addition to soft inputs and channel state information is proposed.
Abstract: Source and channel coding have been treated separately in most cases. It can be observed that most source coding algorithms for voice, audio and images leave residual correlation in certain bits, and transmission errors in these bits usually account for the significant errors in the reconstructed source signal. This paper proposes a modification of the Viterbi decoding algorithm (VA) for binary trellises which uses a priori or a posteriori information about the source bit probability for better decoding, in addition to soft inputs and channel state information. Analytical upper bounds on the BER of convolutional codes under this modified VA (APRI-VA) are given. The algorithm is combined with the soft-output Viterbi algorithm (SOVA) and an estimator for the residual correlation of the source bits to achieve source-controlled channel decoding for framed source bits. The description is simplified by an algebra for the log-likelihood ratio L(u)=log(P(u=+1)/P(u=-1)), which allows a clear definition of the "soft" values of source, channel, and decoded bits as well as a simplified description of the traceback version of the SOVA. Applications are given for PCM transmission and the full-rate GSM speech codec. For a PCM-coded, oversampled, bandlimited Gaussian source transmitted over Gaussian and Rayleigh channels with convolutional codes, the decoding errors are reduced by a factor of 4 to 5 when the APRI-SOVA is used instead of the VA. A simple dynamic Markov correlation estimator is used. With these receiver-only modifications the channel SNR in a bad mobile environment can be lowered by 2 to 4 dB while maintaining the same voice quality. Further applications are briefly discussed.
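
As a concrete illustration of the L-algebra mentioned above, the following sketch (our own, not the authors' code) computes L(u) and the LLR of a modulo-2 sum of two independent bits, together with the min-sum approximation commonly used in practice:

```python
import math

def llr(p_plus1: float) -> float:
    """L(u) = log(P(u=+1) / P(u=-1)) for a binary variable u."""
    return math.log(p_plus1 / (1.0 - p_plus1))

def soft_xor(l1: float, l2: float) -> float:
    """Exact LLR of u1 (+) u2 for independent bits: 2*atanh(tanh(l1/2)*tanh(l2/2))."""
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

def soft_xor_min_sum(l1: float, l2: float) -> float:
    """Widely used min-sum approximation of soft_xor."""
    sign = (1 if l1 >= 0 else -1) * (1 if l2 >= 0 else -1)
    return sign * min(abs(l1), abs(l2))

la, lb = llr(0.9), llr(0.8)
print(soft_xor(la, lb), soft_xor_min_sum(la, lb))
```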

476 citations


Journal ArticleDOI
Erdal Arikan
17 Sep 1995
TL;DR: The present approach enables the determination of the cutoff rate of sequential decoding in a simple manner and yields the (previously unknown) cutoff rate region of sequential decoding for multiaccess channels.
Abstract: Let (X,Y) be a pair of discrete random variables with X taking one of M possible values. Suppose the value of X is to be determined, given the value of Y, by asking questions of the form "Is X equal to x?" until the answer is "Yes". Let G(x|y) denote the number of guesses in any such guessing scheme when X=x, Y=y. We prove that $E[G(X|Y)^{\rho}] \ge (1+\ln M)^{-\rho} \sum_{y} \bigl[ \sum_{x} P_{X,Y}(x,y)^{1/(1+\rho)} \bigr]^{1+\rho}$ for any $\rho \ge 0$. This provides an operational characterization of Rényi's entropy. Next we apply this inequality to the estimation of the computational complexity of sequential decoding. For this, we regard X as the input and Y as the output of a communication channel. Given Y, the sequential decoding algorithm works essentially by guessing X, one value at a time, until the guess is correct. Thus the computational complexity of sequential decoding, which is a random variable, is given by a guessing function G(X|Y) that is defined by the order in which nodes in the tree code are hypothesized by the decoder. This observation, combined with the above lower bound on moments of G(X|Y), yields lower bounds on moments of computation in sequential decoding. The present approach enables the determination of the (previously known) cutoff rate of sequential decoding in a simple manner; it also yields the (previously unknown) cutoff rate region of sequential decoding for multiaccess channels. These results hold for memoryless channels with finite input alphabets.
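
The inequality above is easy to check numerically. The following sketch (our own; the joint distribution is an arbitrary toy example) evaluates E[G(X|Y)^ρ] under the optimal strategy, i.e. guessing x values in decreasing order of P(x,y), and compares it with the lower bound:

```python
import math

# Toy joint pmf P_{X,Y}(x, y); the values are arbitrary illustrations.
P = {("x1", "y1"): 0.30, ("x2", "y1"): 0.10,
     ("x1", "y2"): 0.15, ("x2", "y2"): 0.45}
rho = 1.0
M = len({x for (x, _) in P})
ys = {y for (_, y) in P}

# E[G(X|Y)^rho] under the optimal strategy: guess x in decreasing P(x, y).
moment = 0.0
for y in ys:
    ranked = sorted((x for (x, yy) in P if yy == y), key=lambda x: -P[(x, y)])
    moment += sum(P[(x, y)] * (g ** rho) for g, x in enumerate(ranked, start=1))

# Arikan's lower bound on the rho-th guessing moment.
bound = (1.0 + math.log(M)) ** (-rho) * sum(
    sum(P[(x, yy)] ** (1.0 / (1.0 + rho)) for (x, yy) in P if yy == y) ** (1.0 + rho)
    for y in ys)

print(f"E[G^rho] = {moment:.3f} >= bound = {bound:.3f}")
```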

358 citations


Journal ArticleDOI
TL;DR: A general framework, based on ideas of Tanner, for the description of codes and iterative decoding (“turbo coding”) is developed, which clarifies, in particular, which a priori probabilities are admissible and how they are properly dealt with.
Abstract: A general framework, based on ideas of Tanner, for the description of codes and iterative decoding (“turbo coding”) is developed. Just like trellis-based code descriptions are naturally matched to Viterbi decoding, code descriptions based on Tanner graphs (which may be viewed as generalized trellises) are naturally matched to iterative decoding. Two basic iterative decoding algorithms (which are versions of the algorithms of Berrou et al. and of Hagenauer, respectively) are shown to be natural generalizations of the forward-backward algorithm (Bahl et al.) and the Viterbi algorithm, respectively, to arbitrary Tanner graphs. The careful derivation of these algorithms clarifies, in particular, which a priori probabilities are admissible and how they are properly dealt with. For cycle codes (a class of binary linear block codes), a complete characterization is given of the error patterns that are corrected by the generalized Viterbi algorithm after infinitely many iterations.
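
For flavor, here is a minimal sum-product decoder operating on the Tanner graph of a small parity-check matrix (a sketch of our own; the matrix H, the flooding schedule, and the channel LLRs are illustrative choices, not taken from the paper):

```python
import math

H = [[1, 1, 0, 1, 0, 0],     # a toy 3x6 parity-check matrix
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def sum_product(llr_in, H, iters=10):
    m, n = len(H), len(H[0])
    c2b = [[0.0] * n for _ in range(m)]          # check-to-bit messages
    for _ in range(iters):
        # Bit-to-check: channel LLR plus the other checks' incoming messages.
        b2c = [[0.0] * n for _ in range(m)]
        for c in range(m):
            for b in range(n):
                if H[c][b]:
                    b2c[c][b] = llr_in[b] + sum(
                        c2b[k][b] for k in range(m) if H[k][b] and k != c)
        # Check-to-bit: the tanh rule over the other bits of the check.
        for c in range(m):
            for b in range(n):
                if H[c][b]:
                    prod = 1.0
                    for j in range(n):
                        if H[c][j] and j != b:
                            prod *= math.tanh(b2c[c][j] / 2.0)
                    prod = max(min(prod, 1 - 1e-12), -(1 - 1e-12))
                    c2b[c][b] = 2.0 * math.atanh(prod)
    post = [llr_in[b] + sum(c2b[c][b] for c in range(m) if H[c][b])
            for b in range(n)]
    return [0 if L >= 0 else 1 for L in post]    # sign convention: + means 0

print(sum_product([2.0, -0.9, 1.5, 1.0, -3.0, 0.8], H))
```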

354 citations


Journal ArticleDOI
H.-L. Lou
TL;DR: The article explains the basics of the Viterbi algorithm as applied to digital communication systems and to speech and character recognition, and focuses on the operations and the practical memory requirements to implement the Viterbi algorithm in real time.
Abstract: The Viterbi algorithm, an application of dynamic programming, is widely used for estimation and detection problems in digital communications and signal processing. It is used to detect signals in communication channels with memory, and to decode sequential error-control codes that are used to enhance the performance of digital communication systems. The Viterbi algorithm is also used in speech and character recognition tasks where the speech signals or characters are modeled by hidden Markov models. The article explains the basics of the Viterbi algorithm as applied to digital communication systems and to speech and character recognition. It also focuses on the operations and the practical memory requirements to implement the Viterbi algorithm in real time.
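
A minimal version of the algorithm the article explains, written as a textbook dynamic program over an HMM-style trellis (our own sketch; the state space and probabilities are toy values):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for `obs` (textbook dynamic programming)."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]   # path metrics
    back = [{}]                                                 # survivor pointers
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s], back[t][s] = prob, prev
    best = max(V[-1], key=V[-1].get)                            # trace back
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("H", "L")
start = {"H": 0.5, "L": 0.5}
trans = {"H": {"H": 0.7, "L": 0.3}, "L": {"H": 0.4, "L": 0.6}}
emit = {"H": {"A": 0.6, "B": 0.4}, "L": {"A": 0.2, "B": 0.8}}
print(viterbi("ABBA", states, start, trans, emit))
```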

253 citations


Proceedings ArticleDOI
23 Oct 1995
TL;DR: In this paper, the authors describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n. The encoding algorithm produces a set of l-bit packets of total length cn from an n-bit message, and the decoding algorithm is able to recover the message from any set of packets whose total length is r.
Abstract: An (n,c,l,r) erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of l-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r/l packets. We describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n.
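
The paper's linear-time construction is considerably more sophisticated, but the (n,c,l,r) interface can be illustrated with a toy single-parity packet code (our own sketch) that recovers any one lost packet:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one XOR parity packet; any single lost packet becomes recoverable."""
    return packets + [reduce(xor, packets)]

def decode(received: dict, total: int):
    """`received` maps packet index -> packet; at most one index may be missing."""
    missing = [i for i in range(total) if i not in received]
    if len(missing) > 1:
        raise ValueError("this toy code corrects only one erasure")
    if missing:
        # XOR of everything that arrived reconstructs the missing packet.
        received[missing[0]] = reduce(xor, received.values())
    return [received[i] for i in range(total - 1)]   # drop the parity packet

data = [b"ab", b"cd", b"ef"]
coded = encode(data)                                  # 4 packets of 2 bytes
rx = {i: p for i, p in enumerate(coded) if i != 1}    # packet 1 is lost
assert decode(rx, total=len(coded)) == data
```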

143 citations


Journal ArticleDOI
TL;DR: This paper provides a survey of the existing literature on the decoding of algebraic-geometric codes and shows what has been done, discusses what still has to be done, and poses some open problems.
Abstract: This paper provides a survey of the existing literature on the decoding of algebraic-geometric codes. Definitions, theorems, and cross references will be given. We show what has been done, discuss what still has to be done, and pose some open problems.

127 citations


Journal ArticleDOI
17 Sep 1995
TL;DR: A theory of minimal trellises for convolutional codes is developed, yielding a trellis-canonical generator matrix from which the minimal trellis of a convolutional code can be constructed directly.
Abstract: Convolutional codes have a natural, regular, trellis structure that facilitates the implementation of Viterbi's algorithm. Linear block codes also have a natural, though not in general a regular, "minimal" trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of an unenhanced Viterbi decoding algorithm can be accurately estimated by the number of trellis edge symbols per encoded bit. It would therefore appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations which are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the "minimal" trellis representation. Thus, ironically, we seem to know more about the minimal trellis representation for block codes than for convolutional codes. We provide a remedy by developing a theory of minimal trellises for convolutional codes. This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-canonical generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

105 citations


Journal ArticleDOI
TL;DR: The algorithm is designed for low decoding complexity, is applicable to all Reed-Muller codes, and gives better decoding performance than soft-decision bounded-distance decoding.
Abstract: Reed-Muller codes are constructed by generalized multiple concatenation of binary block codes of length 2. As a consequence of this construction, a new decoding procedure is derived that uses soft-decision information. The algorithm is designed for low decoding complexity and is applicable to all Reed-Muller codes. It gives better decoding performance than soft-decision bounded-distance decoding. Its decoding complexity is much lower than that of maximum-likelihood trellis decoding of Reed-Muller codes, especially for long codes.

102 citations


Journal ArticleDOI
TL;DR: A decoding algorithm for algebraic-geometric codes from regular plane curves, in particular the Hermitian curve, which corrects all error patterns of weight less than d*/2 with low complexity is presented.
Abstract: We present a decoding algorithm for algebraic-geometric codes from regular plane curves, in particular the Hermitian curve, which corrects all error patterns of weight less than d*/2 with low complexity. The algorithm is based on the majority scheme of Feng and Rao (1993) and uses a modified version of Sakata's (1988) generalization of the Berlekamp-Massey algorithm.

88 citations


Journal ArticleDOI
TL;DR: It is shown that any algebraic-geometric (AG) code can be expressed as a cross section of an extended multidimensional cyclic code, and that the decoding problem can be solved using Gröbner bases.
Abstract: It is proved that any algebraic-geometric (AG) code can be expressed as a cross section of an extended multidimensional cyclic code. Both AG codes and multidimensional cyclic codes are described by a unified theory of linear block codes defined over point sets: AG codes are defined over the points of an algebraic curve, and an m-dimensional cyclic code is defined over the points in m-dimensional space. The power of the unified theory is in its description of decoding techniques using Gröbner bases. In order to fit an AG code into this theory, a change of coordinates must be applied to the curve over which the code is defined so that the curve is in special position. For curves in special position, all computations can be performed with polynomials and this also makes it possible to use the theory of Gröbner bases. Next, a transform is defined for AG codes which generalizes the discrete Fourier transform. The transform is also related to a Gröbner basis, and is useful in setting up the decoding problem. In the decoding problem, a key step is finding a Gröbner basis for an error locator ideal. For AG codes, multidimensional cyclic codes, and indeed, any cross section of an extended multidimensional cyclic code, Sakata's algorithm can be used to find linear recursion relations which hold on the syndrome array. In this general context, the authors give a self-contained and simplified presentation of Sakata's algorithm, and present a general framework for decoding algorithms for this family of codes, in which the use of Sakata's algorithm is supplemented by a procedure for extending the syndrome array.

Proceedings ArticleDOI
27 Sep 1995
TL;DR: A novel real-time MAP algorithm (SMAP) is derived, and it is proved that in terms of hard decoding performance, the SMAP is equivalent to the VA with path memory truncation and best state decoding.
Abstract: Soft-output decoding algorithms have been attracting considerable attention for concatenated or iterative decoding systems. The most prominent is the soft-output Viterbi algorithm (SOVA), which can be regarded as an approximation to the optimum symbol-by-symbol detector, the symbol-by-symbol MAP algorithm. The MAP algorithm is a block detector, i.e. it requires complete reception of a terminated block of received symbols, which prevents an efficient real-time implementation. Therefore, up to now, the SOVA was considered to be the more attractive alternative. In this paper, we first introduce an algebraic formulation for the MAP. Using this formulation, a novel real-time MAP algorithm (SMAP) is derived, and it is proved that in terms of hard decoding performance, the SMAP is equivalent to the VA with path memory truncation and best-state decoding. The SMAP architectures presented here provide a competitive alternative to the known SOVA architectures for soft-output decoding applications.

Journal ArticleDOI
TL;DR: A modified Euclidean distance function that substantially simplifies the search for good codes has been derived and used to find new codes and consistently improves the performance as compared to coded schemes using binary convolutional codes with the same decoding complexity.
Abstract: Rate 1/2 convolutional codes over the ring of integers modulo M are combined with M-ary continuous phase modulation (CPM) schemes whose modulation indices are of the form h=1/M. An M-ary CPM scheme with h=1/M can be modeled by a continuous-phase encoder (CPE) followed by a memoryless modulator (MM), where the CPE is linear over the ring of integers modulo M. The fact that the convolutional code and the CPE are over the same algebra allows the state of the CPE to be fed back and used by the convolutional encoder. A modified Euclidean distance function that substantially simplifies the search for good codes has been derived and used to find new codes. Numerical results show that this approach consistently improves the performance as compared to coded schemes using binary convolutional codes with the same decoding complexity.

Journal ArticleDOI
V.B. Balakirsky
TL;DR: An upper bound on the maximal transmission rate over binary-input memoryless channels, given a fixed decoding decision rule, is derived; if the decision rule is equivalent to maximum-likelihood decoding, the bound coincides with the channel capacity.
Abstract: An upper bound on the maximal transmission rate over binary-input memoryless channels, provided that the decoding decision rule is given, is derived. If the decision rule is equivalent to maximum-likelihood decoding (matched decoding), then the bound coincides with the channel capacity. Otherwise (mismatched decoding), it coincides with a known lower bound.

Journal ArticleDOI
TL;DR: The authors extend this pragmatic approach to the case where the core of the trellis decoder is a Viterbi decoder for a punctured version of the de facto standard, rate 1/2 convolutional code.
Abstract: A single convolutional code of fixed rate can be punctured to form a class of higher rate convolutional codes. The authors extend this pragmatic approach to the case where the core of the trellis decoder is a Viterbi decoder for a punctured version of the de facto standard, rate 1/2 convolutional code.
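
Puncturing itself is a simple periodic deletion of coded bits. A sketch (our own; the pattern shown is a common rate-2/3 choice, not necessarily the one used in the paper):

```python
PATTERN = [(1, 1), (1, 0)]    # period 2: keep 3 of every 4 mother-code bits

def puncture(stream_a, stream_b, pattern=PATTERN):
    """Delete mother-code outputs periodically; rate 1/2 becomes 2/3 here."""
    out = []
    for t, (a, b) in enumerate(zip(stream_a, stream_b)):
        keep_a, keep_b = pattern[t % len(pattern)]
        if keep_a:
            out.append(a)
        if keep_b:
            out.append(b)
    return out

# 4 information bits -> 8 mother-code bits -> 6 transmitted bits (rate 2/3).
print(puncture([1, 0, 1, 1], [0, 1, 1, 0]))
```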

Journal ArticleDOI
17 Sep 1995
TL;DR: Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
Abstract: This article presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
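
The candidate-generation step described above can be sketched as follows (our own illustration): test error patterns are supported on the least-reliable received positions, and each pattern would be added to the hard-decision word before re-running the algebraic decoder, which is stubbed out here:

```python
from itertools import combinations

def test_error_patterns(reliability, n_lrp=4):
    """Error patterns supported on the n_lrp least-reliable positions."""
    n = len(reliability)
    lrp = sorted(range(n), key=lambda i: abs(reliability[i]))[:n_lrp]
    for k in range(n_lrp + 1):                 # most-plausible patterns first
        for pos in combinations(lrp, k):
            e = [0] * n
            for i in pos:
                e[i] = 1
            yield e

# Each pattern e would be XORed onto the hard-decision vector and handed to
# the algebraic decoder; the optimality test then decides whether to stop.
rel = [1.2, -0.1, 0.8, 0.05, -2.0, 0.3]        # toy soft inputs
for e in test_error_patterns(rel, n_lrp=2):
    print(e)
```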

Journal ArticleDOI
TL;DR: A new bounded-distance decoding algorithm is presented for the hexacode, which requires at most 34 real operations in the worst case, as compared to 57 such operations in the best previously known decoder.
Abstract: We present a new bounded-distance decoding algorithm for the hexacode, which requires at most 34 real operations in the worst case, as compared to 57 such operations in the best previously known decoder. The new algorithm is then employed for bounded-distance decoding of the Leech lattice and the Golay code. The error-correction radius of the resulting decoders is equal to that of a maximum-likelihood decoder. The resulting decoding complexity is at most 331 real operations for the Leech lattice and at most 121 operations for the Golay code. For all three codes (the hexacode, the Golay code, and the Leech lattice), the proposed decoders are considerably more efficient than any decoder presently known.

Proceedings ArticleDOI
17 Sep 1995
TL;DR: The main thesis of the present paper is that, with respect to iterative decoding, the natural way of describing a code is by means of a Tanner graph, which may be viewed as a generalized trellis.
Abstract: Until recently, most known decoding procedures for error-correcting codes were based either on algebraically calculating the error pattern or on some sort of tree or trellis search. With the advent of turbo coding, a third decoding principle has finally had its breakthrough: iterative decoding. With respect to Viterbi decoding, a code is most naturally described by means of a trellis diagram. The main thesis of the present paper is that, with respect to iterative decoding, the natural way of describing a code is by means of a Tanner graph, which may be viewed as a generalized trellis. More precisely, it is the "time axis" of a trellis that is generalized to a Tanner graph.

Journal ArticleDOI
TL;DR: A theoretical foundation is developed from which an improved bound for the minimum distance and results on decoding up to that bound are derived.
Abstract: A decoding algorithm for certain codes from algebraic curves is presented. The dual code, which is used for decoding, is formed by evaluating rational functions having poles at a single point. A theoretical foundation is developed from which an improved bound for the minimum distance and results on decoding up to that bound are derived. The decoding is done by a computationally efficient Berlekamp-Massey type algorithm, which is intrinsic to the curve.
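
The paper's algorithm generalizes Berlekamp-Massey to functions on a curve; for flavor, here is the classical binary Berlekamp-Massey algorithm it builds on (a standard textbook version, our transcription):

```python
def berlekamp_massey(s):
    """Shortest binary LFSR generating s; returns (taps C, length L)."""
    C, B = [1], [1]        # current and previous connection polynomials
    L, m = 0, 1
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):              # discrepancy
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        else:
            T = C[:]
            if len(C) < len(B) + m:
                C += [0] * (len(B) + m - len(C))
            for i, b in enumerate(B):          # C(x) += x^m * B(x)
                C[i + m] ^= b
            if 2 * L <= n:
                L, B = n + 1 - L, T
                m = 1
            else:
                m += 1
    return C, L

# s[n] = s[n-1] ^ s[n-2] after the initial fill -> a length-3 LFSR.
print(berlekamp_massey([0, 0, 1, 1, 0, 1]))
```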

Journal ArticleDOI
TL;DR: The problem of decoding cyclic error-correcting codes is one of solving a constrained polynomial congruence.
Abstract: The problem of decoding cyclic error-correcting codes is one of solving a constrained polynomial congruence.

Journal ArticleDOI
TL;DR: The programmable scheme can be easily integrated into data paths of video processors to support different Huffman tables used in image/video applications.
Abstract: Huffman coding, a variable-length entropy coding scheme, is an integral component of international standards on image and video compression, including high-definition television (HDTV). High-bandwidth HDTV systems with data rates in excess of 100 Mpixels/s present a challenge for designing a fast and economical circuit for the intrinsically sequential Huffman decoding operations. This paper presents an algorithm and a circuit implementation for parallel decoding of programmable Huffman codes by using the numerical properties of Huffman codes. The 1.2 μm CMOS implementation for a single JPEG AC table of 256 codewords of up to 16-b codeword lengths is estimated to run at 10 MHz with a chip area of 11 mm^2, decoding one codeword per cycle. The design can be pipelined to deliver a throughput of 80 MHz for decoding input streams of consecutive Huffman codes. Furthermore, our programmable scheme can be easily integrated into data paths of video processors to support different Huffman tables used in image/video applications.
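
The numerical property the paper exploits is the one that makes canonical Huffman codes decodable by comparisons against per-length "first code" values. A sequential software sketch of that property (our own; the chip evaluates the per-length tests in parallel):

```python
def make_canonical(lengths):
    """lengths: {symbol: codeword length}. Returns (first_code, symbols_by_len)."""
    syms = sorted(lengths, key=lambda s: (lengths[s], s))
    code, prev_len = 0, 0
    first_code, symbols_by_len = {}, {}
    for s in syms:
        L = lengths[s]
        code <<= (L - prev_len)
        if L not in first_code:
            first_code[L] = code
            symbols_by_len[L] = []
        symbols_by_len[L].append(s)
        code += 1
        prev_len = L
    return first_code, symbols_by_len

def decode_one(bits, pos, first_code, symbols_by_len, max_len=16):
    """Numerical decoding: a length-L prefix v is a codeword iff
    first_code[L] <= v < first_code[L] + count[L]."""
    val = 0
    for L in range(1, max_len + 1):
        val = (val << 1) | bits[pos + L - 1]
        if L in first_code:
            offset = val - first_code[L]
            if 0 <= offset < len(symbols_by_len[L]):
                return symbols_by_len[L][offset], pos + L
    raise ValueError("not a valid codeword")

lengths = {"a": 1, "b": 2, "c": 3, "d": 3}
fc, sbl = make_canonical(lengths)
bits = [1, 0, 0, 1, 1, 1]                     # "b", "a", "d"
pos, out = 0, []
while pos < len(bits):
    sym, pos = decode_one(bits, pos, fc, sbl)
    out.append(sym)
print(out)                                    # ['b', 'a', 'd']
```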

Proceedings ArticleDOI
M. Blaum, H.C.A. van Tilborg
17 Sep 1995
TL;DR: A different, but similar approach is proposed to deal with bursts in data storage systems by exploiting a dependency in byte-error correcting codes to enhance the decoding performance.
Abstract: A common way to deal with bursts in data storage systems is to interleave byte-error correcting codes. In the decoding of each of these byte-error correcting codes, one normally does not make use of the information obtained from the previous or subsequent code, while the bursty character of the channel indicates a dependency. Berlekamp and Tong (1985) exploited such a dependency to enhance the decoding performance. Here a different, but similar approach is proposed.

Journal ArticleDOI
01 Aug 1995
TL;DR: A new simple encoding technique is introduced which allows the design of a wide variety of linear block codes, and it is shown that the trellises of the designed codes are similar to the trellises of coset codes and allow low-complexity soft maximum-likelihood decoding.
Abstract: The authors introduce a new simple encoding technique which allows the design of a wide variety of linear block codes. They present a number of examples in which the most widely used codes (Reed-Muller, Hamming, Golay, optimum etc.) have been designed. They also introduce a novel trellis design procedure for the proposed codes. It is shown that the trellises of the designed codes are similar to the trellises of coset codes and allow low complexity soft maximum likelihood decoding.

Journal ArticleDOI
TL;DR: It is shown with the design that transmission schemes using soft-output decoding can be considered practical even at very high throughput; since such decoding systems are more complex to design than hard-output systems, special emphasis is placed on the employed design methodology.
Abstract: Soft-output decoding has evolved as a key technology for new error correction approaches with unprecedented performance as well as for improvement of well-established transmission techniques. In this paper, we present a high-speed VLSI implementation of the soft-output Viterbi algorithm, a low-complexity soft-output algorithm, for a 16-state convolutional code. The 43 mm^2 standard-cell chip achieves a simulated throughput of 40 Mb/s, while tested samples achieved a throughput of 50 Mb/s. The chip is roughly twice as big as a 16-state Viterbi decoder without soft outputs. It is thus shown with the design that transmission schemes using soft-output decoding can be considered practical even at very high throughput. Since such decoding systems are more complex to design than hard-output systems, special emphasis is placed on the employed design methodology.

Proceedings ArticleDOI
15 Feb 1995
TL;DR: A turbo-code is the parallel concatenation of two recursive systematic convolutional codes separated by a non-uniform interleaving; the decoding module is made up of 2 soft-output Viterbi algorithm decoders, interleaving and de-interleaving modules, some delay lines, and a synchronization block that also features supervision functions.
Abstract: A turbo-code is the parallel concatenation of two recursive systematic convolutional codes separated by a non-uniform interleaving. The decoding process is iterative, and the error-correcting capacity increases with the number of iterations. The coding memory of the 2 identical recursive systematic coders is 4 bits and their generator polynomials are 23 and 35 (octal). The decoding module is made up of 2 soft-output Viterbi algorithm decoders, interleaving and de-interleaving modules, some delay lines, and a synchronization block that also features supervision functions.
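
One of the two constituent encoders can be sketched as follows (our own illustration; tap-ordering conventions vary between papers, so this is one common reading of the octal pair 23,35):

```python
G_FB = 0b10011    # octal 23: feedback taps (MSB = current input)
G_FW = 0b11101    # octal 35: feedforward (parity) taps

def parity(x: int) -> int:
    """XOR of the bits of x."""
    return bin(x).count("1") & 1

def rsc_encode(bits):
    """One RSC(23,35) encoder with memory 4; returns (systematic, parity)."""
    state = 0                                     # 4-bit register, bit 3 newest
    sys_out, par_out = [], []
    for u in bits:
        a = u ^ parity((G_FB & 0b01111) & state)  # recursive feedback bit
        word = (a << 4) | state                   # 5-bit window [a, state]
        sys_out.append(u)
        par_out.append(parity(G_FW & word))
        state = (word >> 1) & 0b1111              # shift a into the register
    return sys_out, par_out

# A turbo encoder would run a second, identical encoder on an interleaved
# copy of `bits` and transmit (systematic, parity1, parity2).
print(rsc_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```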

Patent
14 Dec 1995
Abstract: A decoding device is provided including a code FIFO memory unit for sequentially storing a bit stream, a barrel shifter for shifting and then outputting codes properly, an accumulator for computing the shift amount of the barrel shifter and issuing a request to read data to the code FIFO memory unit, a DCT coefficient decoder for decoding DCT coefficients, a variable-length code decoder for decoding variable-length codes other than DCT coefficients, a fixed-length code decoder for decoding fixed-length codes, a register unit for storing decoded data, a decoding controller for controlling the decoders in accordance with the decoded data stored in the register unit and decoded data output by the decoders, and a memory controller for controlling operations to store DCT coefficients in a memory unit A. The DCT coefficient decoder, the variable-length code decoder and the fixed-length code decoder are connected in parallel to the output of the barrel shifter, and the memory controller is controlled by the decoding controller. By virtue of this structure, the decoding device is capable of decoding a bit stream comprising variable-length codes mixed with fixed-length codes. The decoding device is also capable of changing the processing of a next code in accordance with previously decoded codes. Further, these operations can be implemented by means of a simple and reasonable configuration. On top of that, the process of decoding codes at a required high speed is carried out by an independent circuit, allowing the processing power of the decoding circuit to be enhanced.

Proceedings ArticleDOI
14 Nov 1995
TL;DR: In order to achieve the high performance expected of turbo-codes and to ensure great flexibility for the system, this work combines a multilevel turbo-code with 8PSK modulation and examines in detail the performance of this combination compared with different coding and decoding approaches.
Abstract: Different concepts of iterative decoding algorithms for serial or parallel concatenated coding (called turbo-codes) as well as for the special case of generalized concatenated coding schemes have been introduced. These very efficient algorithms are based on soft-output decoding, where the soft decisions are used as feed-forward information for the next iteration steps. In order to achieve the high performance expected of turbo-codes and to ensure great flexibility for the system, we combine a multilevel turbo-code with 8PSK modulation. This allows us to examine in detail the performance of this combination compared with different coding and decoding approaches: turbo-code with Gray mapping, and iterative multistage decoding using convolutional codes or a combination of turbo codes, convolutional and single parity check codes. In all of our studies we consider a super-concatenated coding scheme, where the outer code is based on a Reed-Solomon code. The results show that the coding gain at a BER of 10^-4 (since we employ a powerful outer RS decoder) for the iterative convolutional coding and for the combination of convolutional/turbo coding is about the same as for turbo coding with Gray mapping, namely 0.5 dB. However, compared to uncoded QPSK, the coding gain is 4 dB (at a BER of 10^-4), which is quite significant.

Journal ArticleDOI
Henk D. L. Hollmann
TL;DR: A new technique to construct sliding-block modulation codes with a small decoding window, which involves both state splitting and look-ahead encoding and crucially depends on a new "local" construction method for bounded-delay codes.
Abstract: We present a new technique to construct sliding-block modulation codes with a small decoding window. Our method, which involves both state splitting and look-ahead encoding, crucially depends on a new "local" construction method for bounded-delay codes. We apply our method to construct several new codes, all with a smaller decoding window than previously known codes for the same constraints at the same rate.

Journal ArticleDOI
17 Sep 1995
TL;DR: This paper proposes an efficient modified MAP algorithm for obtaining P/sub c/ for the outputs of convolutional inner decoders for the purposes of soft decision decoding.
Abstract: The reliability measure for a decoded symbol is the probability P/sub c/ that the symbol is correct or the probability of error P/sub e/=1-P/sub c/. Such quantities can be obtained by the symbol-by-symbol MAP (maximum a posteriori probability) algorithm. Unfortunately this algorithm is computationally inefficient. A soft output Viterbi algorithm (SOVA) can provide an estimate of P/sub e/ which is accurate only for large SNR. This paper proposes an efficient modified MAP algorithm for obtaining P/sub c/ for the outputs of convolutional inner decoders. The outer decoder uses P/sub c/ to perform soft decision decoding by choosing a codeword which maximizes the maximum likelihood (ML) metric. Decoding based on this ML metric is referred to as generalised soft decision decoding since it includes the Euclidean metric on AWGN channels and binary memoryless channels as special cases.

Journal ArticleDOI
TL;DR: Probabilistic algorithms are given for constructing good large constraint length trellis codes for use with sequential decoding that can achieve the channel cutoff rate bound at a bit error rate (BER) of 10/sup -5/-10/Sup -6/.
Abstract: Probabilistic algorithms are given for constructing good large constraint length trellis codes for use with sequential decoding that can achieve the channel cutoff rate bound at a bit error rate (BER) of 10/sup -5/-10/sup -6/. The algorithms are motivated by the random coding principle that an arbitrary selection of code symbols will produce a good code with high probability. One algorithm begins by choosing a relatively small set of codes randomly. The error performance of each of these codes is evaluated using sequential decoding and the code with the best performance among the chosen set is retained. Another algorithm treats the code construction as a combinatorial optimization problem and uses simulated annealing to direct the code search. Trellis codes for 8 PSK and 16 QAM constellations with constraint lengths v up to 20 are obtained. Simulation results with sequential decoding show that these codes reach the channel cutoff rate bound at a BER of 10/sup -5/-10/sup -6/ and achieve 5.0-6.35 dB real coding gains over uncoded systems with the same spectral efficiency and up to 2.0 dB real coding gains over 64 state trellis codes using Viterbi decoding. >