
Showing papers on "List decoding published in 1995"


Journal ArticleDOI
TL;DR: A general framework, based on ideas of Tanner, for the description of codes and iterative decoding (“turbo coding”) is developed, which clarifies, in particular, which a priori probabilities are admissible and how they are properly dealt with.
Abstract: A general framework, based on ideas of Tanner, for the description of codes and iterative decoding (“turbo coding”) is developed. Just like trellis-based code descriptions are naturally matched to Viterbi decoding, code descriptions based on Tanner graphs (which may be viewed as generalized trellises) are naturally matched to iterative decoding. Two basic iterative decoding algorithms (which are versions of the algorithms of Berrou et al. and of Hagenauer, respectively) are shown to be natural generalizations of the forward-backward algorithm (Bahl et al.) and the Viterbi algorithm, respectively, to arbitrary Tanner graphs. The careful derivation of these algorithms clarifies, in particular, which a priori probabilities are admissible and how they are properly dealt with. For cycle codes (a class of binary linear block codes), a complete characterization is given of the error patterns that are corrected by the generalized Viterbi algorithm after infinitely many iterations.
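As a concrete, if highly simplified, illustration of decoding on a Tanner graph, the sketch below runs iterative bit-flipping on the (7,4) Hamming code: check nodes vote on the variable nodes they touch, and the most-implicated bit is flipped each round. The matrix and flipping rule are our own illustrative choices, not the generalized forward-backward or Viterbi algorithms derived in the paper.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; rows are the check
# nodes and columns the variable nodes of its Tanner graph.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bit_flip_decode(r, H, max_iters=20):
    """Iterative bit-flipping on the Tanner graph of H.

    Each iteration flips the bit touched by the most unsatisfied
    checks (ties broken by the fraction of that bit's checks that
    fail), stopping as soon as every check node is satisfied.
    """
    r = r.copy()
    deg = H.sum(axis=0)                # check-degree of each variable node
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            break                      # all check nodes satisfied
        votes = H.T @ syndrome         # failing checks per variable node
        flip = max(range(len(r)), key=lambda i: (votes[i], votes[i] / deg[i]))
        r[flip] ^= 1
    return r

codeword = np.array([1, 0, 1, 0, 1, 0, 1])
received = codeword.copy()
received[4] ^= 1                       # inject a single bit error
decoded = bit_flip_decode(received, H)
print(decoded.tolist())
```

With this tie-breaking rule the toy decoder corrects every single-bit error of the (7,4) code, though bit-flipping offers no such guarantee in general.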

354 citations


Journal ArticleDOI
TL;DR: The concept of an error-correcting array gives a new bound on the minimum distance of linear codes and a decoding algorithm which decodes up to half this bound and is explained in terms of linear algebra and the theory of semigroups only.
Abstract: The concept of an error-correcting array gives a new bound on the minimum distance of linear codes and a decoding algorithm which decodes up to half this bound. This gives a unified point of view which explains several improvements on the minimum distance of algebraic-geometric codes. Moreover, it is explained in terms of linear algebra and the theory of semigroups only.

199 citations


Journal ArticleDOI
TL;DR: This paper provides a survey of the existing literature on the decoding of algebraic-geometric codes and shows what has been done, discusses what still has to be done, and poses some open problems.
Abstract: This paper provides a survey of the existing literature on the decoding of algebraic-geometric codes. Definitions, theorems, and cross references will be given. We show what has been done, discuss what still has to be done, and pose some open problems.

127 citations


Journal ArticleDOI
TL;DR: The algorithm is designed for low decoding complexity and is applicable to all Reed-Muller codes, and gives better decoding performance than soft-decision bounded-distance decoding.
Abstract: The authors construct Reed-Muller codes by generalized multiple concatenation of binary block codes of length 2. As a consequence of this construction, a new decoding procedure is derived that uses soft-decision information. The algorithm is designed for low decoding complexity and is applicable to all Reed-Muller codes. It gives better decoding performance than soft-decision bounded-distance decoding. Its decoding complexity is much lower than that of maximum-likelihood trellis decoding of Reed-Muller codes, especially for long codes.
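The recursive structure behind this view of Reed-Muller codes is the Plotkin (u | u+v) decomposition. A minimal sketch, with the recursion and an exhaustive minimum-distance check for small parameters (function names are ours):

```python
import itertools
import numpy as np

def rm_generator(r, m):
    """Generator matrix of the Reed-Muller code RM(r, m), built by the
    Plotkin (u | u+v) recursion that underlies the concatenation view."""
    if r == 0:
        return np.ones((1, 2 ** m), dtype=int)   # repetition code
    if r == m:
        return np.eye(2 ** m, dtype=int)         # the whole space
    g_top = rm_generator(r, m - 1)       # u part: RM(r, m-1)
    g_bot = rm_generator(r - 1, m - 1)   # v part: RM(r-1, m-1)
    upper = np.hstack([g_top, g_top])
    lower = np.hstack([np.zeros_like(g_bot), g_bot])
    return np.vstack([upper, lower])

def min_distance(G):
    """Exhaustive minimum weight over all nonzero codewords (small codes)."""
    best = G.shape[1]
    for msg in itertools.product([0, 1], repeat=G.shape[0]):
        if any(msg):
            best = min(best, int((np.array(msg) @ G % 2).sum()))
    return best

G = rm_generator(1, 3)              # RM(1,3): the [8,4,4] extended Hamming code
print(G.shape, min_distance(G))     # (4, 8) 4
```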

102 citations


Journal ArticleDOI
TL;DR: A decoding algorithm for algebraic-geometric codes from regular plane curves, in particular the Hermitian curve, which corrects all error patterns of weight less than d*/2 with low complexity is presented.
Abstract: We present a decoding algorithm for algebraic-geometric codes from regular plane curves, in particular the Hermitian curve, which corrects all error patterns of weight less than d*/2 with low complexity. The algorithm is based on the majority scheme of Feng and Rao (1993) and uses a modified version of Sakata's (1988) generalization of the Berlekamp-Massey algorithm.

88 citations


01 Jan 1995
TL;DR: It is proved that any algebraic-geometric (AG) code can be expressed as a cross section of an extended multidimensional cyclic code, and Sakata's algorithm can be used to find linear recursion relations which hold on the syndrome array.
Abstract: It is proved that any algebraic-geometric (AG) code can be expressed as a cross section of an extended multidimensional cyclic code. Both AG codes and multidimensional cyclic codes are described by a unified theory of linear block codes defined over point sets: AG codes are defined over the points of an algebraic curve, and an m-dimensional cyclic code is defined over the points in m-dimensional space. The power of the unified theory is in its description of decoding techniques using Grobner bases. In order to fit an AG code into this theory, a change of coordinates must be applied to the curve over which the code is defined so that the curve is in special position. For curves in special position, all computations can be performed with polynomials and this also makes it possible to use the theory of Grobner bases. Next, a transform is defined for AG codes which generalizes the discrete Fourier transform. The transform is also related to a Grobner basis, and is useful in setting up the decoding problem. In the decoding problem, a key step is finding a Grobner basis for an error locator ideal. For AG codes, multidimensional cyclic codes, and indeed, any cross section of an extended multidimensional cyclic code, Sakata's algorithm can be used to find linear recursion relations which hold on the syndrome array. In this general context, the authors give a self-contained and simplified presentation of Sakata's algorithm, and present a general framework for decoding algorithms for this family of codes, in which the use of Sakata's algorithm is supplemented by a procedure for extending the syndrome array.
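For a one-dimensional syndrome array, the linear recursion relations mentioned above are exactly what the classical Berlekamp-Massey algorithm finds; Sakata's algorithm extends that recursion to m-dimensional arrays. A compact GF(2) version of the one-dimensional case is sketched below as a reference point (the multidimensional generalization used for AG codes is substantially more involved):

```python
def berlekamp_massey(s):
    """Shortest LFSR (over GF(2)) generating the sequence s.

    Returns (L, C): the LFSR length and the connection polynomial
    C(x) = 1 + c1*x + ... + cL*x^L as a coefficient list.
    """
    C, B = [1], [1]      # current and previous connection polynomials
    L, m = 0, 1          # LFSR length, steps since B was last updated
    for n in range(len(s)):
        # discrepancy between the sequence and the LFSR's prediction
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
            continue
        T = C[:]
        shifted = [0] * m + B                    # x^m * B(x)
        width = max(len(C), len(shifted))
        C = [(C[i] if i < len(C) else 0)
             ^ (shifted[i] if i < len(shifted) else 0)
             for i in range(width)]
        if 2 * L <= n:
            L, B, m = n + 1 - L, T, 1
        else:
            m += 1
    return L, C[: L + 1]

# s satisfies s_n = s_{n-1} XOR s_{n-2}: expect L = 2, C(x) = 1 + x + x^2
L, C = berlekamp_massey([1, 1, 0, 1, 1, 0, 1, 1])
print(L, C)
```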

83 citations




Journal ArticleDOI
V.B. Balakirsky1
TL;DR: An upper bound on the maximal transmission rate over binary-input memoryless channels, provided that the decoding decision rule is given, is derived; if the decision rule is equivalent to maximum-likelihood decoding, the bound coincides with the channel capacity.
Abstract: An upper bound on the maximal transmission rate over binary-input memoryless channels, provided that the decoding decision rule is given, is derived. If the decision rule is equivalent to the maximum-likelihood decoding (matched decoding), then the bound coincides with the channel capacity. Otherwise (mismatched decoding), it coincides with a known lower bound.
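For reference, the matched-decoding benchmark that the bound collapses to is the ordinary capacity of a discrete memoryless channel $W(y\mid x)$; this is the standard formula, not one specific to the paper:

```latex
C \;=\; \max_{P_X} I(X;Y)
  \;=\; \max_{P_X} \sum_{x,y} P_X(x)\, W(y\mid x)\,
        \log \frac{W(y\mid x)}{\sum_{x'} P_X(x')\, W(y\mid x')}
```

With a mismatched decision rule the achievable rate can only be lower, and the paper's upper bound then coincides with a known lower bound.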

64 citations


Journal ArticleDOI
17 Sep 1995
TL;DR: Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
Abstract: This article presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
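The test-pattern idea described above is in the spirit of Chase decoding. The sketch below is our own toy version on the (7,4) Hamming code, not the authors' algorithm, and omits the trellis search stage: it flips subsets of the least reliable positions, re-decodes each candidate algebraically, and keeps the candidate closest to the received soft values.

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def hamming_decode(r):
    """Hard-decision syndrome decoding: a nonzero syndrome equals the
    column of H at the (single) error position."""
    s = H @ r % 2
    if s.any():
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        r = r.copy()
        r[err] ^= 1
    return r

def chase_decode(y, n_weak=2):
    """Chase-style soft decoding: flip subsets of the least reliable
    bits, re-decode each candidate, keep the one nearest y.
    BPSK convention: bit 0 -> +1, bit 1 -> -1."""
    hard = (y < 0).astype(int)
    weak = np.argsort(np.abs(y))[:n_weak]        # least reliable positions
    best, best_metric = None, np.inf
    for flips in itertools.product([0, 1], repeat=n_weak):
        trial = hard.copy()
        trial[weak] ^= np.array(flips)
        cand = hamming_decode(trial)
        metric = np.sum((y - (1 - 2 * cand)) ** 2)  # Euclidean distance
        if metric < best_metric:
            best, best_metric = cand, metric
    return best

# Codeword [1,0,1,0,1,0,1] sent; two positions arrive weak and wrong,
# so hard-decision decoding alone fails but the soft search recovers it.
y = np.array([-1.0, 1.0, 0.1, 1.0, 0.2, 1.0, -1.0])
print(chase_decode(y).tolist())
```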

43 citations


Journal ArticleDOI
TL;DR: A new bounded-distance decoding algorithm is presented for the hexacode, which requires at most 34 real operations in the worst case, as compared to 57 such operations in the best previously known decoder.
Abstract: We present a new bounded-distance decoding algorithm for the hexacode, which requires at most 34 real operations in the worst case, as compared to 57 such operations in the best previously known decoder. The new algorithm is then employed for bounded-distance decoding of the Leech lattice and the Golay code. The error-correction radius of the resulting decoders is equal to that of a maximum-likelihood decoder. The resulting decoding complexity is at most 331 real operations for the Leech lattice and at most 121 operations for the Golay code. For all three codes (the hexacode, the Golay code, and the Leech lattice), the proposed decoders are considerably more efficient than any decoder presently known.

42 citations


Proceedings ArticleDOI
17 Sep 1995
TL;DR: The main thesis of the present paper is that, with respect to iterative decoding, the natural way of describing a code is by means of a Tanner graph, which may be viewed as a generalized trellis.
Abstract: Until recently, most known decoding procedures for error-correcting codes were based either on algebraically calculating the error pattern or on some sort of tree or trellis search. With the advent of turbo coding, a third decoding principle has finally had its breakthrough: iterative decoding. With respect to Viterbi decoding, a code is most naturally described by means of a trellis diagram. The main thesis of the present paper is that, with respect to iterative decoding, the natural way of describing a code is by means of a Tanner graph, which may be viewed as a generalized trellis. More precisely, it is the "time axis" of a trellis that is generalized to a Tanner graph.

Journal ArticleDOI
TL;DR: A theoretical foundation is developed from which an improved bound for the minimum distance and results on decoding up to that bound are derived.
Abstract: A decoding algorithm for certain codes from algebraic curves is presented. The dual code, which is used for decoding, is formed by evaluating rational functions having poles at a single point. A theoretical foundation is developed from which an improved bound for the minimum distance and results on decoding up to that bound are derived. The decoding is done by a computationally efficient Berlekamp-Massey type algorithm, which is intrinsic to the curve.

Journal ArticleDOI
TL;DR: The problem of decoding cyclic error-correcting codes is one of solving a constrained polynomial congruence, often achieved via solutions to an inequality of a particular type.
Abstract: The problem of decoding cyclic error-correcting codes is one of solving a constrained polynomial congruence, often achieved via solutions to an inequality of a particular type.

Proceedings ArticleDOI
M. Blaum1, H.C.A. van Tilborg
17 Sep 1995
TL;DR: A different, but similar approach is proposed to deal with bursts in data storage systems by exploiting a dependency in byte-error correcting codes to enhance the decoding performance.
Abstract: A common way to deal with bursts in data storage systems is to interleave byte-error correcting codes. In the decoding of each of these byte-error correcting codes, one normally does not make use of the information obtained from the previous or subsequent code, while the bursty character of the channel indicates a dependency. Berlekamp and Tong (1985) exploited such a dependency to enhance the decoding performance. Here a different, but similar approach is proposed.

Journal ArticleDOI
01 Aug 1995
TL;DR: A new simple encoding technique is introduced which allows the design of a wide variety of linear block codes; it is shown that the trellises of the designed codes are similar to the trellises of coset codes and allow low-complexity soft maximum-likelihood decoding.
Abstract: The authors introduce a new simple encoding technique which allows the design of a wide variety of linear block codes. They present a number of examples in which the most widely used codes (Reed-Muller, Hamming, Golay, optimum etc.) have been designed. They also introduce a novel trellis design procedure for the proposed codes. It is shown that the trellises of the designed codes are similar to the trellises of coset codes and allow low complexity soft maximum likelihood decoding.

Journal ArticleDOI
TL;DR: Maxted and Robinson (1985) used a state model to describe the error recovery of the decoder, and the present paper extends their results in several ways.
Abstract: Variable-length codes (e.g. Huffman codes) are commonly employed to minimize the average codeword length for noiseless encoding of discrete sources. Upon transmission over noisy channels, conflicting views note that such codes "tend to be self-synchronizing" and suffer from the "catastrophic effect of the error's propagation". Maxted and Robinson (1985) used a state model to describe the error recovery of the decoder. The present paper extends their results in several ways.
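The self-synchronizing behaviour at issue is easy to demonstrate: with a toy Huffman-style codebook (our own example), a single bit error garbles a prefix of the decoded stream, after which the decoder falls back into step.

```python
# Toy prefix-free (Huffman-style) codebook; our own example.
code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}

def encode(text):
    return ''.join(code[ch] for ch in text)

def decode(bits):
    """Greedy prefix decoding: emit a symbol whenever the accumulated
    bits match a codeword."""
    inv = {v: k for k, v in code.items()}
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ''
    return ''.join(out)

msg = 'abcdabcd'
bits = encode(msg)
corrupted = ('1' if bits[0] == '0' else '0') + bits[1:]   # flip first bit
print(decode(bits), decode(corrupted))
```

Here one flipped bit turns the opening symbols into 'ccd', but the decoder then regains codeword alignment and the trailing 'abcd' comes out intact.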

Proceedings ArticleDOI
15 Feb 1995
TL;DR: A turbo-code is the parallel concatenation of two recursive systematic convolutional codes separated by a non-uniform interleaving; the decoding module is made up of 2 soft-output Viterbi algorithm decoders, interleaving and de-interleaving modules, some delay lines, and a synchronization block that also features supervision functions.
Abstract: A turbo-code is the parallel concatenation of two recursive systematic convolutional codes separated by a non-uniform interleaving. The decoding process is iterative, and the error-correcting capacity increases with the number of iterations. The coding memory of the 2 identical recursive systematic coders is 4 bits long and their polynomials are 23, 35. The decoding module is made up of 2 soft-output Viterbi algorithm decoders, interleaving and de-interleaving modules, some delay lines, and a synchronization block that also features supervision functions.
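A rate-1/2 recursive systematic convolutional encoder of the kind described can be sketched as follows; memory 4 is taken from the abstract, the polynomials 23, 35 are assumed to be octal (the usual convention), and the tap layout below is one of several in use.

```python
def parity(x):
    """Parity (XOR of all bits) of an integer."""
    return bin(x).count('1') & 1

def rsc_encode(bits, fb=0o23, ff=0o35, m=4):
    """Rate-1/2 recursive systematic convolutional encoder.

    fb is the feedback polynomial and ff the feedforward (parity)
    polynomial over a register of m memory cells; the systematic
    output simply echoes the input bit.
    """
    state = 0
    sys_out, par_out = [], []
    for b in bits:
        a = b ^ parity(state & (fb & ((1 << m) - 1)))  # recursive feedback
        reg = (a << m) | state                         # input bit + register
        sys_out.append(b)
        par_out.append(parity(reg & ff))               # parity output
        state = reg >> 1                               # shift the register
    return sys_out, par_out

sys_out, par_out = rsc_encode([1] + [0] * 9)
print(sys_out, par_out)
```

Because of the feedback, a single 1 at the input excites a parity stream that does not die out, which is the "recursive" property exploited by turbo codes.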

Patent
14 Dec 1995
Abstract: A decoding device is provided including a code FIFO memory unit for sequentially storing a bit stream, a barrel shifter for shifting and then outputting codes properly, an accumulator for computing the shift amount of the barrel shifter and issuing a request to read data to the code FIFO memory unit, a DCT coefficient decoder for decoding DCT coefficients, a variable-length code decoder for decoding variable-length codes other than DCT coefficients, a fixed-length code decoder for decoding fixed-length codes, a register unit for storing decoded data, a decoding controller for controlling the decoders in accordance with the decoded data stored in the register unit and decoded data output by the decoders, and a memory controller for controlling operations to store DCT coefficients in a memory unit A. The DCT coefficient decoder, the variable-length code decoder and the fixed-length code decoder are connected in parallel to the output of the barrel shifter, and the memory controller is controlled by the decoding controller. By virtue of this structure, the decoding device is capable of decoding a bit stream comprising variable-length codes mixed with fixed-length codes. The decoding device is also capable of changing processing of a next code in accordance with decoded codes. Further, these operations can be implemented by means of a simple and reasonable configuration. On top of that, the process of decoding codes at a required high speed is carried out by an independent circuit, allowing the processing power of the decoding circuit to be enhanced.

Proceedings ArticleDOI
14 Nov 1995
TL;DR: In order to achieve the high performance expected of turbo-codes and to ensure great flexibility for the system, this work combines a multilevel turbo-code with 8PSK modulation and examines in detail the performance of this combination compared with different coding and decoding approaches.
Abstract: Different concepts of iterative decoding algorithms for serial or parallel concatenated coding (called turbo-codes), as well as for the special case of generalized concatenated coding schemes, have been introduced. These algorithms, being very efficient, are based on soft-output decoding, where the soft decisions are used as feed-forward information for the next iteration steps. In order to achieve the high performance expected of turbo-codes and to ensure great flexibility for the system, we combine a multilevel turbo-code with 8PSK modulation. This allows us to examine in detail the performance of this combination compared with different coding and decoding approaches: turbo-code with Gray mapping, and iterative multistage decoding using convolutional codes or a combination of turbo, convolutional, and single parity check codes. In all of our studies we consider a super-concatenated coding scheme, where the outer code is based on a Reed-Solomon code. The results showed that the coding gain at a BER of 10^-4 (since we employ a powerful outer RS decoder) for the iterative convolutional coding and for the combination of convolutional/turbo coding is about the same as for turbo coding with Gray mapping, namely 0.5 dB. However, compared to uncoded QPSK, the coding gain is 4 dB (at a BER of 10^-4), which is quite significant.

Patent
Yasushi Ooi1
08 Mar 1995
TL;DR: In this article, a moving picture coding and decoding circuit is proposed, where a picture to be coded is inputted to a motion detection/prediction section, which outputs a predictive difference signal and a predictive signal.
Abstract: A moving picture coding and decoding circuit which can cope with a plurality of algorithms to reduce the number of components and facilitate extension. A picture to be coded is inputted to a motion detection/prediction section, which outputs a predictive difference signal and a predictive signal. DCT processing and quantization are performed for the predictive difference signal by a conversion coding and decoding section, from which a conversion coefficient signal is outputted to an interface bus. The conversion coding and decoding section also executes dequantization and inverse DCT processing of the conversion coefficient, adds the predictive signal to the conversion coefficient and outputs a result of the picture coding to an image data bus. A programmable architecture as in a digital signal processor is applied to the conversion coding and decoding section. The conversion coefficient outputted from the conversion coding and decoding section is stored into a FIFO memory of a zigzag scan/entropy coding section and then undergoes coding in an entropy coding section. A bit stream thus coded is stored once into and then outputted as codes from another FIFO memory.

Journal ArticleDOI
Henk D. L. Hollmann1
TL;DR: A new technique to construct sliding-block modulation codes with a small decoding window, which involves both state splitting and look-ahead encoding and crucially depends on a new "local" construction method for bounded-delay codes.
Abstract: We present a new technique to construct sliding-block modulation codes with a small decoding window. Our method, which involves both state splitting and look-ahead encoding, crucially depends on a new "local" construction method for bounded-delay codes. We apply our method to construct several new codes, all with a smaller decoding window than previously known codes for the same constraints at the same rate.

Journal ArticleDOI
TL;DR: Probabilistic algorithms are given for constructing good large constraint length trellis codes for use with sequential decoding that can achieve the channel cutoff rate bound at a bit error rate (BER) of 10^-5 to 10^-6.
Abstract: Probabilistic algorithms are given for constructing good large constraint length trellis codes for use with sequential decoding that can achieve the channel cutoff rate bound at a bit error rate (BER) of 10^-5 to 10^-6. The algorithms are motivated by the random coding principle that an arbitrary selection of code symbols will produce a good code with high probability. One algorithm begins by choosing a relatively small set of codes randomly. The error performance of each of these codes is evaluated using sequential decoding and the code with the best performance among the chosen set is retained. Another algorithm treats the code construction as a combinatorial optimization problem and uses simulated annealing to direct the code search. Trellis codes for 8 PSK and 16 QAM constellations with constraint lengths v up to 20 are obtained. Simulation results with sequential decoding show that these codes reach the channel cutoff rate bound at a BER of 10^-5 to 10^-6 and achieve 5.0-6.35 dB real coding gains over uncoded systems with the same spectral efficiency and up to 2.0 dB real coding gains over 64 state trellis codes using Viterbi decoding.
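The simulated-annealing variant of the search can be sketched as follows. The [8,4] toy size, the cooling schedule, and the objective (exhaustive minimum distance in place of sequential-decoding BER) are our own illustrative choices, far smaller than the constraint-length-20 trellis codes of the paper:

```python
import itertools
import math
import random
import numpy as np

def min_distance(G):
    """Exhaustive minimum weight over nonzero codewords (small k only)."""
    best = G.shape[1]
    for msg in itertools.product([0, 1], repeat=G.shape[0]):
        if any(msg):
            best = min(best, int((np.array(msg) @ G % 2).sum()))
    return best

def anneal_code(k=4, n=8, steps=2000, t0=1.0, seed=1):
    """Simulated-annealing search for a good [n, k] binary linear code.

    Proposes single-entry flips of a random generator matrix and
    accepts distance-worsening moves with a probability that shrinks
    as the temperature is cooled linearly towards zero.
    """
    rng = random.Random(seed)
    G = np.array([[rng.randint(0, 1) for _ in range(n)] for _ in range(k)])
    d = min_distance(G)
    best_G, best_d = G.copy(), d
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling
        i, j = rng.randrange(k), rng.randrange(n)
        G[i, j] ^= 1                              # propose a single flip
        d_new = min_distance(G)
        if d_new >= d or rng.random() < math.exp((d_new - d) / t):
            d = d_new                             # accept the move
            if d > best_d:
                best_G, best_d = G.copy(), d
        else:
            G[i, j] ^= 1                          # reject: undo the flip
    return best_G, best_d

best_G, best_d = anneal_code()
print(best_d)
```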

Journal ArticleDOI
TL;DR: A new approach to increase the sum rate for conventional synchronous code-division multiple-access (S-CDMA) systems is presented and it is shown that it can be done by joint processing of the outputs of matched filters.
Abstract: A new approach to increase the sum rate for conventional synchronous code-division multiple-access (S-CDMA) systems is presented. It is shown that it can be done by joint processing of the outputs of matched filters, when one considers the system of codes for S-CDMA to be the codes for the usual adder channel. An example of construction and decoding of such a system is also given.

Proceedings ArticleDOI
09 May 1995
TL;DR: A Hadamard-based framework for soft decoding in vector quantization over a Rayleigh fading channel is presented and image coding simulations indicate that the soft decoder outperforms its hard decoding counterpart.
Abstract: A Hadamard-based framework for soft decoding in vector quantization over a Rayleigh fading channel is presented. We also provide an efficient algorithm for decoding calculations. The system has relatively low complexity, and gives low transmission rate since no redundant channel coding is used. Our image coding simulations indicate that the soft decoder outperforms its hard decoding counterpart. The relative gain is larger for bad channels. Simulations also indicate that encoder training for hard decoding suffices to get good results with the soft decoder.

Journal ArticleDOI
TL;DR: The proposed scheme can circumvent the degradation of the throughput due to an increase of the number of receivers, which is the most serious defect in the conventional multicast ARQ schemes, at the expense of the transmission delay.
Abstract: An efficient multicast hybrid ARQ scheme is proposed by incorporating the generalized minimum distance (GMD) decoding of maximum distance separable (MDS) codes with Metzner's (1984) scheme. Erroneous frames are stored in the receiver buffer and recovered after receiving one or more redundant frames. The throughput and the average transmission delay of the proposed scheme are analyzed on memoryless symmetric channels. The proposed scheme can circumvent the degradation of the throughput due to an increase of the number of receivers, which is the most serious defect in the conventional multicast ARQ schemes, at the expense of the transmission delay.

Journal ArticleDOI
TL;DR: This work presents an efficient recursive algorithm for accomplishing maximum likelihood (ML) soft syndrome decoding of binary convolutional codes, which achieves substantial reduction in the average computational complexity, particularly for moderately noisy channels.
Abstract: We present an efficient recursive algorithm for accomplishing maximum likelihood (ML) soft syndrome decoding of binary convolutional codes. The algorithm consists of signal-by-signal hard decoding followed by a search for the most likely error sequence. The number of error sequences to be considered is substantially larger than in hard decoding, since the metric applied to the error bits is the magnitude of the log likelihood ratio rather than the Hamming weight. An error-trellis (alternatively, a decoding table) is employed for describing the recursion equations of the decoding procedure. The number of its states is determined by the states indicator, which is a modified version of the constraint length of the check matrix. Methods devised for eliminating error patterns and degenerating error-trellis sections enable accelerated ML decoding. In comparison with the Viterbi algorithm, the syndrome decoding algorithm achieves substantial reduction in the average computational complexity, particularly for moderately noisy channels.

Proceedings ArticleDOI
17 Sep 1995
TL;DR: The idea of iterative decoding of two-dimensional systematic convolutional codes-so-called turbo-codes-is extended to threshold decoding, which is presented in "soft-in/soft-out" form.
Abstract: The idea of iterative decoding of two-dimensional systematic convolutional codes-so-called turbo-codes-is extended to threshold decoding, which is presented in "soft-in/soft-out" form. The computational complexity of the proposed decoder is very low. Surprisingly good simulation results are shown for the Gaussian channel.

Journal ArticleDOI
TL;DR: A new decoding algorithm for burst errors in Reed-Solomon codes is given and is shown to be more efficient than previously proposed methods.
Abstract: A new decoding algorithm for burst errors in Reed-Solomon codes is given. This algorithm is shown to be more efficient than previously proposed methods.

Patent
Ryu Tadanori1
05 Dec 1995
TL;DR: In this article, an address generating circuit (231) generates addresses and a decoder (23) reads compressed data from a recording medium according to the addresses supplied by the address generator and decodes thus-read data.
Abstract: An address generating circuit (231) generates addresses. A decoder (230) reads compressed data from a recording medium (20) according to the addresses supplied by the address generating circuit and decodes thus-read data. An address switching circuit (239) selects one of an address supplied by the address generating circuit and an address supplied from an external address bus. A data switching circuit (240) selects one of data supplied from the recording medium and data supplied by the decoder. The decoder circuit (23) enters one of a through mode and a decoding mode, input data being output as it is in the through mode, input compressed data being decoded and the thus-decoded data being output in the decoding mode. When an address relevant to the recording medium is given externally, the decoder circuit enters the through mode and data thus read out from the recording medium is output to the data bus, and when an address relevant to the recording medium is given from the address generating circuit, the decoder circuit enters the decoding mode, reads compressed data from the recording medium, decodes it and outputs the decoded data to the data bus.

Journal ArticleDOI
TL;DR: Two new algorithms for decoding the (23, 12) binary Golay code are developed with channel measurement information, one of which requires a very small amount of computation for decoding a received block, and the other is very suitable for hardware implementation owing to its simple and regular operation.
Abstract: Two new algorithms for decoding the (23, 12) binary Golay code are developed with channel measurement information. For a white gaussian noise channel, each of the two algorithms achieves about 0.8 dB of decoding performance over a conventional hard-decision decoding. Furthermore, one of the algorithms requires a very small amount of computation for decoding a received block, and the other is very suitable for hardware implementation owing to its simple and regular operation. The hardware decoder, based on the latter algorithm, is designed with a pipeline such that it decodes one bit of a received block in a single system clock cycle.