
Showing papers on "List decoding published in 1989"


Journal ArticleDOI
A.R. Calderbank1
TL;DR: The author extends the coding method to coset codes and shows how to calculate minimum squared distance and path multiplicity in terms of the norms and multiplicities of the different cosets.
Abstract: H. Imai and S. Hirakawa have proposed (1977) a multilevel coding method based on binary block codes that admits a staged decoding procedure. The author extends the coding method to coset codes and shows how to calculate minimum squared distance and path multiplicity in terms of the norms and multiplicities of the different cosets. The multilevel structure allows the redundancy in the coset selection procedure to be allocated efficiently among the different levels. It also allows the use of suboptimal multistage decoding procedures that have performance/complexity advantages over maximum-likelihood decoding.

274 citations


Journal ArticleDOI
TL;DR: A decoding algorithm is constructed which turns out to be a generalization of the Peterson algorithm for decoding BCH codes.
Abstract: A class of codes derived from algebraic plane curves is constructed. The concepts and results from algebraic geometry that were used are explained in detail; no further knowledge of algebraic geometry is needed. Parameters, generator and parity-check matrices are given. The main result is a decoding algorithm which turns out to be a generalization of the Peterson algorithm for decoding BCH codes.

128 citations


Journal ArticleDOI
TL;DR: A binary multiple-check generalization of the Wagner rule is presented, and two methods for its implementation, one of which resembles the suboptimal Forney-Chase algorithms, are described.
Abstract: Maximum-likelihood soft-decision decoding of linear block codes is addressed. A binary multiple-check generalization of the Wagner rule is presented, and two methods for its implementation, one of which resembles the suboptimal Forney-Chase algorithms, are described. Besides efficient soft decoding of small codes, the generalized rule enables utilization of subspaces of a wide variety, thereby yielding maximum-likelihood decoders with substantially reduced computational complexity for some larger binary codes. More sophisticated choice and exploitation of the structure of both a subspace and the coset representatives are demonstrated for the (24, 12) Golay code, yielding a computational gain factor of about 2 with respect to previous methods. A ternary single-check version of the Wagner rule is applied for efficient soft decoding of the (12, 6) ternary Golay code.

124 citations
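The single-check Wagner rule that the generalization above builds on can be stated in a few lines: take hard decisions on every bit and, if the overall parity check fails, flip the least reliable bit. A minimal sketch, assuming log-likelihood-ratio inputs (this is the classical single-check rule, not the paper's multiple-check generalization):

```python
def wagner_decode(llrs):
    """Wagner-rule decoding of a single-parity-check (even parity) code.

    llrs: per-bit log-likelihood ratios (positive -> bit 0 more likely).
    Returns the maximum-likelihood even-parity codeword.
    """
    # Hard decision: a bit is 1 when its LLR is negative.
    bits = [1 if l < 0 else 0 for l in llrs]
    # If the overall parity is odd, flip the least reliable bit.
    if sum(bits) % 2 == 1:
        weakest = min(range(len(llrs)), key=lambda i: abs(llrs[i]))
        bits[weakest] ^= 1
    return bits
```

For example, `wagner_decode([2.1, -0.3, 1.7, 0.9])` hard-decides `[0, 1, 0, 0]`, finds odd parity, and flips position 1 (smallest |LLR|), returning the all-zero codeword.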


Proceedings ArticleDOI
11 Jun 1989
TL;DR: Three methods of generating inherently unlimited concurrency in Viterbi decoding, for both controllable and uncontrollable shift register processes and Markov processes, are described, and make real-time Viterbi decoding in the gigabit-per-second range feasible for convolutional and trellis codes.
Abstract: The sequential nature of the Viterbi algorithm places an inherent upper limit on the decoding throughput of the algorithm in a given integrated circuit technology and thereby restricts its applications. Three methods of generating inherently unlimited concurrency in Viterbi decoding, for both controllable and uncontrollable shift register processes and Markov processes, are described. Concurrent decoders using these methods can apply high-throughput architectures with an overhead of pipeline latches or parallel hardware. A feasible method for bypassing the hardware limit is also proposed for decoding at an arbitrarily high as well as variable throughput. The proposed methods make real-time Viterbi decoding in the gigabit-per-second range feasible for convolutional and trellis codes.

72 citations
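The sequential add-compare-select (ACS) recursion that limits throughput is the inner loop sketched below, shown for the standard rate-1/2, constraint-length-3 (7, 5) convolutional code. This scalar form is the baseline that concurrent methods restructure; it is an illustrative sketch, not the paper's architecture:

```python
# Rate-1/2, constraint-length-3 convolutional code with the standard
# (7, 5) octal generators; hard-decision Viterbi decoding.
G = (0b111, 0b101)  # generator taps on (input bit, two state bits)

def step(state, bit):
    """Return (next_state, output_pair) for one encoder step."""
    reg = (bit << 2) | state          # 3-bit window: input + 2 state bits
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, out

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, o = step(state, b)
        out.extend(o)
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding; the ACS loop over states below
    is the inherently sequential recursion in the trellis time index."""
    INF = float("inf")
    metric = [0] + [INF] * 3          # start in state 0
    paths = [[] for _ in range(4)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):            # add-compare-select per state
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, o = step(s, b)
                m = metric[s] + (o[0] != r[0]) + (o[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]
```

The time loop cannot be pipelined naively because each step's metrics depend on the previous step's, which is exactly the dependency the concurrency methods above attack.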


Journal ArticleDOI
TL;DR: A generalized Euclidean algorithm, which is based on a generalized polynomial division algorithm, is presented and it is shown that the algorithm can be applied to the decoding of many cyclic codes for which multiple syndrome sequences are available.
Abstract: The problem of finding a linear-feedback shift register of shortest length capable of generating prescribed multiple sequences is considered. A generalized Euclidean algorithm, which is based on a generalized polynomial division algorithm, is presented. A necessary and sufficient condition for the uniqueness of the solution is given. When the solution is not unique, the set of all possible solutions is also derived. It is shown that the algorithm can be applied to the decoding of many cyclic codes for which multiple syndrome sequences are available. When it is applied to the case of a single sequence, the algorithm reduces to that introduced by Y. Sugiyama et al. (Inf. Control, vol.27, p.87-9, Feb. 1975) in the decoding of BCH codes.

71 citations
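For the single-sequence case the abstract mentions, the shortest-LFSR problem is classically solved by the Berlekamp-Massey algorithm, which produces the same answer as the Sugiyama Euclidean approach. A GF(2) sketch:

```python
def berlekamp_massey(s):
    """Shortest LFSR generating the binary sequence s (GF(2)).

    Returns (L, c): the LFSR length L and the connection polynomial
    coefficients c[0..L] with c[0] = 1.
    """
    c = [1] + [0] * len(s)   # current connection polynomial C(x)
    b = [1] + [0] * len(s)   # copy of C(x) from the last length change
    L, m = 0, 1              # current length, steps since last change
    for n in range(len(s)):
        # Discrepancy: does the current LFSR predict s[n]?
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d == 0:
            m += 1
        else:
            t = c[:]
            for i in range(len(s) + 1 - m):
                c[i + m] ^= b[i]     # C(x) <- C(x) + x^m * B(x)
            if 2 * L <= n:
                L, b, m = n + 1 - L, t, 1
            else:
                m += 1
    return L, c[:L + 1]
```

On the sequence 1, 0, 1, 1, 0, 1, 1, 0 (generated by s[n] = s[n-1] + s[n-2]) it recovers an LFSR of length 2 with connection polynomial 1 + x + x^2.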


Journal ArticleDOI
TL;DR: An algorithm is given that decodes the Leech lattice with not much more than twice the complexity of soft-decision decoding of the Golay code, and is readily generalized to lattices that can be expressed in terms of binary code formulas.
Abstract: An algorithm is given that decodes the Leech lattice with not much more than twice the complexity of soft-decision decoding of the Golay code. The algorithm has the same effective minimum distance as maximum-likelihood decoding and increases the effective error coefficient by less than a factor of two. The algorithm can be recognized as a member of the class of multistage algorithms that are applicable to hierarchical constructions. It is readily generalized to lattices that can be expressed in terms of binary code formulas, and in particular to construction B lattices.

68 citations


Journal ArticleDOI
TL;DR: The complexity of the algorithm is shown to be asymptotically equal to that of the Viterbi algorithm and is very close for practical noisy channels and the latter is shown by means of computer simulation.
Abstract: The complexity of the algorithm is shown to be asymptotically equal to that of the Viterbi algorithm and is very close for practical noisy channels. The latter is shown by means of computer simulation. The algorithm can be applied directly in an environment where soft-decision decoding is required or preferred. However, depending on the environment, some simplifications may be possible and/or necessary, resulting in suboptimal algorithms. Codes suitable for use with the algorithm should have short total memory length.

63 citations


Journal ArticleDOI
TL;DR: The genie-aided algorithm is used as a tool in estimating the asymptotic behavior of the M-algorithm and the results conform closely to experience with convolutional codes due to the similar distance structure of the codes.
Abstract: Comparisons are made of a genie-aided sequential algorithm due to D. Haccoun and M.J. Ferguson (1975), the Viterbi algorithm, the M-algorithm, and the Fano algorithm for rate-1/2 and rate-2/3 trellis modulation codes on rectangular signal sets. The effects of signal-to-noise ratio and decoding-delay constraints on the choice of decoding algorithms for framed data are examined by computer simulation. Additionally, the genie-aided algorithm is used as a tool in estimating the asymptotic behavior of the M-algorithm. In general, the results conform closely to experience with convolutional codes due to the similar distance structure of the codes. The Fano algorithm produces good error performance with a low average number of computations when long decoding delay is permissible. The M-algorithm provides a savings in computation compared to the Viterbi algorithm if a small decoding delay is required.

57 citations


Journal ArticleDOI
TL;DR: An efficient algorithm is presented for maximum-likelihood soft-decision decoding of the Leech lattice and seems to achieve a computational complexity comparable to that of the equivalent trellis codes.
Abstract: An efficient algorithm is presented for maximum-likelihood soft-decision decoding of the Leech lattice. The superiority of this decoder with respect to both computational and memory complexities is demonstrated in comparison with previously published decoding methods. Gain factors in the range of 2-10 are achieved. The authors conclude with some more advanced ideas for achieving a further reduction of the algorithm complexity based on a generalization of the Wagner decoding method to two parity constraints. A comparison with the complexity of some trellis-coded modulation schemes is discussed. The decoding algorithm presented seems to achieve a computational complexity comparable to that of the equivalent trellis codes.

41 citations


Journal ArticleDOI
01 Jun 1989
TL;DR: A new hardware decoder for double-error-correcting binary BCH codes of primitive length, based on a modified step-by-step decoding algorithm, which is suitable for long block codes working at high data rates.
Abstract: Presents a new hardware decoder for double-error-correcting binary BCH codes of primitive length, based on a modified step-by-step decoding algorithm. This decoding algorithm can be easily implemented with VLSI circuits. As the clock rate of the decoder is independent of block length and is only twice the data rate, the decoder is suitable for long block codes working at high data rates. The decoder comprises a syndrome calculation circuit, a comparison circuit and a decision circuit, which can be realised by linear feedback shift registers, ROMs and logical gates. The decoding algorithm, circuit design and data processing sequence are described in detail. The circuit complexity, decoding speed and data rate of the new decoder are also discussed and compared with other decoding methods.

35 citations


Journal ArticleDOI
TL;DR: A multilevel approach to the design of DC-free line codes is presented, which allows the use of suboptimal staged decoding procedures that have performance/complexity advantages over maximum-likelihood decoding and the improved run-length/accumulated-charge parameters.
Abstract: A multilevel approach to the design of DC-free line codes is presented. The different levels can be used for different purposes, for example, to control the maximum accumulated charge or to guarantee a certain minimum distance. The advantages of codes designed by this method over similar codes are the improved run-length/accumulated-charge parameters, higher transmission rate, and the systematic nature of the code construction. The multilevel structure allows the redundancy in the signal selection procedure to be allocated efficiently among the different levels. It also allows the use of suboptimal staged decoding procedures that have performance/complexity advantages over maximum-likelihood decoding.

Proceedings ArticleDOI
08 May 1989
TL;DR: A neural net architecture is implemented as a maximum-likelihood decoder, taking advantage of parallel computation for the neural net, and all other NP-complete problems may be solvable by neural nets.
Abstract: A neural net architecture is implemented as a maximum-likelihood decoder. Any binary block code can be easily decoded. In a von Neumann computer, the computation of all the Hamming distances requires exponential time; the neural net, however, taking advantage of parallel computation, needs only polynomial time. In fact, picking the maximum is a polynomial-time problem. Since the decoding problem is NP-complete, all other NP-complete problems may be solvable by neural nets.
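The exhaustive Hamming-distance search described above, which the neural net evaluates in parallel, can be written directly for a toy code. A serial sketch (the generator matrices used in the examples are illustrative, not from the paper):

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length bit vectors."""
    return sum(x != y for x, y in zip(a, b))

def ml_decode(received, generator):
    """Exhaustive maximum-likelihood (minimum Hamming distance)
    decoding of a binary linear code given its k x n generator matrix.

    Serial cost is exponential in k (2^k codewords); the neural-net
    view computes all distances concurrently and picks the minimum.
    """
    k = len(generator)
    best = None
    for msg in product((0, 1), repeat=k):
        # Encode: codeword = msg * G over GF(2).
        cw = [sum(m & g for m, g in zip(msg, col)) & 1
              for col in zip(*generator)]
        d = hamming(received, cw)
        if best is None or d < best[0]:
            best = (d, cw)
    return best[1]
```

For the length-3 repetition code, `ml_decode([1, 0, 1], [[1, 1, 1]])` returns `[1, 1, 1]`, the codeword at distance 1.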

Journal ArticleDOI
TL;DR: The concept of weighted-output decoding, which enables us to perform many successive decodings without information loss, is introduced, and the conventional minimal distance criterion is criticized and a proximity measure of the weight distribution of the code with respect to the binomial one is proposed.
Abstract: We introduce the concept of weighted-output decoding, which enables us to perform many successive decodings without information loss. The design of a coding system for the additive white Gaussian channel can thus be viewed as a means for combining many simple codes. We illustrate this concept by the example of parity-check codes which are combined according to Elias' iterated product. Examination of the performance thus obtained leads us to criticize the conventional minimal distance criterion and to propose as a criterion a proximity measure of the weight distribution of the code with respect to the binomial one. According to this point of view, the iterated product of parity-check codes appears as a means of decimation of the set of n-tuples.
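Elias' iterated product of parity-check codes starts from the familiar row/column parity grid. A hard-decision sketch of one level of the product construction, correcting a single error at the intersection of a failing row and column (the paper's weighted-output decoding is not shown):

```python
def encode_product(bits, rows, cols):
    """Single-parity-check product code: a rows x cols data grid with
    one parity bit per row and per column (corner parity included)."""
    assert len(bits) == rows * cols
    grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    for row in grid:
        row.append(sum(row) % 2)                       # row parities
    grid.append([sum(col) % 2 for col in zip(*grid)])  # column parities
    return grid

def correct_single(grid):
    """Flip the bit at the intersection of the failing row and column
    (corrects any single error; more errors only get detected)."""
    bad_r = [r for r, row in enumerate(grid) if sum(row) % 2]
    bad_c = [c for c, col in enumerate(zip(*grid)) if sum(col) % 2]
    if len(bad_r) == 1 and len(bad_c) == 1:
        grid[bad_r[0]][bad_c[0]] ^= 1
    return grid
```

Iterating this product, and decoding it with weighted rather than hard outputs, is the construction the abstract examines.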

Proceedings ArticleDOI
15 Oct 1989
TL;DR: The authors study the tradeoff between truncation length and performance loss for the two most common variations of Viterbi's algorithm: best- state decoding (BSD) and fixed-state decoding (FSD).
Abstract: Practical Viterbi decoders often fall significantly short of full maximum likelihood decoding performance because of survivor truncation effects. In the present work the authors study the tradeoff between truncation length and performance loss for the two most common variations of Viterbi's algorithm: best-state decoding (BSD) and fixed-state decoding (FSD). It is found that FSD survivors should be about twice as long as BSD survivors for comparable performance.

Proceedings ArticleDOI
27 Nov 1989
TL;DR: It is shown that the number of ACS operations is considerably reduced and that, because of the elimination of incorrect paths, the error performance of the proposed decoding is superior to that of conventional hard-decision Viterbi decoding when the error due to correct path elimination is negligible.
Abstract: A new type of Viterbi decoding algorithm is proposed to realize fast decoding by reducing add-compare-select (ACS) operations, which occupy the dominant part of Viterbi decoding. In the decoding trellis, branches with low probability of belonging to the correct path are eliminated before decoding, based on the detected signal level, which reduces the number of merge events at which an ACS operation must be performed. The reduction rate of the number of ACS operations and the bit-error probability with hard decision are derived for codes with rate 1/2. It is shown that the number of ACS operations is considerably reduced and that, because of the elimination of incorrect paths, the error performance of the proposed decoding is superior to that of conventional hard-decision Viterbi decoding when the error due to correct path elimination is negligible. Reduced Viterbi decoding cuts the number of ACS operations by up to 10% with an additional coding gain of 0.6 dB for the code with k=3, and by up to about 20% without performance degradation for k=5.

Proceedings ArticleDOI
15 Oct 1989
TL;DR: A new acceptance criterion is developed that gives better performance for GMD decoding than can be obtained with previous criteria and leads to a decoding of many received vectors for which previous criteria fail to find a code word.
Abstract: Generalized-minimum-distance (GMD) decoding is a soft-decision decoding algorithm that uses an acceptance criterion and a sequence of attempts at errors-and-erasures decoding in order to find the code word that is closest in generalized distance to the received vector. In the present work the authors develop a new acceptance criterion that gives better performance for GMD decoding than can be obtained with previous criteria. This acceptance criterion will lead to a decoding of many received vectors for which previous criteria will fail to find a code word. For M-ary signaling, the authors generalize the weights used in GMD decoding to permit each of the possible M symbol values to have a different weight.
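The GMD trial loop itself is standard: erase 0, 2, 4, ... of the least reliable symbols, attempt errors-and-erasures decoding, and accept a candidate via a generalized-distance test. A sketch using Forney's classical acceptance criterion (the improved criterion of this paper differs); `ee_decode` is a hypothetical callback, not a fixed API:

```python
def gmd_decode(hard, reliab, d_min, ee_decode):
    """Generalized-minimum-distance decoding sketch.

    hard:      hard-decision symbol vector
    reliab:    per-symbol reliability weights in [0, 1]
    d_min:     minimum distance of the code
    ee_decode: errors-and-erasures decoder, called as
               ee_decode(symbols, erasure_positions) -> codeword or None
               (a hypothetical callback supplied by the caller)
    """
    order = sorted(range(len(hard)), key=lambda i: reliab[i])
    for n_erase in range(0, d_min, 2):   # erase 0, 2, 4, ... symbols
        cw = ee_decode(hard, set(order[:n_erase]))
        if cw is None:
            continue
        # Forney's classical acceptance test on generalized distance:
        # accept if sum of (1 - w) over matches plus (1 + w) over
        # mismatches is below the minimum distance.
        gd = sum((1 - reliab[i]) if cw[i] == hard[i] else (1 + reliab[i])
                 for i in range(len(hard)))
        if gd < d_min:
            return cw
    return None
```

With a length-3 repetition code (d_min = 3) and a low-reliability third symbol, the very first trial already yields an accepted codeword.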

Proceedings ArticleDOI
A. Ushirokawa1, H. Matsui1
27 Nov 1989
TL;DR: A novel class of multilevel codes composed of two-level codes as component codes is proposed for achieving high coding gain and presents a class of coded modulation superior to trellis-coded modulation in simultaneous realization of high codinggain, low/moderate decoder complexity, and short decoding delay.
Abstract: A novel class of multilevel codes is proposed for achieving high coding gain. Multilevel codes composed of two-level codes are investigated from the standpoint of the systematic design of codes. Two-level codes as component codes are shown to be preferable to a combination of conventional one-level codes in coding gain and decoder delay. For a 19.2 kb/s voiceband modem application, this scheme presents a class of coded modulation superior to trellis-coded modulation in simultaneous realization of high coding gain, low/moderate decoder complexity, and short decoding delay.

Journal ArticleDOI
TL;DR: A new decoding method is proposed, in which the error-correcting capability is improved by controlling the order of error decisions in APP decoding, which exhibits better characteristics than APP decoding and approximate APP decoding both in bit error rate and block error rate.
Abstract: Soft decision decoding is a decoding method which tries to improve the error-correcting capability by utilizing information concerning the reliability of the received symbols. One of the soft decision decoding methods for practically useful block codes is APP (a posteriori probability) decoding, applicable to majority-logic decodable codes. In APP decoding, the error is decided for each symbol as in majority-logic decoding. This paper proposes a new decoding method (variable threshold APP decoding), in which the error-correcting capability is improved by controlling the order of error decisions in APP decoding. The performance is evaluated by simulation. As a result, it is shown that variable threshold APP decoding exhibits better characteristics than APP decoding and approximate APP decoding in both bit error rate and block error rate. When, for example, the proposed decoding is applied to the (73, 45) difference-set cyclic code, a coding gain of 1.0 dB is obtained compared with approximate APP decoding. In the proposed decoding method, the time required for decoding is increased, while the hardware scale is almost the same as that of approximate APP decoding.

Proceedings ArticleDOI
07 Mar 1989
TL;DR: A Reed-Solomon generator matrix which possesses a certain inherent structure in GF(2) is derived, and a representation of the code as a union of cosets, each coset being an interleaving of several binary BCH codes, is obtained.
Abstract: In this paper we present a Reed-Solomon decoder that makes use of bit soft decision information. A Reed-Solomon generator matrix which possesses a certain inherent structure in GF(2) is derived. Using this structure, a representation of the code as a union of cosets, each coset being an interleaving of several binary BCH codes, is obtained. Such a partition into cosets provides a clue for efficient bit-level soft decision decoding. The proposed decoding algorithms are in many cases orders of magnitude more efficient than conventional techniques.

Book
01 Nov 1989
TL;DR: The future pan European mobile radiotelephone system a short overview of direct-sequence spread-spectrum digital mobile radio transmission and linear recurrence relations and an extended subresultant algorithm.
Abstract: Codes and character sums.- Codes from some Artin-Schreier curves.- Families of codes exceeding the Varshamov-Gilbert bound.- Polynomial factorization using Brill-Noether algorithm.- New bounds on cyclic codes from algebraic curves.- Exponential sums and the Carlitz-Uchiyama bound.- Bounds on the use of the postal channel.- A new authentication algorithm.- A method for finding codewords of small weight.- Generating codewords in real space: Application to decoding.- Weighted decoding of linear block codes by solving a system of implicit equations.- Suboptimum weighted-output symbol-by-symbol decoding of block codes.- Results of generalized minimum distance decoding for block code of rate 1/2.- An overview of recent results in the theory of burst-correcting codes.- Note for computing the minimum polynomial of elements in large finite fields.- A quaternary cyclic code, and a family of quadriphase sequences with low correlation properties.- A simple description of Kerdock codes.- Relation between the minimum weight of a linear code over GF(qm) and its q-ary image over GF(q).- Covering in hypercubes.- More on the plane of order 10.- Linear recurrence relations and an extended subresultant algorithm.- The future pan European mobile radiotelephone system a short overview.- Experimental direct-sequence spread-spectrum digital mobile radio transmission.- Concatenated coding schemes for H.F. (High Frequency) channels.- Bandwidth efficient coding on channels disturbed by jamming and impulse noise.- Error correcting coding scheme against partial time jammers.- Evaluation of a coding design for a very noisy channel / Evaluation d'une Configuration de Codage Pour une Ligne tres Bruitee.- Design and implementation of an asynchronous transmission process by code division multiple access.- Is minimal distance a good criterion?.- Open problem 1: Covering radius of the circuit code.- Open problem 2: Cyclic codes over rings and p-adic fields.


15 Feb 1989
TL;DR: A simplified procedure is developed to decode the three possible errors in a (23,12) Golay codeword, and it is shown that this algorithm is modular, regular and naturally suitable for both Very Large Scale Integration (VLSI) and software implementation.
Abstract: A simplified procedure is developed to decode the three possible errors in a (23,12) Golay codeword. A computer simulation shows that this algorithm is modular, regular and naturally suitable for both Very Large Scale Integration (VLSI) and software implementation. An extension of this new decoding procedure is also used to decode the 1/2-rate (24,12) Golay code, thereby correcting three and detecting four errors.

Journal ArticleDOI
TL;DR: A parallelisation algorithm of decoding convolutional codes with the Viterbi algorithm that suits the VLSI realisation very well and allows high-speed decoding.
Abstract: A parallelisation algorithm of decoding convolutional codes with the Viterbi algorithm is presented. The architecture of parallel decoding analysed here suits the VLSI realisation very well and allows high-speed decoding.

Patent
18 Aug 1989
TL;DR: A shortest-code-group decoding logic circuit is proposed that decodes code groups with high appearance frequency and short code length through a combination of AND and OR circuits, achieving a faster decoding speed than a decoding table employing a ROM.
Abstract: PURPOSE: To speed up decoding without increasing the circuit scale by providing a shortest-code-group decoding logic circuit that decodes code groups with high appearance frequency and short code length through a combination of AND and OR circuits. CONSTITUTION: The shortest-code-group decoding section receives, for example, 7 kinds of code groups whose designated code length is 4 or below, and outputs the decoded data, that is, a fixed-length data word (EXD1) and a code length (CDS1). It consists of a logic circuit comprising only a combination of AND and OR circuits, and has a faster decoding speed than a decoding table employing a ROM. For example, the decoding speed of the shortest-code-group decoding section 2 using the logic circuit is 50 ns, faster than the 200 ns access time of a decoding table employing a ROM. Thus, when code data whose code length is less than the designated length is decoded, multiplexers 4 and 5 select the decoding result of the shortest-code-group decoding section 2; the decoded data (EXD1) is output to a latch 7 and the code length (CDS1) is output to a code data input section 1.

Journal ArticleDOI
TL;DR: The union and upper bounds of the probability of error Pc for soft decision (unquantised) optimum threshold decoding of binary phase-shift keying (PSK) with the Golay (23, 12) code are found.
Abstract: The union and upper bounds of the probability of error Pc for soft decision (unquantised) optimum threshold decoding of binary phase-shift keying (PSK) with the Golay (23, 12) code are found. Even though soft decision decoding performs better, as expected, than hard-decision bounded-distance decoding, the difference in performance between the two decoding schemes decreases as the data sample size N becomes large, while the difference between the upper and union bounds remains almost constant. For example, for Pc ≈ 10^-5, an improvement greater than 3 dB can be achieved when going from N = 100 to 400. This result is in direct contrast with the equivalent result in white Gaussian noise, where a 2 dB difference exists between hard and soft-decision decoding for all coded digital communications.
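A union bound of this kind follows from the known weight distribution of the binary (23, 12) Golay code. A sketch of the standard textbook bound for coherent BPSK on the AWGN channel (not the paper's threshold-decoding bounds):

```python
from math import erfc, sqrt

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

# Weight distribution of the binary (23, 12) Golay code: A_d codewords
# of each nonzero weight d.
WEIGHTS = {7: 253, 8: 506, 11: 1288, 12: 1288, 15: 506, 16: 253, 23: 1}

def union_bound(ebno_db, n=23, k=12):
    """Union bound on codeword error probability for soft-decision
    maximum-likelihood decoding of BPSK on the AWGN channel:
    P_e <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    rate = k / n
    return sum(a * q(sqrt(2 * d * rate * ebno))
               for d, a in WEIGHTS.items())
```

At low Eb/N0 the bound can exceed 1 (it is only an upper bound on a probability); it becomes tight as Eb/N0 grows.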

Journal ArticleDOI
TL;DR: A simple complete decoding algorithm for the (11,6,5) perfect ternary Golay code is presented, based on a step-by-step method and requires only 17 shift operations for decoding one received word.
Abstract: A simple complete decoding algorithm for the (11,6,5) perfect ternary Golay code is presented. This algorithm is based on a step-by-step method and requires only 17 shift operations for decoding one received word.

Journal ArticleDOI
TL;DR: An error-trapping (ET) decoder is described which effectively uses softly-quantised demodulator output levels as an integral part of the decoding process, which differs from other approaches to soft-decision decoding.
Abstract: An error-trapping (ET) decoder is described which effectively uses softly-quantised demodulator output levels as an integral part of the decoding process. This soft ET decoder operates on multilevel symbols, which belong to a Galois field and represent the quantised demodulator output levels of a communication system. The procedure differs from other approaches to soft-decision decoding, which employ conventional hard-decision decoders where, as separate items, the digit reliability measures are used in an ad-hoc fashion.

Dissertation
01 Jan 1989
TL;DR: It is shown that any linear code which is truly random, in the sense that there is no concise way of specifying the code, is good, and that a generalization of information set decoding gives more efficient algorithms than any other approach known.
Abstract: A central paradox of coding theory has been noted for many years, and concerns the existence and construction of the best codes. Virtually every linear code is "good" in the sense that it meets the Gilbert-Varshamov bound on distance versus redundancy. Despite the sophisticated constructions for codes derived over the years, however, no one has succeeded in demonstrating a constructive procedure which yields such codes over arbitrary symbol fields. A quarter of a century ago, Wozencraft and Reiffen, in discussing this problem, stated that "we are tempted to infer that any code of which we cannot think is good." Using the theory of Kolmogorov complexity, we show the remarkable fact that this statement holds true in a rigorous mathematical sense: any linear code which is truly random, in the sense that there is no concise way of specifying the code, is good. Furthermore, random selection of a code which does contain some constructive pattern results, with probability bounded away from zero, in a code which does not meet the Gilbert-Varshamov bound regardless of the block length of the code. In contrast to the situation for linear codes, we show that there are effectively random non-linear codes which have no guarantee on distance, and that over all rates, the average non-linear code has much lower distance than the average linear code. These techniques are used to derive original results on the performance of various classes of codes, including shortened cyclic, generalized Reed-Solomon, and general non-linear codes, under a variety of decoding strategies involving mixed burst- and random-error correction. The second part of the thesis deals with the problem of finding decoding algorithms for general linear codes. These algorithms are capable of full hard decision decoding or bounded soft decision decoding, and do not rely on any rare structure for their effectiveness. 
After a brief discussion of some aspects of the theory of NP-completeness as it relates to coding theory, we propose a simple model of a general decoding algorithm which is sufficiently powerful to be able to describe most of the known approaches to the problem. We provide asymptotic analysis of the complexity of various approaches to the problem under various decoding strategies (full hard decision decoding and bounded hard- and soft-decision decoding) and show that a generalization of information set decoding gives more efficient algorithms than any other approach known. Finally, we propose a new type of algorithm that synthesizes some of the advantages of information set decoding and other algorithms that exploit the weight structure of the code, such as the zero neighbours algorithm, and discuss its effectiveness.
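Plain information set decoding, the approach the thesis generalizes, re-encodes the received word from candidate information sets and keeps the closest resulting codeword. A toy sketch over GF(2) using a (7, 4) Hamming generator matrix (exhausting all sets is only feasible at toy size; practical variants sample sets randomly):

```python
from itertools import combinations

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.
    A is a k x k matrix (list of rows); returns x, or None if singular."""
    k = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

def isd_decode(received, G):
    """Information-set decoding sketch: for each candidate information
    set, solve for the message that matches the received bits on that
    set, re-encode, and keep the closest codeword."""
    k, n = len(G), len(G[0])
    best = None
    for I in combinations(range(n), k):
        A = [[G[j][i] for j in range(k)] for i in I]   # rows of G_I^T
        m = gf2_solve(A, [received[i] for i in I])
        if m is None:            # these columns are not an information set
            continue
        cw = [sum(mj & G[j][i] for j, mj in enumerate(m)) & 1
              for i in range(n)]
        d = sum(a != b for a, b in zip(cw, received))
        if best is None or d < best[0]:
            best = (d, cw)
    return best[1]
```

A set containing no error positions re-encodes the transmitted codeword exactly, which is why sampling enough sets succeeds with high probability.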

15 May 1989
TL;DR: It is shown that the method can be extended to any decoding method which can correct three errors in the (23,12) Golay code.
Abstract: A decoding method for a (23,12) Golay code is extended to the important 1/2-rate (24,12) Golay code so that three errors can be corrected and four errors can be detected. It is shown that the method can be extended to any decoding method which can correct three errors in the (23,12) Golay code.

Journal ArticleDOI
TL;DR: Recently proposed techniques for constructing nonlinear distance-invariant codes from combinatorial designs are generalised, of particular interest among nonlinear codes because their decoding error probabilities can be readily calculated.
Abstract: Recently proposed techniques for constructing nonlinear distance-invariant codes from combinatorial designs are generalised. Such codes are of particular interest among nonlinear codes because their decoding error probabilities can be readily calculated.