
Showing papers on "List decoding published in 1979"


Journal ArticleDOI
TL;DR: Two specific problems are discussed: the use of previous decisions, which leads to a weighted generalization of feedback decoding, and the extension of replication decoding to nonsystematic codes.
Abstract: Any symbol in a redundant code can be recovered when it belongs to certain erasure patterns. Several alternative expressions of a given symbol, to be referred to as its replicas, can therefore be computed in terms of other ones. Decoding is interpreted as deciding upon a received symbol, given itself and a number of such replicas, expressed in terms of other received symbols. For linear q-ary (n,k) block codes, soft-decision demodulation and memoryless channels, the maximum-likelihood decision rule on a given symbol is formulated in terms of r \leq n - k linearly independent replicas from the parity-check equations. All replicas deriving from the r selected replicas by linear combination are actually taken into account in this decision rule. Its implementation can be direct, or can use transformations or a sequential circuit implementing a trellis representation of the parity-check matrix. If r = n - k, decoding is optimum, in the sense of symbol-by-symbol maximum likelihood. Simplification results in the transformed and sequential implementations when r < n - k. If the selected replicas are disjoint, generalized (q-ary, weighted) threshold decoding results. The decoding process can easily be modified in order to provide word-by-word maximum-likelihood decoding. Convolutional codes are briefly considered. Two specific problems are discussed: the use of previous decisions, which leads to a weighted generalization of feedback decoding, and the extension of replication decoding to nonsystematic codes.

102 citations
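The disjoint-replica case the abstract mentions reduces to soft threshold decoding. A minimal sketch for the binary (3,1) repetition code (my own toy instance, not the paper's general q-ary formulation): each parity check yields a replica of the first symbol, and the symbol-by-symbol maximum-likelihood decision becomes a sum of log-likelihood ratios.

```python
def map_symbol_decision(llrs):
    # Symbol-by-symbol MAP decision for the (3,1) repetition code.
    # Each parity check supplies a replica of the symbol, and with
    # disjoint replicas the maximum-likelihood rule reduces to a
    # weighted threshold test: sum the per-symbol log-likelihood
    # ratios (LLR > 0 favours bit 0) and take the sign.
    return 0 if sum(llrs) >= 0 else 1
```

With non-disjoint replicas the rule no longer decomposes into a plain sum, which is where the paper's trellis and transform implementations come in.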


ReportDOI
01 Dec 1979
TL;DR: Simulation results for binary, 8-ary PM, and 16-QASK symbol sets transmitted over random walk and sinusoidal jitter channels are presented, and compared with results one may obtain with a decision-directed algorithm, or with the binary Viterbi algorithm introduced by Ungerboeck.
Abstract: The problem of simultaneously estimating phase and decoding data symbols from baseband data is posed. The phase sequence is assumed to be a random sequence on the circle and the symbols are assumed to be equally-likely symbols transmitted over a perfectly equalized channel. A dynamic programming algorithm (Viterbi algorithm) is derived for decoding a maximum a posteriori (MAP) phase-symbol sequence on a finite-dimensional phase-symbol trellis. A new and interesting principle of optimality for simultaneously estimating phase and decoding phase-amplitude coded symbols leads to an efficient two-step decoding procedure for decoding phase-symbol sequences. Simulation results for binary, 8-ary PM, and 16-QASK symbol sets transmitted over random walk and sinusoidal jitter channels are presented, and compared with results one may obtain with a decision-directed algorithm, or with the binary Viterbi algorithm introduced by Ungerboeck. When phase fluctuations are severe, and the symbol set is rich (as in 16-QASK), MAP phase-symbol sequence decoding on circles is superior to Ungerboeck's technique, which in turn is superior to decision-directed techniques.

60 citations


Journal ArticleDOI
TL;DR: Two modifications of the basic correlation decoding approach are presented; one of them yields a nonexhaustive optimum word decoding algorithm whose complexity depends upon the "projecting" structure of the code.
Abstract: Two modifications of the basic correlation decoding approach are presented. One of them yields a nonexhaustive optimum word decoding algorithm whose complexity depends upon the "projecting" structure of the code. This algorithm is then modified to yield a second decoding algorithm which, while not optimum, has lower complexity. Applications to the AWGN channel are discussed and performance curves are given for (24, 12) and (31, 15) codes.

56 citations


Journal ArticleDOI
TL;DR: A generalized and unified method of interpolation and transformation is used to generate all known maximal distance codes and important subfield subcodes and some further generalizations of Srivastava codes are constructed.
Abstract: A generalized and unified method of interpolation and transformation is used to generate all known maximal distance codes and important subfield subcodes. Some powerful tools for the analysis and synthesis of maximal distance codes are presented, as well as a generalization of the Mattson-Solomon polynomial and Lagrange and Fourier transforms to more general functions. In certain cases new codes can be obtained by differentiating a kernel function. Some further generalizations of Srivastava codes are constructed. A general method of decoding is given which can be used for complete decoding of all coset leaders.

23 citations


Journal ArticleDOI
TL;DR: An iterative extension of the basic algebraic analog decoding scheme is discussed, and performance curves are given for the (17,9), (21,11), and (73,45) codes on the AWGN channel.
Abstract: Bit-by-bit soft-decision decoding of binary cyclic codes is considered. A significant reduction in decoder complexity can be achieved by requiring only that the decoder correct all analog error patterns which fall within a Euclidean sphere whose radius is equal to half the minimum Euclidean distance of the code. Such a "maximum-radius" scheme is asymptotically optimum for the additive white Gaussian noise (AWGN) channel. An iterative extension of the basic algebraic analog decoding scheme is discussed, and performance curves are given for the (17,9), (21,11), and (73,45) codes on the AWGN channel.

23 citations


Journal ArticleDOI
TL;DR: A new scheme for reducing the numerical complexity of the standard B.C.H. and Reed–Solomon (R.S.) decoding algorithms is developed and the process of calculating syndromes over GF(2^m) is shown to require only a small fraction of the number of multiplications and additions that is required by using standard methods.
Abstract: A new scheme for reducing the numerical complexity of the standard B.C.H. and Reed–Solomon (R.S.) decoding algorithms is developed. Specifically, the process of calculating syndromes over GF(2^m) is shown to require only a small fraction of the number of multiplications and additions that is required by using standard methods. As an example, the calculation of the 32 syndromes of the (255, 223, 33) Reed–Solomon code (NASA standard for concatenation with convolutional codes) is shown to require 90% fewer multiplications and 78% fewer additions than the conventional method of computation. A computer simulation also verifies these results.

7 citations
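For reference, the conventional syndrome computation that such schemes accelerate can be sketched over a small field. The sketch below uses GF(2^4) with primitive polynomial x^4 + x + 1 and table-based arithmetic (both my illustrative choices; the paper works over general GF(2^m) and reduces the operation count below this baseline).

```python
# GF(2^4) arithmetic via log/antilog tables, primitive polynomial x^4 + x + 1.
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):        # duplicated tail avoids a mod in gf_mul
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, num_syndromes):
    # Conventional computation: S_j = sum_i r_i * alpha^(i*j),
    # j = 1..num_syndromes, evaluated by Horner's rule.
    out = []
    for j in range(1, num_syndromes + 1):
        alpha_j = EXP[j % 15]
        s = 0
        for r in reversed(received):
            s = gf_mul(s, alpha_j) ^ r
        out.append(s)
    return out
```

A single error of value v at position p gives S_j = v * alpha^(p*j), which is the relation the later decoding steps exploit.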


Journal ArticleDOI
TL;DR: Using continued fractions, a simplified algorithm for decoding B.C.H. and R.S. codes is developed that corrects both erasures and errors over a finite field GF(q^m) and that is both simpler to understand and to implement than more conventional algorithms.
Abstract: Using continued fractions, a simplified algorithm for decoding B.C.H. and R.S. codes is developed that corrects both erasures and errors over a finite field GF(q^m). The decoding method is a modification of the Forney-Berlekamp technique. It is believed that the present scheme is both simpler to understand and to implement than more conventional algorithms.

7 citations


01 Feb 1979
TL;DR: By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can decode codes with better performance than those presently in use, without requiring an unreasonable amount of computation.
Abstract: Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can decode codes with better performance than those presently in use, without requiring an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

5 citations


Journal ArticleDOI
TL;DR: A decoding scheme called w-distance decoding (or weighted erasure decoding) has been studied for decoding binary block codes on Q-ary output channels by computer simulation and analytic derivation of the probability of error bound.
Abstract: A decoding scheme called w-distance decoding (or weighted erasure decoding) has been studied for decoding binary block codes on Q-ary output channels by computer simulation and analytic derivation of the probability of error bound. Optimum distribution of w-weights and the optimum threshold level of quantization are obtained by both simulation and minimization of the probability of error bound. The asymptotic behavior (signal-to-noise ratio \rightarrow \infty) of the error bound is determined by numerical methods with the help of a digital computer.

4 citations
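A toy sketch of the w-distance idea, assuming Q = 4 quantization levels and a weight table of my own choosing (the paper optimizes these weights; the values below are purely illustrative):

```python
# Q = 4 output levels; W[level][bit] is the w-weight charged when the
# codeword bit is `bit` and the channel emitted `level`.  Levels 0 and 3
# are confident 0/1 decisions, levels 1 and 2 are near-erasure levels.
W = [(0, 3), (1, 2), (2, 1), (3, 0)]

def w_distance(levels, codeword):
    # Weighted-erasure distance between a quantized received word
    # and a binary codeword.
    return sum(W[lv][bit] for lv, bit in zip(levels, codeword))

def decode(levels, codebook):
    # w-distance decoding: pick the codeword at minimum w-distance
    # (exhaustive search; fine for a toy codebook).
    return min(codebook, key=lambda c: w_distance(levels, c))
```

With weights (0, 1) per level this collapses to ordinary Hamming-distance decoding of hard decisions; the intermediate levels are what let soft information influence the outcome.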


Journal ArticleDOI
TL;DR: A soft-decision decoding algorithm is proposed as a means of improving the displayed-character error rate of teletext transmissions by simple modifications to the decoder only.
Abstract: This letter proposes a soft-decision decoding algorithm as a means of improving the displayed-character error rate of teletext transmissions by simple modifications to the decoder only. The expected improvement is theoretically assessed, performance curves are given and implementation of the scheme is discussed.

3 citations


Journal ArticleDOI
TL;DR: It is shown how complete decoding of maximum distance separable codes can be accomplished by a vote-taking algorithm or an equivalent distance correlation method.
Abstract: It is shown how complete decoding of maximum distance separable codes can be accomplished by a vote-taking algorithm or an equivalent distance correlation method. It is also indicated where this method of decoding might find application.

15 Jun 1979
TL;DR: Decoding schemes are proposed for the tracking systems of the Galileo project, and quick-look decoding schemes requiring only shift registers are given for the DSN (7, 1/2) and (7, 1/3) convolutional codes.
Abstract: Decoding schemes are proposed for the tracking systems of the Galileo project. Quick-look decoding schemes requiring only shift registers are given for the DSN (7, 1/2) and (7, 1/3) convolutional codes. These schemes are used when the communication channel is error free. The schemes decode the data, symbol errors, and the lack of node synchronization.

Book ChapterDOI
01 Jan 1979
TL;DR: By extending the approach used in the paper to the effective utilisation of soft-decision decoding, the algorithm offers the possibility of maximum-likelihood decoding long convolutional codes.
Abstract: Minimum distance decoding of convolutional codes has generally been considered impractical for other than relatively short constraint length codes, because of the exponential growth in complexity with increasing constraint length. The minimum distance decoding algorithm proposed in the paper, however, uses a sequential decoding approach to avoid an exponential growth in complexity with increasing constraint length, and also utilises the distance and structural properties of convolutional codes to considerably reduce the amount of tree searching needed to find the minimum distance path. In this way the algorithm achieves a complexity that does not grow exponentially with increasing constraint length, and is efficient for both long and short constraint length codes. The algorithm consists of two main processes. Firstly, a direct mapping scheme, which automatically finds the minimum distance path in a single mapping operation, is used to eliminate the need for all short back-up tree searches. Secondly, when a longer back-up search is required, an efficient tree searching scheme is used to minimise the required search effort. By extending the approach used in the paper to the effective utilisation of soft-decision decoding, the algorithm offers the possibility of maximum-likelihood decoding of long convolutional codes.

Journal ArticleDOI
01 Jan 1979
TL;DR: By deriving upper bounds for the number of decoding operations required to advance one code segment, it is shown that far fewer operations are required than in the case of sequential decoding, implying a significant reduction in the severity of the buffer-overflow problem.
Abstract: In this paper we present analytical results on the computational requirement for the minimum-distance decoding of convolutional codes. By deriving upper bounds for the number of decoding operations required to advance one code segment, we show that far fewer operations are required than in the case of sequential decoding. This implies a significant reduction in the severity of the buffer-overflow problem. Then, we propose several modifications which could further reduce the computational effort required at long back-up distances. Finally, we investigate the trade-off between coding-parameter selection and storage requirement as an aid to quantitative decoder design. Examples and future aspects are also presented and discussed.

Journal ArticleDOI
TL;DR: Coding theorems are proved for list codes for compound channels and for codes within codes; the theorems imply corresponding results for ordinary codes, which are list codes where the maximum decoding list length permitted to the decoder is one.
Abstract: We prove coding theorems for list codes for compound channels and for codes within codes. The theorems imply corresponding results for what are usually called simply "codes", which are list codes where the maximum decoding list length permitted to the decoder is one. The Bergmans coding theorem for degraded channels ([4], Theorem 15.2.1) and the positive part of the Wyner-Ziv theorem ([4], Theorem 13.2.1) are easy consequences of our results.
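The reduction from list codes to ordinary codes is easy to see operationally. A minimal sketch (my own toy codebook, not from the paper): a list decoder returns every codeword within a given Hamming radius of the received word, and capping the list length at one recovers a unique decoder.

```python
def hamming(a, b):
    # Hamming distance between two equal-length words.
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, codebook, radius):
    # List decoding: return every codeword within `radius` of the
    # received word.  With maximum list length one this is ordinary
    # unique decoding.
    return [c for c in codebook if hamming(received, c) <= radius]
```

For the (3,1) repetition codebook {(0,0,0), (1,1,1)}, radius 1 yields unique decoding, while radius 2 can return both codewords, which is exactly the regime where list-size-greater-than-one results say something stronger.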

Proceedings ArticleDOI
01 Jan 1979
TL;DR: The encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems is described, using a hardware shift register for both high-speed encoding and syndrome calculation.
Abstract: This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps—(1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3, and 4.

Journal ArticleDOI
TL;DR: An algorithm for maximum likelihood decoding of terminated rate-1/ N convolutional codes with hard decisions is presented which is based upon, but is simpler than, the Viterbi algorithm.
Abstract: An algorithm for maximum likelihood decoding of terminated rate-1/ N convolutional codes with hard decisions is presented which is based upon, but is simpler than, the Viterbi algorithm. The algorithm makes use of an algebraic description of convolutional codes introduced by Massey et al. For reasonable values of the probability of error, the algorithm is shown to produce substantial savings in decoding complexity compared with the Viterbi algorithm.
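For comparison with such simplified schemes, the Viterbi baseline they are measured against can be sketched compactly. This is a minimal hard-decision Viterbi decoder for a terminated rate-1/2, K = 3 code with generators 7 and 5 octal (the specific code is my illustrative choice, not one from the paper):

```python
def encode(bits, K=3, gens=(0b111, 0b101)):
    # Rate-1/2 feedforward convolutional encoder; appends K-1 zeros
    # to terminate the trellis in the all-zero state.
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.extend(bin(state & g).count("1") & 1 for g in gens)
    return out

def viterbi_decode(received, nbits, K=3, gens=(0b111, 0b101)):
    # Hard-decision Viterbi decoding of the terminated code above:
    # keep, per trellis state, the minimum-Hamming-metric survivor.
    nstates = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (nstates - 1)
    paths = [[] for _ in range(nstates)]
    for t in range(nbits + K - 1):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * nstates
        new_paths = [None] * nstates
        for s in range(nstates):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)
                ns = full & (nstates - 1)
                sym = [bin(full & g).count("1") & 1 for g in gens]
                m = metric[s] + sum(x != y for x, y in zip(sym, r))
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    # Termination forces the best path to end in state 0; drop flush bits.
    return paths[0][:nbits]
```

Savings such as those reported in the paper are measured against exactly this kind of exhaustive state-by-state survivor update.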