
Showing papers on "List decoding published in 1967"


Journal ArticleDOI
TL;DR: The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.
Abstract: The probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code. For all but pathological channels the bounds are asymptotically (exponentially) tight for rates above R_{0} , the computational cutoff rate of sequential decoding. As a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length, the relative improvement increasing with rate. The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.
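As an aside (not part of the paper), the computational cutoff rate R_{0} referred to above has a simple closed form for the binary symmetric channel with equiprobable inputs; the short sketch below evaluates it, so the rate region "above R_{0}" can be made concrete for a given crossover probability p.

```python
import math

def cutoff_rate_bsc(p: float) -> float:
    """Computational cutoff rate R0 (in bits per channel use) of a binary
    symmetric channel with crossover probability p, assuming equiprobable
    inputs: R0 = 1 - log2(1 + 2*sqrt(p*(1 - p)))."""
    return 1.0 - math.log2(1.0 + 2.0 * math.sqrt(p * (1.0 - p)))

# Example: R0 shrinks toward 0 as the channel gets noisier.
for p in (0.01, 0.05, 0.11):
    print(f"p = {p:.2f}  R0 = {cutoff_rate_bsc(p):.3f} bits/use")
```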

6,804 citations



Journal ArticleDOI
TL;DR: A class of binary recurrent codes for correcting independent errors is given which has guaranteed error-limiting properties and the results of a computer simulation indicate that these codes perform better in some situations than other codes using threshold decoding.
Abstract: A class of binary recurrent codes for correcting independent errors is given which has guaranteed error-limiting properties. These codes can be simply decoded using threshold decoding, and will recover from any decoding error caused by either an uncorrectable transmission error or a temporary malfunction of the encoder or decoder. A number of such codes are given along with a synthesis procedure. The results of a computer simulation are given which indicate that these codes perform better in some situations than other codes using threshold decoding.
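For readers unfamiliar with threshold decoding, the decision rule itself is just a majority vote over parity checks that are orthogonal on the bit being estimated. The toy sketch below applies that rule to a 3-fold repetition code as a stand-in; it is not the recurrent codes of the paper, and the function name is illustrative.

```python
def threshold_decode_first_bit(received):
    """Toy one-step threshold (majority-logic) decision on the first digit
    of a 3-fold repetition code. received = [r0, r1, r2], each digit being
    the transmitted bit XOR an error bit. The check sums s1 = r0^r1 and
    s2 = r0^r2 are orthogonal on the error in r0: both contain it, and no
    other error bit appears in more than one check."""
    r0, r1, r2 = received
    s1 = r0 ^ r1
    s2 = r0 ^ r2
    # Flip r0 only if a majority of the J = 2 orthogonal checks fail.
    return r0 ^ 1 if (s1 + s2) > 1 else r0

print(threshold_decode_first_bit([1, 0, 0]))  # error in r0 -> corrected to 0
print(threshold_decode_first_bit([0, 1, 0]))  # error in r1 only -> r0 left as 0
```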

167 citations


Journal ArticleDOI
TL;DR: The performance of systems using sequential decoding is limited by the computational and buffer capabilities of the decoder, not by the probability of making a decoding error.
Abstract: In sequential decoding, the number of computations which the decoder must perform to decode the received digits is a random variable. In this paper, we derive a Paretian lower bound to the distribution of this random variable. We show that P[C > L] \gtrsim L^{-\rho} , where C is the number of computations which the sequential decoder must perform to decode a block of \Lambda transmitted bits, and \rho is a parameter which depends on the channel and the rate of the code. Our bound is valid for all sequential decoding schemes and all discrete memoryless channels. In Section II we give an example of a special channel for which a Paretian bound can be easily derived. In Sections III and IV we treat the general channel. In Section V we relate this bound to the memory buffer requirements of real-time sequential decoders. In Section VI, we show that this bound implies that certain moments of the distribution of the computation per digit are infinite, and we determine lower bounds to the rates above which these moments diverge. In most cases, our bounds coincide with previously known upper bounds to rates above which the moments converge. We conclude that the performance of systems using sequential decoding is limited by the computational and buffer capabilities of the decoder, not by the probability of making a decoding error. We further note that our bound applies only to sequential decoding, and that, in certain special cases (Section II), algebraic decoding methods prove superior.
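The claim that certain moments of the computation are infinite follows directly from the Paretian tail; a short derivation, under the simplifying assumption that the tail bound holds with some constant A > 0 for all L at least 1, goes as follows.

```latex
% Assume  P[C > L] \ge A\,L^{-\rho}  for all  L \ge 1,  with some constant  A > 0.
\[
  E\!\left[C^{k}\right]
  \;=\; \int_{0}^{\infty} k\,L^{k-1}\,P[C > L]\,dL
  \;\ge\; A\,k \int_{1}^{\infty} L^{k-\rho-1}\,dL ,
\]
% and the right-hand integral diverges whenever  k \ge \rho,  so every moment
% of the computation of order  k \ge \rho  is infinite.
```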

165 citations



07 Feb 1967
TL;DR: By a proper selection of distance metric, it is possible to show that the average number of computations for the Wozencraft sequential decoding algorithm can be bounded independently of the code constraint length for transmission rates below a computation rate Rcomp.
Abstract: By a proper selection of distance metric, it is possible to show that the average number of computations for the Wozencraft sequential decoding algorithm can be bounded independently of the code constraint length for transmission rates below a computation rate Rcomp. The bound on the probability of decoding error is proved to be similar to the bound for Fano's algorithm. A modification of the Wozencraft algorithm is presented. Use of a multiple-threshold test (MTT) enables the decoder to adjust its operation to the noise conditions. A modified search procedure is also presented. Analytical results show that this modified algorithm is comparable with Fano's algorithm in terms of the average number of computations and the probability of error.
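The "proper selection of distance metric" mentioned above is, in sequential decoding, typically a bias-adjusted likelihood metric of the Fano type. The sketch below evaluates such a per-bit metric for a binary symmetric channel as an illustration of the general idea, not of the specific metric chosen in this report.

```python
import math

def fano_bit_metric(received_bit: int, hypothesized_bit: int,
                    p: float, rate: float) -> float:
    """Per-bit Fano-type metric for a BSC with crossover probability p,
    equiprobable inputs, and code rate `rate` (information bits per channel
    bit): log2(P(y|x)/P(y)) - R, with P(y) = 1/2. Agreements earn a small
    positive credit, disagreements pay a large penalty, which is what lets
    a sequential decoder back out of unpromising paths."""
    likelihood = (1.0 - p) if received_bit == hypothesized_bit else p
    return math.log2(likelihood / 0.5) - rate

# Example: rate-1/2 code over a BSC with p = 0.05.
print(fano_bit_metric(0, 0, p=0.05, rate=0.5))  # ~ +0.43 (agreement)
print(fano_bit_metric(0, 1, p=0.05, rate=0.5))  # ~ -3.82 (disagreement)
```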

47 citations


ReportDOI
01 Jun 1967
TL;DR: The existence of good codes in a small class of codes is proved, a class of optimal cyclic codes is constructed, and the orbit structure of PSL2(47) on minimum-weight vectors of a quadratic-residue code is determined.
Abstract: A new bound on minimum distance for a class of cyclic codes of type (3p, p) is found. For p = 29 this and a small computer search yield new 5-designs on 30 points. Automorphism groups of quadratic-residue codes are discussed, and the orbit structure of PSL2(47) on minimum-weight vectors of such a code is determined. Some graphs related to codes are discussed, and some bounds on Ramsey numbers are obtained. Some remarks on the covering radius and a decoding method are presented. The existence of good codes in a small class of codes is proved, and a class of optimal cyclic codes is constructed. Some comments on the literature are made.

39 citations


Journal ArticleDOI
TL;DR: The generator matrices of (n, 2) codes which are optimum for the BSC with sufficiently small channel error probability p are given, and it is shown that the two combinatorially equivalent (CE) classes obtained for n = 3r and n = 3r + 1 have representative generator matrices of a common form.
Abstract: The generator matrices of (n, 2) codes which are optimum for the BSC with sufficiently small channel error probability p are given. The generator matrix of an (n, 2) code can have at most three types of columns, where the useless column of all zeros has been excluded. Combinatorially equivalent (hereafter abbreviated CE) (n, 2) codes will have the same number of columns of each type and, consequently, a class of CE codes will be specified by three integers h, i, j giving the number of columns of each type. A permutation of h, i, j does not change the CE class and, furthermore, the class specified does not depend on the correspondence between h, i, j and the column types. If (n, 2) codes are placed in three groups, n = 3r - 1, n = 3r, and n = 3r + 1, then, on the basis of minimum distance, the numbers h, i, j can be found for all CE classes that correct all errors of weight r - 1 or less. A straightforward calculation can then be used to find the number of errors of weight r corrected for each of these classes. For n = 3r - 1 only one of the classes corrects the maximum number of errors of weight r, and therefore this class is optimum for the BSC with sufficiently small p. For n = 3r and n = 3r + 1, two different classes (i.e., two choices of h, i, j) correct the maximum number of errors of weight r (see Table I). Whether n = 3r or n = 3r + 1, the two CE classes obtained have representative generator matrices of a common form.
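An illustrative sketch (not from the paper) of the h, i, j bookkeeping described above: since the three nonzero codewords of an (n, 2) code with column-type counts h, i, j have weights h + i, h + j, and i + j, the CE classes can be enumerated and ranked by minimum distance directly.

```python
from itertools import combinations_with_replacement

def ce_classes(n):
    """Enumerate CE classes of binary (n, 2) codes as unordered triples
    (h, i, j) of column-type counts with h + i + j = n, returning each
    class together with its minimum distance min(h+i, h+j, i+j)."""
    classes = []
    for h, i in combinations_with_replacement(range(n + 1), 2):
        j = n - h - i
        if j < i:              # keep h <= i <= j so each class appears once
            continue
        classes.append(((h, i, j), min(h + i, h + j, i + j)))
    return classes

# For n = 7 (= 3r + 1 with r = 2), two classes attain the largest minimum distance.
classes = ce_classes(7)
best_d = max(d for _, d in classes)
print(best_d, [c for c, d in classes if d == best_d])   # 4 [(1, 3, 3), (2, 2, 3)]
```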

16 citations


Journal ArticleDOI
TL;DR: The Lincoln Experimental Terminal (LET) uses convolutional encoding and sequential decoding matched to a modulation system employing a 16-ary orthogonal alphabet and matched filter envelope detectors, followed by an ordered list of the filters containing the 7 largest outputs.
Abstract: Probabilistic codes when suitably matched to modulation-demodulation systems allow communications which realize the theoretical performance predicted by the coding theorem. Sequential decoding is a form of probabilistic coding which allows realization in practical equipment to achieve this end. The Lincoln Experimental Terminal (LET) uses convolutional encoding and sequential decoding matched to a modulation system employing a 16-ary orthogonal alphabet and matched-filter envelope detectors, followed by an ordered list of the filters containing the 7 largest outputs. This coding system, employing a constraint length of 60 bits and rates of 1 and 2 bits per orthogonal symbol, achieves operation at an energy-to-noise ratio of 6 dB per information bit on an active satellite (Gaussian) channel. The Fano decoding algorithm is employed. After a brief description of this algorithm, the realization of the LET encoder-decoder is presented. The machine, using a commercial magnetic-core memory together with about 2000 integrated-circuit elements, occupies about 20 inches of 19-inch rack space.
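The "ordered list of the filters containing the 7 largest outputs" amounts to a rank-and-truncate step on the 16 envelope-detector outputs for each received symbol; a minimal sketch of that step is shown below (the random test data and names are illustrative).

```python
import heapq
import random

def largest_filter_outputs(envelopes, k=7):
    """Return the indices of the k largest matched-filter envelope outputs,
    ordered from largest to smallest. The decoder works from this ordered
    list rather than from a single hard symbol decision."""
    return heapq.nlargest(k, range(len(envelopes)), key=lambda i: envelopes[i])

random.seed(0)
envelopes = [random.random() for _ in range(16)]   # stand-in for 16 detector outputs
print(largest_filter_outputs(envelopes))            # the 7 best filter indices, in order
```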

11 citations


Patent
15 Jun 1967

11 citations



Journal ArticleDOI
R. Heller
TL;DR: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is presented and discussed, and several new results are presented.
Abstract: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is presented and discussed. This type of decoding combines the implementation simplicity of digital decoding with the performance enhancement expected of correlation decoding techniques. Adaptive versions, with and without interleaving for fading channels, are described. An exact expression for word-error probability in white Gaussian noise is derived. The concept of a code erasure reconstruction spectrum is introduced, and several new results are presented, along with a review of applicable known ones.

Journal ArticleDOI
H. White
TL;DR: FCD operation is evaluated for short block codes on the Rayleigh-fading, Gaussian-noise, and atmospheric-noise channels; significantly lower error probabilities are obtained on the fading and atmospheric-noise channels, but only minor improvement on the Gaussian-noise channel.
Abstract: Failure-correction decoding (FCD) is a decoding method that improves the performance of error-correction block codes by using reliability estimates of the received digit decisions. The FCD logic produces F failures by erasing the F digits of a received code character that are most probably in error. Since the number F is the code's erasure-correction capability, the decoding is always possible and will yield a correct character if the failures include all errors. FCD operation is evaluated for short block codes and the Rayleigh-fading, the Gaussian-noise, and the atmospheric-noise channels. Significantly lower error probabilities, in comparison to error-correction decoding, are obtained for the Rayleigh-fading and the atmospheric-noise channels, but only minor improvement is obtained for the Gaussian-noise channel. The optimum failure-selection criterion for the fast Rayleigh-fading channel and noncoherent detection is to select as failures the F digits with the least difference between the squared-envelope amplitudes from the two matched-filter outputs. For the atmospheric-noise channel, a good failure-selection criterion is to choose the F digits with the largest noise-envelope amplitude as measured in an adjacent frequency band.
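A minimal sketch of the failure-selection step just described: erase the F least-reliable digits of the received character, where F is the code's erasure-correction capability (d_min - 1 for a distance-d_min code), and pass the result to an erasure-correcting decoder. The reliability values and names below are illustrative; for the fast Rayleigh-fading case the criterion quoted above would use the difference of the two squared envelope amplitudes.

```python
def select_failures(hard_decisions, reliabilities, d_min):
    """Mark the F = d_min - 1 least-reliable positions of a received code
    character as erasures (None), leaving the other digits as hard
    decisions. reliabilities[i] is any per-digit confidence measure, e.g.
    the difference of the two squared envelope amplitudes for binary
    noncoherent detection."""
    f = d_min - 1
    erase = set(sorted(range(len(hard_decisions)),
                       key=lambda i: reliabilities[i])[:f])
    return [None if i in erase else bit
            for i, bit in enumerate(hard_decisions)]

# Example with a distance-3 code: the two least-reliable digits are erased.
print(select_failures([1, 0, 1, 1, 0, 0, 1],
                      [0.9, 0.2, 0.8, 0.7, 0.95, 0.6, 0.85],
                      d_min=3))   # -> [1, None, 1, 1, 0, None, 1]
```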

Journal ArticleDOI
TL;DR: The simulation of representative channels is described, and the performances of forced-erasure decoding, correlation decoding, and digital decoding on these channels are determined and compared.
Abstract: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is described and evaluated. This type of decoding combines the implementation simplicity of digital decoding with the performance enhancement expected of correlation decoding techniques. Adaptive versions, with and without interleaving for fading channels, are also treated. The simulation of representative channels is described, and the performances of forced-erasure decoding, correlation decoding, and digital decoding on these channels are determined and compared.

01 Nov 1967
TL;DR: The final report summarizes research for a three-year project on generalizations of Reed-Muller codes, Reed-decoding algorithms, majority logic decoding, cyclic product codes, and the MacWilliams identity.
Abstract: The final report summarizes research for a three-year project. Abstracts for the fourteen scientific reports are given, and some new results are included on generalizations of Reed-Muller codes, Reed-decoding algorithms, majority logic decoding, cyclic product codes, and the MacWilliams identity.
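For reference (standard material, not a result of the report), the MacWilliams identity mentioned above relates the weight enumerator of a binary linear [n, k] code C to that of its dual code:

```latex
\[
  W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_{C}(x + y,\; x - y),
  \qquad
  W_{C}(x, y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} .
\]
```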

01 Jan 1967
TL;DR: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is described and evaluated; the simulation of representative channels is described, and the decoding performances on these channels are determined and compared.
Abstract: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is described and evaluated. This type of decoding combines the implementation simplicity of digital decoding with the performance enhancement expected of correlation decoding techniques. Adaptive versions, with and without interleaving for fading channels, are also treated. The simulation of representative channels is described, and the performances of forced-erasure decoding, correlation decoding, and digital decoding on these channels are determined and compared.