
Showing papers on "Sequential decoding published in 1967"


Journal ArticleDOI
TL;DR: The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.
Abstract: The probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code. For all but pathological channels the bounds are asymptotically (exponentially) tight for rates above R_{0}, the computational cutoff rate of sequential decoding. As a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length, the relative improvement increasing with rate. The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.
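
R_{0} above is the computational cutoff rate of the channel. As a concrete point of reference (the BSC example below is an illustration, not part of the paper), for a binary symmetric channel with crossover probability p it takes the closed form R_{0} = 1 - log2(1 + 2 sqrt(p(1-p))):

```python
import math

def cutoff_rate_bsc(p: float) -> float:
    """Computational cutoff rate R_0 (bits per channel use) of a binary
    symmetric channel with crossover probability p."""
    return 1.0 - math.log2(1.0 + 2.0 * math.sqrt(p * (1.0 - p)))

for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}  R_0 = {cutoff_rate_bsc(p):.3f} bits/use")
```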

6,804 citations



Journal ArticleDOI
TL;DR: A class of binary recurrent codes for correcting independent errors is given which has guaranteed error-limiting properties and the results of a computer simulation indicate that these codes perform better in some situations than other codes using threshold decoding.
Abstract: A class of binary recurrent codes for correcting independent errors, with guaranteed error-limiting properties, is given. These codes can be simply decoded using threshold decoding, and will recover from any decoding error caused by either an uncorrectable transmission error or a temporary malfunction of the encoder or decoder. A number of such codes are given along with a synthesis procedure. The results of a computer simulation are given, and they indicate that these codes perform better in some situations than other codes using threshold decoding.
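
The decoding step referred to above is one-step threshold (majority-logic) decoding: a set of parity checks orthogonal on a bit is evaluated, and the bit is flipped if a majority of the checks fail. The sketch below shows the generic bit decision on a toy repetition code; the specific recurrent codes and check sets of the paper are not reproduced here.

```python
import numpy as np

def majority_logic_decode_bit(r, bit, checks):
    """One-step threshold (majority-logic) decoding of a single bit.

    r      : received binary word (numpy array of 0/1)
    bit    : index of the bit to estimate
    checks : parity-check sets orthogonal on `bit` (each set of indices
             contains `bit`; other positions appear in at most one set)
    Returns the corrected value of r[bit]."""
    failed = sum(int(np.sum(r[list(c)]) % 2) for c in checks)
    # Flip the bit only if a majority of the orthogonal checks are violated.
    return int(r[bit]) ^ int(failed > len(checks) // 2)

# Toy example (ours, not one of the paper's codes): a single bit repeated three
# times, with two checks {0, 1} and {0, 2} orthogonal on position 0.
r = np.array([1, 0, 0])            # transmitted 000, error in position 0
print(majority_logic_decode_bit(r, 0, [{0, 1}, {0, 2}]))   # -> 0
```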

167 citations


Journal ArticleDOI
TL;DR: The performance of systems using sequential decoding is limited by the computational and buffer capabilities of the decoder, not by the probability of making a decoding error.
Abstract: In sequential decoding, the number of computations which the decoder must perform to decode the received digits is a random variable. In this paper, we derive a Paretian lower bound to the distribution of this random variable. We show that P[C > L] > L^{-\rho}, where C is the number of computations which the sequential decoder must perform to decode a block of \Lambda transmitted bits, and \rho is a parameter which depends on the channel and the rate of the code. Our bound is valid for all sequential decoding schemes and all discrete memoryless channels. In Section II we give an example of a special channel for which a Paretian bound can be easily derived. In Sections III and IV we treat the general channel. In Section V we relate this bound to the memory buffer requirements of real-time sequential decoders. In Section VI, we show that this bound implies that certain moments of the distribution of the computation per digit are infinite, and we determine lower bounds to the rates above which these moments diverge. In most cases, our bounds coincide with previously known upper bounds to rates above which the moments converge. We conclude that the performance of systems using sequential decoding is limited by the computational and buffer capabilities of the decoder, not by the probability of making a decoding error. We further note that our bound applies only to sequential decoding, and that, in certain special cases (Section II), algebraic decoding methods prove superior.
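
A quick numerical illustration of what a Paretian tail implies for the moments of the computation C. The Pareto model and the value \rho = 1.5 below are ours, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Model the per-block computation C as Pareto with tail P[C > L] = L**(-rho), L >= 1.
rho = 1.5                           # assumed value; it depends on the channel and the rate
n = 1_000_000
u = 1.0 - rng.random(n)             # uniform on (0, 1]
C = u ** (-1.0 / rho)               # inverse-CDF sampling of the Pareto tail

for k in (1, 2):
    print(f"empirical E[C^{k}]:", np.mean(C ** k))
# With rho = 1.5 the first moment is finite (rho / (rho - 1) = 3), but the second
# diverges: its empirical estimate keeps growing as the sample size increases.
```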

165 citations


07 Feb 1967
TL;DR: By a proper selection of distance metric, it is possible to show that the average number of computations for the Wozencraft sequential decoding algorithm can be bounded independently of the code constraint length for transmission rates below a computation rate Rcomp.
Abstract: By a proper selection of distance metric, it is possible to show that the average number of computations for the Wozencraft sequential decoding algorithm can be bounded independently of the code constraint length for transmission rates below a computation rate R_{comp}. The bound on the probability of decoding error is proved to be similar to the bound for Fano's algorithm. A modification of the Wozencraft algorithm is presented. Use of a multiple-threshold test (MTT) enables the decoder to adjust its operation to the noise conditions. A modified search procedure is also presented. Analytical results show that this modified algorithm is comparable with Fano's algorithm in terms of the average number of computations and the probability of error.
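
For orientation, the branch metric commonly used in sequential decoding (the Fano metric) on a BSC looks as follows. This is standard background, not the specific metric selected in the report; the crossover probability and rate below are arbitrary:

```python
import math

def fano_bit_metric(y_bit: int, x_bit: int, p: float, R: float) -> float:
    """Fano branch metric per code bit on a BSC with crossover probability p:
    mu = log2(P(y|x) / P(y)) - R, with P(y) = 1/2 for equiprobable code bits.
    Agreements earn a small credit, disagreements a large penalty, which is
    what lets the decoder back up when the current path looks improbable."""
    p_y_given_x = (1.0 - p) if y_bit == x_bit else p
    return math.log2(p_y_given_x / 0.5) - R

p, R = 0.05, 0.5
print("agree   :", round(fano_bit_metric(0, 0, p, R), 3))   # about +0.43
print("disagree:", round(fano_bit_metric(1, 0, p, R), 3))   # about -3.82
```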

47 citations


Journal ArticleDOI
TL;DR: A system based on sequential decoding and utilizing binary phase-shift keying and 8-level quantized decisions is proposed for deep-space communication, and an analysis of the required phase reference signal-to-noise ratio is included.
Abstract: A system based on sequential decoding and utilizing binary phase-shift keying and 8-level quantized decisions is proposed for deep-space communication. Theoretical analyses augmented by a program of computer simulation promise operation within 3-4 dB of the channel capacity of an infinite bandwidth additive white Gaussian noise channel. A low probability of erasure is achieved by the suggested use of occasional off-line decoding. A negligible probability of error is readily achieved. Channel coherence is examined and quadratic and decision-directed methods of achieving a phase reference are compared. Extensive symbol interleaving is suggested, and an analysis of the required phase reference signal-to-noise ratio is included.
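
A minimal sketch of the soft-decision front end described above: BPSK over AWGN with the matched-filter output quantized to 8 levels before sequential decoding. The E_b/N_0, the uniform thresholds, and all other parameter values are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

ebno_db = 2.0
ebno = 10 ** (ebno_db / 10)
sigma = 1.0 / np.sqrt(2 * ebno)            # noise std for unit-energy antipodal symbols

bits = rng.integers(0, 2, 20)
symbols = 1.0 - 2.0 * bits                 # bit 0 -> +1, bit 1 -> -1
received = symbols + sigma * rng.standard_normal(bits.size)

# 3-bit (8-level) quantizer: 7 uniform thresholds across [-0.75, +0.75].
thresholds = np.linspace(-0.75, 0.75, 7)
soft_levels = np.digitize(received, thresholds)   # integers 0..7 fed to the decoder
print(soft_levels)
```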

44 citations


Journal ArticleDOI
TL;DR: The generator matrices of (n, 2) codes which are optimum for the BSC with sufficiently small channel error probability p are given, and the two combinatorially equivalent (CE) classes obtained are shown to have representative generator matrices of the same simple form.
Abstract: The generator matrices of (n, 2) codes which are optimum for the BSC with sufficiently small channel error probability p are given. The generator matrix of an (n, 2) code can have at most three types of columns, the useless all-zero column having been excluded. Combinatorially equivalent (hereafter abbreviated CE) (n, 2) codes will have the same number of columns of each type and, consequently, a class of CE codes will be specified by three integers h, i, j giving the number of columns of each type. A permutation of h, i, j does not change the CE class and, furthermore, the class specified does not depend on the correspondence between h, i, j and the column types. If (n, 2) codes are placed in three groups, n = 3r - 1, n = 3r, and n = 3r + 1, then, on the basis of minimum distance, the numbers h, i, j can be found for all CE classes that correct all errors of weight r - 1 or less. A straightforward calculation can then be used to find the number of errors of weight r corrected for each of these classes. For n = 3r - 1 only one of the classes corrects the maximum number of errors of weight r, and therefore this class is optimum for the BSC with sufficiently small p. For n = 3r and n = 3r + 1 two different classes (i.e., two choices of h, i, j) correct the maximum number of errors of weight r (see Table I). Whether n = 3r or n = 3r + 1, the two CE classes obtained have representative generator matrices of the same simple form (denoted K and L).
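
The bookkeeping behind this classification is short enough to sketch. With h, i, j counting the three nonzero column types of the generator matrix, the three nonzero codeword weights, the minimum distance, and the guaranteed error-correcting radius follow directly. The particular split below is our example, not a claim about the optimum class:

```python
def n2_code_profile(h: int, i: int, j: int):
    """Weight profile of the (n, 2) binary code whose generator matrix has
    h columns (1,0)^T, i columns (0,1)^T and j columns (1,1)^T  (n = h+i+j).

    The three nonzero codewords are row 1, row 2 and their sum, so their
    weights follow directly from the column counts."""
    weights = (h + j, i + j, h + i)
    d_min = min(weights)
    t = (d_min - 1) // 2              # guaranteed error-correcting radius
    return weights, d_min, t

# Example with n = 7 split as (h, i, j) = (3, 2, 2), purely to illustrate the bookkeeping.
print(n2_code_profile(3, 2, 2))       # ((5, 4, 5), 4, 1)
```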

16 citations


Journal ArticleDOI
TL;DR: The Lincoln Experimental Terminal (LET) uses convolutional encoding and sequential decoding matched to a modulation system employing a 16-ary orthogonal alphabet and matched filter envelope detectors, followed by an ordered list of the filters containing the 7 largest outputs.
Abstract: Probabilistic codes when suitably matched to modulation-demodulation systems allow communications which realize the theoretical performance predicted by the coding theorem. Sequential decoding is a form of probabilistic coding which allows realization in practical equipment to achieve this end. The Lincoln Experimental Terminal (LET) uses convolutional encoding and sequential decoding matched to a modulation system employing a 16-ary orthogonal alphabet and matched filter envelope detectors, followed by an ordered list of the filters containing the 7 largest outputs. This coding system, employing a constraint length of 60 bits and rates of 1 and 2 bits per orthogonal symbol, achieves operation at an energy to noise ratio of 6 dB per information bit on an active satellite (Gaussian) channel. The Fano decoding algorithm is employed. After a brief description of this algorithm, the realization of the LET encoder-decoder is presented. The machine, built around a commercial magnetic core memory together with about 2000 integrated circuit elements, occupies about 20 inches of 19-inch rack space.
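
A sketch of the demodulator front end described above: 16 matched-filter envelope detectors, with the indices of the 7 largest outputs passed, in order, to the decoder. The Rayleigh stand-in for the envelope samples is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

envelopes = rng.rayleigh(scale=1.0, size=16)   # stand-in envelope-detector outputs
top7 = np.argsort(envelopes)[::-1][:7]         # indices of the 7 largest, in order
print(top7, np.round(envelopes[top7], 3))
```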

11 citations


Patent
19 Jul 1967

9 citations


Journal ArticleDOI
R. Heller
TL;DR: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is presented and discussed and several new results presented.
Abstract: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is presented and discussed. This type of decoding combines the implementation simplicity of digital decoding with the performance enhancement expected of correlation decoding techniques. Adaptive versions, with and without interleaving for fading channels, are described. An exact expression for word-error probability in white Gaussian noise is derived. The concept of a code erasure reconstruction spectrum is introduced and several new results presented, along with a review of applicable, known ones.

6 citations


Journal ArticleDOI
H. White
TL;DR: FCD operation is evaluated for short block codes on the Rayleigh-fading, Gaussian-noise, and atmospheric-noise channels; significantly lower error probabilities are obtained on the fading and atmospheric-noise channels, while only minor improvement is obtained on the Gaussian-noise channel.
Abstract: Failure-correction decoding (FCD) is a decoding method that improves the performance of error-correction block codes by using reliability estimates of the received digit decisions. The FCD logic produces F failures by erasing the F digits of a received code character that are most probably in error. Since the number F is the code's erasure-correction capability, the decoding is always possible and will yield a correct character if the failures include all errors. FCD operation is evaluated for short block codes and the Rayleigh-fading, the Gaussian-noise, and the atmospheric-noise channels. Significantly lower error probabilities, in comparison to error-correction decoding, are obtained for the Rayleigh-fading and the atmospheric-noise channels, but only minor improvement is obtained for the Gaussian-noise channel. The optimum failure-selection criterion for the fast Rayleigh-fading channel and noncoherent detection is to select as failures the F digits with the least difference between the squared-envelope amplitudes from the two matched-filter outputs. For the atmospheric-noise channel, a good failure-selection criterion is to choose the F digits with the largest noise-envelope amplitude as measured in an adjacent frequency band.
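
The failure-selection rule for the fast-fading, noncoherent case can be sketched directly. The data below are synthetic; only the selection rule follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

def select_failures(env0_sq: np.ndarray, env1_sq: np.ndarray, F: int) -> np.ndarray:
    """Erase the F digits whose two squared matched-filter envelopes are
    closest together, i.e. the least reliable binary decisions."""
    reliability = np.abs(env0_sq - env1_sq)
    return np.argsort(reliability)[:F]

# Synthetic squared envelopes for a 7-digit code character, F = 2 erasures.
env0_sq = rng.exponential(size=7)
env1_sq = rng.exponential(size=7)
print(select_failures(env0_sq, env1_sq, F=2))
```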

Journal ArticleDOI
TL;DR: It is shown that an improved Gilbert bound for convolutional codes can be obtained for the class of codes derivable from a good short code.
Abstract: It is shown that an improved Gilbert bound for convolutional codes can be obtained for the class of codes derivable from a good short code. Numerical results for the binary case illustrate the improvement.
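
For context, the classical Gilbert lower bound for binary block codes (the starting point that the paper's convolutional-code bound improves on) is easy to evaluate; the length and distance below are arbitrary:

```python
from math import comb, log2

def gilbert_rate_lower_bound(n: int, d: int) -> float:
    """Classical Gilbert bound: a binary code of length n and minimum
    distance d exists with rate at least (1/n) * log2(2**n / V(n, d-1)),
    where V(n, d-1) is the volume of a Hamming ball of radius d-1."""
    volume = sum(comb(n, k) for k in range(d))
    return log2(2 ** n / volume) / n

print(round(gilbert_rate_lower_bound(63, 11), 3))
```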

Journal ArticleDOI
TL;DR: The simulation of representative channels is described, and the performances of forced-erasure decoding, correlation decoding, and digital decoding on these channels are determined and compared.
Abstract: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is described and evaluated. This type of decoding combines the implementation simplicity of digital decoding with the performance enhancement expected of correlation decoding techniques. Adaptive versions, with and without interleaving for fading channels, are also treated. The simulation of representative channels is described, and the performances of forced-erasure decoding, correlation decoding, and digital decoding on these channels are determined and compared.

01 Aug 1967
TL;DR: One-half rate convolutional encoding with sequential decoding for deep space probe telemetry links with application to Pioneer missions.
Abstract: One-half rate convolutional encoding with sequential decoding for deep space probe telemetry links with application to Pioneer missions

Journal ArticleDOI
01 Sep 1967
TL;DR: This letter derives performance bounds for convolutional encoding and sequential decoding of binary antipodal signals with a noisy phase reference, quantifying the degradation from theoretical coherent performance.
Abstract: For coherent digital communication in a noisy environment, it may not be possible to provide a perfect phase reference. The consequence is a degradation from theoretical coherent performance. This letter derives the performance bounds for convolutional encoding and sequential decoding of binary antipodal signals with a noisy phase reference.
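
A rough numerical feel for the degradation mechanism: with phase error phi, a binary-antipodal correlator output scales by cos(phi), so the effective energy per bit is reduced by the factor cos(phi)^2. The Gaussian phase-error model and its standard deviation below are assumptions, not the letter's bounds:

```python
import numpy as np

rng = np.random.default_rng(4)

phi_std_deg = 15.0
phi = np.deg2rad(phi_std_deg) * rng.standard_normal(100_000)
loss = np.mean(np.cos(phi) ** 2)          # average energy degradation factor
print(f"mean degradation: {10 * np.log10(loss):.2f} dB")
```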

01 Jan 1967
TL;DR: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is described and evaluated; the simulation of representative channels is described, and the performances of the decoding methods on these channels are determined and compared.
Abstract: Forced-erasure decoding, an easily implementable technique for realizing the fullest capability of a code, is described and evaluated. This type of decoding combines the implementation simplicity of digital decoding with the performance enhancement expected of correlation decoding techniques. Adaptive versions, with and without interleaving for fading channels, are also treated. The simulation of representative channels is described, and the performances of forced-erasure decoding, correlation decoding, and digital decoding on these channels are determined and compared.

01 Mar 1967
TL;DR: In this paper, invertible convolutional transformations of binary sequences are examined from the point of view of performance, when the inverse transformation (decoding) is performed by a finite feed-forward transducer, which represents an approximation to the perfect feedback transducers.
Abstract: In this paper, invertible convolutional transformations of binary sequences are examined from the point of view of performance when the inverse transformation (decoding) is performed by a finite feed-forward transducer, which represents an approximation to the perfect feedback transducer. While this eliminates the error propagation effect, it introduces a restriction on the acceptable input sequences. The encoder-decoder system, i.e. the cascade of the direct and the inverse transducers, appears as an input-restricted noiseless channel, and a measure of performance is given by the resulting channel capacity. It is shown that as the number r of decoder stages increases, the channel capacity has an expression C \approx 1 - Ab^{r}, where the parameters b < 1 and A depend solely upon the structure of the set of resynchronizing states (RS-cluster) possessed by the given transformation.
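
The performance measure used above, the capacity of a noiseless input-restricted channel, is computed in the standard Shannon way as log2 of the largest eigenvalue of the allowed-transition (adjacency) matrix. The particular constraint below (no two consecutive 1s) is our illustration and is unrelated to the paper's RS-cluster structure:

```python
import numpy as np

# States: "last symbol was 0" and "last symbol was 1"; a 1 may not follow a 1.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
C = np.log2(max(np.linalg.eigvals(A).real))
print(round(C, 4))        # ~ 0.6942 bits per symbol
```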

Dissertation
01 Jan 1967
TL;DR: A jump-search procedure for sequential decoding systems is presented.
Abstract: A JUMP-SEARCH PROCEDURE FOR SEQUENTIAL DECODING SYSTEMS