
Showing papers on "Sequential decoding published in 1973"


Journal ArticleDOI
01 Mar 1973
TL;DR: This paper gives a tutorial exposition of the Viterbi algorithm and of how it is implemented and analyzed, and increasing use of the algorithm in a widening variety of areas is foreseen.
Abstract: The Viterbi algorithm (VA) is a recursive optimal solution to the problem of estimating the state sequence of a discrete-time finite-state Markov process observed in memoryless noise. Many problems in areas such as digital communications can be cast in this form. This paper gives a tutorial exposition of the algorithm and of how it is implemented and analyzed. Applications to date are reviewed. Increasing use of the algorithm in a widening variety of areas is foreseen.

5,995 citations
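As context for the tutorial above, the VA's recursion is short enough to sketch directly. The two-state transition and emission tables in the usage below are illustrative stand-ins, not anything from the paper.

```python
import math

def viterbi(obs, states, log_trans, log_emit, log_init):
    """Most likely state sequence for a finite-state Markov process
    observed in memoryless noise (all tables hold log-probabilities,
    so path metrics add)."""
    V = [{s: log_init[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        prev = V[-1]
        col, ptr = {}, {}
        for t in states:
            # Survivor: best predecessor state for each current state t.
            best = max(states, key=lambda s: prev[s] + log_trans[s][t])
            col[t] = prev[best] + log_trans[best][t] + log_emit[t][o]
            ptr[t] = best
        V.append(col)
        back.append(ptr)
    # Trace the survivor pointers back from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With a sticky two-state chain and noisy binary observations, the algorithm recovers the run structure of the hidden states.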


Journal ArticleDOI
01 Jul 1973
TL;DR: A variable-word-length minimum-redundant code is described that has the advantages of both the Huffman and Shannon-Fano codes in that it reduces transmission time, storage space, translation table space, and encoding and decoding times.
Abstract: A variable-word-length minimum-redundant code is described. It has the advantages of both the Huffman and Shannon-Fano codes in that it reduces transmission time, storage space, translation table space, and encoding and decoding times.

67 citations
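The abstract does not give the construction itself; as background, the Huffman half of the comparison can be sketched as follows (the symbol weights in the usage are arbitrary):

```python
import heapq

def huffman_code(freqs):
    """Build a minimum-redundancy prefix code; freqs maps symbol -> weight.
    (Plain Huffman, shown only as background for the codes compared above.)"""
    if len(freqs) == 1:
        return {next(iter(freqs)): '0'}
    # Heap entries carry a tiebreak counter so dicts are never compared.
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two lightest subtrees...
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + b for s, b in c1.items()}
        merged.update({s: '1' + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))  # ...merge and reinsert
        count += 1
    return heap[0][2]
```

Frequent symbols receive short codewords, which is what reduces transmission time; decoding walks the implied tree bit by bit.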


Journal ArticleDOI
TL;DR: The use of majority logic decoding as a pseudonoise code acquisition technique is considered and it is shown that the probability of acquiring an 8191 code in one attempt can be made nearly one at -10-dB SNR.
Abstract: This paper considers the use of majority logic decoding as a pseudonoise code acquisition technique. A bound on the probability of code acquisition is derived, and it is shown that the probability of acquiring an 8191 code in one attempt can be made nearly one at -10-dB SNR.

51 citations
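The bound itself is not reproduced in the abstract, but the effect it quantifies — a majority vote over many unreliable chip estimates yielding an acquisition probability near one — can be illustrated with a small Monte Carlo sketch. The flip probability and vote counts below are illustrative, not the paper's -10-dB operating point.

```python
import random

def majority(estimates):
    # One-step majority-logic decision over an odd number of 0/1 estimates.
    return 1 if 2 * sum(estimates) > len(estimates) else 0

def acquisition_prob(n_votes, p_flip, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that a majority vote over
    n_votes independent noisy estimates recovers the true value (here 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        est = [1 ^ (rng.random() < p_flip) for _ in range(n_votes)]
        hits += majority(est) == 1
    return hits / trials
```

Even with each individual estimate wrong 40% of the time, widening the vote drives the success probability toward one.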


Journal ArticleDOI
TL;DR: The method of sequential encoding and decoding is generalized to the case of a source with redundancy and a computational entropy analogous to the computational cutoff rate of the channel is introduced.
Abstract: The method of sequential encoding and decoding is generalized to the case of a source with redundancy. A computational entropy of the source analogous to the computational cutoff rate of the channel is introduced. A range of transmission rates is found for which the average number of decoding computations is finite.

20 citations


Journal ArticleDOI
TL;DR: It is shown that the stack algorithm introduced by Zigangirov and by Jelinek is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different.
Abstract: Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered and their properties compared. It is shown that the stack algorithm introduced by Zigangirov and by Jelinek is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter \Delta is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old; however, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.

19 citations
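The stack algorithm's best-first search over the code tree is compact enough to sketch. The metric in the usage below simply rewards agreement with a known target branch sequence; it is a stand-in for a real Fano metric computed from channel observations, not anything from the paper.

```python
import heapq

def stack_decode(metric, depth, branching=2):
    """Stack (Zigangirov/Jelinek) search: repeatedly extend the partial
    path with the best metric until one reaches the target depth."""
    # heapq is a min-heap, so metrics are stored negated.
    heap = [(-metric(()), ())]
    while heap:
        _, path = heapq.heappop(heap)      # best partial path so far
        if len(path) == depth:
            return path
        for branch in range(branching):    # extend it one level
            ext = path + (branch,)
            heapq.heappush(heap, (-metric(ext), ext))
```

Unlike the Fano algorithm, the search never retraces steps; it keeps every examined node on the stack, which is the implementation difference the paper contrasts.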


Journal ArticleDOI
TL;DR: Results are presented from an evaluation of a particular concatenation system, structurally similar to the hybrid system of Falconer, employing a Reed-Solomon outer code and an inner convolutional code; the three outer decoders are found to provide approximately the same performance.
Abstract: Straightforward implementation of a maximum likelihood decoder implies a complexity that grows algebraically with the inverse of error probability. Forney has suggested an approach, concatenation, for which error probability decreases exponentially with increasing complexity. This paper presents the results of an evaluation of a particular concatenation system, structurally similar to the hybrid system of Falconer, employing a Reed-Solomon outer code and an inner convolutional code. The inner decoder is a Viterbi decoder of constraint length less than the corresponding encoding constraint length (nonmaximum likelihood). The outer decoder assumes one of three possible forms, all employing the likelihood information developed by the inner decoder to assist in outer decoding. Error corrections and erasure fill-ins achieved by the outer decoder are fed back to the inner decoder. Performance is evaluated through computer simulation. The three outer decoders are found to provide approximately the same performance, all yielding low error probabilities at rates somewhat above R comp of sequential decoding and at signal energy to noise density ratios per information bit around 1.7 dB.

17 citations


Journal ArticleDOI
TL;DR: Certain classes of low-rate binary codes that have simple decoding algorithms can be used as underlying codes in the construction of high-rate easily decodable i -compressed codes, which have higher rates than binary codes of comparable length and number of correctable errors.
Abstract: In this paper we present a new error-control technique intended for use in 2^l -level data-transmission systems that employ Gray coding to transform a binary source sequence into the 2^l -ary transmitted sequence. The codes, which we call i -compressed codes, make use of the structure of binary codes and have the property that for some integer i , 1 \leq i \leq l , transmission errors can be corrected if the erroneously received signals lie less than 2^{i-1} levels from the corresponding correct, or nominal signal levels. The number of such errors that can be corrected is related to the error-correcting capability of the underlying binary code used in the construction. In return for this restriction on the magnitude of correctable errors in the received signal, these codes have higher rates than binary codes of comparable length (in bits) and number of correctable errors. Hence in applications where it can be assumed that the fraction of errors exceeding a certain magnitude is negligible (or at least tolerable), this technique is more efficient than the conventional practice of placing a binary encoder between the data source and modulator and a binary decoder between the demodulator and data sink. Furthermore, although the i -compressed codes are nonbinary, the decoding algorithm is that of the underlying binary code plus a small amount of additional processing; hence it is generally simpler to implement than other nonbinary decoding algorithms. It is also observed that the rate of an i -compressed code is always greater than that of the underlying binary code. Thus certain classes of low-rate binary codes that have simple decoding algorithms can be used as underlying codes in the construction of high-rate easily decodable i -compressed codes. Finally, for the case i = 1 , encoding and decoding becomes exceptionally simple and for this case it is possible to make use of "soft decisions" at the receiver to improve the performance.

17 citations
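The Gray mapping these systems rely on is the standard reflected-binary one, under which adjacent signal levels differ in exactly one bit, so a one-level error corrupts a single bit of the underlying binary stream. A minimal sketch (standard conversions, not code from the paper):

```python
def gray_encode(n):
    # Reflected-binary Gray code: adjacent integers map to words
    # differing in exactly one bit.
    return n ^ (n >> 1)

def gray_decode(g):
    # Invert by folding shifted copies back out with XOR.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```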


Journal ArticleDOI
TL;DR: A graph search procedure, based on the Fano algorithm, is used to convert machine-contaminated phonetic descriptions of speaker performance into standard orthography, and preliminary results are presented and discussed.
Abstract: Following segmentation and phonetic classification in automatic recognition of continuous speech (ARCS), it is necessary to provide methods for linguistic decoding. In this work a graph search procedure, based on the Fano algorithm, is used to convert machine-contaminated phonetic descriptions of speaker performance into standard orthography. The information utilized by the decoder consists of a syntax, a lexicon containing transcription variations for each word, and performance-based statistics from acoustic analysis. The latter contain information related to automatic segmentation and classification accuracy and certainty (anchor-point) data. A distinction is made between speaker- and machine-dependent corruption of phonetic input strings. Preliminary results are presented and discussed, together with some considerations for evaluation.

16 citations


Patent
21 Nov 1973
TL;DR: In this paper, a binary sequential decoding system was presented for decoding instructional information prerecorded on magnetic tape, such as in a cassette, in the form of a binary sequential code.
Abstract: A visual and aural apparatus for teaching certain musical instruments, including keyboard instruments and fretted instruments, having lamps contained within certain keys or frets of such instruments and programmed instructional information arranged to instruct the student and to illuminate the keys or frets to be played. The invention includes a novel binary sequential decoding system for decoding instructional information prerecorded on magnetic tape, such as in a cassette, in the form of a binary sequential code, such system being economically implemented in a preferred embodiment using logic elements and therefore adaptable to large-scale integration (LSI). The decoding system operates control circuitry having matrices and multiplexing devices to allow a very large number of lamps and other devices to be controlled from a smaller number of code words, and to minimize the number of output terminals required to control such lamps. This latter feature makes possible the implementation of the entire decoding system and substantial portions of the control circuits within a single LSI package.

15 citations


Journal ArticleDOI
TL;DR: A general decoding method for cyclic codes is presented which gives promise of substantially reducing the complexity of decoders at the cost of a modest increase in decoding time (or delay).
Abstract: A general decoding method for cyclic codes is presented which gives promise of substantially reducing the complexity of decoders at the cost of a modest increase in decoding time (or delay). Significant reductions in decoder complexity for binary cyclic finite-geometry codes are demonstrated.

12 citations
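The abstract gives no algorithmic detail, but the general flavor of low-complexity cyclic decoding can be illustrated with a Meggitt-style decoder for the small (7,4) cyclic Hamming code: one fixed syndrome pattern is checked, and cyclic shifting brings every error position to that slot in turn. This is generic background, not the method of the paper.

```python
def poly_mod(dividend, divisor):
    # Remainder of binary (GF(2)) polynomial division; bit k is the x^k term.
    dl = divisor.bit_length()
    while dividend.bit_length() >= dl:
        dividend ^= divisor << (dividend.bit_length() - dl)
    return dividend

def meggitt_decode(r, g=0b1011, n=7):
    """Meggitt-style decoder for the (7,4) cyclic Hamming code, g(x) = x^3+x+1.
    Only the syndrome of an error at x^(n-1) is tested; rotating the word
    brings each position to x^(n-1), so one comparator serves all positions."""
    target = poly_mod(1 << (n - 1), g)     # syndrome of an error at x^(n-1)
    for _ in range(n):
        if poly_mod(r, g) == target:
            r ^= 1 << (n - 1)              # correct the error at the top slot
        r = ((r << 1) | (r >> (n - 1))) & ((1 << n) - 1)  # cyclic shift
    return r
```

This trade — one syndrome comparator plus n shifts instead of a comparator per position — is the kind of complexity-for-delay exchange the paper generalizes.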



Journal ArticleDOI
TL;DR: A Pareto fit to the computation distributions for various tracking loop bandwidths and demodulator quantization (soft decisions) is presented to predict probability of error performance for typical decoder parameters (buffer size, speed factor).
Abstract: Convolutional encoding with biphase modulation is an attractive combination of modulation/coding for the Gaussian channel, but a means of tracking carrier phase must be provided. The performance of combined sequential decoding and suppressed carrier (Costas loop) tracking in the receiver is difficult to analyze but can be obtained by computer simulations. Results of such simulations are presented in the form of a Pareto fit to the computation distributions for various tracking loop bandwidths and demodulator quantization (soft decisions). These distributions are applied to predict probability of error performance for typical decoder parameters (buffer size, speed factor). The anticipated advantage of quantized demodulation is observed. However, narrow tracking bandwidths are required, with sufficient interleaving to overcome effects of correlated phase perturbations due to noise.

09 Jul 1973
TL;DR: The properties of linear cyclic codes are considered in detail and new results on construction and analysis of the properties of such codes are presented, as well as basic methods of decoding, using the algebraic properties of codes.
Abstract: The report is a Russian translation devoted to methods of construction and decoding of cyclic error-correcting codes. The properties of linear cyclic codes are considered in detail and new results on construction and analysis of the properties of such codes are presented. Basic methods of decoding, using the algebraic properties of codes, are outlined: a method based on the use of linear multistage filters and a selector; a method based on the symmetry properties of linear codes; and a method of direct and step decoding for Bose-Chaudhuri-Hocquenghem codes. Majority decoding, which has a very simple technical realization, is considered in detail. Corresponding codes and decoding schemes based on the apparatus of finite projective geometries are described. The book is intended for scientific researchers, engineers and students working in the field of digital data transmission through noisy communications channels, as well as mathematicians who are interested in the use of algebraic methods.

Journal ArticleDOI
TL;DR: A technique is presented for the algebraic decoding of block codes over a q -ary input, Q -ary output channel ( Q > q ), where Hamming distance, Lee distance, or a burst distance can be assumed.
Abstract: A technique is presented for the algebraic decoding of block codes over a q -ary input, Q -ary output channel ( Q > q ). It is assumed that an algebraic decoding algorithm is known for a simple channel such as a channel where the input alphabet is identical to the output alphabet. This decoding algorithm is then adapted for use over the actual channel. The technique can be used in conjunction with an arbitrary distance measure between input and output vectors. Thus, Hamming distance, Lee distance, or a burst distance can be assumed. Examples are presented for each of these distances.
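Of the distance measures named, Hamming and Lee distance have one-line definitions worth recalling (standard definitions, with q the alphabet size; not code from the paper):

```python
def hamming_distance(u, v):
    # Number of coordinates in which the two vectors differ.
    return sum(a != b for a, b in zip(u, v))

def lee_distance(u, v, q):
    # Lee metric over Z_q: each coordinate contributes the shorter
    # circular distance between the two symbols.
    return sum(min((a - b) % q, (b - a) % q) for a, b in zip(u, v))
```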

Book ChapterDOI
01 Jan 1973
TL;DR: In this paper, U is a finite system of distinct words in the alphabet A, 𝔄 a free semigroup over A, [U] a subsemigroup of 𝔄 generated by the set U, and λ the empty word.
Abstract: Let U be a finite system of distinct words in the alphabet A, 𝔄 a free semigroup over A, [U] a subsemigroup of 𝔄, generated by the set U, and λ the empty word. By ||X|| we shall denote the number of elements of the set X, and by |x| the length of the word x.

ReportDOI
01 Jan 1973
TL;DR: The Viterbi decoding algorithm yields minimum probability of error when applied to a memoryless channel provided that all input sequences are equally likely; in this report the algorithm is generalized to channels with finite memory, and the generalized algorithm is shown to be maximum likelihood as well.
Abstract: The Viterbi decoding algorithm yields minimum probability of error when applied to a memoryless channel provided that all input sequences are equally likely. In this report, the algorithm was generalized for application to channels with finite memory, and it was shown that the generalized algorithm is also maximum likelihood decoding. It was also shown that the generalized Viterbi algorithm on a simple memory channel performs better than the original Viterbi algorithm with the same decoding complexity. The M-state Markov model was reviewed in this report. The process of identifying the parameters of the M-state model from the coefficients A_i and A_i(n_j, n_{j+1}) of the gap model was determined to be more complicated than was anticipated. An alternative, the simple partitioned Markov model, was examined to determine the effect of the second-order statistics, namely the interdependence of the gaps, on the error burst distribution. An alternative definition of the burst was adopted to speed up this investigation. The difference or similarity between these two definitions will be determined.


Journal ArticleDOI
TL;DR: Rate \frac{3}{4} optimal type-B1 burst-error-correcting convolutional codes have been discovered and a method of decoding is described.
Abstract: Rate \frac{3}{4} optimal type-B1 burst-error-correcting convolutional codes have been discovered. Optimal codes of rate 1/n_o and \frac{2}{3} are also given. A method of decoding is described.

Journal ArticleDOI
TL;DR: It is shown that a majority-logic decoding algorithm proposed by Lin and Weldon for the product of an L -step and a one-step orthogonalizable code is incomplete when L is greater than unity, and an improvement is presented to overcome this disadvantage.
Abstract: It is shown that a majority-logic decoding algorithm proposed by Lin and Weldon for the product of an L -step and a one-step orthogonalizable code is incomplete when L is greater than unity. An improvement is presented to overcome this disadvantage in the binary case.