
Showing papers on "Sequential decoding published in 1969"


Journal ArticleDOI
TL;DR: It is shown in this paper that the iterative algorithm introduced by Berlekamp for decoding BCH codes actually provides a general solution to the problem of synthesizing the shortest linear feedback shift register capable of generating a prescribed finite sequence of digits.
Abstract: It is shown in this paper that the iterative algorithm introduced by Berlekamp for decoding BCH codes actually provides a general solution to the problem of synthesizing the shortest linear feedback shift register capable of generating a prescribed finite sequence of digits. The shift-register approach leads to a simple proof of the validity of the algorithm as well as providing additional insight into its properties. The equivalence of the decoding problem for BCH codes to a shift-register synthesis problem is demonstrated, and other applications for the algorithm are suggested.
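The shift-register synthesis view lends itself to a compact implementation. Below is a minimal sketch of the algorithm restricted to GF(2) (the paper's version works over any field); the function name and interface are illustrative, not from the paper:

```python
def berlekamp_massey(s):
    """Shortest LFSR generating binary sequence s (GF(2) only -- the
    general algorithm works over any field). Returns the connection
    polynomial coefficients [1, c1, ..., cL] and the register length L,
    where s[i] = c1*s[i-1] ^ ... ^ cL*s[i-L] for i >= L."""
    n = len(s)
    c = [1] + [0] * n            # current connection polynomial C(x)
    b = [1] + [0] * n            # copy of C(x) before the last length change
    L, m = 0, 1                  # register length; steps since last change
    for i in range(n):
        # discrepancy: does the current register predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:         # length must grow: C(x) += x^m B(x)
            t = c[:]
            for j in range(n + 1 - m):
                c[j + m] ^= b[j]
            L, b, m = i + 1 - L, t, 1
        else:                    # fix C(x) without growing the register
            for j in range(n + 1 - m):
                c[j + m] ^= b[j]
            m += 1
    return c[:L + 1], L
```

For the sequence 1,0,1,1,0,1,1,0 (each bit the XOR of the previous two) the synthesized register has length 2 with both taps active.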

2,269 citations


Journal ArticleDOI
TL;DR: A new sequential decoding algorithm is introduced that uses stack storage at the receiver that is much simpler to describe and analyze than the Fano algorithm, and is about six times faster than the latter at transmission rates equal to Rcomp.
Abstract: In this paper a new sequential decoding algorithm is introduced that uses stack storage at the receiver. It is much simpler to describe and analyze than the Fano algorithm, and is about six times faster than the latter at transmission rates equal to Rcomp, the rate below which the average number of decoding steps is bounded by a constant. Practical problems connected with implementing the stack algorithm are discussed, and a scheme is described that facilitates satisfactory performance even with limited stack storage capacity. Preliminary simulation results estimating the decoding effort and the needed stack size are presented.
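A toy version of such a stack decoder is easy to sketch. The code below is an illustration, not the paper's algorithm: it decodes a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal, a standard small example) using a heap as the stack and simple hard-decision match/mismatch scores standing in for the true Fano metric:

```python
import heapq

# Rate-1/2, constraint-length-3 convolutional encoder, generators 7 and 5
# (octal) -- an illustrative choice, not the code used in the paper.
def encode(bits):
    state, out = 0, []
    for b in bits:
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]
        state = ((b << 1) | s1) & 3
    return out

def stack_decode(received, n_steps, match=1, mismatch=-5):
    """Stack (ZJ) search: repeatedly extend the best partial path.

    A heap plays the role of the stack; the hard-decision branch metric
    (match/mismatch scores) is a stand-in for the Fano metric."""
    heap = [(0, 0, 0, ())]        # (-metric, depth, state, info bits so far)
    while heap:
        neg_m, depth, state, path = heapq.heappop(heap)
        if depth == n_steps:      # best node is a full path: done
            return list(path)
        s1, s0 = (state >> 1) & 1, state & 1
        r = received[2 * depth: 2 * depth + 2]
        for b in (0, 1):          # extend both branches, push back on the stack
            v = (b ^ s1 ^ s0, b ^ s0)
            m = sum(match if vi == ri else mismatch for vi, ri in zip(v, r))
            heapq.heappush(
                heap, (neg_m - m, depth + 1, ((b << 1) | s1) & 3, path + (b,)))
```

With a single channel error the correct path still accumulates the best metric and is the first full-length path popped from the stack.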

635 citations


Journal ArticleDOI
TL;DR: A new interpretation of the Viterbi decoding algorithm based on the state-space approach to dynamical systems is presented, in which the optimum decoder solves a generalized regulator control problem by dynamic programming techniques.
Abstract: A new interpretation of the Viterbi decoding algorithm based on the state-space approach to dynamical systems is presented. In this interpretation the optimum decoder solves a generalized regulator control problem by dynamic programming techniques.
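Viewed this way, the decoder is ordinary dynamic programming over the encoder's state space. A minimal sketch for a 4-state, rate-1/2 binary convolutional code (generators 7 and 5 octal, an illustrative choice not taken from the paper), minimizing accumulated Hamming distance:

```python
# Rate-1/2, constraint-length-3 convolutional encoder (generators 7, 5 octal).
def encode(bits):
    state, out = 0, []
    for b in bits:
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]
        state = ((b << 1) | s1) & 3
    return out

def viterbi(received, n_steps):
    """Dynamic programming over the 4-state trellis: at each stage keep,
    per state, the minimum accumulated Hamming distance (the cost-to-come
    of the regulator analogy) and a backpointer for the traceback."""
    INF = float("inf")
    cost = {0: 0}                 # start in the all-zero state
    history = []
    for t in range(n_steps):
        r = received[2 * t: 2 * t + 2]
        nxt, back = {}, {}
        for state, c in cost.items():
            s1, s0 = (state >> 1) & 1, state & 1
            for b in (0, 1):
                v = (b ^ s1 ^ s0, b ^ s0)
                nc = c + (v[0] != r[0]) + (v[1] != r[1])
                ns = ((b << 1) | s1) & 3
                if nc < nxt.get(ns, INF):
                    nxt[ns], back[ns] = nc, (state, b)
        cost = nxt
        history.append(back)
    # trace the optimal path back from the cheapest terminal state
    state = min(cost, key=cost.get)
    bits = []
    for back in reversed(history):
        state, b = back[state]
        bits.append(b)
    return bits[::-1]
```

Since this code has free distance 5, the decoder recovers the message through two well-separated channel errors.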

240 citations


Journal ArticleDOI
TL;DR: It is shown how the above algorithm can be modified slightly to produce codes with known free distance, and a comparison of probability of error with sequential decoding is made among the best known constructive codes of constraint length 36.
Abstract: A simple algorithm is presented for finding rate 1/n random-error-correcting convolutional codes. Good codes considerably longer than any now known are obtained. A discussion of a new distance measure for convolutional codes, called the free distance, is included. Free distance is particularly useful when considering decoding schemes, such as sequential decoding, which are not restricted to a fixed constraint length. It is shown how the above algorithm can be modified slightly to produce codes with known free distance. A comparison of probability of error with sequential decoding is made among the best known constructive codes of constraint length 36.
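Because free distance is defined over paths of unbounded length, it can be computed as a shortest-path problem on the encoder state graph: the minimum output weight over paths that leave the all-zero state and remerge with it. A sketch (not the paper's search procedure) for rate-1/n binary codes, defaulting to the well-known (7, 5) octal code:

```python
import heapq

def free_distance(gens=(0b111, 0b101), K=3):
    """Free distance of a rate-1/n binary convolutional code via Dijkstra:
    minimum total output weight over paths diverging from and remerging
    with the all-zero state. Default generators (7, 5 octal) are just an
    example; any generator tuple and constraint length K may be passed."""
    def step(state, b):
        reg = (b << (K - 1)) | state                        # newest bit on top
        w = sum(bin(reg & g).count("1") % 2 for g in gens)  # branch output weight
        return reg >> 1, w
    state, w = step(0, 1)          # a nonzero path must start with input 1
    heap, best = [(w, state)], {state: w}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:                 # remerged: w is the free distance
            return w
        if w > best.get(s, float("inf")):
            continue
        for b in (0, 1):
            ns, dw = step(s, b)
            if w + dw < best.get(ns, float("inf")):
                best[ns] = w + dw
                heapq.heappush(heap, (w + dw, ns))
```

For the (7, 5) code this search returns the known free distance of 5, achieved by the input sequence 1, 0, 0.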

75 citations


Journal ArticleDOI
TL;DR: An infinite tree code ensemble upper bound is derived on the moments of the computational effort connected with sequential decoding governed by the Fano algorithm, which agrees qualitatively with the lower bounds of Jacobs and Berlekamp.
Abstract: In this paper we derive an infinite tree code ensemble upper bound on the u th ( u \leq 1 ) moments of the computational effort connected with sequential decoding governed by the Fano algorithm. It is shown that the u th moment of the effort per decoded branch is bounded by a constant, provided the transmission rate R_{0} satisfies inequality (2). This result, although often conjectured, has previously been shown to hold only for positive integral values of u . For a wide class of discrete memoryless channels (including all symmetric channels), our bounds agree qualitatively with the lower bounds of Jacobs and Berlekamp [8].

65 citations


Journal ArticleDOI
TL;DR: This paper describes a technique for high-speed decoding of burst-error-correcting codes and a class of codes most suitable for this purpose; with a small amount of additional circuitry the proposed decoders are capable of decoding speeds several orders of magnitude higher than those of conventional decoders.
Abstract: This paper describes a technique for high-speed decoding of burst-error-correcting codes and a class of codes most suitable for this purpose. With a small amount of additional circuitry the decoders proposed in this paper are capable of decoding speeds several orders of magnitude higher than those of conventional decoders.

49 citations



Journal ArticleDOI
TL;DR: An algorithm for optimal decoding of convolutional codes is confirmed using an error probability upper bound.
Abstract: An algorithm for optimal decoding of convolutional codes is confirmed using an error probability upper bound.

25 citations


Journal ArticleDOI
TL;DR: An adaptive decoding technique called burst trapping is presented to correct both random and burst errors, and results indicate that the performance of such codes, when compared with interleaved block codes, offers better results at significantly lower cost.
Abstract: An adaptive decoding technique called burst trapping is presented to correct both random and burst errors. Two decoding algorithms are used, one for random errors, and the other for bursts. The former is based on a conventional correction technique; the latter utilizes an encoding procedure in which each information digit appears twice in the data stream, first unchanged, and second combined (by addition modulo 2) with a check digit of a widely separated later block. Whenever the number of errors within a code block is detected to be too large to correct with the random-error-correcting algorithm, the burst-correcting algorithm corrects these errors by recovering the information from later blocks where it appears in combination with check digits. It is shown that the scheme requires very limited guard space and has limited error propagation. Furthermore, the storage requirement is even smaller than the guard space. This is the only known coding system that has this desirable feature. Results of simulation of such codes over telephone channels indicate that their performance, when compared with interleaved block codes, is better at significantly lower cost.

25 citations


Journal ArticleDOI
TL;DR: It is shown that the hybrid technique reduces the variability of the amount of sequential decoding computation, and asymptotic results for the probabilities of error and buffer overflow as functions of the system complexity are derived.
Abstract: We consider a coding-decoding scheme which can permit reliable data communication at rates up to the capacity of a discrete memoryless channel, and which offers a reasonable trade off between performance and complexity. The new scheme embodies algebraic and sequential coding-decoding stages. Data is initially coded by an algebraic (Reed-Solomon) encoder into blocks of N symbols, each symbol represented by n binary digits. The N n-bit symbols in a block are transmitted separately and independently through N parallel subsystems, each consisting of a sequential coder, an independent discrete memoryless channel, and a sequential decoder in tandem. Those coded n-bit symbols which would require the most sequential decoding computations are treated as erasures and decoded by a Reed-Solomon decoder. We show that the hybrid technique reduces the variability of the amount of sequential decoding computation. We also derive asymptotic results for the probabilities of error and buffer overflow as functions of the system complexity.

24 citations



Journal ArticleDOI
L. Rudolph1
TL;DR: It is shown that every cyclic code over GF(p) can be decoded up to its minimum distance by a threshold decoder employing general parity checks and a single threshold element.
Abstract: It is shown that every cyclic code over GF(p) can be decoded up to its minimum distance by a threshold decoder employing general parity checks and a single threshold element. This result is obtained through the application of a general decomposition theorem for complex-valued functions defined on the space of all n -tuples with elements from the ring of integers modulo p .


Journal ArticleDOI
TL;DR: Simple tests are derived that allow easy determination of the performance on the BSC of a given binary convolutional code decoded with a modified version of the Fano algorithm.
Abstract: In the past, criteria for predicting the performance of individual codes with sequential decoding have been intuitive. In this paper, simple tests are derived that allow easy determination of the performance on the BSC (binary symmetric channel) of a given binary convolutional code decoded with a modified version of the Fano algorithm. A "distance-guaranteed computational cutoff rate," R_{dgcomp} , is defined in terms of the BSC crossover probability and the "uniform minimum distance" of the code. The latter is a measure of the minimum distance between codewords of all lengths up to and including the constraint length of the code. A bound is derived on the average number of decoding computations and is shown to be small and insensitive to constraint length if the code rate, R , satisfies the test R < R_{dgcomp} . Also, the probability of a decoding error is overbounded and the bound decreases exponentially with constraint length with exponent (R_{dgcomp} - R) . Consequently, the probability of error is small if (R_{dgcomp} - R) is large. The existence of binary convolutional codes with a uniform minimum distance which meets the Gilbert bound is demonstrated. This result is combined with the condition R < R_{dgcomp} to show the existence of codes of rate less than a rate R_{D} for which the average number of decoding computations is small. The rate R_{D} is approximately one half of the true computational cutoff rate R_{comp} on the BSC with crossover probability of 10^{-4} .

Journal ArticleDOI
TL;DR: It is shown that minimum-distance and other decoders for parity-check codes can be realized with complexity proportional to the square of block length, although at the possible expense of a large decoding time.
Abstract: Several classes of decoding rules are considered here including block decoding rules, tree decoding rules, and bounded-distance and minimum-distance decoding rules for binary parity-check codes. Under the assumption that these rules are implemented with combinational circuits and sequential machines constructed with AND gates, OR gates, INVERTERS, and binary memory cells, bounds are derived on their complexity. Complexity is measured by the number of logic elements and memory cells, and it is shown that minimum-distance and other decoders for parity-check codes can be realized with complexity proportional to the square of block length, although at the possible expense of a large decoding time. We examine tradeoffs between probability of error and complexity for the several classes of rules.

Journal ArticleDOI
TL;DR: This paper presents an alternate approach to this problem based on the direct product of matrices, easily understood by anyone familiar with matrix theory, and it yields results in a form convenient for implementation and generalization.
Abstract: Posner (1968) has recently discussed a decoding scheme for certain orthogonal and biorthogonal codes which is based on the fast Fourier transform on a finite abelian group. In this paper, we present an alternate approach to this problem based on the direct product of matrices. This approach is easily understood by anyone familiar with matrix theory, and it yields results in a form convenient for implementation and generalization.
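The direct-product structure is what makes the transform fast: since H_{2^m} = H_2 (x) H_{2^{m-1}}, an n-point transform needs only n log2 n additions. The following sketch (illustrative, not the paper's construction) shows the resulting butterfly and its use for correlation decoding of a small biorthogonal code; the function names are assumptions:

```python
def fast_hadamard(x):
    """In-place butterfly Walsh-Hadamard transform: the Kronecker
    factorization H_{2n} = H_2 (x) H_n applied stage by stage,
    n*log2(n) additions instead of n*n."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def correlate_decode(r):
    """ML decoding of a length-8 biorthogonal (first-order Reed-Muller)
    code: map bits to +/-1, transform, and pick the largest-magnitude
    correlation. Returns (row index, complemented?)."""
    t = fast_hadamard([1 - 2 * b for b in r])
    k = max(range(len(t)), key=lambda i: abs(t[i]))
    return k, t[k] < 0
```

Transforming the all-ones vector concentrates everything in the zeroth coefficient, and a single bit error still leaves the correct row with the dominant correlation.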

Journal ArticleDOI
TL;DR: A new family of convolutional character-error-correcting codes is derived; the codes are a convolutional form of the Reed-Solomon block codes and as such have nonbinary symbols, and when interleaved the single-character-error-correcting codes are shown to be more powerful than the equivalent Hagelbarger code and simpler to implement.
Abstract: We derive a new family of convolutional character-error-correcting codes which are a convolutional form of the Reed-Solomon block codes and as such have nonbinary symbols. We also derive a bound on the error correcting capabilities of these codes in which the error-correcting capability per constraint length grows approximately with the square root of the constraint length. When these codes are used on a binary channel they are effective for both random and burst error correction because a single character spans several channel digits. These codes have greater error-correcting capabilities than the Robinson-Bernstein self-orthogonal codes but are harder to decode. The single-character-error-correcting codes, when interleaved, are shown to be more powerful than the equivalent Hagelbarger code and appear to be simpler to implement. They are also slightly better than the interleaved version of Berlekamp's code. We discuss encoding and decoding algorithms and illustrate a simple decoding algorithm for some of the codes. These codes are closely related to the Bose-Chaudhuri-Hocquenghem block codes and share with them the decoding simplification for character erasures in place of errors. Any Bose-Chaudhuri-Hocquenghem decoding algorithm can be used to decode these codes.

Journal ArticleDOI
K. Levitt1, W. Kautz1
TL;DR: A cellular array is shown to be applicable for the encoding and decoding of binary error-correcting codes, and also for identifying the possibilities of tradeoffs between decoding time and equipment complexity.
Abstract: A cellular array is a logical network of identical or almost identical cells, each of which contains a small amount of logic and storage, and, except for a few buses to the edge of the array, is connected only to its immediate neighbors. The cellular approach offers special advantages for realization by the forthcoming large-scale-integrated (LSI) technology. Such arrays are shown to be applicable for the encoding and decoding of binary error-correcting codes, and also for identifying the possibilities of tradeoffs between decoding time and equipment complexity. Arrays are presented for the decoding of single errors, burst errors, and erasures; the decoding of erasures is accomplished by the equation-solution approach, and it is shown for several code families that the Gauss elimination procedure is not required.

Journal ArticleDOI
TL;DR: This correspondence shows the formal equivalence between Massey's decoding scheme called threshold decoding involving L -step orthogonalizable codes and Reed's decoding scheme originally conceived for the Muller codes.
Abstract: This correspondence shows the formal equivalence between Massey's decoding scheme called threshold decoding involving L -step orthogonalizable codes and Reed's decoding scheme originally conceived for the Muller codes. Upon examining these two decoding algorithms it is shown that each can be described in terms of a decoding logic circuit. The formal equivalence of the algorithms is proved by showing the formal equivalence of their respective decoding circuits.
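The Reed side of the equivalence is easy to illustrate. Below is a sketch of majority-logic (Reed) decoding for the (8,4) first-order Reed-Muller code, a standard textbook instance rather than anything taken from the correspondence:

```python
def encode_rm13(a):
    """(8,4) first-order Reed-Muller codeword for message (a0, a1, a2, a3):
    c(v) = a0 ^ a1*v1 ^ a2*v2 ^ a3*v3 over the 8 points v of GF(2)^3."""
    return [a[0] ^ (a[1] * (v & 1)) ^ (a[2] * ((v >> 1) & 1))
            ^ (a[3] * ((v >> 2) & 1)) for v in range(8)]

def reed_decode_rm13(r):
    """Reed's majority-logic decoder: each a_j (j >= 1) is recovered by a
    majority vote over 4 independent estimates (pairs of positions that
    differ only in coordinate j), then the linear part is stripped and a0
    is the majority of what remains. Corrects any single error."""
    a = [0, 0, 0, 0]
    for j in (1, 2, 3):
        mask = 1 << (j - 1)
        votes = [r[v] ^ r[v ^ mask] for v in range(8) if not v & mask]
        a[j] = int(sum(votes) > 2)        # majority of 4 estimates
    residual = [r[v] ^ (a[1] * (v & 1)) ^ (a[2] * ((v >> 1) & 1))
                ^ (a[3] * ((v >> 2) & 1)) for v in range(8)]
    a[0] = int(sum(residual) > 4)         # majority of 8 estimates
    return a
```

Each majority vote is exactly a threshold element acting on a set of orthogonal check sums, which is the connection the correspondence formalizes.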

Journal ArticleDOI
TL;DR: Comparison of the performance of an ordinary feedback decoder with a genie-aided feedback decoder, which never propagates errors, indicates that error propagation with uniform codes is a minor problem if the optimum orthogonalization rules are used, but that the situation is somewhat worse with nonoptimum orthogonalization.
Abstract: The problem of error propagation in uniform codes is investigated using the concept of parity-parallelogram submatrices and the threshold-decoding algorithm. A set of optimum orthogonalization rules is presented and it is shown that if these rules are incorporated into the decoder, then sufficient conditions can be found for the return of the decoder to correct operation following a decoding error. These conditions are considerably less stringent than the requirement that the channel be completely free of errors following a decoding error. However, this is not the case if the prescribed orthogonalization rules are not followed, as is demonstrated with a simple example. It is also shown that the syndrome memory required with Massey's orthogonalization procedure for definite decoding of uniform codes is the lowest possible. The results of simulation of the rate \frac{1}{4} and \frac{1}{8} uniform codes are presented, and these codes are seen to make fewer decoding errors with feedback decoding than with definite decoding. Comparison of the performance of an ordinary feedback decoder with a genie-aided feedback decoder, which never propagates errors, indicates that error propagation with uniform codes is a minor problem if the optimum orthogonalization rules are used, but that the situation is somewhat worse with nonoptimum orthogonalization.

Journal ArticleDOI
TL;DR: It is proved that every linear code of dimension k can be decoded by a threshold decoding circuit that is guaranteed to correct e errors if e \leq (d - 1)/2 where d is the minimum distance of the code.
Abstract: It is proved that every linear code of dimension k can be decoded by a threshold decoding circuit that is guaranteed to correct e errors if e \leq (d - 1)/2 where d is the minimum distance of the code. Moreover it is demonstrated that the number of levels of threshold logic is less than or equal to k by giving an algorithm for generating the decoding logic employing k levels.

Patent
08 Apr 1969
TL;DR: In this paper, a decoding network establishes a direct link from the user to control circuitry for the intended subsystem by sequentially decoding call or address signals initiated by the user and transmitted over a single pair of wires.
Abstract: A system is disclosed for selecting and controlling any one of a number of remotely-located, controllable subsystems. A decoding network establishes a direct link from the user to control circuitry for the intended subsystem by sequentially decoding call or address signals initiated by the user and transmitted over a single pair of wires. Once the link from the user to a particular control unit selected has been established, the connection remains secure until broken by the user; and subsequent control signals actuated by the user select and control the desired function of the remote subsystem.

Proceedings Article
01 Jan 1969
TL;DR: Pioneer 9 deep space probe with telemetry link operated in coded mode with sequential decoding, discussing ground and flight test data.
Abstract: Pioneer 9 deep space probe with telemetry link operated in coded mode with sequential decoding, discussing ground and flight test data

Journal ArticleDOI
W. Kautz1, K. Levitt1
TL;DR: The most noteworthy Soviet contributions have occurred in those areas that deal with codes for the noiseless channel, codes that correct asymmetric errors, decoding for cyclic codes, random-coding bounds on the amount of computation required, and various application criteria.
Abstract: Described in this report are the results of a comprehensive technical survey of all published Soviet literature in coding theory and its applications--over 400 papers and books appearing before March 1967. The purpose of this report is to draw attention to this important collection of technical results, which are not well known in the West, and to summarize the significant contributions. Particular emphasis is placed upon those results that fill gaps in the body of knowledge about coding theory and practice as familiar to non-Soviet readers. The most noteworthy Soviet contributions have occurred in those areas that deal with codes for the noiseless channel, codes that correct asymmetric errors, decoding for cyclic codes, random-coding bounds on the amount of computation required, and various application criteria--that is, when to use which code, and how well it performs. Other important but isolated results have been reported on the construction of optimal low-rate codes, bounds on nonrandom codes, linear (continuous) coding, codes for checking arithmetic operations, properties of code polynomials, linear transformations of codes, multiple-burst-correcting codes, special synchronization codes, and certain broad generalizations of the conventional coding problem. Little or no significant work has been done on pseudorandom sequences, unit-distance codes (with one exception), the application of codes to the design of redundant computers and memories, the search for good cyclic codes, and the physical realization of sequential decoding algorithms. Section II of this report is directed to the nonspecialist, and describes the status of the field of coding theory in the Soviet Union, summarizes the major technical results, and compares these with corresponding work in the West. Section III discusses in detail for the coding specialist new theoretical results, details of coding procedures, and analytical tools described in the Soviet literature. A complete bibliography is included.

01 Jan 1969
TL;DR: Viterbi decoding error probability for convolutional codes is analyzed using finite Markov chain modeling.
Abstract: Viterbi decoding error probability for convolutional codes using finite Markov chain modeling

Journal ArticleDOI
TL;DR: It is shown that the proposed decoding scheme can be applied to several BCH codes making it possible to correct many errors beyond the ones guaranteed by the known minimum distance and also the codes will be "effectively" majority decodable.
Abstract: A decoding scheme is given for some block codes for which it is known how to decode a subcode. It is shown that the proposed decoding scheme can be applied to several BCH codes making it possible to correct many errors beyond the ones guaranteed by the known minimum distance and also the codes will be "effectively" majority decodable.

01 Oct 1969
TL;DR: The upper bound on the rate of the arithmetic code is derived and a simple decoding method is presented for a general multiple error correction.
Abstract: The upper bound on the rate of the arithmetic code is derived. Comparisons to actual rates are presented. Some codes have rates very close to this bound. A simple decoding method is presented for a general multiple error correction. The time required for the decoding depends on the decoding index k. For a small decoding index, the decoding can be made much faster by using some parallel hardware.

Journal ArticleDOI
TL;DR: A general method is proposed for decoding any cyclic binary code at extremely high speed using only modulo 2 adders and threshold elements, and the decoders may be designed for maximum-likelihood decoding.
Abstract: A general method is proposed for decoding any cyclic binary code at extremely high speed using only modulo 2 adders and threshold elements, and the decoders may be designed for maximum-likelihood decoding. The number of decoding cycles is a fraction of the number of digits in the code word.

Journal ArticleDOI
TL;DR: Although the performance improvement resulting from application of these techniques to the additive white Gaussian noise channel is not significant, implementation and rate advantages make iterative sequential decoding techniques worth pursuing.
Abstract: It is shown that use of a two-stage decoding procedure consisting simply of an inner stage of block decoding and an outer stage employing a single sequential decoder does not result in an improvement in the computational overflow problem for the sequential decoder. Improvement can, however, result from use of multiple sequential decoders or use of a single sequential decoder with appropriate scrambling. Although the performance improvement resulting from application of these techniques to the additive white Gaussian noise channel is not significant, implementation and rate advantages make iterative sequential decoding techniques worth pursuing. In particular, with these techniques, a sequential decoder for a binary symmetric channel can be used regardless of the physical channel characteristics. Such a "universal" decoder is expected to be both simple and capable of high rate operation.