
Showing papers on "Sequential decoding" published in 1971


Journal ArticleDOI
TL;DR: This tutorial paper begins with an elementary presentation of the fundamental properties and structure of convolutional codes and proceeds with the development of the maximum likelihood decoder; generating function analysis then yields for arbitrary codes both the distance properties and upper bounds on the bit error probability.
Abstract: This tutorial paper begins with an elementary presentation of the fundamental properties and structure of convolutional codes and proceeds with the development of the maximum likelihood decoder. The powerful tool of generating function analysis is demonstrated to yield for arbitrary codes both the distance properties and upper bounds on the bit error probability for communication over any memoryless channel. Previous results on code ensemble average error probabilities are also derived and extended by these techniques. Finally, practical considerations concerning finite decoding memory, metric representation, and synchronization are discussed.
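The maximum likelihood decoder discussed above can be made concrete with a small sketch. The fragment below is an illustrative hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code over a binary symmetric channel; the generators (7, 5 in octal), the Hamming branch metric, and the test pattern are assumptions chosen for the example, not details taken from the paper.

```python
# Illustrative hard-decision Viterbi decoder for a rate-1/2, K = 3
# convolutional code; the (7, 5) octal generators are an assumed example.
G = [0b111, 0b101]            # generator taps
N_STATES = 4                  # state = the last K - 1 = 2 input bits

def branch(state, b):
    """Return (output bits, next state) for input bit b from 'state'."""
    reg = (b << 2) | state
    out = [bin(reg & g).count('1') & 1 for g in G]   # parity of tapped bits
    return out, reg >> 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        o, state = branch(state, b)
        out += o
    return out

def viterbi_decode(received, n_bits):
    INF = float('inf')
    metric = [0.0] + [INF] * (N_STATES - 1)          # start in the zero state
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                out, ns = branch(s, b)
                m = metric[s] + sum(x != y for x, y in zip(out, r))
                if m < new_metric[ns]:               # keep the survivor
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

info = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(info)
rx[3] ^= 1; rx[10] ^= 1                              # two isolated channel errors
print(viterbi_decode(rx, len(info)) == info)         # True for this pattern
```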

1,040 citations


Journal ArticleDOI
TL;DR: Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power limited satellite and space channels.
Abstract: Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/s constraint length 7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
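As a rough companion to the hard versus soft quantization comparison above, the sketch below quantizes BPSK outputs from an AWGN channel to one hard bit and to eight soft levels, and forms the kind of integer branch metric a soft-decision Viterbi decoder would accumulate. The thresholds, metric table, and signal-to-noise setting are illustrative assumptions, not the paper's simulation parameters.

```python
import math, random

# Sketch of 1-bit (hard) vs. 3-bit (8-level) receiver quantization for BPSK
# over AWGN; the thresholds and the integer metric are illustrative choices.

def bpsk_awgn(bits, es_n0_db):
    sigma = math.sqrt(1.0 / (2.0 * 10 ** (es_n0_db / 10.0)))
    return [(1.0 - 2.0 * b) + random.gauss(0.0, sigma) for b in bits]

def hard(y):
    return 1 if y < 0 else 0                 # single threshold at zero

def soft8(y, step=0.25):
    # Level 0 = most confident '0' (+1 sent), level 7 = most confident '1'.
    return max(0, min(7, int((1.0 - y) / step)))

def soft_branch_metric(levels, code_bits):
    # Integer metric a soft-decision Viterbi decoder would accumulate:
    # distance of each quantized level from the ideal level for that bit.
    return sum(l if b == 0 else 7 - l for l, b in zip(levels, code_bits))

code_bits = [0, 1, 1, 0]
y = bpsk_awgn(code_bits, es_n0_db=2.0)
print([hard(v) for v in y], [soft8(v) for v in y])
print(soft_branch_metric([soft8(v) for v in y], code_bits))
```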

442 citations


Journal ArticleDOI
TL;DR: The method is to define an idealized model, called the classic bursty channel, toward which most burst-correcting schemes are explicitly or implicitly aimed; to bound the best possible performance on this channel; and to exhibit classes of schemes which are asymptotically optimum.

Abstract: The purpose of this paper is to organize and clarify the work of the past decade on burst-correcting codes. Our method is, first, to define an idealized model, called the classic bursty channel, toward which most burst-correcting schemes are explicitly or implicitly aimed; next, to bound the best possible performance on this channel; and, finally, to exhibit classes of schemes which are asymptotically optimum and serve as archetypes of the burst-correcting codes actually in use. In this light we survey and categorize previous work on burst-correcting codes. Finally, we discuss qualitatively the ways in which real channels fail to satisfy the assumptions of the classic bursty channel, and the effects of such failures on the various types of burst-correcting schemes. We conclude by comparing forward error correction to the popular alternative of automatic repeat-request (ARQ).
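A standard way to aim a random-error-correcting code at a bursty channel is interleaving, which spreads a channel burst into isolated errors as seen by the code. The sketch below shows a simple row-column block interleaver doing this; the dimensions and the stand-in symbols are illustrative and are not taken from the paper.

```python
# Row-column block interleaver: code symbols are written row by row and
# transmitted column by column, so a channel burst of length <= n_rows hits
# each row (one codeword, in a real system) at most once.  The dimensions
# and stand-in symbols below are illustrative.

def interleave(symbols, n_rows, n_cols):
    assert len(symbols) == n_rows * n_cols
    return [symbols[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

def deinterleave(symbols, n_rows, n_cols):
    assert len(symbols) == n_rows * n_cols
    out = [0] * (n_rows * n_cols)
    i = 0
    for c in range(n_cols):
        for r in range(n_rows):
            out[r * n_cols + c] = symbols[i]
            i += 1
    return out

n_rows, n_cols = 4, 7
tx = list(range(n_rows * n_cols))            # stand-in code symbols
ch = interleave(tx, n_rows, n_cols)
for i in range(8, 12):                       # a burst of 4 consecutive hits
    ch[i] = -1
rx = deinterleave(ch, n_rows, n_cols)
hits_per_row = [sum(rx[r * n_cols + c] == -1 for c in range(n_cols))
                for r in range(n_rows)]
print(hits_per_row)                          # at most one hit per row
```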

295 citations


Journal ArticleDOI
TL;DR: It is shown that a significant improvement in performance with respect to other methods is achievable by the maximum likelihood decoding method, which can reduce raw error rates in the 10^{-3} to 10^{-4} range by a factor of 50 to 300.

Abstract: A digital magnetic recording system is viewed in this paper as a linear system that inherently includes a correlative level encoder. This encoder can be regarded as a linear finite-state machine like a convolutional encoder. The maximum likelihood decoding method recently devised by Viterbi to decode convolutional codes is then applied to digital magnetic recording systems. The decoding algorithm and its implementation are discussed in detail. Expressions for the decoding error probability are obtained and confirmed by computer simulations. It is shown that a significant improvement in the performance with respect to other methods is achievable by the maximum likelihood decoding method. For example, under the Gaussian noise assumption the proposed technique can reduce raw error rates in the 10^{-3} to 10^{-4} range by a factor of 50 to 300. These results indicate that the maximum likelihood decoding method gains as much as 2.5 dB in signal-to-noise ratio over the conventional bit-by-bit detection method.

175 citations


Journal ArticleDOI
TL;DR: An easily instrumented scheme is proposed for use with binary sources and the Hamming distortion metric, using tree codes to encode time-discrete memoryless sources with respect to a fidelity criterion.
Abstract: We study here the use of tree codes to encode time-discrete memoryless sources with respect to a fidelity criterion. An easily instrumented scheme is proposed for use with binary sources and the Hamming distortion metric. Results of simulation with random and convolutional codes are given.

148 citations


Journal ArticleDOI
Hisashi Kobayashi
TL;DR: An application of the maximum-likelihood decoding (MLD) algorithm, which was originally proposed by Viterbi in decoding convolutional codes, is discussed and it is shown that a substantial performance gain is attainable by this probabilistic decoding method.
Abstract: Modems for digital communication often adopt so-called correlative level coding, or partial-response signaling, which attains a desired spectral shaping by introducing controlled intersymbol interference terms. In this paper, a correlative level encoder is treated as a linear finite-state machine and an application of the maximum-likelihood decoding (MLD) algorithm, which was originally proposed by Viterbi in decoding convolutional codes, is discussed. Asymptotic expressions for the probability of decoding error are obtained for a class of correlative level coding systems, and the results are confirmed by computer simulations. It is shown that a substantial performance gain is attainable by this probabilistic decoding method.
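The finite-state-machine view can be sketched directly. The example below assumes a duobinary (1 + D) correlative level encoder with binary inputs, which is a two-state machine, and performs maximum-likelihood sequence detection on its noisy outputs; the polynomial, input alphabet, and noise values are illustrative assumptions rather than the paper's system.

```python
import itertools

# Sketch: a duobinary (1 + D) correlative level encoder viewed as a two-state
# linear finite-state machine, and ML sequence detection over its outputs.
# The 1 + D polynomial and the {0, 1} input alphabet are illustrative choices.

def duobinary(bits):
    prev, out = 0, []
    for b in bits:
        out.append(b + prev)     # three-level output 0, 1, 2
        prev = b                 # state = previous input bit
    return out

def ml_detect(received):
    # Brute-force ML over all input sequences (fine for short blocks); a
    # Viterbi recursion over the two states gives the same answer.
    best, best_cost = None, float('inf')
    for cand in itertools.product((0, 1), repeat=len(received)):
        cost = sum((r - s) ** 2 for r, s in zip(received, duobinary(cand)))
        if cost < best_cost:
            best, best_cost = list(cand), cost
    return best

bits = [1, 0, 1, 1, 0]
noisy = [s + e for s, e in zip(duobinary(bits), [0.2, -0.3, 0.1, 0.4, -0.2])]
print(ml_detect(noisy) == bits)              # True for this noise pattern
```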

141 citations


Journal ArticleDOI
H. Burton
TL;DR: The iterative algorithm for decoding binary BCH codes presented by Berlekamp and, in an alternative form, by Massey is modified to eliminate inversion.
Abstract: The iterative algorithm for decoding binary BCH codes presented by Berlekamp and, in an alternative form, by Massey is modified to eliminate inversion. Because inversion in a finite field is time-consuming and requires relatively complex circuitry, this new algorithm should be useful in practical applications of multiple-error-correcting binary BCH codes.
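For background on the algorithm being modified, the sketch below is Massey's LFSR-synthesis form of the Berlekamp-Massey iteration over GF(2), where the discrepancy is a single bit and no inversion ever arises; the paper's point is removing the inversion that appears when the same iteration is run on GF(2^m) syndromes for BCH decoding. This is illustrative background, not the modified algorithm itself.

```python
# Massey's LFSR synthesis (Berlekamp-Massey) over GF(2).  Over GF(2) the
# discrepancy is 0 or 1, so no inversion is needed; the inversionless
# modification discussed above matters for GF(2^m) syndromes.

def berlekamp_massey_gf2(s):
    C, B = [1], [1]          # current and previous connection polynomials
    L, m = 0, 1              # current LFSR length, steps since last update
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]             # discrepancy bit
        if d:
            T = C[:]
            need = len(B) + m
            if len(C) < need:
                C += [0] * (need - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b                # C(x) <- C(x) + x^m B(x)
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
                continue
        m += 1
    return L, C

# The sequence below satisfies s_n = s_{n-1} ^ s_{n-3}.
print(berlekamp_massey_gf2([1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]))
# -> (3, [1, 1, 0, 1]), i.e., L = 3 and C(x) = 1 + x + x^3
```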

109 citations


Journal ArticleDOI
TL;DR: A class of rate 1/2 nonsystematic convolutional codes with an undetected decoding error probability verified by simulation to be much smaller than for the best systematic codes of the same constraint length, and a "quick-look-in" feature that permits recovery of the information sequence from the hard-decisioned received data without decoding simply by modulo-two addition of the received sequences.
Abstract: Previous space applications of sequential decoding have all employed convolutional codes of the systematic type where the information sequence itself is used as one of the encoded sequences. This paper describes a class of rate 1/2 nonsystematic convolutional codes with the following desirable properties: 1) an undetected decoding error probability verified by simulation to be much smaller than for the best systematic codes of the same constraint length; 2) computation behavior with sequential decoding verified by simulation to be virtually identical to that of the best systematic codes; 3) a "quick-look-in" feature that permits recovery of the information sequence from the hard-decisioned received data without decoding simply by modulo-two addition of the received sequences; and 4) suitability for encoding by simple circuitry requiring less hardware than encoders for the best systematic codes of the same constraint length. Theoretical analyses are given to show 1) that with these codes the information sequence is extracted as reliably as possible without decoding for nonsystematic codes and 2) that the constraints imposed to achieve the quick-look-in feature do not significantly limit the error-correcting ability of the codes in the sense that the Gilbert bound on minimum distance can still be attained under these constraints. These codes have been adopted for use in several forthcoming space missions.
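The quick-look-in property lends itself to a short sketch. In codes of this type the two generator polynomials differ only in the D term, so the modulo-two sum of the two encoded streams is the information sequence delayed by one time unit. The generator pair below has that structure but is an illustrative choice, not necessarily one of the codes adopted for the missions mentioned above.

```python
# Quick-look-in illustration: rate-1/2 generators differing only in the D
# term, so v1(D) + v2(D) = D * u(D) and the information sequence can be read
# off the two (hard-decisioned) received streams by modulo-two addition.
# These specific short generators are an illustrative assumption.

G1 = [1, 0, 1]        # g1(D) = 1 + D^2
G2 = [1, 1, 1]        # g2(D) = 1 + D + D^2  (differs from g1 only in D)

def conv_encode(u, g):
    n = len(u) + len(g) - 1
    return [sum(g[j] & u[i - j] for j in range(len(g)) if 0 <= i - j < len(u)) & 1
            for i in range(n)]

u = [1, 0, 1, 1, 0, 0, 1]
v1, v2 = conv_encode(u, G1), conv_encode(u, G2)

quick_look = [a ^ b for a, b in zip(v1, v2)]
print(quick_look)                       # [0, 1, 0, 1, 1, 0, 0, 1, 0]: u delayed by one
print(quick_look[1:1 + len(u)] == u)    # True
```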

79 citations


Journal ArticleDOI
TL;DR: It is shown how nonsystematic Reed-Solomon (RS) codes encoded by means of the Chinese remainder theorem can be decoded using the Berlekamp algorithm.
Abstract: It is shown how nonsystematic Reed-Solomon (RS) codes encoded by means of the Chinese remainder theorem can be decoded using the Berlekamp algorithm. The Chien search and calculation of error values are not needed but are replaced by a polynomial division and added calculation in determining the syndrome. It is shown that for certain cases of low-rate RS codes, the total decoding computation may be less than the usual method used with cyclic codes. Encoding and decoding for shorter length codes is presented.
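The Chinese-remainder view of nonsystematic RS encoding can be illustrated over a small prime field: the residue of the message polynomial modulo a linear factor (x - a) is just its value at a, so the CRT image modulo the factors (x - a_i) is the vector of evaluations. The field GF(7) and the message below are illustrative assumptions, and the Berlekamp decoding step discussed in the paper is not reproduced.

```python
# Nonsystematic RS encoding through the Chinese remainder theorem: over a
# field, m(x) mod (x - a) equals m(a), so the CRT residues modulo the linear
# factors (x - a_i) are exactly the codeword of evaluations.  GF(7) and the
# message are illustrative; the paper's Berlekamp decoding is not shown.

P = 7                                    # GF(7)
alphas = [1, 2, 3, 4, 5, 6]              # the nonzero field elements

def eval_poly(msg, a):                   # m(a) by Horner's rule, mod P
    acc = 0
    for c in reversed(msg):
        acc = (acc * a + c) % P
    return acc

def poly_mod_linear(msg, a):
    # Explicit long division of m(x) by (x - a) over GF(P); returns remainder.
    rem = list(msg)
    for i in range(len(rem) - 1, 0, -1):
        rem[i - 1] = (rem[i - 1] + a * rem[i]) % P
        rem[i] = 0
    return rem[0]

msg = [3, 0, 5]                          # m(x) = 3 + 5x^2, degree < k = 3
codeword = [eval_poly(msg, a) for a in alphas]          # an RS(6, 3) codeword
print(codeword)
print(codeword == [poly_mod_linear(msg, a) for a in alphas])   # True
```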

50 citations


Journal ArticleDOI
TL;DR: This paper presents an algebraic technique for decoding binary block codes in situations where the demodulator quantizes the received signal space into Q > 2 regions, applicable in principle to any block code for which a binary decoding procedure is known.
Abstract: This paper presents an algebraic technique for decoding binary block codes in situations where the demodulator quantizes the received signal space into Q > 2 regions. The method, referred to as weighted erasure decoding (WED), is applicable in principle to any block code for which a binary decoding procedure is known.

47 citations


Journal ArticleDOI
Hisashi Kobayashi, D. Tang
TL;DR: Analytical and simulation results on the performance of the proposed decoding scheme are presented and an asymptotic expression for the decoding error rate is derived in closed form as a function of the channel signal-to-noise ratio.
Abstract: Decoding of a correlative level coding or partial-response signaling system is discussed in an algebraic framework. A correction scheme in which the quantizer output includes ambiguity levels is proposed. The implementation and algorithm of error correction are discussed in some detail. An optimum design of the quantizer based on Chow's earlier work is discussed. Both analytical and simulation results on the performance of the proposed decoding scheme are presented. An asymptotic expression for the decoding error rate is derived in closed form as a function of the channel signal-to-noise ratio. This is also compared with the conventional bit-by-bit detection method and the maximum-likelihood decoding method recently studied.

Journal ArticleDOI
TL;DR: A new method of decoding is presented that utilizes algebraic constraints across streams of convolutionally encoded information sequences to improve performance over ordinary sequential decoding and over the older hybrid scheme developed by Falconer.

Abstract: A hybrid decoding technique for symmetrical binary-input channels is described, using a bootstrap algorithm across convolutionally encoded information streams.

Journal ArticleDOI
TL;DR: The computational work and the time required to decode with reliability E at code rate R on noisy channels are defined, and bounds on the size of these measures are developed.
Abstract: The computational work and the time required to decode with reliability E at code rate R on noisy channels are defined, and bounds on the size of these measures are developed. A number of ad hoc decoding procedures are ranked on the basis of the computational work they require.

Journal ArticleDOI
TL;DR: It is shown that, for any but pathological discrete memoryless channels with noiseless feedback, there exists a variable-length convolutional code such that the reliability function of the channel can be bounded below by the channel capacity C for all transmission rates less than C.
Abstract: A variable-length, nonsystematic convolutional encoding and successive-decoding scheme is devised to establish significant improvements in the reliability functions of memoryless channels with noiseless decision feedback. It is shown that, for any but pathological discrete memoryless channels with noiseless feedback, there exists a variable-length convolutional code such that the reliability function of the channel can be bounded below by the channel capacity C for all transmission rates less than C. By employing a modified version of this scheme, it is also constructively shown that, for an additive-white-Gaussian-noise (AWGN) channel with noiseless feedback, it is possible to find a variable-length convolutional code such that the channel reliability function can be bounded below by \alpha_0 C_{\infty} for all rates less than the channel capacity C_{\infty}, where \alpha_0 = \max(1, \gamma/2) and \gamma is the maximum allowable expected-peak-to-expected-average-power ratio at the transmitter.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce shifted linear (coset) codes, which achieve a positive rate on discrete memoryless channels with alphabet size at most 5 whenever Shannon's capacity is positive, and generalized shifted linear codes, which achieve a positive rate on arbitrary discrete memoryless channels whenever Shannon's capacity is positive.
Abstract: It was proved by Ahlswede (1971) that codes whose codewords form a group or even a linear space do not achieve Shannon's capacity for discrete memoryless channels even if the decoding procedure is arbitrary. Sharper results were obtained in Part I of this paper. For practical purposes, one is interested not only in codes which allow a short encoding procedure but also an efficient decoding procedure. Linear codes, in which the codewords form a linear space and decoding is done by coset-leader decoding, have a fairly efficient decoding procedure. But in order to achieve high rates the following slight generalization turns out to be very useful: we allow the encoder to use a coset of a linear space as a set of codewords. We call these codes shifted linear codes or coset codes. They were implicitly used by Dobrushin (1963). This new code concept has all the advantages of the previous one with respect to encoding and decoding efficiency and enables us to achieve positive rate on discrete memoryless channels whenever Shannon's channel capacity is positive and the length of the alphabet is less than or equal to 5 (Theorem 311). (The result very likely also holds for all alphabets whose length is a = p^s, p prime, s a positive integer.) A disadvantage of the concepts of linear codes and of shifted linear codes is that they can be defined only for alphabets whose length is a prime power. In order to overcome this difficulty, we introduce generalized shifted linear codes. With these codes we can achieve a positive rate on arbitrary discrete memoryless channels if Shannon's capacity is positive (Theorem 321).
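The mechanics of a shifted linear (coset) code are easy to sketch over GF(2): the codeword set is a linear code translated by a fixed vector, and decoding removes the translate and then applies ordinary coset-leader (syndrome) decoding. The generator matrix, translate, and single-error channel below are arbitrary illustrative choices and say nothing about the capacity results quoted above.

```python
import itertools

# Shifted linear (coset) code over GF(2): codewords are c = uG + t for a
# fixed translate t.  Decoding subtracts t, then applies coset-leader
# (syndrome) decoding of the underlying linear code.  G, H, and t below are
# arbitrary illustrative choices.

G = [[1, 0, 0, 1, 1, 0],       # a (6, 3) binary linear code, systematic form
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]
H = [[1, 1, 0, 1, 0, 0],       # its parity-check matrix
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]
t = [1, 0, 1, 1, 0, 0]         # fixed translate defining the coset

def encode(u):
    c = [sum(u[i] & G[i][j] for i in range(3)) & 1 for j in range(6)]
    return [a ^ b for a, b in zip(c, t)]

def syndrome(v):
    return tuple(sum(H[r][j] & v[j] for j in range(6)) & 1 for r in range(3))

leaders = {}                   # syndrome -> minimum-weight error pattern
for wt in range(7):
    for pos in itertools.combinations(range(6), wt):
        e = [1 if j in pos else 0 for j in range(6)]
        leaders.setdefault(syndrome(e), e)

def decode(r):
    v = [a ^ b for a, b in zip(r, t)]      # remove the translate first
    e = leaders[syndrome(v)]
    c = [a ^ b for a, b in zip(v, e)]
    return c[:3]                           # G is systematic in the first 3 bits

u = [1, 1, 0]
r = encode(u)
r[4] ^= 1                                  # one channel error
print(decode(r) == u)                      # True: single errors are corrected
```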

Journal ArticleDOI
TL;DR: In this article, the authors describe a sequential decoding machine built at the Jet Propulsion Laboratory (JPL), which uses a 3-bit quantization of the code symbols and achieves a computation rate of MHz.
Abstract: This paper describes a sequential decoding machine built at the Jet Propulsion Laboratory (JPL), which uses a 3-bit quantization of the code symbols and achieves a computation rate of MHz. This machine is flexible and can be programmed to decode any complementary convolutional code with rates down to 1/4 and contraint lengths up to 32. In addition, metric programmability is provided for optimization of decoder performance with respect to channel parameter variations.

Journal ArticleDOI
TL;DR: An improved decoding algorithm for codes that are constructed from finite geometries is introduced, and it is shown that these codes can be orthogonalized in at most three steps and are therefore majority-logic decodable in no more than three steps.

Abstract: In this paper, an improved decoding algorithm for codes that are constructed from finite geometries is introduced. The application of this decoding algorithm to Euclidean geometry (EG) and projective geometry (PG) codes is further discussed. It is shown that these codes can be orthogonalized in at most three steps. Thus, these codes are majority-logic decodable in no more than three steps. Our results greatly reduce the decoding complexity of EG and PG codes in most cases. They should make these codes very attractive for practical use in error-control systems.
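As a small illustration of decoding by majority vote over orthogonal check sums, the sketch below applies one-step majority-logic (Reed) decoding to the first-order Reed-Muller code RM(1,3) of length 8; the EG and PG codes treated in the paper, and their up-to-three-step orthogonalization, are not reproduced here.

```python
from itertools import product

# One-step majority-logic (Reed) decoding of the first-order Reed-Muller code
# RM(1,3): each information bit is the majority of a set of orthogonal check
# sums.  This is an illustration of the principle, not the paper's EG/PG codes.

POINTS = list(product((0, 1), repeat=3))          # the 8 points of GF(2)^3

def encode(u):                                    # u = (u0, u1, u2, u3)
    return [u[0] ^ (u[1] & v[0]) ^ (u[2] & v[1]) ^ (u[3] & v[2]) for v in POINTS]

def majority(bits):
    return int(sum(bits) * 2 > len(bits))

def decode(r):
    u_hat = [0, 0, 0, 0]
    for i in range(3):                            # estimate u1, u2, u3
        checks = []
        for idx, v in enumerate(POINTS):
            if v[i] == 0:                         # pair v with v + e_i
                w = list(v); w[i] = 1
                checks.append(r[idx] ^ r[POINTS.index(tuple(w))])
        u_hat[i + 1] = majority(checks)           # 4 orthogonal check sums
    # Strip the first-order part; u0 is then the majority of what remains.
    residual = [ri ^ ci for ri, ci in zip(r, encode([0] + u_hat[1:]))]
    u_hat[0] = majority(residual)
    return u_hat

u = [1, 0, 1, 1]
r = encode(u)
r[5] ^= 1                                         # one channel error
print(decode(r) == u)                             # True: RM(1,3) corrects 1 error
```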

Journal ArticleDOI
TL;DR: The results indicate that for rates near R comp the stack algorithm offers a considerable improvement in decoder speed over the Fano algorithm, provided that fairly large storage capacity is available for use by the decoder.
Abstract: The results of a computer-simulation comparison of two sequential decoding algorithms, the conventional Fano algorithm and the new stack algorithm proposed by Zigangirov and Jelinek, are reported. The results indicate that for rates near R comp the stack algorithm offers a considerable improvement in decoder speed over the Fano algorithm, provided that fairly large storage capacity is available for use by the decoder.
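The stack algorithm keeps an ordered list of partially explored paths and always extends the one with the best Fano metric, in contrast to the Fano algorithm's back-and-forth threshold search. The sketch below runs a stack decoder for a rate-1/2, constraint-length-3 code on a binary symmetric channel; the code, crossover probability, and unbounded stack are illustrative assumptions, not the simulated configuration of the paper.

```python
import heapq, math

# Stack (Zigangirov-Jelinek) sequential decoding sketch: rate-1/2, K = 3
# convolutional code on a BSC.  The code and the crossover probability are
# illustrative; a real decoder also bounds the stack size.

G = [0b111, 0b101]
p = 0.05                                      # BSC crossover probability
R = 0.5                                       # code rate, bits per channel bit
M_HIT  = math.log2(2 * (1 - p)) - R           # Fano metric, received bit agrees
M_MISS = math.log2(2 * p) - R                 # Fano metric, received bit disagrees

def branch_bits(state, b):
    reg = (b << 2) | state
    return [bin(reg & g).count('1') & 1 for g in G], reg >> 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        o, state = branch_bits(state, b)
        out += o
    return out

def stack_decode(received, n_bits):
    # Each stack entry: (-metric, depth, state, decoded bits so far)
    stack = [(-0.0, 0, 0, [])]
    while True:
        neg_m, depth, state, bits = heapq.heappop(stack)   # best path first
        if depth == n_bits:
            return bits
        r = received[2 * depth: 2 * depth + 2]
        for b in (0, 1):
            out, ns = branch_bits(state, b)
            m = -neg_m + sum(M_HIT if x == y else M_MISS
                             for x, y in zip(out, r))
            heapq.heappush(stack, (-m, depth + 1, ns, bits + [b]))

info = [1, 0, 1, 1, 0, 0, 0, 0]               # zero tail eases termination
rx = encode(info)
rx[5] ^= 1                                    # a single channel error
print(stack_decode(rx, len(info)) == info)    # True for this pattern
```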

Journal ArticleDOI
TL;DR: In this correspondence a complete decoding algorithm for double-error-correcting binary BCH codes of length n = 2^m - 1 is introduced, based on the step-by-step decoding algorithm introduced by Prange and the decoding algorithms introduced by Meggitt, which makes use of the cyclic properties of the code.
Abstract: In this correspondence a complete decoding algorithm for double-error-correcting binary BCH codes of length n = 2^m - 1 is introduced. It corrects all patterns of one and two errors and all patterns of three errors that belong to cosets that have a coset leader of weight three. This algorithm is based on the step-by-step decoding algorithm introduced by Prange and the decoding algorithm introduced by Meggitt, which makes use of the cyclic properties of the code. A comparison between this method and previously existing ones is also given.

Journal ArticleDOI
F. Huband, F. Jelinek
TL;DR: In this paper a simple coding scheme utilizing both sequential and algebraic coding is proposed, and bounds on its performance are derived using theoretical bounds on the performance of sequential decoding.
Abstract: In this paper a simple coding scheme utilizing both sequential and algebraic coding is proposed, and bounds on its performance are derived using theoretical bounds on the performance of sequential decoding. These bounds are compared with bounds on a similar, though more complex, scheme proposed by Falconer [2]. Except for sequential rates in a range strictly above R_{comp}, the bounds on the present scheme are shown to be superior.

Journal ArticleDOI
W. Pehlert Jr.
TL;DR: In this paper a brief description of generalized burst-trapping codes is given, the design and implementation of the decoder is discussed, and the performance evaluation is described.
Abstract: An experimental error control system utilizing a generalized burst-trapping error control technique has been designed, built, and evaluated. Code parameters were chosen such that bursts as long as 1000 bits are likely to be corrected in the presence of a background random bit error rate as large as 3 \times 10^{-3} . In this paper we give a brief description of generalized burst-trapping codes, we discuss the design and implementation of the decoder, and we describe the performance evaluation.

Journal ArticleDOI
TL;DR: When these decoding algorithms are applied to known classes of maximum-distance separable linear codes, the amount of hardware required for implementation is only a small fraction of that required by existing decoding algorithms.

Abstract: In this paper, some properties of maximum-distance separable linear codes are presented. Based on these properties, a decoding algorithm for correcting random errors is established. A simpler decoding algorithm for correcting burst errors is also given. When these decoding algorithms are applied to known classes of maximum-distance separable linear codes, the amount of hardware required for implementation is only a small fraction of that required by existing decoding algorithms.

Journal ArticleDOI
TL;DR: A suboptimum decision-directed block decoder for a binary symmetric channel that makes use of past decoding decisions to update its estimate of the channel's initially unknown crossover probability is considered.
Abstract: We consider a suboptimum decision-directed block decoder for a binary symmetric channel that makes use of past decoding decisions to update its estimate of the channel's initially unknown crossover probability. The decoder has a threshold list decoding rule that uses the current estimated crossover probability. The estimate is updated by means of a stochastic approximation algorithm. It is shown to converge toward the true crossover probability with a bias that decreases exponentially with the code's block length, provided it never "runs away" toward zero after dropping below a certain critical value. The probability that this runaway phenomenon ever occurs is bounded by an expression that is exponentially decreasing in the code's block length and in the weight assigned to the initial estimate.
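The decision-directed update can be sketched with a toy setup. The example below uses a length-7 repetition code with majority-vote decoding and a Robbins-Monro step size of 1/(n+1), so the crossover estimate is driven by the empirical disagreement rate between the received word and the decoded codeword. All of these choices are illustrative assumptions; the paper's threshold list-decoding rule is not reproduced.

```python
import random

# Decision-directed estimation of an unknown BSC crossover probability via a
# stochastic-approximation update.  The repetition code, majority decoder,
# and 1/(n+1) step size are illustrative stand-ins for the paper's setup.

N = 7                         # block length
TRUE_P = 0.08                 # unknown channel crossover probability
p_hat = 0.25                  # initial estimate (acts as a small prior weight)

random.seed(1)
for n in range(1, 2001):
    info = random.randint(0, 1)
    sent = [info] * N
    recv = [b ^ (random.random() < TRUE_P) for b in sent]
    decoded = int(sum(recv) > N / 2)               # majority-vote decision
    z = sum(r != decoded for r in recv) / N        # empirical disagreement rate
    p_hat += (z - p_hat) / (n + 1)                 # Robbins-Monro step

print(round(p_hat, 3))        # settles near TRUE_P (the running average of
                              # the initial guess and all the z samples)
```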

Journal ArticleDOI
TL;DR: Curves of E_{b} / N_{0} for code rate R = R_{comp} versus 1/R are presented so that the efficiency of convolutional encoding and sequential decoding with various other modems can be compared for fixed rate codes.
Abstract: R_{comp} versus E_{b}/N_{0} is calculated for a coherent white Gaussian noise channel and biorthogonal signal sets with quantized and continuous correlator outputs. The results are compared with list-of-L detection, and it is observed that the quantized scheme is more efficient. Curves of E_{b}/N_{0} for code rate R = R_{comp} versus 1/R are presented so that the efficiency of convolutional encoding and sequential decoding with various other modems can be compared for fixed-rate codes.
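The cutoff-rate comparison can be reproduced in outline with the standard R_comp (R_0) expressions for a binary-input AWGN channel, used here as a simplified stand-in for the paper's quantized biorthogonal signal sets. The sketch finds the E_b/N_0 at which R_comp equals a rate-1/2 code's rate, for hard-quantized and for unquantized correlator outputs.

```python
import math

# Cutoff rate R_0 (R_comp) for a binary-input AWGN channel, hard-quantized vs.
# unquantized, as a simplified stand-in for the paper's biorthogonal signal
# sets.  Both formulas are the standard R_0 expressions in bits per use.

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def r0_hard(es_n0):
    p = q_func(math.sqrt(2 * es_n0))                  # BSC crossover probability
    return 1 - math.log2(1 + 2 * math.sqrt(p * (1 - p)))

def r0_soft(es_n0):
    return 1 - math.log2(1 + math.exp(-es_n0))        # unquantized outputs

def ebn0_at_rcomp(r0_func, rate=0.5):
    # Find the Eb/N0 (in dB) at which R_comp equals the code rate, by bisection.
    lo, hi = 0.01, 100.0                              # Eb/N0, linear scale
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if r0_func(rate * mid) < rate:                # Es/N0 = R * Eb/N0
            lo = mid
        else:
            hi = mid
    return 10 * math.log10(0.5 * (lo + hi))

print(ebn0_at_rcomp(r0_hard))   # roughly 4.6 dB with hard decisions
print(ebn0_at_rcomp(r0_soft))   # roughly 2.5 dB unquantized: about a 2 dB gap
```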

Journal ArticleDOI
S.Y. Tong
TL;DR: It is shown that, given a t-error-correcting BCSOC of rate (b-1)/b, a character-error-correcting convolutional self-orthogonal code (CCSOC) can be constructed for any integer k, the rate expansion factor; the CCSOC so constructed corrects t character errors and also possesses large simultaneous burst-error-correcting capabilities.

Abstract: A class of convolutional, character-error-correcting codes with limited error propagation is presented. This class of codes is derived from binary convolutional self-orthogonal codes (BCSOC). By character-error-correcting, we mean that the code is character oriented, where each character can be thought of as a string of binary or higher base symbols of fixed length or as a single nonbinary symbol of correspondingly higher base. It is shown that, given a t-error-correcting BCSOC of rate (b-1)/b, a character-error-correcting convolutional self-orthogonal code (CCSOC) of rate k(b-1)/(k(b-1)+1) can be constructed for any integer k, the rate expansion factor. The CCSOC so constructed corrects t character errors, and also possesses large simultaneous burst-error-correcting capabilities. Lower bounds on the burst-error-correcting capability for both BCSOC and CCSOC are found. Decoding consists of a mixture of majority-logic decoding and algebraic computation. The decoding algorithm seems practical if either the rate expansion factor k or the number of errors corrected t is not large. Such codes are most suitable for channels with both random and burst noise, and also effect a compromise between the cost of terminal equipment and the efficient use of channels.

Journal ArticleDOI
TL;DR: A new algorithm is given for the decoding of double-error-correcting binary b.c.h. codes that can be rather simply implemented and is particularly suitable for parallel implementation.
Abstract: A new algorithm is given for the decoding of double-error-correcting binary b.c.h. codes. It can be rather simply implemented and is particularly suitable for parallel implementation.

01 Jan 1971
TL;DR: Results reveal that the track length may be reduced to 500 information bits with small degradation in performance and that a practical bootstrap decoding configuration has a computational performance about 1.0 dB better than sequential decoding and an output bit error rate of about 2.5 \times 10^{-6} near the R_{comp} point.

Abstract: Results are presented from computer simulation studies of the hybrid pull-up bootstrap decoding algorithm, using a constraint-length-24, nonsystematic, rate-1/2 convolutional code on the symmetric channel with both binary and eight-level quantized outputs. Computational performance was used to measure the effect of several decoder parameters and determine practical operating constraints. Results reveal that the track length may be reduced to 500 information bits with small degradation in performance. The optimum number of tracks per block was found to be in the range from 7 to 11. An effective technique was devised to efficiently allocate computational effort and identify reliably decoded data sections. Long simulations indicate that a practical bootstrap decoding configuration has a computational performance about 1.0 dB better than sequential decoding and an output bit error rate of about 2.5 \times 10^{-6} near the R_{comp} point.

10 Jun 1971
TL;DR: In this article, a comparison of concatenated and sequential decoding systems and convolutional code structural properties is made. But the comparison is restricted to concatenation and decoding systems.
Abstract: A comparison of concatenated and sequential decoding systems and of convolutional code structural properties is presented.