
Showing papers on "Sequential decoding" published in 1972


Journal ArticleDOI
TL;DR: It is shown that as the signal-to-noise ratio (SNR) increases, the asymptotic behavior of these decoding algorithms cannot be improved, and computer simulations indicate that even for low SNR the performance of a correlation decoder can be approached by relatively simple decoding procedures.
Abstract: A class of decoding algorithms that utilizes channel measurement information, in addition to the conventional use of the algebraic properties of the code, is presented. The maximum number of errors that can, with high probability, be corrected is equal to one less than d, the minimum Hamming distance of the code. This two-fold increase over the error-correcting capability of a conventional binary decoder is achieved by using channel measurement (soft-decision) information to provide a measure of the relative reliability of each of the received binary digits. An upper bound on these decoding algorithms is derived, which is proportional to the probability of an error for dth-order diversity, an expression that has been evaluated for a wide range of communication channels and modulation techniques. With the aid of a lower bound on these algorithms, which is also a lower bound on a correlation (maximum-likelihood) decoder, we show, for both the Gaussian and Rayleigh fading channels, that as the signal-to-noise ratio (SNR) increases, the asymptotic behavior of these decoding algorithms cannot be improved. Computer simulations indicate that even for low SNR the performance of a correlation decoder can be approached by relatively simple decoding procedures. In addition, we study the effect on the performance of these decoding algorithms when a threshold is used to simplify the decoding process.
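
The class of algorithms described ranks the received digits by reliability and lets a conventional binary decoder try a small set of test patterns built from the least reliable positions. A minimal sketch of that idea in Python, assuming antipodal signalling and a placeholder hard-decision decoder hd_decode (any conventional decoder for the code would serve):

```python
from itertools import product

def soft_decision_decode(soft, hd_decode, d):
    """Soft-decision decoding in the spirit of the channel-measurement
    algorithms described above (a sketch, not the paper's algorithm).

    soft      : real channel outputs; sign gives the hard decision,
                magnitude the reliability (antipodal signalling).
    hd_decode : placeholder conventional binary decoder; returns a
                codeword as a list of 0/1, or None on failure.
    d         : minimum Hamming distance of the code.
    """
    hard = [0 if y >= 0 else 1 for y in soft]       # hard decisions
    # the floor(d/2) least reliable positions
    weak = sorted(range(len(soft)), key=lambda i: abs(soft[i]))[: d // 2]

    best, best_metric = None, float("-inf")
    for flips in product([0, 1], repeat=len(weak)): # all test patterns
        trial = hard[:]
        for idx, f in zip(weak, flips):
            trial[idx] ^= f
        cw = hd_decode(trial)                       # algebraic decoding
        if cw is None:
            continue
        # correlation with the soft values (maximum-likelihood metric)
        metric = sum((1 - 2 * b) * y for b, y in zip(cw, soft))
        if metric > best_metric:
            best, best_metric = cw, metric
    return best
```

The correlation metric is the same quantity a full correlation decoder would maximize, which is why a simple procedure of this kind can approach its performance.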

1,165 citations


Journal ArticleDOI
TL;DR: The analysis shows further that the "natural" choice of bias in the metric is the code rate and gives insight into why the Fano metric has proved to be the best practical choice in sequential decoding.
Abstract: It is shown that the metric proposed originally by Fano for sequential decoding is precisely the required statistic for minimum-error-probability decoding of variable-length codes. The analysis shows further that the "natural" choice of bias in the metric is the code rate and gives insight into why the Fano metric has proved to be the best practical choice in sequential decoding. The recently devised Jelinek-Zigangirov "stack algorithm" is shown to be a natural consequence of this interpretation of the Fano metric. Finally, it is shown that the elimination of the bias in the "truncated" portion of the code tree gives a slight reduction in average computation at the sacrifice of increased error probability.
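
The metric in question assigns each received branch log2 P(y|x)/P(y) minus a bias, and the paper's result is that the natural bias is the code rate R. A small worked computation for the binary symmetric channel, assuming equiprobable code symbols so that P(y) = 1/2:

```python
from math import log2

def fano_branch_metric(rx_bits, tx_bits, p, rate):
    """Fano metric of one branch on a BSC with crossover probability p:
    the sum over bits of log2(P(y|x) / P(y)) - R, with P(y) = 1/2 for
    equiprobable code symbols and R the code rate.
    """
    metric = 0.0
    for y, x in zip(rx_bits, tx_bits):
        p_y_given_x = (1 - p) if y == x else p
        metric += log2(p_y_given_x / 0.5) - rate
    return metric

# For a rate-1/2 code on a BSC with p = 0.05, an agreeing bit scores
# log2(1.9) - 0.5 ~ +0.43 and a disagreeing bit log2(0.1) - 0.5 ~ -3.82,
# so the metric drifts upward along the correct path and down elsewhere.
```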

134 citations


Journal ArticleDOI
TL;DR: A decoding algorithm is given that can correct up to the number of errors guaranteed by the product minimum distance, rather than about half that number when the iterated codes are decoded independently.
Abstract: We give a decoding algorithm for iterated codes that can correct up to the number of errors guaranteed by the product minimum distance, rather than about half that number when the iterated codes are decoded independently. This result is achieved by adapting Forney's generalized minimum distance decoding for use with iterated codes. We derive results on the simultaneous burst- and random-error-correction capability of iterated codes that improve considerably on known results.
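
Generalized minimum distance decoding makes repeated errors-and-erasures trials, erasing successively more of the least reliable symbols. The sketch below shows that trial structure for a single code, not the paper's adaptation to iterated codes; ee_decode is a placeholder errors-and-erasures decoder:

```python
def gmd_decode(soft, ee_decode, d):
    """Generalized-minimum-distance trial loop (a sketch).

    soft      : real channel outputs; sign gives the hard decision,
                magnitude the reliability.
    ee_decode : placeholder errors-and-erasures decoder; takes the
                hard-decision word and a set of erased positions and
                returns a codeword (list of 0/1) or None.
    d         : minimum distance of the code.
    """
    hard = [0 if y >= 0 else 1 for y in soft]
    by_reliability = sorted(range(len(soft)), key=lambda i: abs(soft[i]))

    candidates = []
    for j in range(0, d, 2):               # erase 0, 2, 4, ... symbols
        cw = ee_decode(hard, set(by_reliability[:j]))
        if cw is not None:
            candidates.append(cw)
    # keep the candidate best correlated with the soft values
    corr = lambda cw: sum((1 - 2 * b) * y for b, y in zip(cw, soft))
    return max(candidates, key=corr, default=None)
```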

69 citations


Journal ArticleDOI
TL;DR: An efficient bidirectional search algorithm for computing the free distance of convolutional codes is described.
Abstract: An efficient bidirectional search algorithm for computing the free distance of convolutional codes is described.
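
For a feedforward code, the free distance is the weight of the lightest path that diverges from the all-zero state and later remerges with it. The paper's algorithm searches from both ends at once; the sketch below uses a plain one-directional Dijkstra search over the state diagram instead, which is simpler though less efficient:

```python
import heapq

def free_distance(g1, g2, K):
    """Free distance of a rate-1/2 feedforward convolutional code,
    found by Dijkstra search (not the paper's bidirectional method).

    g1, g2 : generator polynomials as integers, bit j tapping the
             register position with delay K-1-j (so 0o7, 0o5 is the
             standard K = 3 code).
    K      : constraint length.
    """
    def step(state, bit):
        reg = (bit << (K - 1)) | state           # register contents
        w = (bin(reg & g1).count("1") & 1) + (bin(reg & g2).count("1") & 1)
        return reg >> 1, w                       # next state, branch weight

    start, w0 = step(0, 1)                       # the diverging branch
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w                             # first remerge with state 0
        if w > dist.get(s, float("inf")):
            continue
        for bit in (0, 1):
            t, bw = step(s, bit)
            if w + bw < dist.get(t, float("inf")):
                dist[t] = w + bw
                heapq.heappush(heap, (w + bw, t))
    return None

print(free_distance(0o7, 0o5, 3))  # the classic K = 3 code: prints 5
```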

61 citations


Journal ArticleDOI
TL;DR: In this correspondence, a decoding algorithm to decode beyond the BCH bound is introduced that gives complete minimum-distance decoding for any cyclic code.
Abstract: In this correspondence, a decoding algorithm to decode beyond the BCH bound is introduced. It gives a complete minimum distance decoding for any cyclic code. A comparison between this decoding algorithm and previously existing ones is also given.

44 citations


Journal ArticleDOI
TL;DR: In this article, the burst-b distance between two binary vectors is defined and shown to be a metric for binary-input, Q-ary output channels where errors occur in bursts.
Abstract: The burst-b distance between two binary vectors is defined and shown to be a metric. This definition is applied to a binary-input, Q-ary output channel where errors occur in bursts. A decoding algorithm is presented for such a channel that is an extension of Weldon's [2] weighted erasure decoding. Examples are presented illustrating the techniques.
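
Assuming the natural reading of the burst-b weight as the minimum number of bursts of length at most b needed to cover the nonzero positions of a vector, a greedy left-to-right scan computes it, and the burst-b distance of two vectors is then the burst-b weight of their difference:

```python
def burst_b_weight(v, b):
    """Minimum number of length-<=b bursts covering the nonzero
    positions of v; the greedy scan is optimal for interval covering.
    """
    count, i = 0, 0
    while i < len(v):
        if v[i]:
            count += 1
            i += b           # one burst covers positions i .. i+b-1
        else:
            i += 1
    return count

def burst_b_distance(x, y, b):
    """Burst-b distance of equal-length binary vectors x and y."""
    return burst_b_weight([a ^ c for a, c in zip(x, y)], b)

# Two vectors differing in positions 2, 3 and 9 are at Hamming
# distance 3 but burst-3 distance 2: one burst covers {2, 3}.
print(burst_b_distance([0]*10, [0,0,1,1,0,0,0,0,0,1], 3))  # prints 2
```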

21 citations


Journal ArticleDOI
TL;DR: In this correspondence some classes of unequal protection codes are constructed utilizing difference sets or triangles; these codes use threshold decoding.
Abstract: In this correspondence some classes of unequal protection codes are constructed utilizing difference sets or triangles. These codes use threshold decoding.

20 citations


Journal ArticleDOI
TL;DR: Two error-erasure decoding algorithms for product codes that correct all the error-erasure patterns guaranteed correctable by the minimum Hamming distance of the product code are given.
Abstract: Two error-erasure decoding algorithms for product codes that correct all the error-erasure patterns guaranteed correctable by the minimum Hamming distance of the product code are given. The first algorithm works when at least one of the component codes is majority-logic decodable. The second algorithm works for any product code. Both algorithms use the decoders of the component codes.

15 citations


Journal ArticleDOI
TL;DR: This correspondence presents a decoding procedure for finite geometry codes that requires as few decoding steps as possible and it is shown that the minimum number of steps is a logarithmic function of the dimension of the associated geometry.
Abstract: In a recent paper [1], techniques for reducing the number of majority-logic decoding steps for finite geometry codes have been proposed. However, the lower bound of [1, lemma 4] is incorrect; finite geometry codes, in general, cannot be decoded in three or fewer steps of orthogonalization, as was claimed. This correspondence presents a decoding procedure for finite geometry codes that requires as few decoding steps as possible. It is shown that the minimum number of steps is a logarithmic function of the dimension of the associated geometry.

15 citations


Journal ArticleDOI
TL;DR: Upper bounds, given as functions of the constraint length and backsearch limit, are derived on the error probability achievable by the maximum-likelihood algorithm of sequential decoding on the binary symmetric channel.
Abstract: Upper bounds are derived on the error probability that can be achieved by using the maximum-likelihood algorithm of sequential decoding for the binary symmetric channel. The bounds are functions of constraint length and backsearch limit.

Journal ArticleDOI
TL;DR: It is shown that the sequential decoding of rate one-half convolutional codes leads to a special type of infinite Markov chain in which each transition may move at most one step toward the origin state but any number of steps away from it.

Journal ArticleDOI
TL;DR: It is shown that any decoding function for a linear binary code can be realized as a weighted majority of nonorthogonal parity checks.
Abstract: It is shown that any decoding function for a linear binary code can be realized as a weighted majority of nonorthogonal parity checks. An example is given of a four-error-correcting code that is neither L-step orthogonalizable nor one-step majority decodable using nonorthogonal parity checks, and yet is one-step weighted-majority decodable using only ten nonorthogonal parity checks.
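
A one-step weighted-majority decoder forms a set of (possibly nonorthogonal) parity checks on the received word, sums the weights of the violated ones, and compares the total with a threshold. A sketch for a single bit position, with the checks, weights, and threshold supplied by the caller; the paper's result is that suitable values always exist for any decoding function, not a recipe for finding them:

```python
def weighted_majority_bit(r, checks, weights, threshold):
    """Weighted-majority estimate of the error in one position.

    r         : received binary word (list of 0/1)
    checks    : parity checks, each a list of positions whose code
                symbols sum to 0; all include the decoded position
    weights   : real weight attached to each check
    threshold : decide 'error' when the weighted sum of violated
                checks exceeds this value
    """
    s = sum(w for chk, w in zip(checks, weights)
            if sum(r[i] for i in chk) % 2 == 1)   # violated checks only
    return 1 if s > threshold else 0              # estimated error digit
```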

24 Jan 1972
TL;DR: These simulations prove that this type of computer has sufficient memory, sufficient speed, and sufficient flexibility to perform sequential decoding at useful data rates; efficient methods for ensuring a very low probability of error at any signal-to-noise ratio are also discussed.
Abstract: Extensive simulations of a sequential decoder using the Zigangirov-Jelinek algorithm have been conducted on a small, general-purpose digital computer. These simulations prove that this type of computer has sufficient memory, sufficient speed, and sufficient flexibility to perform sequential decoding at useful data rates. In the report, the memory and computational requirements of the algorithm are presented, and efficient methods for ensuring a very low probability of error at any signal-to-noise ratio (at the expense of an increase in the failure-to-decode probability) are discussed. The equations necessary to set up a decoder are given, and a number of possible computer implementations are suggested.
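
The Zigangirov-Jelinek algorithm keeps a stack of partially explored paths ordered by accumulated metric and always extends the currently best one. A minimal sketch, with hypothetical encode_branch and metric interfaces standing in for the code and channel details; a practical decoder also bounds the stack size, which is one source of the failure-to-decode events traded against error probability above:

```python
import heapq

def stack_decode(rx, encode_branch, metric, n_info):
    """Zigangirov-Jelinek stack decoding (a sketch).

    rx            : received symbols, one group per tree level
    encode_branch : (path, bit) -> branch symbols extending `path`
                    by `bit` (hypothetical encoder interface)
    metric        : (branch_symbols, rx_group) -> Fano branch metric
    n_info        : number of tree levels to decode
    """
    heap = [(0.0, ())]                      # min-heap of (-metric, path)
    while heap:
        neg_m, path = heapq.heappop(heap)   # best path on the stack
        if len(path) == n_info:
            return list(path)               # reached the end of the tree
        for bit in (0, 1):
            m = -neg_m + metric(encode_branch(path, bit), rx[len(path)])
            heapq.heappush(heap, (-m, path + (bit,)))
    return None
```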

01 Nov 1972
TL;DR: Results of parametric studies of the Viterbi decoding algorithm are summarized, and the effect of decoder block length on bit error rate is also considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
Abstract: Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
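
For comparison with the sequential decoders above, a hard-decision Viterbi decoder for a rate-1/2 feedforward code is short enough to sketch in full. This is the textbook add-compare-select recursion, not the report's hardware, and it assumes the encoder is flushed back to state 0 by tail bits included in the received sequence:

```python
def viterbi_decode(rx, g1, g2, K):
    """Hard-decision Viterbi decoding of a rate-1/2 feedforward code.

    rx     : received bit pairs, e.g. [(0, 1), (1, 1), ...]
    g1, g2 : generator polynomials as integers (the register holds
             the current input and the past K-1 inputs)
    K      : constraint length
    """
    n_states, INF = 1 << (K - 1), float("inf")

    def step(state, bit):
        reg = (bit << (K - 1)) | state
        return reg >> 1, (bin(reg & g1).count("1") & 1,
                          bin(reg & g2).count("1") & 1)

    pm = [0.0] + [INF] * (n_states - 1)     # path metrics, start in state 0
    paths = [[] for _ in range(n_states)]   # survivor input sequences
    for pair in rx:
        new_pm = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if pm[s] == INF:
                continue
            for bit in (0, 1):
                t, out = step(s, bit)       # add ...
                m = pm[s] + (out[0] != pair[0]) + (out[1] != pair[1])
                if m < new_pm[t]:           # ... compare, select
                    new_pm[t], new_paths[t] = m, paths[s] + [bit]
        pm, paths = new_pm, new_paths
    return paths[0]                         # survivor ending in state 0
```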


Journal ArticleDOI
TL;DR: This correspondence presents a slight rephrasing of the algorithm for binary codes, which results in a very simple circuit for controlling the branching process and performs all necessary tests on the validity of the resulting error-locator polynomial.
Abstract: At each step in Berlekamp's iterative algorithm for BCH codes, the decoder follows one of two possible branches. This correspondence presents a slight rephrasing of the algorithm for binary codes, which results in a very simple circuit for controlling the branching process. This circuit also performs all necessary tests on the validity of the resulting error-locator polynomial.
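
The two-branch structure is the heart of the Berlekamp-Massey iteration: at each step the decoder either leaves the connection polynomial alone (zero discrepancy) or adds a shifted copy of an earlier polynomial, growing the register length only when forced to. The sketch below runs the iteration over GF(2) for illustration; in BCH decoding the identical control structure runs over GF(2^m) on the syndromes and yields the error-locator polynomial:

```python
def berlekamp_massey_gf2(s):
    """Berlekamp-Massey over GF(2): shortest LFSR generating s.

    s : binary sequence (list of 0/1)
    returns (C, L) : connection polynomial coefficients and length
    """
    n = len(s)
    C = [1] + [0] * n                  # current connection polynomial
    B = [1] + [0] * n                  # copy from the last length change
    L, m = 0, 1
    for i in range(n):
        d = s[i]                       # discrepancy at position i
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1                     # branch 1: prediction correct
        elif 2 * L <= i:               # branch 2a: correct and grow
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:                          # branch 2b: correct, same length
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return C[: L + 1], L

print(berlekamp_massey_gf2([0, 0, 1, 1]))  # -> ([1, 1, 0, 1], 3)
```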

Journal ArticleDOI
TL;DR: The classical approach of considering intersymbol interference as a modulation problem is replaced by less conventional information decoding considerations, and the concept of viewing time-dispersive channels as natural convolutional encoders is presented.
Abstract: The classical approach of considering intersymbol interference as a modulation problem is replaced by less conventional information decoding considerations. We present the concept of viewing time-dispersive channels as natural convolutional encoders. Several channels are presented as examples and computer simulations exhibit some of the benefits.
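
The analogy is direct: a discrete-time dispersive channel computes each output as a fixed linear combination of the current and past inputs, which is exactly the structure of a (real-valued) convolutional encoder whose generator coefficients are the channel taps, so trellis decoding applies. A toy illustration:

```python
def isi_channel(x, taps):
    """A time-dispersive channel as a convolutional encoder: output n
    is sum_k taps[k] * x[n-k], i.e. an encoder with memory len(taps)-1.
    """
    return [sum(taps[k] * x[n - k]
                for k in range(len(taps)) if 0 <= n - k < len(x))
            for n in range(len(x) + len(taps) - 1)]

# A two-tap channel behaves like a two-state "encoder", so its output
# can be decoded on a two-state trellis exactly as a code would be:
print(isi_channel([1, -1, -1, 1], [1.0, 0.5]))
# -> [1.0, -0.5, -1.5, 0.5, 0.5]
```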

Journal ArticleDOI
TL;DR: The error-propagation problem encountered in decoding noncatastrophic codes with feedback decoders is analyzed; it is shown that if the decoding depth is too short, unlimited error propagation will occur, and that the depth can be chosen long enough so that unlimited error propagation does not occur.
Abstract: In this correspondence we analyze the error-propagation problem that is encountered in decoding noncatastrophic codes with feedback decoders. We show that if the decoding depth is too short, unlimited error propagation will occur and that the depth can be chosen long enough so that unlimited error propagation does not occur when minimum-distance feedback decoders are used.

Journal ArticleDOI
TL;DR: An algorithm is presented for the decoding of triple-error-correcting binary BCH codes that is particularly suitable for parallel implementation and requires no inverters.
Abstract: An algorithm is presented for the decoding of triple-error-correcting binary BCH codes. The algorithm is particularly suitable for parallel implementation and requires no inverters.

Proceedings ArticleDOI
01 Jan 1972
TL;DR: The structure of the receiver is described, design tradeoffs are discussed, and the receiver's performance is presented; experiments with actual transmitted data indicate that the receiver can achieve the predicted performance.
Abstract: The Sanguine receiver must include sophisticated design features to permit extremely reliable operation at low signal-to-noise ratios. These features include nonlinear signal processing, compensation for the effects of the ocean channel, phase tracking at low signal-to-noise ratios, and sequential decoding. The entire receiver has been implemented on a small digital computer, and extensive tests were conducted in order to optimize the design and to gather statistical data. Experiments with actual transmitted data indicate that the receiver can achieve the predicted performance. In this paper, the structure of the receiver is described, design tradeoffs are discussed, and the receiver performance is presented.

Journal ArticleDOI
TL;DR: A stochastic model is described for the decoder of an optimal burst-correcting convolutional code, and an upper bound is obtained for \bar{p}, the error probability per word after decoding.
Abstract: A stochastic model is described for the decoder of an optimal burst-correcting convolutional code. From this model, an upper bound is obtained for \bar{p}, the error probability per word after decoding.

Journal ArticleDOI
TL;DR: Optimum majority-decodable block codes with up to five information bits per block are given, and from these codes several majority- decodable convolutional codes that are "optimum" with respect to the proposed construction are obtained.
Abstract: Two convolutional-code construction schemes that utilize block codes are given. In the first method the generators of a self-orthogonal convolutional code (SOCC) are expanded. The generators of a block code whose block length is longer than that of the SOCC code replace the nonzero blocks of the convolutional code. The zero blocks are extended to the longer block length. There results a convolutional code whose blocks are self-orthogonal and which has a lower transmission rate. In the second scheme the parity constraints of an SOCC are expanded. The parity constraints of a block code replace some of the individual nonzero elements of the SOCC parity-check matrix, so that the convolutional code rate is greater than the block code rate. The resulting codes retain the SOCC advantages of simple implementation and limited error propagation. Both the encoding and the decoding can be based on the underlying block code. If a block code is majority decodable, then the resulting "hybrid" codes are majority decodable. Optimum majority-decodable block codes with up to five information bits per block are given, and from these codes several majority-decodable convolutional codes that are "optimum" with respect to the proposed construction are obtained.

Journal ArticleDOI
TL;DR: This class of codes is obtained by modifying burst-correcting convolutional codes into block codes; the codes require no cyclic shifts in the decoding process and can approximate minimum-redundancy codes.
Abstract: A class of high-speed decodable burst-correcting codes is presented. This class of codes is obtained by modifying burst-correcting convolutional codes into block codes and does not require any cyclic shifts in the decoding process. With the appropriate choices of parameters, the codes can approximate minimum-redundancy codes. The high-speed decodability is expected to make these codes suitable for application to computer systems.

Journal ArticleDOI
TL;DR: A new decoding algorithm is deduced by dividing the set of all received words into non-disjoint fuzzy sets instead of disjoint ordinary sets; the algorithm leads to quite simple decoders as regards technical complexity.
Abstract: A new decoding algorithm is deduced by dividing the set of all received words into non-disjoint fuzzy sets instead of disjoint ordinary sets. By choosing a convenient variation law for the received word's membership function, this algorithm leads to decoders of quite low technical complexity. Moreover, the algorithm allows the decoder to adapt, in a certain sense, to the channel-noise statistics. It may be used for both block and convolutional codes, but in this paper it is applied only to block codes, with the following notation: k = number of information symbols, n = word length, v_i = code word, w_j = received word.

01 Jan 1972
TL;DR: In this paper, performance versus complexity is compared for the Viterbi maximum likelihood decoder and the sequential decoder of convolutional codes for the additive white Gaussian noise memoryless channel.
Abstract: Performance versus complexity is compared for the Viterbi maximum likelihood decoder and the sequential decoder of convolutional codes for the additive white Gaussian noise memoryless channel. It is found that sequential decoders outperform Viterbi decoders at low data rates (less than 100 kbps) for the same complexity. However, Viterbi decoders are less complex than sequential decoders at high data rates (10 Mbps and greater) for the same performance.

01 Jan 1972
TL;DR: Results of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications show that concatenated coding improves a constraint-length-8, rate-1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively.
Abstract: Results are presented of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that, with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length-8, rate-1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate-1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.

Journal ArticleDOI
TL;DR: If one of the component codes is decodable using Rudolph's nonorthogonal algorithm, and the other using Reed-Massey's L-step orthogonalization, then the resulting product code is majority-logic decodable in L steps, where nonorthogonal checks are used in the first step and orthogonal checks in the remaining steps.
Abstract: In this correspondence additional results on majority-logic decoding of product codes are presented. If one of the component codes is decodable using Rudolph's nonorthogonal algorithm, and the other using Reed-Massey's L-step orthogonalization, then the resulting product code is majority-logic decodable in L steps, where nonorthogonal checks are used in the first step and orthogonal checks in the remaining steps. If the component codes are both decodable using Rudolph's algorithm, the product code is majority-logic decodable using the same procedure.

Proceedings ArticleDOI
01 Jan 1972
TL;DR: One suitable combination is the use of minimum shift keyed (MSK) modulation for bandspreading with binary convolutional encoding and antipodal channel symbol modulation, which requires a bit-energy to effective noise-density ratio of less than 2 dB and hence is exceedingly efficient.
Abstract: Modulation and coding techniques yielding signaling waveforms suitable for use with the Sanguine ELF Communication system must simultaneously satisfy a number of objectives. These objectives include: (1) providing a very low probability of incorrect message reception, (2) minimizing the required transmitter power by minimizing the required signal-to-noise ratio at a distant receiver, and (3) creating a signal that is difficult to spoof or jam. One suitable combination is the use of minimum shift keyed (MSK) modulation for bandspreading with binary convolutional encoding and antipodal channel symbol modulation. This signal structure in combination with sequential decoding at the receiver requires a bit-energy to effective noise-density ratio of less than 2 dB and hence is exceedingly efficient.