
Showing papers on "Sequential decoding published in 1976"


Journal ArticleDOI
TL;DR: A decoding rule is presented which minimizes the probability of symbol error over a time-discrete memoryless channel for any linear error-correcting code when the codewords are equiprobable.
Abstract: A decoding rule is presented which minimizes the probability of symbol error over a time-discrete memoryless channel for any linear error-correcting code when the codewords are equiprobable. The complexity of this rule varies inversely with code rate, making the technique particularly attractive for high-rate codes. Examples are given for both block and convolutional codes.

198 citations


Journal ArticleDOI
TL;DR: A simple algorithm based on continued fractions is given for correcting multiple errors in nonlinear arithmetic codes constructed by residue encoding.
Abstract: Multiple-error-correcting arithmetic codes which are nonlinear are constructed by residue encoding. A simple algorithm which makes use of continued fractions is given for correcting multiple errors.

75 citations
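The continued-fraction step in such residue-code decoders amounts to rational reconstruction: recovering a small fraction n/d from its image modulo m by running the extended Euclidean algorithm and stopping early, since the quotients it generates are exactly the continued-fraction expansion of c/m. The sketch below illustrates only that primitive, not the paper's full algorithm; the modulus, fraction, and bound are invented for the example.

```python
def rational_reconstruct(c, m, bound):
    """Find a fraction n/d with |n| <= bound and 0 < d <= bound such
    that n = d*c (mod m), via the extended Euclidean algorithm on
    (m, c), stopped as soon as the remainder drops below the bound."""
    r0, r1 = m, c
    d0, d1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        d0, d1 = d1, d0 - q * d1
    if d1 < 0:                    # normalize the sign of the denominator
        r1, d1 = -r1, -d1
    return r1, d1                 # (numerator, denominator)

# Toy example: the "message" 3/7 is known only as c = 3 * 7^(-1) mod 101.
m = 101
c = (3 * pow(7, -1, m)) % m       # c = 87
print(rational_reconstruct(c, m, 7))   # → (3, 7)
```

The invariant r = d·c (mod m) is maintained by every Euclidean step, which is why the early-stopped remainder/cofactor pair is the desired fraction.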


Journal ArticleDOI
TL;DR: The last part of a three-part series on threshold decoding of new convolutional codes concludes the list with new rate 1/2 codes and, based upon the uncorrectable error statistics of the decoder, suggests the concatenation of such codes.
Abstract: This paper is the last part of a three-part series on threshold decoding of new convolutional codes. The first part of this paper concludes the list with new rate 1/2 codes. For these codes a characteristic equation, in terms of the number of correctable errors and the code constraint length, is derived by least-squares approximation. The second part of the paper is concerned with the usefulness of the codes derived, including those in Parts I and II. Based upon the uncorrectable error statistics of the decoder, the concatenation of such codes is suggested. This class of codes does not produce additional, bursty errors at the output of the decoder when the capability of the decoder is exceeded, a property not shared by most powerful decoders such as Reed-Solomon, BCH, and Viterbi decoders.

54 citations


Journal ArticleDOI
TL;DR: An erasures-and-errors decoding algorithm for Goppa codes is presented, where a modified key equation is solved using Euclid's algorithm to determine the error locator polynomial and the errata evaluator polynomial.
Abstract: An erasures-and-errors decoding algorithm for Goppa codes is presented. Given the Goppa polynomial and the modified syndrome polynomial, a modified key equation is solved using Euclid's algorithm to determine the error locator polynomial and the errata evaluator polynomial.

51 citations


Journal ArticleDOI
TL;DR: The classical Viterbi decoder recursively finds the trellis path (code word) closest to the received data, whereas the syndrome decoder first forms a syndrome and then determines the noise sequence of minimum Hamming weight that could have caused it.
Abstract: The classical Viterbi decoder recursively finds the trellis path (code word) closest to the received data. Given the received data, the syndrome decoder first forms a syndrome instead. Having found the syndrome, which depends only on the channel noise, a recursive algorithm like Viterbi's determines the noise sequence of minimum Hamming weight that can be a possible cause of this syndrome. Given the estimate of the noise sequence, one derives an estimate of the original data sequence. The bit error probability of the syndrome decoder is no different from that of the classical Viterbi decoder. However, for short constraint length codes the syndrome decoder can be implemented using a read-only memory (ROM), thus obtaining a considerable saving in hardware. The syndrome decoder has at most 3/4 as many path registers as does the Viterbi decoder. There exist convolutional codes for which the number of path registers can be reduced even further.

36 citations
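The classical Viterbi recursion the abstract refers to can be sketched for hard decisions on the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal. The code choice is illustrative, not taken from the paper; this is the Viterbi baseline, not the syndrome decoder itself.

```python
def parity(x):
    return bin(x).count("1") & 1

G = (0b111, 0b101)  # generators 7 and 5 (octal), constraint length 3

def encode(bits):
    """Rate-1/2 convolutional encoder; state = two previous input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state          # newest input bit in the top position
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding: keep, per state, the path of
    minimum Hamming distance to the received sequence."""
    INF = float("inf")
    metric = [0, INF, INF, INF]         # start in the all-zero state
    paths = [[], [], [], []]
    for i in range(len(received) // 2):
        r = received[2 * i : 2 * i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                branch = sum(parity(reg & g) != rb for g, rb in zip(G, r))
                ns = reg >> 1
                if metric[s] + branch < new_metric[ns]:
                    new_metric[ns] = metric[s] + branch
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 0]             # two tail zeros flush the encoder
rx = encode(msg)
rx[3] ^= 1                              # one channel error
print(viterbi(rx) == msg)               # → True
```

With free distance 5, this code lets the decoder recover from isolated channel errors, as the single-flip example shows.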


Journal ArticleDOI
TL;DR: Soft decision by successive erasures for binary transmission has properties not present in the nonbinary case, and the procedure is asymptotically optimum for increasing SNRs.
Abstract: This paper deals with soft decision as a means to bridge the gap in performance between a receiver using hard decision symbol estimation followed by an algebraic decoder and a maximum-likelihood receiver. A measure of the reliability of the code symbol estimates is introduced to facilitate the decoding process. The decoding operation studied erases the least reliable received symbols and then applies an algorithm capable of correcting errors and erasures. This procedure, termed successive-erasure decoding (SED), was introduced by G. D. Forney in connection with general minimum-distance decoding (GMD). It is studied for binary and nonbinary transmission using polyphase signals on the additive white Gaussian noise (AWGN) channel. The exponential behavior of the error probability at high signal-to-noise ratio (SNR) is calculated and is supplemented by computer simulations. The results indicate that soft decision by successive erasures for binary transmission has properties not present in the nonbinary case. In the binary case the procedure is asymptotically optimum for increasing SNRs. On the nonbinary channel, however, the procedure is only capable of bridging part of the gap in performance between maximum-likelihood decoding (MLD) and hard decision decoding (HDD).

34 citations
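A minimal illustration of the successive-erasure idea, using the (7,4) Hamming code with a brute-force errors-and-erasures decoder standing in for the algebraic decoder, and a soft-correlation rule (in the spirit of Forney's generalized-distance criterion) to choose among the candidates from different erasure counts. The code choice and received values are invented for the example.

```python
from itertools import product

# Codebook of the (7,4) Hamming code (minimum distance d = 3).
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
CODE = [[sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
        for m in product((0, 1), repeat=4)]

def ee_decode(hard, erased, d=3):
    """Errors-and-erasures decoder: with e erasures it may correct t
    errors as long as 2t + e < d.  Brute force over the codebook stands
    in for the algebraic decoder assumed in the paper."""
    t = (d - 1 - len(erased)) // 2
    for c in CODE:
        errs = sum(c[j] != hard[j] for j in range(7) if j not in erased)
        if errs <= t:
            return c
    return None

def sed(soft):
    """Successive-erasure decoding: erase the least reliable symbols,
    two more per attempt, then pick the candidate codeword with the
    best soft correlation to the received word."""
    hard = [1 if y < 0 else 0 for y in soft]              # bit 0 sent as +1
    order = sorted(range(7), key=lambda j: abs(soft[j]))  # least reliable first
    cands = [c for n_erase in (0, 2)
             if (c := ee_decode(hard, set(order[:n_erase]))) is not None]
    if not cands:
        return hard                                       # decoding failure
    return max(cands, key=lambda c: sum(y * (1 - 2 * cj)
                                        for y, cj in zip(soft, c)))

# All-zero codeword sent; the two flipped symbols arrive with low reliability,
# so hard-decision decoding picks a wrong codeword but the two-erasure
# attempt recovers the transmitted one.
soft = [-0.1, 0.9, 0.9, -0.1, 0.9, 0.9, 0.9]
print(sed(soft))   # → [0, 0, 0, 0, 0, 0, 0]
```

Hard decisions alone contain two errors, beyond the single-error capability of d = 3, which is exactly the gap the erasure attempts bridge.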


Patent
01 Nov 1976
TL;DR: Improved three-level to binary decoding was proposed in this paper for use in decoding a three-level digital data signal received from a transmission line in a serial digital data transmission channel, where uniform loading and controlled timing of the decoding are provided.
Abstract: Improved three-level to binary decoding means for use in decoding a three-level digital data signal received from a transmission line in a serial digital data transmission channel. Uniform loading and controlled timing of the decoding are provided.

33 citations



Journal ArticleDOI
TL;DR: The relationship between the column distance function and the computational effort of sequential decoding is studied and the results of computer simulations are reported.
Abstract: The relationship between the column distance function and the computational effort of sequential decoding is studied and the results of computer simulations are reported. A table of R = 1/2 codes having good free distance and optimum average column distance function (CDF) is presented.

29 citations
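The column distance function studied here is d_c(i): the minimum Hamming weight of the first i+1 output blocks over all code sequences whose first information bit is nonzero. A brute-force sketch for the illustrative (7, 5) rate-1/2 code (not a code from the paper's table):

```python
from itertools import product

def parity(x):
    return bin(x).count("1") & 1

G = (0b111, 0b101)   # rate R = 1/2 code with generators (7, 5) in octal

def cdf(depth):
    """Column distance function d_c(0), ..., d_c(depth-1): minimum
    Hamming weight of the first i+1 output blocks over all information
    sequences whose first bit is 1 (exhaustive search)."""
    best = [None] * depth
    for tail in product((0, 1), repeat=depth - 1):
        state, w = 0, 0
        for i, b in enumerate((1,) + tail):
            reg = (b << 2) | state        # newest input bit on top
            w += sum(parity(reg & g) for g in G)
            state = reg >> 1
            if best[i] is None or w < best[i]:
                best[i] = w
    return best

print(cdf(12))   # → [2, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5]
```

The CDF is nondecreasing and levels off at the code's free distance (here 5); the papers' interest is in how fast it climbs, since a rapidly growing CDF reduces the computational effort of sequential decoding.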


Journal ArticleDOI
TL;DR: This work constructs phase codes for use in phase- and frequency-shift modulation using a trellis structure and shows that even short codes give a large improvement in error performance.
Abstract: We construct phase codes for use in phase- and frequency-shift modulation. A trellis structure allows simple decoding using the Viterbi algorithm. Even short codes give a large improvement in error performance.

22 citations


Journal ArticleDOI
TL;DR: The main properties of the generalized t-designs introduced by Delsarte (1973) are studied and used in a majority decoding method which differs slightly from Massey's threshold decoding.
Abstract: The main properties of the generalized t-designs introduced by Delsarte (1973) are studied and used in a majority decoding method which differs slightly from Massey's threshold decoding. The paper also contains a number of results concerning the existence of such designs in codes and a list of some codes which can be decoded by our method.

Journal ArticleDOI
Po Chen
TL;DR: This concise paper demonstrates the decoding of BCH codes by using a multisequence linear feedback shift register (MLFSR) synthesis algorithm; a class of good codes is found with error-correcting capability at least as good as the BCH bound.
Abstract: This concise paper demonstrates the decoding of BCH codes by using a multisequence linear feedback shift register (MLFSR) synthesis algorithm. The algorithm is investigated and developed. With this algorithm, a class of good codes has been found with error-correcting capability at least as good as the BCH bound. The application of this algorithm to BCH decoding is discussed.

Journal ArticleDOI
TL;DR: This correspondence gives a tabulation of long systematic, and long quick-look-in (QLI) nonsystematic, rate R = 1/2 binary convolutional codes with an optimum distance profile (ODP) that appear attractive for use with sequential decoders.
Abstract: This correspondence gives a tabulation of long systematic, and long quick-look-in (QLI) nonsystematic, rate R = 1/2 binary convolutional codes with an optimum distance profile (ODP). These codes appear attractive for use with sequential decoders.

09 Apr 1976
TL;DR: A new method is given for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
Abstract: Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

Journal ArticleDOI
TL;DR: For the problem of minimizing mean-square error when digital data is encoded into the elements of a group code via an encoding map α and the received elements are decoded via a decoding map β, it is shown that the solution obtained under the restriction β = α⁻¹ still holds when this restriction is removed.
Abstract: The problem of minimizing mean-square error when digital data is encoded into the elements of a group code via an encoding map α and the received elements of the group code are decoded via a decoding map β is considered. This problem has been solved under the restriction that β = α⁻¹. It is shown that the same solution still obtains when this restriction is removed.

15 Aug 1976
TL;DR: A general technique, called decoding with multipliers, is presented that can be used to decode any linear code and is applied to the (48,24) quadratic residue code and yields the first known practical decoding algorithm for this powerful code.
Abstract: A general technique, called decoding with multipliers, is presented that can be used to decode any linear code. The technique is applied to the (48,24) quadratic residue code and yields the first known practical decoding algorithm for this powerful code.

Journal ArticleDOI
TL;DR: Conditions are given for a realization to be a canonical encoder and for two canonical encoders to generate the same code; the techniques used provide an enumeration of canonical encoders whose realizations are not isomorphic.
Abstract: A realization of a linear sequential circuit is called a canonical convolutional encoder if its state space has the smallest dimension among all realizations generating the same code. We give conditions for a realization to be a canonical encoder and for two canonical encoders to generate the same code. The techniques used provide an enumeration of canonical encoders whose realizations are not isomorphic.

Journal ArticleDOI
TL;DR: By using distance properties of convolutional codes, an upper bound on the back-up depth for maximum likelihood decoding is derived, and several constraints for performing the back-up search are introduced which eliminate the necessity for many of the subsearches.
Abstract: By using distance properties of convolutional codes, an upper bound on the back-up depth for maximum likelihood decoding is derived. Then several constraints for performing the back-up search are introduced which eliminate the necessity for many of the subsearches. Examples are given in each step to explain the approach.



01 Jan 1976
TL;DR: A natural extension of the notion of a 'confidence interval' is made and applied to determinations of error probability by simulation to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
Abstract: The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
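The statistical point here can be illustrated with the standard exact (Clopper-Pearson-style) one-sided upper confidence limit, found by bisection on the binomial tail. This is a textbook construction in the same spirit as the paper's extended confidence interval, not the paper's own method.

```python
from math import comb

def upper_limit(k, n, confidence=0.95):
    """Smallest p such that observing <= k errors in n trials has
    probability at most 1 - confidence: an exact one-sided upper
    confidence limit for the true error probability."""
    alpha = 1.0 - confidence

    def tail(p):               # P(X <= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    lo, hi = 0.0, 1.0
    for _ in range(60):        # bisection: tail(p) is decreasing in p
        mid = (lo + hi) / 2
        if tail(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Even zero observed errors in 100 trials only bounds p below about 3%,
# which is why so few decoding errors can still be highly significant.
print(round(upper_limit(0, 100), 4))   # → 0.0295
```

The k = 0 value reproduces the familiar "rule of three" estimate of roughly 3/n for the 95% level.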

Journal ArticleDOI
TL;DR: A new bound on the error-correcting capability of majority decoding using nonorthogonal parity checks is derived and the new bound is applied to a class of Euclidean geometry codes.
Abstract: A new bound on the error-correcting capability of majority decoding using nonorthogonal parity checks is derived. The new bound is then applied to a class of Euclidean geometry codes.



Journal ArticleDOI
TL;DR: Sequences of upper and lower bounds on the capacity of a binary channel with a fission mechanism are derived, and the resulting lower bound is within 0.014 nats of the fourteenth-order upper bound to capacity, uniformly in the fission probability.
Abstract: We derive sequences of upper and lower bounds that converge to the capacity of a binary channel in which a one takes twice as long to send as does a zero and may be received either as a one or as a pair of zeros. Such a fission mechanism can occur, for example, in the use of Morse code over a noisy channel. Next we present a sequential decoding algorithm for the channel which is particularly easy to implement. By means of the Perron-Frobenius theorem and an extension of Zigangirov's analysis of sequential decoding, we overbound error probability and thereby again underbound capacity. The resulting lower bound turns out to be within 0.014 nats of the fourteenth-order upper bound to capacity, uniformly in the fission probability. By extending an analytical method due in part to Jelinek, we overbound expected decoding computation and thereby lowerbound R_comp.
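In the noiseless limit this is Shannon's classical calculation for symbols of unequal duration: capacity in bits per unit time is the unique C > 0 with Σᵢ 2^(−C·Tᵢ) = 1. The sketch below covers only that noiseless analogue of the channel described (a zero of one time unit, a one of two); the paper's noisy-channel bounds are far more involved.

```python
def capacity(durations, iters=80):
    """Noiseless capacity (bits per unit time) of a channel whose
    symbols have the given durations: the unique C > 0 satisfying
    sum_i 2**(-C * T_i) = 1, found by bisection."""
    def f(c):
        return sum(2.0 ** (-c * t) for t in durations)

    lo, hi = 0.0, 10.0       # f is decreasing in c, and f(0) >= 2 here
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi

print(round(capacity([1, 1]), 6))   # two unit-duration symbols → 1.0
print(round(capacity([1, 2]), 6))   # a zero of length 1, a one of length 2
```

For durations (1, 2) the answer is log2 of the golden ratio, about 0.694 bits per unit time, which upper-bounds what any noisy version of the fission channel can achieve.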

Journal ArticleDOI
TL;DR: For the single error correcting convolutional codes introduced by Wyner and Ash, it is shown that if sufficiently few errors occur in an appropriate neighborhood of a block, the probability of correctly decoding that block is independent of errors outside that neighborhood.
Abstract: For the single error correcting convolutional codes introduced by Wyner and Ash, it is shown that if sufficiently few errors occur in an appropriate neighborhood of a block, the probability of correctly decoding that block is independent of errors outside that neighborhood. This fact is used to derive bounds on the bit error probability and the mean time to first error.

Journal Article
TL;DR: Multiple decoding schemes for multiple-access channels are obtained by modifying Forney's maximum-likelihood decoding scheme with erasures, and exponential bounds on the probability of error and erasure are derived.
Abstract: The output sequences may be partitioned into different subsets and different decoders may be operating on these subsets. This is the idea of multiple decoding. The paper contains multiple decoding schemes for multiple access channels by modification of Forney's maximum likelihood decoding scheme with erasures. Exponential bounds on the probability of error and erasure are obtained.


Journal ArticleDOI
TL;DR: The first moment of the decoding effort for stack sequential decoding is overbounded by a relatively simple technique, and the resulting lower bound to R_comp is shown to be equivalent to the best known lower bound.
Abstract: The first moment of the decoding effort for stack sequential decoding is overbounded by a relatively simple technique. The resulting lower bound to R_comp is shown to be equivalent to the best known lower bound.
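A minimal sketch of the stack (Zigangirov-Jelinek) sequential decoder whose effort such analyses bound: partial paths are kept on a priority queue ordered by the Fano metric, and the best one is extended at each step. The (7, 5) code and the BSC crossover probability are illustrative assumptions, not taken from the paper.

```python
import heapq
from math import log2

def parity(x):
    return bin(x).count("1") & 1

G = (0b111, 0b101)             # rate-1/2, constraint-length-3 code (7, 5)
R, P = 0.5, 0.05               # code rate, assumed BSC crossover probability
M_GOOD = 1 + log2(1 - P) - R   # Fano branch metric per agreeing code bit
M_BAD = 1 + log2(P) - R        # ... per disagreeing code bit (large penalty)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def stack_decode(received):
    """Stack algorithm: repeatedly extend the partial path whose Fano
    metric is currently largest, until one path reaches full length."""
    n = len(received) // 2
    stack = [(0.0, 0, ())]     # (negated metric, encoder state, input bits)
    while stack:               # heapq is a min-heap, hence the negation
        neg_m, state, bits = heapq.heappop(stack)
        if len(bits) == n:
            return list(bits)
        r = received[2 * len(bits) : 2 * len(bits) + 2]
        for b in (0, 1):
            reg = (b << 2) | state
            branch = sum(M_GOOD if parity(reg & g) == rb else M_BAD
                         for g, rb in zip(G, r))
            heapq.heappush(stack, (neg_m - branch, reg >> 1, bits + (b,)))

msg = [1, 0, 1, 1, 0, 0, 0]
print(stack_decode(encode(msg)) == msg)   # → True
```

Unlike Viterbi decoding, the number of stack extensions is random and grows with the noise, which is exactly the "decoding effort" whose first moment the paper bounds.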

J. L. Massey, T. Ancheta, R. Johannesson, G. Lauer, L. Lee
14 Jun 1976
TL;DR: A weight-and-error-locations scheme closely related to LDSC coding was developed, and a heuristic selection rule based on a water-filling argument was considered for choosing optimum modulation signal sets for a non-white Gaussian channel.
Abstract: The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.