Showing papers on "Sequential decoding" published in 1975


Journal ArticleDOI
TL;DR: Algebraic decoding algorithms for the Goppa codes are presented, which are only a little more complex than Berlekamp's well-known algorithm for BCH codes and, in fact, make essential use of his procedure.
Abstract: An interesting class of linear error-correcting codes has been found by Goppa [3], [4]. This paper presents algebraic decoding algorithms for the Goppa codes. These algorithms are only a little more complex than Berlekamp's well-known algorithm for BCH codes and, in fact, make essential use of his procedure. Hence the cost of decoding a Goppa code is similar to the cost of decoding a BCH code of comparable block length.

221 citations
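
For readers who want the flavor of the algebraic setup, the following is a generic key-equation formulation for alternant-type codes (of which Goppa codes are a special case), not a transcription of the paper's specific algorithms. A Goppa code with Goppa polynomial g(z) and distinct locators \alpha_1, \ldots, \alpha_n \in \mathrm{GF}(q^m) consists of the words c with

\sum_{i=1}^{n} \frac{c_i}{z - \alpha_i} \equiv 0 \pmod{g(z)}.

Given a received word r = c + e, one forms the syndrome polynomial

S(z) \equiv \sum_{i=1}^{n} \frac{r_i}{z - \alpha_i} \pmod{g(z)}

and solves the key equation

\sigma(z)\, S(z) \equiv \omega(z) \pmod{g(z)}

for the error-locator \sigma(z) and error-evaluator \omega(z). It is at this step that a Berlekamp-type procedure can be applied, which is why the decoding cost is comparable to that of a BCH code of similar block length.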


Journal ArticleDOI
TL;DR: The results of computer searches for rate one-half binary convolutional codes that are "robustly optimal" in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria are reported.
Abstract: Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are "robustly optimal" in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.

83 citations
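
Because the distance profile drives the computational behavior reported above, here is a small sketch of one brute-force way to compute the column distances d_0, ..., d_m of a rate-1/2 feedforward convolutional code; the (7,5) generators are only a familiar illustration, not necessarily one of the paper's tabulated codes.

from itertools import product

def column_distance(g1, g2, j):
    """d_j: minimum Hamming weight of the first j + 1 output branches over
    all information sequences whose first bit is 1 (by linearity, the
    all-zero path serves as the reference)."""
    K = len(g1)
    best = None
    for tail in product((0, 1), repeat=j):      # information bits u_1 .. u_j
        u = (1,) + tail                         # u_0 = 1: the path diverges at the root
        w = 0
        for t in range(j + 1):
            v1 = sum(g1[i] * u[t - i] for i in range(K) if t - i >= 0) % 2
            v2 = sum(g2[i] * u[t - i] for i in range(K) if t - i >= 0) % 2
            w += v1 + v2
        best = w if best is None else min(best, w)
    return best

# Example with the familiar (7,5) octal generators (memory m = 2):
g1, g2 = (1, 1, 1), (1, 0, 1)
print([column_distance(g1, g2, j) for j in range(len(g1))])   # distance profile, here [2, 3, 3]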


Journal ArticleDOI
TL;DR: These algorithms fill the gap between one-path sequential decoding and all-path Viterbi decoding; simulation with short constraint length codes shows that the variability of the number of computations per decoded bit and the maximum computational effort are both reduced at the cost of a modest increase in the average decoding effort.
Abstract: A new class of generalized stack algorithms for decoding convolutional codes is presented. It is based on the Zigangirov-Jelinek (Z-J) algorithm but, instead of extending just the top node of the stack at all times, a number of the most likely paths are simultaneously extended. This number of paths may be constant or may be varied to match the current decoding effort with the prevalent noise conditions of the channel. Moreover, the trellis structure of the convolutional code is used by recognizing and exploiting the reconvergence of the paths. As a result the variability of the computation can be reduced up to a limit set by the "ideal" stack algorithm. Although the tail of the computational distribution is still Pareto, it is shown and verified from simulation with short constraint length codes (K \leq 9) of rate \frac{1}{2} that, compared to sequential decoding, the variability of the number of computations per decoded bit and the maximum computational effort are both reduced at the cost of a modest increase in the average decoding effort. Moreover, some of the error events of sequential decoding are corrected. These algorithms fill the gap between one-path sequential decoding and all-path Viterbi decoding.

78 citations
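
The following is a minimal sketch of a generalized stack decoder in the spirit described above: instead of extending only the top node, the M best nodes in the stack are extended on each iteration. It assumes a rate-1/2 feedforward encoder and a binary symmetric channel, and it omits the trellis-based path remerging the paper adds; all parameter names are illustrative.

import heapq, math

def encode_step(state, bit, g1, g2):
    # One encoder step: returns (output pair, next state); g[0] taps the current bit.
    reg = (bit,) + state
    v1 = sum(a & b for a, b in zip(g1, reg)) % 2
    v2 = sum(a & b for a, b in zip(g2, reg)) % 2
    return (v1, v2), reg[:-1]

def stack_decode(received, g1, g2, p=0.05, M=3):
    # received: list of (bit, bit) channel pairs; returns the decoded info bits.
    # Fano metric increments per channel bit on a BSC, code rate R = 1/2.
    hit = math.log2(2 * (1 - p)) - 0.5
    miss = math.log2(2 * p) - 0.5
    m = len(g1) - 1                                  # encoder memory
    # Min-heap of (-metric, depth, state, info bits), so the best path is on top.
    stack = [(0.0, 0, (0,) * m, ())]
    while True:
        if stack[0][1] == len(received):             # best path has reached the end
            return stack[0][3]
        best = [heapq.heappop(stack) for _ in range(min(M, len(stack)))]
        for neg_metric, depth, state, info in best:
            if depth == len(received):               # full length but not yet best: keep it
                heapq.heappush(stack, (neg_metric, depth, state, info))
                continue
            for bit in (0, 1):
                out, nxt = encode_step(state, bit, g1, g2)
                inc = sum(hit if o == r else miss
                          for o, r in zip(out, received[depth]))
                heapq.heappush(stack, (neg_metric - inc, depth + 1, nxt, info + (bit,)))

# Usage: encode 1,1,0,1,0 with the (7,5) code and decode the noiseless stream.
g1, g2 = (1, 1, 1), (1, 0, 1)
info, state, tx = (1, 1, 0, 1, 0), (0, 0), []
for b in info:
    out, state = encode_step(state, b, g1, g2)
    tx.append(out)
print(stack_decode(tx, g1, g2))                      # -> (1, 1, 0, 1, 0)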


Journal ArticleDOI
TL;DR: This work proposes using a convolutional encoder for joint source and channel encoding; when the channel is noiseless, the scheme reduces to a convolutional source code that is simpler to encode than any other optimal noiseless source code known to date.
Abstract: In certain communications problems, such as remote telemetry, it is important that any operations performed at the transmitter be of a simple nature, while operations performed at the receiver can frequently be orders of magnitude more complex. Channel coding is well matched to such situations while conventional source coding is not. To overcome this difficulty of usual source coding, we propose using a convolutional encoder for joint source and channel encoding. When the channel is noiseless this scheme reduces to a convolutional source code that is simpler to encode than any other optimal noiseless source code known to date. In either case, decoding can be a minor variation on sequential decoding.

34 citations


Journal ArticleDOI
TL;DR: If the vectors of some constant weight in the dual of a binary linear code support a (v, b, r, k, \lambda) balanced incomplete block design (BIBD), then it is possible to correct \lfloor (r + \lambda - 1)/2\lambda \rfloor errors with one-step majority logic decoding.
Abstract: If the vectors of some constant weight in the dual of a binary linear code support a (v, b, r, k, \lambda) balanced incomplete block design (BIBD), then it is possible to correct \lfloor (r + \lambda - 1)/2\lambda \rfloor errors with one-step majority logic decoding. This bound is generalized to the case when the vectors of certain constant weight in the dual code support a t-design. With the aid of this bound, the one-step majority logic decoding of the first-, second-, and third-order Reed-Muller codes is examined.

15 citations
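
Spelled out with the standard BIBD parameters (replication number r, index \lambda), the bound quoted above is

\left\lfloor \frac{r + \lambda - 1}{2\lambda} \right\rfloor

errors correctable by one-step majority logic decoding. As a purely arithmetic illustration (not an example from the paper), a design with r = 7 and \lambda = 1 would give \lfloor 7/2 \rfloor = 3 correctable errors.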


Journal ArticleDOI
TL;DR: A method is proposed that utilizes punctured Reed-Solomon block codes for adaptive coding, enabling some codewords to use more redundancy for correcting errors while other adjacent codewords use less redundancy.
Abstract: A method is proposed that utilizes punctured Reed-Solomon (RS) block codes for adaptive coding. Part of the redundancy of the RS codewords is used in a convolutional coding framework. This enables some codewords to use more redundancy for correcting errors, while other adjacent codewords use less redundancy.

15 citations
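
As a generic illustration of the trade-off that puncturing makes available (not a reconstruction of the authors' adaptive framework), the snippet below tabulates how many symbol errors an (n, k) Reed-Solomon codeword can still correct as parity symbols are punctured; the (255, 223) parameters are only an example.

def correctable_errors(n, k, punctured):
    # A punctured RS code is still MDS: with p parity symbols removed it has
    # length n - p, dimension k, and corrects floor((n - p - k) / 2) errors.
    remaining_parity = (n - k) - punctured
    return max(remaining_parity, 0) // 2

n, k = 255, 223
for p in (0, 8, 16, 24, 32):
    print(p, correctable_errors(n, k, p))   # 16, 12, 8, 4, 0 correctable errors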


Book ChapterDOI
01 Jan 1975
TL;DR: This chapter focuses on feedback decoding of convolutional codes; convolutional encoding with feedback decoding is capable of providing error correction performance superior to that of block codes for the same level of equipment complexity.
Abstract: In recent years, convolutional coding-decoding techniques have become increasingly popular in digital communication systems where there is a requirement to provide error correction and to improve communication efficiency. This chapter focuses on the feedback decoding of convolutional codes. Convolutional encoding with feedback decoding is capable of providing error correction performance superior to that of block codes for the same level of equipment complexity. Convolutional encoding-decoding is more desirable than competing block encoding-decoding techniques in most of these applications because, for a given error correction capability or improvement in communication efficiency, systems based on convolutional codes are less complex and hence less costly. This has been shown both theoretically and in practical equipment designs and implementations. In general, feedback decoder implementations have the added attraction that they can be made effective on burst error channels. Interleaving of data in the encoder and deinterleaving in the decoder can be performed in a straightforward manner, effectively breaking up error bursts and making the channel appear memoryless to the decoder. Feedback decoders are simple to implement. Feedback decoding is especially attractive on burst error channels, as very effective interleaving, to break up long bursts, can be implemented simply with no increase in code synchronization requirements.

15 citations
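
The interleaving idea mentioned above is generic enough to sketch. The block interleaver below (codewords written into rows, channel symbols read out by columns, with purely illustrative dimensions) is one common realization rather than the chapter's specific design.

def interleave(symbols, rows, cols):
    # Write row by row, read column by column.
    assert len(symbols) == rows * cols
    table = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [table[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Inverse permutation: write column by column, read row by row.
    table = [symbols[c * rows:(c + 1) * rows] for c in range(cols)]
    return [table[c][r] for r in range(rows) for c in range(cols)]

data = list(range(12))
assert deinterleave(interleave(data, 3, 4), 3, 4) == data
# A short burst of channel errors is dispersed into isolated errors after
# deinterleaving, so the feedback decoder sees them as scattered single errors.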


Journal ArticleDOI
TL;DR: It is shown that Goppa codes can be decoded by a simple modification of a decoder for a Reed-Solomon code.
Abstract: It is shown that Goppa codes can be decoded by a simple modification of a decoder for a Reed-Solomon code.

12 citations


Patent
18 Jun 1975
TL;DR: This patent describes a multiple decoding system that includes means for encoding digital data and decoding it by three different methods: Parity Decoding (comprising Parity I and Parity II Decoding), Sequential Decoding by rank, and a combination of these two methods.
Abstract: This disclosure describes a Multiple Decoding System which includes a means for encoding digital data and decoding by three different methods. These methods are referred to as Parity Decoding which includes Parity I and Parity II Decoding, Sequential Decoding by rank, and a combination of these two methods. This third method is Multiple Decoding and consists of a combination of Parity and Sequential Decoding.

8 citations


Journal ArticleDOI
TL;DR: A new error-locating polynomial for BCH codes is developed and a decoding procedure similar to Massey's step-by-step decoding is suggested.
Abstract: We develop a new error-locating polynomial for BCH codes. This polynomial has a simple form and suggests a decoding procedure similar to Massey's step-by-step decoding.

8 citations


Journal ArticleDOI
TL;DR: A new hybrid coding scheme is introduced that bears the same relation to Viterbi decoding as bootstrap hybrid decoding [3] bears to sequential decoding.
Abstract: A new hybrid coding scheme is introduced that bears the same relation to Viterbi decoding as bootstrap hybrid decoding [3] bears to sequential decoding. Bounds on the probability of error are developed and evaluated for some examples. In high-rate regions of interest, the computed exponents are more than three times as large as those for Viterbi decoding. Results of simulations are also presented.

Book ChapterDOI
01 Jan 1975
TL;DR: This course is devoted to the foundations of sequential decoding, first suggested by J. Wozencraft and subsequently improved substantially by American and Soviet scientists.
Abstract: This course is devoted to the foundations of sequential decoding. Sequential decoding was first suggested by J. Wozencraft [1] and has since been substantially improved by American and Soviet scientists.

Journal ArticleDOI
TL;DR: The average synchronization-error-correcting capability of Tavares' subset codes may be improved with no additional cost in rate and with only a small increase in the complexity of encoding and decoding.
Abstract: In this correspondence a method is presented whereby the average synchronization-error-correcting capability of Tavares' subset codes may be improved with no additional cost in rate and with only a small increase in the complexity of encoding and decoding. The method consists simply in shifting every word of the subset codes in such a way that the shifted versions have the maximum number of leading and trailing zeros. A lower bound on the increase in synchronization-error-correcting capability provided by this method is derived.
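
As a small illustration of the shifting step (assuming, as seems natural here, that the shifts are cyclic), the sketch below picks for a codeword the rotation with the most leading plus trailing zeros; the example word is arbitrary.

def leading_trailing_zeros(word):
    n = len(word)
    lead = next((i for i, b in enumerate(word) if b), n)
    trail = next((i for i, b in enumerate(reversed(word)) if b), n)
    return n if lead == n else lead + trail

def best_shift(word):
    # Try every cyclic rotation and keep the one maximizing leading + trailing zeros.
    rotations = [word[i:] + word[:i] for i in range(len(word))]
    return max(rotations, key=leading_trailing_zeros)

print(best_shift([0, 1, 0, 0, 0, 1, 0]))   # -> [0, 0, 0, 1, 0, 0, 1] (3 end zeros instead of 2)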

Book ChapterDOI
01 Jan 1975
TL;DR: Let Ψ:X→Y be some Boolean function, where X and Y are sets of binary words of length n1 and n2, respectively, and encoding and decoding can be viewed as such a Boolean function.
Abstract: Let Ψ:X→Y be some Boolean function, where X and Y are sets of binary words of length n1 and n2, respectively. It is obvious that encoding and decoding can be viewed as such a Boolean function.

Journal ArticleDOI
TL;DR: A class of burst-error-correcting binoid codes derived from Samoylenko's codes is presented; at high code rates, these codes appear very useful for a not-too-noisy transmission channel when the encoding-decoding operations are performed on a general-purpose computer.
Abstract: In this paper we present a class of burst-error-correcting binoid codes derived from Samoylenko's codes. At high code rates, these codes seem to be very useful for a not-too-noisy transmission channel when the encoding-decoding operations are performed by means of a general-purpose computer.


19 Aug 1975
TL;DR: Simulations for two of the new codes confirm Massey's conjecture that systematic and non-systematic codes of the same rate yield nearly identical computational and error probability performance with sequential decoding when the number of digits transmitted in the tail of the encoded frame is the same for both codes.
Abstract: A tabulation is given of long systematic and long quick-look-in (QLI) nonsystematic rate R = 1/2 binary convolutional codes with an optimum distance profile (ODP). These codes appear attractive for use with sequential decoders. Simulations for two of the new codes are reported and confirm Massey's conjecture that systematic and nonsystematic codes of the same rate yield nearly identical computational and error probability performance with sequential decoding when the number of digits transmitted in the tail of the encoded frame is the same for both codes.
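
For context, in the usual Massey-Costello quick-look-in construction (which this report is assumed to follow), the two rate-1/2 generator polynomials are chosen so that G_1(D) + G_2(D) = D. Adding the two encoded streams then returns the information sequence delayed by one time unit, with no decoder at all:

V_1(D) + V_2(D) = U(D)\big(G_1(D) + G_2(D)\big) = D\, U(D).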


01 Jan 1975
TL;DR: A new algorithm, based on the sequential compound detector, is derived in this thesis for the joint detection and decoding of information transmitted at high speed through intersymbol interference channels using convolutional encoding.
Abstract: A new algorithm, based on the sequential compound detector, is derived in this thesis for the joint detection and decoding of information transmitted at high speed through intersymbol interference channels using convolutional encoding. Performance results, obtained using computer simulation, show the 'joint' algorithm to considerably outperform the separate detection and decoding procedure, without a significant increase in computational complexity. In addition, the hardware requirements for the joint detector-decoder are substantially less than those for the separate detector-decoder for short constraint lengths.