
Showing papers on "List decoding published in 1975"


Journal ArticleDOI
TL;DR: Algebraic decoding algorithms for the Goppa codes are presented, which are only a little more complex than Berlekamp's well-known algorithm for BCH codes and, in fact, make essential use of his procedure.
Abstract: An interesting class of linear error-correcting codes has been found by Goppa [3], [4]. This paper presents algebraic decoding algorithms for the Goppa codes. These algorithms are only a little more complex than Berlekamp's well-known algorithm for BCH codes and, in fact, make essential use of his procedure. Hence the cost of decoding a Goppa code is similar to the cost of decoding a BCH code of comparable block length.
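For readers unfamiliar with the procedure referenced here, the sketch below shows Berlekamp-Massey synthesis in its simplest binary form. It is an illustrative stand-in only: actual Goppa decoding runs the procedure over GF(2^m) on syndromes derived from the Goppa polynomial, and the function name and interface are invented for this example.

def berlekamp_massey_gf2(s):
    """Shortest LFSR (connection polynomial, length) generating the binary sequence s."""
    n = len(s)
    c = [1] + [0] * n          # current connection polynomial C(x)
    b = [1] + [0] * n          # copy of C(x) before the last length change
    L, m = 0, -1               # current LFSR length, index of last length change
    for i in range(n):
        d = s[i]               # discrepancy between the LFSR prediction and s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                  # prediction failed: C(x) <- C(x) + x^(i-m) B(x)
            t = c[:]
            shift = i - m
            for j in range(n - shift + 1):
                c[j + shift] ^= b[j]
            if 2 * L <= i:     # the register length must grow
                L, m, b = i + 1 - L, i, t
    return c[:L + 1], L

For example, berlekamp_massey_gf2([1, 0, 1, 0]) returns ([1, 0, 1], 2), i.e. the recursion s_i = s_{i-2}.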

221 citations


Journal ArticleDOI
TL;DR: These algorithms fill the gap between one-path sequential decoding and all-path Viterbi decoding; simulations with short constraint length codes show that the variability of the number of computations per decoded bit and the maximum computational effort are both reduced at the cost of a modest increase in the average decoding effort.
Abstract: A new class of generalized stack algorithms for decoding convolutional codes is presented. It is based on the Zigangirov-Jelinek (Z-J) algorithm but, instead of extending just the top node of the stack at all times, a number of the most likely paths are simultaneously extended. This number of paths may be constant or may be varied to match the current decoding effort with the prevalent noise conditions of the channel. Moreover, the trellis structure of the convolutional code is used by recognizing and exploiting the reconvergence of the paths. As a result the variability of the computation can be reduced up to a limit set by the "ideal" stack algorithm. Although the tail of the computational distribution is still Pareto, it is shown and verified from simulation with short constraint length codes (K \leq 9) of rate \frac{1}{2} that, compared to sequential decoding, the variability of the number of computations per decoded bit and the maximum computational effort are both reduced at the cost of a modest increase in the average decoding effort. Moreover, some of the error events of sequential decoding are corrected. These algorithms fill the gap between the one-path sequential decoding and the all-path Viterbi decoding.
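As a hedged illustration of the multiple-path idea (not the authors' implementation), the sketch below pops and extends the M best paths per step on a toy rate-\frac{1}{2}, K = 3 convolutional code; a hard-decision Hamming metric stands in for the Fano-type metric used in practice, and all names are invented for the example.

import heapq

G = (0b111, 0b101)                       # generator taps of a toy K = 3 code

def branch(state, bit):
    """One trellis branch: returns (output pair, next encoder state)."""
    reg = (bit << 2) | state             # new input bit plus two memory bits
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1

def stack_decode(received, n_bits, M=3):
    """received: list of n_bits hard-decision 2-tuples. Returns the decoded bits."""
    heap = [(0, (), 0)]                  # (metric, decoded bits, encoder state)
    while True:
        if len(heap[0][1]) == n_bits:    # stop when the stack top is a complete path
            return heap[0][1]
        # pop the M most promising paths instead of only the single best one
        popped = [heapq.heappop(heap) for _ in range(min(M, len(heap)))]
        for metric, bits, state in popped:
            if len(bits) == n_bits:      # complete but not yet best: keep it
                heapq.heappush(heap, (metric, bits, state))
                continue
            for bit in (0, 1):           # extend by one branch per input bit
                out, nxt = branch(state, bit)
                dist = sum(o != r for o, r in zip(out, received[len(bits)]))
                heapq.heappush(heap, (metric + dist, bits + (bit,), nxt))

With M = 1 this reduces to the ordinary one-path stack algorithm; larger M trades average effort for reduced computational variability, which is the trade-off the paper quantifies.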

78 citations


Journal ArticleDOI
TL;DR: If the vectors of some constant weight in the dual of a binary linear code support a (v, b, r, k, \lambda) balanced incomplete block design (BIBD), then it is possible to correct \lfloor(r + \lambda - 1)/2\lambda\rfloor errors with one-step majority logic decoding.
Abstract: If the vectors of some constant weight in the dual of a binary linear code support a (v, b, r, k, \lambda) balanced incomplete block design (BIBD), then it is possible to correct \lfloor(r + \lambda - 1)/2\lambda\rfloor errors with one-step majority logic decoding. This bound is generalized to the case when the vectors of certain constant weight in the dual code support a t-design. With the aid of this bound, the one-step majority logic decoding of the first, second, and third order Reed-Muller codes is examined.
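As a purely illustrative check of the bound with hypothetical design parameters: a BIBD with r = 7 and \lambda = 1 would guarantee \lfloor(7 + 1 - 1)/(2 \cdot 1)\rfloor = 3 correctable errors, while raising \lambda to 3 with the same r reduces the guarantee to \lfloor 9/6 \rfloor = 1 error.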

15 citations


Journal ArticleDOI
TL;DR: With this method the error correction capability of the decoder is extended for large signal-to-noise ratios (SNR's); different decoding algorithms are used depending on whether the number of orthogonal parity-check sums is even or odd.
Abstract: A method of using reliability information in one-step majority-logic decoders is presented. The idea is, basically, that the received binary digits are corrected in order of reliability, the least reliable digit first. With this method the error correction capability of the decoder is extended for large signal-to-noise ratios (SNR's). Different decoding algorithms are used depending on whether the number of orthogonal parity-check sums is even or odd. Computer simulations are presented for some short codes with binary antipodal signals on the additive white Gaussian noise channel.
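A minimal sketch of the reliability-ordering idea follows (it is not the authors' algorithm, which additionally distinguishes even and odd numbers of check sums); checks[i] is assumed to hold the orthogonal parity checks on position i, each given as a collection of bit positions.

def reliability_ordered_mld(hard_bits, reliabilities, checks):
    """Correct the least reliable positions first using one-step majority logic."""
    bits = list(hard_bits)
    order = sorted(range(len(bits)), key=lambda i: reliabilities[i])
    for i in order:                      # least reliable position first
        sums = [sum(bits[j] for j in chk) % 2 for chk in checks[i]]
        if 2 * sum(sums) > len(sums):    # a majority of orthogonal checks fail
            bits[i] ^= 1                 # flip the suspect digit
    return bits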

13 citations


Patent
18 Jun 1975
TL;DR: In this patent, the authors describe a multiple decoding system that encodes digital data and decodes it by three different methods: Parity Decoding (comprising Parity I and Parity II Decoding), Sequential Decoding by rank, and a combination of the two.
Abstract: This disclosure describes a Multiple Decoding System which includes a means for encoding digital data and decoding by three different methods. These methods are referred to as Parity Decoding, which includes Parity I and Parity II Decoding; Sequential Decoding by rank; and a combination of these two methods. This third method, Multiple Decoding, consists of a combination of Parity and Sequential Decoding.

8 citations


Journal ArticleDOI
TL;DR: A new error-locating polynomial for BCH codes is developed and a decoding procedure similar to Massey's step-by-step decoding is suggested.
Abstract: We develop a new error-locating polynomial for BCH codes. This polynomial has a simple form and suggests a decoding procedure similar to Massey's step-by-step decoding.

8 citations


Journal ArticleDOI
TL;DR: A new hybrid coding scheme is introduced that bears the same relation to Viterbi decoding as bootstrap hybrid decoding [3] bears to sequential decoding.
Abstract: A new hybrid coding scheme is introduced that bears the same relation to Viterbi decoding as bootstrap hybrid decoding [3] bears to sequential decoding. Bounds on the probability of error are developed and evaluated for some examples. In high-rate regions of interest, the computed exponents are more than three times as large as those for Viterbi decoding. Results of simulations are also presented.

7 citations


Journal ArticleDOI
TL;DR: The average synchronization-error-correcting capability of Tavares' subset codes may be improved with no additional cost in rate and with only a small increase in the complexity of encoding and decoding.
Abstract: In this correspondence a method is presented whereby the average synchronization-error-correcting capability of Tavares' subset codes may be improved with no additional cost in rate and with only a small increase in the complexity of encoding and decoding. The method consists simply of shifting every word of the subset codes so that the shifted versions have a maximum number of leading and trailing zeros. A lower bound on the increase in synchronization-error-correcting capability provided by this method is derived.
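The shifting step lends itself to a simple search. The sketch below, with invented names and binary words represented as 0/1 lists, merely picks the cyclic shift of a word that maximizes the combined count of leading and trailing zeros, the quantity the correspondence seeks to enlarge.

def best_cyclic_shift(word):
    """Return (shift, shifted word) maximizing leading plus trailing zeros."""
    def lead_trail_zeros(w):
        if not any(w):                   # all-zero word: every position counts
            return len(w)
        lead = next(i for i, b in enumerate(w) if b)
        trail = next(i for i, b in enumerate(reversed(w)) if b)
        return lead + trail
    shifts = [word[s:] + word[:s] for s in range(len(word))]
    s = max(range(len(word)), key=lambda i: lead_trail_zeros(shifts[i]))
    return s, shifts[s]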

4 citations


Book ChapterDOI
01 Jan 1975
TL;DR: Let Ψ:X→Y be some Boolean function, where X and Y are sets of binary words of length n_1 and n_2, respectively; encoding and decoding can each be viewed as such a Boolean function.
Abstract: Let Ψ:X→Y be some Boolean function, where X and Y are sets of binary words of length n_1 and n_2, respectively. It is obvious that encoding and decoding can each be viewed as such a Boolean function.
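As a trivial concrete instance of this viewpoint (chosen for illustration, not taken from the chapter), the length-3 repetition code's encoder and decoder are explicit maps between binary words of lengths n_1 = 1 and n_2 = 3:

encode = {(0,): (0, 0, 0), (1,): (1, 1, 1)}      # encoder: X -> Y

def decode(y):                                   # decoder: Y -> X, majority vote
    return (1,) if sum(y) >= 2 else (0,)

assert decode(encode[(1,)]) == (1,)
assert decode((1, 0, 1)) == (1,)                 # a single error is corrected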

3 citations



Journal ArticleDOI
TL;DR: Campbell's generalized Kraft inequality and related results for decoding with respect to a fidelity criterion, previously restricted to "row-balanced" distortion matrices for finite-alphabet sources, are extended to arbitrary distortion matrices.
Abstract: Campbell has given a generalized Kraft inequality and related results for decoding with respect to a fidelity criterion. His results are, however, restricted to "row-balanced" distortion matrices for finite-alphabet sources. We generalize these results to any distortion matrix.
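For background only (this is the ordinary Kraft inequality, not Campbell's fidelity-criterion generalization treated in the paper), the snippet below checks whether prescribed codeword lengths l_i can belong to a q-ary prefix code, i.e. whether \sum_i q^{-l_i} \leq 1.

def satisfies_kraft(lengths, q=2):
    return sum(q ** (-l) for l in lengths) <= 1

print(satisfies_kraft([1, 2, 3, 3]))    # True: realized by {0, 10, 110, 111}
print(satisfies_kraft([1, 1, 2]))       # False: no binary prefix code exists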

Journal ArticleDOI
TL;DR: The decoders described in the present paper are distinguished by regularity and replicability of their structure, as a result of which they satisfy the principal requirements imposed by modern microelectronics.
Abstract: 1. The paper proposes methods of synthesizing a new class of decoding devices for cyclic codes, which combine the function of decoding with the correction of different types of errors in the code combinations. 2. The volume of equipment in decoders with error detection and correction of erasures and asymmetric errors is determined solely by the number of code combinations and in no way depends on the code redundancy; as a result, the transition to codes having greater redundancy (for a constant number of information symbols) does not require any increase in equipment. The proposed devices are best suited to codes with k \leq 10-15 information symbols (a condition most closely matched by remote control systems, in which the decoding operation is mandatory). 3. The volume of equipment in decoders that correct multiple independent and correlated errors is a linear function of the number of corrected errors (error bursts). Cascade decoders are characterized by a considerably lower complexity. 4. The decoders described in the present paper are distinguished by the regularity and replicability of their structure, as a result of which they satisfy the principal requirements imposed by modern microelectronics.

Book ChapterDOI
01 Jan 1975
TL;DR: Two central problems of coding for noisy channels are identified: finding high-performance codes and devising efficient but practical decoding methods.
Abstract: Two central problems of coding for noisy channels are: (1) to find high-performance codes, and (2) to devise efficient but practical decoding methods.