
Showing papers on "List decoding published in 1969"


Journal ArticleDOI
TL;DR: This paper describes a technique for high-speed decoding of burst-error-correcting codes and a class of codes most suitable for this purpose; with a small amount of additional circuitry, the proposed decoders are capable of decoding speeds several orders of magnitude higher than those of conventional decoders.
Abstract: This paper describes a technique for high-speed decoding of burst-error-correcting codes and a class of codes most suitable for this purpose. With a small amount of additional circuitry the decoders proposed in this paper are capable of decoding speeds several orders of magnitude higher than those of conventional decoders.

49 citations


Journal ArticleDOI
TL;DR: Asymptotically tight upper and lower error bounds are obtained for orthogonal signals in additive white Gaussian noise channels for a class of generalized decision strategies, which afford the possibility of erasure or variable-size list decoding.
Abstract: For a class of generalized decision strategies, which afford the possibility of erasure or variable-size list decoding, asymptotically tight upper and lower error bounds are obtained for orthogonal signals in additive white Gaussian noise channels. Under the hypothesis that a unique signal set is asymptotically optimal for the entire class of strategies, these bounds are shown to hold for the optimal set in both the white Gaussian channel and the class of input-discrete very noisy memoryless channels.

31 citations
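One concrete instance of such a generalized decision strategy is a likelihood-ratio threshold test: a message enters the decoder's output list only if its likelihood sufficiently exceeds the combined likelihood of its competitors, so a positive threshold gives an erasure option (list size at most one) and a negative threshold gives a variable-size list. The Python sketch below illustrates this kind of rule for orthogonal signals in additive white Gaussian noise; the threshold form, signal energy, and noise level are illustrative assumptions, not necessarily the exact strategy analyzed in the paper.

import math, random

def list_decide(y, energy, noise_var, T):
    """Return the list of messages m whose likelihood exceeds exp(T) times the
    combined likelihood of all competing messages.  For T >= 0 the list has at
    most one entry (an empty list acts as an erasure); T < 0 allows lists
    larger than one (variable-size list decoding)."""
    # For equal-energy orthogonal signals, p(y | m) differs across m only
    # through the factor exp(sqrt(E) * y_m / sigma^2).
    scores = [math.exp(math.sqrt(energy) * ym / noise_var) for ym in y]
    total = sum(scores)
    return [m for m, s in enumerate(scores) if s > math.exp(T) * (total - s)]

random.seed(1)
M, energy, noise_var, sent = 8, 4.0, 1.0, 3
y = [random.gauss(math.sqrt(energy) if m == sent else 0.0, math.sqrt(noise_var))
     for m in range(M)]
print(list_decide(y, energy, noise_var, T=1.0))    # erasure-style rule
print(list_decide(y, energy, noise_var, T=-2.0))   # variable-size list rule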



Journal ArticleDOI
TL;DR: An adaptive decoding technique called burst trapping is presented to correct both random and burst errors; simulation results indicate that, compared with interleaved block codes, such codes offer better performance at significantly lower cost.
Abstract: An adaptive decoding technique called burst trapping is presented to correct both random and burst errors. Two decoding algorithms are used, one for random errors and the other for bursts. The former is based on a conventional correction technique; the latter utilizes an encoding procedure in which each information digit appears twice in the data stream, first unchanged and second combined with (addition modulo 2) a check digit of a widely separated later block. Whenever the number of errors within a code block is detected to be too large to correct with the random-error-correcting algorithm, the burst-correcting algorithm corrects these errors by recovering the information from later blocks where it appears in combination with check digits. It is shown that the scheme requires very limited guard space and has limited error propagation. Furthermore, the storage requirement is even smaller than the guard space. This is the only known coding system that has this desirable feature. Simulations of such codes over telephone channels indicate that, compared with interleaved block codes, they offer better performance at significantly lower cost.

25 citations
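The embedding behind burst trapping can be illustrated with a toy Python sketch: each information block is sent once in the clear, and again D blocks later added modulo 2 to that later block's check digits, so a block wiped out by a burst can be recovered from a later, cleanly received block. The block size, the delay D, and the stand-in rule for the check digits below are illustrative assumptions, not the paper's code.

import random

B, D = 4, 3                                   # bits per block, block delay of the second copy

def checks(info):                             # stand-in check digits (simple parity pattern)
    return [info[i] ^ info[(i + 1) % B] for i in range(B)]

def encode(blocks):
    out = []
    for t, info in enumerate(blocks):
        earlier = blocks[t - D] if t >= D else [0] * B
        mixed = [c ^ e for c, e in zip(checks(info), earlier)]
        out.append((info, mixed))             # transmit the info digits plus the mixed check digits
    return out

blocks = [[random.randint(0, 1) for _ in range(B)] for _ in range(8)]
tx = encode(blocks)

# Suppose a burst destroys the clear copy of block 2.  Its information is
# recovered from the mixed check digits of block 2 + D, whose own information
# digits arrived intact.
t = 2
info_later, mixed_later = tx[t + D]
recovered = [m ^ c for m, c in zip(mixed_later, checks(info_later))]
assert recovered == blocks[t]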


01 Nov 1969
TL;DR: In this paper, an algorithm for optimal decoding of convolutional codes is confirmed using an error probability upper bound.
Abstract: An algorithm for optimal decoding of convolutional codes is confirmed using an error probability upper bound.

20 citations
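A minimal sketch of an optimal (maximum-likelihood) decoder for a convolutional code is given below in Python. The rate-1/2, constraint-length-3 code with generators 7 and 5 (octal), the hard-decision Hamming metric, and the zero-tail termination are illustrative assumptions, not details taken from the paper.

G = [(1, 1, 1), (1, 0, 1)]                     # generator taps for the two output streams

def encode(bits):
    state, out = (0, 0), []
    for b in bits + [0, 0]:                    # two tail bits return the encoder to state 0
        window = (b,) + state
        out += [sum(g * w for g, w in zip(gi, window)) % 2 for gi in G]
        state = (b, state[0])
    return out

def viterbi(received):
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for t in range(0, len(received), 2):
        new_metric, new_paths = {}, {}
        for s in states:
            best = None
            for prev in states:
                for b in (0, 1):
                    if (b, prev[0]) != s:      # keep only transitions prev --b--> s
                        continue
                    window = (b,) + prev
                    branch = [sum(g * w for g, w in zip(gi, window)) % 2 for gi in G]
                    d = metric[prev] + sum(r != c for r, c in zip(received[t:t + 2], branch))
                    if best is None or d < best[0]:
                        best = (d, paths[prev] + [b])
            new_metric[s], new_paths[s] = best
        metric, paths = new_metric, new_paths
    return paths[(0, 0)][:-2]                  # drop the two tail bits

msg = [1, 0, 1, 1, 0, 0, 1]
word = encode(msg)
word[3] ^= 1                                   # two well-separated channel errors
word[10] ^= 1
assert viterbi(word) == msg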


Journal ArticleDOI
TL;DR: Tutorially presented are theoretical and practical concepts that underlie error-control coding for data computing, storage, and transmission systems, with emphasis on cyclic codes, the most deeply studied and widely used of the many available codes.
Abstract: Tutorially presented are theoretical and practical concepts that underlie error-control coding for data computing, storage, and transmission systems. Emphasis is on cyclic codes, the most deeply studied and widely used of the many available codes. Operations of typical binary shift registers illustrate the encoding and decoding processes. Strategic considerations for applying coding to computer-communication systems are discussed. Actual applications further exemplify the basis for code selection.

19 citations
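The shift-register encoding and decoding operations the tutorial illustrates amount to polynomial division over GF(2). The Python sketch below shows systematic encoding and syndrome computation for the (7, 4) cyclic Hamming code with generator g(x) = x^3 + x + 1; the choice of code is an illustrative assumption, not taken from the article.

G = [1, 0, 1, 1]                       # g(x) = x^3 + x + 1, highest degree first

def divide(bits, g):
    """Long division of the polynomial given by `bits` by g over GF(2); returns
    the remainder, which is exactly what a feedback shift register computes."""
    rem = list(bits)
    for i in range(len(bits) - len(g) + 1):
        if rem[i]:
            for j, gj in enumerate(g):
                rem[i + j] ^= gj
    return rem[-(len(G) - 1):]

def encode(msg):                       # systematic: append the remainder of msg(x) * x^3
    return msg + divide(msg + [0] * (len(G) - 1), G)

def syndrome(word):                    # zero syndrome <=> word is a code polynomial
    return divide(word, G)

word = encode([1, 0, 1, 1])
assert syndrome(word) == [0, 0, 0]
word[2] ^= 1                           # a single error produces a nonzero syndrome
assert syndrome(word) != [0, 0, 0]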


Journal ArticleDOI
TL;DR: It is shown that minimum-distance and other decoders for parity-check codes can be realized with complexity proportional to the square of block length, although at the possible expense of a large decoding time.
Abstract: Several classes of decoding rules are considered here including block decoding rules, tree decoding rules, and bounded-distance and minimum-distance decoding rules for binary parity-check codes. Under the assumption that these rules are implemented with combinational circuits and sequential machines constructed with AND gates, OR gates, INVERTERS, and binary memory cells, bounds are derived on their complexity. Complexity is measured by the number of logic elements and memory cells, and it is shown that minimum-distance and other decoders for parity-check codes can be realized with complexity proportional to the square of block length, although at the possible expense of a large decoding time. We examine tradeoffs between probability of error and complexity for the several classes of rules.

18 citations
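To make the time/complexity trade-off concrete, the Python sketch below is a brute-force minimum-distance decoder for a small binary linear (parity-check) code: enumerating all 2^k codewords needs very little logic but a long decoding time, the opposite corner of the trade-off from a fast parallel circuit. The (7, 4) Hamming generator matrix is an illustrative choice, not one taken from the paper.

from itertools import product

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def codewords(G):
    # Enumerate all 2^k codewords from the generator matrix.
    for msg in product([0, 1], repeat=len(G)):
        yield [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def min_distance_decode(r):
    # Pick the codeword at minimum Hamming distance from the received word.
    return min(codewords(G), key=lambda c: sum(ri != ci for ri, ci in zip(r, c)))

word = [1, 1, 0, 0, 0, 1, 1]          # codeword for message 1100
word[6] ^= 1                          # one channel error
assert min_distance_decode(word) == [1, 1, 0, 0, 0, 1, 1]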


Journal ArticleDOI
K. Levitt, W. Kautz
TL;DR: A cellular array is shown to be applicable for the encoding and decoding of binary error-correcting codes, and also for identifying the possibilities of tradeoffs between decoding time and equipment complexity.
Abstract: A cellular array is a logical network of identical or almost identical cells, each of which contains a small amount of logic and storage, and, except for a few buses to the edge of the array, is connected only to its immediate neighbors. The cellular approach offers special advantages for realization by the forthcoming large-scale-integrated (LSI) technology. Such arrays are shown to be applicable for the encoding and decoding of binary error-correcting codes, and also for identifying the possibilities of tradeoffs between decoding time and equipment complexity. Arrays are presented for the decoding of single errors, burst errors, and erasures; the decoding of erasures is accomplished by the equation-solution approach, and it is shown for several code families that the Gauss elimination procedure is not required.

12 citations
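The equation-solution approach to erasure decoding amounts to solving the parity checks H c = 0 for the erased positions. The Python sketch below does this with plain Gaussian elimination over GF(2); the paper's point is that elimination can be avoided for several code families, so the general routine here is only an illustration, and the (7, 4) Hamming parity-check matrix is an assumed example.

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def fill_erasures(word, erased):
    """word: 0/1 list with arbitrary values at the erased indices; erased: the
    known erasure positions.  Solves H c = 0 for those positions, assuming the
    corresponding columns of H are linearly independent."""
    # Build [coefficients of erased bits | right-hand side from known bits].
    rows = [[H[r][j] for j in erased] +
            [sum(H[r][j] * word[j] for j in range(len(word)) if j not in erased) % 2]
            for r in range(len(H))]
    # Gauss-Jordan elimination over GF(2).
    values = [0] * len(erased)
    pivot_rows = []
    for col in range(len(erased)):
        piv = next(r for r in range(len(rows)) if rows[r][col] and r not in pivot_rows)
        pivot_rows.append(piv)
        for r in range(len(rows)):
            if r != piv and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[piv])]
    for col, piv in enumerate(pivot_rows):
        values[col] = rows[piv][-1]
    out = list(word)
    for pos, val in zip(erased, values):
        out[pos] = val
    return out

codeword = [1, 0, 1, 0, 1, 0, 1]       # satisfies H c = 0 for the matrix above
received = list(codeword)
received[0] = received[4] = 0          # two positions erased (values unknown)
assert fill_erasures(received, [0, 4]) == codeword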


Journal ArticleDOI
TL;DR: This correspondence shows the formal equivalence between Massey's decoding scheme called threshold decoding, involving L-step orthogonalizable codes, and Reed's decoding scheme originally conceived for the Muller codes.
Abstract: This correspondence shows the formal equivalence between Massey's decoding scheme called threshold decoding, involving L-step orthogonalizable codes, and Reed's decoding scheme originally conceived for the Muller codes. Upon examining these two decoding algorithms it is shown that each can be described in terms of a decoding logic circuit. The formal equivalence of the algorithms is proved by showing the formal equivalence of their respective decoding circuits.

11 citations
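One side of this equivalence, Reed's majority-logic algorithm for the first-order (Reed-)Muller codes, can be sketched directly: each first-order coefficient is recovered by a majority vote over a set of orthogonal check sums, which is exactly the kind of threshold decision Massey's scheme formalizes. The Python sketch below uses RM(1, 3), an (8, 4) code correcting one error, as an illustrative example.

from itertools import product

def rm1_encode(a, m):
    # a = [a0, a1, ..., am]; the codeword bit at point x is a0 + sum(ai * xi) mod 2.
    return [(a[0] + sum(a[i + 1] * x[i] for i in range(m))) % 2
            for x in product([0, 1], repeat=m)]

def rm1_reed_decode(r, m):
    pts = list(product([0, 1], repeat=m))
    idx = {x: i for i, x in enumerate(pts)}
    a = [0] * (m + 1)
    # Each first-order coefficient a_i is estimated by a majority vote over
    # 2^(m-1) orthogonal check sums (pairs of points differing only in x_i).
    for i in range(m):
        votes = []
        for x in pts:
            if x[i] == 0:
                y = list(x); y[i] = 1
                votes.append((r[idx[x]] + r[idx[tuple(y)]]) % 2)
        a[i + 1] = 1 if sum(votes) * 2 > len(votes) else 0
    # Remove the estimated linear part and recover a_0 by a final majority vote.
    residual = [(r[idx[x]] + sum(a[i + 1] * x[i] for i in range(m))) % 2 for x in pts]
    a[0] = 1 if sum(residual) * 2 > len(residual) else 0
    return a

msg = [1, 0, 1, 1]               # [a0, a1, a2, a3] for RM(1, 3)
word = rm1_encode(msg, 3)
word[5] ^= 1                     # a single channel error
assert rm1_reed_decode(word, 3) == msg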


Journal ArticleDOI
TL;DR: Comparison of the performance of an ordinary feedback decoder with a genie-aided feedback decoder, which never propagates errors, indicates that error propagation with uniform codes is a minor problem if the optimum orthogonalization rules are used, but that the situation is somewhat worse with nonoptimum orthogonalization.
Abstract: The problem of error propagation in uniform codes is investigated using the concept of parity-parallelogram submatrices and the threshold-decoding algorithm. A set of optimum orthogonalization rules is presented and it is shown that if these rules are incorporated into the decoder, then sufficient conditions can be found for the return of the decoder to correct operation following a decoding error. These conditions are considerably less stringent than the requirement that the channel be completely free of errors following a decoding error. However, this is not the case if the prescribed orthogonalization rules are not followed, as is demonstrated with a simple example. It is also shown that the syndrome memory required with Massey's orthogonalization procedure for definite decoding of uniform codes is the lowest possible. The results of simulation of the rate-1/4 and rate-1/8 uniform codes are presented, and these codes are seen to make fewer decoding errors with feedback decoding than with definite decoding. Comparison of the performance of an ordinary feedback decoder with a genie-aided feedback decoder, which never propagates errors, indicates that error propagation with uniform codes is a minor problem if the optimum orthogonalization rules are used, but that the situation is somewhat worse with nonoptimum orthogonalization.

6 citations


Journal ArticleDOI
TL;DR: It is proved that every linear code of dimension k can be decoded by a threshold decoding circuit that is guaranteed to correct e errors if e ≤ (d - 1)/2, where d is the minimum distance of the code.
Abstract: It is proved that every linear code of dimension k can be decoded by a threshold decoding circuit that is guaranteed to correct e errors if e ≤ (d - 1)/2, where d is the minimum distance of the code. Moreover, it is demonstrated that the number of levels of threshold logic is less than or equal to k by giving an algorithm for generating the decoding logic employing k levels.
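As a worked instance of the stated guarantee, take the (7, 4) Hamming code: it has minimum distance d = 3 and dimension k = 4, so the construction guarantees correction of e ≤ (3 - 1)/2 = 1 error using at most 4 levels of threshold logic.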

Journal ArticleDOI
TL;DR: The techniques of coding theory are used to improve the reliability of digital devices by introducing majority voting and parity bit checking, and computations are made for several binary addition circuits.
Abstract: The techniques of coding theory are used to improve the reliability of digital devices. Redundancy is added to the device by the addition of extra digits which are independently computed from the input digits. A decoding device examines the original outputs along with the redundant outputs. The decoder may correct any errors it detects, not correct but locate the defective logic gate or subsystem, or only issue a general error warning. Majority voting and parity bit checking are introduced, and computations are made for several binary addition circuits. A detailed summary of coding theory is presented. This includes a discussion of algebraic codes, binary group codes, nonbinary linear codes, and error locating codes.
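The Python sketch below illustrates the flavor of these techniques on a toy 4-bit adder: majority voting is shown as triple modular redundancy, and the independently computed check digits are illustrated with a modulo-3 residue check, a stand-in for the paper's parity circuits. The adder and the injected faults are illustrative, not circuits from the paper.

def add4(a, b):                          # the device being protected (no overflow in this example)
    return a + b

def bits(x, width=5):
    return [(x >> i) & 1 for i in range(width)]

# Majority voting: run three copies of the adder and vote bit by bit, so a
# single faulty copy is outvoted by the other two.
a, b = 9, 5
copies = [bits(add4(a, b)), bits(add4(a, b)), bits(add4(a, b) ^ 0b00100)]   # third copy faulty
voted = [max(t, key=t.count) for t in zip(*copies)]
assert voted == bits(14)

# Residue checking: the check digits (a + b) mod 3 are computed independently
# from the inputs.  A single-bit fault changes the sum by +/-2^i, and 2^i mod 3
# is never 0, so the mismatch flags the error (without locating it).
faulty_sum = add4(a, b) ^ 0b00010
check = (a % 3 + b % 3) % 3
print("error detected:", faulty_sum % 3 != check)   # True for this single-bit fault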

Journal ArticleDOI
W. Kautz, K. Levitt
TL;DR: The most noteworthy Soviet contributions have occurred in those areas that deal with codes for the noiseless channel, codes that correct asymmetric errors, decoding for cyclic codes, random-coding bounds on the amount of computation required, and various application criteria.
Abstract: Described in this report are the results of a comprehensive technical survey of all published Soviet literature in coding theory and its applications--over 400 papers and books appearing before March 1967. The purpose of this report is to draw attention to this important collection of technical results, which are not well known in the West, and to summarize the significant contributions. Particular emphasis is placed upon those results that fill gaps in the body of knowledge about coding theory and practice as familiar to non-Soviet workers. The most noteworthy Soviet contributions have occurred in those areas that deal with codes for the noiseless channel, codes that correct asymmetric errors, decoding for cyclic codes, random-coding bounds on the amount of computation required, and various application criteria--that is, when to use which code, and how well it performs. Other important but isolated results have been reported on the construction of optimal low-rate codes, bounds on nonrandom codes, linear (continuous) coding, codes for checking arithmetic operations, properties of code polynomials, linear transformations of codes, multiple-burst-correcting codes, special synchronization codes, and certain broad generalizations of the conventional coding problem. Little or no significant work has been done on pseudorandom sequences, unit-distance codes (with one exception), the application of codes to the design of redundant computers and memories, the search for good cyclic codes, and the physical realization of sequential decoding algorithms. Section II of this report is directed to the nonspecialist, and describes the status of the field of coding theory in the Soviet Union, summarizes the major technical results, and compares these with corresponding work in the West. Section III discusses in detail for the coding specialist new theoretical results, details of coding procedures, and analytical tools described in the Soviet literature. A complete bibliography is included.

Journal ArticleDOI
TL;DR: It is shown that the proposed decoding scheme can be applied to several BCH codes, making it possible to correct many errors beyond those guaranteed by the known minimum distance; the codes are also "effectively" majority decodable.
Abstract: A decoding scheme is given for some block codes for which it is known how to decode a subcode. It is shown that the proposed decoding scheme can be applied to several BCH codes, making it possible to correct many errors beyond those guaranteed by the known minimum distance, and that the codes are also "effectively" majority decodable.

01 Oct 1969
TL;DR: The upper bound on the rate of the arithmetic code is derived and a simple decoding method is presented for a general multiple error correction.
Abstract: The upper bound on the rate of the arithmetic code is derived. Comparisons to actual rates are presented. Some codes have rates very close to this bound. A simple decoding method is presented for a general multiple error correction. The time required for the decoding depends on the decoding index k. For a small decoding index, the decoding can be much faster by using some parallel hardware.
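Arithmetic codes of the AN type can be sketched in a few lines of Python: a number N is transmitted as A*N, and the residue of the received value modulo A serves as a syndrome. The modulus A = 19 and the single-error model (errors of the form +/-2^i confined to the low nine bit positions, whose 18 syndromes are distinct modulo 19) are illustrative assumptions; the sketch does not reproduce the paper's rate bound or its decoding index k.

A = 19
N_BITS = 9

# Precompute the syndrome of every single arithmetic error +/-2^i, i < 9.
syndrome_to_error = {}
for i in range(N_BITS):
    for e in (+(1 << i), -(1 << i)):
        syndrome_to_error.setdefault(e % A, e)

def encode(n):
    return A * n

def decode(r):
    s = r % A
    if s == 0:
        return r // A                          # no detectable error
    e = syndrome_to_error.get(s)
    if e is not None:
        return (r - e) // A                    # corrected single arithmetic error
    raise ValueError("uncorrectable error pattern")

word = encode(21)
assert decode(word) == 21
assert decode(word + (1 << 6)) == 21           # single error of +2^6 corrected
assert decode(word - (1 << 2)) == 21           # single error of -2^2 corrected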

Journal ArticleDOI
TL;DR: A general method is proposed for decoding any cyclic binary code at extremely high speed using only modulo 2 adders and threshold elements, and the decoders may be designed for maximum-likelihood decoding.
Abstract: A general method is proposed for decoding any cyclic binary code at extremely high speed using only modulo 2 adders and threshold elements, and the decoders may be designed for maximum-likelihood decoding. The number of decoding cycles is a fraction of the number of digits in the code word.


Journal ArticleDOI
TL;DR: A binary code which locates the position of a single subblock containing errors in a code word is described briefly, and a decoding technique employing feedback shift registers is discussed.
Abstract: A binary code which locates the position of a single subblock containing errors in a code word is described briefly, and a decoding technique employing feedback shift registers is discussed.

Journal ArticleDOI
TL;DR: A decoding scheme, with feedback, for linear convolutional codes requires no post-decoding processing involving memory to retrieve the information sequences, which implies that no additional errors occur that are due only to the information retrieving process.
Abstract: A decoding scheme, with feedback, for linear convolutional codes is given. The scheme when applied to linear nonsystematic convolutional codes requires no post-decoding processing involving memory to retrieve the information sequences. This implies that no additional errors occur that are due only to the information retrieving process.