
Showing papers on "List decoding published in 1968"


Journal ArticleDOI
N. Abramson
TL;DR: It is shown how to synthesize simple decoders for cyclic product codes in the form of a cascade of decoders, each of which operates on one of the subcodes forming the product code.
Abstract: In this paper, it is shown how to synthesize simple decoders for cyclic product codes. Cyclic product codes may be synthesized in the form of interlaced codes if the block lengths of the codes are relatively prime. In this case, the decoder can be synthesized in the form of a cascade of decoders, each of which operates on one of the subcodes forming the product code. The cascade decoders described differ from most of the decoders given in the coding literature in two major respects. First, they can be built. Second, instead of correcting all error patterns with weight less than some fixed value and no error patterns of greater weight, cascade decoders correct many error patterns beyond their guaranteed correction capability. Thus, the effective error-correction performance using a cascade decoder may be considerably beyond the usual Elias bound for product codes.

60 citations
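A minimal sketch of the two ideas in the abstract, using an assumed product of two single-parity-check codes with coprime block lengths 3 and 5 (not Abramson's codes, and a simplified stand-in for his cascade decoders): the CRT index map k -> (k mod n1, k mod n2) interlaces the code array into a single word, and decoding proceeds as a cascade of two stages, each operating only on its own subcode's parity checks.

```python
# Illustrative sketch (not Abramson's construction verbatim): a product of two
# single-parity-check codes, with block lengths 3 and 5 chosen coprime so that
# the CRT index map k -> (k mod 3, k mod 5) interlaces the n1 x n2 array into a
# single length-15 word.  Decoding is a cascade: a row stage and a column
# stage, each using only its own subcode's parity checks.

import numpy as np

n1, n2 = 3, 5                      # coprime block lengths (assumption for this toy)

def encode(info):
    """Product of two single-parity-check codes: info is (n1-1) x (n2-1)."""
    arr = np.zeros((n1, n2), dtype=int)
    arr[:n1-1, :n2-1] = info
    arr[:n1-1, n2-1] = arr[:n1-1, :n2-1].sum(axis=1) % 2   # row parities
    arr[n1-1, :]     = arr[:n1-1, :].sum(axis=0) % 2       # column parities
    return arr

def interlace(arr):
    """Read the array out in CRT order; gcd(n1, n2) = 1 makes this a bijection."""
    return np.array([arr[k % n1, k % n2] for k in range(n1 * n2)])

def deinterlace(word):
    arr = np.zeros((n1, n2), dtype=int)
    for k, bit in enumerate(word):
        arr[k % n1, k % n2] = bit
    return arr

def cascade_decode(arr):
    """Stage 1: the row-subcode decoder flags failing rows.
       Stage 2: the column-subcode decoder flags failing columns.
       A single intersection point is corrected."""
    bad_rows = np.flatnonzero(arr.sum(axis=1) % 2)
    bad_cols = np.flatnonzero(arr.sum(axis=0) % 2)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        arr[bad_rows[0], bad_cols[0]] ^= 1
    return arr

# Demo: one channel error in the interlaced word is located and corrected.
info = np.array([[1, 0, 1, 1], [0, 1, 1, 0]])
tx = interlace(encode(info))
rx = tx.copy(); rx[7] ^= 1                     # single bit error
decoded = cascade_decode(deinterlace(rx))
assert (decoded == encode(info)).all()
```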


Journal ArticleDOI
TL;DR: This paper discusses the use of two types of convolutional codes, diffuse threshold-decoded codes and Gallager codes, on channels with memory (burst channels), and proves that, for one important diffuse code, propagation is finite and small.
Abstract: This paper discusses the use of two types of convolutional codes, diffuse threshold-decoded codes and Gallager codes, on channels with memory (burst channels). The operation of these codes is explained and test results are given for a variety of equipments operated over phone line, HF radio, and troposcatter channels. Error propagation in the threshold-decoded codes is discussed and, in the Appendix, we prove that, for one important diffuse code, propagation is finite and small.

52 citations


Journal ArticleDOI
TL;DR: A decoding method having no internal feedback, definite decoding (DD), is formalized and it is shown that a code using FD with limited L exists if and only if that same code can be decoded using DD.
Abstract: The error-propagation effect in decoding convolutional codes is a result of the internal feedback in the usual decoding method, feedback decoding (FD). As a measure of this effect, the propagation length L of a system is defined as the maximum span of decoding errors following a decoding error when all succeeding parity checks are satisfied. A relationship between L, the parity-check matrix, and the decoding algorithm is developed. A decoding method having no internal feedback, definite decoding (DD), is formalized. It is shown that a code using FD with limited L exists if and only if that same code can be decoded using DD. When using DD a smaller class of errors is corrected. The self-orthogonal codes are shown to be decodable using FD with L small. The minimum possible value of L when using bounded-distance decoding is given for some of these codes. Codes are given which minimize the spacing between single correctable errors using DD. These values of spacing are compared with those for similar (known) codes which use FD, and with the theoretical minimum spacing.

34 citations
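To make the FD/DD distinction concrete, here is a sketch for an assumed toy rate-1/2 self-orthogonal convolutional code with parity p[k] = u[k] + u[k-1] (not one of the paper's codes). Each information error is checked by two syndromes orthogonal on it, and a flag switches between feedback decoding, which cancels an accepted error estimate from the later syndrome it affects, and definite decoding, which uses the syndromes exactly as computed.

```python
# Toy sketch, not a code from the paper: rate-1/2 convolutional code with
# parity p[k] = u[k] ^ u[k-1].  Each information error affects syndromes
# s[k] and s[k+1], which are orthogonal on it, so a majority (here: unanimity,
# since J = 2) decision corrects one error per constraint length.
# feedback=True  -> feedback decoding (FD): an accepted error estimate is
#                   subtracted from the later syndrome it touches.
# feedback=False -> definite decoding (DD): syndromes are used as computed.

def encode(u):
    return [(u[k], u[k] ^ (u[k-1] if k else 0)) for k in range(len(u))]

def decode(rx, feedback=True):
    ru = [a for a, _ in rx]
    rp = [b for _, b in rx]
    n = len(rx)
    # Syndromes: s[k] = rp[k] ^ ru[k] ^ ru[k-1]
    s = [rp[k] ^ ru[k] ^ (ru[k-1] if k else 0) for k in range(n)]
    u_hat = []
    for k in range(n):
        s_next = s[k+1] if k + 1 < n else 0
        e_hat = 1 if (s[k] + s_next) == 2 else 0     # both orthogonal checks fail
        u_hat.append(ru[k] ^ e_hat)
        if feedback and e_hat and k + 1 < n:
            s[k+1] ^= 1            # FD only: remove this error's effect downstream
    return u_hat

u = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(u)
rx[3] = (rx[3][0] ^ 1, rx[3][1])   # flip one information bit on the channel
assert decode(rx, feedback=True) == u
assert decode(rx, feedback=False) == u   # DD also handles this single error
```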



Book ChapterDOI
01 Jan 1968
TL;DR: It seems highly significant that most of the new codes found for implementation by threshold decoders have their origins in number theory rather than in algebra, which promises to provide additional classes of codes suitable for threshold decoding.
Abstract: Publisher Summary This chapter presents recent advances in threshold decoding. In a linear code, whether of the block type or of the convolutional type, the redundant bits in the encoded set are each formed as a modulo-2 summation of selected information bits. Threshold decoding is a simple solution to the decoding problem based on a special subset of parity checks. Threshold decoding has two common forms: (1) majority decoding and (2) a posteriori probability (APP) decoding. The importance of APP decoding is that it permits use of the statistical information regarding the received bits available at the receiver, in contrast with purely algebraic decoding techniques; in many systems, such as those signaling over the Gaussian white noise channel, discarding this information results in an unacceptable degradation of performance. It seems highly significant that most of the new codes found for implementation by threshold decoders have their origins in number theory rather than in algebra. Further use of number theory in this way promises to provide additional classes of codes suitable for threshold decoding.

12 citations
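A schematic contrast of the two forms named in the abstract, assuming J orthogonal check sums that disagree with the error digit independently with known probabilities (the channel parameters and numbers below are invented for illustration): a plain majority vote versus an APP-style log-likelihood-weighted vote. This is a sketch of the idea, not a reproduction of the chapter's exact decision rules.

```python
# Schematic comparison, for a single error digit e_m checked by J orthogonal
# parity-check sums A_1..A_J.  Assumes each check sum differs from e_m
# independently with known probability q_i (true for orthogonal checks on a
# memoryless channel).  The weighted rule is an APP-style log-likelihood vote;
# the plain majority rule ignores reliabilities.

import math

def majority_decode(checks):
    """Hard majority vote: flip the digit if more than half the checks fail."""
    return 1 if sum(checks) * 2 > len(checks) else 0

def app_style_decode(checks, q, p):
    """Weighted vote: q[i] = P(check i disagrees with the error digit),
       p = prior probability that the error digit is 1 (channel crossover)."""
    llr = math.log(p / (1 - p))                       # prior term
    for A_i, q_i in zip(checks, q):
        llr += (2 * A_i - 1) * math.log((1 - q_i) / q_i)
    return 1 if llr > 0 else 0

# Example: three checks, two of them formed from very noisy received digits.
checks = [1, 0, 0]            # only the reliable check fired
q      = [0.01, 0.45, 0.45]   # check 1 is trustworthy, checks 2 and 3 are not
p      = 0.3

print(majority_decode(checks))        # 0 -- the two unreliable checks outvote it
print(app_style_decode(checks, q, p)) # 1 -- the weighted vote trusts check 1
```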


Journal ArticleDOI
TL;DR: Several structural and distance properties of the class of codes are derived and utilized in developing a decoding algorithm and the choice of generator sequence has a marked effect on the ability of the decoder to recover after an error is made.
Abstract: Several structural and distance properties of the class of codes are derived and utilized in developing a decoding algorithm. Thresholds are determined from distance properties of the codes. Further properties of the codes, under the conditions of the algorithm, are utilized to reduce the number of path comparisons required to ensure minimum distance decoding. Simulation results show that the choice of generator sequence has a marked effect on the ability of the decoder to recover after an error is made.

8 citations
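The paper's code class and pruning conditions are not spelled out in the abstract; the sketch below only illustrates minimum-distance decoding by path comparison on an assumed toy rate-1/2 convolutional code, with a running distance bound used to abandon unpromising paths early and so reduce the number of comparisons.

```python
# Illustrative only (the paper's codes and conditions are not reproduced here):
# minimum-distance decoding of a short, unterminated rate-1/2 convolutional
# code by comparing candidate paths, with a running distance bound used to
# abandon paths early and so cut down the number of path comparisons.

from itertools import product

G = [0b111, 0b101]            # generator taps of an assumed toy code (K = 3)

def encode(u):
    state, out = 0, []
    for bit in u:
        state = ((state << 1) | bit) & 0b111
        out += [bin(state & g).count("1") & 1 for g in G]
    return out

def min_distance_decode(rx, k):
    best_u, best_d = None, len(rx) + 1
    for u in product([0, 1], repeat=k):       # every candidate path
        d = 0
        for est, got in zip(encode(list(u)), rx):
            d += est ^ got
            if d >= best_d:                   # prune: cannot beat the current best
                break
        else:
            best_u, best_d = list(u), d
    return best_u

u = [1, 0, 1, 1, 0]
rx = encode(u)
rx[0] ^= 1; rx[5] ^= 1                        # two channel errors
assert min_distance_decode(rx, len(u)) == u
```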


Journal ArticleDOI
TL;DR: A new decoding algorithm for some convolutional codes constructed from block codes is given and it is shown that the codes obtained from one-step orthogonalizable block codes are majority decodable.
Abstract: A new decoding algorithm for some convolutional codes constructed from block codes is given. The algorithm utilizes the decoding algorithm for the corresponding block code. It is shown that the codes obtained from one-step orthogonalizable block codes are majority decodable. Error propagation in some of these convolutional codes is studied. It is shown that if decoded with moderately reduced capability, these codes exhibit limited error propagation. A mode switching decoding method is suggested to realize a larger error correction capability while maintaining limited error propagation.

6 citations
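As an illustration of one-step orthogonalizability, using a made-up (6,3) block code rather than the paper's construction: each information bit is covered by two parity checks that share no other position, so a single majority decision per bit corrects any single channel error. The paper's lifting of such block decoders to the derived convolutional codes, and its mode-switching variant, are not shown here.

```python
# Toy example (not the paper's construction): a systematic (6,3) code with
# parities p1 = u1+u2, p2 = u2+u3, p3 = u3+u1.  Each information bit appears in
# exactly two parity checks that share no other bit, i.e. the checks are
# orthogonal on it, so the code is one-step orthogonalizable and a single
# majority (here: unanimity, J = 2) decision per bit corrects one error.

CHECKS = {                     # check -> positions it covers (u1 u2 u3 p1 p2 p3)
    0: (0, 1, 3),              # u1 + u2 + p1
    1: (1, 2, 4),              # u2 + u3 + p2
    2: (2, 0, 5),              # u3 + u1 + p3
}
ORTHOGONAL_ON = {0: (0, 2), 1: (0, 1), 2: (1, 2)}   # info bit -> its two checks

def encode(u):
    u1, u2, u3 = u
    return [u1, u2, u3, u1 ^ u2, u2 ^ u3, u3 ^ u1]

def majority_decode(r):
    syndromes = {c: sum(r[i] for i in pos) % 2 for c, pos in CHECKS.items()}
    u_hat = []
    for bit, (c1, c2) in ORTHOGONAL_ON.items():
        flip = syndromes[c1] and syndromes[c2]       # both orthogonal checks fail
        u_hat.append(r[bit] ^ flip)
    return u_hat

for err in range(6):                   # every single-error pattern is handled
    cw = encode([1, 0, 1])
    cw[err] ^= 1
    assert majority_decode(cw) == [1, 0, 1]
```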


Journal ArticleDOI
01 Oct 1968
TL;DR: An upper bound on the average digit error probability of a linear block code is given which is dependent only upon the minimum distance of the code and the tightness of this bound is demonstrated.
Abstract: An upper bound on the average digit error probability of a linear block code is given which is dependent only upon the minimum distance of the code. The tightness of this bound is also demonstrated, and an example is given where knowledge of the average digit error probability is important.

4 citations
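The bound itself is not stated in the abstract and is not reproduced here; the sketch below only makes the bounded quantity concrete, computing the average digit error probability exactly for a toy case, the (3,1) repetition code (d_min = 3) with majority decoding on a BSC.

```python
# The quantity bounded in the paper, made concrete for a toy case (the bound
# itself is not reproduced): the average digit error probability of the (3,1)
# repetition code, d_min = 3, with majority decoding on a BSC(p), computed by
# enumerating all channel error patterns for the all-zero codeword.

from itertools import product

def avg_digit_error_prob(p):
    total = 0.0
    for e in product([0, 1], repeat=3):              # all BSC error patterns
        prob = 1.0
        for bit in e:
            prob *= p if bit else (1 - p)
        decoded = 1 if sum(e) >= 2 else 0            # majority decode of 0-word
        total += prob * decoded                      # info digit wrong iff flipped
    return total

p = 0.05
print(avg_digit_error_prob(p))                       # equals 3*p**2*(1-p) + p**3
```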


Journal ArticleDOI
Udo Augustin
TL;DR: A simpler proof of the statement is presented which uses only the 'maximal code estimate' (see Wolfowitz, 1964) and a converse estimate.
Abstract: Consider any pair of arbitrary-alphabet channels, each satisfying the coding theorem and its strong converse. Then the coding theorem and its strong converse also hold for the product channel, and the capacity of the product channel equals the sum of the capacities of the components. Wyner derives this result in (Wyner, 1966) by means of list decoding methods (which have no obvious strong connection with this kind of problem). Here a simpler proof of the statement above is presented which uses only the 'maximal code estimate' (see Wolfowitz, 1964) and a converse estimate.

2 citations
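A numerical check of the capacity statement, under the assumption (not in the source) that the two component channels are binary symmetric channels: a Blahut-Arimoto iteration on the 4-input product channel reproduces the sum of the two component capacities.

```python
# Numerical illustration of the statement above for two binary symmetric
# channels (an assumed special case): the capacity of the product channel,
# computed by the Blahut-Arimoto iteration, equals the sum of the component
# capacities 1 - h2(p1) and 1 - h2(p2).

import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information(r, W):
    """I(X;Y) in bits for input distribution r and channel matrix W[x, y]."""
    py = r @ W
    return sum(r[x] * W[x, y] * np.log2(W[x, y] / py[y])
               for x in range(W.shape[0]) for y in range(W.shape[1])
               if r[x] > 0 and W[x, y] > 0)

def blahut_arimoto(W, iters=500):
    """Capacity of the channel W[x, y] via the Blahut-Arimoto iteration."""
    n_in = W.shape[0]
    r = np.full(n_in, 1.0 / n_in)
    for _ in range(iters):
        q = r[:, None] * W
        q /= q.sum(axis=0, keepdims=True)           # posterior q(x|y)
        r = np.exp((W * np.log(q)).sum(axis=1))     # r(x) ∝ exp(Σ_y W(y|x) ln q(x|y))
        r /= r.sum()
    return mutual_information(r, W)

def bsc(p):
    return np.array([[1 - p, p], [p, 1 - p]])

p1, p2 = 0.05, 0.11
W_prod = np.kron(bsc(p1), bsc(p2))          # product channel: both used at once
print(blahut_arimoto(W_prod))               # approximately equal to the line below
print((1 - h2(p1)) + (1 - h2(p2)))
```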





01 Mar 1968
TL;DR: This paper presents a probabilistic evaluation of the fluctuations of the data in terms of the number of error words and the number of correction pulses when decoding information and parity.
Abstract: Performance evaluation of several convolutional and block codes with threshold decoding for space telemetry