Showing papers on "BCH code published in 1978"


Journal ArticleDOI
TL;DR: BCH codes are constructed over integer residue rings by using BCH codes over both p-adic and finite fields.
Abstract: BCH codes are constructed over integer residue rings by using BCH codes over both p-adic and finite fields.
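The abstract does not spell out the construction; the classical route to codes over Z_{p^k} is to Hensel-lift the generator polynomial of a BCH code over GF(p), which is where the p-adic machinery enters. A minimal sketch along those lines (the length-7 example and the lift x^3 + 2x^2 + x + 3 of the binary Hamming generator are standard facts assumed here, not taken from the paper):

```python
# Sketch: the binary [7,4] Hamming/BCH generator x^3 + x + 1 Hensel-lifts to
# x^3 + 2x^2 + x + 3 over Z_4, giving a cyclic code over the integer residue
# ring Z_4. Assumptions (not from the abstract): length 7, modulus 4.

def polydivmod(num, den, m):
    """Divide num by den over Z_m (coefficients lowest degree first;
    den's leading coefficient must be a unit mod m)."""
    num = num[:]
    inv_lead = pow(den[-1], -1, m)
    quo = [0] * (len(num) - len(den) + 1)
    for k in range(len(quo) - 1, -1, -1):
        q = (num[k + len(den) - 1] * inv_lead) % m
        quo[k] = q
        for j, dj in enumerate(den):
            num[k + j] = (num[k + j] - q * dj) % m
    return quo, num

g2 = [1, 1, 0, 1]                      # x^3 + x + 1 over GF(2)
g4 = [3, 1, 2, 1]                      # x^3 + 2x^2 + x + 3 over Z_4
x7_minus_1 = [-1 % 4, 0, 0, 0, 0, 0, 0, 1]

# g4 reduces to g2 mod 2 and divides x^7 - 1 over Z_4, so it generates a
# length-7 cyclic code over the residue ring that lifts the binary code.
assert [c % 2 for c in g4] == g2
_, rem = polydivmod(x7_minus_1, g4, 4)
assert all(r == 0 for r in rem)
print("x^3 + 2x^2 + x + 3 generates a length-7 cyclic code over Z_4")
```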

48 citations


Journal ArticleDOI
TL;DR: Van der Horst and Berger have conjectured that the covering radius of the binary 3-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) code of length 2^m - 1, m \geq 4, is 5.
Abstract: Van der Horst and Berger have conjectured that the covering radius of the binary 3-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) code of length 2^m - 1, m \geq 4, is 5. Their conjecture was proved earlier when m \equiv 0, 1, or 3 (mod 4). Their conjecture is proved here when m \equiv 2 (mod 4), which settles the remaining case.
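For the smallest case, m = 4, the conjectured value can be checked by exhaustive search, since the triple-error-correcting [15, 5] BCH code has only 32 codewords. A brute-force sketch (the generator polynomial, octal 2467, is the standard one and is not quoted from the paper):

```python
# Brute-force check of the m = 4 case: the covering radius of the
# triple-error-correcting [15, 5] binary BCH code.
# Assumed generator: g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 (octal 2467);
# codewords are m(x) * g(x) over GF(2) for all 32 messages m(x).

N, K = 15, 5
G = 0b10100110111        # coefficients of g(x), x^10 down to x^0

def gf2_mul(a, b):
    """Carry-less product of two bit-packed GF(2) polynomials."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

codewords = [gf2_mul(m, G) for m in range(1 << K)]

# Covering radius = largest distance from any length-15 word to the code.
covering_radius = max(
    min(bin(w ^ c).count("1") for c in codewords)
    for w in range(1 << N)
)
print(covering_radius)   # expected: 5
```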

29 citations


01 Jan 1978
TL;DR: Using a general decoding technique of Solomon, the performance of certain block codes on a Gaussian channel is evaluated; all four codes considered perform quite favorably with respect to the constraint-length 7, rate 1/2 convolutional code presently used on NASA's Mariner-class spacecraft.
Abstract: Using a general decoding technique of Solomon, we evaluate the performance of certain block codes on a Gaussian channel. Quadratic residue codes of lengths 48 and 80, as well as BCH codes of length 128 and rates 1/2 and 1/3, are considered. All four of these codes perform quite favorably with respect to the constraint-length 7, rate 1/2 convolutional code presently used on NASA's Mariner-class spacecraft.
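The paper's codes and Solomon's decoding technique are not reproduced here, but the general style of such an evaluation is easy to sketch: map bits to antipodal signals, add Gaussian noise, and decode by exhaustive correlation (maximum-likelihood), which is practical for codes with few codewords. The sketch below uses the [7, 4] Hamming code purely as a stand-in and an arbitrary Eb/N0 of 4 dB:

```python
# Monte Carlo word-error-rate estimate for soft-decision (maximum-likelihood)
# decoding of a small block code on a Gaussian channel.
# Assumptions: BPSK mapping 0 -> +1, 1 -> -1; the [7,4] Hamming code stands in
# for the longer codes evaluated in the paper.

import math, random

G_ROWS = [0b1000110, 0b0100101, 0b0010011, 0b0001111]   # [7,4] Hamming generator

def encode(msg):
    word = 0
    for i, row in enumerate(G_ROWS):
        if (msg >> i) & 1:
            word ^= row
    return word

# Each codeword as a vector of +/-1 channel symbols.
CODEBOOK = [[1.0 - 2.0 * ((encode(m) >> j) & 1) for j in range(7)]
            for m in range(16)]

def simulate(ebn0_db, trials=20000, rate=4 / 7):
    sigma = math.sqrt(1.0 / (2 * rate * 10 ** (ebn0_db / 10)))
    errors = 0
    for _ in range(trials):
        tx = random.choice(CODEBOOK)
        rx = [s + random.gauss(0, sigma) for s in tx]
        # ML decoding: pick the codeword with the largest correlation.
        best = max(CODEBOOK, key=lambda c: sum(ci * ri for ci, ri in zip(c, rx)))
        errors += best != tx
    return errors / trials

print("estimated word error rate at 4 dB:", simulate(4.0))
```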

21 citations


Journal ArticleDOI
01 May 1978
TL;DR: It is shown that Winograd's algorithm can be used to compute an integer transform over GF(q), where q is a Mersenne prime, which makes it easier to encode BCH and RS codes.
Abstract: It is shown that Winograd's algorithm can be used to compute an integer transform over GF(q), where q is a Mersenne prime. This new algorithm requires fewer multiplications than the conventional fast Fourier transform (FFT). The transform over GF(q) can be implemented readily on a digital computer, which makes it easier to encode BCH and Reed-Solomon (RS) codes.
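The appeal of a Mersenne prime q = 2^p - 1 is that 2 is a p-th root of unity modulo q, so a length-p transform exists whose twiddle factors are powers of 2, i.e. circular shifts; Winograd's algorithm then reduces the multiplications that remain. The sketch below shows only the underlying length-31 transform and its cyclic-convolution property, not Winograd's fast reorganisation:

```python
# Number-theoretic transform over GF(q) with q = 2^31 - 1 (a Mersenne prime).
# Since 2^31 = 1 (mod q), alpha = 2 is a 31st root of unity, so a length-31
# transform exists; multiplication by powers of alpha amounts to a circular
# shift of a 31-bit word (not exploited in this plain O(N^2) sketch).

import random

P = 31
Q = (1 << P) - 1          # 2^31 - 1, prime
ALPHA = 2                 # 31st root of unity mod Q

def ntt(x, root):
    return [sum(xj * pow(root, j * k, Q) for j, xj in enumerate(x)) % Q
            for k in range(P)]

def intt(X):
    inv_n = pow(P, -1, Q)
    inv_root = pow(ALPHA, -1, Q)
    return [(v * inv_n) % Q for v in ntt(X, inv_root)]

# The transform has the cyclic-convolution property, which is what makes it
# useful for encoding BCH and RS codes via polynomial products.
a = [random.randrange(Q) for _ in range(P)]
b = [random.randrange(Q) for _ in range(P)]
fast = intt([(u * v) % Q for u, v in zip(ntt(a, ALPHA), ntt(b, ALPHA))])
slow = [sum(a[j] * b[(k - j) % P] for j in range(P)) % Q for k in range(P)]
assert fast == slow
print("length-31 cyclic convolution via the transform matches the direct sum")
```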

9 citations


Journal ArticleDOI
01 Dec 1978
TL;DR: A simple, transparent proof is given of Berlekamp's algorithm, whose implementation for BCH and RS codes can be based on continued-fraction approximations.
Abstract: It was shown recently that BCH and RS codes can be implemented by Berlekamp's algorithm using continued-fraction approximations. A simple, transparent proof of Berlekamp's algorithm based on this development is given in this paper.
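For context, Berlekamp's iterative algorithm (in Massey's LFSR-synthesis formulation) finds the shortest linear recurrence satisfied by the syndrome sequence; the continued-fraction development referred to above recovers, in essence, the same polynomial as a convergent of a polynomial fraction. The sketch below is the iterative form specialised to GF(2); in actual BCH or RS decoding it is run over the symbol field on the syndromes:

```python
# Berlekamp's iterative algorithm in its Berlekamp-Massey (LFSR-synthesis)
# form, specialised to GF(2): return the length L and connection polynomial
# C(x) = 1 + c1*x + ... + cL*x^L of the shortest LFSR generating s.

def berlekamp_massey_gf2(s):
    n = len(s)
    c, b = [0] * (n + 1), [0] * (n + 1)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between s[i] and the current register's prediction.
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                        # prediction failed: adjust C(x)
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:           # the register must grow
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]

# Example: a sequence produced by the LFSR with C(x) = 1 + x + x^3.
seq = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]
print(berlekamp_massey_gf2(seq))     # (3, [1, 1, 0, 1])
```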

4 citations


15 Oct 1978
TL;DR: The performance of certain block codes on a Gaussian channel is evaluated; the BCH codes are markedly superior to the convolutional codes currently used for deep space missions, and the decoding algorithm employed provides the basis for a simple, almost optimum procedure for decoding these codes.
Abstract: The performance of certain block codes on a Gaussian channel is evaluated. The BCH codes are markedly superior to the convolutional codes currently used for deep space missions. The algorithm used to derive these results also provides the basis for a simple, almost optimum procedure for decoding these codes.

3 citations


Journal ArticleDOI
M.W. Williard1
TL;DR: An introduction to redundancy encoding as used in digital data communications is presented, followed by a discussion of the binary symmetric channel, burst-noise channels, and the use of interleaving to randomize burst errors.
Abstract: An introduction to redundancy encoding as used in digital data communications is presented. The need for redundancy is first addressed, followed by a discussion of the binary symmetric channel, burst-noise channels, and the use of interleaving to randomize burst errors. The concept of redundancy is presented next, showing how it can be used to supply the highest possible degree of error detection or how it can be applied to provide for the detection and correction of a smaller number of errors. The use of some codes to correct some errors and also to detect, but not correct, additional errors is discussed. The properties of block codes are developed, beginning with repetition codes and then covering single-parity-check codes, Hamming (single-error-correcting) codes, and Bose-Chaudhuri-Hocquenghem (BCH) codes. The basic properties and structures of these codes are emphasized, with examples of implementation procedures for both encoding and decoding.
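One of the ideas covered, interleaving to randomize burst errors, is easy to demonstrate: codewords are written into the rows of an array and transmitted column by column, so a burst of length b touches each codeword in at most ceil(b / depth) positions. A small illustrative sketch (the depth, codeword length, and burst length below are arbitrary choices, not from the paper):

```python
# Block interleaving: write codewords row-wise, transmit column-wise, so a
# channel burst is spread across many codewords.

DEPTH, N = 4, 7   # 4 interleaved codewords of length 7

def interleave(words):
    return [words[r][c] for c in range(N) for r in range(DEPTH)]

def deinterleave(stream):
    words = [[0] * N for _ in range(DEPTH)]
    for idx, bit in enumerate(stream):
        words[idx % DEPTH][idx // DEPTH] = bit
    return words

words = [[0] * N for _ in range(DEPTH)]   # all-zero codewords, for clarity
stream = interleave(words)
for i in range(8, 12):                    # a burst of 4 consecutive errors
    stream[i] ^= 1
received = deinterleave(stream)
print([sum(w) for w in received])         # each codeword sees only 1 error
```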

3 citations


Journal ArticleDOI
TL;DR: Prefacing an Elias code with iterations of one or more primitive Bose-Chaudhuri-Hocquenghem (BCH) codes is shown to provide error-free decoding for any channel with p ≤ 0.42 and to yield code rates closer to capacity than those of Elias's original code.
Abstract: Improvements on the rates of iterated codes for error-free decoding on the binary symmetric channel are presented. Approximations to the performance of Elias's original error-free codes are replaced with virtually exact results that demonstrate higher code rates and the ability to decode from noisier channels than the original results indicated. Prefacing an Elias code with iterations of one or more primitive Bose-Chaudhuri-Hocquenghem (BCH) codes is shown to provide error-free decoding for any channel with p \leq 0.42 and to yield code rates closer to capacity than those of Elias's original code. A heuristic algorithm is given for selecting an efficient set of BCH codes to iterate.
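The crude bookkeeping behind such constructions can be sketched directly: a t-error-correcting [n, k] code on a BSC with crossover probability p fails with probability bounded by the binomial tail beyond t, and an Elias-style iteration treats each stage's output, approximately, as a new and cleaner BSC while the rates multiply. The codes below are illustrative choices only, and the "output is a new BSC" step is roughly the kind of approximation the abstract says is replaced with exact results:

```python
# Rough Elias-style iteration bookkeeping on a binary symmetric channel.
# Each stage uses a t-error-correcting [n, k] code; its output is treated,
# approximately, as a new BSC whose crossover rate is bounded by the word
# failure probability. The stage codes are illustrative choices only.

from math import comb

def word_failure(n, t, p):
    """P(more than t errors in n uses of a BSC with crossover p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

def iterate(stages, p):
    rate = 1.0
    for n, k, t in stages:
        rate *= k / n
        p = word_failure(n, t, p)   # crude: bound the residual error rate
    return rate, p

# e.g. two rounds of the 2-error-correcting [15, 7] BCH code, then [31, 16].
stages = [(15, 7, 2), (15, 7, 2), (31, 16, 3)]
print(iterate(stages, 0.01))        # (overall rate, residual error bound)
```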

3 citations


15 Oct 1978
TL;DR: An algorithm was developed which optimally decodes a block code for minimum probability of symbol error in an iterative manner and approaches the optimum estimate after only a fraction of the parity check equations have been used.
Abstract: An algorithm was developed which optimally decodes a block code for minimum probability of symbol error in an iterative manner. The initial estimate is made by looking at each bit independently and is improved by considering the bits related to it through the parity check equations. The dependent bits are considered in order of increasing probability of error. Since the computation proceeds in a systematic way, with the bits having the greatest effect being used first, the algorithm approaches the optimum estimate after only a fraction of the parity check equations have been used.
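The abstract describes the algorithm only in outline, and it is not reconstructed here. A much cruder decoder in the same spirit, letting the parity-check equations refine an initial bitwise estimate, is hard-decision bit flipping: repeatedly flip the bit that sits in the most unsatisfied checks. A sketch for the [7, 4] Hamming parity-check matrix:

```python
# A crude relative of the idea: iterative hard-decision bit flipping driven by
# the parity-check equations of the [7,4] Hamming code. This is not the
# paper's optimal symbol-probability algorithm, only a minimal illustration of
# letting the checks refine an initial bitwise estimate.

H = [  # each row is one parity-check equation over GF(2)
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    word = word[:]
    for _ in range(max_iters):
        unsat = [row for row in H if sum(r * w for r, w in zip(row, word)) % 2]
        if not unsat:
            return word                      # all checks satisfied
        # Flip the bit that appears in the largest number of failed checks.
        votes = [sum(row[j] for row in unsat) for j in range(len(word))]
        word[votes.index(max(votes))] ^= 1
    return word

received = [0, 0, 0, 0, 0, 0, 0]
received[3] ^= 1                             # one channel error
print(bit_flip_decode(received))             # recovers the all-zero codeword
```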

1 citation