
Showing papers on "BCH code published in 1976"


Journal ArticleDOI
TL;DR: The final part of a three-part series on threshold decoding of new convolutional codes concludes the list with new rate-1/2 codes and, based on the decoder's uncorrectable-error statistics, suggests concatenating such codes.
Abstract: This paper is the last part of a three-part series on threshold decoding of new convolutional codes. The first part of the paper concludes the list with new rate-1/2 codes; for these codes a characteristic equation, relating the number of correctable errors to the code constraint length, is derived by least-squares approximation. The second part of the paper concerns the usefulness of the codes derived here and in Parts I and II. Based upon the uncorrectable-error statistics of the decoder, the concatenation of such codes is suggested. Unlike most powerful decoders, such as Reed-Solomon, BCH, and Viterbi decoders, this class of codes does not produce additional, bursty errors at the decoder output when the capability of the decoder is exceeded.
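
A least-squares fit of this kind is straightforward to reproduce in outline. The sketch below fits a line relating correctable errors t to constraint length K over purely hypothetical data points (the paper's actual measurements and coefficients are not reproduced here):

    import numpy as np

    # Hypothetical (constraint length, correctable errors) pairs --
    # illustrative placeholders only, not the paper's data.
    K = np.array([12, 24, 36, 48, 60])
    t = np.array([2, 4, 7, 9, 11])

    # Least-squares line t ~ a*K + b, the form of a "characteristic equation".
    a, b = np.polyfit(K, t, 1)
    print(f"t = {a:.3f} * K + {b:.3f}")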

54 citations


Journal ArticleDOI
TL;DR: An extensive study of binary triple-error-correcting codes of primitive length n = 2^{m} - 1 is reported that results in a complete decoding algorithm whenever the maximum coset weight W_{max} is five.
Abstract: An extensive study of binary triple-error-correcting codes of primitive length n = 2^{m} - 1 is reported that results in a complete decoding algorithm whenever the maximum coset weight W_{max} is five. In this regard it is shown that W_{max} = 5 when four divides m , and strong support is provided for the validity of the conjecture that W_{max} = 5 for all m . The coset weight distribution is determined exactly in some cases and bounded in others.
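
For the smallest case the result covers, m = 4 (so four divides m and n = 15), the claim W_{max} = 5 can be checked exhaustively. A minimal sketch, assuming the standard generator polynomial x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 (octal 2467) of the (15,5) triple-error-correcting BCH code:

    def clmul(a, b):
        # carry-less product of two GF(2) polynomials held as integers
        r = 0
        while a:
            if a & 1:
                r ^= b
            a >>= 1
            b <<= 1
        return r

    G = 0b10100110111        # x^10+x^8+x^5+x^4+x^2+x+1, (15,5) BCH generator
    N, K = 15, 5
    CODE = [clmul(m, G) for m in range(1 << K)]

    # The coset leader weight of v is its Hamming distance to the code;
    # the maximum over all v is W_max, the covering radius.
    w_max = max(min(bin(v ^ c).count("1") for c in CODE)
                for v in range(1 << N))
    print(w_max)             # the paper predicts 5, since 4 divides m = 4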

51 citations



Journal ArticleDOI
TL;DR: Soft decision by successive erasures for binary transmission has properties not present in the nonbinary case; in the binary case the procedure is asymptotically optimum as the SNR increases.
Abstract: This paper deals with soft decision as a means to bridge the gap in performance between a receiver using hard decision symbol estimation followed by an algebraic decoder and a maximum-likelihood receiver. A measure of the reliability of the code symbol estimates is introduced to facilitate the decoding process. The decoding operation studied erases the least reliable received symbols and then applies an algorithm capable of correcting errors and erasures. This procedure, termed successive-erasure decoding (SED), was introduced by G. D. Forney in connection with generalized minimum-distance decoding (GMD). It is studied for binary and nonbinary transmission using polyphase signals on the additive white Gaussian noise (AWGN) channel. The exponential behavior of the error probability at high signal-to-noise ratio (SNR) is calculated and is supplemented by computer simulations. The results indicate that soft decision by successive erasures for binary transmission has properties not present in the nonbinary case. In the binary case the procedure is asymptotically optimum as the SNR increases. On the nonbinary channel, however, the procedure is only capable of bridging part of the gap in performance between maximum-likelihood decoding (MLD) and hard decision decoding (HDD).
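
A minimal sketch of the SED/GMD trial structure for the binary case, using a toy Hamming(7,4) code and brute-force errors-and-erasures decoding in place of an algebraic decoder (the paper's polyphase signaling and code choices are not modeled):

    import numpy as np
    from itertools import product

    # Hamming(7,4) codebook from a standard generator matrix (an
    # illustrative choice, not the codes studied in the paper).
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    CODEBOOK = np.array([(np.array(m) @ G) % 2
                         for m in product([0,1], repeat=4)])

    def sed_decode(y, d=3):
        # Antipodal signaling: code bit b is sent as (-1)**b, y is the
        # noisy observation; |y[i]| serves as the reliability of symbol i.
        hard = (y < 0).astype(int)
        order = np.argsort(np.abs(y))       # least reliable positions first
        best, best_corr = None, -np.inf
        for n_erase in range(0, d, 2):      # erase 0, 2, ..., d-1 symbols
            keep = np.ones(len(y), bool)
            keep[order[:n_erase]] = False
            # errors-and-erasures decoding on the unerased coordinates
            dists = (CODEBOOK[:, keep] != hard[keep]).sum(axis=1)
            cand = CODEBOOK[dists.argmin()]
            corr = np.dot(y, 1 - 2*cand)    # soft metric over all symbols
            if corr > best_corr:
                best, best_corr = cand, corr
        return best

    y = np.array([0.9, 1.1, -0.8, 1.0, 0.1, -1.2, 0.7])
    print(sed_decode(y))

Each trial erases two more of the least reliable symbols and the candidate with the best correlation metric across all trials is kept, which is the essential GMD mechanism.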

34 citations


Journal ArticleDOI
TL;DR: The coset leader of greatest weight in the 3-error-correcting BCH code of length 2^{m}-1 has weight 5, for odd m \geq 5 .
Abstract: The coset leader of greatest weight in the 3-error-correcting BCH code of length 2^{m}-1 has weight 5, for odd m \geq 5 .
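
For m = 5 the statement can be verified by enumerating error-pattern syndromes. The sketch below assumes the standard minimal polynomials of alpha, alpha^3, and alpha^5 for GF(2^5) (octal 45, 75, 67) and builds the degree-15 generator of the (31,16) triple-error-correcting code from them:

    from itertools import combinations

    def clmul(a, b):
        # carry-less product of GF(2) polynomials held as integers
        r = 0
        while a:
            if a & 1:
                r ^= b
            a >>= 1
            b <<= 1
        return r

    def polymod(v, g):
        # residue of v(x) modulo g(x) over GF(2)
        dg = g.bit_length() - 1
        while v.bit_length() - 1 >= dg:
            v ^= g << (v.bit_length() - 1 - dg)
        return v

    # minimal polynomials of alpha, alpha^3, alpha^5 for GF(2^5)
    m1, m3, m5 = 0b100101, 0b111101, 0b110111
    g = clmul(clmul(m1, m3), m5)     # generator of the (31,16) BCH code
    N, R = 31, 15

    covered = {0}
    for w in range(1, 6):
        for pos in combinations(range(N), w):
            e = sum(1 << p for p in pos)
            covered.add(polymod(e, g))
        print(f"weight <= {w}: {len(covered)} of {1 << R} cosets reached")
    # Per the theorem, all 2^15 cosets are reached only once weight-5
    # patterns are included, so the heaviest coset leader has weight 5.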

21 citations


Journal ArticleDOI
Po Chen
TL;DR: This concise paper demonstrates the decoding of BCH codes using a multisequence linear feedback shift register (MLFSR) synthesis algorithm; a class of good codes is found with error-correcting capability at least as good as the BCH bound.
Abstract: This concise paper demonstrates the decoding of BCH codes by using a multisequence linear feedback shift register (MLFSR) synthesis algorithm. The algorithm is investigated and developed. With this algorithm, a class of good codes has been found with error-correcting capability at least as good as the BCH bound. The application of this algorithm to BCH decoding is discussed.
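
The single-sequence special case of LFSR synthesis is the Berlekamp-Massey algorithm, which the paper's MLFSR algorithm generalizes to several sequences synthesized jointly. A minimal GF(2) sketch (BCH decoding proper would run the same recursion over GF(2^m) on the syndrome sequence):

    def lfsr_synthesize(s):
        # Berlekamp-Massey over GF(2): shortest LFSR, given by its
        # connection polynomial C(x) and length L, generating sequence s.
        C, B = [1], [1]          # current and previous connection polynomials
        L, m = 0, 1
        for n in range(len(s)):
            # discrepancy between s[n] and the current LFSR's prediction
            d = s[n]
            for i in range(1, L + 1):
                d ^= C[i] & s[n - i]
            if d:
                T = C[:]
                if len(C) < len(B) + m:
                    C += [0] * (len(B) + m - len(C))
                for i, b in enumerate(B):
                    C[i + m] ^= b    # C(x) <- C(x) + x^m * B(x)
                if 2 * L <= n:
                    L, B, m = n + 1 - L, T, 1
                else:
                    m += 1
            else:
                m += 1
        return C, L

    # the period-3 sequence 0,1,1,... needs an LFSR of length 2: C = 1+x+x^2
    print(lfsr_synthesize([0, 1, 1, 0, 1, 1, 0, 1]))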

13 citations


Patent
01 Apr 1976
TL;DR: In this article, an apparatus for data transmission using the Bose-Chaudhuri-Hocquenghem (BCH) code is described, with two cascaded correction networks at the receiver.
Abstract: The apparatus is used for transmission of data using the Bose-Chaudhuri-Hocquenghem (BCH) code, a cyclic code whose redundancy enables the correction of errors. At the receiver the code blocks are fed into a first correction network, which sorts them to identify collections of errors that may be corrected. The data are then fed into a second correction network in which statistically distributed errors are tested. The corrections are applied in order of priority. If the first correction network finds no collections of errors capable of being corrected, the second correction network still operates; the second network corrects the results of the first. Registers and buffer stores are used for storing the BCH-coded data.
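
A schematic of the two-network priority idea, not the patent's circuit: the syndrome is computed once, a first table-driven network applies the corrections it knows, and the second network operates when the first finds nothing. The split used below (single-bit errors for the first network, double-bit errors for the second, over the (15,7) BCH code) is an assumption for illustration only:

    def polymod(v, g):
        # residue of v(x) mod g(x) over GF(2); zero residue means a codeword
        dg = g.bit_length() - 1
        while v.bit_length() - 1 >= dg:
            v ^= g << (v.bit_length() - 1 - dg)
        return v

    G = 0b111010001     # x^8+x^7+x^6+x^4+1, generator of the (15,7) BCH code
    N = 15

    # First correction network: single-bit error patterns, keyed by syndrome.
    STAGE1 = {polymod(1 << i, G): 1 << i for i in range(N)}
    # Second correction network: double-bit error patterns.
    STAGE2 = {polymod((1 << i) ^ (1 << j), G): (1 << i) ^ (1 << j)
              for i in range(N) for j in range(i)}

    def decode(r):
        s = polymod(r, G)
        if s == 0:
            return r                 # no detectable error
        if s in STAGE1:              # first network corrects if it can ...
            return r ^ STAGE1[s]
        if s in STAGE2:              # ... otherwise the second one operates
            return r ^ STAGE2[s]
        raise ValueError("uncorrectable block")

    print(decode((1 << 3) ^ (1 << 9)))   # two errors on the zero codeword -> 0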

3 citations


Dhriti Kapur
01 Jan 1976
TL;DR: Most of the codes surveyed in this thesis have been found suitable for the computer environment, with a trade-off between redundancy and decoding time.
Abstract: The thesis presents an investigative survey of error-correcting codes suitable for application in the computer environment. Error-correcting codes have been successfully utilized to improve the reliability of information transmission in communication systems. In recent years the phenomenal increase in information handled by digital computers has heightened the need for computer system reliability. With respect to error correction, the overall computer system has been broadly classified into three sections, namely the computer memory system, the computer peripheral system, and the central processing unit. Each section is discussed under a separate heading.

Error-correcting codes used in computer memory systems depend upon the configuration of the memory. For memories packaged on a single-bit-per-card basis, single-error-correcting/double-error-detecting Hamming-type codes, double-error-correcting/triple-error-detecting BCH codes, and one-step majority-decodable codes play a useful role in increasing the reliability of memory. Byte-error-correcting codes form the basis of correcting errors in memories configured as multiple bits per card. A general class of maximal codes was developed by Hong and Patel whose structure is not restricted to any homogeneous bit-per-card arrangement and is capable of correcting single random byte errors.

Cyclic codes form the basis of the error-correcting schemes in the magnetic tape and disc drives that are part of the computer peripheral system. The Cyclic Redundancy Code (CRC) and the Orthogonal Rectangular Code (ORC) were found applicable to magnetic tape units. In magnetic disc systems, Fire codes with high-speed decoding can be used for a single channel; recently Malhotra and Fisher have developed a practical error-correcting scheme for multichannel disc systems. Reed-Solomon codes were best suited for photodigital mass storage systems, where the decoding scheme employs a hybrid hardware-software technique to simplify the complexity of decoding the multiple-character-correcting code.

In the processing unit of the computer the error-correcting codes used are arithmetic codes. The best known among these that are suitable for computer arithmetic and easily implementable are the residue codes. The biresidue code proposed by Rao involves circuit redundancy of about 30-35 percent of the main processor, which is considerably more economical than duplication-based redundant schemes such as Triple Modular Redundancy.

A desirable feature of the error-correcting codes used in computer systems is a fast and simple encoding and decoding procedure. To ensure efficient operation, the speed of implementation of the code must be comparable to the speed of operation of the computer system. Most of the codes surveyed in this thesis have been found suitable for the computer environment, with a trade-off between redundancy and decoding time.
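
Of the arithmetic codes surveyed above, the residue-code idea is simple to sketch: a small check residue travels with each operand and is recomputed after every operation. The snippet shows single-residue (mod 3) error detection only; Rao's biresidue scheme carries two residues so that errors can also be located and corrected:

    M = 3   # check modulus carried alongside each operand

    def checked_add(a, ra, b, rb):
        # residue-checked addition: a fault in the adder (or a corrupted
        # operand) is detected when the sum disagrees with its residue
        s = a + b
        rs = (ra + rb) % M
        if s % M != rs:
            raise ArithmeticError("residue check failed")
        return s, rs

    x, y = 1234, 567
    print(checked_add(x, x % M, y, y % M))       # passes: (1801, 1)
    print(checked_add(x ^ 4, x % M, y, y % M))   # injected bit flip -> raises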

1 citation


Journal ArticleDOI
TL;DR: An expression for the average distortion per bit for double-error-correcting narrow-sense primitive BCH codes is derived.
Abstract: It is known that error-correcting codes can be used to encode sources in the sense of data compression. We derive an expression for the average distortion per bit for double-error-correcting narrow-sense primitive BCH codes.
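
For one small member of the family the average is easy to pin down by brute force (the paper's contribution is the closed-form expression, which is not reproduced here). A sketch using the (15,7) double-error-correcting BCH code with generator x^8 + x^7 + x^6 + x^4 + 1:

    def clmul(a, b):
        # carry-less product of GF(2) polynomials held as integers
        r = 0
        while a:
            if a & 1:
                r ^= b
            a >>= 1
            b <<= 1
        return r

    G = 0b111010001     # x^8+x^7+x^6+x^4+1, (15,7) BCH generator
    N, K = 15, 7
    CODE = [clmul(m, G) for m in range(1 << K)]

    # Source-coding view: quantize every 15-bit word to its nearest codeword
    # and accumulate the Hamming distortion (exhaustive, so it takes a moment).
    total = sum(min(bin(v ^ c).count("1") for c in CODE)
                for v in range(1 << N))
    print("average distortion per bit:", total / ((1 << N) * N))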