
Showing papers on "Error detection and correction published in 1988"


Journal ArticleDOI
TL;DR: In this article, the concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P+l), where l can be varied between 1 and (N-1)P. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame.
Abstract: The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P+l), where l can be varied between 1 and (N-1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimise throughput.
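
The puncturing construction can be sketched in a few lines. The generator polynomials and puncturing tables below are illustrative choices of mine, not taken from the paper:

```python
# Sketch of rate-compatible puncturing of a rate-1/2 (N=2) mother code.
# Generators and tables are my own illustrative choices.

def conv_encode_r12(bits):
    """Rate-1/2 convolutional encoder, generators (7,5) octal, memory 2."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)   # g0 = 111
        out.append(b ^ s2)        # g1 = 101
        s1, s2 = b, s1
    return out

def puncture(coded, table):
    """Keep the coded bits where the puncturing table has a 1.
    `table` is a list of 0/1 flags of length N*P (here N=2)."""
    P2 = len(table)
    return [c for i, c in enumerate(coded) if table[i % P2]]

data = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode_r12(data)            # 2 output bits per input bit

# Period P = 4: keeping P + l of the 2P bits per period gives rate P/(P+l).
table_r45 = [1, 1, 1, 0, 1, 0, 1, 0]     # l = 1 -> rate 4/5
table_r23 = [1, 1, 1, 0, 1, 1, 1, 0]     # l = 2 -> rate 2/3 (= 4/6)

# Rate compatibility: every bit kept by the higher-rate code is also
# kept by the lower-rate code, so retransmissions only add new bits.
assert all(h <= l for h, l in zip(table_r45, table_r23))
```

This is what makes incremental redundancy work: after sending the rate-4/5 bits, only the extra bits selected by the rate-2/3 table need to be transmitted on a repeat request.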

1,967 citations


Journal ArticleDOI
TL;DR: It is shown that a large number of errors can be detected by monitoring the control flow and memory-access behavior and two techniques for control-flow checking are discussed and compared with current error-detection techniques.
Abstract: Concurrent system-level error detection techniques using a watchdog processor are surveyed. A watchdog processor is a small and simple coprocessor that detects errors by monitoring the behavior of a system. Like replication, it does not depend on any fault model for error detection. However, it requires less hardware than replication. It is shown that a large number of errors can be detected by monitoring the control flow and memory-access behavior. Two techniques for control-flow checking are discussed and compared with current error-detection techniques. A scheme for memory-access checking based on capability-based addressing is described. The design of a watchdog for performing reasonable checks on the output of a main processor by executing assertions is discussed.
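
As a toy illustration of control-flow checking (my own sketch, not one of the surveyed designs), a watchdog can hold the program's legal basic-block transitions and flag any executed transition that is not among them:

```python
# Toy control-flow checker: the watchdog knows the legal successors of
# each basic block and flags any transition outside that graph.
# Block names and the graph are illustrative assumptions.

LEGAL_NEXT = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}

def watchdog(trace):
    """Return True if the executed block trace follows only legal edges."""
    for cur, nxt in zip(trace, trace[1:]):
        if nxt not in LEGAL_NEXT[cur]:
            return False          # illegal control-flow transfer detected
    return True

assert watchdog(["A", "B", "D"])      # a legal path
assert not watchdog(["A", "D"])       # a skipped block is caught
```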

599 citations


Journal ArticleDOI
TL;DR: The probabilistic method is used to find minimum weights of all extended quadratic residue codes of length 440 or less, and can be generalized to codes over GF(q) with q>2.
Abstract: An algorithm is developed that can be used to find, with a very low probability of error (10^-100 or less in many cases), the minimum weights of codes far too large to be treated by any known exact algorithm. The probabilistic method is used to find minimum weights of all extended quadratic residue codes of length 440 or less. The probabilistic algorithm is presented for binary codes, but it can be generalized to codes over GF(q) with q>2.

268 citations


Journal ArticleDOI
TL;DR: Two concurrent error detection (CED) schemes are proposed for N-point fast Fourier transform (FFT) networks that consist of log₂N stages with N/2 two-point butterfly modules in each stage, to achieve both error detection and location.
Abstract: Two concurrent error detection (CED) schemes are proposed for N-point fast Fourier transform (FFT) networks that consist of log₂N stages with N/2 two-point butterfly modules in each stage. The method assumes that failures are confined to a single complex multiplier or adder or to one input or output set of lines. Such a fault model covers a broad class of faults. It is shown that only a small hardware overhead ratio, O(2/log₂N), is required for the networks to obtain fault-secure results in the first scheme. A novel data-retry technique is used to locate the faulty modules. Large roundoff errors can be detected and treated in the same manner as functional errors. The retry technique can also distinguish between roundoff errors and functional errors that are caused by physical failures. In the second scheme, a time-redundancy method is used to achieve both error detection and location. It is shown that only negligible hardware overhead is required. However, the throughput is reduced to half that of the original system, because of the nature of time-redundancy methods.
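
A flavor of concurrent checking on an FFT can be had with an invariant far simpler than the paper's schemes: the DC output bin must equal the sum of the inputs, so a cheap comparison after each transform catches many single-module faults. This is an illustrative sketch of mine, not the paper's method:

```python
# Concurrent sanity check on an FFT: X[0] must equal sum(x).
# The check itself is a toy invariant, not the paper's CED scheme.
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def fft_checked(x, tol=1e-9):
    """Run the FFT, then verify the invariant X[0] == sum(x)."""
    X = fft(x)
    if abs(X[0] - sum(x)) > tol:
        raise RuntimeError("concurrent FFT check failed")
    return X

X = fft_checked([1, 2, 3, 4, 5, 6, 7, 8])
```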

257 citations


Patent
01 Feb 1988
TL;DR: In this paper, a method and apparatus for operating a multi-unit memory system so that one of such units may readily be replaced in service is described, and the system comprises an error correction code (ECC) generation circuit, a plurality of read/write memory units and at least one spare read/write memory unit.
Abstract: A method and apparatus are disclosed for operating a multi-unit memory system so that one of such units may readily be replaced in service. The system comprises an error correction code (ECC) generation circuit, a plurality of read/write memory units and at least one spare read/write memory unit. The ECC circuit generates an error correction code for each block of data to be stored in the system and supplies this code along with the block of data to the memory units for storage. The system further comprises means for generating from a sequence of blocks of data and associated error correction codes retrieved from these memory units a sequence of bits which correct an error in the information retrieved from one memory unit and means for writing this sequence of correction bits to the spare read/write memory unit. Advantageously, the system also comprises means for rewriting the sequence of correction bits to a memory unit after a faulty memory unit has been repaired or replaced. Preferably, the sequence of correction bits is generated by the same ECC circuit which generates the error correction codes; and the sequence of correction bits is connected to the spare memory unit, a repaired unit or a replacement unit through an array of multiplexers.

256 citations


Patent
15 Sep 1988
TL;DR: In this article, the combination of trellis coding and MPSK signaling with asymmetry (nonuniform spacing) to the signal set is disclosed with regard to its suitability for a fading mobile satellite communication channel.
Abstract: The combination of trellis coding and MPSK signaling with asymmetry (nonuniform spacing) to the signal set is disclosed with regard to its suitability for a fading mobile satellite communication channel. For MPSK signaling, introducing nonuniformity in the phase spacing between signal points provides an improvement in performance over that achievable with trellis codes symmetric MPSK signaling, all this without increasing the average or peak power, or changing the bandwidth constraints imposed on the system. Block interleaving may be used to reduce error and pilot tone(s) may be used for improving the error correction performance of the trellis decoder in the presence of channel fading.

154 citations


Journal ArticleDOI
TL;DR: A linear algebraic interpretation is developed for previously proposed algorithm-based fault tolerance schemes and it is shown why the correction scheme does not work for general weight vectors, and a novel fast-correction algorithm for a weighted distance-5 code is derived.
Abstract: A linear algebraic interpretation is developed for previously proposed algorithm-based fault tolerance schemes. The concepts of distance, code space, and the definitions of detection and correction in the vector space R^n are explained. The number of errors that can be detected or corrected for a distance-(d+1) code is derived. It is shown why the correction scheme does not work for general weight vectors, and a novel fast-correction algorithm for a weighted distance-5 code is derived.
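
The schemes being interpreted descend from checksum-encoded matrix operations. A minimal sketch with unit weight vectors (the simplest case, which is why general weight vectors need the paper's analysis):

```python
# Algorithm-based fault tolerance for matrix multiplication, in the
# unit-weight (plain checksum) case: append a column-checksum row to A
# and a row-checksum column to B; the product then carries checksums
# that expose a faulty element. Matrices below are illustrative.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add_checksums(A, B):
    Ac = A + [[sum(col) for col in zip(*A)]]        # extra checksum row
    Br = [row + [sum(row)] for row in B]            # extra checksum column
    return Ac, Br

def check(C):
    """Return True if the full checksum matrix C is self-consistent."""
    ok_rows = all(abs(sum(row[:-1]) - row[-1]) < 1e-9 for row in C[:-1])
    cols = list(zip(*C))
    ok_cols = all(abs(sum(col[:-1]) - col[-1]) < 1e-9 for col in cols)
    return ok_rows and ok_cols

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
Ac, Br = add_checksums(A, B)
C = matmul(Ac, Br)          # 3 x 3 full checksum matrix
assert check(C)
C[0][0] += 1                # inject a single erroneous element
assert not check(C)         # the row and column checksums both disagree
```

The intersecting row and column that fail the check locate the bad element, which is the distance argument the paper formalizes.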

150 citations


Journal ArticleDOI
TL;DR: Use of a table look-up in computing the CRC bits will efficiently implement these codes in software.
Abstract: Cyclic Redundancy Check (CRC) codes provide a simple yet powerful method of error detection during digital data transmission. Use of a table look-up in computing the CRC bits will efficiently implement these codes in software.
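
The table look-up idea can be sketched with the CRC-16/CCITT polynomial (my choice of parameters; the article's treatment is general): precompute the remainder contribution of every possible byte once, then update the CRC register one byte at a time instead of one bit at a time.

```python
# Table-driven CRC computation. Parameters (CRC-16/CCITT-FALSE:
# polynomial 0x1021, initial value 0xFFFF, MSB-first) are a common
# concrete choice, not mandated by the article.

POLY = 0x1021

def make_table():
    """Precompute the 16-bit CRC of every byte value, MSB-first."""
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ POLY) & 0xFFFF if crc & 0x8000 \
                  else (crc << 1) & 0xFFFF
        table.append(crc)
    return table

TABLE = make_table()

def crc16(data, crc=0xFFFF):
    """Fast path: one table look-up per message byte."""
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ TABLE[((crc >> 8) ^ b) & 0xFF]
    return crc

def crc16_bitwise(data, crc=0xFFFF):
    """Slow reference: shift the register one bit at a time."""
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ POLY) & 0xFFFF if crc & 0x8000 \
                  else (crc << 1) & 0xFFFF
    return crc

msg = b"123456789"
assert crc16(msg) == crc16_bitwise(msg)
```

The 256-entry table costs 512 bytes and replaces eight shift-and-conditional-XOR steps per byte with one indexed XOR, which is the software speedup the article is about.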

129 citations


Book
26 Apr 1988
TL;DR: This book covers modem design from base-band and pass-band transmission through receiver components (carrier recovery, timing recovery, linear and nonlinear adaptive equalizers) to coding used for forward-acting error correction, full-duplex operation, and the ancillary functions needed to make a full data set.
Abstract: Contents: Modem Marketing; Base-Band Transmission; Pass-Band Transmission and Modulation Methods; Receiver Strategies and Components; Carrier Recovery; Timing Recovery; Linear Adaptive Equalizers; Nonlinear Equalizers; Coding Used for Forward-Acting Error Correction; Full-Duplex Operation; The Ancillary Functions Needed to Make a Full Data Set.

116 citations


Patent
13 May 1988
TL;DR: In this paper, error locations and error values are simultaneously found by inserting a first value of x, x_a1 (22), into the expressions delta_even(x) (24c) and delta_odd(x) (24b) and also into an error value polynomial PHI(x) (24a).
Abstract: Error locations and error values are simultaneously found by inserting a first value of x, x_a1 (22), into the expressions delta_even(x) (24c) and delta_odd(x) (24b) and also into an error value polynomial PHI(x) (24a). Next, while the error locator equation is evaluated at the calculated values of delta_even(x_a1) and delta_odd(x_a1) to determine if x_a1 is a solution (26b), the now known values of the error evaluator polynomial PHI(x_a1) and the expression delta_odd(x_a1) are substituted into an error value formula (26a). Thus as soon as an error location is found, the error can then be quickly corrected (30). Next, the values are calculated for another value of x, x_a2 (34b). The newly evaluated expressions are then substituted into the error locator equation and the error value equation formula to determine if x_a2 is a solution. If x_a2 is a solution, the error value v_a2 which was simultaneously calculated for x_a2 is used to correct the error (20). If x_a2 is not a solution, the calculated error value is ignored (28). Then, the expressions are similarly evaluated at the other values of x.

104 citations


Journal ArticleDOI
TL;DR: Simulation and analysis results are presented to illustrate the adder's timing characteristics, hardware requirements, and error-detection capabilities.
Abstract: The adder is intended to be used as a building block in the design of more complex circuits and systems using very large scale integration (VLSI). An efficient approach to error detection has been selected through extensive comparisons of several methods that use hardware, time, and hybrid redundancy. Simulation and analysis results are presented to illustrate the adder's timing characteristics, hardware requirements, and error-detection capabilities. One novel feature of the analysis is the introduction of error latency as a means of comparing the error-detection capabilities of several alternative approaches.

Patent
Einar O. Traa
28 Jan 1988
TL;DR: In this paper, an analog-to-digital converter comprises a set of comparators (12a-12f), whose logic states are a function of an analog input signal voltage and one or more reference voltage signals supplied by a resistive network, which are connected to a decoder for processing the thermometer code outputs of the comparators to generate a digital word output corresponding to the voltage amplitude of the analog signal.
Abstract: An analog-to-digital converter (10) comprises a set of comparators (12a-12f) for providing a set of different output signals whose logic states are a function of an analog input signal voltage and one or more reference voltage signals supplied by a resistive network (16). The comparators are connected to a decoder (20) for processing the thermometer code outputs of the comparators to generate a digital word output corresponding to the voltage amplitude of the analog signal. Several of the comparators are also connected to an error checking network (22), including a preconditioning circuit (100) and a detection circuit (102) for processing these comparator outputs to provide an error signal whenever one or more of the comparators are not operating properly. The error checking network and decoder are connected to an error correction circuit (26) for correcting the digital word signal in accordance with the error signal. Also, comparator circuits are provided which are well-adapted for high-speed operation and error checking.

Journal ArticleDOI
TL;DR: In this article, a 4-Mb file memory using 16-level (4-bit)/cell storage is described, which has 1-Mb single-transistor dynamic memory cells that are divided into 4-kb sequential-access blocks.
Abstract: A 4-Mb semiconductor file memory using 16-level (4-bit)/cell storage is described. The device has 1-Mb single-transistor dynamic memory cells, which are divided into 4-kb sequential-access blocks. It incorporates a staircase-pulse generator for multilevel storage operations, a voltage regulator to protect against power-supply voltage surges, and a soft-error-correction circuit based on a cyclic hexadecimal code. The device is fabricated using 1.3-µm CMOS technology. It operates with a 5-V single power supply. Random block selection time is 147 µs, while the sequential data rate is 210 ns. A single incident alpha particle destroys 4-bit data in two or more adjacent cells; the error correction circuit completely corrects these errors. The soft-error rate under actual operating conditions with error correction is expected to be under 100 FIT (10^-7 h^-1).

Patent
05 Feb 1988
TL;DR: An error-correcting method and apparatus for a block of data provided with an error correcting parity for error correction and an error checking parity that can be used to generate a syndrome for error checking is described in this article.
Abstract: An error-correcting method and apparatus for a block of data provided with an error-correcting parity for error correction and an error-checking parity that can be used to generate a syndrome for error checking. Error correction is first carried out using the error-correcting parity, and an error check is then carried out using the error-checking parity, thereby increasing the reliability of the checked data. Error information produced by the error-correction process is used to correct the syndrome used in the error-checking process, so that the two operations can execute in parallel and the required data processing time is reduced.

Journal ArticleDOI
TL;DR: This famous anthology describes the theoretical analysis of converter performances, the actual design of converters and their simulation, circuit implementations, and applications.
Abstract: This now famous anthology brings together various aspects of oversampling methods and compares and evaluates design approaches. It describes the theoretical analysis of converter performances, the actual design of converters and their simulation, circuit implementations, and applications.

Proceedings ArticleDOI
TL;DR: The authors report on a model of error detection called RELAY, which provides a fault-based criterion for test data selection and introduces similar concepts, origination and transfer, as the first erroneous evaluation and the persistence of that erroneous evaluation respectively.

Journal ArticleDOI
TL;DR: An analysis of the selective-repeat ARQ scheme shows that the use of code combining yields a significant throughput even at very high channel error rates, thus making the system very robust under severe degradations of the channel.
Abstract: Sequential decoding with ARQ (automatic-repeat-request) and code combining under the timeout condition is considered. That is, whenever the decoding time of a given packet exceeds some predetermined duration, decoding is stopped and retransmission of the packet is requested. However, the unsuccessful packets are not discarded, but are combined with their retransmitted copies. It is shown that the use of code combining allows sequential decoding to operate efficiently even when the coding rate R exceeds the computational cutoff rate R_comp. Furthermore, an analysis of the selective-repeat ARQ scheme shows that the use of code combining yields a significant throughput even at very high channel error rates, thus making the system very robust under severe degradations of the channel.

Patent
23 Aug 1988
TL;DR: A pipelined error correction circuit iteratively determines syndromes, error locator and evaluator equations, and error locations and associated error values for received Reed-Solomon code words.
Abstract: A pipelined error correction circuit iteratively determines syndromes, error locator and evaluator equations, and error locations and associated error values for received Reed-Solomon code words. The circuit includes a plurality of Galois Field multiplying circuits which use a minimum number of circuit elements to generate first and second products. Each Galois Field multiplying circuit includes a first GF multiplier which multiplies one of two input signals in each of two time intervals by a first value to produce a first product. The circuit includes a second GF multiplier which further multiplies one of the first products by a second value to generate a second product. The first and second products are then applied to the first GF multiplier as next input signals. The multiplying circuit minimizes the elements required to generate two products by using a first, relatively complex multiplier for both the first and second products and then a second, relatively simple multiplier to generate the second product. This simplifies the multiplying circuit, which would otherwise require two relatively complex multipliers. The error correction circuit determines, for each received code word, an error locator equation by iteratively updating a preliminary error locator equation. The circuit determines for a given iteration whether or not to update the preliminary error locator equation by comparing a particular variable with zero.

Patent
12 Apr 1988
TL;DR: In this article, a code error detecting method is proposed that uses an apparatus comprising a sector buffer memory for storing various kinds of data, an adder circuit for performing exclusive-OR addition of data, a CRC generator/checker for producing error check parities by dividing a result of addition performed in the adder by a predetermined generator polynomial, a temporary memory, and an error correction circuit for calculating error correction parities which are used to correct an error of the data.
Abstract: The code error detecting method uses the code error detecting apparatus which comprises a sector buffer memory for storing various kinds of data, an adder circuit for performing exclusive OR addition of data, a CRC generator/checker for producing error check parities by dividing a result of addition performed in the adder circuit by a predetermined generator polynomial, a temporary memory for storing the error check parities, and an error correction circuit for calculating error correction parities which are used to correct an error of the data. At the time of recording, the data incorporating various parities produced in the code error detecting apparatus is recorded on an optical disk, and, at the time of reproducing, the data reproduced and stored in the sector buffer memory is corrected by using the reproduced error correction parities, and the error check parities are obtained by using the CRC generator/checker. Then, the error check parities thus obtained are compared with the reproduced error check parities stored in the sector buffer memory to thereby detect a code error of the data.

Journal ArticleDOI
TL;DR: The potentially useful properties of this self-checking error checker, modified by means of mixed-radix conversion, are examined.
Abstract: It has been shown previously that a mixed-radix converter can be modified to perform all the essential functions of an error checker for error detection and correction in residue number system hardware architectures. Since the computations in a mixed-radix converter are themselves done in residue arithmetic, error checking by means of mixed-radix conversion also checks for errors that occur in the hardware of the mixed-radix converter. The potentially useful properties of this self-checking error checker are examined.
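
The idea can be sketched concretely (moduli and values below are my own illustrative choices): append a redundant modulus, convert the residues to mixed-radix digits, and flag an error whenever the top mixed-radix digit is nonzero, since every legitimate value fits below the redundant range.

```python
# Error detection in a residue number system via mixed-radix conversion.
# Moduli are illustrative; 11 is the redundant modulus, so the
# legitimate dynamic range is 3 * 5 * 7 = 105.

MODULI = [3, 5, 7, 11]

def to_rns(x):
    return [x % m for m in MODULI]

def mixed_radix_digits(r):
    """Peel off digits a0 + a1*3 + a2*15 + a3*105, all in residue
    arithmetic (which is why the checker can check itself)."""
    digits, res = [], list(r)
    for i, mi in enumerate(MODULI):
        d = res[i]
        digits.append(d)
        # subtract d and divide by mi in every remaining residue channel
        for j in range(i + 1, len(MODULI)):
            mj = MODULI[j]
            res[j] = ((res[j] - d) * pow(mi, -1, mj)) % mj
    return digits

def has_error(r):
    """A nonzero top digit means the value is outside 0..104: error."""
    return mixed_radix_digits(r)[-1] != 0

r = to_rns(42)
assert not has_error(r)
r[1] = (r[1] + 1) % 5       # inject a single-residue fault
assert has_error(r)
```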

Journal ArticleDOI
Alan Edward Baratz, Adrian Segall
TL;DR: It is shown that the HDLC initialization procedure does not ensure synchronization and allows inadvertent loss of data, and several link initialization procedures are proposed and it is proved that they do ensure synchronization.
Abstract: It is known that HDLC (high-level data link control) and other bit-oriented DLC procedures ensure data transmission reliability across noisy transmission media, provided that all frame errors are detected and the link processes are synchronized at initialization. It is shown that the HDLC initialization procedure does not ensure synchronization and allows inadvertent loss of data. Several link initialization procedures are proposed and it is proved that they do ensure synchronization.

Patent
14 Oct 1988
TL;DR: In this paper, an evaluation logic circuit is included that is coupled to the ECC logic and the CRC logic and responsive to the error polynomial and the syndrome for generating an uncorrectable error signal if the detected errors do not match the correctable errors.
Abstract: Occurrence of uncorrectable errors in a stored sector of data which includes a data block, an error checking and correcting (ECC) block, and an error detecting (CRC) block is detected. ECC logic is connected to a data bus and responsive to the ECC block in the sector, for generating an error polynomial identifying a location and a value for correctable errors in the sector. CRC logic is connected to the data bus and responsive to the CRC block in the sector for generating a syndrome identifying detected errors in the data block. An evaluation logic circuit is included that is coupled to the ECC logic and the CRC logic and responsive to the error polynomial and the syndrome for generating an uncorrectable error signal if the detected errors do not match the correctable errors. The error checking and correcting code is a Reed-Solomon code as in the X3B11 standard. Likewise, the CRC code is a Reed-Solomon code as in the X3B11 standard. The evaluation logic implements a reverse CRC generation polynomial having a plurality of terms in the same order as the error polynomial. Detection logic receives the plurality of terms of the error polynomial, generates an estimated CRC syndrome based on the reverse CRC generation polynomial, and generates the uncorrectable error signal if the estimated CRC syndrome is not equal to the generated CRC syndrome.

Journal ArticleDOI
TL;DR: The coding gain of a constraint-length-three, rate one-half convolutional code over a long clear-air atmospheric direct-detection optical communication channel using binary pulse-position modulation signalling is directly measured as a function of interleaving delay for both hard- and soft-decision Viterbi decoding.
Abstract: The coding gain of a constraint-length-three, rate one-half convolutional code over a long clear-air atmospheric direct-detection optical communication channel using binary pulse-position modulation signalling is directly measured as a function of interleaving delay for both hard- and soft-decision Viterbi decoding. Maximum coding gains theoretically possible for this code with perfect interleaving and physically unrealizable perfect-measurement decoding were about 7 dB under conditions of weak clear-air turbulence, and 11 dB at moderate turbulence levels. The time scale of the fading (memory) of the channel was directly measured to be tens to hundreds of milliseconds, depending on turbulence levels. Interleaving delays of 5 ms between transmission of the first and second channel bits output by the encoder yield coding gains within 1.5 dB of theoretical limits with soft-decision Viterbi decoding. Coding gains of 4-5 dB were observed with only 100 µs of interleaving delay. Soft-decision Viterbi decoding always yielded 1-2 dB more coding gain than hard-decision Viterbi decoding.
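
A toy block interleaver shows the mechanism behind the measured interleaving delays (dimensions are arbitrary): code bits are written into a matrix by rows and transmitted by columns, so a burst of fading-induced channel errors is spread across many codewords after deinterleaving.

```python
# Block interleaver: write by rows, transmit by columns. A contiguous
# burst on the channel lands on widely separated positions after
# deinterleaving, which is what lets a short-memory code cope with
# long fades. Matrix dimensions here are illustrative.

def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    # reading by columns is undone by interleaving with swapped dims
    return interleave(bits, cols, rows)

bits = list(range(24))
tx = interleave(bits, 4, 6)
assert deinterleave(tx, 4, 6) == bits

# A burst hitting the first 4 transmitted symbols...
tx2 = tx[:]
for i in range(4):
    tx2[i] = -1
rx = deinterleave(tx2, 4, 6)
# ...is spread to positions 6 apart after deinterleaving.
assert [i for i, v in enumerate(rx) if v == -1] == [0, 6, 12, 18]
```

The deeper the matrix, the wider the spread, at the cost of exactly the kind of interleaving delay the paper measures against.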

Journal ArticleDOI
TL;DR: The fundamental theory of t-error correcting and d(d>t)-unidirectional error detecting (t-EC d-UED) codes is given and the encoding/decoding methods for these codes are described.
Abstract: The fundamental theory of t-error correcting and d(d>t)-unidirectional error detecting (t-EC d-UED) codes is given. Many construction methods for t-EC d-UED codes are developed. The encoding/decoding methods for these codes are described. The optimality of these codes is considered.

Journal ArticleDOI
TL;DR: A new class of codes, called burst identification codes, is defined and studied, which can be used to determine the patterns of burst errors and has lower redundancy than other comparable codes.
Abstract: A new class of codes, called burst identification codes, is defined and studied. These codes can be used to determine the patterns of burst errors. Two-dimensional burst correcting codes can be easily constructed from burst identification codes. The resulting class of codes is simple to implement and has lower redundancy than other comparable codes. The results are pertinent to the study of radiation effects on VLSI RAM chips, which can cause two-dimensional bursts of errors. >

Journal ArticleDOI
01 Nov 1988
TL;DR: A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndromePolynomial.
Abstract: It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation that is needed to decode a Reed-Solomon (RS) code. In the paper, a simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained simultaneously and simply, by the Euclidean algorithm only. With this improved technique, the complexity of time-domain Reed-Solomon decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
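
The core Euclidean step can be demonstrated over the small prime field GF(7) instead of the usual GF(2^m), to keep the arithmetic transparent (the field, syndromes, and error pattern below are my own illustrative choices, and the paper's erasure initialization is not shown): run the Euclidean algorithm on x^2t and S(x), stop when the remainder degree drops below t, and the auxiliary polynomial and final remainder yield the error locator and evaluator together.

```python
# Solving the key equation Lambda(x)S(x) = Omega(x) mod x^2t by the
# Euclidean algorithm, over GF(7) for readability. Polynomials are
# coefficient lists, lowest degree first.

P = 7  # illustrative prime field

def trim(a):
    """Drop trailing zero coefficients (keep at least one)."""
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polysub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def polydivmod(a, b):
    a = trim(a[:])
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], -1, P)              # inverse of the leading coeff
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        coef = (a[-1] * inv) % P
        q[shift] = coef
        a = polysub(a, [0] * shift + [(coef * c) % P for c in b])
    return trim(q), a

def key_equation(S, t):
    """Euclid on (x^2t, S(x)), stopped when the remainder degree < t.
    Returns (locator, evaluator), each up to a common scalar factor."""
    r0, r1 = [0] * (2 * t) + [1], trim(S[:])
    t0, t1 = [0], [1]
    while len(r1) - 1 >= t:
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, polysub(t0, polymul(q, t1))
    return t1, r1

def evalp(a, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(a)) % P

# Two errors with locators 2 and 3 and values 1 and 5: S_j = 1*2^j + 5*3^j
S = [3, 0, 3, 1]                 # S_1..S_4 as coefficients of S(x)
lam, omega = key_equation(S, 2)

# The locator vanishes at the inverse locators 2^-1 = 4 and 3^-1 = 5.
assert evalp(lam, 4) == 0 and evalp(lam, 5) == 0
assert len(omega) - 1 < 2        # evaluator degree is below t
```

The paper's contribution amounts to seeding r1 and the syndromes differently (erasure locator polynomial and Forney syndromes) so the same loop handles erasures as well as errors.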

Book
17 Feb 1988
TL;DR: Arazi's commonsense approach provides a solid grounding in the subject, explaining principles intuitively from a hardware perspective; the basic circuits are later redrawn and analyzed from a theoretical point of view for readers interested in tackling the mathematics at a more advanced level.
Abstract: Teaching the theory of error correcting codes on an introductory level is a difficult task. The theory, which has immediate hardware applications, also concerns highly abstract mathematical concepts. This text explains the basic circuits in a refreshingly practical way that will appeal to undergraduate electrical engineering students as well as to engineers and technicians working in industry. Arazi's truly commonsense approach provides a solid grounding in the subject, explaining principles intuitively from a hardware perspective. He fully covers error correction techniques, from basic parity check and single error correction cyclic codes to burst error correcting codes and convolutional codes. All this he presents before introducing Galois field theory - the basic algebraic treatment and theoretical basis of the subject, which usually appears in the opening chapters of standard textbooks. One entire chapter is devoted to specific practical issues, such as Reed-Solomon codes (used in compact disc equipment), and maximum length sequences (used in various fields of communications). The basic circuits explained throughout the book are redrawn and analyzed from a theoretical point of view for readers who are interested in tackling the mathematics at a more advanced level. Benjamin Arazi is an Associate Professor in the Department of Electrical and Computer Engineering at the Ben-Gurion University of the Negev. His book is included in the Computer Systems Series, edited by Herb Schwetman.

Journal ArticleDOI
TL;DR: It is formally proved that these two error rates are equal for cyclic codes with either erasing or reproducing decoders.
Abstract: There are two types of bounded-distance decoders for linear block codes: erasing decoders, which discard uncorrectable received words, and reproducing decoders, which reproduce uncorrectable received words. Exact expressions for the information-symbol and decoded-symbol error rates are derived for both types. Necessary and sufficient conditions are derived for the equality of the information-symbol and decoded-symbol error rates. It is formally proved that these two error rates are equal for cyclic codes with either erasing or reproducing decoders. For reproducing decoders, two approximations to the information-bit error rate and their applicability are examined.

Journal ArticleDOI
TL;DR: A method for constructing a class of t-error correcting and all unidirectional error-detecting systematic codes is proposed and these codes have been shown to be more efficient than codes constructed using other methods proposed in the literature.
Abstract: A method for constructing a class of t-error correcting and all unidirectional error-detecting systematic codes is proposed. These codes have been shown to be more efficient than codes constructed using other methods proposed in the literature. In a special case, the code constructed is the Berger code.
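
The Berger code that appears as the special case is simple to state in code: append, as the check symbol, the binary count of zeros among the information bits. A unidirectional 0-to-1 error pattern can only decrease the number of zeros in the information part while it can only increase the value read from the check part (and symmetrically for 1-to-0 errors), so the two can never agree and every such error is detected.

```python
# Berger code: check symbol = binary count of zeros in the info bits.
# Detects all unidirectional errors. Bit vectors are lists of 0/1;
# check bits are stored least significant first (my convention).

def berger_encode(info):
    k = len(info)
    r = k.bit_length()          # enough check bits to count up to k zeros
    zeros = info.count(0)
    check = [(zeros >> i) & 1 for i in range(r)]
    return info + check

def berger_check(word, k):
    info, check = word[:k], word[k:]
    zeros = info.count(0)
    return zeros == sum(b << i for i, b in enumerate(check))

w = berger_encode([1, 0, 1, 1, 0, 0, 1])
assert berger_check(w, 7)
w[1] = 1
w[4] = 1                        # unidirectional 0 -> 1 errors
assert not berger_check(w, 7)   # zero count no longer matches: detected
```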

Patent
06 Jul 1988
TL;DR: In this article, an autocalibrated multistage analog-to-digital converter precisely maintains appropriate error correction levels for each stage during operation of the converter to minimize quantization errors.
Abstract: An autocalibrated multistage analog-to-digital converter precisely maintains appropriate error correction levels for each stage during operation of the converter to minimize quantization errors. An error signal is derived from the digital output of the converter based upon the slope of the input analog signal, determined either explicitly via hardware or implicitly via software, and an overflow/underflow condition. The error signal is fed back to a calibration control circuit to generate individual error correction levels for various variable correction devices within the converter, such as a variable analog delay device. Variations from the nominal levels established at calibration, due to age, temperature, or other environmental factors, cause the error signal to deviate from its nominal value; the fed-back signal then alters the various error correction levels to minimize the error variation.