
Showing papers on "Error detection and correction published in 1989"


Journal ArticleDOI
TL;DR: An iterative descent algorithm based on a Lagrangian formulation for designing vector quantizers having minimum distortion subject to an entropy constraint is discussed and it is shown that for clustering problems involving classes with widely different priors, the ECVQ outperforms the k-means algorithm in both likelihood and probability of error.
Abstract: An iterative descent algorithm based on a Lagrangian formulation for designing vector quantizers having minimum distortion subject to an entropy constraint is discussed. These entropy-constrained vector quantizers (ECVQs) can be used in tandem with variable-rate noiseless coding systems to provide locally optimal variable-rate block source coding with respect to a fidelity criterion. Experiments on sampled speech and on synthetic sources with memory indicate that for waveform coding at low rates (about 1 bit/sample) under the squared error distortion measure, about 1.6 dB improvement in the signal-to-noise ratio can be expected over the best scalar and lattice quantizers when block entropy-coded with block length 4. Even greater gains are made over other forms of entropy-coded vector quantizers. For pattern recognition, it is shown that the ECVQ algorithm is a generalization of the k-means and related algorithms for estimating cluster means, in that the ECVQ algorithm estimates the prior cluster probabilities as well. Experiments on multivariate Gaussian distributions show that for clustering problems involving classes with widely different priors, the ECVQ outperforms the k-means algorithm in both likelihood and probability of error.
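
The Lagrangian iteration the abstract describes can be sketched compactly: each vector is assigned by a modified nearest-neighbour rule that adds a rate penalty lambda times (-log2 p_i) to the distortion, after which centroids and cell priors are re-estimated. The minimal numpy sketch below assumes squared-error distortion and arbitrary toy data, K, and lambda; entropy coding of the indices and the speech experiments are omitted, and setting lambda to zero reduces the loop to k-means, the generalization the authors point out.

```python
import numpy as np

def ecvq(X, K, lam, iters=50, seed=0):
    """Minimal entropy-constrained VQ sketch: Lagrangian of distortion plus rate."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)]        # initial codevectors
    p = np.full(K, 1.0 / K)                            # prior cell probabilities
    for _ in range(iters):
        # Modified nearest-neighbour rule: squared error plus lambda * codeword length
        cost = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) - lam * np.log2(p + 1e-12)
        a = cost.argmin(axis=1)
        for k in range(K):
            if np.any(a == k):
                C[k] = X[a == k].mean(axis=0)          # centroid update
        p = np.bincount(a, minlength=K) / len(X)       # re-estimate cell priors
    return C, p

# Toy usage: two Gaussian clusters with very different priors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(900, 2)), rng.normal(size=(100, 2)) + 4.0])
print(ecvq(X, K=2, lam=1.0))
```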

635 citations


Journal ArticleDOI
TL;DR: An improved Gaussian approximation to the probability of data bit error is performed and shows that if no error control exists in the desired packet or if block error control is used when multiple-access interference is high, the error dependence increases the average probability of packet success beyond that predicted by models which use independent bit errors.
Abstract: A technique is developed to find an accurate approximation to the probability of data bit error and the probability of packet success in a direct-sequence spread-spectrum multiple-access (DS/SSMA) packet radio system with random signature sequences. An improved Gaussian approximation to the probability of data bit error is performed. Packet performance is analyzed by using the theory of moment spaces to gain insight into the effect of bit-to-bit error dependence caused by interfering signal relative delays and phases which are assumed constant over the duration of a desired packet. Numerical results show that if no error control exists in the desired packet or if block error control is used when multiple-access interference is high, the error dependence increases the average probability of packet success beyond that predicted by models which use independent bit errors. However, when block error control is used and the multiple-access interference is low, the bit error dependencies cause a reduction in packet error performance.
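
For orientation, the independent-bit-error baseline that the paper's moment-space analysis is compared against can be written down directly. The sketch below uses the standard (not the improved) Gaussian approximation for the bit error probability with K users and processing gain N, neglecting thermal noise, then computes packet success assuming independent bit errors and a block code correcting t errors; all parameter values are hypothetical.

```python
from math import erfc, sqrt, comb

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_standard_gaussian(K, N):
    """Standard Gaussian approximation (noise neglected): K users, processing gain N."""
    return q(sqrt(3.0 * N / (K - 1)))

def packet_success_independent(pb, L, t=0):
    """P(packet success) for an L-bit packet under independent bit errors when the
    error control can correct up to t bit errors (t = 0 means no error control)."""
    return sum(comb(L, i) * pb**i * (1 - pb)**(L - i) for i in range(t + 1))

pb = ber_standard_gaussian(K=10, N=127)
print(pb, packet_success_independent(pb, L=1000, t=20))
```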

411 citations


Journal ArticleDOI
TL;DR: An investigation is conducted of high-rate punctured convolutional codes suitable for Viterbi and sequential decoding, with results reported for known short-memory codes.
Abstract: An investigation is conducted of high-rate punctured convolutional codes suitable for Viterbi and sequential decoding. Results on known short-memory codes are reported.
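
The mechanics of puncturing, which the paper builds on, are simple even though finding good high-rate punctured codes is not: coded bits of a rate-1/2 mother code are periodically deleted before transmission and replaced by erasures at the receiver so the original decoder can be reused. A small illustration (the pattern and bits below are arbitrary examples, not the paper's codes):

```python
def puncture(coded_bits, pattern):
    """Keep only the positions where the periodic puncturing pattern has a 1.
    Applied to a rate-1/2 mother code, the pattern [1, 1, 1, 0] keeps 3 of every
    4 coded bits and so raises the rate to 2/3."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

def depuncture(received, pattern, n_coded, erasure=None):
    """Re-insert neutral values (erasures) at the punctured positions so that the
    unmodified rate-1/2 Viterbi or sequential decoder can be reused."""
    it = iter(received)
    return [next(it) if pattern[i % len(pattern)] else erasure for i in range(n_coded)]

coded = [1, 1, 0, 1, 1, 0, 0, 0]              # 8 coded bits from 4 information bits
sent = puncture(coded, [1, 1, 1, 0])          # 6 channel bits -> rate 2/3
print(sent, depuncture(sent, [1, 1, 1, 0], len(coded)))
```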

312 citations


Proceedings ArticleDOI
21 Jun 1989
TL;DR: Several concurrent error detection schemes suitable for a watchdog processor were evaluated by fault injection. Soft errors were induced into an MC6809E microprocessor by heavy-ion radiation from a Californium-252 source to characterize the errors and determine coverage and latency for the various error detection schemes.
Abstract: Several concurrent error detection schemes suitable for a watchdog processor were evaluated by fault injection. Soft errors were induced into an MC6809E microprocessor by heavy-ion radiation from a Californium-252 source. Recordings of error behavior were used to characterize the errors as well as to determine coverage and latency for the various error detection schemes. The error recordings were used as input to programs that simulate the error detection schemes. The schemes evaluated detected up to 79% of all errors within 85 bus cycles. Fifty-eight percent of the errors caused execution to diverge permanently from the correct program. The best schemes detected 99% of these errors. Eighteen percent of the errors affected only data, and the coverage of these errors was at most 38%.

223 citations


Patent
18 Aug 1989
TL;DR: A fault-tolerant memory system (FTMS) as discussed by the authors uses a dedicated microprocessor-controlled computer system which serializes blocks of user data as they are received from the host system, deserializes those blocks when they are returned to the host systems, implements an error correction code system for the user data blocks, scrubs the data stored in the user memory, remaps data block storage locations within the user memories, and performs host computer interface operations.
Abstract: A fault-tolerant memory system or "FTMS" is intended for use as mass data storage for a host computer system. The FTMS incorporates a dedicated microprocessor-controlled computer system which serializes blocks of user data as they are received from the host system, deserializes those blocks when they are returned to the host system, implements an error correction code system for the user data blocks, scrubs the data stored in the user memory, remaps data block storage locations within the user memory as initial storage locations therein acquire too many hard errors for error correction to be effected with the stored error correction data, and performs host computer interface operations. Data in the FTMS is not bit-addressable. Instead, serialization of the user data permits bytes to be stored sequentially within the user memory much as they would be stored on a hard disk, with bytes being aligned in the predominant direction of serial bit failure within the off-spec DRAM devices. Such a data storage method facilitates error correction capability.

176 citations


Journal ArticleDOI
TL;DR: In this article, the use of global information in state estimation to correct errors in the direct measurements of the status data is proposed, and conditions for detectability of errors are analyzed; the telemetered data of breaker and switch status are processed in the EMS computer to determine the present network topology of the system.
Abstract: In modern energy management systems (EMS), there are two types of measurement data collected by the SCADA (supervisory control and data acquisition) system, namely, status data of breakers and switches, and analog data of real and reactive power flows, injections, and bus voltages. The status data are used to determine real-time topology of the network. The analog data are used to determine line and transformer loading and voltage profile. These data are noisy due to measurement errors, communication noise, missing data, etc. In addition to simple checking of the analog data locally, in most modern EMS, state estimation is used to process these data globally to correct errors in the raw analog measurement data. In this paper, the use of global information in state estimation to correct errors in the direct measurements of the status data is proposed and conditions for detectability of errors are analyzed. The telemetered data of breaker and switch status are processed in the EMS computer to determine the present network topology of the system, and this function is called the network topology processor. Errors in status data will show up as errors in the network topology. Sasson et al. and Dy Liacco et al. used a tree search algorithm for the network topology processor. The method is widely used in practice. Bonanomi et al. proposed a sequential search method through the network graph. Recently, Lugtu et al. suggested the approach of using state estimation results for topology error detection.
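
The network topology processor mentioned in the abstract is essentially a connectivity search over breaker and switch statuses; the state-estimation-based detection of status errors proposed in the paper sits on top of it and is not shown here. A minimal union-find sketch of the topology-processing step, with hypothetical section and breaker names:

```python
def network_topology(sections, breakers):
    """Merge bus sections joined by breakers/switches telemetered CLOSED.

    sections: list of bus-section names.
    breakers: dict {breaker_id: (section_a, section_b, status)}, status 'closed'/'open'.
    Returns the electrical buses as groups of sections; a wrongly telemetered status
    shows up as a wrongly split or merged bus, which is what topology error detection
    must catch.
    """
    parent = {s: s for s in sections}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    for a, b, status in breakers.values():
        if status == "closed":
            parent[find(a)] = find(b)

    buses = {}
    for s in sections:
        buses.setdefault(find(s), []).append(s)
    return list(buses.values())

print(network_topology(["s1", "s2", "s3"],
                       {"b1": ("s1", "s2", "closed"), "b2": ("s2", "s3", "open")}))
```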

172 citations


Journal ArticleDOI
TL;DR: The least storage and node computation required by a breadth-first tree or trellis decoder that corrects t errors over the binary symmetric channel is calculated.
Abstract: The least storage and node computation required by a breadth-first tree or trellis decoder that corrects t errors over the binary symmetric channel is calculated. Breadth-first decoders work with code paths of the same length, without backtracking. The Viterbi algorithm is an exhaustive trellis decoder of this type; other schemes look at a subset of the tree or trellis paths. For random tree codes, theorems about the asymptotic number of paths required and their depth are proved. For concrete convolutional codes, the worst case storage for t error sequences is measured. In both cases the optimal decoder storage has the same simple dependence on t. The M algorithm and algorithms proposed by G.J. Foschini (ibid., vol.IT-23, p.605-9, Sept. 1977) and by S.J. Simmons (Ph.D. diss., Queen's Univ., Kingston, Ont., Canada) are optimal, or nearly so; they are all far more efficient than the Viterbi algorithm.
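
A breadth-first decoder of the kind analyzed keeps a fixed number of equal-length candidate paths and never backtracks. The sketch below is a plain M algorithm for the familiar (7,5) rate-1/2 convolutional code over a binary symmetric channel; the code, message, and value of M are illustrative choices, not the paper's test cases.

```python
G = (0b111, 0b101)                 # generators of the (7,5) rate-1/2 convolutional code
K = 3                              # constraint length

def encode(bits, state=0):
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g).count("1") & 1 for g in G]
    return out

def m_algorithm(received, n_bits, M=4):
    """Breadth-first decoding: keep the M best equal-length paths, no backtracking."""
    paths = [(0, 0, [])]                       # (Hamming metric, encoder state, decided bits)
    for i in range(n_bits):
        r = received[2 * i: 2 * i + 2]
        new = []
        for metric, state, bits in paths:
            for b in (0, 1):
                s = ((state << 1) | b) & ((1 << K) - 1)
                o = [bin(s & g).count("1") & 1 for g in G]
                d = sum(x != y for x, y in zip(o, r))
                new.append((metric + d, s, bits + [b]))
        paths = sorted(new, key=lambda p: p[0])[:M]     # survivor selection
    return paths[0][2]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                     # inject one channel error
print(m_algorithm(rx, len(msg)) == msg)        # True
```

With a single channel error the best surviving path recovers the message; the paper's contribution is quantifying how the required number of survivors and their storage grow with the error-correcting target t.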

133 citations


Journal ArticleDOI
Jehoshua Bruck, Mario Blaum
TL;DR: Performing maximum-likelihood decoding in a linear block error-correcting code is shown to be equivalent to finding a global maximum of the energy function of a certain neural network, and the connection between maximization of polynomials over the n-cube and error-correcting codes is investigated.
Abstract: Several ways of relating the concept of error-correcting codes to the concept of neural networks are presented. Performing maximum-likelihood decoding in a linear block error-correcting code is shown to be equivalent to finding a global maximum of the energy function of a certain neural network. Given a linear block code, a neural network can be constructed in such a way that every codeword corresponds to a local maximum. The connection between maximization of polynomials over the n-cube and error-correcting codes is also investigated; the results suggest that decoding techniques can be a useful tool for solving such maximization problems. The results are generalized to both nonbinary and nonlinear codes.
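
The equivalence the abstract states can be checked by brute force on a tiny code. The sketch below uses a simplified penalty-style energy (correlation with the received word plus a reward for every satisfied parity check) rather than the paper's exact construction, and shows that its global maximum over the +/-1 cube coincides with the maximum-likelihood codeword for a (7,4) Hamming code.

```python
import numpy as np
from itertools import product

# Parity checks of a (7,4) Hamming code: each tuple lists the bit positions in one check.
CHECKS = [(0, 1, 2, 4), (1, 2, 3, 5), (0, 2, 3, 6)]

def energy(x, r, A=2.0):
    """Simplified energy over the +/-1 cube: correlation with the received word plus a
    reward A for each satisfied parity check (product of the check's bits equal to +1).
    With A large enough, the global maximum is forced onto a codeword."""
    return float(r @ x) + A * sum(np.prod(x[list(c)]) for c in CHECKS)

def ml_decode(r):
    """Brute-force maximum-likelihood decoding on a BSC: the codeword with the largest
    correlation with the received word r."""
    best = max((x for x in product((-1, 1), repeat=len(r))
                if all(np.prod(np.array(x)[list(c)]) == 1 for c in CHECKS)),
               key=lambda x: float(r @ np.array(x)))
    return np.array(best)

def energy_maximizer(r):
    """Global maximum of the energy over the whole n-cube (no codeword constraint)."""
    best = max(product((-1, 1), repeat=len(r)), key=lambda x: energy(np.array(x), r))
    return np.array(best)

c = np.ones(7)                    # the all-zero codeword in +/-1 form
r = c.copy(); r[3] = -1           # one bit flipped by the channel
print(np.array_equal(ml_decode(r), energy_maximizer(r)))   # True: same answer
```

Local maxima of the same energy correspond to other codewords, which is why a local search such as a Hopfield-style update can stop at a wrong codeword even though the global maximum is the maximum-likelihood decision.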

129 citations


Patent
09 Jun 1989
TL;DR: In this paper, the combined benefits of decision feedback equalization and error correction coding are realized in a communications system by the use of a plurality of coders and decoders respectively disposed in the transmitter and receiver.
Abstract: The combined benefits of decision feedback equalization and error correction coding are realized in a communications system by the use of a plurality of coders and decoders respectively disposed in the transmitter and receiver. The plurality of encoders and decoders is used to interleave the data symbols so that each coder and decoder is operative upon every Mth symbol, where M is the number of coders or decoders. By a judicious choice of M, both the probability of noise impairing the recovery of successive symbols and the error propagation effects inherent in decision feedback equalizers are reduced.

97 citations


Journal ArticleDOI
TL;DR: The performance of variable-rate Reed-Solomon error-control coding for meteor-burst communications and the derivation of tractable expressions for the probability of correct decoding for bounded-distance decoding on a memoryless channel with a time-varying symbol error probability are considered.
Abstract: The performance of variable-rate Reed-Solomon error-control coding for meteor-burst communications is considered. The code rate is allowed to vary from codeword to codeword within each packet, and the optimum number of codewords per packet and optimum rates for the codewords are determined as a function of the length of the message and the decay rate for the meteor trail. The resulting performance is compared to that obtained from fixed-rate coding. Of central importance is the derivation of tractable expressions for the probability of correct decoding for bounded-distance decoding on a memoryless channel with a time-varying symbol error probability. A throughput measure is developed that is based on the probability distribution of the initial signal-to-noise ratio.
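
The key quantity the abstract mentions, the probability of correct bounded-distance decoding when the symbol error probability varies along the codeword, is a Poisson-binomial tail and can be computed exactly by a small dynamic programme. The decay profile and code parameters below are hypothetical, not the paper's.

```python
def prob_correct_decoding(ps, t):
    """P(no more than t symbol errors) when symbol j of the codeword is received in
    error with probability ps[j] (independent, but time-varying as the trail decays).
    A bounded-distance decoder decodes correctly whenever this event occurs."""
    dist = [1.0] + [0.0] * t            # dist[k] = P(exactly k errors so far), k <= t
    for p in ps:
        new = [0.0] * (t + 1)
        for k, w in enumerate(dist):
            new[k] += w * (1 - p)       # this symbol correct
            if k + 1 <= t:
                new[k + 1] += w * p     # this symbol in error, still within budget
        dist = new                      # mass beyond t errors is simply dropped
    return sum(dist)

# Hypothetical profile: symbol error probability rising as the meteor trail decays
ps = [0.01 * (1 + j / 8) for j in range(32)]
print(prob_correct_decoding(ps, t=4))   # e.g. a (32, 24) Reed-Solomon codeword, t = 4
```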

86 citations


Patent
10 Mar 1989
TL;DR: In this paper, a chip-level error correction system is employed in a way which does not interfere with system-level error correction methods, and this counter-intuitive approach to the generation of forced hard errors nonetheless enhances overall memory system reliability since it enables the employment of the complement/recomplement algorithm, which depends upon the presence of reproducible errors for proper operation.
Abstract: In a memory system comprising a plurality of memory units, each of which possesses unit-level error correction capabilities and each of which is tied to a system-level error correction function, memory reliability is enhanced by providing means for fixing the output of one of the memory units at a fixed value in response to the occurrence of an uncorrectable error in one of the memory units. This counter-intuitive approach to the generation of forced hard errors nonetheless enhances overall memory system reliability since it enables the employment of the complement/recomplement algorithm, which depends upon the presence of reproducible errors for proper operation. Thus, chip-level error correction systems, which are increasingly desirable at high packaging densities, are employed in a way which does not interfere with system-level error correction methods.
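
The reason forcing a failed unit's output to a fixed value helps is that the complement/recomplement procedure can undo any error that reproduces identically on both reads. Below is a minimal sketch of that procedure with a toy stuck-bit model; the patent's actual circuitry is not shown.

```python
def read_with_faults(stored, stuck_at):
    """Model a word read from memory cells, some of which are stuck at fixed values."""
    word = list(stored)
    for pos, val in stuck_at.items():
        word[pos] = val
    return word

def complement_recomplement(true_word, stuck_at):
    """Sketch of complement/recomplement for reproducible (hard) errors:
    1. read the word (hard faults corrupt it),
    2. write back the bit-wise complement and read again (hard faults reproduce),
    3. complement that second read; stuck bits now show their correct value."""
    first_read = read_with_faults(true_word, stuck_at)
    written_back = [1 - b for b in first_read]             # complement written to same cells
    second_read = read_with_faults(written_back, stuck_at)
    return [1 - b for b in second_read]                    # recomplement

word = [1, 0, 1, 1, 0, 0, 1, 0]
print(complement_recomplement(word, stuck_at={2: 0, 5: 1}) == word)   # True
```

Every stuck cell reads the same value on both passes, so after the second complement it shows the originally stored bit, while fault-free cells are unaffected.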

Journal ArticleDOI
TL;DR: An algorithm is presented for error detection and correction of disparity, as a process separate from stereo matching, with the contention that matching is not necessarily the best way to utilize all the physical constraints characteristic to stereopsis.
Abstract: An algorithm is presented for error detection and correction of disparity, as a process separate from stereo matching, with the contention that matching is not necessarily the best way to utilize all the physical constraints characteristic of stereopsis. As a result of the bias in stereo research towards matching, vision tasks like surface interpolation and object modeling have to accept erroneous data from the stereo matchers without the benefits of any intervening stage of error correction. An algorithm which identifies all errors in disparity data that can be detected on the basis of figural continuity, and corrects them, is presented. The algorithm can be used as a postprocessor to any edge-based stereo matching algorithm, and can additionally be used to automatically provide quantitative evaluations of the performance of matching algorithms of this class.

Journal ArticleDOI
Mario Blaum, H. Van Tilborg
TL;DR: Asymmetric error-correcting codes turn out to be a powerful tool in the proposed construction of binary systematic codes that can correct t random errors and detect more than t unidirectional errors.
Abstract: The authors present families of binary systematic codes that can correct t random errors and detect more than t unidirectional errors. The first step of the construction is encoding the k information symbols into a codeword of an (n', k, 2t+1) error-correcting code. The second step involves adding more bits to this linear error-correcting code in order to obtain the detection capability of all unidirectional errors. Asymmetric error-correcting codes turn out to be a powerful tool in the proposed construction. The resulting codes significantly improve previous results. Asymptotic estimates and decoding algorithms are presented.
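
For intuition about the unidirectional-detection step, the classic Berger check, which appends the count of zeros, already detects all unidirectional errors; the paper's construction appends a more economical check to a t-error-correcting code, which is not reproduced here. A small sketch of the Berger idea:

```python
def berger_encode(data_bits):
    """Append the binary count of zeros; this detects ALL unidirectional errors
    (errors that only turn 1s into 0s, or only 0s into 1s, never both)."""
    zeros = data_bits.count(0)
    width = max(1, len(data_bits)).bit_length()
    check = [int(b) for b in format(zeros, f"0{width}b")]
    return data_bits + check

def berger_check_ok(codeword, k):
    data, check = codeword[:k], codeword[k:]
    return data.count(0) == int("".join(map(str, check)), 2)

cw = berger_encode([1, 0, 1, 1, 0, 1, 0, 1])
bad = cw.copy(); bad[1] = 1; bad[4] = 1       # a 0->1 unidirectional double error
print(berger_check_ok(cw, 8), berger_check_ok(bad, 8))   # True False
```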

Patent
28 Jun 1989
TL;DR: In this paper, a deinterleaving and error correction system for a playback system of an optical recording disk apparatus is presented. But the performance of this system is limited by the number of correctable errors for each sector.
Abstract: A deinterleaving and error correction system which is advantageously utilized in a playback system of an optical recording disk apparatus. As each block of sector data, encoded for example with the Reed-Solomon error correction code with block interleaving, is read from the disk, the positions within the data block at which drop-out of the playback signal occurs are respectively stored in a memory in which the data symbols are also stored, with these drop-out positions being stored as error position data. Error correction processing is executed using the error position data in conjunction with the code words, enabling the maximum number of correctable errors for each sector to be substantially increased using a simple system configuration.
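
The benefit of storing drop-out positions comes from the standard errors-and-erasures bound: an erasure (error at a known position) costs half as much decoding power as an error at an unknown location. A one-line check with illustrative (n, k) values:

```python
def rs_correctable(n, k, n_errors, n_erasures):
    """Bounded-distance rule for an (n, k) Reed-Solomon code (minimum distance n-k+1):
    a pattern is correctable when 2*errors + erasures <= n - k. Marking drop-out
    positions as erasures therefore roughly doubles the tolerable number of bad symbols."""
    return 2 * n_errors + n_erasures <= n - k

print(rs_correctable(32, 24, n_errors=4, n_erasures=0))   # True: up to 4 unknown errors
print(rs_correctable(32, 24, n_errors=2, n_erasures=4))   # True: erasures are cheaper
print(rs_correctable(32, 24, n_errors=0, n_erasures=9))   # False: exceeds d - 1 = 8
```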

Journal ArticleDOI
TL;DR: A modified DCT (discrete cosine transform) has been developed for this system that needs only one-half the number of multipliers of a conventional DCT, making the hardware scale smaller.
Abstract: An experimental digital VTR system is described. A modified DCT (discrete cosine transform) has been developed for this system that needs only one-half the number of multipliers of a conventional DCT, making the hardware scale smaller. A data quantity estimator controls the variable-length coding for completion within a synchronization block, keeping the picture quality high. Three stages of error correction correct more errors and make the probability of miscorrection lower. A high area packing density of 270 Mb/in^2 is brought about by the use of a low-noise head amplifier, optimized playback signal processing, high-performance magnetic heads and tapes, and stable tracking on narrow tracks.

Proceedings ArticleDOI
N. Seshadri, C.-W. Sundberg
27 Nov 1989
TL;DR: Two generalized Viterbi algorithms (GVAs) for the decoding of convolutional codes are presented, a parallel algorithm that simultaneously identifies the L best estimates of the transmitted sequence, and a serial algorithm that identifies the lth best estimate using the knowledge about the previously found l-1 estimates.
Abstract: Presented are two generalized Viterbi algorithms (GVAs) for the decoding of convolutional codes. They are, respectively, a parallel algorithm that simultaneously identifies the L best estimates of the transmitted sequence, and a serial algorithm that identifies the lth best estimate using the knowledge about the previously found l-1 estimates. These algorithms are applied to combined speech and channel coding systems, concatenated codes, trellis-coded modulation, partial response (continuous-phase modulation), and hybrid ARQ (automatic repeat request) schemes. As an example, for a concatenated code more than 2 dB is gained by the use of the GVA with L=3 over the Viterbi algorithm for block error rates less than 10^-2. The channel is a Rayleigh fading channel.
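
The parallel GVA can be sketched as a Viterbi recursion that keeps the L best candidates per state instead of one. The code below does this for the small (7,5) rate-1/2 convolutional code with hard decisions and Hamming metrics; the code, message, and L are illustrative and much simpler than the concatenated and fading-channel systems evaluated in the paper.

```python
G = (0b111, 0b101)                         # (7,5) rate-1/2 convolutional code
NSTATES = 4                                # 2^(K-1) states for constraint length K = 3

def branch(state, bit):
    """Next state and the two coded bits for one trellis branch."""
    reg = (bit << 2) | state               # newest bit enters at the top of the register
    out = [bin(reg & g).count("1") & 1 for g in G]
    return reg >> 1, out

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, o = branch(state, b)
        out += o
    return out

def list_viterbi(received, n_bits, L=3):
    """Parallel generalized Viterbi sketch: keep the L best (metric, bits) candidates
    per state at every depth, which yields the L globally best sequences at the end."""
    lists = {s: [] for s in range(NSTATES)}
    lists[0] = [(0, [])]                   # start in the all-zero state
    for i in range(n_bits):
        r = received[2 * i: 2 * i + 2]
        new = {s: [] for s in range(NSTATES)}
        for s, cands in lists.items():
            for metric, bits in cands:
                for b in (0, 1):
                    ns, out = branch(s, b)
                    d = sum(x != y for x, y in zip(out, r))
                    new[ns].append((metric + d, bits + [b]))
        lists = {s: sorted(c)[:L] for s, c in new.items()}
    ranked = sorted(c for cands in lists.values() for c in cands)
    return [bits for _, bits in ranked[:L]]

msg = [1, 1, 0, 1, 0, 0, 1, 0]
rx = encode(msg); rx[5] ^= 1               # one channel error
best = list_viterbi(rx, len(msg))
print(best[0] == msg, best)                # the best of the L candidates is the message
```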

Patent
13 Apr 1989
TL;DR: A data transmission and reception apparatus and method capable of selecting either of two modes having the same sampling frequency, in which the number of bits in one data unit is either m or n (where m and n are integers and m>n), while using the same error correction encoder and decoder for the two modes, by inserting m-n dummy data bits into the n-bit data.
Abstract: A data transmission and reception apparatus and method capable of selecting either of two modes having the same sampling frequency, in which the number of bits in one data unit is either m or n (where m and n are integers and m>n), while using the same error correction encoder and decoder for the two modes, by inserting m-n dummy data bits into the n-bit data so as to handle it as m-bit data during the processes of error correction encoding and decoding.

Journal ArticleDOI
TL;DR: In this paper, a parity retransmission hybrid automatic repeat request (ARQ) scheme was proposed which uses rate 1/2 convolutional codes and Viterbi decoding.
Abstract: A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
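
The two-state Markov channel underlying the analysis is easy to simulate, and even a plain (non-hybrid) ARQ baseline shows the qualitative effect that, at a fixed average bit error rate, burstier channels leave more packets error-free. The transition probabilities and packet length below are hypothetical; the paper's parity-retransmission protocol and Viterbi decoding are not modeled.

```python
import random

def two_state_channel(n_bits, p_gb, p_bg, eps_good, eps_bad, seed=1):
    """Two-state Markov (Gilbert-Elliott style) bit error process: the channel moves
    between a good and a bad state and flips bits with a state-dependent probability."""
    rng = random.Random(seed)
    state, errors = "good", []
    for _ in range(n_bits):
        eps = eps_good if state == "good" else eps_bad
        errors.append(rng.random() < eps)
        if state == "good" and rng.random() < p_gb:
            state = "bad"
        elif state == "bad" and rng.random() < p_bg:
            state = "good"
    return errors

def arq_throughput(errors, pkt_len):
    """Fraction of fixed-length packets accepted by an idealised ARQ scheme that
    rejects any packet containing at least one bit error (perfect error detection,
    negligible feedback delay)."""
    pkts = [errors[i:i + pkt_len] for i in range(0, len(errors) - pkt_len + 1, pkt_len)]
    return sum(not any(p) for p in pkts) / len(pkts)

errs = two_state_channel(200_000, p_gb=0.001, p_bg=0.05, eps_good=1e-4, eps_bad=0.05)
print(arq_throughput(errs, pkt_len=500))
```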

Journal ArticleDOI
TL;DR: A unified methodology using signal flow graphs is proposed and used to analyze several existing ARQ protocols; it also offers additional advantages in obtaining results such as the mean response time, and in analyzing ARQ protocols in which the feedback channel is not error-free.
Abstract: A unified methodology using signal flow graphs is proposed and used to analyze several existing ARQ (automatic repeat request) protocols. The methodology offers a systematic and efficient way of obtaining throughput and delay. It also offers additional advantages in obtaining results such as the mean response time, and in analyzing the ARQ protocols in which the feedback channel is not error-free.

Journal ArticleDOI
TL;DR: A decoding algorithm for algebraic geometric codes that was given by A.N. Skorobogatov and S.G. Vladut is considered and the author gives a modified algorithm, with improved performance, which he obtains by applying the above algorithm a number of times in parallel.
Abstract: A decoding algorithm for algebraic geometric codes that was given by A.N. Skorobogatov and S.G. Vladut (preprint, Inst. Problems of Information Transmission, 1988) is considered. The author gives a modified algorithm, with improved performance, which he obtains by applying the above algorithm a number of times in parallel. He proves the existence of the decoding algorithm on maximal curves by showing the existence of certain divisors. However, he has so far been unable to give an efficient procedure for finding these divisors.

Patent
09 Nov 1989
TL;DR: In this article, a shift correction decoder is used to correct the forward and backward shift errors present in the received channel encoded data, which is accomplished using a code, such as a BCH code over GF(p) or negacyclic code, which treats each received symbol as a vector having p states.
Abstract: Channel encoded data (for example run length limited encoded data) is further encoded in accordance with a shift correction code prior to transmission. Upon reception, forward and backward shift errors present in the received channel encoded data are corrected by a shift correction decoder. The shift error correction is accomplished using a code, such as (for example) a BCH code over GF(p) or a negacyclic code, which treats each received symbol as a vector having p states. For a single shift error correction, p=3 and there are three states (forward shift, backward shift, no shift). In one embodiment, conventional error correction codewords which encode the user data may be interleaved within successive shift correction codewords prior to channel encoding, thereby enabling the error correction system to easily handle a high rate of randomly distributed shift errors (which otherwise would result in a high rate of short error bursts that exceed the capacity of the block error correction code).

Patent
27 Apr 1989
TL;DR: In this paper, the error pointing signals can be cyclic redundancy check (CRC) signals and error pointing redundancy signals are recorded between all of the resynchronization signals for pointing to signals in error for enhancing the error correction.
Abstract: A record medium, such as a magnetic tape, optical disk, magnetic disk, and the like, stores data signals and error redundancy signals. Resynchronization signals are interleaved between the recorded signals such that the error redundancy signals are usable to correct signals recorded between such interposed resynchronization signals wherein no error redundancy signals are recorded. Error pointing redundancy signals are recorded between all of the resynchronization signals for pointing to signals in error for enhancing the error correction. Such error pointing signals can be cyclic redundancy check (CRC) signals. Controls for taking advantage of the above-described arrangement are also described. Reframing and clock synchronization controls are also disclosed.

Patent
Akihiro Shikakura
07 Mar 1989
TL;DR: In this paper, a digital information transmitting and receiving system constructed of a transmitter and a receiver is presented, where the transmitter includes a high efficiency encoding circuit for compressing digital information codes using correlativity among the information codes, and outputting compressed information codes; an error correcting code generating circuit for sampling the compressed information code in the direction of correlativity used by the high-efficiency encoding circuit, and generating an error-correcting code using the sampled compressed codes.
Abstract: A digital information transmitting and receiving system constructed of a transmitter and a receiver. The transmitter includes a high efficiency encoding circuit for compressing digital information codes using correlativity among the digital information codes, and outputting compressed information codes; an error correcting code generating circuit for sampling the compressed information codes in the direction of correlativity used by the high efficiency encoding circuit, and generating an error correcting code using the sampled compressed information codes; and a transmitting unit for transmitting a code train including the compressed information codes and the error correcting code. The receiver includes a receiving unit for receiving the code train; a first error correcting circuit for correcting an error code of the compressed information codes by using the error correcting code within the code train received by the receiving unit, the first error correcting circuit outputting an error flag representative of whether there is an uncorrectable code within the code train having a predetermined number of the compressed information codes; and a second error correcting circuit for correcting an error code of the information code in units of the predetermined number of the compressed information codes, in accordance with the error flag.

Journal ArticleDOI
TL;DR: An error correction and diffusion algorithm for coding and quantizing digital phase holograms is presented; it is a one-step process that is easy to implement.

Patent
16 Mar 1989
TL;DR: In this paper, a method and apparatus for improving the quality of speech samples communicated via sub-band coding utilizing adaptive bit allocation was proposed, by providing error detection only on the adaptive bit assignment information.
Abstract: A method and apparatus is disclosed (Fig. 3) for improving the quality of speech samples communicated via sub-band coding utilizing adaptive bit allocation (Fig. 1), by providing error detection only on the adaptive bit allocation information. A first error detection code (340), such as a cyclic redundancy check (CRC), is calculated on the bit allocation parameters (330) in the transmitter (130) and sent to the receiver (180), where a second error detection code (370) is calculated based upon the reconstructed bit allocation parameters (360). The transmitted error detection code is then used to determine (380) if the received bit allocation information is correct, and if not, the frame of speech data is discarded. By protecting only the bit allocation information, additional speech frames may be salvaged from the error-prone channel, thus further increasing speech intelligibility.
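
The core idea is that only the few bytes of bit-allocation side information are protected, and a whole frame is discarded when the check fails. A minimal sketch using a truncated CRC-32 as a stand-in for the patent's CRC, with hypothetical allocation values:

```python
import zlib

def protect_bit_allocation(alloc):
    """Transmitter side: a CRC is computed over the bit-allocation parameters only
    (the sub-band samples themselves are sent unprotected)."""
    payload = bytes(alloc)
    return payload, zlib.crc32(payload) & 0xFFFF        # 16-bit check, as an example

def receiver_accepts(rx_alloc, rx_crc):
    """Receiver side: recompute the check on the reconstructed allocation and
    discard the whole speech frame if it disagrees."""
    return (zlib.crc32(bytes(rx_alloc)) & 0xFFFF) == rx_crc

alloc = [4, 4, 3, 3, 2, 2, 1, 1]                        # bits per sub-band (hypothetical)
payload, crc = protect_bit_allocation(alloc)
corrupted = alloc.copy(); corrupted[0] = 5
print(receiver_accepts(alloc, crc), receiver_accepts(corrupted, crc))   # True False
```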

Journal ArticleDOI
TL;DR: The authors compute the weight distributions for various code lengths and show the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate 10^-5.
Abstract: The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The authors compute the weight distributions for various code lengths. From the results, they show the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate 10^-5.
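
Given a weight distribution {A_i}, the undetected-error probability on a binary symmetric channel follows directly, since an error pattern is undetected exactly when it equals a nonzero codeword. The snippet below applies the formula to the (7,4) Hamming code, whose weight distribution is easy to state; the paper instead tabulates the much larger shortened CRC-32 codes of IEEE 802.3.

```python
def p_undetected(weights, p):
    """P(undetected error) on a binary symmetric channel with bit error rate p:
    P_ud = sum_i A_i * p^i * (1-p)^(n-i) over the weight distribution {A_i}, i > 0."""
    n = max(weights)
    return sum(a * p**w * (1 - p)**(n - w) for w, a in weights.items() if w > 0)

# The (7,4) Hamming code has A_0 = 1, A_3 = A_4 = 7, A_7 = 1.
hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}
print(p_undetected(hamming_7_4, p=1e-5))
```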

Patent
24 Nov 1989
TL;DR: In this article, a decoder is arranged to operate as a single-bit error correction circuit (ECC) and as a multiple-bit detection circuit (EDC), where the decoder starts and remains in the ECC state as long as no errors are detected in a received data message.
Abstract: A decoder is arranged to operate as a single-bit error correction circuit (ECC) and as a multiple-bit error detection circuit (EDC). The decoder starts and remains in the ECC state as long as no errors are detected in a received data message. When an error is detected or corrected in a received data message, the decoder switches to the EDC state, where it remains as long as errors are detected in the received data message. When no errors are detected in the received data message, the decoder switches back to the ECC state. In a generalized multistate decoder, switching occurs from one state to another state, each state having a different error correcting capability, in response to a predetermined number of errors corrected or detected in the received data.
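
The claimed behaviour is a small state machine. The sketch below models only the switching rule, with the number of errors found in each received message supplied by an oracle in place of a real syndrome computation:

```python
class AdaptiveDecoder:
    """Two-state behaviour described in the patent: stay in a single-error-correcting
    (ECC) state while messages are clean, switch to an error-detecting-only (EDC)
    state once an error is seen, and switch back after an error-free message."""

    def __init__(self):
        self.state = "ECC"

    def process(self, n_errors_in_message):
        if self.state == "ECC":
            if n_errors_in_message == 0:
                return "accept", self.state
            action = "correct" if n_errors_in_message == 1 else "reject"
            self.state = "EDC"                 # any detected or corrected error -> EDC
            return action, self.state
        # EDC state: detect only, never correct
        if n_errors_in_message == 0:
            self.state = "ECC"                 # clean message -> back to ECC
            return "accept", self.state
        return "reject", self.state

dec = AdaptiveDecoder()
for e in [0, 1, 2, 0, 0]:
    print(dec.process(e))
```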

Patent
19 Jun 1989
TL;DR: A data transmission and reception apparatus and method capable of selecting either of two modes having the same sampling frequency, in which the number of bits in one data unit is either m or n (where m and n are integers and m>n), while using the same error correction encoder and decoder for the two modes, by inserting m-n dummy data bits into the n-bit data, as discussed by the authors.
Abstract: A data transmission and reception apparatus and method capable of selecting either of two modes having the same sampling frequency, in which the number of bits in one data unit is either m or n (where m and n are integers and m>n), while using the same error correction encoder and decoder for the two modes, by inserting m-n dummy data bits into the n-bit data so as to handle it as m-bit data during the processes of error correction encoding and decoding, and eliminating from the error correction encoded data the dummy data and a redundant code of the error correction code formed by the dummy data so that the data transmission rate can be lowered.

Proceedings Article
01 Jan 1989
TL;DR: In this paper, the problem of designing erasure-correcting binary linear codes that protect against the loss of data caused by disk failures in large disk arrays is addressed, and a simple method for data reconstruction is given.
Abstract: A crucial issue in the design of very large disk arrays is the protection of data against catastrophic disk failures. Although today single disks are highly reliable, when a disk array consists of 100 or 1000 disks, the probability that at least one disk will fail within a day or a week is high. In this paper we address the problem of designing erasure-correcting binary linear codes that protect against the loss of data caused by disk failures in large disk arrays. We describe how such codes can be used to encode data in disk arrays, and give a simple method for data reconstruction. We discuss important reliability and performance constraints of these codes, and show how these constraints relate to properties of the parity check matrices of the codes. In so doing, we transform code design problems into combinatorial problems. Using this combinatorial framework, we present codes and prove they are optimal with respect to various reliability and performance constraints.
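
The simplest instance of such an erasure-correcting binary linear code over the disks is a single parity disk holding the XOR of the data disks (RAID-4/5 style), which tolerates any one disk failure; the paper studies parity-check matrices that tolerate several failures under additional reliability and performance constraints. A minimal reconstruction sketch:

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def encode_parity(data_disks):
    """The parity disk stores the XOR of all data disks."""
    return xor_blocks(data_disks)

def reconstruct(surviving_disks):
    """Any single lost disk (data or parity) is the XOR of all the survivors."""
    return xor_blocks(surviving_disks)

d0, d1, d2 = b"ab", b"cd", b"ef"
p = encode_parity([d0, d1, d2])
print(reconstruct([d0, d2, p]) == d1)      # recover the failed disk d1 -> True
```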