
Showing papers on "Error detection and correction published in 1995"


Proceedings ArticleDOI
18 Jun 1995
TL;DR: A new simple method for trellis termination is described, the effect of interleaver choice on the weight distribution of the code is analyzed, and the use of unequal-rate component codes, which yields better performance, is introduced.
Abstract: Turbo codes are the most exciting and potentially important development in coding theory in many years. They were introduced in 1993 by Berrou, Glavieux and Thitimajshima, and claimed to achieve near Shannon-limit error correction performance with relatively simple component codes and large interleavers. A required Eb/N0 of 0.7 dB was reported for a BER of 10^-5 and a code rate of 1/2. However, some important details that are necessary to reproduce these results were omitted. This paper confirms the accuracy of these claims, and presents a complete description of an encoder/decoder pair that could be suitable for PCS applications. We describe a new simple method for trellis termination, we analyze the effect of interleaver choice on the weight distribution of the code, and we introduce the use of unequal rate component codes which yields better performance. Turbo codes are extended to encoders with multiple codes and a suitable decoder structure is developed, which is substantially different from the decoder for two-code based encoders.
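The interleaver analysis above can be illustrated with a minimal sketch: a seeded pseudo-random permutation interleaver and its inverse. This is illustrative only; it is not the specific interleaver construction the paper analyzes.

```python
import random

def make_interleaver(n, seed=0):
    """A pseudo-random interleaver: a seeded permutation of n positions."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def interleave(bits, perm):
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0]
perm = make_interleaver(len(bits))
# The deinterleaver inverts the interleaver exactly:
assert deinterleave(interleave(bits, perm), perm) == bits
```

In a turbo encoder, the permutation feeds the second component encoder; the paper's point is that the choice of permutation shapes the code's weight distribution.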

444 citations


Journal ArticleDOI
TL;DR: In this article, a vectorial equation for geometric error description is presented which is independent of the machine structure; evaluation of measurement effectiveness is discussed, and error compensation methods and strategies are examined.

293 citations


Journal ArticleDOI
TL;DR: Proposes some reversible variable length codes (RVLCs) which can be decoded instantaneously both in the forward and backward directions and have high transmission efficiency.
Abstract: Proposes some reversible variable length codes (RVLCs) which can be decoded instantaneously both in the forward and backward directions and have high transmission efficiency. These codes can be used, for example, in the backward reconstruction of video signals from the data last received when some signal is lost midway in the transmission. Schemes for a symmetrical RVLC requiring only a single code table and for an asymmetrical RVLC having short average code length are introduced. They compare favorably with other reversible codes such as B2 codes in several aspects.
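The two-way decodability of a symmetric RVLC can be sketched with a toy codebook of palindromic, prefix-free codewords; because each codeword is a palindrome, the same table decodes in both directions. The codewords below are hypothetical, not the ones proposed in the paper.

```python
def decode_forward(bitstr, codebook):
    """Instantaneous left-to-right decode of a prefix-free code."""
    out, buf = [], ""
    for b in bitstr:
        buf += b
        if buf in codebook:
            out.append(codebook[buf])
            buf = ""
    assert buf == "", "incomplete codeword at end of stream"
    return out

def decode_backward(bitstr, codebook):
    """Instantaneous right-to-left decode; palindromic codewords
    make the forward table also suffix-free."""
    out, buf = [], ""
    for b in reversed(bitstr):
        buf = b + buf
        if buf in codebook:
            out.append(codebook[buf])
            buf = ""
    assert buf == "", "incomplete codeword at start of stream"
    return list(reversed(out))

# Hypothetical symmetric RVLC: palindromic and prefix-free.
codebook = {"0": "a", "11": "b", "101": "c"}
msg = "0111010"  # encodes a, b, c, a
assert decode_forward(msg, codebook) == ["a", "b", "c", "a"]
assert decode_backward(msg, codebook) == ["a", "b", "c", "a"]
```

If the tail of a stream is lost, the surviving prefix still decodes forward; if the head is lost, the surviving suffix still decodes backward, which is exactly the video reconstruction use case the abstract describes.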

290 citations


Journal ArticleDOI
17 Sep 1995
TL;DR: A new family of maximum distance separable (MDS) array codes is presented, and it is shown that the upper bound obtained from these codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.
Abstract: A new family of maximum distance separable (MDS) array codes is presented. The code arrays contain p information columns and r independent parity columns, each column consisting of p-1 bits, where p is a prime. We extend a previously known construction for the case r=2 to three and more parity columns. It is shown that when r=3 such extension is possible for any prime p. For larger values of r, we give necessary and sufficient conditions for our codes to be MDS, and then prove that if p belongs to a certain class of primes these conditions are satisfied up to r ≤ 8. One of the advantages of the new codes is that encoding and decoding may be accomplished using simple cyclic shifts and XOR operations on the columns of the code array. We develop efficient decoding procedures for the case of two- and three-column errors. This again extends the previously known results for the case of a single-column error. Another primary advantage of our codes is related to the problem of efficient information updates. We present upper and lower bounds on the average number of parity bits which have to be updated in an MDS code over GF(2^m), following an update in a single information bit. This average number is of importance in many storage applications which require frequent updates of information. We show that the upper bound obtained from our codes is close to the lower bound and, most importantly, does not depend on the size of the code symbols.
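The XOR-based encoding and the small-update property can be illustrated in the simplest possible case, r=1, where the single parity column is just the bitwise XOR of the information columns. This is a sketch only; the paper's codes use r ≥ 2 with cyclically shifted XOR combinations.

```python
def parity_column(info_cols):
    """XOR parity across information columns (each column holds p-1 bits)."""
    return [sum(col[i] for col in info_cols) % 2
            for i in range(len(info_cols[0]))]

info = [[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 1]]  # 3 information columns
par = parity_column(info)
assert par == [0, 0, 0, 1]

# Updating one information bit requires updating exactly one parity bit --
# the update-cost property the paper bounds for general r:
info[1][2] ^= 1
par[2] ^= 1
assert par == parity_column(info)
```

For r parity columns the analogous cost grows with r; the paper's contribution is showing its average stays near the lower bound independently of symbol size.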

258 citations


Proceedings ArticleDOI
18 Jun 1995
TL;DR: A simple feed-forward correction technique based on pilot cells is proposed, that dramatically reduces the degradation due to phase-noise and allows the design of low-cost tuners through specifying the required phase- noise characteristics.
Abstract: In OFDM transmission schemes, phase-noise from oscillator instabilities in the receiver is a potentially serious problem, especially when bandwidth efficient, high order signal constellations are employed. The paper analyses the two effects of phase-noise: inter-carrier interference (ICI) and a phase error common to all OFDM sub-carriers. Through numerical integration, the ICI power can be evaluated and is shown as a function of the number of OFDM sub-carriers and various parameters of the phase-noise model. Increasing the number of sub-carriers causes an increase in the ICI power, which our analysis indeed shows to become a potential problem, since it can lead to a BER floor. The analysis allows the design of low-cost tuners through specifying the required phase-noise characteristics. A similar technique is applied to calculate the variance of the common phase error. After showing that the common phase error is essentially uncorrelated from symbol to symbol, we propose a simple feed-forward correction technique based on pilot cells, that dramatically reduces the degradation due to phase-noise. This is confirmed by BER simulations of a coded OFDM scheme (proposed for terrestrial transmission of digital television) with 64 QAM.
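The pilot-based correction of the common phase error can be sketched as follows: average the phase of each received pilot times the conjugate of its known value, then de-rotate all sub-carriers. This is a simplified noise-free model, not the paper's exact estimator; the pilot values are hypothetical.

```python
import cmath

def estimate_cpe(rx_pilots, known_pilots):
    """Estimate the common phase error as the phase of the sum of
    rx * conj(known) over the pilot cells (noise-free sketch)."""
    acc = sum(r * k.conjugate() for r, k in zip(rx_pilots, known_pilots))
    return cmath.phase(acc)

def derotate(symbols, cpe):
    rot = cmath.exp(-1j * cpe)
    return [s * rot for s in symbols]

known = [1 + 0j, 1 + 0j, -1 + 0j, 1 + 0j]  # hypothetical pilot values
true_cpe = 0.3                             # radians, common to all sub-carriers
rx = [k * cmath.exp(1j * true_cpe) for k in known]

cpe_hat = estimate_cpe(rx, known)
assert abs(cpe_hat - true_cpe) < 1e-9
corrected = derotate(rx, cpe_hat)
assert abs(corrected[0] - (1 + 0j)) < 1e-9
```

Because the common phase error is the same on every sub-carrier (unlike ICI), one estimate per OFDM symbol suffices, which is why this feed-forward correction is cheap.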

252 citations


Journal ArticleDOI
TL;DR: Optimal error estimates are derived for a complete discretization of linear parabolic problems using space–time finite elements based on the orthogonality of the Galerkin procedure and the use of strong stability estimates.
Abstract: Optimal error estimates are derived for a complete discretization of linear parabolic problems using space–time finite elements. The discretization is done first in time using the discontinuous Galerkin method and then in space using the standard Galerkin method. The underlying partitions in time and space need not be quasi uniform and the partition in space may be changed from time step to time step. The error bounds show, in particular, that the error may be controlled globally in time on a given tolerance level by controlling the discretization error on each individual time step on the same (given) level, i.e., without error accumulation effects. The derivation of the estimates is based on the orthogonality of the Galerkin procedure and the use of strong stability estimates. The particular and precise form of these error estimates makes it possible to design efficient adaptive methods with reliable automatic error control for parabolic problems in the norms under consideration.

245 citations


Journal ArticleDOI
TL;DR: A combinatorial analysis is presented to derive a closed-form expression for the number of transmission errors that occur in a block transmitted through a Gilbert channel that simplifies the computations needed to investigate the tradeoffs among the decoding error probability, degree of interleaving, and the error-correction ability of a code.
Abstract: Presents a combinatorial analysis to derive a closed-form expression for the number of transmission errors that occur in a block transmitted through a Gilbert channel. This expression simplifies the computations needed to investigate the tradeoffs among the decoding error probability, degree of interleaving, and the error-correction ability of a code. The authors illustrate how a designer may apply the method to determine different combinations of the degree of interleaving and error correction ability to achieve a specified decoding error rate.
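The Gilbert channel underlying this analysis can be simulated directly as a two-state Markov chain with an error-free Good state and an error-prone Bad state. The parameter values below are hypothetical, chosen only to produce visible error bursts; the paper derives the block error statistics in closed form rather than by simulation.

```python
import random

def gilbert_block_errors(n, p_gb, p_bg, e_bad, seed=1):
    """Count bit errors in an n-bit block sent through a Gilbert channel:
    Good state is error-free; Bad state corrupts each bit with
    probability e_bad; p_gb and p_bg are the transition probabilities."""
    rng = random.Random(seed)
    state, errors = "G", 0
    for _ in range(n):
        if state == "B" and rng.random() < e_bad:
            errors += 1
        # state transition before the next bit
        if state == "G":
            if rng.random() < p_gb:
                state = "B"
        elif rng.random() < p_bg:
            state = "G"
    return errors

errs = gilbert_block_errors(10000, p_gb=0.01, p_bg=0.3, e_bad=0.5)
assert 0 < errs < 5000  # bursty, but far from uniformly bad
```

The errors cluster during Bad-state sojourns, which is why interleaving depth trades off against code strength in the way the paper quantifies.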

231 citations


PatentDOI
TL;DR: The quantized parameter bits are grouped into several categories according to their sensitivity to bit errors, and the ratio between the actual spectral envelope and the smoothed spectral envelope is used to enhance the spectral envelope.
Abstract: The quantized parameter bits are grouped into several categories according to their sensitivity to bit errors. More effective error correction codes are used to encode the most sensitive parameter bits, while less effective error correction codes are used to encode the less sensitive parameter bits. This method improves the efficiency of the error correction and improves the performance if the total bit rate is limited. The perceived quality of coded speech is improved. A smoothed spectral envelope is created in the frequency domain. The ratio between the actual spectral envelope and the smoothed spectral envelope is used to enhance the spectral envelope. This reduces distortion which is contained in the spectral envelope.
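The unequal-protection idea can be sketched with a toy scheme that triples the error-sensitive bits and leaves the rest uncoded. The patent applies real error correction codes of different strengths to each group; triple repetition with majority vote stands in here purely for illustration.

```python
def uep_encode(sensitive_bits, robust_bits):
    """Triple-repetition for sensitive bits, no coding for the rest
    (toy stand-in for stronger/weaker error correction codes)."""
    return [b for bit in sensitive_bits for b in (bit, bit, bit)] + list(robust_bits)

def uep_decode(stream, n_sensitive):
    coded, rest = stream[:3 * n_sensitive], stream[3 * n_sensitive:]
    # majority vote over each repeated triple
    sens = [1 if sum(coded[3 * i:3 * i + 3]) >= 2 else 0
            for i in range(n_sensitive)]
    return sens, list(rest)

tx = uep_encode([1, 0], [1, 1, 0])
tx[1] ^= 1  # a single error in a protected bit is corrected by the vote
sens, rest = uep_decode(tx, 2)
assert sens == [1, 0] and rest == [1, 1, 0]
```

The point of the patent's grouping is the same budget argument: when the total bit rate is fixed, spending redundancy on the sensitive bits buys more perceived quality than spreading it evenly.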

217 citations


Proceedings ArticleDOI
21 Jun 1995
TL;DR: The paper formulates and discusses timing problems in real-time systems from the sampled data point of view and different ways to eliminate the effects of communication delays are considered.
Abstract: The paper formulates and discusses timing problems in real-time systems from the sampled data point of view. Different ways to eliminate the effects of communication delays are considered.

213 citations


01 Jan 1995
TL;DR: The results obtained show that turbo-equalization manages to overcome multipath effects, totally on Gaussian channels, and partially but still satisfactorily on Rayleigh channels.
Abstract: This paper presents a receiving scheme intended to combat the detrimental effects of intersymbol interference for digital transmissions protected by convolutional codes. The receiver performs two successive soft-output decisions, achieved by a symbol detector and a channel decoder, through an iterative process. At each iteration, extrinsic information is extracted from the detection and decoding steps and is then used at the next iteration as in turbo-decoding. From the implementation point of view, the receiver can be structured in a modular way and its performance, in bit error rate terms, is directly related to the number of modules used. Simulation results are presented for transmissions on Gaussian and Rayleigh channels. The results obtained show that turbo-equalization manages to overcome multipath effects, totally on Gaussian channels, and partially but still satisfactorily on Rayleigh channels.

194 citations


Journal ArticleDOI
TL;DR: In this paper, a new class of punctured convolutional codes that are complementary (CPC codes) is proposed and analyzed, which is called a type III hybrid ARQ scheme.
Abstract: Presents a new class of punctured convolutional codes that are complementary (CPC codes). A set of punctured convolutional codes derived from the same original low rate code are said to be complementary if they are equivalent (in terms of their distance properties) and if when combined yield at least the original low rate code. Based on these CPC codes the author proposes and analyzes a variation of the type II hybrid ARQ scheme which is called a type III hybrid ARQ scheme. With the type III hybrid ARQ scheme, the starting code rate can be chosen to match the channel noise requirements, and as with the type II scheme, packets that are detected in error are not discarded, but are combined with complementary transmissions provided by the transmitter to help recover the transmitted message. The main advantage is that any complementary sequence sent for a packet that is detected with errors is self decodable. That is, the decoder does not have to rely on previously received sequences for the same data packet for decoding, as is generally the case with incremental redundancy ARQ schemes. This feature is desirable especially in situations where a transmitted packet can be lost or severely damaged as a result of interference. CPC codes can find applications in diversity transmission systems. A novel complementary diversity scheme which makes use of CPC codes is briefly discussed.
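The defining property of complementary punctured codes — that the punctured streams combine back into at least the mother code output — can be sketched with hypothetical period-4 puncturing masks. The bit values and masks below are illustrative, not taken from the paper.

```python
def puncture(coded, mask):
    """Keep (position, bit) pairs where the periodic mask is 1."""
    return [(i, b) for i, b in enumerate(coded) if mask[i % len(mask)]]

# Hypothetical complementary masks: together they cover every position.
mask_a = [1, 0, 1, 0]
mask_b = [0, 1, 0, 1]
coded = [1, 1, 0, 1, 0, 0, 1, 0]  # example mother-code output bits

tx1 = puncture(coded, mask_a)  # first transmission
tx2 = puncture(coded, mask_b)  # complementary, self-decodable retransmission

# Combining both received sequences restores the full mother-code output:
combined = [None] * len(coded)
for i, b in tx1 + tx2:
    combined[i] = b
assert combined == coded
```

Because each mask defines an equivalent punctured code, either transmission alone is decodable at the punctured rate; combining them yields the stronger low-rate mother code, which is the type III hybrid ARQ mechanism.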

Patent
11 Sep 1995
TL;DR: In this paper, a high-speed byte-parallel pipelined error-correcting system for Reed-Solomon codes is proposed, which can be used with any type of parallel data storage or transmission media to create an arbitrary level of fault tolerance and allow media previously considered unreliable to be used effectively in highly reliable memory or communications systems.
Abstract: A high-speed byte-parallel pipelined error-correcting system for Reed-Solomon codes includes a parallelized and pipelined encoder and decoder and a feedback failure location system. Encoding is accomplished in a parallel fashion by multiplying message words by a generator matrix. Decoding is accomplished with or without byte failure location information by multiplying the received word by an error detection matrix, solving the key equation and generating the most-likely error word and code word in a parallel and pipelined fashion. Parallelizing and pipelining allows inputs to be received at very high (fiber optic) rates and outputs to be delivered at correspondingly high rates with minimum delay. The error-correcting system can be used with any type of parallel data storage or transmission media to create an arbitrary level of fault-tolerance and allows media previously considered unreliable to be used effectively in highly reliable memory or communications systems.

Proceedings ArticleDOI
F. Daffara1, O. Adami1
25 Jul 1995
TL;DR: The authors have analytically derived the frequency detector characteristic curve and its noise power spectral density and have shown that it permits a considerable improvement in the noise level to be achieved.
Abstract: Deals with the carrier frequency synchronization of orthogonal multicarrier systems, which are an effective transmission technique for coping with the typical channel impairments present in mobile reception. A new carrier frequency detector is introduced and its performance thoroughly analyzed in the presence of a multipath channel. In particular the authors have analytically derived the frequency detector characteristic curve and its noise power spectral density. They have compared the new algorithm with other known algorithms and have shown that it permits a considerable improvement in the noise level to be achieved.

Proceedings ArticleDOI
25 Jul 1995
TL;DR: Different modulation schemes supporting multiple data rates in a direct sequence code division multiple access (DS/CDMA) system are studied, focusing on how to support personal communication services.
Abstract: Different modulation schemes supporting multiple data rates in a direct sequence code division multiple access (DS/CDMA) system are studied, focusing on how to support personal communication services. Both AWGN and multipath Rayleigh fading channels are considered. It is shown that the multi processing-gain scheme and the multi-channel scheme have almost the same performance. However, the multi-channel scheme has some advantages due to near-far resistance, easier code design and easier multi-user receiver construction. The drawback, though, is the need for linear amplifiers. A multi-modulation scheme is also possible, but the performance for the users with the high data rates is significantly worse than for the other schemes.

Patent
02 Jun 1995
TL;DR: In this article, a system and method is described for wireless transmission of information which is subject to fading, using an RF carrier modulated with a subcarrier modulated with the information.
Abstract: A system and method is disclosed for wireless transmission of information which is subject to fading by using a RF carrier modulated with a subcarrier modulated with the information. The system has a bus interface which communicates with a digital signal processor which controls the transmitting and receiving circuitry functions. The bus interface is for connection to a computer bus which is connected to a computer which originates information to be transmitted by transmitting circuitry and which receives information from receiving circuitry. The digital signal processor provides first and second encoded information streams each comprising the information to be transmitted with the second stream being delayed by a time delay interval with respect to the first stream which is equal to or greater than the fading interval. The first and second encoded information streams modulate cycles of the subcarrier to produce first and second parallel information streams which are time offset by the time delay interval. The receiving circuitry has a detector for detecting transmitted first and second parallel information streams with the second parallel information stream being delayed from the first parallel information stream by the time delay interval. The digital signal processor determines if faded information is present in the frames of the detected first and second parallel information streams by processing the error correction code therein to determine if a number of bit errors are present which exceed the bit error correction capacity of the error correction code. 
The digital signal processor places an error marker within the detected first and second parallel information streams to mark each faded information unit and controls replacement of each error marker within at least one of the first and second parallel information streams with replacement data bits within a frame in one of the first and second parallel information streams which were time offset at transmission by the time delay interval to produce error free transmitted information.

Patent
05 Jun 1995
TL;DR: In this article, a method for providing error correction for an array of disks using non-volatile random access memory (NV-RAM) is described, which is used to increase the speed of RAID recovery from a disk error(s).
Abstract: A method is disclosed for providing error correction for an array of disks using non-volatile random access memory (NV-RAM). Non-volatile RAM is used to increase the speed of RAID recovery from a disk error(s). This is accomplished by keeping a list of all disk blocks for which the parity is possibly inconsistent. Such a list of disk blocks is much smaller than the total number of parity blocks in the RAID subsystem. The total number of parity blocks in the RAID subsystem is typically in the range of hundreds of thousands of parity blocks. Knowledge of the number of parity blocks that are possibly inconsistent makes it possible to fix only those few blocks, identified in the list, in a significantly smaller amount of time than is possible in the prior art. The technique for safely writing to a RAID array with a broken disk is complicated. In this technique, data that can become corrupted is copied into NV-RAM before the potentially corrupting operation is performed.

Book ChapterDOI
19 Apr 1995
TL;DR: It is shown using measurements over the Internet as well as analytic modeling that the number of consecutively lost audio packets is small unless the network load is very high, which indicates that open loop error control mechanisms based on forward error correction would be adequate to reconstruct most lostaudio packets.
Abstract: We consider the problem of distributing audio data over networks such as the Internet that do not provide support for real-time applications. Experiments with such networks indicate that audio quality is mediocre in large part because of excessive audio packet losses. In this paper, we show using measurements over the Internet as well as analytic modeling that the number of consecutively lost audio packets is small unless the network load is very high. This indicates that open loop error control mechanisms based on forward error correction would be adequate to reconstruct most lost audio packets.
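A minimal open-loop FEC of the kind the measurements justify — able to rebuild any single lost packet per group — is one XOR parity packet. Real media FEC schemes are more elaborate; this is only a sketch, and it relies exactly on the paper's finding that consecutive losses are rare.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def fec_parity(packets):
    """One XOR parity packet per group of equal-length packets."""
    par = packets[0]
    for p in packets[1:]:
        par = xor_bytes(par, p)
    return par

group = [b"aaaa", b"bbbb", b"cccc"]
par = fec_parity(group)

# Packet 1 is lost in transit; rebuild it from the survivors plus parity:
recovered = xor_bytes(xor_bytes(group[0], group[2]), par)
assert recovered == group[1]
```

If two packets of the same group are lost, this scheme fails; the measurement result that consecutive losses are short is what makes such a lightweight code adequate for audio.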

Patent
Kumiko Nakakita1, Keiji Tsunoda1
28 Dec 1995
TL;DR: In this article, a scheme for error control on AAL in ATM networks capable of realizing a reliable communication with a high throughput and a low latency is proposed, where the segmented data are sequentially written into each column of a matrix-shaped data region in an interleaver, while variably setting the last column of the data region in the interleaver.
Abstract: A scheme for error control on AAL in ATM networks capable of realizing a reliable communication with a high throughput and a low latency. On AAL, the segmented data are sequentially written into each column of a matrix-shaped data region in an interleaver, while variably setting a last column of the data region in the interleaver. Then, an error control code for the data up to the last column in each row of the data region in the interleaver is obtained and written into a corresponding location within a matrix-shaped error control code region in the interleaver. The contents of each column of the data region and the error control code region in the interleaver are then read out, and a prescribed header/trailer is attached to a prescribed number of columns of the data and/or the error control codes read out from the interleaver to form a data unit. Each data unit is sequentially given to a lower layer such that data units are transmitted in the form of ATM cells through the ATM network and an error correction using the error control codes can be carried out for the data at a receiving side when an error occurs during data transfer.
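The column-wise interleaver write with a per-row error control code can be sketched as follows, using a single parity bit per row as a stand-in for the patent's error control code (the matrix dimensions are hypothetical):

```python
def build_interleaver(data, rows, last_col):
    """Write data column by column into a rows x last_col matrix, then
    append one parity bit per row (simplified error control code)."""
    mat = [[0] * last_col for _ in range(rows)]
    for i, b in enumerate(data):
        mat[i % rows][i // rows] = b      # fill column by column
    for r in range(rows):
        mat[r].append(sum(mat[r]) % 2)    # per-row error control bit
    return mat

mat = build_interleaver([1, 0, 1, 1, 0, 1], rows=3, last_col=2)
assert mat == [[1, 1, 0], [0, 0, 0], [1, 1, 0]]
```

Because cells carry whole columns while the code runs across rows, a lost ATM cell removes only one bit from each row's codeword, which is what lets the receiver correct cell loss with modest redundancy.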

Patent
Sr. Daniel James Malone1
08 Feb 1995
TL;DR: In this paper, a method for reducing errors in data read from a disk drive comprises the steps of determining errors, correcting the errors through the use of error correction circuitry, maintaining a metric of the errors that are read from the transducer, and when the metric reaches a threshold value, applying a toggle procedure to the transducers in an attempt to improve their performance.
Abstract: A method for reducing errors in data read from a disk drive comprises the steps of: determining errors in data read from a transducer in the disk drive; correcting the errors in data through the use of error correction circuitry; maintaining a metric of the errors in data that are read from the transducer; and when the metric reaches a threshold value, applying a toggle procedure to the transducer in an attempt to improve the transducer's performance. The method further contemplates periodically applying a toggle procedure to the transducer in an attempt to reduce the errors in data, in addition to the application of a toggle procedure when the threshold value is reached.

Journal ArticleDOI
TL;DR: In this paper, a new approach to error analysis in CFD aiming at reliable and efficient adaptive quantitative error control is proposed, based on a precise analysis of hydrodynamic stability coupled with Galerkin orthogonality.
Abstract: We critically review the available error analysis in computational fluid dynamics (CFD) and come to the conclusion that the existing error estimates are meaningless in most cases of interest. We propose a new approach to error analysis in CFD aiming at reliable and efficient adaptive quantitative error control. This is based on a precise analysis of hydrodynamic stability coupled with Galerkin orthogonality. We prove a priori- and a posteriori-type error estimates in a model case for pipe flow, formulate corresponding adaptive algorithms, and discuss the potential of this approach for adaptive error control in CFD.

Journal ArticleDOI
TL;DR: A posteriori error estimates are proved for a prototype elliptic model problem discretized by the finite element method with a canonical multigrid algorithm.
Abstract: We consider the problem of adaptive error control in the finite element method, including the error resulting from inexact solution of the discrete equations. We prove a posteriori error estimates for a prototype elliptic model problem discretized by the finite element method with a canonical multigrid algorithm. The proofs are based on a combination of so-called strong stability and the orthogonality inherent in both the finite element method and the multigrid algorithm.

Patent
31 Oct 1995
TL;DR: In this article, the authors proposed a multi-level storage device including at least a first plurality of cells storing an identical first number (greater than one) of binary data, and a corresponding second plurality of cells for storing a second number of error check and correcting words equal to the first number.
Abstract: The invention relates to a multi-level storage device including: at least a first plurality of cells storing an identical first number (greater than one) of binary data, and at least a corresponding second plurality of cells for storing a second number of error check and correcting words equal to said first number, said words being respectively associated with sets of binary data, each including at least one binary data for each cell in said first plurality. In this way, many of the known error correction algorithms can be applied to obtain comparable results to those provided by binary memories. In addition, where multi-level cells are used for storing the error check and correcting words, the device dimension requirements can also be comparable.

Patent
30 Nov 1995
TL;DR: In this paper, the transmission information is divided into a number of sequences equal to the number of used spread codes, and these sequences are error correction encoded, primarily modulated according to a selected one of a plurality of types of multi-valued modulation schemes, secondarily modulated by the selected spread codes.
Abstract: A system adaptively determines the number of spread codes to use and a type of multi-valued modulation system based on the state of the system. Transmission information is divided into a number of sequences equal to the number of used spread codes. These sequences are error correction encoded, primarily modulated according to a selected one of a plurality of types of multi-valued modulation schemes, secondarily modulated by the selected spread codes, and transmitted.

Book
01 Jan 1995
TL;DR: Channel models, basics of error control, error-detecting codes for the BSC, and protocols are presented.
Abstract: Preface. 1. Channel models. 2. Basic on error control. 3. Error detecting codes for the BSC. 4. Codes for other channels. 5. Protocols. 6. Code optimization. 7. Concluding remarks. References. Index.

Proceedings ArticleDOI
14 Sep 1995
TL;DR: CD3-OFDM achieves C/N performance similar to coherent demodulation with pilot tones when the same channel coding and modulation scheme is adopted, and can be suitable for digital television broadcasting services over selective radio channels.
Abstract: The paper describes a novel channel estimation scheme (identified as CD3, coded decision directed demodulation) for coherent demodulation of OFDM (orthogonal frequency division multiplex) signals making use of any constellation format (e.g. QPSK, 16 QAM, 64 QAM). The structure of the CD3-OFDM demodulator is described, based on a new channel estimation loop exploiting the error correction capability of a forward error correction (FEC) decoder and frequency and time domain filtering to mitigate the effects of noise and residual errors. In contrast to the conventional coherent OFDM demodulation schemes, CD3-OFDM does not require the transmission of a comb of pilot tones for channel estimation and equalisation, therefore yielding a significant improvement in spectrum efficiency (typically between 5% and 15%). The performance of the system with QPSK and 64 QAM modulations is analysed by computer simulations on AWGN and frequency selective channels. The results indicate that CD3-OFDM achieves C/N performance similar to coherent demodulation with pilot tones when the same channel coding and modulation scheme is adopted. Otherwise, when the additional capacity is exploited to increase the FEC redundancy instead of the useful bit-rate, CD3 can offer significant C/N advantages (typically from 2 to 5 dB depending on the channel characteristics). Therefore CD3-OFDM can be suitable for digital television broadcasting services over selective radio channels.

Patent
Christopher P. Zook1
06 Jun 1995
TL;DR: In this article, CRC check remainder bytes are generated by using input data from a read sector to obtain regenerated CRC bytes for the read sector, and adding the regenerated bytes to incoming CRC bytes of the read sectors.
Abstract: In an error correction method for rotating magnetic media, CRC check remainder bytes are generated by using input data from a read sector to obtain regenerated CRC bytes for the read sector, and adding the regenerated CRC bytes to incoming CRC bytes of the read sector. Error patterns for correcting errors in the read sector are generated. Both the CRC check remainder bytes and the error patterns are used for confirming that the error patterns accurately correct the errors in the read sector. The confirmation involves loading a predetermined number of the CRC check remainder bytes into a corresponding number of registers (1302). For each of a plurality of error patterns of the sector, the error pattern is added to the register (1302) having its CRC check remainder bytes affected by the error pattern. A determination is then made whether the contents of any of the registers (1302) indicates inaccurate correction of the errors.

Journal ArticleDOI
TL;DR: In this article, an on-line error compensation system for coordinate measuring machines (CMMs) is described, which is based on three laser optical Multi-Degree-of-Freedom Measurement Systems (MDFM systems), each for one axis of CMMs.
Abstract: An on-line error compensation system for coordinate measuring machines (CMMs) is described, which is based on three laser optical Multi-Degree-of-Freedom Measurement Systems (MDFM systems), each for one axis of CMMs. Twelve of the twenty-one error components associated with a CMM are measured on-line by these MDFM systems. The remaining error components are measured by off-line methods which use commercially available measurement systems, such as a laser interferometer. Two mathematical error models have been developed to synthesize the error components and to predict the errors at the probe stylus tip. One model corresponds to the conventional off-line error compensation and the other to the on-line error compensation. The errors predicted by these models are then subtracted from the nominal coordinates of the CMM, thus improving its measurement accuracy. Diagonal tests using a laser interferometer system were designed to check the error compensation effect. Test results showed that the use of the on-line compensation system can further improve the compensation effect as compared with the conventional off-line compensation system.

Journal ArticleDOI
Yu-Dong Yao1
TL;DR: An effective go-back-N ARQ scheme is proposed which estimates the channel state in a simple manner, and adaptively switches its operation mode in a channel where error rates vary slowly.
Abstract: In nonstationary channels, error rates vary considerably. The author proposes an effective go-back-N ARQ scheme which estimates the channel state in a simple manner, and adaptively switches its operation mode in a channel where error rates vary slowly. It provides higher throughput than other comparable ARQ schemes under a wide variety of error rate conditions.

Journal ArticleDOI
TL;DR: Techniques for, and measurements of the performance of, fast software implementations of the cyclic redundancy check (CRC), weighted sum codes (WSC), one's-complement checksum, Fletcher (1982) checksum, CXOR checksum, and block parity code are discussed.
Abstract: Software implementations of error detection codes are considered to be slow compared to other parts of the communication system. This is especially true for powerful error detection codes such as CRC. However, we have found that powerful error detection codes can run surprisingly fast in software. We discuss techniques for, and measure the performance of, fast software implementation of the cyclic redundancy check (CRC), weighted sum codes (WSC), one's-complement checksum, Fletcher (1982) checksum, CXOR checksum, and block parity code. Instruction count alone does not determine the fastest error detection code. Our results show the computer memory hierarchy also affects performance. Although our experiments were performed on a Sun SPARCstation LX, many of the techniques and conclusions will apply to other processors and error detection codes. Given the performance of various error detection codes, a protocol designer can choose a code with the desired speed and error detection power that is appropriate for his network and application.
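As an example of one of the codes compared, here is a straightforward software Fletcher checksum over bytes (mod 255). The paper's point is that the speed of a loop like this depends on the memory hierarchy as well as on its instruction count.

```python
def fletcher16(data):
    """Fletcher checksum over bytes, modulo 255: a running sum s1 and a
    running sum-of-sums s2, packed into 16 bits."""
    s1 = s2 = 0
    for b in data:
        s1 = (s1 + b) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

good = fletcher16(b"error detection")
corrupted = fletcher16(b"error detectioN")
assert good != corrupted  # a single changed byte alters the checksum
```

Unlike a plain one's-complement sum, the second accumulator makes the result position-dependent, so Fletcher also catches many reordering errors at nearly the same cost per byte.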

Journal ArticleDOI
TL;DR: This paper presents the first rigorous and quantitative theoretical analysis of the conversion error introduced by an important type of D/A converter with dynamic element matching and provides an expression for the power of the white noise in terms of the power of the input sequence and the component matching errors.
Abstract: A known approach to reducing harmonic distortion in D/A converters involves a technique called dynamic element matching. The idea is not to reduce the power of the overall conversion error but rather to give it a random, noise-like structure. By reducing the correlation among successive samples of the conversion error, harmonic distortion is reduced. This paper presents the first rigorous and quantitative theoretical analysis of the conversion error introduced by an important type of D/A converter with dynamic element matching. In addition to supporting previously published experimental results that indicate the conversion error consists of white noise instead of harmonic distortion, the analysis provides an expression for the power of the white noise in terms of the power of the input sequence and the component matching errors. A yield estimation technique based on the expression is presented that can be used to estimate how the power of the white noise varies across different copies of the same D/A converter circuit for any given component matching error statistics.
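The decorrelation idea can be illustrated with a toy thermometer-coded DAC model: with a fixed element selection the mismatch error repeats every sample, while a rotating (barrel-shift) selection, one simple DEM variant, makes it vary from sample to sample. The element mismatch values are hypothetical, and this is not the specific converter type the paper analyzes.

```python
def dac_output(code, element_errors, start=0):
    """Toy thermometer DAC: output = sum of `code` unit elements, each
    with a small mismatch error. `start` chooses which elements are
    used; rotating it each sample is a simple DEM scheme."""
    n = len(element_errors)
    chosen = [(start + k) % n for k in range(code)]
    return sum(1 + element_errors[i] for i in chosen)

errors = [0.01, -0.02, 0.005, 0.015, -0.01, 0.0, 0.02, -0.015]

# Fixed selection: the conversion error is identical on every sample,
# so it correlates with the input and appears as harmonic distortion.
fixed = [dac_output(4, errors, start=0) for _ in range(4)]
assert len(set(fixed)) == 1

# Rotating (DEM) selection: the same error power, but varying sample
# to sample, i.e. decorrelated into noise.
dem = [dac_output(4, errors, start=s) for s in range(4)]
assert len(set(dem)) > 1
```

The paper's analysis quantifies exactly this trade: the error power is unchanged, but its spectrum is whitened, and the white-noise power can be expressed from the input power and the mismatch statistics.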