
Showing papers on "White noise published in 1995"


Journal ArticleDOI
TL;DR: A new optical encoding method of images for security applications is proposed and it is shown that the encoding converts the input signal to stationary white noise and that the reconstruction method is robust.
Abstract: We propose a new optical encoding method of images for security applications. The encoded image is obtained by random-phase encoding in both the input and the Fourier planes. We analyze the statistical properties of this technique and show that the encoding converts the input signal to stationary white noise and that the reconstruction method is robust.
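A rough numerical sketch of the double random-phase idea follows; the array sizes and the `drpe_encode`/`drpe_decode` helpers are illustrative assumptions, not the paper's optical setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encode(img, phase_in, phase_fourier):
    """Double random phase encoding: a random phase mask in the input
    plane, then a second random phase mask in the Fourier plane."""
    x = img * np.exp(2j * np.pi * phase_in)
    X = np.fft.fft2(x) * np.exp(2j * np.pi * phase_fourier)
    return np.fft.ifft2(X)            # encoded field: complex, noise-like

def drpe_decode(enc, phase_in, phase_fourier):
    """Apply the conjugate masks in reverse order."""
    X = np.fft.fft2(enc) * np.exp(-2j * np.pi * phase_fourier)
    x = np.fft.ifft2(X) * np.exp(-2j * np.pi * phase_in)
    return np.abs(x)                  # recover the (real, non-negative) image

img = rng.random((64, 64))            # stand-in for an input image
p1, p2 = rng.random((64, 64)), rng.random((64, 64))
enc = drpe_encode(img, p1, p2)
print(np.allclose(drpe_decode(enc, p1, p2), img))   # True: exact recovery
```

Because the second mask acts in the Fourier plane, the encoded field is statistically noise-like; decoding is simply the two conjugate masks applied in reverse order.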

2,361 citations


Journal ArticleDOI
TL;DR: It is proved in this paper that the estimation of filter coefficients may be based on the cancellation of 4th-order output cross-cumulants, and that the FIR model of mixtures is not realistic enough and must be improved.

323 citations


Journal ArticleDOI
TL;DR: In this paper, the heat equation with a random potential that is a white noise in space and time was studied in one space dimension and the statistical properties of the solution were investigated.
Abstract: We study, in one space dimension, the heat equation with a random potential that is a white noise in space and time. This equation is a linearized model for the evolution of a scalar field in a space-time-dependent random medium. It has also been related to the distribution of two-dimensional directed polymers in a random environment, to the KPZ model of growing interfaces, and to the Burgers equation with conservative noise. We show how the solution can be expressed via a generalized Feynman-Kac formula. We then investigate the statistical properties: the two-point correlation function is explicitly computed and the intermittence of the solution is proven. This analysis is carried out showing how the statistical moments can be expressed through local times of independent Brownian motions.
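A minimal Euler-Maruyama discretization can illustrate the equation's behaviour; the grid sizes and the scaling of the discretized space-time white noise below are assumptions for a quick sketch, not the paper's Feynman-Kac machinery.

```python
import numpy as np

# Sketch of the 1-D stochastic heat equation u_t = u_xx + u * xi(x, t),
# xi a space-time white noise, on a periodic grid (parameters illustrative).
rng = np.random.default_rng(1)
L, nx, T = 1.0, 128, 0.05
dx = L / nx
dt = 0.25 * dx**2                  # explicit-scheme stability bound
u = np.ones(nx)                    # flat initial condition
for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    # discretized white noise increment: variance dt / dx per cell
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)
    u = u + dt * lap + u * noise   # multiplicative random-potential term
print(u.mean(), u.std())           # a growing spread hints at intermittency
```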

283 citations


Journal ArticleDOI
TL;DR: The performance of a single-term approximation to the optimal LF classifier is evaluated analytically and is shown to be very close to that of the optimal.
Abstract: New algorithms based on the likelihood functional (LF) and approximations thereof are proposed for the problem of classifying MPSK modulations in additive white Gaussian noise. Previously introduced classifiers for this problem are theoretically interpreted as simplified versions of the ones derived here. The performance of a single-term approximation to the optimal LF classifier is evaluated analytically and is shown to be very close to that of the optimal. Furthermore, recursive algorithms for the implementation of this new quasi-log-likelihood-ratio (qLLR) classifier are derived which imply no significant increase in classifier complexity. The present method of generating classification algorithms can be generalized to arbitrary two-dimensional signal constellations.
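The average-likelihood classifier that the qLLR approximates can be sketched in a few lines. The constellation set, SNR, and the `mpsk_llf` helper below are hypothetical, and this brute-force version deliberately omits the paper's recursive implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def mpsk_llf(r, M, N0):
    """Average-likelihood functional for unit-energy M-PSK in complex AWGN;
    terms common to all hypotheses are dropped."""
    s = np.exp(2j * np.pi * np.arange(M) / M)
    corr = np.real(np.outer(r, s.conj())) * 2 / N0       # shape (N, M)
    # log of the mean over equiprobable symbols, summed over samples
    return np.sum(np.log(np.mean(np.exp(corr), axis=1)))

# hypothetical test: 500 QPSK symbols at Es/N0 = 10 dB
N0 = 10 ** (-10 / 10)
true = rng.integers(0, 4, 500)
r = np.exp(2j * np.pi * true / 4) + np.sqrt(N0 / 2) * (
    rng.standard_normal(500) + 1j * rng.standard_normal(500))
scores = {M: mpsk_llf(r, M, N0) for M in (2, 4, 8)}
print(max(scores, key=scores.get))   # expected: 4
```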

254 citations


PatentDOI
TL;DR: In this paper, a hearing compensation system for the hearing impaired comprises a plurality of bandpass filters having an input connected to an input transducer, each bandpass filter having an output connected to the input of one of a plurality of multiplicative AGC circuits.
Abstract: A hearing compensation system for the hearing impaired comprises a plurality of bandpass filters having an input connected to an input transducer and each bandpass filter having an output connected to the input of one of a plurality of multiplicative AGC circuits whose outputs are summed together and connected to the input of an output transducer. The multiplicative AGC circuits attenuate acoustic signals having a constant background level without the loss of speech intelligibility. The identification of the background noise portion of the acoustic signal is made by the constancy of the envelope of the input signal in each of the several frequency bands. The background noise that will be suppressed includes multi-talker speech babble, fan noise, feedback whistle, fluorescent light hum, and white noise.
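A toy, non-adaptive rendering of the idea is sketched below; the band edges, thresholds, and the per-band "constancy" gain rule are illustrative guesses, not the patented circuit, and a real system would adapt the gains sample by sample.

```python
import numpy as np
from scipy.signal import butter, lfilter

def multiband_agc(x, fs, bands=((100, 500), (500, 2000), (2000, 6000))):
    """Split into bands, track each band's envelope, and attenuate bands
    whose envelope is nearly constant (steady background noise)."""
    out = np.zeros(len(x))
    for lo, hi in bands:
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        xb = lfilter(b, a, x)
        be, ae = butter(2, 20 / (fs / 2))          # ~20 Hz envelope smoother
        env = lfilter(be, ae, np.abs(xb))
        # constancy measure: low relative envelope variation => background
        flatness = np.std(env) / (np.mean(env) + 1e-12)
        gain = np.clip(flatness / 0.5, 0.05, 1.0)  # attenuate steady bands
        out += gain * xb
    return out
```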

243 citations


Journal ArticleDOI
TL;DR: A set of decision criteria for identifying different types of digital modulation is developed and it is found that all modulation types of interest have been classified with success rate ≥90% at SNR = 10 dB.

242 citations


Journal ArticleDOI
TL;DR: Simulation results of a convolutional-coded communication system are presented that demonstrate the superiority of the OSA and the SSA over the conventional VA when they are used as detectors, particularly when the decision delay of the detectors equals the channel memory.
Abstract: In contrast to the conventional Viterbi algorithm (VA) which generates hard-outputs, an optimum soft-output algorithm (OSA) is derived under the constraint of fixed decision delay for detection of M-ary digital signals in the presence of intersymbol interference and additive white Gaussian noise. The OSA, a new type of the conventional symbol-by-symbol maximum a posteriori probability algorithm, requires only a forward recursion and the number of variables to be stored and recursively updated increases linearly, rather than exponentially, with the decision delay. Then, with little performance degradation, a suboptimum soft-output algorithm (SSA) is obtained by simplifying the OSA. The main computations in the SSA, as in the VA, are the add-compare-select operations. Simulation results of a convolutional-coded communication system are presented that demonstrate the superiority of the OSA and the SSA over the conventional VA when they are used as detectors. When the decision delay of the detectors equals the channel memory, a significant performance improvement is achieved with only a small increase in computational complexity.

195 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider LTI systems perturbed by parametric uncertainties, modeled as white noise disturbances, and show how to maximize, via state-feedback control, the smallest norm of the noise intensity vector producing instability in the mean square sense, using convex optimization over linear matrix inequalities.

190 citations


Journal ArticleDOI
17 Sep 1995
TL;DR: It is shown that lattice codes can achieve capacity on the additive white Gaussian noise channel: for any rate R less than capacity and any ε > 0, there exists a lattice code with rate no less than R and average error probability upper-bounded by ε.
Abstract: It is shown that lattice codes can achieve capacity on the additive white Gaussian noise channel. More precisely, for any rate R less than capacity and any ε > 0, there exists a lattice code with rate no less than R and average error probability upper-bounded by ε. These lattice codes include all points of the (translated) lattice within the spherical bounding region (not just the ones inside a thin spherical shell).

164 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that if the white noise in the AR model is weakly stationary with finite fourth moments, then under the null hypothesis of no changepoint, the normalized Gaussian likelihood ratio test statistic converges in distribution to the Gumbel extreme value distribution.
Abstract: The problem of testing whether or not a change has occurred in the parameter values and order of an autoregressive model is considered. It is shown that if the white noise in the AR model is weakly stationary with finite fourth moments, then under the null hypothesis of no changepoint, the normalized Gaussian likelihood ratio test statistic converges in distribution to the Gumbel extreme value distribution. An asymptotically distribution-free procedure for testing a change of either the coefficients in the AR model, the white noise variance or the order is also proposed. The asymptotic null distribution of this test is obtained under the assumption that the third moment of the noise is zero. The proofs of these results rely on Horvath's extension of Darling-Erdos' result for the maximum of the norm of a $k$-dimensional Ornstein-Uhlenbeck process and an almost sure approximation to partial sums of dependent random variables.
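A brute-force version of the likelihood-ratio scan is easy to sketch. The code below uses a Gaussian working likelihood and least-squares AR fits; the detection threshold would come from the Gumbel asymptotics proved in the paper, and the function names are hypothetical.

```python
import numpy as np

def ar_fit_rss(x, p):
    """Least-squares AR(p) fit; returns the residual sum of squares."""
    Y = x[p:]
    X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.sum((Y - X @ coef) ** 2)

def lr_scan(x, p=1):
    """-2 log likelihood-ratio type statistic for a single change in the
    AR coefficients, maximized over candidate changepoints."""
    n = len(x)
    full = ar_fit_rss(x, p)
    best = -np.inf
    for k in range(n // 10, n - n // 10):     # keep both segments non-trivial
        split = ar_fit_rss(x[:k], p) + ar_fit_rss(x[k:], p)
        best = max(best, n * np.log(full / split))
    return best

rng = np.random.default_rng(3)
a = np.zeros(600)
for t in range(1, 600):                       # AR(1): phi jumps 0.2 -> 0.7
    phi = 0.2 if t < 300 else 0.7
    a[t] = phi * a[t - 1] + rng.standard_normal()
print(lr_scan(a))                             # large value flags a changepoint
```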

154 citations


Journal ArticleDOI
TL;DR: This numerical study of fractional Brownian noise focuses on determining the limitations of the dispersional analysis method, in particular, assessing the effects of signal length and of added noise on the estimate of the Hurst coefficient, H.
Abstract: Fractal signals can be characterized by their fractal dimension plus some measure of their variance at a given level of resolution. The Hurst exponent, H, is > 0.5 for positively correlated series, < 0.5 for negatively correlated series, and = 0.5 for random, white noise series. Several methods are available: dispersional analysis, Hurst rescaled range analysis, autocorrelation measures, and power spectral analysis. Short data sets are notoriously difficult to characterize; research to define the limitations of the various methods is incomplete. This numerical study of fractional Brownian noise focuses on determining the limitations of the dispersional analysis method, in particular, assessing the effects of signal length and of added noise on the estimate of the Hurst coefficient, H (which ranges from 0 to 1 and is 2 - D, where D is the fractal dimension). There are three general conclusions: (i) pure fractal signals of length greater than 256 points give estimates of H that are biased but have standard deviations less than 0.1; (ii) the estimates of H tend to be biased toward H = 0.5 at both high H (> 0.8) and low H (< 0.2); and (iii) added noise has relatively little effect on the estimate when H > 0.6, and the method is particularly robust for signals with high H and long series, where even 100% noise added has only a few percent effect on the estimate of H. Dispersional analysis can be regarded as a strong method for characterizing biological or natural time series, which generally show long-range positive correlation.
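Dispersional analysis itself is short enough to state in code: compute the standard deviation of bin means over a range of bin sizes m; for fractional Gaussian noise, SD(m) scales as m^(H-1). The function below is a minimal sketch with assumed parameter choices.

```python
import numpy as np

def hurst_dispersional(x, min_bins=4):
    """Dispersional analysis: slope of log SD(bin means) vs log(bin size)
    gives H - 1 for fractional Gaussian noise."""
    n = len(x)
    ms, sds = [], []
    m = 1
    while n // m >= min_bins:
        means = x[: (n // m) * m].reshape(-1, m).mean(axis=1)
        ms.append(m)
        sds.append(means.std(ddof=1))
        m *= 2
    slope = np.polyfit(np.log(ms), np.log(sds), 1)[0]
    return slope + 1.0                 # H estimate

rng = np.random.default_rng(4)
print(hurst_dispersional(rng.standard_normal(1024)))   # ~0.5 for white noise
```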

Journal ArticleDOI
TL;DR: This paper provides a new closed form expression for calculating the probability that an MPSK signal will lie in a particular decision region when received over N independent and identically distributed Rayleigh fading channels corrupted by additive white Gaussian noise.
Abstract: This paper provides a new closed form expression for calculating the probability that an MPSK signal will lie in a particular decision region when received over N independent and identically distributed Rayleigh fading channels corrupted by additive white Gaussian noise. This expression is applied to provide a unified method to derive the exact symbol error rate and bit error rate for MPSK signals considering N channel diversity reception. The N channel diversity reception techniques considered are maximal ratio combining (MRC) and selection combining. MRC with identical channels and dissimilar channels is considered. The results for MRC can be extended to provide an approximation for the error rates of MPSK under equal gain combining.
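A Monte Carlo check of the MRC case is straightforward; the closed-form expressions themselves are in the paper, and the parameters below are arbitrary.

```python
import numpy as np

# Monte Carlo SER of MPSK with N-branch maximal ratio combining over
# i.i.d. Rayleigh fading in AWGN (illustrative parameters).
rng = np.random.default_rng(5)
M, N, snr_db, trials = 4, 2, 10, 200_000
Es_N0 = 10 ** (snr_db / 10)
sym = rng.integers(0, M, trials)
s = np.exp(2j * np.pi * sym / M)
h = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
w = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2 * Es_N0)
r = h * s[:, None] + w
y = np.sum(np.conj(h) * r, axis=1)          # MRC combiner output
det = (np.round(np.angle(y) / (2 * np.pi / M)) % M).astype(int)
print("SER ~", np.mean(det != sym))
```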

Journal ArticleDOI
17 Sep 1995
TL;DR: This article examines Tomlinson-Harashima precoding (1971, 1972) on discrete-time channels having intersymbol interference and additive white Gaussian noise and the importance of symbol rate to ZF-THP performance is demonstrated.
Abstract: This article examines Tomlinson-Harashima precoding (1971, 1972) on discrete-time channels having intersymbol interference and additive white Gaussian noise. An exact expression for the maximum achievable information rate of zero-forcing (ZF) THP is derived as a function of the channel impulse response, the input power constraint, and the additive white Gaussian noise variance. Information rate bounds are provided for the minimum mean-square error (MMSE) THP. The performance of ZF-THP and MMSE-THP relative to each other and to channel capacity is explored in general and for some example channels. The importance of symbol rate to ZF-THP performance is demonstrated.
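A minimal zero-forcing THP loop for a real PAM alphabet can make the mechanics concrete; the modulo convention and the example channel below are assumptions for illustration only.

```python
import numpy as np

def thp_precode(a, b, M=4):
    """Zero-forcing Tomlinson-Harashima precoding for a monic FIR channel
    1 + b[1] z^-1 + ...: feedback filtering plus modulo-2M reduction
    (real M-ary PAM alphabet assumed)."""
    x = np.zeros(len(a))
    for n in range(len(a)):
        fb = sum(b[k] * x[n - k] for k in range(1, min(len(b), n + 1)))
        v = a[n] - fb
        x[n] = ((v + M) % (2 * M)) - M      # fold into [-M, M)
    return x

rng = np.random.default_rng(6)
b = np.array([1.0, 0.5, -0.2])              # hypothetical ISI channel
a = rng.choice([-3, -1, 1, 3], size=20).astype(float)   # 4-PAM data
x = thp_precode(a, b)
y = np.convolve(x, b)[: len(a)]             # channel output (noise-free)
a_hat = ((y + 4) % 8) - 4                   # same modulo at the receiver
print(np.allclose(a_hat, a))                # True
```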

Journal ArticleDOI
10 Sep 1995-EPL
TL;DR: In this article, the role of internal symmetries of the periodic structure is investigated from the viewpoint of optimizing the current amplitude at fixed noise intensity, and it is shown that the current increases monotonically and saturates at infinitely strong noise intensity.
Abstract: Spatially periodic structures are exposed to additive Poissonian white shot noise of zero average. Because the underlying master equation no longer obeys the principle of detailed balance, these non-equilibrium fluctuations induce a macroscopic current—even in the absence of spatial asymmetry. The resulting current can be expressed in analytical closed form and we discuss its behaviour in the limits of very weak and very strong noise intensities. We find that the current increases monotonically and—in contrast to common intuition—saturates at infinitely strong noise intensity. The role of internal symmetries of the periodic structure is investigated from the viewpoint of optimizing the current amplitude at fixed noise intensity.
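A toy Euler simulation conveys the effect: a symmetric periodic potential driven by compensated Poisson kicks (zero-average shot noise) develops a nonzero mean velocity. All parameter values below are illustrative, and the drift is generally small.

```python
import numpy as np

# Overdamped particle in V(x) = -cos(2*pi*x)/(2*pi), driven by positive
# Poisson spikes compensated by a constant negative drift (zero mean).
rng = np.random.default_rng(7)
lam, w = 5.0, 0.3            # spike rate and spike amplitude
dt, steps, ntraj = 1e-3, 50_000, 200
x = np.zeros(ntraj)
for _ in range(steps):
    force = -np.sin(2 * np.pi * x)                 # -V'(x)
    kicks = w * rng.poisson(lam * dt, ntraj)       # shot-noise increments
    x += force * dt + kicks - lam * w * dt         # compensated: zero mean
print("mean velocity ~", x.mean() / (steps * dt))
```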

Journal ArticleDOI
TL;DR: In this paper, a wavelet analysis is applied to detect and characterize singular events, or jerks, in the time series made of the last century monthly mean values of the east component of the geomagnetic field from European observatories.
Abstract: Wavelet analysis is applied to detect and characterize singular events, or singularities, or jerks, in the time series made of the last century monthly mean values of the east component of the geomagnetic field from European observatories. After choosing a well-adapted wavelet function, the analysis is first performed on synthetic series including an “internal”, or “main”, signal made of smooth variation intervals separated by singular events with different “regularities”, a white noise and an “external” signal made of the sum of a few harmonics of a long-period variation (11 years). The signatures of the main, noise, and harmonic signals are studied and compared, and the conditions in which the singular events can be clearly isolated in the composite signal are elucidated. Then we apply the method systematically to the real geomagnetic series (monthly means of Y from European observatories) and show that five and only five remarkable events are found in 1901, 1913, 1925, 1969, and 1978. The characteristics of these singularities (in particular, homogeneity of some derived functions of the wavelet transform over a large range of timescales) demonstrate that these events have a single source (of course, internal). Also the events are more singular than was previously supposed (their “regularity” is closer to 1.5 than to 2, indicating that noninteger powers of time should be used in representing the time series between the jerks).

Journal ArticleDOI
TL;DR: The parity relation approach is compared with the traditional detection filter design, and is shown to be more straightforward and have milder existence conditions; if subjected to the same specification, the two approaches yield identical residual generators.

Journal ArticleDOI
TL;DR: In this article, the authors consider the fixed-design regression model with long-range dependent normal errors and show that the finite-dimensional distributions of the properly normalized Gasser-Müller and Priestley-Chao estimators of the regression function converge to those of a white noise process.
Abstract: We consider the fixed-design regression model with long-range dependent normal errors and show that the finite-dimensional distributions of the properly normalized Gasser-Müller and Priestley-Chao estimators of the regression function converge to those of a white noise process. Furthermore, the distributions of the suitably renormalized maximal deviations over an increasingly finer grid converge to the Gumbel distribution. These results contrast with our previous findings for the corresponding problem of estimating the marginal density of long-range dependent stationary sequences.
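The Priestley-Chao estimator is compact enough to sketch directly; below it is applied to a fixed design with approximately long-range dependent errors synthesized spectrally. The f^(-0.3) amplitude law, bandwidth, and noise level are illustrative choices, not the paper's setup.

```python
import numpy as np

def priestley_chao(t, y, grid, h):
    """Priestley-Chao kernel estimator on a sorted fixed design t in [0, 1],
    Gaussian kernel, bandwidth h."""
    dt = np.diff(t, prepend=0.0)
    K = np.exp(-0.5 * ((grid[:, None] - t[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return K @ (dt * y)

rng = np.random.default_rng(8)
n = 1024
# long-range dependent errors via spectral synthesis: S(f) ~ |f|^(-0.6)
f = np.fft.rfftfreq(n, 1 / n)
amp = np.zeros_like(f)
amp[1:] = f[1:] ** (-0.3)
e = np.fft.irfft(amp * np.fft.rfft(rng.standard_normal(n)), n)
e *= 0.2 / e.std()
t = np.arange(1, n + 1) / n
y = np.sin(2 * np.pi * t) + e                 # regression function + LRD noise
print(priestley_chao(t, y, np.array([0.25, 0.5, 0.75]), h=0.05))
```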

Journal ArticleDOI
TL;DR: The generalized coherence (GC) estimate is developed as a natural generalization of the magnitude-squared coherence estimate, a widely used statistic for nonparametric detection of a common signal on two noisy channels, and is found to provide better detection performance than the MC approach in terms of the minimum signal-to-noise ratio.
Abstract: The paper introduces the generalized coherence (GC) estimate and examines its application as a statistic for detecting the presence of a common but unknown signal on several noisy channels. The GC estimate is developed as a natural generalization of the magnitude-squared coherence (MSC) estimate, a widely used statistic for nonparametric detection of a common signal on two noisy channels. The geometrical nature of the GC estimate is exploited to derive its distribution under the H0 hypothesis that the data channels contain independent white Gaussian noise sequences. Detection thresholds corresponding to a range of false alarm probabilities are calculated from this distribution. The relationship of the H0 distribution of the GC estimate to that of the determinant of a complex Wishart-distributed matrix is noted. The detection performance of the three-channel GC estimate is evaluated by simulation using a white Gaussian signal sequence in white Gaussian noise. Its performance is compared with that of the multiple coherence (MC) estimate, another nonparametric multiple-channel detection statistic. The GC approach is found to provide better detection performance than the MC approach in terms of the minimum signal-to-noise ratio on all data channels necessary to achieve desired combinations of detection and false alarm probabilities.
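The GC estimate has a compact geometric form: one minus the determinant of the Gram matrix of the unit-normalized channel vectors, which reduces to the MSC estimate for two channels. A minimal sketch with arbitrary test data:

```python
import numpy as np

def generalized_coherence(channels):
    """GC estimate: 1 - det(G), G the Gram matrix of the channel data
    vectors normalized to unit norm."""
    X = np.asarray(channels, dtype=complex)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    G = Xn @ Xn.conj().T
    return 1.0 - np.real(np.linalg.det(G))

rng = np.random.default_rng(9)
N = 256
noise = rng.standard_normal((3, N))
sig = rng.standard_normal(N)
print(generalized_coherence(noise))              # small under H0
print(generalized_coherence(noise + 0.8 * sig))  # larger with a common signal
```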

Journal ArticleDOI
TL;DR: Optimal estimation algorithms for signal filtering, prediction, and smoothing in the presence of white Gaussian noise are derived based on the method of maximum likelihood; the resulting algorithms have convenient recursive implementations that are efficient both in terms of computation and storage.
Abstract: The chaotic sequences corresponding to tent map dynamics are potentially attractive in a range of engineering applications. Optimal estimation algorithms for signal filtering, prediction, and smoothing in the presence of white Gaussian noise are derived for this class of sequences based on the method of maximum likelihood. The resulting algorithms are highly nonlinear but have convenient recursive implementations that are efficient both in terms of computation and storage. Performance evaluations are also included and compared with the associated Cramer-Rao bounds.

PatentDOI
TL;DR: In this article, a voice activity detector uses an energy estimate to detect the presence of speech in a received speech signal in a noise environment, and a set of high pass filters are used to filter the signal based upon the background noise level.
Abstract: A method and apparatus for improving sound quality in a digital cellular radio system receiver. A voice activity detector uses an energy estimate to detect the presence of speech in a received speech signal in a noise environment. When no speech is present the system attenuates the signal and inserts low pass filtered white noise. In addition, a set of high pass filters are used to filter the signal based upon the background noise level. This high pass filtering is applied to the signal regardless of whether speech is present. Thus, a combination of signal attenuation with insertion of low pass filtered white noise during periods of non-speech, along with high pass filtering of the signal, improves sound quality when decoding speech which has been encoded in a noisy environment.
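A stripped-down sketch of the receive-side processing follows. The frame length, threshold, attenuation factor, and comfort-noise cutoff are illustrative values, not the patent's, and a real system would track the noise floor adaptively rather than using the whole-signal RMS.

```python
import numpy as np
from scipy.signal import butter, lfilter

def vad_comfort(x, fs, frame=160, thresh_db=-35, atten=0.1):
    """Frame-energy VAD; non-speech frames are attenuated and low-pass
    filtered white 'comfort noise' is mixed in."""
    rng = np.random.default_rng(10)
    b, a = butter(4, 1000 / (fs / 2))            # low-pass for comfort noise
    out = np.copy(x).astype(float)
    ref = np.sqrt(np.mean(x ** 2)) + 1e-12
    for i in range(0, len(x) - frame, frame):
        seg = x[i:i + frame]
        level_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) / ref + 1e-12)
        if level_db < thresh_db:                 # declared non-speech
            cn = lfilter(b, a, rng.standard_normal(frame))
            out[i:i + frame] = atten * seg + 0.01 * ref * cn
    return out
```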

Journal ArticleDOI
TL;DR: Cyclic estimators are shown to be asymptotically equivalent to certain nonlinear least squares estimators and are compared with the maximum likelihood ones, and it is demonstrated that previously well established results on constant amplitude harmonics are special cases of the present analysis.
Abstract: Multiplicative noise causes smearing of spectral lines and thus hampers frequency estimation relying on conventional spectral analysis. In contrast, cyclic mean and correlation statistics have proved to be useful for harmonic retrieval in the presence of multiplicative and additive noise of arbitrary color and distribution. Performance analysis of cyclic estimators is carried through both for nonzero and zero mean multiplicative noises. Cyclic estimators are shown to be asymptotically equivalent to certain nonlinear least squares estimators, and are also compared with the maximum likelihood ones. Large sample variance expressions of the cyclic estimators are derived and compared with the corresponding Cramer-Rao bounds when the noises are white Gaussian. It is demonstrated that previously well established results on constant amplitude harmonics are special cases of the present analysis. Simulations not only validate the large sample performance analysis, but also provide concrete examples regarding relative statistical efficiency of the cyclic estimators.
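The cyclic-mean statistic is easy to demonstrate: even with the amplitude smeared by multiplicative noise, the cyclic mean still peaks at the harmonic frequency. The signal model and constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
N, f0 = 2048, 0.123
n = np.arange(N)
# harmonic with nonzero-mean multiplicative noise plus additive noise
x = (1.0 + 0.5 * rng.standard_normal(N)) * np.cos(2 * np.pi * f0 * n) \
    + 0.5 * rng.standard_normal(N)
freqs = np.arange(1, N // 2) / N
cyclic_mean = np.abs(np.exp(-2j * np.pi * np.outer(freqs, n)) @ x) / N
print("f0 estimate:", freqs[np.argmax(cyclic_mean)])   # ~0.123
```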

Journal ArticleDOI
TL;DR: It is shown that this intrinsically wideband HOC-based method possesses an immunity to imperfect knowledge of exact frequency locations and delivers a performance that tightly lower-bounds that of the optimal likelihood-ratio test.
Abstract: A general framework that theoretically links the higher-order correlation (HOC) domain with statistical decision theory is explored. It is then applied to the problem of classification of M-ary frequency shift keying (MFSK) signals when contaminated by additive white Gaussian noise (AWGN). In particular, we propose a novel class of classifiers that utilizes time-domain HOC operations while completely avoiding the explicit determination of the spectrum of the observed signal. It is shown that this method delivers a performance that tightly lower-bounds that of the optimal likelihood-ratio test. In addition, this intrinsically wideband HOC-based method possesses an immunity to imperfect knowledge of exact frequency locations. Substantial performance improvement is also reported over the energy-based rule whenever it is applicable.

Journal ArticleDOI
TL;DR: In this paper, the generation of correlated vectors for non-Gaussian clutter is considered for log normal, Weibull, and K-probability distributions, and the procedure for forming samples of each type of clutter is shown to be equivalent to passing white Gaussian noise through a linear filter followed by a nonlinear operation.
Abstract: The generation of correlated vectors for non-Gaussian clutter is considered for log normal, Weibull, and K-probability distributions. Previous results for log normal and Weibull distributions are summarized. Expressions for the probability distributions and moments of K-distributed clutter of any correlation are derived. Procedures for forming samples of each type of clutter are shown to be equivalent to passing white Gaussian noise through a linear filter followed by a nonlinear operation. Curves of correlation coefficients necessary for the simulation of these vectors are presented for each distribution.
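The generation recipe the paper analyzes (filtered Gaussian noise followed by a memoryless nonlinearity) can be sketched directly. Note that the nonlinearity alters the correlation; the paper's curves supply the pre-distorted input correlation needed to hit a target output correlation, which this sketch omits.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.stats import norm, weibull_min

rng = np.random.default_rng(12)
# step 1: correlated Gaussian via an AR(1) shaping filter (pole at 0.9)
g = lfilter([1.0], [1.0, -0.9], rng.standard_normal(10_000))
g /= g.std()
# step 2: memoryless nonlinearities imposing the target distributions
u = norm.cdf(g)                      # probability-integral transform
lognormal = np.exp(0.5 * g)          # log-normal clutter
weibull = weibull_min.ppf(u, c=1.5)  # Weibull clutter, shape 1.5
print(lognormal.mean(), weibull.mean())
```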

Proceedings ArticleDOI
07 Jun 1995
TL;DR: In this article, the Newton method coupled with a continuation method is studied for total-variation denoising, and the Newton-Kantorovich theorem is used to bound the domain of convergence for the nonlinear Euler-Lagrange equation.
Abstract: The denoising problem can be solved by posing it as a constrained minimization problem. The objective function is the TV norm of the denoised image, whereas the constraint is the requirement that the denoised image does not deviate too much from the observed image. The Euler-Lagrange equation corresponding to the minimization problem is a nonlinear equation. The Newton method for such an equation is known to have a very small domain of convergence. In this paper, we propose to couple the Newton method with the continuation method. Using the Newton-Kantorovich theorem, we give a bound on the domain of convergence. Numerical results are given to illustrate the convergence.
Key Words: Denoising, Total-variation, Newton method, Fixed-point method.
1 Introduction
Noise is introduced into images in the formation, transmission or recording process. In this paper, we are concerned with the removal of noise from an image. Consider the model equation u0(x, y) = u(x, y) + η(x, y) (1), where η(x, y) is a Gaussian white noise, u0(x, y) is the observed intensity function of the image and u(x, y) is the original image. Our objective is to get a reasonable approximation of u(x, y). There are many different methods proposed to obtain an estimate of u(x, y). In Rudin, Osher and …
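For flavour, here is a related but simpler solver for the same variational problem: a 1-D lagged-diffusivity fixed-point iteration. It is not the authors' Newton/continuation scheme, and the smoothing parameter, fidelity weight, and test signal are assumptions.

```python
import numpy as np

def tv_denoise_1d(u0, lam=10.0, eps=1e-3, iters=50):
    """Lagged-diffusivity fixed point for min_u TV(u) + (lam/2)||u - u0||^2,
    TV smoothed as sum sqrt(du^2 + eps^2)."""
    u = u0.copy()
    n = len(u)
    for _ in range(iters):
        du = np.diff(u)
        w = 1.0 / np.sqrt(du ** 2 + eps ** 2)     # smoothed TV weights
        # assemble the tridiagonal system (lam*I + D^T W D) u = lam*u0
        main = lam * np.ones(n)
        main[:-1] += w
        main[1:] += w
        A = np.diag(main) + np.diag(-w, 1) + np.diag(-w, -1)
        u = np.linalg.solve(A, lam * u0)
    return u

rng = np.random.default_rng(13)
clean = np.repeat([0.0, 1.0, 0.3], 50)
noisy = clean + 0.1 * rng.standard_normal(150)
print(np.mean((tv_denoise_1d(noisy) - clean) ** 2))   # well below 0.01
```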

Journal ArticleDOI
Dennis R. Morgan
TL;DR: The paper establishes a theoretical basis for the slow asymptotic convergence and suggests postfiltering as a remedy that would be useful for the full-band LMS AEC and may also be applicable to subband designs.
Abstract: In most acoustic echo canceler (AEC) applications, an adaptive finite impulse response (FIR) filter is employed with coefficients that are computed using the LMS algorithm. The paper establishes a theoretical basis for the slow asymptotic convergence that is often noted in practice for such applications. The analytical approach expresses the mean-square error trajectory in terms of eigenmodes and then applies the asymptotic theory of Toeplitz matrices to obtain a solution that is based on a general characterization of the actual room impulse response. The method leads to good approximations even for a moderate number of taps (N>16) and applies to both full-band and subband designs. Explicit mathematical expressions of the mean-square error convergence are derived for bandlimited white noise, a first-order Markov process, and, more generally, pth-order rational spectra and a direct power-law model, which relates to lowpass FIR filters. These expressions show that the asymptotic convergence is generally slow, being at best of order 1/t for bandlimited white noise. It is argued that input filter design cannot do much to improve slow convergence. However, the theory suggests postfiltering as a remedy that would be useful for the full-band LMS AEC and may also be applicable to subband designs.
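The slow tail is easy to reproduce in simulation. The sketch below uses an NLMS-style normalized update (for step-size robustness) and an arbitrary synthetic room response; it is a demonstration, not the paper's analysis.

```python
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(14)
N, T, mu = 64, 60_000, 0.02
h = rng.standard_normal(N) * np.exp(-0.05 * np.arange(N))   # synthetic "room"
b, a = butter(4, 0.5)                                       # bandlimiting filter

for name, x in (("white", rng.standard_normal(T)),
                ("bandlimited", lfilter(b, a, rng.standard_normal(T)))):
    w = np.zeros(N)
    mse = 0.0
    for n in range(N, T):
        u = x[n - N:n][::-1]                  # most recent samples first
        d = h @ u                             # echo path output
        e = d - w @ u
        w += mu * e * u / (u @ u + 1e-8)      # NLMS update
        if n > T - 5000:
            mse += e * e / 5000               # average late-time error
    print(name, "final MSE ~", mse)           # colored input converges slower
```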

Journal ArticleDOI
TL;DR: In this paper, a dual pair G and G* of smooth and generalized random variables over the white noise probability space is studied, where G is constructed by norms involving exponentials of the Ornstein-Uhlenbeck operator, while G* is its dual.
Abstract: A dual pair G and G* of smooth and generalized random variables, respectively, over the white noise probability space is studied. G is constructed by norms involving exponentials of the Ornstein-Uhlenbeck operator; G* is its dual. Sufficient criteria are proved for when a function on L(ℝ) is the L-transform of an element in G or G*.

Journal ArticleDOI
TL;DR: The a posteriori probability for the location of bursts of noise additively superimposed on a Gaussian AR process is derived to give a sequentially based restoration algorithm suitable for real-time applications.
Abstract: In this paper we derive the a posteriori probability for the location of bursts of noise additively superimposed on a Gaussian AR process. The theory is developed to give a sequentially based restoration algorithm suitable for real-time applications. The algorithm is particularly appropriate for digital audio restoration, where clicks and scratches may be modelled as additive bursts of noise. Experiments are carried out on both real audio data and synthetic AR processes and significant improvements are demonstrated over existing restoration techniques.

Journal ArticleDOI
TL;DR: A dynamic programming algorithm and a suboptimal but computationally efficient method for estimation of a chaotic signal in white Gaussian noise that produce efficient estimates at high signal-to-noise ratios are proposed.
Abstract: A dynamic programming algorithm and a suboptimal but computationally efficient method for estimation of a chaotic signal in white Gaussian noise are proposed. The nonlinear map is assumed known so that only the initial condition need be estimated. Computer simulations confirm that both approaches produce efficient estimates at high signal-to-noise ratios.

Journal ArticleDOI
TL;DR: In this article, the authors use model data to assess some of the uncertainties involved in estimating when we could expect to detect ocean greenhouse warming signals, which is defined as the length of a climate time series required to detect a given linear trend in the presence of the natural climate variability.
Abstract: Recent investigations have considered whether it is possible to achieve early detection of greenhouse-gas-induced climate change by observing changes in ocean variables. In this study we use model data to assess some of the uncertainties involved in estimating when we could expect to detect ocean greenhouse warming signals. We distinguish between detection periods and detection times. As defined here, detection period is the length of a climate time series required in order to detect, at some prescribed significance level, a given linear trend in the presence of the natural climate variability. Detection period is defined in model years and is independent of reference time and the real time evolution of the signal. Detection time is computed for an actual time-evolving signal from a greenhouse warming experiment and depends on the experiment's start date. Two sources of uncertainty are considered: those associated with the level of natural variability or noise, and those associated with the time-evolving signals. We analyze the ocean signal and noise for spatially averaged ocean circulation indices such as heat and fresh water fluxes, rate of deep water formation, salinity, temperature, transport of mass, and ice volume. The signals for these quantities are taken from recent time-dependent greenhouse warming experiments performed by the Max Planck Institute for Meteorology in Hamburg with a coupled ocean-atmosphere general circulation model. The time-dependent greenhouse gas increase in these experiments was specified in accordance with scenario A of the Intergovernmental Panel on Climate Change. The natural variability noise is derived from a 300-year control run performed with the same coupled atmosphere-ocean model and from two long (>3000 years) stochastic forcing experiments in which an uncoupled ocean model was forced by white noise surface flux variations. In the first experiment the stochastic forcing was restricted to the fresh water fluxes, while in the second experiment the ocean model was additionally forced by variations in wind stress and heat fluxes. The mean states and ocean variability are very different in the three natural variability integrations. A suite of greenhouse warming simulations with identical forcing but different initial conditions reveals that the signal estimated from these experiments may evolve in noticeably different ways for some ocean variables. The combined signal and noise uncertainties translate into large uncertainties in estimates of detection time. Nevertheless, we find that ocean variables that are highly sensitive indicators of surface conditions, such as convective overturning in the North Atlantic, have shorter signal detection times (35–65 years) than deep-ocean indicators (≥100 years). We investigate also whether the use of a multivariate detection vector increases the probability of early detection. We find that this can yield detection times of 35–60 years (relative to a 1985 reference date) if signal and noise are projected onto a common “fingerprint” which describes the expected signal direction. Optimization of the signal-to-noise ratio by (spatial) rotation of the fingerprint in the direction of low-noise components of the stochastic forcing experiments noticeably reduces the detection time (to 10–45 years). However, rotation in space alone does not guarantee an improvement of the signal-to-noise ratio for a time-dependent signal. This requires an “optimal fingerprint” strategy in which the detection pattern (fingerprint) is rotated in both space and time.

Journal ArticleDOI
TL;DR: This paper presents the first rigorous and quantitative theoretical analysis of the conversion error introduced by an important type of D/A converter with dynamic element matching and provides an expression for the power of the white noise in terms of the power of the input sequence and the component matching errors.
Abstract: A known approach to reducing harmonic distortion in D/A converters involves a technique called dynamic element matching. The idea is not to reduce the power of the overall conversion error but rather to give it a random, noise-like structure. By reducing the correlation among successive samples of the conversion error, harmonic distortion is reduced. This paper presents the first rigorous and quantitative theoretical analysis of the conversion error introduced by an important type of D/A converter with dynamic element matching. In addition to supporting previously published experimental results that indicate the conversion error consists of white noise instead of harmonic distortion, the analysis provides an expression for the power of the white noise in terms of the power of the input sequence and the component matching errors. A yield estimation technique based on the expression is presented that can be used to estimate how the power of the white noise varies across different copies of the same D/A converter circuit for any given component matching error statistics.
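A toy thermometer-coded DAC model shows the whitening effect that the paper quantifies; the element count, mismatch level, and test signal below are arbitrary, and real DEM schemes use structured rather than fully random selection.

```python
import numpy as np

rng = np.random.default_rng(15)
n_elem, N = 32, 4096
elem = 1.0 + 0.01 * rng.standard_normal(n_elem)     # 1% unit-element mismatch
code = np.round((np.sin(2 * np.pi * 5 * np.arange(N) / N) + 1) / 2 * n_elem).astype(int)

fixed, dem = np.empty(N), np.empty(N)
for i, c in enumerate(code):
    fixed[i] = elem[:c].sum()                        # always the same elements
    dem[i] = elem[rng.permutation(n_elem)[:c]].sum() # randomized selection
err_fixed = fixed - code
err_dem = dem - code
# fixed-selection error is signal-correlated (spectral peaks at harmonics);
# randomized selection spreads the same error power into a flat, white floor
print(np.abs(np.fft.rfft(err_fixed)).max(), np.abs(np.fft.rfft(err_dem)).max())
```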