
Showing papers on "Signal-to-noise ratio published in 1992"


Journal ArticleDOI
TL;DR: Several important digital processing techniques for optical-fiber sensor systems that use electronically scanned white-light interferometry are presented, which greatly increase the dynamic range of the measurement in a low signal-to-noise-ratio environment.
Abstract: Several important digital processing techniques for optical-fiber sensor systems that use electronically scanned white-light interferometry are presented. These include fringe restoration, fringe-order identification, and resolution enhancement techniques. A pure low-coherence interference fringe pattern is restored by dividing the signal, pixel by pixel, by the beam intensity profile. The central (zero-order) fringe of the pattern is identified by using a centroid algorithm. A linear interpolation or a localized centroid algorithm is used to further enhance the phase resolution. Theoretical analyses, computer simulations, and experimental verifications have shown that these techniques greatly increase the dynamic range of the measurement in a low signal-to-noise-ratio environment.

119 citations
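The centroid step of the fringe-order identification above can be sketched in a few lines. This is a minimal simulation, not the paper's implementation: the pixel count, envelope width, and fringe period are invented, and the zero-order fringe is assumed to sit where the coherence-envelope modulation is deepest.

```python
import math

def centroid_index(samples):
    """Centroid of the fringe modulation: each pixel is weighted by its
    squared deviation from the mean intensity, so the estimate is pulled
    toward the zero-order fringe, where modulation depth is largest."""
    mean = sum(samples) / len(samples)
    weights = [(s - mean) ** 2 for s in samples]
    return sum(i * w for i, w in enumerate(weights)) / sum(weights)

# Simulated low-coherence interferogram: cosine fringes under a Gaussian
# coherence envelope centred on pixel 312 of a 640-pixel scanned array.
CENTRE, WIDTH, PERIOD = 312.0, 40.0, 11.0
pattern = [1.0 + math.exp(-((i - CENTRE) / WIDTH) ** 2)
           * math.cos(2 * math.pi * (i - CENTRE) / PERIOD)
           for i in range(640)]

fringe_centre = centroid_index(pattern)  # close to 312
```

Because both the envelope and the fringes are symmetric about the zero-order position, the weighted centroid lands on it to within a fraction of a pixel here; with noise, a localized centroid over a window around the peak (as the abstract mentions) is more robust.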


Journal ArticleDOI
TL;DR: In this paper, the authors describe a technique for front-end signal processing of signals from LHC or SSC detectors which precisely defines the origin of an event in time while maintaining amplitude measurement with an excellent signal to noise ratio.
Abstract: We describe a new technique for front-end signal processing of signals from LHC or SSC detectors which precisely defines the origin of an event in time while maintaining amplitude measurement with an excellent signal to noise ratio. The method is designed for use with silicon detectors whose leakage currents may be substantially increased during the lifetime of an experiment by radiation damage, although it is likely to be applicable to other types of detector. It makes use of a shaping amplifier with a time constant of several beam crossing intervals and is particularly well matched to CMOS front ends, where low power consumption and low noise are best achieved by utilising pulse shapes with time constants ∼50 ns. It is based on discrete time filtering of data extracted from an analogue pipeline after a first level trigger. A finite impulse response type filter deconvolutes the sampled voltages of a shaped pulse to retrieve the original impulse signal with high precision. We describe the mathematical basis of the technique and its implications for timing and signal to noise. Measurements have been made on a CMOS amplifier intended as a prototype for readout of silicon microstrip detectors at LHC which demonstrate the power of this approach. A CMOS circuit emulating the filter is being built; it has been implemented with extremely low power consumption.

109 citations
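The deconvolution idea can be illustrated with a toy discrete shaper rather than the paper's actual CMOS pulse shape. Assume, purely for illustration, a single-pole exponential tail h[n] = A**n with a 50 ns time constant sampled every 25 ns; its exact inverse is then the two-tap FIR filter [1, −A], which recovers the original impulses from the shaped samples.

```python
import math

A = math.exp(-25.0 / 50.0)  # illustrative: 25 ns sampling, 50 ns shaper time constant

def shape(impulses):
    """Convolve a train of charge impulses with the exponential shaper h[n] = A**n."""
    out, acc = [], 0.0
    for x in impulses:
        acc = acc * A + x
        out.append(acc)
    return out

def deconvolve(samples):
    """Two-tap FIR inverse filter: y[n] = x[n] - A*x[n-1] retrieves the impulses."""
    return [s - A * (samples[n - 1] if n else 0.0)
            for n, s in enumerate(samples)]

bunch = [0.0] * 20
bunch[5], bunch[12] = 1.0, 0.7          # two hits in different beam crossings
recovered = deconvolve(shape(bunch))    # matches `bunch` to rounding error
```

The real filter in the paper has more taps because the CR-RC pulse shape is not a pure exponential, but the principle is the same: a short FIR filter whose coefficients invert the sampled shaper response, restoring the timing of each beam crossing.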


Patent
16 Jul 1992
TL;DR: In this article, a pocketsize electronic travel and commuter pass that can be used to make valid payment for fares or purchases of services and goods is disclosed; the pass exhibits a high signal to noise ratio.
Abstract: A pocketsize electronic travel and commuter pass which can be used for making valid payment for fares or purchases of services and goods is disclosed. The pass contains capacitive plates or inductive coils (4), operated in pairs in such a way that their mutual phasing is correct for close-proximity signal communication with an accountancy system. Noise and interference signals will appear in antiphase on the plates or coils (4) and will not affect the desired signal communication. As a result, the pass exhibits a high signal to noise ratio.

108 citations


Journal ArticleDOI
TL;DR: Weighted averages of brain evoked potentials are obtained by weighting each single EP sweep prior to averaging to maximize the signal-to-noise ratio (SNR) of the resulting average if they satisfy a generalized eigenvalue problem involving the correlation matrices of the underlying signal and noise components.
Abstract: Weighted averages of brain evoked potentials (EPs) are obtained by weighting each single EP sweep prior to averaging. These weights are shown to maximize the signal-to-noise ratio (SNR) of the resulting average if they satisfy a generalized eigenvalue problem involving the correlation matrices of the underlying signal and noise components. The signal and noise correlation matrices are difficult to estimate and the solution of the generalized eigenvalue problem is often computationally impractical for real-time processing. Correspondingly, a number of simplifying assumptions about the signal and noise correlation matrices are made which allow an efficient method of approximating the maximum SNR weights. Experimental results are given using actual auditory EP data which demonstrate that the resulting weighted average has estimated SNRs that are up to 21% greater than the conventional ensemble average SNR.

97 citations
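One common simplification in the spirit of the abstract's "simplifying assumptions" is to weight each sweep by the inverse of its estimated noise power. The sketch below uses that rule on synthetic data; the template, sweep count, and noise levels are all invented for the demonstration and are not the paper's auditory EP data.

```python
import math, random

random.seed(7)
N, SWEEPS = 64, 40
template = [math.sin(2 * math.pi * n / N) for n in range(N)]  # "true" evoked response

# Single-trial sweeps: identical response plus zero-mean noise whose power
# varies from sweep to sweep (every fourth sweep is heavily contaminated).
sigmas = [4.0 if k % 4 == 0 else 0.5 for k in range(SWEEPS)]
sweeps = [[s + random.gauss(0.0, sg) for s in template] for sg in sigmas]

def average(weights):
    total = sum(weights)
    return [sum(w * sw[n] for w, sw in zip(weights, sweeps)) / total
            for n in range(N)]

def snr(avg):
    """Empirical SNR against the known template (possible only in simulation)."""
    p_sig = sum(s * s for s in template)
    p_err = sum((a - s) ** 2 for a, s in zip(avg, template))
    return p_sig / p_err

plain = average([1.0] * SWEEPS)
# Approximate maximum-SNR weights: inverse of each sweep's noise power,
# estimated from the residual about the plain ensemble average.
noise_pow = [sum((x - p) ** 2 for x, p in zip(sw, plain)) / N for sw in sweeps]
weighted = average([1.0 / p for p in noise_pow])
```

With these settings the weighted average shows a markedly higher empirical SNR than the plain ensemble average, mirroring the paper's reported gains in spirit, though not in magnitude.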


Journal ArticleDOI
TL;DR: Findings illustrate the noise susceptibility of Nucleus cochlear implant users and suggest that single-channel digital noise reduction techniques may offer some relief from this problem.
Abstract: The recognition of phonemes in consonant-vowel-consonant words, presented in speech-shaped random noise, was measured as a function of signal to noise ratio (S/N) in 10 normally hearing adults and 10 successful adult users of the Nucleus cochlear implant. Optimal scores (measured at an S/N of

92 citations


Journal ArticleDOI
TL;DR: The capacity and cutoff rates for channels with linear intersymbol interference, power dependent crosstalk noise, and additive white noise are examined, focusing on high speed digital subscriber line data transmission.
Abstract: The capacity and cutoff rates for channels with linear intersymbol interference, power dependent crosstalk noise, and additive white noise are examined, focusing on high speed digital subscriber line data transmission. The effects of varying the level of additive white noise, crosstalk coupling gain, sampling rate, and input power levels are studied in detail for a set of simulated two-wire local loops. A closed-form expression for the shell constrained Gaussian cutoff rate on the crosstalk limited channel is developed and related to the capacity, showing that the relationship between these two rates is the same as on a channel without crosstalk noise. The study also projects achievable rates on a digital subscriber line, inside and outside of a carrier serving area, with a sophisticated but realizable receiver.

91 citations
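The qualitative effect of power-dependent crosstalk can be sketched with a flat-PSD Shannon-rate sum. The loop response and the NEXT-like coupling law below are invented stand-ins, not the paper's simulated local loops: because the interference power grows with the transmitted power, the achievable rate saturates as power increases.

```python
import math

def rate(p, awgn=1e-6, xtalk=1e-3, nfreq=256):
    """Flat-PSD Shannon rate (bits/symbol) over a discretized band for a
    lowpass loop-like channel with additive white noise plus crosstalk
    whose power scales with the transmitted power p."""
    total = 0.0
    for k in range(nfreq):
        f = (k + 0.5) / nfreq                 # normalized frequency
        h2 = math.exp(-4.0 * math.sqrt(f))    # illustrative line attenuation
        noise = awgn + xtalk * p * f ** 1.5   # NEXT-like coupling, grows with f
        total += math.log2(1.0 + p * h2 / noise)
    return total / nfreq

low, mid, high = rate(1.0), rate(100.0), rate(10000.0)
```

Each 100-fold power increase buys less additional rate than the last, since on crosstalk-dominated frequencies the SNR tends to h2 / (xtalk * f**1.5), which no longer depends on p at all.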


Journal ArticleDOI
TL;DR: The conclusion is that the eigenimage filter is the optimal linear filter that achieves SDF and CPV simultaneously.
Abstract: The performance of the eigenimage filter is compared with those of several other filters as applied to magnetic resonance image (MRI) scene sequences for image enhancement and segmentation. Comparisons are made with principal component analysis, matched, modified-matched, maximum contrast, target point, ratio, log-ratio, and angle image filters. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), segmentation of a desired feature (SDF), and correction for partial volume averaging effects (CPV) are used as performance measures. For comparison, analytical expressions for SNRs and CNRs of filtered images are derived, and CPV by a linear filter is studied. Properties of filters are illustrated through their applications to simulated and acquired MRI sequences of a phantom study and a clinical case; advantages and weaknesses are discussed. The conclusion is that the eigenimage filter is the optimal linear filter that achieves SDF and CPV simultaneously.

91 citations


Journal ArticleDOI
TL;DR: The limitation of the conventional signal-to-noise ratio as a performance measure in matched-filter-based optical pattern recognition for input-scene noise that is disjoint (or nonoverlapping) with the target is investigated.
Abstract: The limitation of the conventional signal-to-noise ratio as a performance measure in matched-filter-based optical pattern recognition for input-scene noise that is disjoint (or nonoverlapping) with the target is investigated.

88 citations


Journal Article
TL;DR: This paper investigates the problem of adaptive equalisation in the presence of intersymbol interference, additive white Gaussian noise and co-channel interference and derives a two-stage learning strategy that enables the radial basis function network to realise the optimal Bayesian symbol-decision equaliser.
Abstract: The paper investigates the problem of adaptive equalisation in the presence of intersymbol interference, additive white Gaussian noise and co-channel interference. The radial basis function network is designed to realise a sophisticated nonlinear adaptive equaliser capable of operating under poor signal to interference ratio and signal to noise ratio conditions. A two-stage learning strategy is derived by exploiting the nature of the data structure and this enables the radial basis function network to realise the optimal Bayesian symbol-decision equaliser. At the first stage of the learning, a supervised clustering algorithm is employed to model the effects of the channel intersymbol interference. This learning stage is extremely simple and robust, and it is capable of producing a very good approximation to the optimal Bayesian solution. At the second stage of the learning, the network structure is expanded and an unsupervised clustering algorithm is added into the learning procedure so that the network can model the full effects of the co-channel interference and achieve the full optimal equalisation solution.

86 citations
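The optimal Bayesian symbol-decision equaliser that the radial basis function network realises can be sketched directly for a toy case. The 2-tap channel and noise level below are invented, and the clustering stage is skipped: the code enumerates the exact channel states that a trained supervised clusterer would approximate.

```python
import itertools, math, random

random.seed(1)
H = [1.0, 0.5]    # illustrative 2-tap channel
SIGMA = 0.2       # AWGN standard deviation

# Channel states: each (current, previous) symbol pair gives one noiseless
# received value, labelled by the current symbol.  These are the centres a
# supervised clustering stage would estimate from training data.
states = {-1: [], 1: []}
for cur, prev in itertools.product((-1, 1), repeat=2):
    states[cur].append(H[0] * cur + H[1] * prev)

def bayes_decide(r):
    """Optimal symbol-by-symbol Bayesian decision: compare the summed
    Gaussian likelihoods of the two classes of channel states."""
    def score(sym):
        return sum(math.exp(-(r - c) ** 2 / (2 * SIGMA ** 2)) for c in states[sym])
    return 1 if score(1) > score(-1) else -1

bits = [random.choice((-1, 1)) for _ in range(2000)]
rx = [H[0] * bits[n] + H[1] * (bits[n - 1] if n else 0)
      + random.gauss(0.0, SIGMA) for n in range(len(bits))]
errors = sum(bayes_decide(r) != b for r, b in zip(rx, bits))
```

In an RBF equaliser the `states` centres become the radial basis centres and the Gaussian likelihoods become the basis functions, which is why the network can realise this decision rule exactly once the centres are learned; co-channel interference (the paper's second stage) would multiply the number of centres per class.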


Journal ArticleDOI
TL;DR: In this paper, a two-stage learning strategy is derived by exploiting the nature of the data structure, and this enables the radial basis function network to realise the optimal Bayesian symbol-decision equaliser.

83 citations


Journal ArticleDOI
TL;DR: A generalized principal components transform (PCT) that maximizes the signal-to-noise ratio (SNR) and is tailored to the multiplicative speckle noise characteristics of polarimetric SAR images is developed; it makes automated image segmentation and better human interpretation possible.
Abstract: A generalized principal components transform (PCT) that maximizes the signal-to-noise ratio (SNR) and that is tailored to the multiplicative speckle noise characteristics of polarimetric SAR images is developed. An implementation procedure that accurately estimates the signal and the noise covariance matrices is established. The properties of the eigenvalues and eigenvectors are investigated, revealing that the eigenvectors are not orthogonal, but the principal component images are statistically uncorrelated. Both amplitude (or intensity) and phase difference images are included for the PCT computation. The NASA/JPL polarimetric SAR imagery of P, L, and C bands and quad polarizations is used for illustration. The capabilities of this principal components transformation in information compression and speckle reduction make automated image segmentation and better human interpretation possible.

Journal ArticleDOI
TL;DR: It is found that contrast sensitivity in spatial noise was independent of eccentricity as long as contrast sensitivity was lower with noise than without, and without M-scaling the effect of increasing eccentricity is similar to that of increasing viewing distance.

Journal ArticleDOI
TL;DR: In this article, a method of sensor placement for the purpose of on-orbit modal identification and test-analysis correlation is presented, extending the Effective Independence method of past work to include the effects of a general representation of measurement noise.
Abstract: A method of sensor placement for the purpose of on-orbit modal identification and test-analysis correlation is presented. The method is an extension of the Effective Independence method presented in past work to include the effects of a general representation of measurement noise. Sensor noise can be distributed nonuniformly throughout the structure as well as correlated between sensors. The only restriction is that the corresponding noise covariance intensity matrix is positive definite.

Journal ArticleDOI
01 Jun 1992
TL;DR: In this article, the coherence function is used to filter the observations, giving an estimate of the signal s_1, the signal to be estimated; a generalisation of these procedures is offered.
Abstract: With the development of hands-free radio communications there is great interest in noise cancelling or speech enhancement in a car. The authors assume that M observations are available; each one is composed of signal and noise (s_i + b_i), and s_1 is the signal to be estimated. Whatever the distance between the microphones, the signals are strongly correlated, while the correlation between noises becomes rather weak for a sufficiently great distance. The coherence function is then a pertinent criterion for deciding whether a speech signal is present or not. The three methods presented use the coherence function to filter the observations, thus giving an estimate of the signal s_1. The procedures are first shown for M=2; a generalisation of these procedures is then offered. The performances of the three methods have been evaluated on real noisy speech signals by objective tests (gain in signal-to-noise ratio, distance measures) and informal listening tests.
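A crude time-domain stand-in for the coherence criterion can illustrate the idea: broadband correlation per frame replaces the frequency-domain coherence function, and all signal parameters below are invented. Frames where the two microphones are strongly correlated (speech present on both) are kept; weakly correlated, noise-only frames are attenuated.

```python
import math, random

random.seed(4)
FS, N, FRAME = 8000, 8000, 256

# Two-microphone model: the speech component is common to both channels;
# the noise components are independent.  Speech is present only in bursts.
speech = [math.sin(2 * math.pi * 300 * n / FS) if (n // 2000) % 2 else 0.0
          for n in range(N)]
m1 = [s + random.gauss(0.0, 0.3) for s in speech]
m2 = [s + random.gauss(0.0, 0.3) for s in speech]

def corrcoef(a, b):
    """Normalized cross-correlation of two equal-length frames."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Per-frame gains: high where the channels cohere (speech), near zero where
# only the uncorrelated noises remain.
gains = [max(0.0, corrcoef(m1[i:i + FRAME], m2[i:i + FRAME]))
         for i in range(0, N, FRAME)]
```

The paper's methods do this per frequency with the coherence function, which is what makes them work even when the noises are weakly correlated at some frequencies; this broadband version only captures the detection idea.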

Journal ArticleDOI
TL;DR: The signal-to-noise ratio (SNR) improvement given by the new design is equal to that of the matched filter for the signal under consideration, and hence is maximal when the noise is Gaussian and additive.
Abstract: A design is presented for a phase-sensitive detector (PSD) based on matched filter theory, which is implemented using digital signal processing (DSP). The signal-to-noise ratio (SNR) improvement given by the new design is equal to that of the matched filter for the signal under consideration, and hence is maximal when the noise is Gaussian and additive. The theory of operation of analogue phase-sensitive detectors is discussed, and the SNR improvement obtained by using an ideal PSD is derived, along with the specific conditions under which this SNR can be expected. The limitations of real PSDs are then discussed. The new design is then presented in detail and its performance is compared to the analogue PSD. Experimental results are given which support the theoretical model of the demodulator, and an example of the use of the demodulator in a real application is given.
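The core operation of a digital phase-sensitive detector reduces to correlating the input against quadrature references at the reference frequency. This bare-bones sketch uses invented numbers and a plain mean in place of a proper low-pass stage; the correlation is equivalent to a matched filter for a sinusoid of known frequency, which is the source of the SNR claim in the abstract.

```python
import math, random

random.seed(3)
FS, FREF, N = 10000.0, 137.0, 10000    # 1 s record, integer number of cycles
AMP, PHASE = 0.1, 0.8                  # weak signal buried in unit-variance noise

x = [AMP * math.sin(2 * math.pi * FREF * n / FS + PHASE) + random.gauss(0.0, 1.0)
     for n in range(N)]

# Phase-sensitive detection: multiply by in-phase and quadrature references
# and average.  The averages converge to AMP*cos(PHASE) and AMP*sin(PHASE).
i_avg = 2.0 / N * sum(v * math.sin(2 * math.pi * FREF * n / FS) for n, v in enumerate(x))
q_avg = 2.0 / N * sum(v * math.cos(2 * math.pi * FREF * n / FS) for n, v in enumerate(x))
amp_est = math.hypot(i_avg, q_avg)     # recovered amplitude, despite SNR << 1
phase_est = math.atan2(q_avg, i_avg)   # recovered phase
```

Note the record length is chosen to contain an integer number of reference cycles so the quadrature references are exactly orthogonal over the record; an analogue PSD achieves the same effect with a long low-pass time constant.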

Journal ArticleDOI
TL;DR: In this paper, the application of the wavelet transform in the determination of peak intensities in flow-injection analysis was studied with regard to its properties of minimizing the effects of noise and baseline drift.

Patent
09 Dec 1992
TL;DR: In this article, a software-created variable frequency digital filter is used to filter signal responses at the cross-correlation frequency in order to obtain an average waveform value for each run having an increased signal to noise ratio over the individual waveform segments.
Abstract: Apparatus for cross-correlation frequency domain fluorometry and/or phosphorimetry in which means are provided for sequentially performing runs of the cross-correlation frequency domain fluorometry and/or phosphorimetry at sequentially differing first and second frequencies. The intensities of signal responses of the runs are detected at the respective cross-correlation frequency in each run. The detection of the signal response is prolonged in each run until an integrated signal with a specified standard deviation has been acquired at each of the differing runs. Preferably the sequential runs are automatically executed by a program. Also, the waveforms sensed by deriving the resultant signal response in each run are folded. That is, corresponding segments of the waveforms are superimposed to obtain an average waveform value for each run having an increased signal to noise ratio over the individual waveform segments. Also, preferably, a software-created variable frequency digital filter is used to filter signal responses at the cross-correlation frequency.

Journal ArticleDOI
TL;DR: A new approach to blind equalization is investigated in which the receiver performs joint data and channel estimation in an iterative manner, instead of estimating the channel inverse, the receiver computes the maximum-likelihood estimate of the channel itself.
Abstract: A new approach to blind equalization is investigated in which the receiver performs joint data and channel estimation in an iterative manner. Instead of estimating the channel inverse, the receiver computes the maximum-likelihood estimate of the channel itself. The iterative algorithm that is developed involves maximum-likelihood sequence estimation (Viterbi decoding) for the data estimation part, and least-squares estimation for the channel estimation part. A suboptimal algorithm is also proposed that uses a reduced-state trellis instead of the Viterbi algorithm. Simulation results show that the performance obtained by these algorithms is comparable to that of a receiver operating with complete knowledge of the channel.
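The alternating structure of joint data and channel estimation can be sketched for a toy 2-tap channel. Symbol-by-symbol hard decisions stand in for the Viterbi data step, and the taps, noise level, and block length are invented; the channel step is the genuine least-squares fit the abstract describes.

```python
import random

random.seed(5)
H_TRUE = [1.0, 0.4]     # unknown 2-tap channel to be estimated blindly
N = 400
syms = [random.choice((-1.0, 1.0)) for _ in range(N)]
rx = [H_TRUE[0] * syms[n] + H_TRUE[1] * (syms[n - 1] if n else 0.0)
      + random.gauss(0.0, 0.1) for n in range(N)]

h = [1.0, 0.0]          # blind start: assume no ISI
for _ in range(5):
    # Data step (simplified): hard decisions after cancelling estimated ISI,
    # instead of the maximum-likelihood sequence (Viterbi) estimator.
    dec = []
    for n, r in enumerate(rx):
        isi = h[1] * (dec[n - 1] if n else 0.0)
        dec.append(1.0 if r - isi >= 0.0 else -1.0)
    # Channel step: least-squares fit of the two taps to the decisions,
    # solving the 2x2 normal equations directly (dec[-1] treated as 0).
    a11 = sum(d * d for d in dec)
    a22 = sum(dec[n - 1] ** 2 for n in range(1, N))
    a12 = sum(dec[n] * dec[n - 1] for n in range(1, N))
    b1 = sum(rx[n] * dec[n] for n in range(N))
    b2 = sum(rx[n] * dec[n - 1] for n in range(1, N))
    det = a11 * a22 - a12 * a12
    h = [(b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det]
```

After a few iterations `h` settles near the true taps. The hard-decision data step is the crudest member of the family the paper studies; replacing it with Viterbi decoding (or the proposed reduced-state trellis) is what closes the gap to a receiver that knows the channel.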

Journal ArticleDOI
TL;DR: It is demonstrated that the error diffusion procedure is a powerful means to avoid signal error caused by a nonlinear system and an appropriate filter design method is described.
Abstract: An analysis of the error diffusion procedure is presented that is based on the terminology of filter theory. It is demonstrated that the error diffusion procedure is a powerful means to avoid signal error caused by a nonlinear system. An appropriate filter design method is described. The theoretical results are applied to treat picture binarization as well as quantization and coding in diffractive optics-digital holography.

Journal ArticleDOI
TL;DR: It is shown that an initial stage of filter-bank analysis is effective for achieving noise robustness and the zero-crossing method performs well for estimating low frequencies and hence for first formant frequency estimation in speech at high noise levels.
Abstract: The authors discuss a method for spectral analysis of noise-corrupted signals using statistical properties of the zero-crossing intervals. It is shown that an initial stage of filter-bank analysis is effective for achieving noise robustness. The technique is compared with currently popular spectral analysis techniques based on singular value decomposition and is found to provide generally better resolution and lower variance at low signal-to-noise ratios (SNRs). These techniques, along with three established methods and three variations of these methods, are further evaluated for their effectiveness for formant frequency estimation of noise-corrupted speech. The theoretical results predict, and experimental results confirm, that the zero-crossing method performs well for estimating low frequencies and hence for first formant frequency estimation in speech at high noise levels (approximately 0 dB SNR). Otherwise, J.A. Cadzow's high performance method (1983) is found to be a close alternative for reliable spectral estimation. As expected, the overall performance of all techniques is found to degrade for speech data. The standard autocorrelation-LPC method is found best for clean speech, and all methods deteriorate roughly equally in noise.
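The zero-crossing principle behind the dominant-frequency estimate can be sketched in a few lines on a single tone with mild noise (tone frequency and record length invented). In the paper a filter-bank stage precedes this step, isolating one band per channel so that the zero-crossing statistics stay meaningful at much lower SNRs.

```python
import math, random

random.seed(2)
FS, F0, N = 8000.0, 440.0, 4000
x = [math.sin(2 * math.pi * F0 * n / FS) + random.gauss(0.0, 0.05)
     for n in range(N)]

# A sinusoid of frequency f crosses zero 2f times per second, so the mean
# zero-crossing rate gives a dominant-frequency estimate.
crossings = sum(1 for a, b in zip(x, x[1:]) if (a < 0.0) != (b < 0.0))
f_est = crossings * FS / (2.0 * (N - 1))
```

Moderate noise mostly jitters the crossing positions rather than adding crossings, which is why the count, and hence the estimate, stays stable; a broadband mixture, by contrast, would need the filter-bank front end first.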

Journal ArticleDOI
TL;DR: The Monte Carlo method is used to calculate signal-to-noise ratios and detective quantum efficiencies in imaging thin contrasting details of air, fat, bone and iodine within a water phantom using X-ray spectra and detectors of CsI, BaFCl and Gd2O2S.
Abstract: A lower limit to patient irradiation in diagnostic radiology is set by the fundamental stochastics of the energy imparted to the image receptor (quantum noise). Image quality is investigated and expressed in terms of the signal-to-noise ratio due to quantum noise. The Monte Carlo method is used to calculate signal-to-noise ratios (SNR_ΔS) and detective quantum efficiencies (DQE_ΔS) in imaging thin contrasting details of air, fat, bone and iodine within a water phantom using X-ray spectra (40-140 kV) and detectors of CsI, BaFCl and Gd2O2S. The atomic composition of the contrasting detail considerably influences the values of SNR_ΔS owing to the different modulations of the energy spectra of primary photons passing beside and through the contrasting detail. By matching the absorption edges of the contrasting detail and the detector, a partially absorbing detector may be more efficient (yield higher SNR_ΔS) than a totally absorbing one; this is demonstrated for the case of detecting an iodine detail using a CsI detector. The degradation of SNR_ΔS and DQE_ΔS due to scatter is larger when the detector is operated in the photon-counting rather than the energy-integrating mode, and for partially absorbing rather than totally absorbing detectors.

Journal ArticleDOI
TL;DR: The formulation permits the noise covariance between receiver difference and sum channels to be complex rather than only real-valued, so that the sources of noise jamming are not required to be positioned in the receiving-antenna mainlobe and to be copolarized with the antenna response there.
Abstract: In many monopulse radars, feedback in the angle-tracking servo system is taken to be directly proportional to the monopulse ratio. In those radars, monopulse measurements are conditioned on simultaneous occurrences of receiver sum-channel video exceeding a detection threshold: if a detection fails to occur, the measurement is ignored, and the angle-tracking servo is made to coast. Such conditioning is shown to be necessary in order that the noise power be finite in the servo feedback. The conditional mean value and conditional variance of the monopulse ratio are derived and quantified in terms of threshold level as well as signal-to-noise ratio. The formulation permits the noise covariance between receiver difference and sum channels to be complex rather than only real-valued, so that the sources of noise jamming are not required to be positioned in the receiving-antenna mainlobe and to be copolarized with the antenna response there. Nonfluctuating and Rayleigh-fluctuating target cases are considered and compared, and fluctuation loss is quantified.

Journal ArticleDOI
TL;DR: In this article, a parametric analysis of the Fourier phase spectrum was performed for both point sources and an extended object and the results demonstrated improvements in power spectrum estimation for point sources.
Abstract: The use of predetection compensation for the effects of atmospheric turbulence combined with postdetection image processing for imaging applications with large telescopes is addressed. Full and partial predetection compensation with adaptive optics is implemented by varying the number of actuators in the deformable mirror. The theoretical expression for the single-frame power spectrum signal-to-noise ratio (SNR) is reevaluated for the compensated case to include the statistics of the compensated optical transfer function. Critical to this analysis is the observation that the compensated optical transfer function does not behave as a circularly complex Gaussian random variable except at high spatial frequencies. Results from a parametric study of performance are presented to demonstrate improvements in power spectrum estimation for both point sources and an extended object and improvements in the Fourier phase spectrum estimation for an extended object. Full compensation is shown to provide a large improvement in the power spectrum SNR over the uncompensated case, while successively smaller amounts of predetection compensation provide smaller improvements, until a low degree of compensation gives results essentially identical to those of the uncompensated case. Three regions of performance were found with respect to the object Fourier phase spectrum estimate obtained from bispectrum postprocessing: (1) the fully compensated case in which bispectrum postprocessing provides no improvement in the phase estimate over that obtained from a fully compensated long-exposure image, (2) a partially compensated regime in which applying bispectrum postprocessing to the compensated images provides a phase spectrum estimation superior to that of the uncompensated bispectrum case, and (3) a poorly compensated regime in which the results are essentially indistinguishable from those of the uncompensated case. 
Accurate simulations were used to obtain some parameters for the power spectrum SNR analysis and to obtain the Fourier phase spectrum results.

Journal ArticleDOI
TL;DR: A prototype of an efficient, accurate, low-atomic number areal detector is developed using thin plates of plastic scintillator as detectors and a detailed noise analysis demonstrates that the image intensifier reduces acquisition time 10000-fold, reduces noise relative to signal 200-fold and reduces amplifier gain noise as well.
Abstract: Because of the large dose gradients encountered near brachytherapy sources, an efficient, accurate, low-atomic-number areal detector, which can record dose at many points simultaneously, is highly desirable. We have developed a prototype of such a system using thin plates of plastic scintillator as detectors. A micro-channel plate (MCP) image intensifier was used to amplify the optical scintillation images produced by radioactive 125I and 137Cs sources in water placed 0.5–5.7 cm from the detector. A charge-coupled device (CCD) digital camera was used to acquire 2-D light-intensity distributions from the image intensifier output window. For both isotopes, a small-area (2 × 3 mm²) PVT detector yields a CCD net count rate that is linear with respect to absorbed dose rate within ±3% out to 5.7 cm distance. Acquisition times range from 1.5–400 s with a reproducibility of 0.5–5.5%. If a large-area (6 × 20 cm²) PVT detector is used, a four-fold increase in count rate and large deviations from linearity are observed, indicating that neighboring pixels contribute light to the signal through diffusion and scattering in PVT and water. A detailed noise analysis demonstrates that the image intensifier reduces acquisition time 10000-fold, reduces noise relative to signal 200-fold, and reduces amplifier gain noise as well.

Proceedings ArticleDOI
TL;DR: In this article, a bipolar intensity approach was proposed to increase the speed and simplicity of the computation of off-axis transmission holograms, with applications to the real-time display of holographic images.
Abstract: Several methods of increasing the speed and simplicity of the computation of off-axis transmission holograms are presented, with applications to the real-time display of holographic images. A bipolar intensity approach enables a linear summation of interference fringes, a factor of two speed increase, and the elimination of image noise caused by object self-interference. An order of magnitude speed increase is obtained through the use of precomputed look-up tables containing a large array of elemental interference patterns corresponding to point source contributions from each of the possible locations in image space. Results achieved using a data-parallel supercomputer to compute horizontal-parallax-only holographic patterns containing 6 megasamples indicate that an image comprised of 10,000 points with arbitrary brightness (grayscale) can be computed in under one second.

Journal ArticleDOI
TL;DR: To increase range resolution and produce acceptable range sidelobe levels, filtering techniques rather than direct complex correlation are applied in coded excitation systems.
Abstract: To increase range resolution and produce acceptable range sidelobe levels, filtering techniques rather than direct complex correlation are applied in coded excitation systems. A filter design technique based on both peak sidelobe levels and minimum sidelobe energy criteria is developed. In comparison to a classic inverse filter, this approach reduces sidelobe levels under a prespecified threshold. Further reduction can be achieved by extending the filter length. This technique can be more generally applied to similar signal processing problems. Both the mathematical formulation and simulation results demonstrating the utility of the technique are presented. A discussion of quantization effects is included. >

Journal Article
TL;DR: In this paper, a perceptual frequency weighting function is introduced which provides closer matching to the ear's measured sensitivity at high frequencies than do existing functions, and the object is to minimize the total perceived output noise power.
Abstract: The design of noise shaping filters for requantization in nonoversampling digital audio applications is examined. The object is to minimize the total perceived output noise power. A new perceptual frequency weighting function is introduced which provides closer matching to the ear's measured sensitivity at high frequencies than do existing functions.

Journal ArticleDOI
01 Sep 1992
TL;DR: An XY-addressable imager architecture based on this pixel was implemented in silicon, and the logarithmic response is quantified using a more general signal to noise ratio.
Abstract: Intended for industrial applications, a pixel structure with a logarithmic response is presented in this paper. An XY-addressable imager architecture based on this pixel was implemented in silicon. Measurement data on this sensor are discussed. The logarithmic response is quantified using a more general signal to noise ratio. A parallel is drawn with concepts of human perception theory.

Patent
06 Jul 1992
TL;DR: In this paper, a real-time 3D medical ultrasound imaging machine is described, where large extended transmitters are used with a great range of different pulse types, giving improved signal to noise ratio.
Abstract: A real-time 3D medical ultrasound imaging machine is disclosed. Large extended transmitters are used with a great range of different pulse types, giving improved signal to noise ratio. Image reconstruction is done by filtered ellipsoidal backprojection; the filter is an inverse triplet filter. Sparse arrays may be used. The imaging machine additionally promises higher resolution, greater sensitivity, and 2D real-time images displayed simultaneously with the real-time 3D image.

Journal ArticleDOI
K. A. Shinpaugh, Roger L. Simpson, A. L. Wicks, S. M. Ha, J. L. Fleming
TL;DR: Frequency estimation via the FFT with zero-padding and a Gaussian interpolation scheme was found to produce the lowest bias and random errors.
Abstract: A variety of methods have been developed to obtain accurate frequency estimates from laser Doppler velocimetry (LDV) signals. Rapid scanning and fiber optic LDV systems require robust methods for extracting accurate frequency estimates with computational efficiency from data with poor signal-to-noise ratios. These methods typically fall into two general categories, time domain parametric techniques and frequency domain techniques. The frequency domain approach is initiated by transforming the Doppler bursts into the frequency domain using the fast Fourier transform (FFT). From this basic transformation a variety of interpolation procedures (parabolic, Gaussian, and centroid fits) have been developed to optimize the frequency estimation accuracy. The time domain approaches are derived from the parametric form of a sinusoid. The estimation of constants in this relationship is performed to satisfy specific constraints, typically a minimization of a variance expression. A comparison of these techniques is presented using simulated signals and additive Gaussian and Poisson white noise. The statistical bias and random errors for each method are presented from 200 signal simulations at each condition. Frequency estimation via the FFT with zero-padding and a Gaussian interpolation scheme was found to produce the lowest bias and random errors.
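The zero-padding-plus-Gaussian-interpolation estimator that performed best can be sketched as follows. A direct DFT stands in for the FFT to keep the sketch dependency-free, and the tone frequency, burst length, and padded length are invented rather than taken from the paper's simulations.

```python
import cmath, math

FS, F0, N, ZP = 1000.0, 123.37, 128, 1024   # true tone lies between FFT bins

burst = [math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]

# Zero-padded spectrum magnitudes: padding the N-sample burst to ZP points
# samples the spectrum on a ZP-point grid (an FFT would be used in practice;
# the direct DFT below is equivalent).
mag = [abs(sum(burst[n] * cmath.exp(-2j * math.pi * k * n / ZP) for n in range(N)))
       for k in range(ZP // 2)]

k0 = max(range(1, ZP // 2 - 1), key=mag.__getitem__)
# Gaussian interpolation: fit a parabola to the log-magnitudes of the peak
# bin and its two neighbours and take the vertex as the sub-bin offset.
la, lb, lc = math.log(mag[k0 - 1]), math.log(mag[k0]), math.log(mag[k0 + 1])
delta = 0.5 * (la - lc) / (la - 2.0 * lb + lc)
f_est = (k0 + delta) * FS / ZP
```

Zero-padding reduces the worst-case argmax error to half a padded bin, and the Gaussian fit removes most of the remainder, which is the combination the paper found to give the lowest bias and random errors on noisy bursts.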