
Showing papers on "Signal-to-noise ratio published in 1994"


Journal ArticleDOI
TL;DR: An algorithm is derived that isolates the coherent structures of a signal and describes an application to pattern extraction from noisy signals, using a greedy algorithm called a matching pursuit, which computes a suboptimal expansion.
Abstract: Computing the optimal expansion of a signal in a redundant dictionary of waveforms is an NP-hard problem. We introduce a greedy algorithm, called a matching pursuit, which computes a suboptimal expansion. The dictionary waveforms that best match a signal's structures are chosen iteratively. An orthogonalized version of the matching pursuit is also developed. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. Matching pursuits are chaotic maps whose attractors define a generic noise with respect to the dictionary. We derive an algorithm that isolates the coherent structures of a signal and describe an application to pattern extraction from noisy signals.
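The greedy selection loop the abstract describes can be sketched in a few lines. The toy dictionary of unit-norm sinusoids below is an illustrative assumption (the paper works with Gabor dictionaries):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy suboptimal expansion of `signal` over a redundant `dictionary`.

    dictionary: (n_atoms, n_samples) array of unit-norm waveforms.
    Returns (coefficients, chosen atom indices, residual).
    """
    residual = signal.astype(float).copy()
    coeffs, indices = [], []
    for _ in range(n_iter):
        inner = dictionary @ residual          # correlate every atom with residual
        k = int(np.argmax(np.abs(inner)))      # pick the best-matching atom
        coeffs.append(inner[k])
        indices.append(k)
        residual = residual - inner[k] * dictionary[k]
    return np.array(coeffs), indices, residual

# toy dictionary: unit-norm sinusoids of several frequencies and phases
n = 64
t = np.arange(n)
atoms = np.array([np.cos(2 * np.pi * f * t / n + ph)
                  for f in range(1, 9) for ph in (0.0, np.pi / 2)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

x = 3.0 * atoms[4] + 0.5 * atoms[11]           # plant two atoms in the signal
c, idx, r = matching_pursuit(x, atoms, n_iter=5)
```

Because these toy atoms happen to be mutually orthogonal, the pursuit recovers the two planted atoms exactly; on a truly redundant dictionary the greedy expansion is only suboptimal, which is the point of the paper.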

381 citations


Journal ArticleDOI
O. Toker1, O. Toker2, S. Masciocchi1, E. Nygård1, A. Rudge1, P. Weilhammer1 
TL;DR: In this paper, a low noise Si-strip detector readout chip has been designed and built in 1.5 μm CMOS technology, which is optimized w.r.t. noise.
Abstract: A low noise Si-strip detector readout chip has been designed and built in 1.5 μm CMOS technology. The chip is optimized w.r.t. noise. Measurements with this chip connected to several silicon strip detectors are presented. A noise performance of ENC = 135 e− + 12 e−/pF and signal to noise ratios between 40 and 80, depending on the detector, for minimum ionizing particles traversing 280–300 μm of silicon have been achieved.
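The quoted noise law ENC = 135 e− + 12 e−/pF makes the reported S/N range easy to sanity-check. The figure of roughly 80 electron-hole pairs per μm for a minimum ionizing particle in silicon, and the example capacitances, are textbook assumptions, not values from the abstract:

```python
def enc_electrons(c_det_pf):
    """Equivalent noise charge from the quoted fit: ENC = 135 e- + 12 e-/pF."""
    return 135.0 + 12.0 * c_det_pf

# assumption: a MIP liberates roughly 80 e-h pairs per micron of silicon
signal_electrons = 80.0 * 280.0   # ~22400 e- for a 280 um thick detector
for c_pf in (15.0, 25.0):         # hypothetical strip capacitances
    snr = signal_electrons / enc_electrons(c_pf)
    print(f"C = {c_pf:.0f} pF: ENC = {enc_electrons(c_pf):.0f} e-, S/N = {snr:.0f}")
```

Both example capacitances land inside the 40–80 S/N window the abstract reports.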

186 citations


Journal ArticleDOI
TL;DR: An adaptive algorithm is presented for estimating, from noisy observations, periodic signals of known period subject to transient disturbances; an application of the Fourier estimator to estimation of brain evoked responses is included.
Abstract: Presents an adaptive algorithm for estimating, from noisy observations, periodic signals of known period subject to transient disturbances. The estimator is based on the LMS algorithm and works by tracking the Fourier coefficients of the data. The estimator is analyzed for convergence, noise misadjustment and lag misadjustment for signals with both time-invariant and time-variant parameters. The analysis is greatly facilitated by a change of variable that results in a time-invariant difference equation. At sufficiently small values of the LMS step size, the system is shown to exhibit decoupling, with each Fourier component converging independently and uniformly. Detection of rapid transients in data with low signal to noise ratio can be improved by using larger step sizes for more prominent components of the estimated signal. An application of the Fourier estimator to estimation of brain evoked responses is included.
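A minimal sketch of the estimator's core idea, tracking the Fourier coefficients of a known-period signal with LMS (the test signal, step size, and harmonic count here are illustrative choices, not the paper's):

```python
import numpy as np

def fourier_lms(y, period, n_harmonics=3, mu=0.02):
    """Track the Fourier coefficients of a periodic signal of known period
    with the LMS algorithm (a sketch of the estimator the abstract describes)."""
    w0 = 2.0 * np.pi / period
    a = np.zeros(n_harmonics)          # cosine coefficients
    b = np.zeros(n_harmonics)          # sine coefficients
    est = np.empty(len(y), dtype=float)
    k = np.arange(1, n_harmonics + 1)
    for n, yn in enumerate(y):
        c, s = np.cos(k * w0 * n), np.sin(k * w0 * n)
        est[n] = a @ c + b @ s         # current estimate of the periodic signal
        e = yn - est[n]                # instantaneous error
        a += mu * e * c                # LMS update of the Fourier coefficients
        b += mu * e * s
    return est, a, b

rng = np.random.default_rng(0)
period = 40
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / period) + 0.3 * np.cos(4 * np.pi * t / period)
y = clean + 0.5 * rng.standard_normal(t.size)
est, a, b = fourier_lms(y, period)
```

A larger step size lets the tracker follow transients faster at the cost of higher noise misadjustment, which is the trade-off the abstract discusses.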

156 citations


Journal ArticleDOI
TL;DR: Maximum likelihood estimation using the Viterbi algorithm (MLSE-VA) and sequential sequence estimation (SSE) are developed and diversity is combined with both MLSE-VA and SSE to improve the error performance.
Abstract: Presents sequence estimation for the frequency selective Rayleigh fading channel. Maximum likelihood estimation using the Viterbi algorithm (MLSE-VA) and sequential sequence estimation (SSE) are developed. Both MLSE-VA and SSE consist of a set of Kalman filters which estimate the fading channel as time evolves. Computer simulations for two different channel models show that the error performance of the two approaches is essentially the same. SSE however has considerably less computational complexity than MLSE-VA. To improve the error performance, diversity is combined with both MLSE-VA and SSE. The simulations show that diversity results in a signal to noise ratio gain of greater than 10 dB.

140 citations


Journal ArticleDOI
TL;DR: It is shown that the magnitude of the interelement correlation is the key parameter governing phase correction performance, and techniques that utilize a small correction reference region are more susceptible to noise and missing elements than techniques which use larger reference regions.
Abstract: A common framework is presented to classify several phase correction techniques. A subset of these techniques are evaluated through simulations which utilize 2-D phase aberration profiles measured in the breast. The techniques are compared based on their ability to reduce phase errors, stability, and sensitivity to noise and missing elements in the transducer array. Significant differences are observed in these measures of performance when the size and location of the aperture area used to generate a phase reference signal are varied. Techniques that utilize a small correction reference region are more susceptible to noise and missing elements than techniques which use larger reference regions. The algorithms encounter problems in 2-D phase correction when making the transition from one row to the next, due to the low interelement correlation at the transition points. It is shown that the magnitude of the interelement correlation is the key parameter governing phase correction performance.

135 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a receiver that maximizes the signal-to-noise ratio (SNR) in a particular DS-CDMA system model under various constraints.
Abstract: Minimum probability of bit error is difficult to achieve in a DS-CDMA receiver. Since multiple-access noise is the sum of many independent random processes, it is reasonable to approximate it by a Gaussian process of the same power spectral density. This leads to the criterion of maximizing signal-to-noise ratio (SNR). In this paper, receivers that maximize SNR in a particular DS-CDMA system model under various constraints are proposed and analyzed. The method proposed here does not require locking and despreading multiple arriving CDMA signals. The maximization of SNR is compared with the minimization of probability of error, when the receiver is constrained to operate bit-by-bit, in the absence of knowledge of the other users' spreading codes, timing, and phase.

131 citations


Journal ArticleDOI
TL;DR: A fast and simple method to estimate small time delays in narrowband signals is proposed, based on using the Hilbert transform in correlation between two signals and consists of only one scalar product, which makes it fast.
Abstract: In many areas the time delay of arrival (TDOA) is desired. In the case of narrowband signals we propose a fast and simple method to estimate small time delays. This method is shown to have the same accuracy as, or better than, cross correlation methods for small delays on the order of fractions of the sample interval. It is based on using the Hilbert transform in correlation between two signals and consists of only one scalar product, which makes it fast. It may also be used in applications with narrowband signals where the measurements are repeatable, such as ultrasonic imaging and nondestructive testing. In ultrasonic applications, due to fluctuations in the insonified media, a small random time shift may be present, causing the signals to be misaligned in time. Averaging signals under these conditions will result in a distortion of the signal shape. We propose an averaging method to avoid this and to accomplish a higher SNR without the distortion. Simulations and experiments from ultrasonic applications are presented.
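The abstract describes the estimator only qualitatively. One plausible reconstruction (our reading, not the authors' exact formula): for a narrowband signal of centre frequency f0, the zero-lag correlations of one signal with the other and with its Hilbert transform carry cos and sin of the phase shift w0*tau, so their ratio yields the sub-sample delay:

```python
import numpy as np

def hilbert(x):
    """Hilbert transform via the FFT (imaginary part of the analytic signal)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h).imag

def small_delay(s1, s2, f0, fs):
    """Estimate a small (sub-sample) delay of narrowband s2 relative to s1,
    in samples, from two zero-lag scalar products."""
    w0 = 2.0 * np.pi * f0 / fs                       # rad per sample
    return np.arctan2(hilbert(s1) @ s2, s1 @ s2) / w0

fs, f0, tau = 100.0, 5.0, 0.2          # true delay: 0.2 samples
t = np.arange(1024) / fs
env = np.exp(-((t - 5.12) ** 2) / 8.0) # slow envelope -> narrowband pulse
s1 = env * np.cos(2 * np.pi * f0 * t)
s2 = env * np.cos(2 * np.pi * f0 * (t - tau / fs))   # envelope shift neglected
tau_hat = small_delay(s1, s2, f0, fs)
```

The paper's version needs only one scalar product; this sketch uses two (plus one FFT-based Hilbert transform) to stay self-contained and sign-unambiguous.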

116 citations


Proceedings ArticleDOI
01 Jun 1994
TL;DR: The damped Richardson-Lucy iteration (DRL) as mentioned in this paper is based on a modified form of the Poisson likelihood function that is flatter in the vicinity of a good fit.
Abstract: The Richardson-Lucy iteration (also known as the EM iteration for image restoration with Poisson statistics) is the most widely used image restoration technique for optical astronomical data. Like all maximum likelihood methods, it suffers from noise amplification in the restored images. Previously suggested methods for dealing with this problem (stopping the iteration early or smoothing the final image) have serious drawbacks for astronomical applications. This paper describes a new image restoration iteration based on the RL method that reduces noise amplification. The method is based on a modified form of the Poisson likelihood function that is flatter in the vicinity of a good fit. The resulting iteration is very similar to the RL iteration, but includes a new spatially adaptive damping factor that prevents noise amplification in regions of the image where a smooth model provides an adequate fit to the data; thus, I call this the `damped Richardson-Lucy iteration.' The damped iteration converges more rapidly than the RL method and can be accelerated using the same techniques as the standard RL iteration. Results are shown for both simulated data and Hubble Space Telescope images.
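For reference, the baseline (undamped) Richardson-Lucy update the paper modifies is a multiplicative correction by the back-projected ratio of data to model. This sketch uses circular FFT convolution and a toy point-source scene (all assumptions, not the paper's HST data) and omits the adaptive damping factor:

```python
import numpy as np

def cconv(img, kernel):
    """Circular 2-D convolution via the FFT (kernel centred at index [0, 0])."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def richardson_lucy(data, psf, n_iter=100, eps=1e-12):
    """Baseline Richardson-Lucy / EM iteration for Poisson image restoration.
    (The paper's damped variant multiplies the correction by a spatially
    adaptive damping factor; that factor is omitted here.)"""
    psf = psf / psf.sum()
    # adjoint of circular convolution: convolve with the index-reversed kernel
    psf_flip = np.roll(psf[::-1, ::-1], (1, 1), axis=(0, 1))
    est = np.full_like(data, data.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = cconv(est, psf)
        est *= cconv(data / np.maximum(blurred, eps), psf_flip)
    return est

# toy scene: two point sources blurred by a wrapped Gaussian PSF
n = 32
yy, xx = np.indices((n, n))
d2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2
psf = np.exp(-d2 / (2 * 2.0 ** 2))
true = np.zeros((n, n))
true[16, 16] = 100.0
true[8, 20] = 50.0
data = np.maximum(cconv(true, psf / psf.sum()), 0.0)   # noise-free observation
est = richardson_lucy(data, psf)
```

Each iteration conserves total flux when the PSF is normalised; the noise amplification the paper targets appears only once noisy data are restored.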

106 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of a weighted global iteration for the extended Kalman filter was evaluated in a simulated earthquake input-response pair and it was found that the weighted global iterative procedure converged to give reasonable estimates provided the ground shaking intensity was high enough to trigger significant yielding.

93 citations


Patent
13 Oct 1994
TL;DR: In this article, the early and late signals are input to a symbol synchronizing estimator that produces an interpolation control signal used by the filters to synchronize the symbol timing to the sample timing.
Abstract: A method and apparatus for synchronizing symbol timing for a QPSK demodulator. A matched filter pair outputs respective "early-punctual-late" signals. The early and late signals are input to a symbol synchronizing estimator that produces an interpolation control signal used by the filters to synchronize the symbol timing to the sample timing. The punctual signal is output as an information bearing signal representing the received inphase and quadrature signals. The matched filters are interpolating matched filters. The symbol synchronizing estimator normalizes the early and late signals in a manner that allows the demodulator to "flywheel" over signals having a low signal to noise ratio below a predetermined threshold.

90 citations


Journal ArticleDOI
V. Friedman1
TL;DR: A new algorithm for the estimation of the frequency of a single sinusoid in white noise, based on the computation of the interval between zero crossings, is presented, showing that for a high signal-to-noise ratio, the output error spectrum is concentrated in the high-frequency region.
Abstract: Presents a new algorithm for the estimation of the frequency of a single sinusoid in white noise, based on the computation of the interval between zero crossings. It is shown that for a high signal-to-noise ratio, the output error spectrum is concentrated in the high-frequency region. Theoretically, the desired precision can be achieved using a proper low-pass filter. The spectrum of the error due to the interpolation of the zero crossing is computed. Simulation results show that a precision of 10^-7 to 10^-8 can be obtained with modest computational effort.
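The core of the zero-crossing estimator is easy to sketch. The low-pass filtering of the interval sequence, which the paper uses to push precision to 10^-7, is omitted here, and the test frequency is an arbitrary choice:

```python
import numpy as np

def zc_frequency(x, fs):
    """Estimate the frequency of a single sinusoid from the intervals between
    its linearly interpolated zero crossings (basic idea only; the paper
    additionally low-pass filters the interval sequence)."""
    s = np.signbit(x)
    idx = np.flatnonzero(s[1:] != s[:-1])       # sample pairs straddling a crossing
    frac = x[idx] / (x[idx] - x[idx + 1])       # linear interpolation inside the pair
    t = (idx + frac) / fs                       # crossing instants in seconds
    # consecutive zero crossings of a sinusoid are half a period apart
    return 1.0 / (2.0 * np.mean(np.diff(t)))

fs, f0 = 1000.0, 37.3
t = np.arange(8192) / fs
x = np.sin(2 * np.pi * f0 * t + 0.7)            # noise-free test sinusoid
f_hat = zc_frequency(x, fs)
```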

Journal ArticleDOI
C. G. Xie1, S.M. Huang, C.P. Lenn, A.L. Stott, M.S. Beck 
01 Oct 1994
TL;DR: In this paper, an experimental method for evaluating the performance of capacitance tomographic flow imaging systems is described; criteria are defined to describe the performance, namely spatial and permittivity resolution, accuracy (system errors) and signal to noise ratio.
Abstract: An experimental method for evaluating the performance of capacitance tomographic flow imaging systems is described. Criteria are defined to describe the performance of these systems, namely spatial and permittivity resolution, accuracy (system errors) and signal to noise ratio. Static physical models simulating typical flow distribution patterns are used as standard test objects, and the criteria are quantified by comparing the reconstructed images with the standards. Systems with 8 and 12 electrodes have been tested and the results compared to show the improvement obtained with the increased number of electrodes.

Journal ArticleDOI
TL;DR: In this article, an improved version of the commonly used extended Kalman filter (EKF) by incorporating an adaptive filter procedure is presented, where the system noise covariance is updated in time segments to ensure statistical consistency between the predicted error covariance and the mean square of actual residuals.
Abstract: In the application of system identification to a structural system, unknown parameters are determined based on the numerical analysis of input and output measurements. The accuracy of an identified parameter and its uncertainty both depend on the numerical method, measurement noise and modeling error. Most studies, however, identify parameter means without addressing the issue of parameter uncertainties. Presented in this paper is an improved version of the commonly used extended Kalman filter (EKF) by incorporating an adaptive filter procedure. The system noise covariance is updated in time segments in order to ensure statistical consistency between the predicted error covariance and the mean square of actual residuals. Comprising two stages in a cycle, the adaptive EKF method not only identifies the parameter values but also gives a useful estimate of uncertainties. Two numerical examples of simulation with noise are presented. The first example illustrates the superior statistical performance of the pr...

Journal ArticleDOI
TL;DR: In this paper, a probability theory was proposed to predict the relative standard deviation of repeated measurements in high-performance liquid chromatography (HPLC) using a mixed random process comprising white noise and Markov process.
Abstract: The aim of this paper is to propose and experimentally prove a probability theory to predict the relative standard deviation of repeated measurements in high-performance liquid chromatography (HPLC). The baseline drift in HPLC, often formulated as 1/f noise, is approximated by a mixed random process comprising white noise and a Markov process. The standard deviation (SD), w, of the white noise and the SD, m, and retention parameter, p, of the Markov process completely specify the stochastic properties of the respective random processes and are determined from the power spectral density of the baseline by least-squares curve fitting.

Proceedings ArticleDOI
15 Mar 1994
TL;DR: An algorithm that isolates the coherent structures of a signal and an application to pattern extraction from noisy signals is described, which derives a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions.
Abstract: Computing the optimal expansion of signals in a redundant dictionary of waveforms is an NP-complete problem. We introduce a greedy algorithm, called matching pursuit, that performs a suboptimal expansion. The waveforms are chosen iteratively in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit is a chaotic map whose attractor defines a generic noise with respect to the dictionary. We derive an algorithm that isolates the coherent structures of a signal, and an application to pattern extraction from noisy signals is described.

Proceedings ArticleDOI
19 Apr 1994
TL;DR: An evaluation of the CSS-PMC approach using the Noisex 92 database shows high recognition performance for very noisy environments, and for the Lynx helicopter noise, CSS-PMC gives 97% accuracy at 0 dB SNR.
Abstract: This paper describes a scheme for robust speech recognition at very poor signal to noise ratios. It consists of a continuous spectral subtraction (CSS) scheme integrated into a parallel model combination (PMC) compensation framework. In this CSS-PMC scheme, a smoothed estimate of the long term spectrum is continuously calculated and subtracted from the signal. At the same time, the HMMs are compensated using PMC for the signal distortion caused by the CSS stage. The paper presents an evaluation of the CSS-PMC approach using the Noisex 92 database. The results show high recognition performance for very noisy environments. For example, for the Lynx helicopter noise, CSS-PMC gives 97% accuracy at 0 dB SNR.
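A bare-bones sketch of the CSS front end: a running smoothed estimate of the long-term magnitude spectrum is subtracted frame by frame. The smoothing constant, spectral floor, and synthetic frames are illustrative assumptions, and the PMC model-compensation stage is not shown:

```python
import numpy as np

def spectral_subtract(frames, alpha=0.95, floor=0.01):
    """Continuous spectral subtraction: subtract a running (smoothed) estimate
    of the long-term magnitude spectrum from each frame, keeping a small
    spectral floor. Sketch of the CSS stage only."""
    noise = np.abs(np.fft.rfft(frames[0]))             # initial background guess
    out = []
    for frame in frames:
        spec = np.fft.rfft(frame)
        mag = np.abs(spec)
        noise = alpha * noise + (1.0 - alpha) * mag    # running long-term spectrum
        clean = np.maximum(mag - noise, floor * mag)   # subtract, keep a floor
        out.append(np.fft.irfft(clean * np.exp(1j * np.angle(spec)), n=frame.size))
    return np.array(out)

# stationary background tone, with one much louder "speech-like" burst
n = 64
tone = np.sin(2 * np.pi * 5 * np.arange(n) / n)
frames = np.tile(tone, (20, 1))
frames[10] *= 10.0
out = spectral_subtract(frames)
```

The stationary tone is driven down to the floor, while the louder burst in frame 10 passes nearly intact.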

Journal ArticleDOI
TL;DR: In this paper, a general image reconstruction algorithm for any bipolar drive configuration has been produced, and reconstruction algorithms for the adjacent, cross and polar drive configurations are compared in terms of their resolution and noise performance.
Abstract: The standard 'backprojection' image reconstruction algorithm developed at Sheffield uses data obtained by passing current between adjacent pairs of electrodes and measuring voltage differences between the remaining pairs. Previously it has been argued that this configuration gives the best resolution compared to all other bipolar drive configurations. However, it also gives the worst signal to noise ratio data and it is possible that under conditions of poor signal to noise ratio it may be advantageous to use an alternative drive configuration, even at the expense of resolution. A general image reconstruction algorithm for any bipolar drive configuration has been produced. Reconstruction algorithms for the adjacent, cross (drive electrodes 90 degrees apart) and polar drive (drive electrodes 180 degrees apart) have been examined and compared in terms of their resolution and noise performance.

Journal ArticleDOI
TL;DR: In this article, a complex adaptive notch filter is implemented as a constrained IIR filter using a complex Gauss-Newton type algorithm to adjust its coefficients, which has fast convergence, small bias, and achieves the Cramer-Rao bound.
Abstract: In this paper, conventional real coefficient adaptive notch filters (ANF's) are extended to complex coefficient ones. This complex adaptive notch filter is implemented as a constrained IIR filter using a complex Gauss-Newton type algorithm to adjust its coefficients. When the ANF algorithm is applied to estimate the frequencies of sinusoids embedded in white noise, the results show that this algorithm has fast convergence, small bias, and achieves the Cramer-Rao bound. Furthermore, this ANF algorithm is used to estimate the parameters of multiple chirp signals. Simulation results also demonstrate very good performance. Finally, when this ANF algorithm is used to suppress narrowband interference in QPSK spread spectrum communication systems, the analytic result reveals that its signal-to-noise ratio improvement factor is greater than that of a conventional one-sided prediction filter.

Journal ArticleDOI
TL;DR: This work considers one global and two local clutter metrics; it compares actual target acquisition results to those predicted by the metrics and suggests that clutter in the scene affects the human performance.
Abstract: To model the target acquisition capability of an electro-optic system operated by a human, one should take into account how the clutter in the scene affects the human performance. We consider one global and two local clutter metrics; we compare actual target acquisition results to those predicted by our metrics.

Journal ArticleDOI
TL;DR: A high-performance cascaded sigma-delta modulator has a new three-stage fourth-order topology and provides functionally a maximum signal to quantization noise ratio of 16 bits and 16.5-bit dynamic range with an oversampling ratio of only 32.
Abstract: A high-performance cascaded sigma-delta modulator is presented. It has a new three-stage fourth-order topology and provides functionally a maximum signal to quantization noise ratio of 16 bits and 16.5-bit dynamic range with an oversampling ratio of only 32. This modulator is implemented with fully differential switch-capacitor circuits and is manufactured in a 2-/spl mu/m BiCMOS process. The converter, operated from +/-2.5 V power supply, +/-1.25 V reference voltage and oversampling clock of 48 MHz, achieves 97 dB resolution at a Nyquist conversion rate of 1.5 MHz after comb-filtering decimation. The power consumption of the converter is 180 mW.

Proceedings ArticleDOI
09 Jun 1994
TL;DR: In this article, a second-order sigma-delta modulator with a 3-b internal quantizer employing the individual level averaging technique has been designed and implemented in a 1.2 μm CMOS technology.
Abstract: A second-order sigma-delta modulator with a 3-b internal quantizer employing the individual level averaging technique has been designed and implemented in a 1.2 μm CMOS technology. Testing results show no observable harmonic distortion components above the noise floor. A peak S/(N + D) ratio of 91 dB and a dynamic range of 96 dB have been achieved at a clock rate of 2.56 MHz for a 20 kHz baseband. No tone is observed in the baseband as the amplitude of a 10 kHz input sine wave is reduced from −0.5 dB to −107 dB below the voltage reference. The active area of the prototype chip is 3.1 mm² and it dissipates 67.5 mW of power from a 5 V supply.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the signal-to-noise properties of greenhouse-gas changes in a coupled ocean-atmosphere model and found that the dominant signal and noise patterns are highly similar, and the separation of the signal from the natural variability noise is difficult.
Abstract: Results from a control integration and time-dependent greenhouse warming experiments performed with a coupled ocean-atmosphere model are analysed in terms of their signal-to-noise properties. The aim is to illustrate techniques for efficient description of the space-time evolution of signals and noise and to identify potentially useful components of a multivariate greenhouse-gas “fingerprint”. The three 100-year experiments analysed here simulate the response of the climate system to a step-function doubling of CO2 and to the time-dependent greenhouse-gas increases specified in Scenarios A (“Business as Usual”) and D (“Draconian Measures”) of the Intergovernmental Panel on Climate Change (IPCC). If signal and noise patterns are highly similar, the separation of the signal from the natural variability noise is difficult. We use the pattern correlation between the dominant Empirical Orthogonal Functions (EOFs) of the control run and the Scenario A experiment as a measure of the similarity of signal and noise patterns. The EOF 1 patterns of signal and noise are least similar for near-surface temperature and the vertical structure of zonal winds, and are most similar for sea level pressure (SLP). The dominant signal and noise modes of precipitable water and stratospheric/tropospheric temperature contrasts show considerable pattern similarity. Despite the differences in forcing history, a highly similar EOF 1 surface temperature response pattern is found in all three greenhouse warming experiments. A large part of this similarity is due to a common land-sea contrast component of the signal. To determine the degree to which the signal is contaminated by the natural variability (and/or drift) of the control run, we project the Scenario A data onto EOFs 1 and 2 of the control. Signal contamination by the EOF 1 and 2 modes of the noise is lowest for near-surface temperature, a situation favorable for detection. 
The signals for precipitable water, SLP, and the vertical structure of zonal temperature and zonal winds are significantly contaminated by the dominant noise modes. We use cumulative explained spatial variance, principal component time series, and projections onto EOFs in order to investigate the time evolution of the dominant signal and noise modes. In the case of near-surface temperature, a single pattern emerges as the dominant signal component in the second half of the Scenario A experiment. The projections onto EOFs 1 and 2 of the control run indicate that Scenario D has a large common variability and/or drift component with the control run. This common component is also apparent between years 30 and 50 of the Scenario A experiment, but is small in the 2 × CO2 integration. The trajectories of the dominant Scenario A and control run modes evolve differently, regardless of the basis vectors chosen for projection, thus making it feasible to separate signal and noise within the first two decades of the experiments. For Scenario D it may not be possible to discriminate between the dominant signal and noise modes until the final 2–3 decades of the 100-year integration.

Proceedings ArticleDOI
R. Gross1, D. Veeneman
01 May 1994
TL;DR: An accurate model developed to analyze the effects of clipping for a Gaussian signal with an arbitrary spectrum is presented and shows a 5 dB improvement in the signal-to-noise (SNR) ratio compared to previous work using an approximate analysis.
Abstract: An accurate model developed to analyze the effects of clipping for a Gaussian signal with an arbitrary spectrum is presented. The model provides information on the reduction of the signal level, the total noise power due to clipping and the spectral properties of the noise. The method is applied to a discrete multitone (DMT) transmission system with parameters that are applicable for the asymmetric digital subscriber line (ADSL) technology. System calculations given to illustrate this approach show a 5 dB improvement in the signal-to-noise (SNR) ratio compared to previous work using an approximate analysis.
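The "total noise power due to clipping" has a standard closed form for a memoryless symmetric clipper acting on a Gaussian signal (a textbook calculation consistent with, but not taken from, the paper's more detailed spectral model):

```python
import math

def clip_noise_power(mu):
    """Two-sided clipping noise power, relative to unit signal power, for a
    zero-mean unit-variance Gaussian signal clipped at +/- mu standard
    deviations: 2 * [(1 + mu^2) * Q(mu) - mu * phi(mu)]."""
    Q = 0.5 * math.erfc(mu / math.sqrt(2.0))                    # Gaussian tail
    phi = math.exp(-0.5 * mu * mu) / math.sqrt(2.0 * math.pi)   # Gaussian pdf
    return 2.0 * ((1.0 + mu * mu) * Q - mu * phi)

for mu in (3.0, 4.0):
    snr_db = -10.0 * math.log10(clip_noise_power(mu))
    print(f"clip at {mu:.0f} sigma -> clipping SNR ~ {snr_db:.1f} dB")
```

Raising the clip level from 3 to 4 standard deviations buys roughly 18 dB of clipping SNR, which is why DMT systems budget a generous peak-to-average ratio.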

Journal ArticleDOI
TL;DR: A transputer-based platform has been developed which allows control of most relevant parameters of a diode- laser spectrometer and a signal-processing concept with novel aspects for tunable-diode-laser spectroscopy is presented and discussed.
Abstract: Tunable-diode-laser absorption spectroscopy fulfills the major requirements for trace-gas analysis: sensitivity, specificity, high detection speed, and the possibility of simultaneous in situ measurements. The well-known limitations for low-concentration measurements become more and more dominant at sub-part-per-billion levels, where sensitive spectrometers are often influenced by noise, drift effects, and changes in the spectral background structure. While many improvements in instrument development focus on optimizing electronics and optical components, much less effort has been put into postdetection signal processing and adaptive control. Therefore, a transputer-based platform has been developed which allows control of most relevant parameters of a diode-laser spectrometer. Fluctuations in the signal amplitude as well as drift and jitter effects in the frequency domain can cause a significant degradation of system performance and therefore determine the ultimate detection limit. A signal-processing concept with novel aspects for tunable-diode-laser spectroscopy is presented and discussed.

Journal ArticleDOI
TL;DR: The model predicts that an 8-b single-band image subject to noise with unit standard deviation can be compressed reversibly to no less than 2.0 b/pixel, equivalent to a maximum compression ratio of about 4:1, and has been extended to multispectral imagery.
Abstract: Reversible image compression rarely achieves compression ratios larger than about 3:1. An explanation of this limit is offered, which hinges upon the additive noise the sensor introduces into the image. Simple models of this noise allow lower bounds on the bit rate to be estimated from sensor noise parameters rather than from ensembles of typical images. The model predicts that an 8-b single-band image subject to noise with unit standard deviation can be compressed reversibly to no less than 2.0 b/pixel, equivalent to a maximum compression ratio of about 4:1. The model has been extended to multispectral imagery. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) is used as an example, as the noise in its 224 bands is well characterized. The model predicts a lower bound on the bit rate for the compressed data of about 5.5 b/pixel when a single codebook is used to encode all the bands. A separate codebook for each band (i.e., 224 codebooks) reduces this bound by 0.5 b/pixel to about 5.0 b/pixel, but 90% of this reduction is provided by only four codebooks. Empirical results corroborate these theoretical predictions.
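The abstract's 2.0 b/pixel floor for unit-sigma noise is consistent with the entropy of finely quantised Gaussian noise under the standard high-rate approximation (our reading of where the bound comes from; the paper's model is more detailed):

```python
import math

def gaussian_noise_rate(sigma):
    """Entropy (bits/sample) of finely quantised Gaussian noise of std `sigma`,
    using the high-rate approximation H ~ 0.5 * log2(2 * pi * e * sigma^2)."""
    return 0.5 * math.log2(2.0 * math.pi * math.e * sigma * sigma)

rate = gaussian_noise_rate(1.0)   # unit-sigma sensor noise
print(f"lower bound ~ {rate:.2f} b/pixel; max ratio ~ {8.0 / rate:.1f}:1")
```

The same arithmetic reproduces the "about 4:1" maximum ratio quoted for an 8-b image.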

Journal ArticleDOI
TL;DR: Accurate simplifications of the Rife and Boorstyn (1974) performance equations for the maximum likelihood estimator of frequency allow quick calculation of the onset of threshold as a function of sample size and signal to noise ratio.
Abstract: We present accurate simplifications of the Rife and Boorstyn (1974) performance equations for the maximum likelihood estimator of frequency. The simplicity of the result allows quick calculation of the onset of threshold as a function of sample size and signal to noise ratio (SNR). The accuracy of the new expression is demonstrated via simulation.
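Above threshold, the ML frequency estimator attains the Cramer-Rao bound; the textbook form of that bound for a single complex sinusoid in white noise is our reference point here (the paper's simplified threshold expressions are not reproduced):

```python
import math

def crb_freq_hz(snr_db, n, fs):
    """Cramer-Rao bound (standard deviation, Hz) for the frequency of a single
    complex sinusoid in white noise, using the textbook form
    var(w) >= 12 / (SNR * N * (N^2 - 1)) in rad^2 per sample^2."""
    snr = 10.0 ** (snr_db / 10.0)
    var_w = 12.0 / (snr * n * (n * n - 1.0))
    return math.sqrt(var_w) * fs / (2.0 * math.pi)

# above threshold the ML estimator tracks this bound; below threshold,
# outliers from picking the wrong spectral peak dominate the error
print(crb_freq_hz(10.0, 1024, 8000.0))
```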

Journal ArticleDOI
TL;DR: The paper studies the effect of model errors, i.e., differences between the assumed and actual array response, on the quality of the reconstructed signals, based on a signal estimation technique based on the MUSIC algorithm.
Abstract: Sensor arrays are frequently used to separate and reconstruct superimposed signals arriving from different directions. The paper studies the effect of model errors, i.e., differences between the assumed and actual array response, on the quality of the reconstructed signals. Model errors are the limiting factor of array performance when the observation time is sufficiently long. The authors analyze a signal estimation technique which is based on the MUSIC algorithm. Formulas are derived for the signal-to-interference and signal-to-noise ratios as a function of the model errors. By evaluating these formulas for selected test cases, they gain some insights into the sensitivity of the signal estimation problem to model uncertainty.

Journal ArticleDOI
TL;DR: The paper addresses selecting and processing the best images from a finite data set of compensated short-exposure images; comparing image-spectrum SNRs shows a broad range of practical cases where processing the selected subset of the data results in superior SNR.
Abstract: Adaptive-optics systems have been used to overcome some of the effects of atmospheric turbulence on large-aperture astronomical telescopes. However, the correction provided by adaptive optics cannot restore diffraction-limited performance, due to discretized spatial sampling of the wavefront, limited degrees of freedom in the adaptive-optics system, and wavefront sensor measurement noise. Field experience with adaptive-optics imaging systems making short-exposure image measurements has shown that some of the images are better than others in the sense that the better images have higher resolution. This is a natural consequence of the statistical nature of the compensated optical transfer function in an adaptive-optics telescope. Hybrid imaging techniques have been proposed that combine adaptive optics and postdetection image processing to improve the high-spatial-frequency information of images. Performance analyses of hybrid methods have been based on prior knowledge of the ensemble statistics of the underlying random process. Improved image-spectrum SNRs have been predicted, and in some cases experimentally demonstrated. In this paper we address the issue of selecting and processing the best images from a finite data set of compensated short-exposure images. Image sharpness measures are used to select the data subset to be processed. Comparison of the image-spectrum SNRs for the cases of processing the entire data set and processing only the selected subset of the data shows a broad range of practical cases where processing the selected subset results in superior SNR.

Journal ArticleDOI
J.J. Simpson1, S.R. Yhann1
TL;DR: Use of the filtered data to improve image segmentation, labeling in cloud screening algorithms for AVHRR data, and multichannel sea surface temperature (MCSST) estimates is demonstrated.
Abstract: The channel 3 data of the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA series of weather satellites (NOAA 6-12) are contaminated by instrumentation noise. The signal to noise ratio (S/N) varies considerably from image to image and the between sensor variation in S/N can be large. The characteristics of the channel noise in the image data are examined using Fourier techniques. A Wiener filtering technique is developed to reduce the noise in the channel 3 image data. The noise and signal power spectra for the Wiener filter are estimated from the channel 3 and channel 4 AVHRR data in a manner which makes the filter adaptive to observed variations in the noise power spectra. Thus, the degree of filtering is dependent upon the level of noise in the original data and the filter is adaptive to variations in noise characteristics. Use of the filtered data to improve image segmentation, labeling in cloud screening algorithms for AVHRR data, and multichannel sea surface temperature (MCSST) estimates is demonstrated. Examples also show that the method can be used with success in land applications. The Wiener filtering model is compared with alternate filtering methods and is shown to be superior in all applications tested.
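At the heart of the method is the frequency-domain Wiener filter, a per-bin gain S/(S+N). This 1-D sketch with oracle PSDs is illustrative only (the paper estimates the 2-D power spectra adaptively from the channel 3 and channel 4 data):

```python
import numpy as np

def wiener_filter(x, signal_psd, noise_psd):
    """Frequency-domain Wiener filter: attenuate each FFT bin by the gain
    S / (S + N). 1-D sketch of the idea used in the paper."""
    H = signal_psd / (signal_psd + noise_psd)
    return np.real(np.fft.ifft(np.fft.fft(x) * H))

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)
s = 2.0 * np.sin(2 * np.pi * 3 * t / n) + np.cos(2 * np.pi * 7 * t / n)
x = s + rng.standard_normal(n)          # unit-variance white noise added

Sf = np.abs(np.fft.fft(s)) ** 2         # oracle signal PSD (per FFT bin)
Nf = np.full(n, float(n))               # white noise: E|FFT|^2 = n * sigma^2
y = wiener_filter(x, Sf, Nf)
```

Using the true signal spectrum as the "estimate" is of course circular; the point of the paper is precisely how to obtain usable S and N estimates from the data themselves.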

Patent
08 Apr 1994
TL;DR: In this patent, noise in the low frequency band of digitized analog video signals is shifted upband by adding a dither signal to the analog input, and a 2X decimation digital filter is employed to remove the unwanted dither and spurious intermodulation signals.
Abstract: When processing video television information the low frequency band is inherently of primary interest due to the natural averaging properties of the human eye, combined with the limited frequency response of video display elements such as phosphor and liquid crystal displays. Noise is generated and observed in the low frequency region of digitized analog video signals due to nonlinearities inherent in the digitization process. This invention reduces the noise measured in the low frequency region by shifting the noise upband and out of the frequencies of interest by adding a dither signal to the analog input signal and employing a 2X decimation digital filter to remove the unwanted dither and spurious intermodulation signals. The disclosed invention allows for a simple and inexpensive means for removal of the dither signal without having to resort to complex dither subtraction techniques employed in the prior art.