
Showing papers on "Signal-to-noise ratio published in 1997"


Journal ArticleDOI
TL;DR: In the signal-space projection (SSP) method, the signals measured by d sensors are considered to form a time-varying vector in a d-dimensional signal space; the angle between vectors representing different equivalence classes of current configurations is a measure of their similarity in signal space and a way to characterise the separability of sources.
Abstract: CURRENTS INSIDE a conducting body can be estimated by measuring the magnetic and/or the electric field at multiple locations outside and then constructing a solution to the inverse problem, i.e. determining a current configuration that could have produced the measured field. Unfortunately, there is no unique solution to this problem (HELMHOLTZ, 1853) unless restricting assumptions are made. The minimum-norm estimate (HÄMÄLÄINEN and ILMONIEMI, 1994) provides a solution with the smallest expected overall error when minimum a priori information about the source distribution is available. Other methods to estimate a continuous current distribution producing the measured signals have been studied (PASCUAL-MARQUI et al., 1994; WANG et al., 1995; GORODNITSKY et al., 1995). A different approach is to divide the brain activity into discrete components such as current dipoles (SCHERG, 1990; MOSHER et al., 1992). Here we widen this approach into arbitrary current configurations. In our signal-space projection (SSP) method, the signals measured by d sensors are considered to form a time-varying vector in a d-dimensional signal space. The component vectors, i.e. the signals caused by the different neuronal sources, have different and fixed orientations in the signal space. In other words, each source has a distinct and stable field pattern. All the current configurations producing the same measured field pattern are indistinguishable on the basis of the field: they have the same vector direction in the signal space and thus belong to the same equivalence class of current configurations (TESCHE et al., 1995a). The angle in the signal space between vectors representing different equivalence classes, e.g. between component vectors, is a measure of similarity of the equivalence classes in signal space and a way to characterise the separability of sources. The cosine of this angle has previously been used as a numerical characterisation of the difference between topographical distributions (DESMEDT and CHALKLIN, 1989). If the direction of at least one of the component vectors forming the measured multi-channel signal can be determined from the data, or is known otherwise, SSP can be used to simplify subsequent analysis. For example, if an early deflection in an evoked response is produced by one source, and the rest of the response is a mixture of signals from this and other sources, SSP can separate the data into two parts so that the early source contributes only to one part. In general, the signals are divided into two orthogonal parts: s∥, including the time-varying contribution from sources with known signal-space directions; and s⊥, including the rest of the signals. Both s∥ and s⊥ can then be analysed separately in more detail. By analysing s⊥, we can detect activity originally masked by s∥. On the other hand, the sources included in s∥ are seen with an enhanced signal-to-noise ratio. By forward modelling of sources in selected patches of cortex, it is possible to form a spatial filter that selectively passes only the signals that may have been generated by currents in the given patches. If the subspace defined by artefacts can be determined, the artefact-free s⊥ can be analysed.
In SSP, in contrast to PCA (HARRIS, 1975; MAIER et al., 1987) and other analysis methods (GRUMMICH et al., 1991; KOLES et al., 1990; KOLES, 1991; SOONG and KOLES, 1995; BESA*), the source decomposition does not depend on the orthogonality of source components or the availability of source or conductivity models. No conductivity or source models are needed if the component vectors are estimated directly from the measured signals. This is useful when no source estimation is needed, e.g. when artefacts or somatomotor activity in a cognitive study must be filtered out. The angles between the components provide an easy and illustrative way to characterise the linear dependence between the components and thus the separability of sources. The concept of signal space in MEG was introduced previously (ILMONIEMI, 1981; ILMONIEMI and WILLIAMSON,
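As an illustration of the projection step described above, here is a minimal numpy sketch (not the authors' implementation; the array shapes, function names and the QR-based projector are assumptions) that splits multichannel data into the component spanned by known field patterns and its orthogonal complement, and computes the signal-space angle used to characterise separability.

```python
import numpy as np

def ssp_split(data, patterns):
    """Split multichannel data into the part spanned by known field patterns
    (s_parallel) and the orthogonal remainder (s_perp).

    data     : (d, T) array, d sensors by T time samples
    patterns : (d, k) array, k known source field patterns as columns
    """
    q, _ = np.linalg.qr(patterns)        # orthonormal basis of the known subspace
    p_par = q @ q.T                      # projector onto that subspace
    s_parallel = p_par @ data            # contribution of the known sources
    s_perp = data - s_parallel           # remainder, free of those sources
    return s_parallel, s_perp

def signal_space_angle(u, v):
    """Angle (degrees) between two field patterns; a separability measure."""
    cosang = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```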

740 citations


Journal ArticleDOI
TL;DR: In this article, a region-of-interest (ROI) analysis is proposed to estimate signal-to-noise ratio (SNR) values in phased array magnitude images.
Abstract: A method is proposed to estimate signal-to-noise ratio (SNR) values in phased array magnitude images, based on a region-of-interest (ROI) analysis. It is shown that the SNR can be found by correcting the measured signal intensity for the noise bias effects and by evaluating the noise variance as the mean square value of all the pixel intensities in a chosen background ROI, divided by twice the number of receivers used. Estimated SNR values are shown to vary spatially within a bound of 20% with respect to the true SNR values as a result of noise correlations between receivers.
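A minimal sketch of the ROI-based estimate described above, assuming a sum-of-squares phased-array magnitude image and the stated noise rule (noise variance taken as the mean square of the background pixels divided by twice the number of receivers); the function name, the bias-correction form and the max() guard are illustrative, not taken from the paper.

```python
import numpy as np

def phased_array_snr(signal_roi, background_roi, n_receivers):
    """ROI-based SNR estimate for a sum-of-squares phased-array magnitude image.

    signal_roi     : pixel intensities inside the signal ROI
    background_roi : pixel intensities inside a noise-only background ROI
    n_receivers    : number of receiver coils combined into the image
    """
    sig = np.asarray(signal_roi, float)
    bg = np.asarray(background_roi, float)
    # Noise variance: mean square of the background pixels / (2 * number of receivers)
    sigma2 = np.mean(bg ** 2) / (2.0 * n_receivers)
    # Correct the measured mean-square signal for the noise bias before taking the root
    amp2 = max(np.mean(sig ** 2) - 2.0 * n_receivers * sigma2, 0.0)
    return np.sqrt(amp2) / np.sqrt(sigma2)
```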

492 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a new algorithm for carrier frequency estimation in burst-mode phase shift keying (PSK) transmissions, which is a data-aided and clock-aided algorithm that is easy to implement in digital form.
Abstract: Burst transmission of digital data is employed in several applications such as satellite time-division multiple access (TDMA) systems and terrestrial mobile cellular radio. We propose a new algorithm for carrier frequency estimation in burst-mode phase shift keying (PSK) transmissions. The algorithm is data-aided and clock-aided and has a feedforward structure that is easy to implement in digital form. Its estimation range is large, about ±20% of the symbol rate, and its accuracy is close to the Cramer-Rao bound (CRB) for a signal-to-noise ratio (SNR) as low as 0 dB. Comparisons with earlier methods are discussed.
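For context, a classic data-aided feedforward frequency estimator of the Luise-Reggiannini type can be sketched as follows; this is not necessarily the algorithm of the paper above, only an illustration of the modulation-stripping and autocorrelation structure that such estimators share. Function and parameter names are assumptions.

```python
import numpy as np

def feedforward_freq_estimate(rx, known_symbols, symbol_period, n_corr=None):
    """Data-aided feedforward carrier-frequency estimate (Luise-Reggiannini style).

    rx            : received symbol-rate samples of a PSK burst (complex)
    known_symbols : transmitted symbols over the same span (data-aided operation)
    symbol_period : symbol duration T in seconds
    n_corr        : number of autocorrelation lags to combine (default: half the burst)
    """
    z = np.asarray(rx) * np.conj(np.asarray(known_symbols))   # strip the modulation
    if n_corr is None:
        n_corr = len(z) // 2
    # Sample autocorrelations R(m) of the modulation-free samples
    r = np.array([np.mean(z[m:] * np.conj(z[:-m])) for m in range(1, n_corr + 1)])
    # Frequency offset from the argument of the summed correlations
    return np.angle(np.sum(r)) / (np.pi * symbol_period * (n_corr + 1))
```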

309 citations


Journal ArticleDOI
TL;DR: The design, test methods, and results of an ambulatory QRS detector are presented and the aim of the design work was to achieve high QRS detection performance in terms of timing accuracy and reliability, without compromising the size and power consumption of the device.
Abstract: The design, test methods, and results of an ambulatory QRS detector are presented. The device is intended for the accurate measurement of heart rate variability (HRV) and reliable QRS detection in both ambulatory and clinical use. The aim of the design work was to achieve high QRS detection performance in terms of timing accuracy and reliability, without compromising the size and power consumption of the device. The complete monitor system consists of a host computer and the detector unit. The detector device is constructed of a commonly available digital signal processing (DSP) microprocessor and other components. The QRS detection algorithm uses optimized prefiltering in conjunction with a matched filter and dual edge threshold detection. The purpose of the prefiltering is to attenuate various noise components in order to achieve improved detection reliability. The matched filter further improves the signal-to-noise ratio (SNR) and symmetrizes the QRS complex for the threshold detection, which is essential in order to achieve the desired performance. The decision for detection is made in real time and no search-back method is employed. The host computer is used to configure the detector unit, which includes the setting of the matched filter impulse response, and in the retrieval and postprocessing of the measurement results. The QRS detection timing accuracy and detection reliability of the detector system were tested with an artificially generated electrocardiogram (ECG) signal corrupted with various noise types; a timing standard deviation of less than 1 ms was achieved with most noise types and levels similar to those encountered in real measurements. QRS detection error rates (ER) of 0.1% and 2.2% were achieved with records 103 and 105 from the MIT-BIH Arrhythmia database, respectively.
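A simplified sketch of a matched-filter QRS detector in the spirit of the description above, assuming a known QRS template; the crude prefilter, the single-threshold peak rule and the refractory handling here are illustrative stand-ins for the paper's optimized prefiltering and dual edge threshold detection.

```python
import numpy as np

def detect_qrs(ecg, template, fs, threshold_ratio=0.5, refractory_s=0.25):
    """Matched-filter QRS detector with a single threshold and a refractory period.

    ecg             : raw ECG samples
    template        : QRS template used as the matched-filter impulse response
    fs              : sampling rate in Hz
    threshold_ratio : detection threshold as a fraction of the maximum filter output
    refractory_s    : minimum spacing between detections in seconds
    """
    pre = np.diff(ecg, prepend=ecg[0])                  # crude baseline-removing prefilter
    mf = np.convolve(pre, template[::-1], mode="same")  # matched filtering (correlation)
    out = mf ** 2                                       # emphasise the QRS energy
    thr = threshold_ratio * out.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(out) - 1):
        is_peak = out[i] > thr and out[i] >= out[i - 1] and out[i] > out[i + 1]
        if is_peak and i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)
```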

272 citations


Journal ArticleDOI
TL;DR: A solution to the problem of identifying multivariable finite dimensional linear time-invariant systems from noisy input/output measurements is developed in the framework of subspace identification and it is shown that the proposed algorithms give consistent estimates when the system is operating in open- or closed-loop.

223 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid holographic system was developed for three-dimensional particle image velocimetry, which combines advantages of both in-line and off-axis holography without having their drawbacks.
Abstract: A hybrid holographic system has been developed for three-dimensional particle image velocimetry. With unique high-pass filters, the system combines advantages of both in-line and off-axis holography without having their drawbacks. It improves the signal-to-noise ratio of the reconstructed image, allows use of 3–15 μm particles in water at high population and achieves large dynamic ranges in both velocity and space. With an automated image acquisition and processing system it has been used for measuring the velocity distributions in a square duct at Re = 1.23 × 10⁵. The data consist of 97 × 97 × 87 vectors (with 50% overlapping of adjacent interrogation windows). The quality of the results is evaluated using the continuity equation; the deviation from the equation decreases rapidly with increasing control volume and reaches a level of less than 10%. Mean velocities, r.m.s. velocity fluctuations and turbulence spectra are estimated using the data.
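The continuity-equation check mentioned above can be sketched as a normalized divergence residual on a regular grid; this is an assumed, generic formulation for illustration, not the authors' exact evaluation procedure.

```python
import numpy as np

def continuity_residual(u, v, w, dx, dy, dz):
    """Relative residual of the incompressible continuity equation on a regular grid.

    u, v, w    : velocity components, each an (nx, ny, nz) array
    dx, dy, dz : grid spacings along the three axes
    """
    dudx = np.gradient(u, dx, axis=0)
    dvdy = np.gradient(v, dy, axis=1)
    dwdz = np.gradient(w, dz, axis=2)
    divergence = dudx + dvdy + dwdz
    # Normalise by the sum of the individual term magnitudes to get a relative deviation
    scale = np.abs(dudx) + np.abs(dvdy) + np.abs(dwdz) + 1e-12
    return np.abs(divergence) / scale
```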

215 citations


Journal ArticleDOI
TL;DR: In a computational model of the piriform cortex, the effect of noradrenergic suppression of synaptic transmission on signal-to-noise ratio is analyzed and increases in levels of norepinephrine mediated by locus coeruleus activity appear to enhance the influence of extrinsic input on cortical representations.
Abstract: Hasselmo, Michael E., Christiane Linster, Madhvi Patil, Daveena Ma, and Milos Cekic. Noradrenergic suppression of synaptic transmission may influence cortical signal-to-noise ratio. J. Neurophysiol...

209 citations


Journal ArticleDOI
TL;DR: A new fuzzy filter for the removal of heavy additive impulse noise, called the weighted fuzzy mean (WFM) filter, is proposed and analyzed in this paper.

172 citations


Journal ArticleDOI
TL;DR: This study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals and shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime.
Abstract: Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron "stochastic resonance without tuning" scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal.
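A toy numerical experiment in the spirit of the abstract, assuming a simple threshold-crossing unit rather than the model system of the paper: it measures the input-output correlation of a subthreshold aperiodic signal as a function of noise level. All names and parameter values are illustrative.

```python
import numpy as np

def asr_correlation(signal, threshold, noise_levels, n_trials=20, rate_win=50, seed=0):
    """Input-output correlation of a simple threshold unit versus noise level.

    signal       : slowly varying, subthreshold aperiodic input (1-D array)
    threshold    : firing threshold of the unit
    noise_levels : iterable of noise standard deviations to test
    """
    rng = np.random.default_rng(seed)
    kernel = np.ones(rate_win) / rate_win
    corrs = []
    for sigma in noise_levels:
        c = 0.0
        for _ in range(n_trials):
            x = signal + sigma * rng.standard_normal(len(signal))
            spikes = (x > threshold).astype(float)           # threshold crossings
            rate = np.convolve(spikes, kernel, mode="same")  # instantaneous firing rate
            if rate.std() > 0:                               # silent trials contribute zero
                c += np.corrcoef(rate, signal)[0, 1]
        corrs.append(c / n_trials)
    return np.array(corrs)

# Example: a slow, subthreshold aperiodic signal
rng = np.random.default_rng(1)
slow = np.convolve(rng.standard_normal(5000), np.ones(200) / 200, mode="same")
print(asr_correlation(slow, threshold=0.2, noise_levels=[0.05, 0.2, 0.8]))
```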

161 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for frequency estimation in a power system by demodulation of two complex signals, which does not introduce a double frequency component and can improve fast frequency estimation of signals with good noise properties.
Abstract: This paper presents a method for frequency estimation in a power system by demodulation of two complex signals. In power system analysis, the αβ-transform is used to convert three-phase quantities to a complex quantity where the real part is the in-phase component and the imaginary part is the quadrature component. This complex signal is demodulated with a known complex phasor rotating in the opposite direction to the input. The advantage of this method is that the demodulation does not introduce a double frequency component. For signals with high signal-to-noise ratio, the filtering demand for the double frequency component can often limit the speed of the frequency estimator. Hence, the method can improve fast frequency estimation of signals with good noise properties. The method loses its benefits for noisy signals, where the filter design is governed by the demand to filter harmonics and white noise. The method has been previously published, but not explored to its potential. The paper presents four examples to illustrate the strengths and weaknesses of the method.
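A minimal sketch of the demodulation idea, assuming balanced three-phase inputs and a simple moving-average smoother in place of a designed low-pass filter; names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def alpha_beta(a, b, c):
    """Clarke (alpha-beta) transform of three phase quantities to one complex signal."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (b - c)
    return alpha + 1j * beta

def estimate_frequency(a, b, c, fs, f_nominal=50.0, lp_win=32):
    """Frequency estimate by demodulating the complex alpha-beta signal with a phasor
    rotating at the nominal frequency; no double-frequency term is produced."""
    s = alpha_beta(a, b, c)
    n = np.arange(len(s))
    d = s * np.exp(-2j * np.pi * f_nominal * n / fs)     # demodulation
    dphi = np.angle(d[1:] * np.conj(d[:-1]))             # phase increment per sample
    deviation = np.convolve(dphi * fs / (2.0 * np.pi),   # light smoothing
                            np.ones(lp_win) / lp_win, mode="same")
    return f_nominal + deviation

# Example: a balanced three-phase set at 50.5 Hz sampled at 1 kHz
fs, f = 1000.0, 50.5
t = np.arange(0, 0.5, 1.0 / fs)
a = np.cos(2 * np.pi * f * t)
b = np.cos(2 * np.pi * f * t - 2 * np.pi / 3)
c = np.cos(2 * np.pi * f * t + 2 * np.pi / 3)
print(estimate_frequency(a, b, c, fs)[100:105])          # approximately 50.5 Hz
```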

145 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a technique that allows the authors to process the visibility samples over the hexagonal sampling grids given by Y-shaped and triangular-shaped arrays with standard rectangular FFT routines.
Abstract: In Earth observation programs there is a need for passive low-frequency (L-band) measurements to monitor soil moisture and ocean salinity with a high spatial resolution of 10-20 km, a radiometric resolution of 1 K and a revisit time of 1-3 days. Compared to total power radiometers, aperture synthesis interferometric radiometers are technologically attractive because of their reduced mass and hardware requirements. Notable instruments in this field are ESTAR, a one-dimensional (1-D) linear interferometer developed by NASA, and MIRAS, a two-dimensional (2-D) Y-shaped interferometer currently under study by the European Space Agency (ESA). Interferometric radiometers measure the correlation between pairs of nondirective antennas. Each complex correlation is a sample of the "visibility" function which, in the ideal case, is the spatial Fourier transform of the brightness temperature distribution. Since most receiver phase and amplitude errors can be hardware calibrated, Fourier-based iterative inversion methods will be useful when antenna errors are small, their radiation voltage patterns are not too different, and mutual coupling is small. In order to minimize on-board hardware requirements (antennas, receivers and correlators), the choice of the interferometer array shape is of great importance since it determines the (u,v) sampling strategy and the minimum number of visibility samples required for a given aliasing level. In this sense, Y-shaped and triangular-shaped arrays with equally spaced antennas are optimal. The main contribution of this paper is a technique that allows the authors to process the visibility samples over the hexagonal sampling grids given by Y-shaped and triangular-shaped arrays with standard rectangular FFT routines. Since no interpolation processes are involved, the risk of induced artifacts in the recovered brightness temperature over the wide field of view required in Earth observation missions is minimized and the signal-to-noise ratio (SNR) is preserved.


Journal ArticleDOI
TL;DR: The proposed method for the detection and parameter estimation of mono- or multicomponent polynomial-phase signals embedded in white Gaussian noise, based on a generalized ambiguity function, is shown to be asymptotically efficient for second-order PPSs and nearly asymptotically efficient for third-order PPSs.
Abstract: The aim of this work is the performance analysis of a method for the detection and parameter estimation of mono or multicomponent polynomial-phase signals (PPS) embedded in white Gaussian noise and based on a generalized ambiguity function. The proposed method is shown to be asymptotically efficient for second-order PPS and nearly asymptotically efficient for third-order PPSs. The method presents some advantages with respect to similar techniques, like the polynomial-phase transform, for example, in terms of (i) a closer approach to the Cramer-Rao lower bounds, (ii) a lower SNR threshold, (iii) a better capability of discriminating multicomponent signals.
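For illustration, a related lag-product (polynomial-phase-transform style) estimator for a second-order PPS can be sketched as below; the paper's generalized-ambiguity-function method differs, and this sketch only shows the basic dechirping idea. Function names, the lag choice and the zero-padding factor are assumptions.

```python
import numpy as np

def estimate_quadratic_phase(s, tau=None, pad=8):
    """Estimate a1 and a2 of s(n) = exp(j*(a0 + a1*n + a2*n**2)) from noisy samples
    using a lag-product transform followed by dechirping."""
    s = np.asarray(s)
    n = np.arange(len(s))
    if tau is None:
        tau = len(s) // 2
    # Lag product: a single complex tone at 2*a2*tau radians per sample
    y = s[tau:] * np.conj(s[:-tau])
    spec = np.fft.fft(y, pad * len(y))
    freqs = np.fft.fftfreq(pad * len(y))                 # cycles per sample
    a2 = np.pi * freqs[np.argmax(np.abs(spec))] / tau
    # Dechirp and estimate the remaining linear phase term a1
    z = s * np.exp(-1j * a2 * n ** 2)
    spec2 = np.fft.fft(z, pad * len(z))
    freqs2 = np.fft.fftfreq(pad * len(z))
    a1 = 2.0 * np.pi * freqs2[np.argmax(np.abs(spec2))]
    return a1, a2

# Example: quadratic-phase signal in noise
rng = np.random.default_rng(0)
n = np.arange(512)
s = np.exp(1j * (0.3 + 0.4 * n + 1e-4 * n ** 2)) + 0.1 * rng.standard_normal(512)
print(estimate_quadratic_phase(s))                       # roughly (0.4, 1e-4)
```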

Journal ArticleDOI
TL;DR: Experimental evaluations demonstrate that the noise compensation methods achieve substantial improvement in recognition across a wide range of signal-to-noise ratios, and show that the cepstral-time matrix is more robust than a vector of identical size, which is composed of a combination of cEPstral and differential cepStral features.
Abstract: Several noise compensation schemes for speech recognition in impulsive and nonimpulsive noise are considered. The noise compensation schemes are spectral subtraction, HMM-based Wiener (1949) filters, noise-adaptive HMMs, and a front-end impulsive noise removal. The use of the cepstral-time matrix as an improved speech feature set is explored, and the noise compensation methods are extended for use with cepstral-time features. Experimental evaluations, on a spoken digit database, in the presence of car noise, helicopter noise, and impulsive noise, demonstrate that the noise compensation methods achieve substantial improvement in recognition across a wide range of signal-to-noise ratios. The results also show that the cepstral-time matrix is more robust than a vector of identical size, which is composed of a combination of cepstral and differential cepstral features.
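A minimal sketch of magnitude spectral subtraction, the first of the compensation schemes listed above, assuming a separate noise-only segment is available; the frame length, overlap and spectral floor are illustrative choices, not the paper's settings.

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, frame_len=256, hop=128, floor=0.01):
    """Magnitude spectral subtraction with overlap-add reconstruction.

    noisy      : noisy speech samples
    noise_only : samples containing noise only (e.g. from a speech pause)
    """
    window = np.hanning(frame_len)
    # Average magnitude spectrum of the noise
    noise_frames = [noise_only[i:i + frame_len] * window
                    for i in range(0, len(noise_only) - frame_len, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame_len, hop):
        frame = noisy[i:i + frame_len] * window
        spec = np.fft.rfft(frame)
        mag = np.abs(spec) - noise_mag                 # subtract the noise estimate
        mag = np.maximum(mag, floor * noise_mag)       # spectral floor against musical noise
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
        out[i:i + frame_len] += clean                  # overlap-add (Hann, 50% overlap)
    return out
```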

Journal ArticleDOI
TL;DR: This paper introduces a clutter removal filter based on singular value decomposition (SVD); its performance, particularly for low blood flow velocities, is good when a large temporal window is applied, and it is compared with a standard linear regression filter.
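A minimal sketch of SVD-based clutter rejection, assuming the slow-time data are arranged as a matrix and that clutter dominates the largest singular components; the rank choice and names are illustrative, not the paper's filter design.

```python
import numpy as np

def svd_clutter_filter(data, n_clutter=1):
    """Remove clutter by zeroing the largest singular components of the slow-time
    data matrix (rows: range gates, columns: temporal ensemble samples)."""
    u, s, vh = np.linalg.svd(np.asarray(data), full_matrices=False)
    s = s.copy()
    s[:n_clutter] = 0.0          # clutter is assumed to occupy the dominant components
    return (u * s) @ vh
```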

Journal ArticleDOI
TL;DR: The mathematical expression of the signal to noise ratio in fluorescence fluctuation experiments is derived for arbitrary sample profiles and for any mechanism of translational motion, and experimentally verified as mentioned in this paper.
Abstract: The mathematical expression of the signal to noise ratio in fluorescence fluctuation experiments is derived for arbitrary sample profiles and for any mechanism of translational motion, and experimentally verified. The signal to noise ratio depends on the mean count rate per particle per dwell time, the mean number of particles per sample volume, time characteristics of the correlation function, sample profile characteristics, and the data collection time. Statistical accuracy of the third order moment of fluorescence intensity fluctuations is also studied. The optimum concentration for the third order moment analysis is about one particle per sample volume.

Journal ArticleDOI
TL;DR: A semianalytic model for the noise in CBV maps is presented and analytic and Monte Carlo techniques for determining the effect of experimental parameters and processing strategies upon CBV‐SNR are introduced.
Abstract: The use of cerebral blood volume (CBV) maps generated from dynamic MRI studies tracking the bolus passage of paramagnetic contrast agents strongly depends on the signal-to-noise ratio (SNR) of the maps. The authors present a semianalytic model for the noise in CBV maps and introduce analytic and Monte Carlo techniques for determining the effect of experimental parameters and processing strategies upon CBV-SNR. CBV-SNR increases as more points are used to estimate the baseline signal level. For typical injections, maps made with 10 baseline points have 34% more noise than those made with 50 baseline points. For a given peak percentage signal drop, an optimum TE can be chosen that, in general, is less than the baseline T2. However, because CBV-SNR is relatively insensitive to TE around this optimum value, choosing TE approximately equal to T2 does not sacrifice much SNR for typical doses of contrast agent. The TR that maximizes spin-echo CBV-SNR satisfies TR/T1 approximately equal to 1.26, whereas as short a TR as possible should be used to maximize gradient-echo CBV-SNR. In general, CBV-SNR is maximized for a given dose of contrast agent by selecting as short an input bolus duration as possible. For image SNR exceeding 20-30, the gamma-fitting procedure adds little extra noise compared with simple numeric integration. However, for noisier input images, as can be the case for high-resolution echo-planar images, the covarying parameters of the gamma-variate fit broaden the distribution of the CBV estimate and thereby decrease CBV-SNR. The authors compared the analytic noise predicted by their model with that of actual patient data and found that the analytic model accounts for roughly 70% of the measured variability of CBV within white matter regions of interest.
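The simple numeric-integration route to a relative CBV estimate mentioned above can be sketched as follows, assuming concentration is proportional to the change in transverse relaxation rate computed from a baseline period; constants, names and the clipping guard are illustrative.

```python
import numpy as np

def relative_cbv(signal, te, n_baseline=50, dt=1.0):
    """Relative CBV by numeric integration of the bolus-passage concentration curve.

    signal     : dynamic signal time course for one voxel
    te         : echo time (in the same units as the relaxation times)
    n_baseline : number of pre-bolus points used to estimate the baseline level
    dt         : time between dynamic images
    """
    s = np.asarray(signal, float)
    s0 = np.mean(s[:n_baseline])                     # baseline signal level
    # Concentration is taken proportional to the change in transverse relaxation rate
    delta_r2 = -np.log(np.clip(s / s0, 1e-6, None)) / te
    # Trapezoidal integration over the time course gives a relative CBV value
    return 0.5 * dt * np.sum(delta_r2[:-1] + delta_r2[1:])
```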

Journal ArticleDOI
TL;DR: Four algorithms for adaptive retrieval of slowly time-varying multiple cisoids in noise are studied: the adaptive notch filter, the multiple frequency tracker, the adaptive estimation scheme, and the hyperstable adaptive line enhancer.
Abstract: We study and compare four algorithms for adaptive retrieval of slowly time-varying multiple cisoids in noise: the adaptive notch filter, the multiple frequency tracker, the adaptive estimation scheme, and the hyperstable adaptive line enhancer. The local behavior of the algorithms in a neighborhood of their equilibrium state [assuming high signal-to-noise ratio (SNR) and large data sample] for a two-cisoid signal is treated in a similar way to the linear filter approximation technique used for a single-cisoid case. The validity of the results is confirmed by computer simulations.

Journal ArticleDOI
TL;DR: In this paper, a statistical analysis of the maximum average correlation height (MACH) filter is provided, and the performance of the MACH filter is compared to the matched spatial filter (MSF) in terms of the relation between the probabilities of correct detection and false alarm, which is represented as a receiver operating characteristic (ROC) curve.
Abstract: A statistical analysis is provided for the properties of the recently developed maximum average correlation height (MACH) filter (Mahalanobis et al., 1994). It is shown that the MACH filter can be interpreted as an optimum filter for the detection of targets in additive noise. A rationale is given for using a popular peak-to-sidelobe ratio metric to characterize the output of the MACH filter. Finally, the performance of the MACH filter is compared to that of the matched spatial filter (MSF) in terms of the relation between the probabilities of correct detection and false alarm, which is represented as a receiver operating characteristic (ROC) curve. © 1997 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(97)00910-0)

Journal ArticleDOI
TL;DR: Training of the neural network for signal detection and its operation at some specified probability of false alarm are discussed, and the performance of neural detectors is compared with that of matched filter and locally optimum detectors.
Abstract: We employ neural networks to detect known signals in additive non-Gaussian noise. Training of the neural network for signal detection and its operation at some specified probability of false alarm are discussed. The performance of neural detectors is presented under several non-Gaussian noise environments and compared with that of matched filter and locally optimum detectors.

Journal ArticleDOI
TL;DR: It is shown that, unlike the SMI method, the eigencanceler yields a conditional SNR distribution that is dependent on the covariance matrix, and it is further shown that simpler, covariance Matrix-independent approximations can be found for the large interference-to-noise case.
Abstract: The statistical characterization of the conditioned signal-to-noise ratio (SNR) of the sample matrix inversion (SMI) method has been known for some time. An eigenanalysis-based detection method, referred to as the eigencanceler, has been shown to be a useful alternative to SMI, when the interference has low rank. In this work, the density function of the conditioned SNR is developed for the eigencanceler. The development is based on the asymptotic expansion of the distribution of the principal components of the covariance matrix. It is shown that, unlike the SMI method, the eigencanceler yields a conditional SNR distribution that is dependent on the covariance matrix. It is further shown that simpler, covariance matrix-independent approximations can be found for the large interference-to-noise case. The new distribution is shown to be in good agreement with the numerical data obtained from simulations.
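A minimal sketch of an eigenanalysis-based canceler of the kind discussed above, assuming the interference rank is known: the weight vector is the steering vector projected onto the complement of the dominant eigenvectors of the sample covariance. Names and the normalization are assumptions, not the paper's exact formulation.

```python
import numpy as np

def eigencanceler_weights(snapshots, steering, n_interference):
    """Weights that project the steering vector onto the complement of the dominant
    (interference) eigenvectors of the sample covariance matrix.

    snapshots      : (d, K) array of secondary-data snapshots
    steering       : (d,) desired-signal steering vector
    n_interference : assumed interference rank
    """
    x = np.asarray(snapshots)
    steering = np.asarray(steering)
    r = x @ x.conj().T / x.shape[1]                  # sample covariance matrix
    _, v = np.linalg.eigh(r)                         # eigenvalues in ascending order
    e_int = v[:, -n_interference:]                   # dominant interference subspace
    p = np.eye(len(steering)) - e_int @ e_int.conj().T
    w = p @ steering
    return w / (steering.conj() @ w)                 # unit gain toward the desired signal
```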

Journal ArticleDOI
TL;DR: In this paper, singular value decomposition (SVD) of a transient-free formulation of the data matrix is proposed, minimizing the squared error over a finite data length rather than minimizing the expected value of the squared error under the assumption of an infinite length of available data.
Abstract: The conventional method of f-x filtering for random noise reduction suffers from three drawbacks. Firstly, the wavenumber response of the filter does not peak exactly at the wavenumbers of the signal components. Secondly, the amplitude of the filter response is less than one at the signal component wavenumbers, causing attenuation of the signal. Finally, sidelobes in the filter response cause noise at wavenumbers well separated from the signal components to leak into the filtered output. Singular value decomposition (SVD) of the data matrix shows that the problems may be reduced by using a transient-free formulation of the data matrix; that is, minimizing the squared errors over a finite data length rather than minimizing the expected value of the squared errors under the assumption of an infinite length of available data. The SVD analysis of the transient-free case shows that the noise-reduction performance may be improved further at all signal-to-noise ratios (SNRs). Tests on synthetic data show that the SNR gain may typically be 10 dB for the selected eigenvector method, as opposed to 5 dB to 0 dB at different input signal-to-noise ratios for the other methods. The optimal filter lengths were about twice the optimal lengths found for the conventional method. Tests on real stacked data also show considerable improvement in performance. Care must be taken in areas of complicated structure, particularly when strongly curved events are present, to select sufficient eigenvectors, but this may be achieved at the price of a slight loss in noise reduction.
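A generic rank-reduction sketch in the spirit of the selected-eigenvector idea, assuming the seismic section is stored as a matrix with traces as columns; this is not the paper's transient-free f-x formulation, only an illustration of keeping a few singular components and measuring the resulting SNR gain on synthetic data.

```python
import numpy as np

def rank_reduce(section, n_keep):
    """Keep only the leading singular components of a data matrix (traces as columns);
    coherent events concentrate in a few components, random noise spreads over all."""
    u, s, vh = np.linalg.svd(np.asarray(section, float), full_matrices=False)
    s_trunc = np.where(np.arange(len(s)) < n_keep, s, 0.0)
    return (u * s_trunc) @ vh

def snr_gain_db(clean, noisy, filtered):
    """SNR improvement (dB) of the filter on synthetic data with a known clean section."""
    before = np.sum(clean ** 2) / np.sum((noisy - clean) ** 2)
    after = np.sum(clean ** 2) / np.sum((filtered - clean) ** 2)
    return 10.0 * np.log10(after / before)
```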

Journal ArticleDOI
TL;DR: In this paper, nine different iterative tomographic algorithms have been applied to the reconstruction of a two-dimensional object with internal defects from its projections, each projection of the solid object is interpreted as a path integral of the light-sensitive property of the object in the appropriate direction.
Abstract: Iterative tomographic algorithms have been applied to the reconstruction of a two-dimensional object with internal defects from its projections. Nine distinct algorithms with varying numbers of projections and projection angles have been considered. Each projection of the solid object is interpreted as a path integral of the light-sensitive property of the object in the appropriate direction. The integrals are evaluated numerically and are assumed to represent exact data. Errors in reconstruction are defined as the statistics of difference between original and reconstructed objects and are used to compare one algorithm with respect to another. The algorithms used in this work can be classified broadly into three groups, namely the additive algebraic reconstruction technique (ART), the multiplicative algebraic reconstruction technique (MART) and the maximization reconstruction technique (MRT). Additive ART shows a systematic convergence with respect to the number of projections and the value of the relaxation parameter. MART algorithms produce less error at convergence compared to additive ART but converge only at small values of the relaxation parameter. The MRT algorithm shows an intermediate performance when compared to ART and MART. An increasing noise level in the projection data increases the error in the reconstructed field. The maximum and RMS errors are highest in ART and lowest in MART for given projection data. Increasing noise levels in the projection data decrease the convergence rates. For all algorithms, a 20% noise level is seen as an upper limit, beyond which the reconstructed field is barely recognizable.
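Minimal sketches of the additive ART and MART updates compared above, assuming a precomputed projection matrix of path lengths; relaxation values, iteration counts and names are illustrative.

```python
import numpy as np

def art(a, p, n_iter=20, relax=0.5):
    """Additive algebraic reconstruction technique (ART).

    a     : (n_rays, n_pixels) projection matrix of path lengths
    p     : (n_rays,) measured path integrals
    relax : relaxation parameter
    """
    x = np.zeros(a.shape[1])
    row_norm = np.sum(a ** 2, axis=1)
    for _ in range(n_iter):
        for i in range(a.shape[0]):
            if row_norm[i] > 0:
                residual = p[i] - a[i] @ x
                x += relax * residual / row_norm[i] * a[i]   # additive row update
    return x

def mart(a, p, n_iter=20, relax=0.2, eps=1e-12):
    """Multiplicative ART; assumes non-negative data and small relaxation values."""
    x = np.ones(a.shape[1])
    for _ in range(n_iter):
        for i in range(a.shape[0]):
            ratio = p[i] / max(a[i] @ x, eps)
            x *= ratio ** (relax * a[i])                     # multiplicative row update
    return x
```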

Journal ArticleDOI
TL;DR: In this article, a method for adding thermal and amplifier noise to a KLM model for a transducer element is described, which is used to compare the magnitudes of various noise sources in a 5 MHz array element typical of those used for linear array imaging.
Abstract: This paper describes a method for adding thermal and amplifier noise to a KLM model for a transducer element. The model is used to compare the magnitudes of various noise sources in a 5 MHz array element typical of those used for linear array imaging with and without an amplifier. Fundamental signal-to-noise ratio (SNR) issues of importance to array and amplifier designers are explored, including the effect on SNR of the effective dielectric constant of the piezoelectric material, individual element size, changing the number of elements, and adding an amplifier to an element before and after a cable. SNR is considered both for the case in which the acoustic output is limited by the maximum rarefactive pressure which is considered safe for a particular application (Mechanical Index limitation) and the case in which acoustic output is limited by the maximum transmit voltage that can be delivered by the imaging system or tolerated by the transducer. It is shown that the SNR performance depends on many controllable parameters and that significant improvements in SNR can be achieved through proper design. The implications for 1.5-D and 2-D array elements are discussed.

Journal ArticleDOI
TL;DR: The clutter model is shown to refine some recently published models and is used to support the conjecture that clutter caused by relatively large grains often can be well approximated by a Gaussian stochastic process.

Patent
28 Aug 1997
TL;DR: In this article, the excitation applied to the sample is arranged so that phase and amplitude information may be obtained from the response signal, and in which the signal is resolved into two components.
Abstract: A method of nuclear quadrupole resonance testing a sample is disclosed in which the excitation applied to the sample is arranged so that phase and amplitude information may be obtained from the response signal, and in which the signal is resolved into two components. Particularly if a parameter such as radio-frequency field strength varies with position, this may give an indication of the distribution of nuclei in the sample, preferably from the phase of the response signal. Positional information can also be obtained by measuring from two or more reference points. This may be employed in imaging. The phase information may be employed to improve the signal to noise ratio obtainable in other methods where only amplitude information was previously available, for example in distinguishing genuine NQR response signals from spurious signals.

Journal ArticleDOI
TL;DR: The authors present a method for detecting the "true" response of the brain resulting from repeated auditory stimulation, based on selective averaging of single-trial evoked potentials, using an unsupervised fuzzy-clustering algorithm.
Abstract: The problem of extracting a useful signal (a response) buried in relatively high amplitude noise has been investigated, under the conditions of low signal-to-noise ratio. In particular, the authors present a method for detecting the "true" response of the brain resulting from repeated auditory stimulation, based on selective averaging of single-trial evoked potentials. Selective averaging is accomplished in two steps. First, an unsupervised fuzzy-clustering algorithm is employed to identify groups of trials with similar characteristics, using a performance index as an optimization criterion. Then, typical responses are obtained by ensemble averaging of all trials in the same group. Similarity among the resulting estimates is quantified through a synchronization measure, which accounts for the percentage of time that the estimates are in phase. The performance of the classifier is evaluated with synthetic signals of known characteristics, and its usefulness is demonstrated with real electrophysiological data obtained from normal volunteers.
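A compact sketch of selective averaging via fuzzy clustering, assuming trials are stored as rows of a matrix; a plain fuzzy c-means with Euclidean distance stands in for the paper's algorithm and performance-index criterion, and the hard assignment used for averaging is a simplification.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on trials stored as rows of x; returns the membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return u

def selective_average(trials, n_clusters=3):
    """Group single-trial responses and ensemble-average within each group."""
    trials = np.asarray(trials, float)
    u = fuzzy_cmeans(trials, n_clusters)
    labels = np.argmax(u, axis=1)            # hard assignment, a simplification
    return [trials[labels == k].mean(axis=0)
            for k in range(n_clusters) if np.any(labels == k)]
```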

Patent
Michael John McCarthy1
23 Jul 1997
TL;DR: In this paper, a method and system for use with wireless communication systems having a cellular architecture with at least a first and a second cell is presented to ensure near uniform capacity and quality of channels within the second cell via the following steps.
Abstract: A method and system for use with wireless communication systems having a cellular architecture with at least a first and a second cell. The method and system provided ensure near uniform capacity and quality of channels within the second cell via the following steps. The noise signal power in unused data channels within the second cell is monitored. When a request for channel access is received, a determination is made whether the request for channel access is either a request for handoff from the first cell into the second cell, or not. In the event that the request is not a request for handoff, a determination is made whether idle channels exist to satisfy the request for channel access. In the event of a determination either that the request for channel access is a request for handoff, or both that the request is not a request for handoff and that idle channels exist to satisfy the request, a measured received signal power of the mobile subscriber unit making the request is determined. One of the unused channels in the second cell is then preferentially assigned to the mobile subscriber unit, where such preference in assigning is to assign a channel provided that a signal to noise ratio calculated upon the monitored received signal power and the monitored noise signal power of the preferentially assigned noisy channel meets or exceeds a required signal to noise ratio.
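One plausible reading of the assignment rule, sketched as a function; the preference for the noisiest channel that still meets the required SNR, the blocking rule for non-handoff requests and all names are assumptions made for illustration, not the claimed method.

```python
def assign_channel(idle_channels, noise_power, signal_power, required_snr_db, is_handoff):
    """Pick an unused channel whose measured SNR meets the requirement, preferring the
    noisiest such channel so that quieter channels remain available.

    idle_channels   : list of channel ids currently unused in the cell
    noise_power     : dict mapping channel id -> monitored noise power (linear scale)
    signal_power    : measured received power of the requesting mobile (linear scale)
    required_snr_db : minimum acceptable signal-to-noise ratio in dB
    is_handoff      : True if the request is a handoff from a neighbouring cell
    """
    if not is_handoff and not idle_channels:
        return None                              # block new (non-handoff) requests
    required = 10.0 ** (required_snr_db / 10.0)
    usable = [ch for ch in idle_channels
              if signal_power / noise_power[ch] >= required]
    if not usable:
        return None
    # Preference: the noisiest channel that still satisfies the SNR requirement
    return max(usable, key=lambda ch: noise_power[ch])
```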

Proceedings ArticleDOI
03 Nov 1997
TL;DR: The theory and practice of a new advanced modem technology suitable for high data rate wireless communications, and its performance over a frequency-flat Rayleigh fading channel, are presented; the frame error rate (FER) is given as a function of the signal to noise ratio (SNR) and Doppler spread in the presence of timing and frequency offset errors.
Abstract: This paper presents the theory and practice of a new advanced modem technology suitable for high data rate wireless communications and presents its performance over a frequency-flat Rayleigh fading channel. The new technology is based on space-time coded modulation (STCM) with multiple transmit and/or multiple receive antennas and orthogonal pilot sequence insertion (O-PSI). In this approach, data is encoded by a space-time channel encoder and the output of the encoder is split into N streams to be simultaneously transmitted using N transmit antennas. The transmitter inserts periodic orthogonal pilot sequences in each of the simultaneously transmitted bursts. The receiver uses those pilot sequences to estimate the fading channel. When combined with an appropriately designed interpolation filter, accurate channel state information (CSI) can be estimated for the decoding process. Simulation results of the proposed modem as applied to the IS-136 cellular standard are presented. We present the frame error rate (FER) performance as a function of the signal to noise ratio (SNR) and Doppler spread in the presence of timing and frequency offset errors. Simulation results show that, for example, for 10% FER, data rates up to 54 kbps per 30 kHz channel can be supported at an SNR of 11.7 dB and a Doppler spread of 180 Hz using a 32-state 8-PSK space-time code with 2 transmit and 2 receive antennas. Simulation results for other cases are also provided.
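A much-simplified, single-antenna sketch of pilot-aided channel estimation of the kind described above, using linear interpolation in place of the designed interpolation filter and ignoring the space-time structure; names and the least-squares estimate are assumptions.

```python
import numpy as np

def estimate_channel(rx_pilots, tx_pilots, pilot_positions, burst_len):
    """Least-squares channel estimates at the pilot positions, linearly interpolated
    over the whole burst.

    rx_pilots       : received samples at the pilot positions (complex)
    tx_pilots       : known transmitted pilot symbols at those positions
    pilot_positions : increasing indices of the pilot symbols within the burst
    burst_len       : total burst length in symbols
    """
    h_pilot = np.asarray(rx_pilots) / np.asarray(tx_pilots)   # per-pilot LS estimate
    positions = np.asarray(pilot_positions)
    idx = np.arange(burst_len)
    # Interpolate real and imaginary parts separately across the data positions
    h_real = np.interp(idx, positions, h_pilot.real)
    h_imag = np.interp(idx, positions, h_pilot.imag)
    return h_real + 1j * h_imag
```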

Patent
06 Oct 1997
TL;DR: In this article, the authors proposed a method and system for utilization with wireless communications systems having a cellular architecture covering a geographic area, where each sub-area is isolated from other sub-areas by the determined one or more pairs of sectors having a weak connection zone.
Abstract: The foregoing objects are achieved as is now described. Provided are a method and system for utilization with wireless communications systems having a cellular architecture covering a geographic area. The method and system accomplish their objects via the following. The geographic area is defined. One or more pairs of the sectors within the defined geographic area wherein a weak connection zone exists are determined. The geographic area is decomposed into two or more sub-areas wherein each sub-area is isolated from other sub-areas by the determined one or more pairs of sectors having a weak connection zone. A first of the sub-areas is selected. Frequency groups are assigned to each sector within the first selected sub-area such that signal to noise ratio is optimized. Thereafter, a second of the sub-areas is selected. One or more sectors within the second selected one of the sub-areas which are linked to sectors within the first selected sub-area are selected. Frequency groups are assigned to the selected sectors within the selected second of the sub-areas such that signal to noise ratio in the selected sector within the selected second of the sub-areas is optimized. Thereafter, frequency groups are assigned to every other sector within the selected second of the sub-areas such that signal to noise ratio is optimized across the second selected sub-area and such that signal to noise ratio over the defined geographic area is optimized.