
Showing papers on "Gaussian published in 1987"


Journal ArticleDOI
TL;DR: In this article, an elastic finite-difference method is used to perform an inversion for P-wave velocity, S-wave velocity, and density (or, alternatively, P-wave impedance, S-wave impedance, and density); the inversion is based on nonlinear least squares and proceeds by iteratively updating the earth parameters.
Abstract: The treatment of multioffset seismic data as an acoustic wave field is becoming increasingly disturbing to many geophysicists who see a multitude of wave phenomena, such as amplitude-offset variations and shear-wave events, which can only be explained by using the more correct elastic wave equation. Not only are such phenomena ignored by acoustic theory, but they are also treated as undesirable noise when they should be used to provide extra information, such as S-wave velocity, about the subsurface. The problems of using the conventional acoustic wave equation approach can be eliminated via an elastic approach. In this paper, equations have been derived to perform an inversion for P-wave velocity, S-wave velocity, and density as well as the P-wave impedance, S-wave impedance, and density. These are better resolved than the Lame parameters. The inversion is based on nonlinear least squares and proceeds by iteratively updating the earth parameters until a good fit is achieved between the observed data and the modeled data corresponding to these earth parameters. The iterations are based on the preconditioned conjugate gradient algorithm. The fundamental requirement of such a least-squares algorithm is the gradient direction which tells how to update the model parameters. The gradient direction can be derived directly from the wave equation and it may be computed by several wave propagations. Although in principle any scheme could be chosen to perform the wave propagations, the elastic finite-difference method is used because it directly simulates the elastic wave equation and can handle complex, and thus realistic, distributions of elastic parameters. This method of inversion is costly since it is similar to an iterative prestack shot-profile migration. However, it has greater power than any migration since it solves for the P-wave velocity, S-wave velocity, and density and can handle very general situations including transmission problems. 
Three main weaknesses of this technique are that it requires fairly accurate a priori knowledge of the low-wavenumber velocity model, it assumes Gaussian model statistics, and it is very computer-intensive. All these problems seem surmountable. The low-wavenumber information can be obtained either by a prior tomographic step, by the conventional normal-moveout method, by a priori knowledge and empirical relationships, or by adding an additional inversion step for low wavenumbers to each iteration. The Gaussian statistics can be altered by preconditioning the gradient direction, perhaps to make the solution blocky in appearance like well logs, or by using large model variances in the inversion to reduce the effect of the Gaussian model constraints. Moreover, with some improvements to the algorithm and more parallel computers, it is hoped the technique will soon become routinely feasible.
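The iterative least-squares loop the abstract describes can be sketched with a toy forward model standing in for elastic finite-difference wave propagation. Everything below (the model, its two parameters, the Gauss-Newton step) is an illustrative stand-in: the paper derives its gradient directly from the wave equation and updates via preconditioned conjugate gradients.

```python
import numpy as np

# Toy forward model: three synthetic "data" values as a nonlinear
# function of two "earth parameters" (hypothetical, not the wave equation).
def forward(m):
    return np.array([m[0] * np.exp(-m[1]),
                     m[0] + m[1] ** 2,
                     np.sin(m[0]) * m[1]])

def jacobian(m, h=1e-6):
    # Central finite differences; the paper instead obtains the gradient
    # from adjoint wave propagations.
    J = np.zeros((3, 2))
    for j in range(2):
        dm = np.zeros(2)
        dm[j] = h
        J[:, j] = (forward(m + dm) - forward(m - dm)) / (2 * h)
    return J

m_true = np.array([1.5, 0.8])
d_obs = forward(m_true)                  # "observed" data

m = np.array([1.0, 1.0])                 # starting model
for _ in range(20):
    r = forward(m) - d_obs               # residual: modelled minus observed
    dm, *_ = np.linalg.lstsq(jacobian(m), -r, rcond=None)
    m = m + dm                           # Gauss-Newton model update

misfit = float(np.linalg.norm(forward(m) - d_obs))
```

Each pass mirrors the paper's cycle: model the data, form the residual, obtain a gradient direction, update the earth parameters until observed and modelled data agree.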

872 citations


Journal ArticleDOI
TL;DR: A non-Gaussian state-space approach to the modeling of nonstationary time series is presented, in which the system noise and the observational noise are not necessarily Gaussian.
Abstract: A non-Gaussian state-space approach to the modeling of nonstationary time series is shown. The model is expressed in state-space form, where the system noise and the observational noise are not necessarily Gaussian. Recursive formulas of prediction, filtering, and smoothing for the state estimation and identification of the non-Gaussian state-space model are given. Also given is a numerical method based on piecewise linear approximation to the density functions for realizing these formulas. Significant merits of non-Gaussian modeling and the wide range of applicability of the method are illustrated by some numerical examples. A typical application of this non-Gaussian modeling is the smoothing of a time series that has a mean value function with both abrupt and gradual changes. Simple Gaussian state-space modeling is not adequate for this situation: a model with small system noise variance cannot detect the jump, whereas one with large system noise variance yields unfavorable wiggle. To work...
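The prediction/filtering recursions realized by approximating densities on a grid can be sketched as follows. The random-walk model, the heavy-tailed (Cauchy) system noise, and all constants are illustrative assumptions, not the paper's; the point is that heavy-tailed system noise lets the filter follow an abrupt jump in the mean, exactly where plain Gaussian state-space modeling fails.

```python
import numpy as np

# State model (illustrative): x_t = x_{t-1} + w_t, y_t = x_t + v_t,
# with Cauchy system noise w_t and Gaussian observation noise v_t.
grid = np.linspace(-10, 10, 801)
dx = grid[1] - grid[0]

def cauchy(z, scale):
    return scale / (np.pi * (z ** 2 + scale ** 2))

def gauss(z, var):
    return np.exp(-z ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Transition kernel K[i, j] = p(x_t = grid[i] | x_{t-1} = grid[j])
K = cauchy(grid[:, None] - grid[None, :], 0.1)

def filter_step(prior_pdf, y, obs_var=0.5):
    pred = K @ prior_pdf * dx               # prediction (convolution)
    post = pred * gauss(grid - y, obs_var)  # Bayes update with observation y
    return post / (post.sum() * dx)         # renormalize on the grid

pdf = gauss(grid, 1.0)                      # initial density
for y in [0.1, -0.2, 0.0, 5.0, 5.1, 4.9]:   # the mean jumps abruptly to 5
    pdf = filter_step(pdf, y)
mean_est = (grid * pdf).sum() * dx
```

After the jump in the observations, the posterior mass relocates to the new level within a single step, which a small-variance Gaussian system model cannot do.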

867 citations


Journal ArticleDOI
Hans Günter Dosch1
TL;DR: In this article, it was shown that a model in which the QCD vacuum is simulated by a stochastic background field leads to an asymptotically linear potential.

330 citations


Journal ArticleDOI
TL;DR: In this paper, a method for calculating the dispersion of plumes in the atmospheric boundary layer is presented, where the inputs to the method are fundamental meteorological parameters, which act as distinct scaling parameters for the turbulence.

316 citations


Journal ArticleDOI
TL;DR: Convergence analysis of stochastic gradient adaptive filters using the sign algorithm is presented, and the theoretical and empirical curves show a very good match.
Abstract: Convergence analysis of stochastic gradient adaptive filters using the sign algorithm is presented in this paper. The methods of analysis currently available in literature assume that the input signals to the filter are white. This restriction is removed for Gaussian signals in our analysis. Expressions for the second moment of the coefficient vector and the steady-state error power are also derived. Simulation results are presented, and the theoretical and empirical curves show a very good match.
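A minimal sign-algorithm (sign-LMS) adaptive filter, with a coloured Gaussian input to match the non-white setting the analysis covers. The filter length, step size, colouring filter, and target coefficients below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

w_true = np.array([0.6, -0.3, 0.1])   # unknown system to identify
w = np.zeros(3)                       # adaptive filter coefficients
mu = 0.01                             # step size

# Coloured (non-white) Gaussian input: white noise through an FIR filter.
x = np.convolve(rng.standard_normal(5000), [1.0, 0.7], mode="same")

for n in range(3, len(x)):
    u = x[n:n - 3:-1]                 # regressor: last three inputs
    d = w_true @ u                    # desired signal (noiseless for clarity)
    e = d - w @ u                     # a-priori error
    w = w + mu * np.sign(e) * u       # sign-algorithm update
```

The only difference from ordinary LMS is that the update uses sign(e) rather than e, which is what the paper's convergence analysis addresses for correlated Gaussian inputs.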

279 citations


Journal ArticleDOI
Richard A. Young1
TL;DR: In this paper, a new 'difference-of-offset-Gaussians' (DOOG) mechanism was identified, which provides a plausible neural basis for generating Gaussian derivative-like receptive fields.
Abstract: Physiological evidence is presented that visual receptive fields in the primate eye are shaped like the sum of a Gaussian function and its Laplacian. A new 'difference-of-offset-Gaussians' or DOOG neural mechanism was identified, which provided a plausible neural mechanism for generating such Gaussian derivative-like fields. The DOOG mechanism and the associated Gaussian derivative model provided a better approximation to the data than did the Gabor or other competing models. A model-free Wiener filter analysis provided independent confirmation of these results. A machine vision system was constructed to simulate human foveal retinal vision, based on Gaussian derivative filters. It provided edge and line enhancement (deblurring) and noise suppression, while retaining all the information in the original image.
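The DOOG idea can be illustrated in one dimension: differencing two slightly offset Gaussians approximates a Gaussian first derivative. The offset and scale below are arbitrary, not Young's fitted receptive-field parameters.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
sigma, d = 1.0, 0.01   # Gaussian scale and spatial offset (illustrative)

def g(x, mu, s):
    # Normalized Gaussian centred at mu
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

# Difference of two offset Gaussians, scaled by the offset
doog = (g(x, -d, sigma) - g(x, d, sigma)) / (2 * d)

# Exact first derivative of the centred Gaussian, for comparison
g1 = -(x / sigma ** 2) * g(x, 0, sigma)
```

As the offset shrinks, the DOOG profile converges to the Gaussian derivative, which is why offset-Gaussian subunits can build derivative-like fields.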

262 citations


Journal ArticleDOI
TL;DR: The Wigner distribution method is shown to be a convenient framework for characterizing Gaussian kernels and their unitary evolution under Sp(2n,ℝ) action, and the nontrivial role played by a phase term in the kernel is brought out.
Abstract: Gaussian kernels representing operators on the Hilbert space H = L²(ℝⁿ) are studied. Necessary and sufficient conditions on such a kernel in order that the corresponding operator be positive semidefinite, corresponding to a density matrix (cross-spectral density) in quantum mechanics (optics), are derived. The Wigner distribution method is shown to be a convenient framework for characterizing Gaussian kernels and their unitary evolution under Sp(2n,ℝ) action. The nontrivial role played by a phase term in the kernel is brought out. The entire analysis is presented in a form which is directly applicable to n-dimensional oscillator systems in quantum mechanics and to Gaussian Schell-model partially coherent fields in optics.

250 citations


Journal ArticleDOI
TL;DR: In this paper, the propagation of rays, paraxial rays, and Gaussian beams in a medium where slowness differs only slightly from that of a reference medium is studied.
Abstract: We study the propagation of rays, paraxial rays, and Gaussian beams in a medium where slowness differs only slightly from that of a reference medium. Ray theory is developed using a Hamiltonian formalism that is independent of the coordinate system under consideration. Let us consider a ray in the unperturbed medium. The perturbation in slowness produces a change of the trajectory of this ray which may be calculated by means of canonical perturbation theory. We define paraxial rays as those rays that propagate in perturbed medium in the vicinity of the perturbed ray. The ray tracing equation for paraxial rays may be obtained by a linearization of the canonical ray equations. The linearized equations are then solved by a propagator method. With the help of the propagator we form beams, i.e. families of paraxial rays that depend on a single beam parameter. The results are very general and may be applied to a number of kinematic and dynamic ray tracing problems, like two-point ray tracing, Gaussian beams, wave front interpolation, etc. The perturbation methods are applied to the study of a few simple problems in which the unperturbed medium is homogeneous. First, we consider a two-dimensional spherical inclusion with a Gaussian slowness perturbation profile. Second, transmission and reflection problems are examined. We compare results for amplitude and travel time computed by exact and perturbed ray theory. The agreement is excellent and may be improved using an iterative procedure by which we change the reference unperturbed ray whenever the perturbation becomes large. Finally, we apply our technique to a three-dimensional problem: we calculate the amplitude perturbation and ray deflection produced by the velocity structure under the Mont Dore volcano (central France). Again a comparison shows excellent agreement between exact and perturbed ray theory.

214 citations


Journal ArticleDOI
TL;DR: The generalized Lloyd algorithm is applied to the design of joint source and channel trellis waveform coders to encode discrete-time continuous-amplitude stationary and ergodic sources operating over discrete memoryless noisy channels and it is observed that the jointly optimized codes achieve performance close to or better than that of separately optimized tandem codes of the same constraint length.
Abstract: The generalized Lloyd algorithm is applied to the design of joint source and channel trellis waveform coders to encode discrete-time continuous-amplitude stationary and ergodic sources operating over discrete memoryless noisy channels. Experimental results are provided for independent and autoregressive Gaussian sources, binary symmetric channels, and absolute error and squared error distortion measures. Performance of the joint codes is compared with the tandem combination of a trellis source code and a trellis channel code on the independent Gaussian source using the squared error distortion measure operating over an additive white Gaussian noise channel. It is observed that the jointly optimized codes achieve performance close to or better than that of separately optimized tandem codes of the same constraint length. Performance improvement via a predictive joint source and channel trellis code is demonstrated for the autoregressive Gaussian source using the squared error distortion measure.
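The generalized Lloyd iteration at the core of the design, stripped down to a scalar quantizer trained on a Gaussian sequence: alternate nearest-codeword partitioning with centroid updates. The trellis structure and channel terms of the paper are omitted; codebook size and initialization are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

train = rng.standard_normal(20000)             # Gaussian training sequence
codebook = np.array([-2.0, -0.5, 0.5, 2.0])    # 2-bit initial codebook

for _ in range(30):
    # Partition: assign each sample to its nearest codeword
    idx = np.argmin(np.abs(train[:, None] - codebook[None, :]), axis=1)
    # Centroid update: replace each codeword by its cell's mean
    codebook = np.array([train[idx == k].mean() for k in range(4)])

idx = np.argmin(np.abs(train[:, None] - codebook[None, :]), axis=1)
mse = np.mean((train - codebook[idx]) ** 2)    # squared-error distortion
```

For a unit-variance Gaussian source the converged 2-bit quantizer distortion is close to the known Lloyd-Max value of about 0.118.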

193 citations


Journal ArticleDOI
TL;DR: In this article, the statistical theory for the angular central Gaussian model is presented and some topics treated are maximum likelihood estimation of the parameters, testing for uniformity and circularity, and principal components analysis.
Abstract: SUMMARY The angular central Gaussian distribution is an alternative to the Bingham distribution for modeling antipodal symmetric directional data. In this paper the statistical theory for the angular central Gaussian model is presented. Some topics treated are maximum likelihood estimation of the parameters, testing for uniformity and circularity, and principal components analysis. Comparisons to methods based upon the sample second moments are made via an example.

170 citations


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate a technique, using a very high contrast resist, whereby the normalized point exposure distribution can be measured experimentally, both on solid substrates which cause backscattering, and on thin substrates where backscatter is negligible.
Abstract: The exposure distribution function in electron beam lithography, which is needed to perform proximity correction, is usually simulated by Monte Carlo techniques, assuming a Gaussian distribution of the primary beam. The resulting backscattered part of the exposure distribution is usually also fitted to a Gaussian term. In this paper we demonstrate a technique, using a very high contrast resist, whereby the normalized point exposure distribution can be measured experimentally, both on solid substrates which cause backscattering, and on thin substrates where backscattering is negligible. The data sets so obtained can be applied directly to proximity correction and represent the practical conditions met in pattern writing. Results are presented of the distributions obtained on silicon, gallium arsenide, and thin silicon nitride substrates at different beam energies. Significant deviations from the commonly assumed double Gaussian distributions are apparent. On GaAs substrates the backscatter distribution cannot adequately be described by a Gaussian function. Even on silicon a significant amount of exposure is found in the transition region between the two Gaussian terms. This deviation, which can be due to non‐Gaussian tails in the primary beam and to forward scattering in the resist, must be taken into account for accurate proximity correction in most submicron lithography, and certainly on the sub‐100 nm scale.

Journal ArticleDOI
TL;DR: In this paper, an extension of semiclassical Gaussian wave packet dynamics into complex phase space is proposed to eliminate the three main restrictions of the method, among them its inability to treat most classically forbidden processes.
Abstract: We propose an extension of the semiclassical Gaussian wave packet dynamics to eliminate the three main restrictions of this method. The first restriction is that the wave packet is forced to remain Gaussian. This is correct only for quadratic, linear, or constant potentials. The second restriction is that the method is, in general, not able to treat most classically forbidden processes. The third restriction is that the norm is conserved only for Gaussian wave packets. For a superposition of Gaussians this is no longer true. We can eliminate these restrictions by an extension of the method into complex phase space, keeping time real.

Journal ArticleDOI
TL;DR: A pseudospectral code for general polyatomic molecules has been developed using Gaussian basis functions in this paper, where the water molecule is studied using a 6-31G** basis set and the equilibrium geometry, total energy, first ionization potential, and vibrational force constants are obtained.
Abstract: A pseudospectral code for general polyatomic molecules has been developed using Gaussian basis functions. As an example, the water molecule is studied using a 6-31G** basis set. Quantitative agreement with conventional calculations is obtained for the equilibrium geometry, total energy, first ionization potential, and vibrational force constants. Timing results for a vectorized version of the code (run on a Cray X-MP) indicate that for large molecules, rate enhancements of Hartree-Fock self-consistent field calculations of order 10³ can be achieved.

Journal ArticleDOI
TL;DR: This work presents a technique for computing the convolution of an image with LoG (Laplacian-of-Gaussian) masks, with the paradoxical result that the computation time decreases when σ increases.
Abstract: We present a technique for computing the convolution of an image with LoG (Laplacian-of-Gaussian) masks. It is well known that a LoG of variance σ can be decomposed as a Gaussian mask and a LoG of variance σ1 < σ. We take advantage of the specific spectral characteristics of these filters in our computation: the LoG is a bandpass filter; we can therefore fold the spectrum of the image (after low pass filtering) without loss of information, which is equivalent to reducing the resolution. We present a complete evaluation of the parameters involved, together with a complexity analysis that leads to the paradoxical result that the computation time decreases when σ increases. We illustrate the method on two images.
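The decomposition the method rests on (Gaussian variances add under convolution, so a large-scale LoG equals Gaussian smoothing followed by a smaller-scale LoG) can be checked numerically in one dimension; the grid, scale values, and kernel support below are ad hoc.

```python
import numpy as np

def gauss(x, s):
    return np.exp(-x ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def log_kernel(x, s):
    # 1-D Laplacian (second derivative) of a Gaussian at scale s
    return (x ** 2 / s ** 4 - 1 / s ** 2) * gauss(x, s)

x = np.arange(-30.0, 31.0)                  # unit-spaced sample grid
s, s1 = 4.0, 2.0
s2 = np.sqrt(s ** 2 - s1 ** 2)              # smoothing scale: s^2 = s1^2 + s2^2

big = log_kernel(x, s)                      # direct LoG at scale s
small = np.convolve(gauss(x, s2), log_kernel(x, s1))  # Gaussian then smaller LoG
centre = small[len(x) // 2 : len(x) // 2 + len(x)]    # align the two supports
err = np.max(np.abs(big - centre)) / np.max(np.abs(big))
```

The relative discrepancy between the direct kernel and the two-stage version is at the level of sampling error, confirming the factorization the paper exploits.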

Journal ArticleDOI
TL;DR: In this article, the estimation of the parameters of a stationary random field on d-dimensional lattice by minimizing the classical Whittle approximation to the Gaussian log likelihood is considered.
Abstract: SUMMARY We consider the estimation of the parameters of a stationary random field on d-dimensional lattice by minimizing the classical Whittle approximation to the Gaussian log likelihood. If the usual biased sample covariances are used, the estimate is efficient only in one dimension. To remove this edge effect, we introduce data tapers and show that the resulting modified estimate is efficient also in two and three dimensions. This avoids the use of the unbiased sample covariances which are in general not positive-definite.

01 Jan 1987
TL;DR: In this paper, the expectation of the product of four scalar real Gaussian random variables is generalized to matrix-valued (real or complex) Gaussian Random Variables, and a simple derivation of the covariance matrix of instrumental variable estimates of parameters in multivariable regression models is presented.
Abstract: The formula for the expectation of the product of four scalar real Gaussian random variables is generalized to matrix-valued (real or complex) Gaussian random variables. As an application of the extended formula, a simple derivation is presented of the covariance matrix of instrumental variable estimates of parameters in multivariable regression models.
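The scalar formula being generalized is Isserlis' theorem: for zero-mean jointly Gaussian x1..x4 with covariances Cij, E[x1 x2 x3 x4] = C12 C34 + C13 C24 + C14 C23. A Monte-Carlo check, with an arbitrarily chosen covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# An arbitrary positive-definite covariance matrix (diagonally dominant)
C = np.array([[1.0, 0.4, 0.3, 0.2],
              [0.4, 1.0, 0.2, 0.1],
              [0.3, 0.2, 1.0, 0.3],
              [0.2, 0.1, 0.3, 1.0]])

X = rng.multivariate_normal(np.zeros(4), C, size=400000)

# Sample average of the fourth-order product
mc = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])

# Isserlis' theorem: sum over the three pairings
exact = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
```

The matrix-valued extension in the paper plays the same role when the four factors are random matrices rather than scalars.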

Journal ArticleDOI
TL;DR: A numerical algorithm for locating both minima and transition states designed for use in the ab initio program package GAUSSIAN 82 is presented and is effectively the numerical version of an analytical algorithm (OPT = EF) previously published in this journal.
Abstract: A numerical algorithm for locating both minima and transition states, designed for use in the ab initio program package GAUSSIAN 82, is presented. It is based on the RFO method of Simons and coworkers and is effectively the numerical version of an analytical algorithm (OPT = EF) previously published in this journal. The algorithm is designed to make maximum use of external second derivative information obtained from prior optimizations at lower levels of theory. It can be used with any wave function for which an energy can be calculated and is about two to three times faster than the default DFP algorithm (OPT = FP) supplied with GAUSSIAN 82.


Journal ArticleDOI
TL;DR: In this paper, the modular invariant partition function of the two-dimensional Ashkin-Teller model on a line of continuously varying criticality is obtained, and the second magnetic exponent is predicted to be 9/8.



Journal ArticleDOI
TL;DR: In this article, it was demonstrated that the use of a Gaussian charge distribution to represent the nucleus is advantageous in relativistic quantum chemical basis set expansion calculations, leading to a more rapid convergence of the ground state energy expectation value as a function of basis set size and to a large reduction in the exponents of the optimized basis sets.

Journal ArticleDOI
TL;DR: The results suggest that reliable communication is impossible at any positive code rate if the jammer is subject only to an average power constraint, and the asymptotic error probability suffered by optimal random codes in these cases is determined.
Abstract: The arbitrarily varying channel (AVC) can be interpreted as a model of a channel jammed by an intelligent and unpredictable adversary. We investigate the asymptotic reliability of optimal random block codes on Gaussian arbitrarily varying channels (GAVCs). A GAVC is a discrete-time memoryless Gaussian channel with input power constraint P_T and noise power N_e, which is further corrupted by an additive "jamming signal." The statistics of this signal are unknown and may be arbitrary, except that they are subject to a power constraint P_J. We distinguish between two types of power constraints: peak and average. For peak constraints on the input power and the jamming power we show that the GAVC has a random coding capacity. For the remaining cases, in which either the transmitter or the jammer or both are subject to average power constraints, no capacities exist and only λ-capacities are found. The asymptotic error probability suffered by optimal random codes in these cases is determined. Our results suggest that if the jammer is subject only to an average power constraint, reliable communication is impossible at any positive code rate.
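For the peak-constrained case, the random-coding capacity takes the form one would expect from treating the jamming power as additional Gaussian noise. The sketch below assumes that form; consult the paper for the precise statement and its conditions.

```python
import math

def gavc_capacity(P_T, N_e, P_J):
    # Assumed form for the peak-constrained random-coding capacity:
    # the usual 0.5*log2(1 + SNR), with jamming power added to the noise.
    return 0.5 * math.log2(1 + P_T / (N_e + P_J))

c = gavc_capacity(10.0, 1.0, 1.0)   # bits per channel use
```

The contrast in the paper is that under a merely average jamming-power constraint, no such positive capacity survives.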

Journal ArticleDOI
01 Jan 1987
TL;DR: In this article, the in-phase and quadrature components of the clutter echoes have been modelled to give a Weibull probability density function (PDF) of the amplitude and a uniform PDF of the phase.
Abstract: The paper deals with the problem of radar detection of a target echo embedded in Weibull clutter and white Gaussian noise (WGN). Relevant features of the paper, with respect to previous papers on the same subject, are the coherent nature of the Weibull process modelling the clutter and of the processing chain. In more detail, the in-phase and quadrature components of the clutter echoes have been modelled to give a Weibull probability density function (PDF) of the amplitude and a uniform PDF of the phase. Any shape of the correlation function among consecutive clutter samples is also allowed in the model. The so-called 'coherent Weibull clutter' (CWC) introduced in the paper represents a suitable generalisation of the conventional 'coherent Gaussian clutter' (CGC). The processing chain is also coherent, i.e. it operates on the in-phase and quadrature components of the signals. To derive a suitable processing scheme, we resort to the general theory of radar detection, which applies to any type of PDF and autocorrelation function (ACF) of the target and clutter. Briefly, it is found that the processing scheme is based on two nonlinear estimators of the clutter samples in the two alternative hypotheses (i.e. H0 and H1), the fully fledged architecture being discussed in the paper. The detection processor turns out to be a suitable generalisation of that concerning the CGC case. A new family of processing schemes is derived in accordance with the statistical model assumed for the useful target. The following cases are covered: target known a priori; Swerling 0, 1 and 2 models; and partially fluctuating target. Attention is also paid to the interesting case of a target modelled as a coherent Weibull process. The detection of such a target against white Gaussian noise is worked out. The detection performance of the a...
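The clutter model's first ingredient (Weibull-distributed amplitude with uniform phase, before any correlation shaping is applied) is easy to simulate; the shape and scale values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

n, shape, scale = 100_000, 1.2, 1.0
amp = scale * rng.weibull(shape, n)        # Weibull amplitude
phase = rng.uniform(0.0, 2.0 * np.pi, n)   # uniform phase
i_comp = amp * np.cos(phase)               # in-phase component
q_comp = amp * np.sin(phase)               # quadrature component
```

Imposing a chosen autocorrelation on the resulting complex samples (which the paper's model allows for) would be the next step toward the full coherent Weibull clutter.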

Journal ArticleDOI
TL;DR: In this paper, the authors consider efficient estimation of the slope in the errors-in-variables model with normal errors, when either the ratio of error variances is known and the distribution of the independent variable is arbitrary and unknown, or the distribution of the independent variable is not Gaussian or degenerate.
Abstract: We consider efficient estimation of the slope in the errors-in-variables model with normal errors when either the ratio of error variances is known and the distribution of the independent variable is arbitrary and unknown, or the distribution of the independent variable is not Gaussian or degenerate. We calculate information bounds and exhibit estimates achieving these bounds using an initial minimum distance estimate and suitable estimates of the efficient score function.

Journal ArticleDOI
TL;DR: In this paper, the authors used uniform asymptotic expansions to explain the behaviour of the Gaussian beam method and showed that the beam solution for head waves and in edge-diffracted shadow zones are both correct, but with governing parameters that are explicitly e-dependent.
Abstract: Summary. Recently, a method using superposition of Gaussian beams has been proposed for the solution of high-frequency wave problems. The method is a potentially useful approach when the more usual techniques of ray theory fail: it gives answers which are finite at caustics, computes a nonzero field in shadow zones, and exhibits critical angle phenomena, including head waves. Subsequent tests by several authors have been encouraging, although some reported solutions show an unexplained dependence on the 'free' complex parameter e which specifies the initial widths and phases of the Gaussian beams. We use methods of uniform asymptotic expansions to explain the behaviour of the Gaussian beam method. We show how it computes correctly the entire caustic boundary layer of a caustic of arbitrary complexity, and computes correctly in a region of critical reflection. However, the beam solutions for head waves and in edge-diffracted shadow zones are shown to have the correct asymptotic form, but with governing parameters that are explicitly e-dependent. We also explain the mechanism by which the beam solution degrades when there are strong lateral inhomogeneities. We compare numerically our predictions for some representative, model problems, with exact solutions obtained by other means.

Journal ArticleDOI
TL;DR: In this article, an asymmetrical Gaussian/Lorentzian mixed function, together with automatic removal of both background and X-ray satellites, was used for curve synthesis of photoelectron spectra.

Journal ArticleDOI
TL;DR: The optimal estimator and 2-D fixed-lag smoother for this DSG model extending earlier work of Ackerson and Fu are presented and suboptimal estimators are investigated using both a tree and a decision-directed method.
Abstract: The two-dimensional (2-D) doubly stochastic Gaussian (DSG) model was introduced by one of the authors to provide a complete model for spatial filters which adapt to the local structure in an image signal. Here we present the optimal estimator and 2-D fixed-lag smoother for this DSG model extending earlier work of Ackerson and Fu. As the optimal estimator has an exponentially growing state space, we investigate suboptimal estimators using both a tree and a decision-directed method. Experimental results are presented.

Journal ArticleDOI
TL;DR: In this paper, a class of procedures for testing the composite hypothesis that a stationary stochastic process is Gaussian is proposed, which relies on quadratic forms in deviations of certain sample statistics from their population counterparts, minimized with respect to the unknown parameters.
Abstract: A class of procedures is proposed for testing the composite hypothesis that a stationary stochastic process is Gaussian. Requiring very limited prior knowledge about the structure of the process, the tests rely on quadratic forms in deviations of certain sample statistics from their population counterparts, minimized with respect to the unknown parameters. A specific test is developed, which employs differences between components of the sample and Gaussian characteristic functions, evaluated at certain points on the real line. By demonstrating that, under $H_0$, the normalized empirical characteristic function converges weakly to a continuous Gaussian process, it is shown that the test remains valid when arguments of the characteristic functions are in certain ways data dependent.
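The basic ingredient of the proposed test, the gap between the empirical characteristic function and the fitted Gaussian one at a few points on the real line, can be sketched as follows. The evaluation points and the i.i.d. samples are illustrative only; the paper treats dependent stationary processes and minimizes a quadratic form over the unknown parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def ecf_gap(x, ts):
    # Squared distance between the empirical CF and the Gaussian CF
    # with moment-fitted parameters, summed over the points ts.
    mu, var = x.mean(), x.var()
    ecf = np.array([np.mean(np.exp(1j * t * x)) for t in ts])
    gcf = np.exp(1j * ts * mu - ts ** 2 * var / 2)
    return np.sum(np.abs(ecf - gcf) ** 2)

ts = np.array([0.5, 1.0, 1.5])
gap_gauss = ecf_gap(rng.standard_normal(5000), ts)  # Gaussian sample
gap_expo = ecf_gap(rng.exponential(size=5000), ts)  # non-Gaussian sample
```

Under the Gaussian hypothesis the gap shrinks at the rate of sampling error, while a non-Gaussian sample leaves a gap of fixed size, which is what the test statistic detects.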

Journal ArticleDOI
TL;DR: In this article, a posterior probability density function of model parameters for given observed data and prior data is defined, and a simple algorithm for iterative search to find the maximum likelihood estimates is proposed.