
Showing papers on "Gaussian published in 1986"


Journal ArticleDOI
TL;DR: In this paper, a set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed.
Abstract: A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is developed, as are the shapes of the profiles about maxima.
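The one-dimensional analogue of the peak-density calculation can be checked numerically: for a stationary Gaussian process, Rice's formula gives the mean rate of maxima as sqrt(Var f'' / Var f') / 2π. A minimal sketch, where the field construction, kernel width, and sample size are illustrative choices rather than anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 200_000, 10.0                       # samples, smoothing width (illustrative)
x = np.arange(-50, 51)
kern = np.exp(-x**2 / (2 * s**2))
f = np.convolve(rng.standard_normal(n), kern, mode="same")  # 1-D Gaussian random field

# direct count of local maxima
n_max = int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

# Rice's formula for the expected number of maxima, with the spectral
# moments estimated from sample derivatives of the field
fp = np.gradient(f)
fpp = np.gradient(fp)
n_rice = np.sqrt(fpp.var() / fp.var()) / (2 * np.pi) * n
```

The two counts should agree to within a few percent; the three-dimensional peak formula in the paper generalizes exactly this kind of calculation.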

3,098 citations


Journal ArticleDOI
TL;DR: The results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.
Abstract: Consider a Gaussian multiple-access channel shared by K users who transmit asynchronously independent data streams by modulating a set of assigned signal waveforms. The uncoded probability of error achievable by optimum multiuser detectors is investigated. It is shown that the K-user maximum-likelihood sequence detector consists of a bank of single-user matched filters followed by a Viterbi algorithm whose complexity per binary decision is O(2^{K}). The upper bound analysis of this detector follows an approach based on the decomposition of error sequences. The issues of convergence and tightness of the bounds are examined, and it is shown that the minimum multiuser error probability is equivalent in the low-noise region to that of a single-user system with reduced power. These results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.
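The exponential complexity can be made concrete with a toy synchronous, one-shot version of the problem. The signature waveforms, user count, and noise level below are invented for illustration; the paper's detector handles asynchronous sequences with a Viterbi recursion instead of this brute-force search:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
K, L = 3, 8                                    # users, samples per signature (illustrative)
S = rng.standard_normal((L, K)) / np.sqrt(L)   # assigned signature waveforms
b_true = rng.choice([-1, 1], size=K)           # transmitted bits
y = S @ b_true + 0.05 * rng.standard_normal(L)

# jointly optimum (ML) detection: exhaustive search over all 2^K bit vectors
cands = np.array(list(product([-1, 1], repeat=K)))       # shape (2^K, K)
errors = ((y[:, None] - S @ cands.T) ** 2).sum(axis=0)   # one residual per hypothesis
b_ml = cands[errors.argmin()]
```

A bank of matched filters followed by the Viterbi algorithm reaches the same decisions with O(2^K) work per bit rather than per block.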

2,300 citations


Journal ArticleDOI
TL;DR: It is shown that the Gaussian probability density function is the only kernel in a broad class for which first-order maxima and minima, respectively, increase and decrease when the bandwidth of the filter is increased.
Abstract: Scale-space filtering constructs hierarchic symbolic signal descriptions by transforming the signal into a continuum of versions of the original signal convolved with a kernel containing a scale or bandwidth parameter. It is shown that the Gaussian probability density function is the only kernel in a broad class for which first-order maxima and minima, respectively, increase and decrease when the bandwidth of the filter is increased. The consequences of this result are explored when the signal, or its image under a linear differential operator, is analyzed in terms of zero-crossing contours of the transform in scale-space.
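The causality property can be observed directly: smoothing a signal with Gaussians of increasing bandwidth should never create new extrema. A small numerical check, with an arbitrary test signal and scale ladder (discrete sampling can in principle produce tiny violations that the continuous theory excludes):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
sig = rng.standard_normal(5000)

def n_extrema(f):
    """Count strict interior maxima and minima of a 1-D signal."""
    inner = f[1:-1]
    n_max = np.sum((inner > f[:-2]) & (inner > f[2:]))
    n_min = np.sum((inner < f[:-2]) & (inner < f[2:]))
    return int(n_max + n_min)

# extrema count at a ladder of increasing Gaussian bandwidths
counts = [n_extrema(gaussian_filter1d(sig, s, mode="nearest"))
          for s in (1, 3, 9, 27)]
```

The counts decrease monotonically; substituting a non-Gaussian kernel (e.g. a box filter) can break this property, which is the uniqueness result of the paper.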

852 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of finding a set of functions which is flexible enough to produce 'good' results over a wide range of molecular geometries and is still small enough to leave the problem computationally tractible and economically within reason.
Abstract: The quantum chemistry literature contains references to a plethora of basis sets, currently numbering almost 100. While professional quantum chemists might become familiar with several dozen of these in a lifetime of calculations, the occasional user of ab initio programs probably wishes to ignore all but the two or three sets which, through habitual use, have become personal favorites. Unfortunately, this attitude has its drawbacks. Intelligent reading of the literature requires at least a cursory knowledge of the limitations of other basis sets. Information concerning the likely accuracy of a specific basis for a particular property is essential in order to judge the adequacy of the computational method and, hence, the soundness of the results. Occasionally, for reasons of economy or computational feasibility, a basis set is selected for which the computed results are nearly without significance. In light of the large number of publications reporting new basis sets or detailing the performance of existing sets, the task of remaining informed has become very difficult for experts and nonexperts alike. The existence of such a vast multitude of basis sets is attributable, at least in part, to the difficulty of finding a single set of functions which is flexible enough to produce 'good' results over a wide range of molecular geometries and is still small enough to leave the problem computationally tractable and economically within reason. The driving force behind much of the research effort in small basis sets is the fact that the computer time required for some parts of an ab initio calculation is very strongly dependent on the number of basis functions. For example, the integral evaluation goes as the fourth power of the number of Gaussian primitives. Fortunately this is the only step which explicitly depends on the number of primitives. All subsequent steps depend on the number of contracted functions formed from the primitives.
The concept of primitive and contracted functions will be discussed later. Consider a collection of K identical atoms, each with n doubly occupied orbitals and N unoccupied (or virtual) orbitals. The SCF step increases as (n + N)^4K^4, while the full transformation of the integrals over the original basis functions to integrals over molecular orbitals goes as (n + N)^5K^5. Methods to account for correlation effects vary greatly. Only a few of the popular ones will be considered here. Second-order Moller-Plesset (MP2) perturbation theory goes as n^2N^2K^4 but still requires an nN^4K^5 integral transformation. MP3 goes as n^2N^4K^6, while a Hartree-Fock singles and doubles CI will have n^2N^2K^4 configurations, (n^2N^2K^4)^2 hamiltonian matrix elements of which n^2N^4K^6 will be nonzero. Pople and co-workers have proposed
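The scaling argument above can be made concrete with a small cost model. This is only a sketch with formal operation counts, not timings, and the step names are illustrative:

```python
def formal_costs(primitives, contracted):
    """Formal operation-count scalings for one ab initio calculation (illustrative)."""
    return {
        "integral_evaluation": primitives ** 4,    # the only step tied to primitives
        "scf_step": contracted ** 4,               # (n + N)^4 K^4 with M = (n + N)K contracted functions
        "integral_transformation": contracted ** 5,
    }

# contracting 3 primitives into 1 basis function leaves the integral step
# unchanged but makes every subsequent step far cheaper
uncontracted = formal_costs(primitives=30, contracted=30)
contracted_set = formal_costs(primitives=30, contracted=10)
```

Here the SCF-level work drops by a factor of 3^4 = 81 while the primitive-integral work is identical, which is exactly why contraction pays off.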

789 citations


Journal ArticleDOI
TL;DR: The present scheme with a significant saving of computer time is found superior to other currently available methods for molecular integral computations with respect to electron repulsion integrals and their derivatives.
Abstract: Recurrence expressions are derived for various types of molecular integrals over Cartesian Gaussian functions by the use of the recurrence formula for three‐center overlap integrals. A number of characteristics inherent in the recursive formalism allow an efficient scheme to be developed for molecular integral computations. With respect to electron repulsion integrals and their derivatives, the present scheme with a significant saving of computer time is found superior to other currently available methods. A long innermost loop incorporated in the present scheme facilitates a fast computation on a vector processing computer.

609 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the finite-size effects at a temperature-driven first-order transition by analyzing various moments of the energy distribution, together with the rounding of the singularities and the shifts in the location of the specific-heat maximum.
Abstract: We study the finite-size effects at a temperature-driven first-order transition by analyzing various moments of the energy distribution. The distribution function for the energy is approximated by the superposition of two weighted Gaussian functions yielding quantitative estimates for various quantities and scaling form for the specific heat. The rounding of the singularities and the shifts in the location of the specific-heat maximum are analyzed and the characteristic features of a first-order transition are identified. The predictions are tested on the ten-state Potts model in two dimensions by carrying out extensive Monte Carlo calculations. The results are found to be in good agreement with theory. Comparison is made with the second-order transitions in the two- and three-state Potts models.
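The double-Gaussian ansatz has a simple quantitative consequence: the energy variance, and hence the specific heat C ∝ Var(E), picks up a term w(1-w)(E1-E2)^2 from the separation of the two peaks, which is what produces the specific-heat maximum at the transition. A quick sanity check with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
w, E1, E2, s1, s2 = 0.3, -1.0, 1.0, 0.05, 0.08   # weight, peak positions, widths (illustrative)
n = 1_000_000

# sample the weighted two-Gaussian superposition for the energy
pick = rng.random(n) < w
E = np.where(pick, rng.normal(E1, s1, n), rng.normal(E2, s2, n))

# analytic variance: intra-peak widths plus the inter-peak separation term
var_analytic = w * s1**2 + (1 - w) * s2**2 + w * (1 - w) * (E1 - E2)**2
```

The separation term w(1-w)(E1-E2)^2 dominates here; in the finite-size scaling analysis it is the analogue of the latent-heat contribution to the specific-heat peak.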

529 citations


Journal ArticleDOI
TL;DR: A system that takes a gray level image as input, locates edges with subpixel accuracy, and links them into lines; it is noted that the zero-crossings obtained from the full resolution image with a space constant σ for the Gaussian, and those obtained from the reduced-resolution image with a space constant σ/n, are very similar, but the processing times are very different.
Abstract: We present a system that takes a gray level image as input, locates edges with subpixel accuracy, and links them into lines. Edges are detected by finding zero-crossings in the convolution of the image with Laplacian-of-Gaussian (LoG) masks. The implementation differs markedly from M.I.T.'s as we decompose our masks exactly into a sum of two separable filters instead of the usual approximation by a difference of two Gaussians (DOG). Subpixel accuracy is obtained through the use of the facet model [1]. We also note that the zero-crossings obtained from the full resolution image using a space constant σ for the Gaussian, and those obtained from the 1/n resolution image with 1/n pixel accuracy and a space constant of σ/n for the Gaussian, are very similar, but the processing times are very different. Finally, these edges are grouped into lines using the technique described in [2].
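The exact separable decomposition mentioned above follows from ∇²(g(x)g(y)) = g''(x)g(y) + g(x)g''(y), so the LoG convolution can be done with four 1-D passes instead of a full 2-D mask or a DoG approximation. A minimal sketch on a synthetic step edge (image size, σ, and kernel radius are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import convolve1d

def log_response(img, sigma=2.0):
    """LoG filtering via its exact separable decomposition: g''_x*g_y + g_x*g''_y."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    gpp = (x**2 / sigma**4 - 1.0 / sigma**2) * g   # second derivative of the Gaussian
    return (convolve1d(convolve1d(img, gpp, axis=1), g, axis=0)
            + convolve1d(convolve1d(img, g, axis=1), gpp, axis=0))

img = np.zeros((64, 64))
img[:, 32:] = 1.0                                  # vertical step edge
out = log_response(img)
# zero crossings along a middle row
zc_cols = np.where(np.sign(out[32, :-1]) != np.sign(out[32, 1:]))[0]
```

A zero crossing lands at the step (between columns 31 and 32); the system described in the abstract would then refine its position to subpixel accuracy with the facet model.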

502 citations


Journal ArticleDOI
TL;DR: In this article, split-valence 3-21G basis sets for third- and fourth-row, main-group elements have been developed for the calculation of equilibrium geometries, normal-mode vibrational frequencies, reaction energies, and electric dipole moments involving a variety of normal and hypervalent compounds.
Abstract: Two new series of efficient basis sets for third- and fourth-row, main-group elements have been developed. Split-valence 3-21G basis sets have been formulated from the minimal expansions by Huzinaga, in which each atomic orbital has been represented by a sum of three Gaussians. The original expansions for s- and p-type orbitals (except those for 1s) have been replaced by new combinations in which the two sets of orbitals (of the same n quantum number) share Gaussian exponents. The Huzinaga expansions for 1s, 3d and 4d (fourth-row elements only) have been employed without further alteration. The valence atomic functions (4s, 4p for third-row elements; 5s, 5p for fourth-row elements) have been split into two- and one-Gaussian parts. Supplemented 3-21G(*) representations have been formed from the 3-21G basis sets by the addition of a single set of d-type Gaussian functions. The performance of 3-21G and 3-21G(*) basis sets is examined with regard to the calculation of equilibrium geometries, normal-mode vibrational frequencies, reaction energies, and electric dipole moments involving a variety of normal and hypervalent compounds containing third- and fourth-row, main-group elements. The supplementary functions incorporated into the 3-21G(*) basis sets are generally found to be important, especially for the proper description of equilibrium bond lengths and electric dipole moments. 3-21G(*) representations are recommended for general use in lieu of the unsupplemented 3-21G basis sets.

443 citations


Journal ArticleDOI
TL;DR: In this article, a discrete variable representation (DVR) for the angular, bend coordinate is combined with the distributed (real) Gaussian basis for the expansion of other, radial coordinates.
Abstract: A novel, efficient, and accurate quantum method for the calculation of highly excited vibrational levels of triatomic molecules is presented. The method is particularly well suited for applications to ‘‘floppy’’ molecules, having large amplitude motion, on potential surfaces which may have more than one local minimum. The discrete variable representation (DVR) for the angular, bend coordinate is combined with the distributed (real) Gaussian basis (DGB) for the expansion of other, radial coordinates. The DGB is tailored to the potential, covering only those regions where V(r)

380 citations


Journal ArticleDOI
TL;DR: The distributed Gaussian bases are defined and used to calculate eigenvalues for one and multidimensional potentials and are shown to be accurate, flexible, and efficient.
Abstract: Distributed Gaussian bases (DGB) are defined and used to calculate eigenvalues for one and multidimensional potentials. Comparisons are made with calculations using other bases. The DGB is shown to be accurate, flexible, and efficient. In addition, the localized nature of the basis requires only very low order numerical quadrature for the evaluation of potential matrix elements.
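As a sketch of how such a basis works, the example below solves the harmonic oscillator H = -1/2 d²/dx² + x²/2 (ħ = m = ω = 1) in a DGB of evenly spaced Gaussians exp(-α(x-c_i)²), for which the overlap, kinetic, and potential matrix elements are all analytic. The grid layout and width parameter are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.linalg import eigh

centers = np.linspace(-6.0, 6.0, 13)     # evenly distributed Gaussian centers (illustrative)
alpha = 0.5                              # common width parameter
d = centers[:, None] - centers[None, :]  # center separations
m = 0.5 * (centers[:, None] + centers[None, :])  # midpoints

S = np.sqrt(np.pi / (2 * alpha)) * np.exp(-0.5 * alpha * d**2)   # overlap matrix
T = 0.5 * alpha * (1.0 - alpha * d**2) * S                       # kinetic energy
V = 0.5 * (m**2 + 1.0 / (4 * alpha)) * S                         # potential x^2/2

# generalized eigenproblem HC = ESC for the nonorthogonal basis
E = eigh(T + V, S, eigvals_only=True)[:4]
```

The lowest eigenvalues reproduce the exact ladder 1/2, 3/2, 5/2, ... despite the small basis, illustrating the accuracy-per-function the abstract claims.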

375 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined single-mode and two-mode Gaussian pure states (GPS), quantum mechanical pure states with Gaussian wave functions, and derived many of the important properties of GPS and of the Hamiltonians and unitary operators associated with them.

Journal ArticleDOI
TL;DR: In this paper, the authors propose procedures for detecting the number of signals in the presence of Gaussian white noise under an additive model, which is related to the problem of finding the multiplicity of the smallest eigenvalue of the covariance matrix of the observation vector.

Journal ArticleDOI
TL;DR: Two cyclic algorithms for the covariance selection problem are presented. The first generalizes a published algorithm for covariance selection, while the second is analogous to the iterative proportional scaling of contingency tables.
Abstract: Gaussian Markov distributions are characterised by zeros in the inverse of their covariance matrix and we describe the conditional independencies which follow from a given pattern of zeros. Describing Gaussian distributions with given marginals and solving the likelihood equations with covariance selection models both lead to a problem for which we present two cyclic algorithms. The first generalises a published algorithm for covariance selection whilst the second is analogous to the iterative proportional scaling of contingency tables. A convergence proof is given for these algorithms and this uses the notion of I-divergence.
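The iterative-proportional-scaling idea can be sketched for a three-variable chain model (a structural zero between variables 0 and 2 in the precision matrix; the sample covariance is invented). Each sweep resets the model's marginal covariance on a clique to the sample value while the untouched precision entries, including the structural zero, are left alone:

```python
import numpy as np

S = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.4],
              [0.2, 0.4, 1.0]])         # sample covariance (illustrative)
cliques = [[0, 1], [1, 2]]              # model graph 0-1-2, so K[0, 2] = 0

K = np.eye(3)                           # initial precision matrix
for _ in range(50):
    for c in cliques:
        ix = np.ix_(c, c)
        Sigma = np.linalg.inv(K)
        # match the clique marginal to the sample covariance,
        # keeping the conditional of the remaining variables fixed
        K[ix] += np.linalg.inv(S[ix]) - np.linalg.inv(Sigma[ix])

Sigma = np.linalg.inv(K)                # fitted covariance
```

Only clique submatrices of K are ever updated, so the zero pattern is preserved by construction, while the fitted covariance matches the sample covariance on every clique.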

Journal ArticleDOI
TL;DR: In this paper, the propagation of an initially highly excited localized wave packet in an anharmonic oscillator potential is studied within the frozen Gaussian approximation, which involves the expansion of the initial wave function in terms of an overcomplete Gaussian basis set.
Abstract: The propagation of an initially highly excited localized wave packet in an anharmonic oscillator potential is studied within the frozen Gaussian approximation. Comparison is made to quantum mechanical basis set calculations. The frozen Gaussian approximation involves the expansion of the initial wave function in terms of an overcomplete Gaussian basis set. The wave function evolution is evaluated by allowing each Gaussian to travel along a classical trajectory with its shape held rigid. A Monte Carlo algorithm is employed in the selection of the initial Gaussian basis functions. The frozen Gaussian results are very good for times on the order of a few vibrational periods of the oscillator and remain qualitatively correct for the entire length of the calculations which is 12 vibrational periods. The dependence of the calculations on the width of the Gaussian basis functions is investigated and the effect of a simplifying approximation for the prefactor of the Gaussians is tested.

Journal ArticleDOI
TL;DR: In this paper, the statistical properties of dynamic speckles produced by a moving diffuse object were reviewed by providing the space-time correlation function and the power spectrum of speckle-intensity fluctuation for five combined cases of both the optical configuration and the illumination light.
Abstract: The statistical properties of dynamic speckles produced by a moving diffuse object were reviewed by providing the space–time correlation function and the power spectrum of speckle-intensity fluctuation for five combined cases of both the optical configuration and the illumination light. In the optical configuration, three kinds of geometry (free-space, single-lens, and double-lens) were taken, and three kinds of illumination light (a Gaussian beam, a plane-wave beam, and a Gaussian Schell-model beam) were used. Consequently, it was shown that the cross-correlation function and the power spectrum are both Gaussian under some assumptions. From the dynamic properties, two types of speckle motion, boiling and translation, were also evaluated for various conditions of object motion, optical configuration, and illumination light.

Journal ArticleDOI
I. Iscoe1
TL;DR: In this paper, a weighted occupation time is defined for measure-valued processes and a representation for it is obtained for a class of measure-valued branching random motions on R^d. Considered as a process in its own right, the first and second order asymptotics are found as time t→∞.
Abstract: A weighted occupation time is defined for measure-valued processes and a representation for it is obtained for a class of measure-valued branching random motions on R^d. Considered as a process in its own right, the first and second order asymptotics are found as time t→∞. Specifically the finiteness of the total weighted occupation time is determined as a function of the dimension d, and when infinite, a central limit type renormalization is considered, yielding Gaussian or asymmetric stable generalized random fields in the limit. In one Gaussian case the results are contrasted in high versus low dimensions.

Journal ArticleDOI
TL;DR: This paper describes an approach to implementing a Gaussian Pyramid which requires approximately two addition operations per pixel, per level, per dimension, and examines tradeoffs in choosing an algorithm for Gaussian filtering.
Abstract: Gaussian filtering is an important tool in image processing and computer vision. In this paper we discuss the background of Gaussian filtering and look at some methods for implementing it. Consideration of the central limit theorem suggests using a cascade of "simple" filters as a means of computing Gaussian filters. Among "simple" filters, uniform-coefficient finite-impulse-response digital filters are especially economical to implement. The idea of cascaded uniform filters has been around for a while [13], [16]. We show that this method is economical to implement, has good filtering characteristics, and is appropriate for hardware implementation. We point out an equivalence to one of Burt's methods [1], [3] under certain circumstances. As an extension, we describe an approach to implementing a Gaussian Pyramid which requires approximately two addition operations per pixel, per level, per dimension. We examine tradeoffs in choosing an algorithm for Gaussian filtering, and finally discuss an implementation.
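The central-limit argument is easy to verify: convolving a uniform (box) kernel with itself a few times already gives a close Gaussian approximation, with variance n(w²-1)/12 after n passes of width w. A quick check with arbitrary box width and pass count:

```python
import numpy as np

w, n = 5, 4                             # box width, number of cascaded passes (illustrative)
box = np.ones(w) / w
k = box
for _ in range(n - 1):
    k = np.convolve(k, box)             # cascade of uniform filters

sigma2 = n * (w**2 - 1) / 12.0          # variances add under convolution
x = np.arange(len(k)) - (len(k) - 1) / 2
g = np.exp(-x**2 / (2 * sigma2))
g /= g.sum()                            # sampled Gaussian with matched variance

max_err = np.abs(k - g).max()           # pointwise gap between cascade and Gaussian
```

Four box passes cost only additions and a final scale in a hardware pipeline, which is the economy the paper exploits.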

Journal ArticleDOI
TL;DR: In this article, it was shown that the Laguerre-Gaussian and Hermite Gaussian beams with complex arguments arise naturally in correction terms of a perturbation expansion whose leading term is the fundamental paraxial Gaussian beam.
Abstract: Hermite–Gaussian and Laguerre–Gaussian beams with complex arguments of the type introduced by Siegman [ J. Opt. Soc. Am.63, 1093 ( 1973)] are shown to arise naturally in correction terms of a perturbation expansion whose leading term is the fundamental paraxial Gaussian beam. Additionally, they can all be expressed as derivatives of the fundamental Gaussian beam and as paraxial limits of multipole complex-source point solutions of the reduced-wave equation.

Journal ArticleDOI
TL;DR: In this article, a class of models is developed that generalizes the work of Smith (1979) and Bathes (1975) to censored data.
Abstract: A class of models is developed that generalizes the work of Smith (1979) and Bathes (1975) for censored data.

Journal ArticleDOI
TL;DR: In this article, it is shown that a number of contradictory results seemingly present in large-scale data can in principle recover full coherence once the requirement that the underlying matter distribution be Gaussian is dropped.
Abstract: The possibility that, in the framework of a biased theory of galaxy clustering, the underlying matter distribution be non-Gaussian itself, because of the very mechanisms generating its present status, is explored. It is shown that a number of contradictory results, seemingly present in large-scale data, in principle can recover full coherence, once the requirement that the underlying matter distribution be Gaussian is dropped. For example, in the present framework, the requirement that the two-point correlation functions vanish at the same scale (for different kinds of objects) is overcome. A general formula, showing the effects of a non-Gaussian background on the expression of three-point correlations in terms of two-point correlations, is given. 28 references.

Journal ArticleDOI
TL;DR: In this article, the authors derived an expression which relates the mean curvature of isodensity surfaces to the power spectrum of density fluctuations in the linear regime of Gaussian fluctuations with random phases.
Abstract: It has been suggested recently that the topology of a distribution of galaxies can be characterized by the mean Gaussian curvature per unit volume of surfaces of constant density. An expression is derived which relates the mean curvature of isodensity surfaces to the power spectrum of density fluctuations in the linear regime of Gaussian fluctuations with random phases. The result may be compared to real galaxy catalogs if the galaxy density is smoothed over scales larger than a correlation length. The implications of the result for understanding the large-scale structure of the universe are discussed.

Journal ArticleDOI
TL;DR: The transition to chaos in "the hydrogen atom in a magnetic field" is numerically studied and shown to lead to a well-defined signature in the energy-level fluctuations.
Abstract: The transition to chaos in "the hydrogen atom in a magnetic field" is numerically studied and shown to lead to well-defined signature on the energy-level fluctuations. Upon an increase in the energy, the calculated statistics evolve from Poisson to Gaussian orthogonal ensemble according to the regular or chaotic character of the classical motion. Several methods are employed to test the generic nature of these distributions.
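The two limiting statistics can be sampled directly: eigenvalue spacings of a random GOE matrix show level repulsion (Wigner-like, P(s→0)→0), while independent Poisson levels do not. A small illustration, with arbitrary matrix size and thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 600
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                       # GOE-distributed real symmetric matrix
ev = np.linalg.eigvalsh(H)

mid = ev[N // 4 : 3 * N // 4]           # central spectrum: roughly uniform level density
s_goe = np.diff(mid); s_goe /= s_goe.mean()
s_poi = np.diff(np.sort(rng.uniform(0, 1, mid.size))); s_poi /= s_poi.mean()

frac_goe = (s_goe < 0.1).mean()         # Wigner surmise predicts ~0.008
frac_poi = (s_poi < 0.1).mean()         # Poisson predicts ~0.095
```

The deficit of small spacings in the GOE sample is the level repulsion that signals the chaotic regime in the abstract's analysis.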

Journal ArticleDOI
TL;DR: In this paper, the analytical properties and computational implications of the Gabor representation are investigated within the context of aperture theory, where the radiation field in the pertinent half-space is represented by a discrete set of linearly shifted and spatially rotated elementary beams.
Abstract: The analytical properties and computational implications of the Gabor representation are investigated within the context of aperture theory. The radiation field in the pertinent half-space is represented by a discrete set of linearly shifted and spatially rotated elementary beams that fall into two distinct categories, the propagating (characterized by real rotation angles) and evanescent beams. The representation may be considered a generalization in the sense that both the classical plane wave and Kirchhoff’s spatial-convolution forms are directly recoverable as limiting cases. The choice of a specific window function [w(x)] and the corresponding characteristic width (L) are, expectedly, cardinal decisions affecting the analytical complexity and the convergence rate of the Gabor series. The significant spectral compression achievable by an appropriate selection of w(x) and L is demonstrated numerically, and simple selection guidelines are derived. Two specific window functions possessing opposite characteristics are considered, the uniformly pulsed and the Gaussian distributions. These are studied analytically and numerically, highlighting several outstanding advantages of the latter. Consequently, the primary attention is focused on Gaussian elementary beams in their paraxial and their far-field estimates. Although the main effort is devoted to aperture analysis, demonstrating the advantages and limitations of the proposed approach, reference is also made to its potential when applied to aperture-synthesis and spatial-filtering problems. The quantitative effects of basic filtering in the discrete Gabor space are depicted.

Journal ArticleDOI
TL;DR: In this paper, the authors used bispectral analysis to detect and identify a nonlinear stochastic signal generating mechanism from data containing its output, and applied it to investigate whether the observed data record is consistent with the hypothesis that the underlying process has Gaussian distribution, and whether it contains evidence of nonlinearity in the underlying mechanisms generating the observed noise.
Abstract: Bispectral analysis is a statistical tool for detecting and identifying a nonlinear stochastic signal generating mechanism from data containing its output. Bispectral analysis can also be employed to investigate whether the observed data record is consistent with the hypothesis that the underlying stochastic process has Gaussian distribution. From estimates of bispectra of several records of ambient acoustic ocean noise, a newly developed statistical method for testing whether the noise had a Gaussian distribution, and whether it contains evidence of nonlinearity in the underlying mechanisms generating the observed noise is applied. Seven acoustic records from three environments are examined: the Atlantic south of Bermuda, the northeast Pacific, and the Indian Ocean. The collection of time series represents both ambient acoustic noise (no local shipping) and noise dominated by local shipping. The three ambient records appeared to be both linear and Gaussian processes when examined over a period on the ord...

Journal ArticleDOI
TL;DR: In this paper, a direct nonlinear inversion scheme has been constructed which can use any velocity model for which travel times can be calculated from an arbitrary source position to the receivers in the seismic network.
Abstract: Summary. The determination of earthquake locations requires a good velocity model for the region of interest, appropriate statistics for the residuals encountered and an efficient, stable inversion algorithm. A direct nonlinear inversion scheme has been constructed which can use any velocity model for which travel times can be calculated from an arbitrary source position to the receivers in the seismic network. The procedure is based on the minimization of a misfit function depending on the residuals between observed and calculated arrival times. Different statistics, e.g. Gaussian and Jeffreys distributions, can be accommodated by the choice of misfit function. The algorithm is based on a directed grid search which narrows down the range of possible origin times whilst carrying out a spatial search in the neighbourhood of the current minimum of the misfit function. No numerical differentiation of travel times is required, and convergence is rapid, stable and tolerant of occasional large errors in reading observed travel times. A useful product of the method is that the misfit function values are available in the neighbourhood of the minimum, so that a fully nonlinear treatment of the statistical confidence regions for a particular location can be made. A prerequisite for the use of the algorithm is the delineation of bounds on the four hypocentral parameters. Epicentral bounds are constructed using a variant of the ‘arrival order’ technique, and rapid scanning in depth and origin time over this region yields useful bounds on these parameters. The new nonlinear algorithm is illustrated by application to the SE Australian seismic network, for an event in the most active seismic zone. Two different velocity models are used with both Gaussian and Jeffreys statistics and good convergence for the algorithm is achieved despite significant nonlinearity in the behaviour. 
The Jeffreys statistics are more tolerant of large residuals and are to be preferred when the requisite velocity model is not too well known.
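The grid-search idea is easy to sketch in two dimensions with a known origin time and straight-ray travel times; the station geometry, velocity, and the 3 s picking error below are all invented, and an L1-type misfit stands in here for the heavy-tailed Jeffreys statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
sx, sy = rng.uniform(-50, 50, 8), rng.uniform(-50, 50, 8)  # station coordinates (km)
src, v = np.array([5.0, -12.0]), 6.0                       # true epicentre, velocity (km/s)
t_obs = np.hypot(sx - src[0], sy - src[1]) / v
t_obs[0] += 3.0                                            # one gross arrival-time error

xs = np.linspace(-50, 50, 201)                             # 0.5 km search grid
X, Y = np.meshgrid(xs, xs)
res = np.hypot(X - sx[:, None, None], Y - sy[:, None, None]) / v - t_obs[:, None, None]

def best(misfit):
    i = misfit.argmin()
    return np.array([X.flat[i], Y.flat[i]])

loc_l2 = best((res**2).sum(axis=0))     # Gaussian-statistics misfit
loc_l1 = best(np.abs(res).sum(axis=0))  # robust misfit, tolerant of the outlier
```

No derivatives of travel times are needed, and the misfit surface on the grid is exactly what a nonlinear confidence-region analysis would inspect around the minimum.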

Journal ArticleDOI
TL;DR: In this article, several well-known linear and nonlinear image restoration methods are written as recursive algorithms, and some new recursive algorithms are developed, based on the assumption that the noise is either a Poisson or a Gaussian process.
Abstract: Linear and nonlinear image restoration methods have been studied in depth but have always been treated separately. In this paper several well-known linear and nonlinear restoration methods are written as recursive algorithms, and some new recursive algorithms are developed. The nonlinear restoration algorithms are based on the assumption that the noise is either a Poisson or a Gaussian process. The linear algorithms are shown to be related to the nonlinear methods through the partial derivative, with respect to the object, of a Poisson or a Gaussian likelihood function. A table of results is given, along with applications to real imagery.
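For the Poisson case, the best-known member of this family of recursive schemes is the Richardson-Lucy iteration, whose multiplicative update is a fixed point of the Poisson likelihood. A 1-D sketch with an invented two-spike object and a 3-tap blur (not the paper's own examples):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """Poisson ML fixed point: o <- o * (psf_mirrored * (b / (psf * o)))."""
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_m = psf[::-1]                             # mirrored PSF for the back-projection
    for _ in range(n_iter):
        fwd = np.convolve(est, psf, mode="same")
        est = est * np.convolve(blurred / np.maximum(fwd, 1e-12), psf_m, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])
obj = np.zeros(32)
obj[10], obj[20] = 4.0, 2.0                       # two point sources
blurred = np.convolve(obj, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Each update multiplies the estimate by the back-projected ratio of data to the current forward model, so positivity is preserved automatically; the Gaussian-likelihood analogue replaces the ratio by an additive residual correction, which is the link to the linear methods the abstract describes.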

Journal ArticleDOI
TL;DR: In this article, it was shown that the combined equivalent widths for a large population of Gaussian-like line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity.
Abstract: It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b, and a power-law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented.

Journal ArticleDOI
TL;DR: In this paper, the amplitude distribution of primary reflection coefficients generated from a number of block-averaged well logs with block thicknesses corresponding to 1 ms (two-way time) was examined.
Abstract: One of the important properties of a series of primary reflection coefficients is its amplitude distribution. This paper examines the amplitude distribution of primary reflection coefficients generated from a number of block-averaged well logs with block thicknesses corresponding to 1 ms (two-way time). The distribution is always essentially symmetric, but has a sharper central peak and larger tails than a Gaussian distribution. Thus any attempt to estimate phase using the bi-spectrum (third-order spectrum) is unlikely to be successful, since the third-order moment is almost identically zero. Complicated tri-spectrum (fourth-order spectrum) calculations are thus required. Minimum Entropy Deconvolution (MED) schemes should be able to exploit this form of non-Gaussianity. However, both these methods assume a white reflectivity sequence; they would therefore mix up the contributions to the trace's spectral shape that are due to the wavelet and those that are due to non-white reflectivity unless corrections are introduced. A mixture of two Laplace distributions provides a good fit to the empirical amplitude distributions. Such a mixture distribution fits nicely with sedimentological observations, namely that clear distinctions can be made between sedimentary beds and lithological units that comprise one or more such beds with the same basic lithology, and that lithological units can be expected to display larger reflection coefficients at their boundaries than sedimentary beds. The geological processes that engender major lithological changes are not the same as those for truncation of bedding. Analyses of sub-sequences of the reflection series are seen to support this idea. The variation of the mixing proportion parameter allows for scale and shape changes in different segments of the series, and hence provides a more flexible description of the series than the generalized Gaussian distribution which is shown to also provide a good fit to the series. 
Both the mixture of two Laplace distributions and the generalized Gaussian distribution can be expressed as scale mixtures of the ordinary Gaussian distribution. This result provides a link with the ordinary Gaussian distribution which might have been expected to be the distribution of a natural series such as reflection coefficients. It is also important in the consideration of the solution of MED-type methods. It is shown that real (coloured) primary reflection series do not seem to be obtainable as the deconvolution result from MED-type deconvolution schemes.
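The two distributional claims above are easy to check numerically: a mixture of two Laplace densities is more peaked and heavier-tailed than a variance-matched Gaussian, and a single Laplace density is exactly a scale mixture of Gaussians with an exponentially distributed variance. The sketch below illustrates both, with illustrative (not fitted) mixing proportion and scale parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, laplace

# Mixture of two zero-mean Laplace densities, as in the reflection-coefficient
# model above.  The proportion p and scales b1, b2 are illustrative values.
p, b1, b2 = 0.7, 0.01, 0.05

def mixture_pdf(x):
    return p * laplace.pdf(x, scale=b1) + (1 - p) * laplace.pdf(x, scale=b2)

# Gaussian matched to the mixture's variance (Var of Laplace(0, b) is 2*b^2).
var_mix = p * 2 * b1**2 + (1 - p) * 2 * b2**2
gauss = norm(scale=np.sqrt(var_mix))

# Sharper central peak and larger tails than the variance-matched Gaussian:
peak_ratio = mixture_pdf(0.0) / gauss.pdf(0.0)   # > 1
tail_ratio = mixture_pdf(0.2) / gauss.pdf(0.2)   # > 1

# Laplace(0, b) as a scale mixture of Gaussians: N(0, V) with
# V ~ Exponential(mean 2*b^2).  Verify the marginal by numerical integration.
b = 0.03

def integrand(v, x):
    return norm.pdf(x, scale=np.sqrt(v)) * np.exp(-v / (2 * b**2)) / (2 * b**2)

x0 = 0.05
mixed, _ = quad(integrand, 0, np.inf, args=(x0,))
direct = laplace.pdf(x0, scale=b)   # the two agree to quadrature accuracy
```

The exponential-variance representation is what links the Laplace mixture back to the "ordinary Gaussian" intuition discussed in the abstract: each sample can be viewed as Gaussian, but with a randomly varying variance.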

Journal ArticleDOI
TL;DR: In this paper, the axial irradiance of focused uniform and Gaussian beams is calculated and the problem of optimum focusing is discussed, and the results for a collimated beam are obtained as a limiting case of a focused beam.
Abstract: Much is said in the literature about Gaussian beams. However, there is little in terms of a quantitative comparison between the propagation of uniform and Gaussian beams. Even when results for both types of beam are given, they appear in a normalized form in such a way that some of the quantitative difference between them is lost. In this paper we first consider an aberration-free beam and investigate the effect of Gaussian amplitude across the aperture on the focal-plane irradiance and encircled-power distributions. The axial irradiance of focused uniform and Gaussian beams is calculated, and the problem of optimum focusing is discussed. The results for a collimated beam are obtained as a limiting case of a focused beam. Next, we consider the problem of aberration balancing and compare the effects of primary aberrations on the two types of beam. Finally, the limiting case of weakly truncated Gaussian beams is discussed, and simple results are obtained for the irradiance distribution and the balanced aberrations.
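The uniform-versus-Gaussian comparison described above can be reproduced, in outline, from the focal-plane diffraction integral over the pupil. The sketch below (illustrative truncation parameter gamma, not the paper's normalization) recovers the Airy pattern for a uniform circular aperture and shows the side-lobe suppression that Gaussian apodization buys at the cost of a broader central lobe:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

# Focal-plane amplitude of a circular-aperture beam as a radial (Hankel)
# integral over the normalized pupil radius rho:
#   U(v) = 2 * Int_0^1 A(rho) J0(v*rho) rho d(rho),
# where v is the normalized radial coordinate in the focal plane.
def focal_amplitude(v, apod):
    val, _ = quad(lambda rho: apod(rho) * j0(v * rho) * rho, 0.0, 1.0)
    return 2.0 * val

def uniform(rho):
    return 1.0

gamma = 1.0  # illustrative: pupil amplitude falls to exp(-1) at the edge

def gaussian(rho):
    return np.exp(-gamma * rho**2)

# Uniform aperture reproduces the Airy pattern, U(v) = 2*J1(v)/v.
v = 2.0
airy = 2.0 * j1(v) / v

# Relative irradiance near the first Airy side lobe (v ~ 5.14): Gaussian
# apodization lowers it well below the uniform aperture's ~1.75% level.
def rel_sidelobe(apod):
    return (focal_amplitude(5.14, apod) / focal_amplitude(0.0, apod)) ** 2
```

Comparing `rel_sidelobe(gaussian)` with `rel_sidelobe(uniform)` makes the qualitative trade-off in the abstract concrete without any normalization hiding the difference.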

Journal ArticleDOI
01 Jan 1986-Scanning
TL;DR: In this paper, an improved Gaussian ϕ(ϱz) model for quantitative electron probe microanalysis is presented, which is based on modifications of the original Packwood and Brown approach.
Abstract: An improved correction model for quantitative electron probe microanalysis, based on modifications of the Gaussian ϕ(ϱz) approach originally introduced by Packwood and Brown, is presented. The improvements consist of better equations for the input parameters of this model, which have been obtained by fitting to experimental ϕ(ϱz) data. The new program has been tested on 627 measurements for medium to heavy elements (Z>11) and on 117 carbon measurements with excellent results: an r.m.s. value of 2.99% in the former case and 4.1% in the latter. Finally the new program has been compared to five other current correction programs, which were found to perform less satisfactorily.
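For readers unfamiliar with the model family, the original Packwood-Brown form describes the depth distribution of ionization ϕ(ϱz) as a surface-centred Gaussian multiplied by a transient term that pulls the curve down to the surface ionization ϕ0. The sketch below uses that generic shape with illustrative parameter values, not the fitted parameter equations of the improved model described above:

```python
import numpy as np
from scipy.integrate import quad

# Generic Packwood-Brown-type Gaussian phi(rho-z) curve:
#   phi(rz) = gamma * exp(-(alpha*rz)^2) * [1 - ((gamma - phi0)/gamma) * exp(-beta*rz)]
# alpha sets the Gaussian decay with mass depth, beta the near-surface
# transient, gamma the Gaussian amplitude, phi0 the surface ionization.
# All values below are illustrative, in arbitrary mass-depth units.
alpha, beta, gamma_, phi0 = 3.0, 15.0, 2.4, 1.4

def phi(rz):
    return gamma_ * np.exp(-(alpha * rz) ** 2) * (
        1.0 - ((gamma_ - phi0) / gamma_) * np.exp(-beta * rz)
    )

# The curve starts at phi0 on the surface, peaks below it, then decays.
rz = np.linspace(0.0, 1.5, 301)
peak = phi(rz).max()

# The generated intensity entering the matrix correction is the area under phi.
area, _ = quad(phi, 0.0, np.inf)
```

Fitting the parameters alpha, beta, gamma, and phi0 to experimental ϕ(ϱz) data, as the abstract describes, is what distinguishes the improved model from the original formulation.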