
Showing papers on "Gaussian published in 1996"


Journal ArticleDOI
TL;DR: The pseudopotential is of an analytic form that gives optimal efficiency in numerical calculations using plane waves as a basis set and is separable and has optimal decay properties in both real and Fourier space.
Abstract: We present pseudopotential coefficients for the first two rows of the Periodic Table. The pseudopotential is of an analytic form that gives optimal efficiency in numerical calculations using plane waves as a basis set. At most, seven coefficients are necessary to specify its analytic form. It is separable and has optimal decay properties in both real and Fourier space. Because of this property, the application of the nonlocal part of the pseudopotential to a wave function can be done efficiently on a grid in real space. Real space integration is much faster for large systems than ordinary multiplication in Fourier space, since it shows only quadratic scaling with respect to the size of the system. We systematically verify the high accuracy of these pseudopotentials by extensive atomic and molecular test calculations. © 1996 The American Physical Society.

5,009 citations


ReportDOI
TL;DR: In this paper, a modified version of the Dickey-Fuller t test is proposed to improve the power when an unknown mean or trend is present, and a Monte Carlo experiment indicates that the modified test works well in small samples.
Abstract: The asymptotic power envelope is derived for point-optimal tests of a unit root in the autoregressive representation of a Gaussian time series under various trend specifications. We propose a family of tests whose asymptotic power functions are tangent to the power envelope at one point and are never far below the envelope. When the series has no deterministic component, some previously proposed tests are shown to be asymptotically equivalent to members of this family. When the series has an unknown mean or linear trend, commonly used tests are found to be dominated by members of the family of point-optimal invariant tests. We propose a modified version of the Dickey-Fuller t test which has substantially improved power when an unknown mean or trend is present. A Monte Carlo experiment indicates that the modified test works well in small samples.
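
A rough illustration of the GLS-demeaning construction behind the modified Dickey-Fuller test, as a minimal sketch for the unknown-mean case using the noncentrality value c̄ = -7 associated with that case. This sketch omits lag augmentation, and the resulting statistic must be compared against DF-GLS critical values, not standard Dickey-Fuller ones.

```python
import numpy as np

def dfgls_stat(y, cbar=-7.0):
    """GLS-demeaned Dickey-Fuller t statistic (sketch, unknown-mean case).

    Quasi-difference the series at alpha = 1 + cbar/T, estimate the mean by
    regressing the quasi-differenced data on a quasi-differenced constant,
    then run a no-constant DF regression on the demeaned series.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    alpha = 1.0 + cbar / T

    # Quasi-differenced series and deterministic term (a constant here)
    yq = np.r_[y[0], y[1:] - alpha * y[:-1]]
    zq = np.r_[1.0, np.ones(T - 1) * (1.0 - alpha)]

    # GLS estimate of the mean, then demean the original series
    mu = (zq @ yq) / (zq @ zq)
    yd = y - mu

    # Dickey-Fuller regression without deterministic terms:
    # diff(yd)_t = rho * yd_{t-1} + error
    dy, ylag = np.diff(yd), yd[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return rho / se  # compare with DF-GLS critical values

# Example: a near-unit-root series
rng = np.random.default_rng(0)
print(dfgls_stat(np.cumsum(rng.standard_normal(200))))
```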

4,284 citations


Journal ArticleDOI
TL;DR: A new algorithm based on a Monte Carlo method is proposed that can be applied to a broad class of nonlinear, non-Gaussian, higher-dimensional state space models, provided that the dimensions of the system noise and the observation noise are relatively low.
Abstract: A new algorithm for the prediction, filtering, and smoothing of non-Gaussian nonlinear state space models is presented. The algorithm is based on a Monte Carlo method in which the successive prediction, filtering, and smoothing conditional probability density functions are approximated by many of their realizations. The particular contribution of this algorithm is that it can be applied to a broad class of nonlinear, non-Gaussian, higher-dimensional state space models, provided that the dimensions of the system noise and the observation noise are relatively low. Several numerical examples are shown.
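
A minimal sketch of the Monte Carlo filtering idea for a scalar state space model. The model, noise levels, and function names below are illustrative, not taken from the paper: each conditional density is represented by a cloud of realizations that is propagated through the system equation, reweighted by the observation likelihood, and resampled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scalar state space model (not from the paper):
#   x_t = 0.5 x_{t-1} + system noise,   y_t = x_t^2 / 20 + observation noise
def propagate(x):
    return 0.5 * x + rng.standard_normal(x.shape)

def likelihood(y, x):
    return np.exp(-0.5 * (y - x**2 / 20.0) ** 2)  # unit observation noise

def monte_carlo_filter(ys, n_particles=5000):
    """Approximate the filtering densities p(x_t | y_1..y_t) by realizations."""
    x = rng.standard_normal(n_particles)       # realizations of the prior
    means = []
    for y in ys:
        x = propagate(x)                       # prediction step
        w = likelihood(y, x)                   # filtering step: weight ...
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)
        x = x[idx]                             # ... and resample
        means.append(x.mean())
    return np.array(means)
```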

2,406 citations


Journal ArticleDOI
TL;DR: It is shown that nonlinear rescalings of a Gaussian linear stochastic process cannot be accounted for by a simple amplitude adjustment of the surrogates, which leads to spurious detection of nonlinearity.
Abstract: Current tests for nonlinearity compare a time series to the null hypothesis of a Gaussian linear stochastic process. For this restricted null assumption, random surrogates can be constructed which are constrained by the linear properties of the data. We propose a more general null hypothesis allowing for nonlinear rescalings of a Gaussian linear process. We show that a simple amplitude adjustment of the surrogates cannot account for such rescalings and leads to spurious detection of nonlinearity. An iterative algorithm is proposed to make appropriate surrogates which have the same autocorrelations as the data and the same probability distribution.
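
A short sketch of the iterative scheme, assuming a real-valued, evenly sampled series (the function name is ours): alternate between imposing the data's power spectrum in Fourier space and imposing its amplitude distribution by rank ordering.

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """Iterative surrogate with the data's power spectrum and amplitude
    distribution (a sketch of the iterative scheme described above)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    target_amp = np.abs(np.fft.rfft(x))        # linear properties (spectrum)
    sorted_x = np.sort(x)                      # amplitude distribution
    s = rng.permutation(x)                     # start from a random shuffle
    for _ in range(n_iter):
        # Enforce the target power spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # Enforce the target amplitude distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s
```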

1,364 citations


Book ChapterDOI
15 Apr 1996
TL;DR: The Condensation algorithm combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time, and is markedly superior to what has previously been attainable from Kalman filtering.
Abstract: The problem of tracking curves in dense visual clutter is a challenging one. Trackers based on Kalman filters are of limited use; because they are based on Gaussian densities which are unimodal, they cannot represent simultaneous alternative hypotheses. Extensions to the Kalman filter to handle multiple data associations work satisfactorily in the simple case of point targets, but do not extend naturally to continuous curves. A new, stochastic algorithm is proposed here, the Condensation algorithm — Conditional Density Propagation over time. It uses ‘factored sampling’, a method previously applied to interpretation of static images, in which the distribution of possible interpretations is represented by a randomly generated set of representatives. The Condensation algorithm combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time. The result is highly robust tracking of agile motion in clutter, markedly superior to what has previously been attainable from Kalman filtering. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.

1,309 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the probability that a set $J$ consisting of a finite union of intervals contains no eigenvalues for the finite $N$ Gaussian Orthogonal ($\beta = 1$) and Gaussian Symplectic ($\beta = 4$) ensembles and their respective scaling limits, both in the bulk and at the edge of the spectrum.
Abstract: The focus of this paper is on the probability, $E_\beta(0;J)$, that a set $J$ consisting of a finite union of intervals contains no eigenvalues for the finite $N$ Gaussian Orthogonal ($\beta=1$) and Gaussian Symplectic ($\beta=4$) Ensembles and their respective scaling limits both in the bulk and at the edge of the spectrum. We show how these probabilities can be expressed in terms of quantities arising in the corresponding unitary ($\beta=2$) ensembles. Our most explicit new results concern the distribution of the largest eigenvalue in each of these ensembles. In the edge scaling limit we show that these largest eigenvalue distributions are given in terms of a particular Painlevé II function.

1,083 citations


Journal ArticleDOI
TL;DR: The mathematical connection between the Expectation-Maximization (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures is developed, and an explicit expression is provided for the projection matrix that maps the gradient to the EM step.
Abstract: We build up the mathematical connection between the “Expectation-Maximization” (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
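
For reference, a plain EM iteration for a one-dimensional Gaussian mixture, the setting the paper analyzes. This is a generic textbook sketch, not the authors' projection-matrix formulation; the paper's point is that the parameter update below equals the log-likelihood gradient premultiplied by a matrix P.

```python
import numpy as np

def em_gmm(x, k, n_iter=100, seed=0):
    """Plain EM for a one-dimensional Gaussian mixture (sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(k, 1.0 / k)                   # mixing proportions
    mu = rng.choice(x, k, replace=False)       # component means
    var = np.full(k, x.var())                  # component variances
    for _ in range(n_iter):
        # E step: responsibilities h[i, j] = p(component j | x_i)
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        h = np.exp(logp - logp.max(axis=1, keepdims=True))
        h /= h.sum(axis=1, keepdims=True)
        # M step: weighted maximum likelihood updates
        nk = h.sum(axis=0)
        pi = nk / n
        mu = (h * x[:, None]).sum(axis=0) / nk
        var = (h * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```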

849 citations


Journal ArticleDOI
TL;DR: This paper fits Gaussian mixtures to each class to facilitate effective classification in non-normal settings, especially when the classes are clustered.
Abstract: Fisher-Rao linear discriminant analysis (LDA) is a valuable tool for multigroup classification. LDA is equivalent to maximum likelihood classification assuming Gaussian distributions for each class. In this paper, we fit Gaussian mixtures to each class to facilitate effective classification in non-normal settings, especially when the classes are clustered. Low-dimensional views are an important by-product of LDA; our new techniques inherit this feature. We can control the within-class spread of the subclass centres relative to the between-class spread. Our technique for fitting these models permits a natural blend with nonparametric versions of LDA.
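
A minimal sketch of the class-conditional mixture idea, assuming scikit-learn is available and using its generic EM fitter in place of the authors' fitting procedure; n_subclasses is an illustrative choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mda(X, y, n_subclasses=3):
    """Fit a Gaussian mixture to each class (sketch of the idea above)."""
    models, priors = {}, {}
    for c in np.unique(y):
        models[c] = GaussianMixture(n_subclasses).fit(X[y == c])
        priors[c] = np.mean(y == c)
    return models, priors

def predict_mda(models, priors, X):
    classes = sorted(models)
    # Bayes rule: maximize log p(c) + log p(x | c), p(x | c) a fitted mixture
    scores = np.column_stack([np.log(priors[c]) + models[c].score_samples(X)
                              for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]
```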

791 citations


Journal Article
TL;DR: The Condensation algorithm as discussed by the authors combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time, achieving state-of-the-art performance.
Abstract: The problem of tracking curves in dense visual clutter is a challenging one. Trackers based on Kalman filters are of limited use; because they are based on Gaussian densities which are unimodal, they cannot represent simultaneous alternative hypotheses. Extensions to the Kalman filter to handle multiple data associations work satisfactorily in the simple case of point targets, but do not extend naturally to continuous curves. A new, stochastic algorithm is proposed here, the Condensation algorithm — Conditional Density Propagation over time. It uses ‘factored sampling’, a method previously applied to interpretation of static images, in which the distribution of possible interpretations is represented by a randomly generated set of representatives. The Condensation algorithm combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time. The result is highly robust tracking of agile motion in clutter, markedly superior to what has previously been attainable from Kalman filtering. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.

667 citations



Book
01 Jan 1996
TL;DR: Akaike's AIC for evaluating parametric models and the smoothness priors approach to Bayesian time series modeling are developed, with linear Gaussian and general state space methods applied to trend estimation, seasonal adjustment, and nonstationary covariance time series.
Abstract (table of contents):
1 Introduction: Background; What is in the Book; Time Series Examples.
2 Modeling Concepts and Methods: Akaike's AIC for evaluating parametric models (the Kullback-Leibler measure, applications, a theoretical development, further discussion); least squares regression by Householder transformation; maximum likelihood estimation and an optimization algorithm; state space methods.
3 The Smoothness Priors Concept: background, history and related work; smoothness priors Bayesian modeling.
4 Scalar Least Squares Modeling: estimating a trend; the long AR model; transfer function estimation (analysis; a transfer function analysis example).
5 Linear Gaussian State Space Modeling: standard state space modeling; some state space models; modeling with missing observations; unequally spaced observations; an information square-root filter/smoother.
6 General State Space Modeling: the general state space model (general filtering and smoothing; model identification); numerical synthesis of the algorithms; the Gaussian sum/two-filter formula approximation (the Gaussian sum approximation; the two-filter formula and Gaussian sum smoothing; remarks on the Gaussian mixture approximation); a Monte Carlo filtering and smoothing method (non-Gaussian nonlinear state space model, filtering and smoothing); a derivation of the Kalman filter (preparations; derivation of the filter and smoother).
7 Applications of Linear Gaussian State Space Modeling: AR time series modeling; Kullback-Leibler computations; smoothing unequally spaced data; a signal extraction problem (estimation of the time-varying variance; separating a micro earthquake from noisy data; a second example).
8 Modeling Trends: state space trend models; state space estimation of smooth trend (smooth trend; smooth trend plus autoregressive model); multiple time series modeling with a common trend plus individual component AR model (maximum daily temperatures 1971-1992; Tiao and Tsay flour price data); modeling trends with discontinuities (Pearson family, Gaussian mixture and Monte Carlo filter estimation of an abruptly changing trend).
9 Seasonal Adjustment: a state space seasonal adjustment model; smooth seasonal adjustment examples; non-Gaussian seasonal adjustment; modeling outliers; legends.
10 Estimation of Time-Varying Variance: modeling time-varying variance; the seismic data; smoothing the periodogram; the maximum daily temperature data.
11 Modeling Scalar Nonstationary Covariance Time Series: a time-varying AR coefficient model; a state space model (instantaneous spectral density); PARCOR time-varying AR modeling; examples.
12 Modeling Multivariate Nonstationary Covariance Time Series: the instantaneous response-orthogonal innovations model; state space modeling; time-varying PARCOR VAR modeling (constant coefficient and time-varying PARCOR coefficient VAR modeling); examples.
13 Modeling Inhomogeneous Discrete Processes: nonstationary discrete, binary and Poisson processes.
14 Quasi-Periodic Process Modeling: the quasi-periodic model; the Wolfer sunspot data; the Canadian lynx data; other examples (phase-unwrapping; quasi-periodicity in the rainfall data); predictive properties of quasi-periodic process modeling.
15 Nonlinear Smoothing: state estimation; a one-dimensional problem; a two-dimensional problem.
16 Other Applications: a large scale decomposition problem (data preparation and a strategy for the data analysis; the data analysis); Markov state classification (a Markov switching model; analysis and results); SPVAR modeling for spectrum estimation (background; the approach and an example).
References. Author Index.

Journal ArticleDOI
TL;DR: It is shown that any point in the capacity region of a Gaussian multiple-access channel is achievable by single-user coding without requiring synchronization among users, provided that each user "splits" data and signal into two parts.
Abstract: It is shown that any point in the capacity region of a Gaussian multiple-access channel is achievable by single-user coding without requiring synchronization among users, provided that each user "splits" data and signal into two parts. Based on this result, a new multiple-access technique called rate-splitting multiple accessing (RSMA) is proposed. RSMA is a code-division multiple-access scheme for the M-user Gaussian multiple-access channel for which the effort of finding the codes for the M users, of encoding, and of decoding is that of at most 2M-1 independent point-to-point Gaussian channels. The effects of bursty sources, multipath fading, and inter-cell interference are discussed and directions for further research are indicated.
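
An illustrative two-user calculation of the splitting idea (the powers below are made up): user 1 is divided into two virtual users decoded before and after user 2 by successive single-user decoding, and every split p recovers a point on the dominant face R1 + R2 = C((P1 + P2)/N).

```python
import numpy as np

def C(snr):
    return 0.5 * np.log2(1.0 + snr)   # Gaussian capacity in bits per use

# Two-user Gaussian MAC: powers P1, P2, noise power N (illustrative values)
P1, P2, N = 4.0, 2.0, 1.0

# Split user 1 into virtual users with powers p and P1 - p, and decode
# successively: 1a (treating user 2 and 1b as noise), then user 2, then 1b.
for p in [0.0, 1.0, 2.5, P1]:
    R1 = C(p / (N + P2 + P1 - p)) + C((P1 - p) / N)
    R2 = C(P2 / (N + P1 - p))
    # The sum telescopes to C((P1 + P2) / N) for every split p
    print(f"p={p:4.1f}  R1={R1:.3f}  R2={R2:.3f}  sum={R1 + R2:.3f}")
```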

Journal ArticleDOI
TL;DR: In this paper, the weak constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation, and several methods based on ensemble statistics that can be used to find the smoother solution are introduced and compared to traditional methods.
Abstract: The weak constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic ocean model.

Journal ArticleDOI
TL;DR: In this article, the spectral representation of the stochastic field is used to obtain the mean value, autocorrelation function, and power spectral density function of a multi-dimensional, homogeneous Gaussian field.
Abstract: The subject of this paper is the simulation of multi-dimensional, homogeneous, Gaussian stochastic fields using the spectral representation method. Following this methodology, sample functions of the stochastic field can be generated using a cosine series formula. These sample functions accurately reflect the prescribed probabilistic characteristics of the stochastic field when the number of terms in the cosine series is large. The ensemble-averaged power spectral density or autocorrelation function approaches the corresponding target function as the sample size increases. In addition, the generated sample functions possess ergodic characteristics in the sense that the spatially-averaged mean value, autocorrelation function and power spectral density function are identical with the corresponding targets, when the averaging takes place over the multi-dimensional domain associated with the fundamental period of the cosine series. Another property of the simulated stochastic field is that it is asymptotically Gaussian as the number of terms in the cosine series approaches infinity. The most important feature of the method is that the cosine series formula can be numerically computed very efficiently using the Fast Fourier Transform technique. The main area of application of this method is the Monte Carlo solution of stochastic problems in structural engineering, engineering mechanics and physics. Specifically, the method has been applied to problems involving random loading (random vibration theory) and random material and geometric properties (response variability due to system stochasticity).
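
A one-dimensional sketch of the cosine-series formula (the target spectrum below is illustrative; the paper's multi-dimensional method evaluates the series with the FFT, while this sketch uses direct summation for brevity):

```python
import numpy as np

def simulate_gaussian_field_1d(S, w_max, N=1024, x=None, seed=0):
    """One-dimensional spectral representation (sketch): sample a zero-mean
    homogeneous Gaussian process with target one-sided spectrum S(w) as a
    cosine series with random phases."""
    rng = np.random.default_rng(seed)
    dw = w_max / N
    w = (np.arange(N) + 0.5) * dw            # frequency grid
    amp = np.sqrt(2.0 * S(w) * dw)           # cosine-series amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, N)   # independent random phases
    if x is None:
        x = np.linspace(0.0, 2.0 * np.pi / dw, 2048)  # one fundamental period
    # f(x) = sqrt(2) * sum_n amp_n cos(w_n x + phi_n); Gaussian as N -> inf
    return x, np.sqrt(2.0) * (amp * np.cos(np.outer(x, w) + phi)).sum(axis=1)

# Example with an illustrative target spectrum S(w) = exp(-w^2)
x, f = simulate_gaussian_field_1d(lambda w: np.exp(-w**2), w_max=4.0)
```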

Journal ArticleDOI
05 Jan 1996-Science
TL;DR: In this article, a generalization of the fast multipole method to Gaussian charge distributions was proposed to reduce the computational requirements of the electronic quantum Coulomb problem, which is one of the limiting factors in ab initio electronic structure calculations.
Abstract: The computation of the electron-electron Coulomb interaction is one of the limiting factors in ab initio electronic structure calculations. The computational requirements for calculating the Coulomb term with commonly used analytic integration techniques between Gaussian functions prohibit electronic structure calculations of large molecules and other nanosystems. Here, it is shown that a generalization of the fast multipole method to Gaussian charge distributions dramatically reduces the computational requirements of the electronic quantum Coulomb problem. Benchmark calculations on graphitic sheets containing more than 400 atoms show near linear scaling together with high speed and accuracy.
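
This is not the paper's multipole algorithm, but the standard closed form that motivates it: the Coulomb interaction of two unit s-type Gaussian charge distributions reduces to the point-charge 1/R beyond modest separations, which is what allows distant distributions to be grouped into multipoles (SciPy is assumed for erf).

```python
import numpy as np
from scipy.special import erf

def gaussian_coulomb(R, a, b):
    """Coulomb energy of two normalized s-Gaussian charge densities with
    exponents a and b at separation R (standard closed form)."""
    mu = np.sqrt(a * b / (a + b))
    return erf(mu * R) / R

# Beyond a modest separation the interaction matches point charges,
# so distant pairs can be handled by multipole expansions.
for R in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(R, gaussian_coulomb(R, a=1.0, b=1.5), 1.0 / R)
```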

Journal ArticleDOI
TL;DR: In this article, a general formulation of the moving horizon estimator is presented, and an algorithm with a fixed-size estimation window and constraints on states, disturbances, and measurement noise is developed, and a probabilistic interpretation is given.
Abstract: A general formulation of the moving horizon estimator is presented. An algorithm with a fixed-size estimation window and constraints on states, disturbances, and measurement noise is developed, and a probabilistic interpretation is given. The moving horizon formulation requires only one more tuning parameter (horizon size) than many well-known approximate nonlinear filters such as the extended Kalman filter (EKF), iterated EKF, Gaussian second-order filter, and statistically linearized filter. The choice of horizon size allows the user to achieve a compromise between the better performance of the batch least-squares solution and the reduced computational requirements of the approximate nonlinear filters. Specific issues relevant to linear and nonlinear systems are discussed with comparisons made to the Kalman filter, EKF, and other recursive and optimization-based estimation schemes.
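
A minimal sketch of one window of a moving horizon estimator for an illustrative scalar linear system (the dynamics, bounds, and function names are ours, not the paper's): the initial state and the disturbances over the window are found by constrained least squares, with a simple prior term standing in for the arrival cost.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative scalar system (not from the paper):
#   x_{t+1} = 0.9 x_t + w_t,   y_t = x_t + v_t,   |w_t| <= w_max
def mhe_window(ys, x_prior, horizon=10, w_max=0.5):
    """Moving horizon estimate over one fixed-size window (sketch)."""
    ys = np.asarray(ys, dtype=float)[-horizon:]
    T = len(ys)

    def residuals(z):
        x0, w = z[0], z[1:]
        x, xs = x0, []
        for t in range(T):
            xs.append(x)
            x = 0.9 * x + (w[t] if t < T - 1 else 0.0)
        r_meas = ys - np.array(xs)           # measurement noise terms
        r_prior = np.array([x0 - x_prior])   # crude arrival-cost term
        return np.concatenate([r_meas, w, r_prior])

    lo = np.r_[-np.inf, -w_max * np.ones(T - 1)]
    hi = np.r_[ np.inf,  w_max * np.ones(T - 1)]
    z0 = np.r_[x_prior, np.zeros(T - 1)]
    sol = least_squares(residuals, z0, bounds=(lo, hi))
    # Roll the window forward one sample when the next measurement arrives
    return sol.x[0]
```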

Journal ArticleDOI
TL;DR: In this paper, the authors discuss various forms of these quantities, derived from probability density functions and based on Bragg diffraction data, both when the Gaussian approximation is appropriate and when it is not.
Abstract: Modern X-ray and neutron diffraction techniques can give precise parameters that describe dynamic or static displacements of atoms in crystals. However, confusing and inconsistent terms and symbols for these quantities occur in the crystallographic literature. This report discusses various forms of these quantities, derived from probability density functions and based on Bragg diffraction data, both when the Gaussian approximation is appropriate and when it is not. The focus is especially on individual atomic anisotropic displacement parameters (ADPs), which may represent atomic motion and possible static displacive disorder. The first of the four sections gives background information, including definitions. The second concerns the kinds of parameter describing atomic displacements that have most often been used in crystal structure analysis and hence are most commonly found in the literature on the subject. It includes a discussion of graphical representations of the Gaussian mean-square displacement matrix. The third section considers the expressions used when the Gaussian approximation is not adequate. The final section gives recommendations for symbols and nomenclature.

Journal ArticleDOI
TL;DR: Simulations are presented which demonstrate that Gaussian ARTMAP consistently obtains a better trade-off of classification rate to number of categories than fuzzy ARTMAP.

Journal ArticleDOI
TL;DR: It is shown that an i.i.d. sample of size n with density f is globally asymptotically equivalent to a white noise experiment with drift $f^{1/2}$ and variance $\frac{1}{4}n^{-1}$.
Abstract: Signal recovery in Gaussian white noise with variance tending to zero has served for some time as a representative model for nonparametric curve estimation, having all the essential traits in a pure form. The equivalence has mostly been stated informally, but an approximation in the sense of Le Cam's deficiency distance $\Delta$ would make it precise. The models are then asymptotically equivalent for all purposes of statistical decision with bounded loss. In nonparametrics, a first result of this kind has recently been established for Gaussian regression. We consider the analogous problem for the experiment given by n i.i.d. observations having density f on the unit interval. Our basic result concerns the parameter space of densities which are in a Hölder ball with exponent $\alpha > 1/2$ and which are uniformly bounded away from zero. We show that an i.i.d. sample of size n with density f is globally asymptotically equivalent to a white noise experiment with drift $f^{1/2}$ and variance $\frac{1}{4}n^{-1}$. This represents a nonparametric analog of Le Cam's heteroscedastic Gaussian approximation in the finite dimensional case. The proof utilizes empirical process techniques related to the Hungarian construction. White noise models on f and log f are also considered, allowing for various "automatic" asymptotic risk bounds in the i.i.d. model from white noise.

Proceedings Article
03 Dec 1996
TL;DR: For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process as discussed by the authors.
Abstract: For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.
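
A sketch of the covariance function arising from erf (sigmoidal) hidden units with Gaussian weight priors, of the analytic form this line of work derives, restricted here to scalar inputs with a diagonal prior; sigma_b and sigma_w are assumed bias and weight variances.

```python
import numpy as np

def arcsine_kernel(x1, x2, sigma_b=1.0, sigma_w=10.0):
    """Covariance of the Gaussian process obtained in the infinite-width
    limit of a network of erf hidden units (sketch, scalar inputs)."""
    def dot(a, b):                # augmented inputs (1, x) with prior Sigma
        return sigma_b + sigma_w * a * b
    num = 2.0 * dot(x1, x2)
    den = np.sqrt((1.0 + 2.0 * dot(x1, x1)) * (1.0 + 2.0 * dot(x2, x2)))
    return (2.0 / np.pi) * np.arcsin(num / den)

# Prediction then proceeds as for any Gaussian process, e.g. with the Gram
# matrix K = [[arcsine_kernel(a, b) for b in X] for a in X].
```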

Journal ArticleDOI
TL;DR: It is shown that the achievable rates depend on the noise distribution only via its power and thus coincide with the capacity region of a white Gaussian noise channel with signal and noise power equal to those of the original channel.
Abstract: We study the performance of a transmission scheme employing random Gaussian codebooks and nearest neighbor decoding over a power limited additive non-Gaussian noise channel. We show that the achievable rates depend on the noise distribution only via its power and thus coincide with the capacity region of a white Gaussian noise channel with signal and noise power equal to those of the original channel. The results are presented for single-user channels as well as multiple-access channels, and are extended to fading channels with side information at the receiver.

Book ChapterDOI
01 Jan 1996
TL;DR: In this paper, it was shown that the corresponding priors over functions computed by the network reach reasonable limits as the number of hidden units goes to infinity, and that there is no need to limit the size of the network in order to avoid overfitting.
Abstract: In this chapter, I show that priors over network parameters can be defined in such a way that the corresponding priors over functions computed by the network reach reasonable limits as the number of hidden units goes to infinity. When using such priors, there is thus no need to limit the size of the network in order to avoid “overfitting”. The infinite network limit also provides insight into the properties of different priors. A Gaussian prior for hidden-to-output weights results in a Gaussian process prior for functions, which may be smooth, Brownian, or fractional Brownian. Quite different effects can be obtained using priors based on non-Gaussian stable distributions. In networks with more than one hidden layer, a combination of Gaussian and non-Gaussian priors appears most interesting.

Book
05 Dec 1996
TL;DR: In this book, the authors present time and frequency domain vibration analysis of linear systems with multiple inputs and outputs, state-space analysis, and introductions to nonlinear stochastic vibration and fatigue damage.
Abstract: 1. Introduction. 2. Analysis of Stochastic Processes. 3. Time Domain Linear Vibration Analysis. 4. Frequency Domain Analysis. 5. Gaussian and Non-Gaussian Stochastic Processes. 6. Occurrence Rates and Distributions of Extremes. 7. Linear Systems with Multiple Inputs and Outputs. 8. State-Space Analysis. 9. Introduction to Nonlinear Stochastic Vibration. 10. Stochastic Analysis of Fatigue Damage. Appendix A. Analysis of Random Variables. Appendix B. Gaussian Random Variables. Appendix C. Dirac Delta Functions. Appendix D. Fourier Analysis. References.

Journal ArticleDOI
TL;DR: Thresholding criteria are introduced that enforce locality of exchange interactions in Cartesian Gaussian‐based Hartree–Fock calculations to demonstrate the O(N) complexity of the algorithm, its competitiveness with standard direct self‐consistent field methods, and a systematic control of error in converged total energies.
Abstract: Thresholding criteria are introduced that enforce locality of exchange interactions in Cartesian Gaussian‐based Hartree–Fock calculations. These criteria are obtained from an asymptotic form of the density matrix valid for insulating systems, and lead to a linear scaling algorithm for computation of the Hartree–Fock exchange matrix. Restricted Hartree–Fock/3‐21G calculations on a series of water clusters and polyglycine α‐helices are used to demonstrate the O(N) complexity of the algorithm, its competitiveness with standard direct self‐consistent field methods, and a systematic control of error in converged total energies.

Journal ArticleDOI
TL;DR: A new implementation of density functional theory for periodic systems in a basis of local Gaussian functions, including a thorough discussion of the various algorithms, is described.

Journal ArticleDOI
TL;DR: In this paper, fully symmetric interpolatory integration rules are constructed for multidimensional integrals over infinite integration regions with a Gaussian weight function; the points for these rules are determined by successive extensions of the one-dimensional three-point Gauss-Hermite rule.
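
For context, the one-dimensional three-point Gauss-Hermite rule from which the fully symmetric extensions are built, using NumPy's standard nodes and weights; the rule is exact for polynomials up to degree 5.

```python
import numpy as np

# Three-point Gauss-Hermite rule for the weight e^{-x^2} on the real line
x, w = np.polynomial.hermite.hermgauss(3)

# Exact for polynomials up to degree 5, e.g. the integrand x^4:
approx = (w * x**4).sum()
exact = 0.75 * np.sqrt(np.pi)   # integral of x^4 e^{-x^2} over the real line
print(approx, exact)
```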

Journal ArticleDOI
TL;DR: This article proposes an alternative to regularized discriminant analysis (RDA) in the Gaussian framework, called EDDA, based on reparameterizing the covariance matrix of each group $G_k$ in terms of its eigenvalue decomposition.
Abstract: Friedman proposed a regularization technique (RDA) of discriminant analysis in the Gaussian framework. RDA uses two regularization parameters to design an intermediate classifier between the linear, the quadratic, and the nearest-means classifiers. In this article we propose an alternative approach, called EDDA, that is based on the reparameterization of the covariance matrix $\Sigma_k$ of a group $G_k$ in terms of its eigenvalue decomposition $\Sigma_k = \lambda_k D_k A_k D_k'$, where $\lambda_k$ specifies the volume of density contours of $G_k$, the diagonal matrix of eigenvalues $A_k$ specifies its shape, and the eigenvectors $D_k$ specify its orientation. Variations on constraints concerning volumes, shapes and orientations ($\lambda_k$, $A_k$, $D_k$) lead to 14 discrimination models of interest. For each model, we derived the normal theory maximum likelihood parameter estimates. Our approach consists of selecting a model by minimizing the sample-based estimate of future misclassification risk by cross-validation. Numerical experiments on simulated and real data...
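
A small sketch of the reparameterization (the helper is ours, and it adopts the common convention that the shape matrix is normalized to unit determinant so that lambda alone carries the volume):

```python
import numpy as np

def covariance_from_decomposition(lam, A, D):
    """Rebuild Sigma_k = lam * D @ diag(A) @ D.T from volume (lam),
    shape (A, eigenvalues normalized to unit product), and orientation
    (D, orthogonal). Constraining any of the three to be equal across
    groups yields the family of models described above."""
    A = np.asarray(A, dtype=float)
    A = A / A.prod() ** (1.0 / len(A))   # normalize shape: det(diag(A)) = 1
    return lam * D @ np.diag(A) @ D.T

# Example: a shared orientation with group-specific volume and shape
theta = np.pi / 6
D = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Sigma1 = covariance_from_decomposition(2.0, [4.0, 1.0], D)
```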

Journal ArticleDOI
TL;DR: In this article, the shapes of irregular small particles are modeled using multivariate lognormal statistics (Gaussian random shape); the random shape is fully described by its autocovariance function, conveniently parameterized by the standard deviation of radius and the correlation length of angular variations.
Abstract: We model the shapes of irregular small particles using multivariate lognormal statistics (Gaussian random shape), and compute absorption and scattering cross sections, asymmetry parameters, and scattering phase matrices in the ray optics approximation. The random shape is fully described by the autocovariance function, which can be conveniently modeled by two statistical parameters: the standard deviation of radius and the correlation length of angular variations. We present an efficient spherical harmonics method for generating sample Gaussian random particles, and outline a ray tracing algorithm that can be adapted to almost arbitrary, mathematically star-like particles. We study the scattering and absorption properties of Gaussian random particles much larger than the wavelength by systematically varying their statistical parameters and complex refractive indices. The results help us understand, in part, light scattering by solar system dust particles, and thereby constrain the physical properties of, for example, asteroid regoliths and cometary comae.

Journal ArticleDOI
TL;DR: A generalized form for the range parameter governing the pair interaction between soft ellipsoidal particles, and an explicit interaction potential for nonequivalent uniaxial particles is obtained by importing the uniaXial range parameter result into the standard Gay-Berne form.
Abstract: In this paper, we report a generalized form for the range parameter governing the pair interaction between soft ellipsoidal particles. For nonequivalent uniaxial particles, we extend the Berne-Pechukas Gaussian overlap formalism to obtain an explicit expression for this range parameter. We confirm that this result is identical to that given by an approach that is not widely recognized, based on an approximation to the Perram-Wertheim hard-ellipsoid contact function. We further illustrate the power of the latter route by using it to write down the range parameter for the interaction between two nonequivalent biaxial particles. An explicit interaction potential for nonequivalent uniaxial particles is obtained by importing the uniaxial range parameter result into the standard Gay-Berne form. A parametrization of this potential is investigated for a rod-disk interaction.

Journal ArticleDOI
TL;DR: In this paper, a simple test for dependence in the residuals of a linear parametric time series model fitted to non-gaussian data is presented, and the test statistic is a third order extension of the standard correlation test for whiteness.
Abstract: This paper presents a simple test for dependence in the residuals of a linear parametric time series model fitted to non-Gaussian data. The test statistic is a third-order extension of the standard correlation test for whiteness, but the number of lags used in this test is a function of the sample size. The power of this test goes to one as the sample size goes to infinity for any alternative which has nonzero bicovariances $c_{e3}(r,s) = E[e(t)e(t+r)e(t+s)]$ for a zero-mean stationary random time series. The asymptotic properties of the test statistic are rigorously determined. This test is important for the validation of the sampling properties of the parameter estimates for standard finite-parameter linear models when the unobserved input (innovations) process is white but not Gaussian. The sizes and power derived from the asymptotic results are checked using artificial data for a number of sample sizes. Theoretical and simulation results presented in this paper support the proposition that the test wi...
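
A sketch of the sample bicovariances on which such a statistic is built (the normalization and fixed lag range here are illustrative; the paper lets the number of lags grow with the sample size):

```python
import numpy as np

def bicovariances(e, max_lag):
    """Sample bicovariances c_e3(r, s) = mean of e(t) e(t+r) e(t+s) of a
    zero-mean residual series, for 0 <= r <= s <= max_lag (sketch)."""
    e = np.asarray(e, dtype=float)
    e = e - e.mean()
    n = len(e)
    c = {}
    for r in range(max_lag + 1):
        for s in range(r, max_lag + 1):
            m = n - s
            c[(r, s)] = (e[:m] * e[r:r + m] * e[s:s + m]).sum() / n
    return c

# Under the null of independent residuals the standardized bicovariances
# are asymptotically normal, so a portmanteau statistic can be formed by
# summing their squares scaled by consistent variance estimates.
```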