
Showing papers on "Gaussian published in 1995"


Journal ArticleDOI
TL;DR: The individual Gaussian components of a GMM are shown to represent some general speaker-dependent spectral shapes that are effective for modeling speaker identity, and the GMM is shown to outperform the other speaker modeling techniques on an identical 16 speaker telephone speech task.
Abstract: This paper introduces and motivates the use of Gaussian mixture models (GMM) for robust text-independent speaker identification. The individual Gaussian components of a GMM are shown to represent some general speaker-dependent spectral shapes that are effective for modeling speaker identity. The focus of this work is on applications which require high identification rates using short utterances from unconstrained conversational speech and robustness to degradations produced by transmission over a telephone channel. A complete experimental evaluation of the Gaussian mixture speaker model is conducted on a 49 speaker, conversational telephone speech database. The experiments examine algorithmic issues (initialization, variance limiting, model order selection), spectral variability robustness techniques, large population performance, and comparisons to other speaker modeling techniques (uni-modal Gaussian, VQ codebook, tied Gaussian mixture, and radial basis functions). The Gaussian mixture speaker model attains 96.8% identification accuracy using 5 second clean speech utterances and 80.8% accuracy using 15 second telephone speech utterances with a 49 speaker population and is shown to outperform the other speaker modeling techniques on an identical 16 speaker telephone speech task.

3,134 citations
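The speaker-identification scheme described above can be sketched compactly: train one GMM per speaker and classify a test utterance by the model with the highest average log-likelihood. The following is a minimal illustration, not the paper's implementation; it assumes scikit-learn is available and uses synthetic feature vectors as stand-ins for real spectral features, with `reg_covar` playing the role of the paper's variance limiting.

```python
# Minimal sketch of GMM-based speaker identification (illustrative only;
# synthetic features stand in for spectral features of real speech).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fake 12-dimensional "spectral" feature frames for two speakers.
train = {
    "spk_a": rng.normal(loc=0.0, scale=1.0, size=(500, 12)),
    "spk_b": rng.normal(loc=3.0, scale=1.0, size=(500, 12)),
}

# One diagonal-covariance GMM per speaker; reg_covar acts as a variance floor.
models = {
    name: GaussianMixture(n_components=8, covariance_type="diag",
                          reg_covar=1e-3, random_state=0).fit(feats)
    for name, feats in train.items()
}

def identify(frames):
    """Pick the speaker whose model gives the highest average log-likelihood."""
    return max(models, key=lambda name: models[name].score(frames))

test_utterance = rng.normal(loc=3.0, scale=1.0, size=(100, 12))
print(identify(test_utterance))
```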


Journal ArticleDOI
TL;DR: Methods of optimization for deriving the maximum likelihood estimates, as well as the practical usefulness of these models, are discussed, and an application to stellar data dramatically illustrates the relevance of allowing clusters to have different volumes.

858 citations


Journal ArticleDOI
TL;DR: It is shown that the characteristic function yields results in agreement with recent simulations of truncated Lévy flights, and the convergence of the Lévy process towards the Gaussian is demonstrated without simulations.
Abstract: An analytic expression for the characteristic function defining a truncated Lévy flight is derived. It is shown that the characteristic function yields results in agreement with recent simulations of truncated Lévy flights by Mantegna and Stanley [Phys. Rev. Lett. 73, 2946 (1994)]. With the analytic expression for the characteristic function, the convergence of the Lévy process towards the Gaussian is demonstrated without simulations. In the calculation of first return probability the simulations are replaced by numerical integration using simple quadratures.

595 citations


Journal ArticleDOI
TL;DR: The simulation smoother is introduced, which draws from the multivariate posterior distribution of the disturbances of the model, so avoiding the degeneracies inherent in state samplers.
Abstract: SUMMARY Recently suggested procedures for simulating from the posterior density of states given a Gaussian state space time series are refined and extended. We introduce and study the simulation smoother, which draws from the multivariate posterior distribution of the disturbances of the model, so avoiding the degeneracies inherent in state samplers. The technique is important in Gibbs sampling with non-Gaussian time series models, and for performing Bayesian analysis of Gaussian time series.

587 citations


Journal ArticleDOI
TL;DR: A simple alternative method to estimate the shape parameter for the generalized Gaussian PDF is proposed that significantly reduces the number of computations by eliminating the need for any statistical goodness-of-fit test.
Abstract: A subband decomposition scheme for video signals, in which the original or difference frames are each decomposed into 16 equal-size frequency subbands, is considered. Westerink et al. (1991) have shown that the distribution of the sample values in each subband can be modeled with a "generalized Gaussian" probability density function (PDF) where three parameters, mean, variance, and shape are required to uniquely determine the PDF. To estimate the shape parameter, a series of statistical goodness-of-fit tests such as Kolmogorov-Smirnov or chi-squared tests have been used. A simple alternative method to estimate the shape parameter for the generalized Gaussian PDF is proposed that significantly reduces the number of computations by eliminating the need for any statistical goodness-of-fit test.

565 citations
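A standard moment-based route to the shape parameter, shown here as an illustration of the general idea rather than the paper's exact estimator, is to invert the ratio E|x| / sqrt(E[x^2]), which for a generalized Gaussian depends only on the shape beta through gamma functions:

```python
# Hedged sketch: estimating the generalized-Gaussian shape parameter from
# moments instead of goodness-of-fit tests (general idea; the paper's exact
# estimator may differ). Uses only the standard library.
import math
import random

def ggd_ratio(beta):
    # For a generalized Gaussian with shape beta, E|x| / sqrt(E[x^2]) equals
    # Gamma(2/beta) / sqrt(Gamma(1/beta) * Gamma(3/beta)).
    return math.gamma(2.0 / beta) / math.sqrt(
        math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_shape(samples, lo=0.1, hi=10.0, iters=60):
    """Invert the moment ratio by bisection (the ratio increases with beta)."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Gaussian samples (true shape beta = 2) should give an estimate near 2.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(20000)]
print(round(estimate_shape(data), 1))
```

The Laplacian and Gaussian cases fall out as beta = 1 and beta = 2 respectively, which is why the moment ratio discriminates between the candidate models discussed elsewhere on this page.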


BookDOI
01 Jan 1995
TL;DR: This book discusses Gaussian distributions and random variables, the functional law of the iterated logarithm, and several open problems.
Abstract: Preface. 1: Gaussian distributions and random variables. 2: Multi-dimensional Gaussian distributions. 3: Covariances. 4: Random functions. 5: Examples of Gaussian random functions. 6: Modelling the covariances. 7: Oscillations. 8: Infinite-dimensional Gaussian distributions. 9: Linear functionals, admissible shifts, and the kernel. 10: The most important Gaussian distributions. 11: Convexity and the isoperimetric inequality. 12: The large deviations principle. 13: Exact asymptotics of large deviations. 14: Metric entropy and the comparison principle. 15: Continuity and boundedness. 16: Majorizing measures. 17: The functional law of the iterated logarithm. 18: Small deviations. 19: Several open problems. Comments. References. Subject index. List of basic notations.

459 citations


Journal ArticleDOI
TL;DR: Standard techniques for improved generalization from neural networks include weight decay and pruning; a comparison is made with results of MacKay using the evidence framework and a gaussian regularizer.
Abstract: Standard techniques for improved generalization from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy suggests a Laplace rather than a gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error and (2) those failing to achieve this sensitivity and that therefore vanish. Since the critical value is determined adaptively during training, pruning---in the sense of setting weights to exact zeros---becomes an automatic consequence of regularization alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a gaussian regularizer.

362 citations


Journal ArticleDOI
17 Sep 1995
TL;DR: The rate-distortion region is determined in the special case where one source plays the role of partial side information to reproduce sequences emitted from the other source with a prescribed average distortion level.
Abstract: We consider the problem of separate coding for two correlated memoryless Gaussian sources. We determine the rate-distortion region in the special case where one source plays the role of partial side information to reproduce sequences emitted from the other source with a prescribed average distortion level. We also derive an explicit outer bound of the rate-distortion region, demonstrating that the inner bound obtained by Berger (1978) partially coincides with the rate-distortion region.

341 citations


Book ChapterDOI
13 Sep 1995
TL;DR: The method uses simple non-linear modifications of Gaussian filters, thus avoiding iteration steps and convergence problems and providing excellent smoothing of fine image details without destroying the coarser structures.
Abstract: This paper presents a new diffusion method for edge preserving smoothing of images. In contrast to other methods it is not based on an anisotropic modification of the heat conductance equation, rather on a modification of the way the solution of the heat conductance equation is obtained by convolving the initial data with a Gaussian kernel. Hence the method uses simple non-linear modifications of Gaussian filters, thus avoiding iteration steps and convergence problems. A chain of three to five filters with suitable parameters provides excellent smoothing of fine image details without destroying the coarser structures. The size and contrast of the eliminated details can be selected. The choice of the parameters is not critical and the edges are not displaced when changing the scale. The filter stages can be implemented efficiently on almost any parallel hardware architecture.

339 citations
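The principle behind such a nonlinear modification of a Gaussian filter can be illustrated in one dimension: weight each neighbor by a spatial Gaussian times a nonlinear function of the intensity difference, so that averaging stops at edges. This is a generic sketch of the idea, not the authors' specific filter chain, and the sigma values are arbitrary:

```python
# 1-D sketch of an edge-preserving nonlinear Gaussian filter (illustrative;
# not the filter chain of the paper). Uses only the standard library.
import math

def nonlinear_gauss_1d(x, sigma_s=2.0, sigma_r=0.3, radius=6):
    """Gaussian smoothing whose weights are damped by intensity differences."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: the filter flattens the noise but keeps the jump sharp.
step = [0.0] * 20 + [1.0] * 20
noisy = [v + 0.05 * (((i * 37) % 11) - 5) / 5 for i, v in enumerate(step)]
smoothed = nonlinear_gauss_1d(noisy)
print(abs(smoothed[25] - smoothed[15]) > 0.8)  # edge contrast preserved
```

Because the intensity weight is nearly zero across the step, samples on opposite sides of the edge barely mix, which is the non-iterative analogue of the anisotropic diffusion behavior the abstract contrasts itself with.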


Journal ArticleDOI
17 Sep 1995
TL;DR: There is a significant loss between the cases when the agents are allowed to convene and when they are not, and it is established that the distortion decays asymptotically only as 1/R.
Abstract: A firm's CEO employs a team of L agents who observe independently corrupted versions of a data sequence {X(t)}_{t=1}^∞. Let R be the total data rate at which the agents may communicate information about their observations to the CEO. The agents are not allowed to convene. Berger, Zhang and Viswanathan (see ibid., vol.42, no.5, p.887-902, 1996) determined the asymptotic behavior of the minimal error frequency in the limit as L and R tend to infinity for the case in which the source and observations are discrete and memoryless. We consider the same multiterminal source coding problem when {X(t)}_{t=1}^∞ is an independent and identically distributed (i.i.d.) Gaussian sequence corrupted by independent Gaussian noise. We study, under quadratic distortion, the rate-distortion tradeoff in the limit as L and R tend to infinity. As in the discrete case, there is a significant loss between the cases when the agents are allowed to convene and when they are not. As L → ∞, if the agents may pool their data before communicating with the CEO, the distortion decays exponentially with the total rate R; this corresponds to the distortion-rate function for an i.i.d. Gaussian source. However, for the case in which they are not permitted to convene, we establish that the distortion decays asymptotically only as 1/R.

321 citations


Journal ArticleDOI
TL;DR: In this article, the trial wave functions are chosen to be combinations of correlated Gaussians, which are constructed from products of the single-particle Gaussian wave packets through an integral transformation, thereby facilitating fully analytical calculations of the matrix elements.
Abstract: Precise variational solutions are given for problems involving diverse fermionic and bosonic (N=2--7)-body systems. The trial wave functions are chosen to be combinations of correlated Gaussians, which are constructed from products of the single-particle Gaussian wave packets through an integral transformation, thereby facilitating fully analytical calculations of the matrix elements. The nonlinear parameters of the trial function are chosen by a stochastic technique. The method has proved very efficient, virtually exact, and it seems feasible for any few-body bound-state problems emerging in nuclear or atomic physics.


Journal ArticleDOI
TL;DR: A recursive formulation of discounted costs for a linear quadratic exponential Gaussian linear regulator problem which implies time-invariant linear decision rules in the infinite horizon case is described.
Abstract: In this note, we describe a recursive formulation of discounted costs for a linear quadratic exponential Gaussian linear regulator problem which implies time-invariant linear decision rules in the infinite horizon case. Time invariance in the discounted case is attained by surrendering state-separability of the risk-adjusted costs.

Journal ArticleDOI
TL;DR: In this article, the authors review a very few results on some basic elements of large sample theory in a restricted structural framework, as described in detail in the recent book by LeCam and Yang (1990, Asymptotics in Statistics: Some Basic Concepts).
Abstract: The primary purpose of this paper is to review a very few results on some basic elements of large sample theory in a restricted structural framework, as described in detail in the recent book by LeCam and Yang (1990, Asymptotics in Statistics: Some Basic Concepts. New York: Springer), and to illustrate how the asymptotic inference problems associated with a wide variety of time series regression models fit into such a structural framework. The models illustrated include many linear time series models, including cointegrated models and autoregressive models with unit roots that are of wide current interest. The general treatment also includes nonlinear models, including what have become known as ARCH models. The possibility of replacing the density of the error variables of such models by an estimate of it (adaptive estimation) based on the observations is also considered. Under the framework in which the asymptotic problems are treated, only the approximating structure of the likelihood ratios of the observations, together with auxiliary estimates of the parameters, will be required. Such approximating structures are available under quite general assumptions, such as that the Fisher information of the common density of the error variables is finite and nonsingular, and the more specific assumptions, such as Gaussianity, are not required. In addition, the construction and the form of inference procedures will not involve any additional complications in the non-Gaussian situations because the approximating quadratic structure actually will reduce the problems to the situations similar to those involved in the Gaussian cases.

Journal ArticleDOI
TL;DR: Image subband and discrete cosine transform coefficients are modeled for efficient quantization and noiseless coding, motivating the selection of pyramid codes for transform and subband image coding.
Abstract: Image subband and discrete cosine transform coefficients are modeled for efficient quantization and noiseless coding. Quantizers and codes are selected based on Laplacian, fixed generalized Gaussian, and adaptive generalized Gaussian models. The quantizers and codes based on the adaptive generalized Gaussian models are always superior in mean-squared error distortion performance but, generally, by no more than 0.08 bit/pixel, compared with the much simpler Laplacian model-based quantizers and noiseless codes. This provides strong motivation for the selection of pyramid codes for transform and subband image coding.

Journal ArticleDOI
TL;DR: In this article, a nearly complete analysis of the key distributions encountered in single and multi-look polarimetric synthetic aperture radar data under the bivariate Gaussian and K-distribution models is presented.
Abstract: This paper provides a nearly complete analysis of the key distributions encountered in single- and multi-look polarimetric synthetic aperture radar data under the bivariate Gaussian and K-distribution models. It contains new analytic results on the moments of the amplitude and phase difference in single look data and on the moments of the amplitude in multi-look data. As yet no analytic results for the moments of multi-look phase difference have been found, except in limiting cases. The maximum likelihood estimators of the covariance matrix elements of two jointly Gaussian channels are derived, together with their asymptotic variances. The problems in extending this analysis to the bivariate K-distribution are also discussed.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method based on a linear minimum variance solution, given data and an assumed prior model which specifies the covariance matrix of the field to be reconstructed, which can be used to reconstruct the large-scale structure of the universe from noisy, sparse, and incomplete data.
Abstract: The formalism of Wiener filtering is developed here for the purpose of reconstructing the large-scale structure of the universe from noisy, sparse, and incomplete data. The method is based on a linear minimum variance solution, given data and an assumed prior model which specifies the covariance matrix of the field to be reconstructed. While earlier applications of the Wiener filter have focused on estimation, namely suppressing the noise in the measured quantities, we extend the method here to perform both prediction and dynamical reconstruction. The Wiener filter is used to predict the values of unmeasured quantities, such as the density field in unsampled regions of space, or to deconvolve blurred data. The method is developed, within the context of linear gravitational instability theory, to perform dynamical reconstruction of one field which is dynamically related to some other observed field. This is the case, for example, in the reconstruction of the real space galaxy distribution from its redshift distribution or the prediction of the radial velocity field from the observed density field. When the field to be reconstructed is a Gaussian random field, such as the primordial perturbation field predicted by the canonical model of cosmology, the Wiener filter can be pushed to its fullest potential. In such a case the Wiener estimator coincides with the Bayesian estimator designed to maximize the posterior probability. The Wiener filter can also be derived by assuming a quadratic regularization function, in analogy with the ''maximum entropy'' method. The mean field obtained by the minimal variance solution can be supplemented with constrained realizations of the Gaussian field to create random realizations of the residual from the mean.
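The core of the linear minimum variance solution can be shown in one dimension: with assumed signal and noise power spectra S and N, each Fourier mode of the data is weighted by S/(S+N). This sketch is purely illustrative (a toy sine-wave "field", not the cosmological pipeline; the spectra here are taken as known):

```python
# 1-D Wiener filter sketch (illustrative): weight each Fourier mode of the
# data by S / (S + N), the assumed signal and noise power in that mode.
import numpy as np

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 128.0)      # slowly varying "field"
data = signal + rng.normal(scale=0.5, size=n)

S = np.abs(np.fft.rfft(signal)) ** 2        # assumed prior signal power
N = np.full_like(S, 0.25 * n)               # E|FFT(noise)|^2 = sigma^2 * n
weight = S / (S + N)
estimate = np.fft.irfft(weight * np.fft.rfft(data), n)

mse_raw = np.mean((data - signal) ** 2)
mse_wiener = np.mean((estimate - signal) ** 2)
print(mse_wiener < mse_raw)  # the filtered estimate is closer to the signal
```

In the cosmological setting the same weighting, built from the prior covariance matrix rather than a diagonal power spectrum, also fills in unsampled regions, which is the prediction step the abstract describes.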

Journal ArticleDOI
TL;DR: In this article, a novel and efficient method to expand the solute electronic wave function in a distributed Gaussian basis with a shell structure is presented, which is capable of mimicking the shape fluctuation of the excess charge distribution and its diffusion through the solvent.
Abstract: For mixed quantum‐classical molecular dynamics simulations of solvated excess charges a novel and efficient method to expand the solute electronic wave function in a distributed Gaussian basis with a shell structure is presented. The aggregate of Gaussian orbitals is capable of mimicking the shape fluctuation of the excess charge distribution and its diffusion through the solvent. This approach also offers an easy pathway to treat the solvent electronic polarization in an explicit and self‐consistent fashion. As applications, the results of adiabatic molecular dynamics simulations for the hydrated electron and the aqueous chloride are reported. For e−/H2O the computed ground state absorption spectrum is discussed. Adiabatic relaxation as well as nonadiabatic transition rates are evaluated—the latter by means of an original Golden Rule formula—and compared to experimental results. In the case of Cl−/H2O the charge transfer to solvent spectra are analyzed. The ability of the mobile basis set method to descr...

Journal ArticleDOI
TL;DR: An efficient scheme for evaluating the quasiparticle corrections to local-density-approximation (LDA) band structures within the GW approximation is reported and the static dielectric matrix of the Si(001) surface is fully calculated within the random phase approximation (RPA).
Abstract: We report an efficient scheme for evaluating the quasiparticle corrections to local-density-approximation (LDA) band structures within the GW approximation. In this scheme, the GW self-energy corrections are evaluated in a sufficiently flexible Gaussian orbital basis set instead of using plane-wave Fourier representations of the relevant two-point functions. It turns out that this set has to include orbitals up to f-type symmetry, when in the LDA calculations Gaussian orbitals up to d-type symmetry are needed for convergence. For bulk Si, both schemes yield virtually identical quasiparticle band structures and the demand on computer time is roughly the same. For the Si(001)-(2×1) surface, the GW Gaussian orbital scheme is a factor of 5 faster. In our calculations for Si(001)-(2×1) the dynamic dielectric matrix is obtained by applying a plasmon-pole approximation. The static dielectric matrix of the Si(001) surface is fully calculated within the random phase approximation (RPA). In addition, we have performed quasiparticle surface band-structure calculations employing two model dielectric matrices. Our respective results are compared with those obtained employing the full RPA dielectric matrix as well as with results of previous calculations by other authors which were based on model dielectric matrices.

Journal ArticleDOI
TL;DR: Two canonical simulation procedures for the generation of correlated non-Gaussian clutter are presented and a new approach for the goodness-of-fit test is proposed in order to assess the performance of the simulation procedure.
Abstract: We develop computer simulation procedures which enable us to generate any correlated non-Gaussian radar clutter that can be modeled as a spherically invariant random process (SIRP). In most cases, when the clutter is a correlated non-Gaussian random process, performance of the optimal radar signal processor cannot be evaluated analytically. Therefore, in order to evaluate such processors, there is a need for efficient computer simulation of the clutter. We present two canonical simulation procedures for the generation of correlated non-Gaussian clutter. A new approach for the goodness-of-fit test is proposed in order to assess the performance of the simulation procedure.
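One standard way to draw from a SIRP, sketched here as an illustration rather than the paper's exact canonical procedure, is to modulate a correlated complex Gaussian "speckle" vector by an independent nonnegative texture variable; a gamma-distributed texture power gives K-distributed clutter (an assumed, commonly used choice):

```python
# Hedged SIRP clutter sketch: correlated complex Gaussian speckle times an
# independent texture variable (gamma power -> K-distributed clutter).
import numpy as np

rng = np.random.default_rng(3)

def sirp_clutter(cov, shape, n_pulses):
    """Draw one clutter vector: texture * correlated complex Gaussian speckle."""
    L = np.linalg.cholesky(cov)                       # imposes the correlation
    g = rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses)
    speckle = (L @ g) / np.sqrt(2)
    texture = np.sqrt(rng.gamma(shape, 1.0 / shape))  # unit-mean gamma power
    return texture * speckle

# Exponentially decaying pulse-to-pulse correlation, 8 pulses, spiky texture.
cov = 0.9 ** np.abs(np.subtract.outer(np.arange(8), np.arange(8)))
z = sirp_clutter(cov, shape=0.5, n_pulses=8)
print(z.shape)
```

Because the texture multiplies the whole vector, the correlation structure of the Gaussian component is preserved while the marginal amplitude becomes heavy-tailed, which is exactly the SIRP property the paper exploits.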

Journal ArticleDOI
TL;DR: In this article, a test for the hypothesis that a time series is reversible is proposed, and it is shown that if reversibility can be rejected all static transformations of linear Gaussian random processes can be excluded as a model for the time series.

Journal ArticleDOI
TL;DR: A triple Gaussian pencil beam algorithm, whose coefficients and parameters are optimized using Fourier transform methods, is used to demonstrate the field size dependence of the photon beam depth dose curve.
Abstract: The transverse profiles of pencil beams are often represented by Gaussian functions in order to speed up electron beam treatment planning algorithms, because convolutions of Gaussians with most beam fluence profiles can be performed analytically. We extend this approach to high-energy photon radiations. Monte-Carlo generated transverse profiles of photon pencil beams are adequately represented by a sum of three Gaussian functions, whose coefficients and parameters have been optimized using Fourier transform methods. The axial profile of the pencil beam is determined by the depth-dependent surface integral of the dose in the transverse plane. As a first application, the triple Gaussian pencil beam algorithm is used to demonstrate the field size dependence of the photon beam depth dose curve. Photon beams modified by wedge filters or shielding blocks will be treated in a second communication.
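The reason Gaussian transverse profiles make these convolutions analytic is the closure of Gaussians under convolution: a Gaussian of width σ₁ convolved with one of width σ₂ is a Gaussian of width sqrt(σ₁² + σ₂²). A quick numerical check of that identity (standard library only):

```python
# Numerical check that convolving two Gaussians gives a Gaussian whose
# variances add -- the identity that makes pencil-beam convolutions analytic.
import math

def gauss(x, s):
    """Unit-area Gaussian of standard deviation s."""
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

s1, s2 = 1.0, 2.0
dx = 0.01
xs = [i * dx for i in range(-1000, 1001)]   # fine grid over [-10, 10]

# Discrete convolution evaluated at 0 vs. the closed-form Gaussian value.
conv_at_0 = sum(gauss(t, s1) * gauss(0.0 - t, s2) for t in xs) * dx
analytic = gauss(0.0, math.sqrt(s1 * s1 + s2 * s2))
print(abs(conv_at_0 - analytic) < 1e-4)
```

A sum of three Gaussians inherits the same property term by term, which is why the triple-Gaussian representation keeps the planning calculation analytic.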

Journal ArticleDOI
TL;DR: Generalized Gaussian and Laplacian source models are compared in discrete cosine transform (DCT) image coding and with block classification based on AC energy, the densities of the DCT coefficients are much closer to the LaPLacian or even the Gaussian.
Abstract: Generalized Gaussian and Laplacian source models are compared in discrete cosine transform (DCT) image coding. A difference in peak signal to noise ratio (PSNR) of at most 0.5 dB is observed for encoding different images. We also compare maximum likelihood estimation of the generalized Gaussian density parameters with a simpler method proposed by Mallat (1989). With block classification based on AC energy, the densities of the DCT coefficients are much closer to the Laplacian or even the Gaussian.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the properties of the probability distribution function of the cosmological continuous density field and compared dynamically motivated methods to derive the PDF, based on the regularization of integrals.
Abstract: The properties of the probability distribution function of the cosmological continuous density field are studied. We present further developments and compare dynamically motivated methods to derive the PDF. One of them is based on the Zel'dovich approximation (ZA). We extend this method for arbitrary initial conditions, regardless of whether they are Gaussian or not. The other approach is based on perturbation theory with Gaussian initial fluctuations. We include the smoothing effects in the PDFs. We examine the relationships between the shapes of the PDFs and the moments. It is found that formally there are no moments in the ZA, but a way to resolve this issue is proposed, based on the regularization of integrals. A closed form for the generating function of the moments in the ZA is also presented, including the smoothing effects. We suggest the methods to build PDFs out of the whole series of the moments, or out of a limited number of moments -- the Edgeworth expansion. The last approach gives us an alternative method to evaluate the skewness and kurtosis by measuring the PDF around its peak. We note a general connection between the generating function of moments for small r.m.s. $\sigma$ and the non-linear evolution of the overdense spherical fluctuation in the dynamical models. All these approaches have been applied in the 1D case where the ZA is exact, and simple analytical results are obtained. The 3D case is analyzed in the same manner and we find a mutual agreement in the PDFs derived by different methods in the quasi-linear regime. A numerical CDM simulation was used to validate the accuracy of the considered approximations. We explain the successful log-normal fit of the PDF from that simulation at moderate $\sigma$ as mere fortune, but not as a universal form of density PDF in general.

Journal ArticleDOI
Gustavo Deco1, Wilfried Brauer1
TL;DR: A model of factorial learning for general nonlinear transformations of an arbitrary non-Gaussian (or Gaussian) environment with statistically nonlinearly correlated input is presented.

Journal ArticleDOI
TL;DR: This work investigates the effective conductivity of a class of amorphous media defined by the level-cut of a Gaussian random field; the three point solid-solid correlation function is derived and utilized in the evaluation of the Beran-Milton bounds.
Abstract: We investigate the effective conductivity (σ_e) of a class of amorphous media defined by the level cut of a Gaussian random field. The three point solid-solid correlation function is derived and utilized in the evaluation of the Beran-Milton bounds. Simulations are used to calculate σ_e for a variety of fields and volume fractions at several different conductivity contrasts. Relatively large differences in σ_e are observed between the Gaussian media and the identical overlapping sphere model used previously as a "model" amorphous medium. In contrast, σ_e shows little variability between different Gaussian media.
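A level-cut medium of the kind described is straightforward to construct numerically: generate a correlated Gaussian random field (here by smoothing white noise with an arbitrary Gaussian kernel, an illustrative choice rather than the paper's specific fields) and threshold it at the quantile matching the desired solid volume fraction:

```python
# Sketch of a level-cut Gaussian medium: correlate white noise by Fourier-
# space smoothing, then "cut" at the quantile giving 30% solid phase.
import numpy as np

rng = np.random.default_rng(5)
field = rng.normal(size=(128, 128))

# Gaussian low-pass kernel in Fourier space (assumed correlation scale).
kx = np.fft.fftfreq(128)[:, None]
ky = np.fft.fftfreq(128)[None, :]
kernel = np.exp(-(kx ** 2 + ky ** 2) / (2 * 0.05 ** 2))
smooth = np.fft.ifft2(np.fft.fft2(field) * kernel).real

# Threshold at the 70th percentile so 30% of pixels are "solid".
level = np.quantile(smooth, 0.7)
solid = smooth > level
print(abs(solid.mean() - 0.3) < 0.01)
```

Varying the kernel changes the morphology at a fixed volume fraction, which is the degree of freedom the paper exploits when comparing different Gaussian media.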

Journal ArticleDOI
TL;DR: Modeling of partial volume effect is shown to be useful when one of the materials is present in images mainly as a pixel component and incorporated into finite mixture densities in order to more accurately model the distribution of image pixel values.
Abstract: Statistical models of partial volume effect for systems with various types of noise or pixel value distributions are developed and probability density functions are derived. The models assume either Gaussian system sampling noise or intrinsic material variances with Gaussian or Poisson statistics. In particular, a material can be viewed as having a distinct value that has been corrupted by additive noise either before or after partial volume mixing, or the material could have nondistinct values with a Poisson distribution as might be the case in nuclear medicine images. General forms of the probability density functions are presented for the N material cases and particular forms for two- and three-material cases are derived. These models are incorporated into finite mixture densities in order to more accurately model the distribution of image pixel values. Examples are presented using simulated histograms to demonstrate the efficacy of the models for quantification. Modeling of partial volume effect is shown to be useful when one of the materials is present in images mainly as a pixel component.
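The simplest instance of such a model, two materials mixed with a uniform fraction and Gaussian noise added after mixing, can be sampled directly; the parameter values below are illustrative, not taken from the paper:

```python
# Monte Carlo sketch of a two-material partial-volume pixel model:
# value = a*mu1 + (1 - a)*mu2 + Gaussian noise, with mixing fraction a ~ U[0,1].
import random

random.seed(4)
mu1, mu2, sigma = 10.0, 30.0, 1.0   # assumed material means and noise level

def partial_volume_pixel():
    a = random.random()              # fraction of material 1 in the pixel
    return a * mu1 + (1 - a) * mu2 + random.gauss(0.0, sigma)

draws = [partial_volume_pixel() for _ in range(10000)]
mid = sum(mu1 < v < mu2 for v in draws) / len(draws)
print(mid > 0.9)   # most mixed pixels fall between the pure-material means
```

The resulting histogram is a broad plateau between the two pure-material peaks rather than two isolated Gaussians, which is why adding a partial-volume term to a finite mixture density improves the fit.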


Proceedings Article
David Heckerman1, Dan Geiger1
18 Aug 1995
TL;DR: A general Bayesian scoring metric is derived, appropriate for both discrete and Gaussian domains, from well-known statistical facts about the Dirichlet and normal--Wishart distributions.
Abstract: We examine Bayesian methods for learning Bayesian networks from a combination of prior knowledge and statistical data. In particular, we unify the approaches we presented at last year's conference for discrete and Gaussian domains. We derive a general Bayesian scoring metric, appropriate for both domains. We then use this metric in combination with well-known statistical facts about the Dirichlet and normal--Wishart distributions to derive our metrics for discrete and Gaussian domains.