Topic

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Journal ArticleDOI
TL;DR: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented; because the algorithm does not rely on an atlas, the framework can be extended to diseased brains and neonatal brains.
Abstract: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented. A mixture model composed of a large number of Gaussians is used to represent the brain image. Each tissue is represented by a large number of Gaussian components to capture the complex tissue spatial layout. The intensity of a tissue is considered a global feature and is incorporated into the model through tying of all the related Gaussian parameters. The expectation-maximization (EM) algorithm is utilized to learn the parameter-tied, constrained Gaussian mixture model. An elaborate initialization scheme is suggested to link the set of Gaussians per tissue type, such that each Gaussian in the set has similar intensity characteristics with minimal overlapping spatial supports. Segmentation of the brain image is achieved by affiliating each voxel with the component of the model that maximizes the a posteriori probability. The presented algorithm is used to segment three-dimensional, T1-weighted, simulated and real MR images of the brain into three different tissues, under varying noise conditions. Results are compared with state-of-the-art algorithms in the literature. The algorithm does not use an atlas for initialization or parameter learning. Registration processes are therefore not required, and the applicability of the framework can be extended to diseased brains and neonatal brains.

262 citations
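
As a hedged illustration of the abstract above (not the paper's parameter-tied model), the sketch below fits a plain one-dimensional Gaussian mixture to voxel intensities with EM and then affiliates each voxel with the component that maximizes the posterior probability. The three synthetic "tissue" intensity means, the component count, and the iteration budget are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical voxel intensities for three "tissues" (illustrative means/noise).
x = np.concatenate([rng.normal(m, 8.0, 2000) for m in (40.0, 110.0, 170.0)])

K = 3
w = np.full(K, 1.0 / K)                    # mixing weights
mu = np.quantile(x, [0.2, 0.5, 0.8])       # crude initialization
var = np.full(K, x.var())

for _ in range(50):                        # EM iterations
    # E-step: posterior responsibility of each component for each voxel.
    log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
             - 0.5 * np.log(2 * np.pi * var) + np.log(w))
    log_p -= log_p.max(axis=1, keepdims=True)
    r = np.exp(log_p)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

labels = r.argmax(axis=1)                  # MAP affiliation per voxel
print(mu.round(1), np.bincount(labels))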

Book ChapterDOI
01 Jan 1996
TL;DR: In this chapter, it is shown that priors over network parameters can be defined so that the corresponding priors over functions computed by the network reach reasonable limits as the number of hidden units goes to infinity, and that there is then no need to limit the size of the network in order to avoid overfitting.
Abstract: In this chapter, I show that priors over network parameters can be defined in such a way that the corresponding priors over functions computed by the network reach reasonable limits as the number of hidden units goes to infinity. When using such priors, there is thus no need to limit the size of the network in order to avoid “overfitting”. The infinite network limit also provides insight into the properties of different priors. A Gaussian prior for hidden-to-output weights results in a Gaussian process prior for functions, which may be smooth, Brownian, or fractional Brownian. Quite different effects can be obtained using priors based on non-Gaussian stable distributions. In networks with more than one hidden layer, a combination of Gaussian and non-Gaussian priors appears most interesting.

261 citations
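
A minimal numerical sketch of the chapter's limiting argument, under assumed scalings: a one-hidden-layer tanh network with i.i.d. Gaussian priors on its parameters and hidden-to-output weights scaled by 1/sqrt(H). As the width H grows, the empirical covariance of the sampled function values stabilizes, consistent with convergence to a Gaussian process prior. The widths, inputs, and draw counts are illustrative choices, not from the chapter.

import numpy as np

rng = np.random.default_rng(1)
xs = np.array([-1.0, 0.0, 1.0])            # a few fixed test inputs

def sample_network_outputs(H, n_draws=5000):
    """Draw f(xs) from the prior of a width-H one-hidden-layer network."""
    W = rng.normal(size=(n_draws, H, 1))            # input-to-hidden weights
    b = rng.normal(size=(n_draws, H, 1))            # hidden-unit biases
    v = rng.normal(size=(n_draws, H)) / np.sqrt(H)  # scaled output weights
    hidden = np.tanh(W * xs + b)                    # (n_draws, H, len(xs))
    return np.einsum('nh,nhx->nx', v, hidden)

for H in (1, 10, 1000):
    f = sample_network_outputs(H)
    # The empirical covariance of f(xs) settles as H grows, as the
    # prior over functions approaches a Gaussian process.
    print(H, np.cov(f.T).round(3))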

Journal ArticleDOI
01 Jan 1967
TL;DR: The present work considers the problem of designing a linear filter that combines the outputs of the 25 sensors in a subarray so as to suppress the noise without distorting the signal, or event; the signal-to-noise improvement given by the frequency-domain procedure is found to be within 2 dB of the gain obtained with the time-domain procedure.
Abstract: The experimental Large Aperture Seismic Array (LASA) represents an attempt to improve the capability to monitor underground nuclear weapons tests and small earthquakes by making a large extrapolation in the existing art of building arrays of spaced and interconnected seismic transducers. The LASA is roughly equivalent to 21 separate subarrays, each consisting of 25 sensors, spread over an aperture of 200 km. The present work considers the problem of designing a linear filter that combines the outputs of the 25 sensors in a subarray so as to suppress the noise without distorting the signal, or event. This filter provides a minimum-variance unbiased estimate of the signal, which is the same as the maximum-likelihood estimate of the signal if the noise is a multidimensional Gaussian process. An extensive discussion of the theory of multidimensional maximum-likelihood processing is given. A computer program implementation of the maximum-likelihood filter is presented which employs the cross-correlation matrix of noise measured just prior to the arrival of the event. This time-domain synthesis procedure requires relatively large amounts of computer time to synthesize the filter, and is quite sensitive to the assumption of noise stationarity. The asymptotic theory of maximum-likelihood filtering is also given. An asymptotically optimum frequency-domain synthesis technique is given for two-sided multidimensional filters. This procedure is well suited to machine computation and requires about one-tenth the computation time of the time-domain procedure. A description of a computer program implementation of the frequency-domain synthesis method is given which employs the spectral matrix of the noise estimated just before the arrival of the event. The experimental results obtained by processing several events recorded at the LASA are presented, as well as a comparison of the performance of the frequency-domain method relative to the time-domain synthesis technique. It is found that the signal-to-noise improvement given by the frequency-domain procedure is within 2 dB of the gain obtained with the time-domain procedure, and that the frequency-domain method is relatively insensitive to the assumption of noise stationarity.

261 citations
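
As a hedged sketch of the minimum-variance unbiased combiner the abstract describes, the snippet below forms, at a single frequency bin, the weights w = R^{-1} d / (d^H R^{-1} d), where R is the noise spectral matrix and d is the steering vector (all ones if the event arrives identically at time-aligned sensors). The constraint w^H d = 1 passes the signal undistorted while minimizing output noise power. The synthetic spectral matrix is an assumption; the paper estimates R from noise recorded just before the event.

import numpy as np

rng = np.random.default_rng(2)
n_sensors = 25
# Hypothetical Hermitian positive-definite noise spectral matrix at one bin.
A = rng.normal(size=(n_sensors, n_sensors)) + 1j * rng.normal(size=(n_sensors, n_sensors))
R = A @ A.conj().T + n_sensors * np.eye(n_sensors)

d = np.ones(n_sensors, dtype=complex)      # distortionless steering vector
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d.conj() @ Rinv_d)           # minimum-variance unbiased weights

print(abs(w.conj() @ d))                   # = 1: the signal passes undistorted
print((w.conj() @ R @ w).real)             # residual noise power at the output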

Journal ArticleDOI
TL;DR: In this article, an appropriate analogue of the one-dimensional Stein equation is derived and the necessary properties of its solutions are established; the method is applied to the partial sums of stationary sequences and of dissociated arrays, to a process version of the Wald-Wolfowitz theorem, and to the empirical distribution function.
Abstract: Stein's method of obtaining distributional approximations is developed in the context of functional approximation by the Wiener process and other Gaussian processes. An appropriate analogue of the one-dimensional Stein equation is derived, and the necessary properties of its solutions are established. The method is applied to the partial sums of stationary sequences and of dissociated arrays, to a process version of the Wald-Wolfowitz theorem and to the empirical distribution function.

261 citations
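
For context, the classical one-dimensional Stein equation that the paper generalizes to Gaussian processes can be stated as follows (a standard fact recalled here, not quoted from the paper): for a test function h and Z ~ N(0,1), one solves

\[
  f'(w) - w f(w) = h(w) - \mathbb{E}\, h(Z),
\]

so that for any random variable W,

\[
  \bigl| \mathbb{E}\, h(W) - \mathbb{E}\, h(Z) \bigr|
  = \bigl| \mathbb{E}\bigl[ f'(W) - W f(W) \bigr] \bigr|,
\]

and bounding the right-hand side, using properties of the solution f, bounds the distance from W to normality. The paper derives the analogous equation and solution estimates when the approximating law is the Wiener process or another Gaussian process.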

Proceedings Article
16 Jun 2013
TL;DR: In this article, simple closed-form kernels that can be used with Gaussian processes to discover patterns and enable extrapolation are derived by modelling a spectral density with a Gaussian mixture; the kernels are demonstrated by discovering patterns and performing long-range extrapolation on synthetic examples, as well as atmospheric CO2 trends and airline passenger data.
Abstract: Gaussian processes are rich distributions over functions, which provide a Bayesian nonparametric approach to smoothing and interpolation. We introduce simple closed form kernels that can be used with Gaussian processes to discover patterns and enable extrapolation. These kernels are derived by modelling a spectral density - the Fourier transform of a kernel - with a Gaussian mixture. The proposed kernels support a broad class of stationary covariances, but Gaussian process inference remains simple and analytic. We demonstrate the proposed kernels by discovering patterns and performing long range extrapolation on synthetic examples, as well as atmospheric CO2 trends and airline passenger data. We also show that it is possible to reconstruct several popular standard covariances within our framework.

260 citations
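
A minimal sketch of the closed-form kernel the abstract describes, under the standard one-dimensional form: a spectral density modelled as a mixture of Q Gaussians yields, via Bochner's theorem, the stationary kernel k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q). The particular weights, means, and variances below are illustrative assumptions.

import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q)."""
    tau = np.asarray(tau, dtype=float)[..., None]   # broadcast over components
    return np.sum(np.asarray(weights)
                  * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * np.asarray(variances))
                  * np.cos(2.0 * np.pi * tau * np.asarray(means)), axis=-1)

# Example: two components, a slow trend plus a periodic pattern.
taus = np.linspace(0.0, 3.0, 7)
print(spectral_mixture_kernel(taus, weights=[1.0, 0.5],
                              means=[0.0, 1.0], variances=[0.05, 0.01]))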


Network Information
Related Topics (5)

Estimator: 97.3K papers, 2.6M citations, 87% related
Optimization problem: 96.4K papers, 2.1M citations, 85% related
Artificial neural network: 207K papers, 4.5M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 82% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978