scispace - formally typeset

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published on this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Journal ArticleDOI
TL;DR: This work investigates by simulation the properties of four different estimation procedures under a linear model for correlated data with Gaussian error: maximum likelihood based on the normal mixed linear model; generalized estimating equations; a four-stage method; and a bootstrap method that resamples clusters rather than individuals.
Abstract: We investigate by simulation the properties of four different estimation procedures under a linear model for correlated data with Gaussian error: maximum likelihood based on the normal mixed linear model; generalized estimating equations; a four-stage method; and a bootstrap method that resamples clusters rather than individuals. We pay special attention to group-randomized trials, where the number of independent clusters is small, cluster sizes are large, and the within-cluster correlation is weak. We show that for balanced and near-balanced data, when the number of independent clusters is small (≤ 10), the bootstrap is superior if analysts do not want to impose strong distributional and covariance-structure assumptions. Otherwise, the ML and four-stage methods are slightly better. All four methods perform well when the number of independent clusters reaches 50.

139 citations
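The cluster-resampling bootstrap described in the abstract above can be sketched in a few lines. This is a minimal, hypothetical setup (8 clusters of 50 observations, a single slope parameter; all names and sizes are our own choices, not the paper's simulation design): whole clusters are resampled with replacement so that the within-cluster correlation structure is preserved in each bootstrap replicate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated correlated data: 8 clusters, 50 observations each,
# y = 1 + 2*x + cluster effect + noise (weak within-cluster correlation).
n_clusters, m = 8, 50
x = rng.normal(size=(n_clusters, m))
u = rng.normal(scale=0.3, size=(n_clusters, 1))   # random cluster effect
y = 1.0 + 2.0 * x + u + rng.normal(size=(n_clusters, m))

def ols_slope(xc, yc):
    """Pooled OLS slope over the clusters passed in."""
    return np.polyfit(xc.ravel(), yc.ravel(), 1)[0]

# Cluster bootstrap: resample whole clusters with replacement,
# keeping each cluster's internal correlation intact.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_clusters, size=n_clusters)
    boot.append(ols_slope(x[idx], y[idx]))

est = ols_slope(x, y)
se = np.std(boot, ddof=1)
print(f"slope = {est:.3f}, cluster-bootstrap SE = {se:.3f}")
```

Resampling clusters rather than individuals is the key design choice: resampling individual observations would treat them as independent and understate the standard error when responses within a cluster are correlated.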

Proceedings ArticleDOI
18 Mar 2005
TL;DR: Experimental results show that the performance of the voice conversion can be improved by using the global variance information, and it is demonstrated that the proposed algorithm is more effective than spectral enhancement by postfiltering.
Abstract: The paper describes a novel spectral conversion method for voice transformation. We perform spectral conversion between speakers using a Gaussian mixture model (GMM) on the joint probability density of source and target features. A smooth spectral sequence can be estimated by applying maximum likelihood (ML) estimation to the GMM-based mapping using dynamic features. However, there is still degradation of the converted speech quality due to an over-smoothing of the converted spectra, which is inevitable in conventional ML-based parameter estimation. In order to alleviate the over-smoothing, we propose an ML-based conversion taking account of the global variance of the converted parameter in each utterance. Experimental results show that the performance of the voice conversion can be improved by using the global variance information. Moreover, it is demonstrated that the proposed algorithm is more effective than spectral enhancement by postfiltering.

138 citations
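The joint-density GMM mapping that the paper builds on can be sketched for scalar features. Note this is only the baseline conditional-expectation mapping, not the paper's ML estimation with dynamic features or its global-variance correction; the data, component count, and function names here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Toy parallel data: a nonlinear mapping y = tanh(x) + noise stands in
# for aligned source/target spectral features of two speakers.
x = rng.uniform(-3, 3, size=2000)
y = np.tanh(x) + 0.05 * rng.normal(size=x.size)

# Fit a GMM on the joint (source, target) feature vectors.
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(np.column_stack([x, y]))

def convert(x_new):
    """Minimum mean-square-error mapping E[y | x] under the joint GMM."""
    x_new = np.atleast_1d(np.asarray(x_new, dtype=float))
    w, mu, cov = gmm.weights_, gmm.means_, gmm.covariances_
    # Responsibility of each component given the source feature only.
    lik = np.array([w[k] * norm.pdf(x_new, mu[k, 0], np.sqrt(cov[k, 0, 0]))
                    for k in range(len(w))])
    post = lik / lik.sum(axis=0)
    # Per-component conditional mean of the target given the source.
    cond = np.array([mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (x_new - mu[k, 0])
                     for k in range(len(w))])
    return (post * cond).sum(axis=0)

print(convert([-2.0, 0.0, 2.0]))  # should roughly follow tanh of the inputs
```

Averaging many such per-component conditional means is exactly what produces the over-smoothing the paper targets: the converted trajectory has lower variance than natural speech, which the global-variance term is introduced to restore.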

Proceedings ArticleDOI
10 Jul 2006
TL;DR: It is shown here that the trajectories of the targets can be determined directly from the evolution of the Gaussian mixture and that single Gaussians within this mixture accurately track the correct targets.
Abstract: The Gaussian mixture probability hypothesis density filter (GM-PHD filter) was proposed recently for jointly estimating the time-varying number of targets and their states from a noisy sequence of sets of measurements that may contain missed detections and false alarms. The initial implementation of the GM-PHD filter provided estimates for the set of target states at each point in time but did not ensure continuity of the individual target tracks. It is shown here that the trajectories of the targets can be determined directly from the evolution of the Gaussian mixture, and that single Gaussians within this mixture accurately track the correct targets. Furthermore, the technique is demonstrated to be successful in estimating the correct number of targets and their trajectories at high clutter densities, and shows better performance than the MHT filter.

138 citations

Journal ArticleDOI
TL;DR: In this article, an approximate series expansion of the covariance function, in terms of an eigenfunction expansion of the Laplace operator on a compact subset of $$\mathbb{R}^d$$, is proposed for reduced-rank Gaussian process regression.
Abstract: This paper proposes a novel scheme for reduced-rank Gaussian process regression. The method is based on an approximate series expansion of the covariance function in terms of an eigenfunction expansion of the Laplace operator in a compact subset of $$\mathbb {R}^d$$. On this approximate eigenbasis, the eigenvalues of the covariance function can be expressed as simple functions of the spectral density of the Gaussian process, which allows the GP inference to be solved under a computational cost scaling as $$\mathcal {O}(nm^2)$$ (initial) and $$\mathcal {O}(m^3)$$ (hyperparameter learning) with m basis functions and n data points. Furthermore, the basis functions are independent of the parameters of the covariance function, which allows for very fast hyperparameter learning. The approach also allows for rigorous error analysis with Hilbert space theory, and we show that the approximation becomes exact when the size of the compact subset and the number of eigenfunctions go to infinity. We also show that the convergence rate of the truncation error is independent of the input dimensionality provided that the differentiability order of the covariance function increases appropriately, and for the squared exponential covariance function it is always bounded by $${\sim }1/m$$ regardless of the input dimensionality. The expansion generalizes to Hilbert spaces with an inner product which is defined as an integral over a specified input density. The method is compared to previously proposed methods theoretically and through empirical tests with simulated and real data.

138 citations
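The construction described in this abstract can be sketched in one dimension: Laplace eigenfunctions on an interval [-L, L] serve as the basis, and the squared-exponential covariance enters only through its spectral density evaluated at the eigenfrequencies. This is a minimal sketch under assumed hyperparameters, not the paper's full method (no hyperparameter learning, no error analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D regression data on [-1, 1].
n = 200
x = rng.uniform(-1, 1, size=n)
y = np.sin(3 * x) + 0.1 * rng.normal(size=n)

# Laplace eigenfunctions on [-L, L] with Dirichlet boundary conditions;
# L must comfortably contain the data.
L, m = 2.0, 32
j = np.arange(1, m + 1)
sqrt_lam = np.pi * j / (2 * L)          # square roots of the eigenvalues

def phi(t):
    """Evaluate the m basis functions at the points t (shape: len(t) x m)."""
    return np.sin(sqrt_lam * (t[:, None] + L)) / np.sqrt(L)

# Spectral density of the squared-exponential covariance function,
# evaluated at the eigenfrequencies: this is where the kernel enters.
sigma_f, ell, sigma_n = 1.0, 0.3, 0.1
S = sigma_f**2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * sqrt_lam) ** 2)

# Reduced-rank posterior mean: an m x m solve instead of an n x n one,
# giving the O(n m^2) + O(m^3) scaling quoted in the abstract.
Phi = phi(x)
A = Phi.T @ Phi + sigma_n**2 * np.diag(1.0 / S)
w = np.linalg.solve(A, Phi.T @ y)

xs = np.linspace(-1, 1, 5)
print(phi(xs) @ w)                      # approximate GP posterior mean at xs
```

Because the basis functions do not depend on the kernel hyperparameters, only the diagonal matrix of spectral-density values changes when hyperparameters change, which is what makes the hyperparameter learning step cheap.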

Proceedings ArticleDOI
03 Aug 2013
TL;DR: This work proposes LSE, an algorithm that guides both sampling and classification based on GP-derived confidence bounds, and extends LSE and its theory to two more natural settings: where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches.
Abstract: Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. We formalize this task as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural settings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely autonomous monitoring of algal populations in a lake environment and geolocating network latency.

138 citations
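The core idea of the LSE algorithm above (classify points using GP-derived confidence bounds, and measure where classification is most ambiguous) can be illustrated with a toy 1-D version. All settings here (kernel, grid, threshold, the simple ambiguity rule) are our own illustrative assumptions; the paper's selection rule and its sample-complexity guarantees are richer than this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def k(a, b, ell=0.2):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

grid = np.linspace(0, 1, 100)        # candidate points
f = np.sin(6 * grid)                 # unknown function (hidden from LSE)
h, beta, noise = 0.5, 3.0, 0.05      # threshold, bound width, noise std

X, Y = [], []
above, below = set(), set()
unknown = set(range(grid.size))

for t in range(30):
    # GP posterior on the grid given the noisy measurements so far.
    if X:
        Xa, Ya = np.array(X), np.array(Y)
        Kxx = k(Xa, Xa) + noise**2 * np.eye(len(X))
        Kx = k(grid, Xa)
        mu = Kx @ np.linalg.solve(Kxx, Ya)
        var = 1.0 - np.sum(Kx * np.linalg.solve(Kxx, Kx.T).T, axis=1)
        var = np.maximum(var, 1e-12)  # guard against round-off
    else:
        mu, var = np.zeros(grid.size), np.ones(grid.size)
    lcb, ucb = mu - beta * np.sqrt(var), mu + beta * np.sqrt(var)

    # Classify points whose confidence interval clears the threshold.
    for i in list(unknown):
        if lcb[i] > h:
            above.add(i); unknown.discard(i)
        elif ucb[i] < h:
            below.add(i); unknown.discard(i)
    if not unknown:
        break
    # Measure the most ambiguous still-unclassified point.
    i = max(unknown, key=lambda i: min(ucb[i] - h, h - lcb[i]))
    X.append(grid[i]); Y.append(f[i] + noise * rng.normal())

print(f"above: {len(above)}, below: {len(below)}, unresolved: {len(unknown)}")
```

Sampling concentrates near the level-set boundary, where the confidence interval straddles the threshold, rather than where the function is merely uncertain; that is the distinction between level-set estimation and plain GP optimization or exploration.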


Network Information

Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations, 87% related
- Optimization problem: 96.4K papers, 2.1M citations, 85% related
- Artificial neural network: 207K papers, 4.5M citations, 84% related
- Support vector machine: 73.6K papers, 1.7M citations, 82% related
- Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2023      502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018      978