Topic

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Journal ArticleDOI
TL;DR: In this paper, the convergence properties of the empirical characteristic process $Y_n(t) = n^{1/2}(c_n(t) - c(t))$ are investigated.
Abstract: The convergence properties of the empirical characteristic process $Y_n(t) = n^{1/2}(c_n(t) - c(t))$ are investigated. The finite-dimensional distributions of $Y_n$ converge to those of a complex Gaussian process $Y$. First the continuity properties of $Y$ are discussed. A class of counterexamples is presented, showing that if the underlying distribution has low logarithmic moments then $Y$ is almost surely discontinuous, and hence $Y_n$ cannot converge weakly. When the underlying distribution has high enough moments then $Y_n$ is strongly approximated by suitable sequences of Gaussian processes with specified rate-functions. The approximation is based on that of Komlos, Major and Tusnady for the empirical process. Convergence speeds for the distribution of functionals of $Y_n$ are derived. A Strassen-type log log law is established for $Y_n$, and supremum-functionals on the appropriate set of limit points are explicitly computed. The technique throughout uses results from the theory of the sample function behaviour of Gaussian processes.
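The construction of $Y_n$ is easy to reproduce numerically. The Python sketch below (an illustration only, not code from the paper) draws a standard normal sample, whose characteristic function $c(t) = e^{-t^2/2}$ is known in closed form, and evaluates one realisation of the empirical characteristic process on a grid of $t$ values.

```python
import numpy as np

def empirical_cf(x, t):
    """Empirical characteristic function c_n(t) = (1/n) * sum_j exp(i t x_j)."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

# Illustrative example: sample from N(0, 1), whose true c.f. is exp(-t^2 / 2)
rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n)
t = np.linspace(-5, 5, 201)

c_n = empirical_cf(x, t)
c = np.exp(-t**2 / 2)

# Empirical characteristic process Y_n(t) = n^{1/2} (c_n(t) - c(t))
Y_n = np.sqrt(n) * (c_n - c)
print(np.abs(Y_n).max())  # sup-norm of this one realisation on the grid
```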

159 citations

Journal ArticleDOI
TL;DR: The proposed spectral modeling method can significantly alleviate the over-smoothing effect and improve the naturalness of the conventional HMM-based speech synthesis system using mel-cepstra.
Abstract: This paper presents a new spectral modeling method for statistical parametric speech synthesis. In the conventional methods, high-level spectral parameters, such as mel-cepstra or line spectral pairs, are adopted as the features for hidden Markov model (HMM)-based parametric speech synthesis. Our proposed method described in this paper improves the conventional method in two ways. First, distributions of low-level, un-transformed spectral envelopes (extracted by the STRAIGHT vocoder) are used as the parameters for synthesis. Second, instead of using single Gaussian distribution, we adopt the graphical models with multiple hidden variables, including restricted Boltzmann machines (RBM) and deep belief networks (DBN), to represent the distribution of the low-level spectral envelopes at each HMM state. At the synthesis time, the spectral envelopes are predicted from the RBM-HMMs or the DBN-HMMs of the input sentence following the maximum output probability parameter generation criterion with the constraints of the dynamic features. A Gaussian approximation is applied to the marginal distribution of the visible stochastic variables in the RBM or DBN at each HMM state in order to achieve a closed-form solution to the parameter generation problem. Our experimental results show that both RBM-HMM and DBN-HMM are able to generate spectral envelope parameter sequences better than the conventional Gaussian-HMM with superior generalization capabilities and that DBN-HMM and RBM-HMM perform similarly due possibly to the use of Gaussian approximation. As a result, our proposed method can significantly alleviate the over-smoothing effect and improve the naturalness of the conventional HMM-based speech synthesis system using mel-cepstra.
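As a rough illustration of the "parameter generation with dynamic-feature constraints" step mentioned above, the sketch below solves a minimal maximum-likelihood parameter generation (MLPG) problem for a single one-dimensional feature stream with a delta window. It is a generic stand-in under stated assumptions: the paper's systems operate on full STRAIGHT spectral envelopes with RBM/DBN state distributions, none of which is reproduced here, and the array shapes and delta window below are assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def mlpg(means, variances, T):
    """
    Maximum-likelihood parameter generation for a 1-D feature track with a
    delta constraint: solve (W' U^-1 W) c = W' U^-1 m, where W stacks the
    static and delta windows and U is the diagonal covariance of [static; delta].
    `means` / `variances` have shape (T, 2): columns are (static, delta).
    """
    # Build the window matrix W: rows 0..T-1 are static, rows T..2T-1 are deltas
    W = lil_matrix((2 * T, T))
    for t in range(T):
        W[t, t] = 1.0                       # static window
        W[T + t, min(t + 1, T - 1)] = 0.5   # delta window 0.5*(c[t+1] - c[t-1])
        W[T + t, max(t - 1, 0)] -= 0.5
    W = W.tocsr()

    m = np.concatenate([means[:, 0], means[:, 1]])
    u_inv = 1.0 / np.concatenate([variances[:, 0], variances[:, 1]])

    A = W.T @ (W.multiply(u_inv[:, None]))  # W' U^-1 W
    b = W.T @ (u_inv * m)                   # W' U^-1 m
    return spsolve(A.tocsc(), b)            # smoothed static trajectory
```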

159 citations

Proceedings ArticleDOI
15 Jul 2015
TL;DR: The approach considers a stabilization task, linearizes the nonlinear, GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller that provides robust stability and performance guarantees during learning.
Abstract: This paper introduces a learning-based robust control algorithm that provides robust stability and performance guarantees during learning. The approach uses Gaussian process (GP) regression based on data gathered during operation to update an initial model of the system and to gradually decrease the uncertainty related to this model. Embedding this data-based update scheme in a robust control framework guarantees stability during the learning process. Traditional robust control approaches have not considered online adaptation of the model and its uncertainty before. As a result, their controllers do not improve performance during operation. Typical machine learning algorithms that have achieved similar high-performance behavior by adapting the model and controller online do not provide the guarantees presented in this paper. In particular, this paper considers a stabilization task, linearizes the nonlinear, GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller. The resulting performance improvements due to the learning-based controller are demonstrated in experiments on a quadrotor vehicle.
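As a minimal sketch of the linearization step only (not the paper's full algorithm, and with made-up dynamics data), one can fit a GP to observed state transitions with scikit-learn and take a finite-difference Jacobian of the posterior mean at the desired operating point; the robust controller synthesis via convex optimization is omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: states X (N, 2) and one observed next-state component y (N,)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(200)

# GP model of one output dimension of the (unknown) dynamics
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, y)

def linearize(gp, x0, eps=1e-4):
    """Finite-difference Jacobian of the GP posterior mean at operating point x0."""
    x0 = np.asarray(x0, dtype=float)
    f0 = gp.predict(x0[None, :])[0]
    jac = np.empty_like(x0)
    for i in range(x0.size):
        xp = x0.copy()
        xp[i] += eps
        jac[i] = (gp.predict(xp[None, :])[0] - f0) / eps
    return f0, jac

f0, A_row = linearize(gp, x0=[0.0, 0.0])
print(f0, A_row)  # local linear model of this output dimension around x0
```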

159 citations

Proceedings ArticleDOI
06 Apr 2003
TL;DR: A novel recursive Bayesian estimation algorithm that combines an importance sampling based measurement update step with a bank of sigma-point Kalman filters for the time-update and proposal distribution generation is presented.
Abstract: For sequential probabilistic inference in nonlinear non-Gaussian systems, approximate solutions must be used. We present a novel recursive Bayesian estimation algorithm that combines an importance sampling based measurement update step with a bank of sigma-point Kalman filters for the time-update and proposal distribution generation. The posterior state density is represented by a Gaussian mixture model that is recovered from the weighted particle set of the measurement update step by means of a weighted EM algorithm. This step replaces the resampling stage needed by most particle filters and mitigates the "sample depletion" problem. We show that this new approach has an improved estimation performance and reduced computational complexity compared to other related algorithms.
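The step that distinguishes this filter from plain resampling is recovering a Gaussian mixture from the weighted particle set with a weighted EM algorithm. The sketch below fits such a mixture to a one-dimensional weighted particle set; it is an illustrative stand-in, not the authors' implementation, and it omits the sigma-point time update and proposal generation entirely.

```python
import numpy as np

def weighted_gmm_em(particles, weights, K, iters=50, seed=0):
    """
    Fit a K-component Gaussian mixture to a weighted 1-D particle set with an
    importance-weighted EM, as a stand-in for the resampling stage.
    """
    rng = np.random.default_rng(seed)
    w = weights / weights.sum()
    mu = rng.choice(particles, size=K, replace=False)
    var = np.full(K, particles.var())
    pi = np.full(K, 1.0 / K)

    for _ in range(iters):
        # E-step: component responsibilities, scaled by the importance weights
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (particles[:, None] - mu) ** 2 / var))
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        resp *= w[:, None]

        # M-step: importance-weighted mixture updates
        nk = resp.sum(axis=0)
        pi = nk / nk.sum()
        mu = (resp * particles[:, None]).sum(axis=0) / nk
        var = (resp * (particles[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var
```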

158 citations

Journal ArticleDOI
TL;DR: This work proposes modeling target motion patterns as a mixture of Gaussian processes with a Dirichlet process prior over mixture weights, which provides an adaptive representation for each individual motion pattern and automatically adjusts the complexity of the motion model based on the available data.
Abstract: The most difficult--and often most essential--aspect of many interception and tracking tasks is constructing motion models of the targets. Experts rarely can provide complete information about a target's expected motion pattern, and fitting parameters for complex motion patterns can require large amounts of training data. Specifying how to parameterize complex motion patterns is in itself a difficult task. In contrast, Bayesian nonparametric models of target motion are very flexible and generalize well with relatively little training data. We propose modeling target motion patterns as a mixture of Gaussian processes (GP) with a Dirichlet process (DP) prior over mixture weights. The GP provides an adaptive representation for each individual motion pattern, while the DP prior allows us to represent an unknown number of motion patterns. Both automatically adjust the complexity of the motion model based on the available data. Our approach outperforms several parametric models on a helicopter-based car-tracking task on data collected from the greater Boston area.
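A full DP mixture of GPs requires proper posterior inference over cluster assignments; as a loose illustration only, the toy sketch below assigns one-dimensional trajectories greedily to GP "motion pattern" clusters in Chinese-restaurant style, opening a new cluster when no existing GP explains a trajectory well. The kernel choice, the `alpha` concentration, and the crude `new_cluster_logp` base-measure stand-in are all assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def assign_trajectories(trajectories, alpha=1.0, new_cluster_logp=-50.0):
    """
    Greedy, CRP-flavoured clustering of 1-D trajectories into GP motion patterns:
    each trajectory joins the existing GP cluster that best predicts it, or opens
    a new cluster. A crude single-pass stand-in for full DP mixture inference.
    Each trajectory is a (t, y) pair of equal-length 1-D arrays.
    """
    clusters = []   # each: {"t": times, "y": values, "n": count, "gp": model}
    labels = []
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

    for t, y in trajectories:
        scores = []
        for c in clusters:
            mu, sd = c["gp"].predict(t[:, None], return_std=True)
            # CRP weight (cluster size) times predictive log-likelihood of y
            scores.append(np.log(c["n"]) + norm.logpdf(y, mu, sd).sum())
        scores.append(np.log(alpha) + new_cluster_logp)   # open a new pattern

        k = int(np.argmax(scores))
        if k == len(clusters):
            clusters.append({"t": t, "y": y, "n": 1, "gp": None})
        else:
            c = clusters[k]
            c["t"] = np.concatenate([c["t"], t])
            c["y"] = np.concatenate([c["y"], y])
            c["n"] += 1
        clusters[k]["gp"] = GaussianProcessRegressor(kernel=kernel).fit(
            clusters[k]["t"][:, None], clusters[k]["y"])
        labels.append(k)
    return labels, clusters
```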

158 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (87% related)
- Optimization problem: 96.4K papers, 2.1M citations (85% related)
- Artificial neural network: 207K papers, 4.5M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (82% related)
- Deep learning: 79.8K papers, 2.1M citations (82% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978