
Gaussian process

About: Gaussian process is a research topic. Over the lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Posted Content
TL;DR: This work presents a fully Bayesian approach to inference and learning in nonlinear nonparametric state-space models and places a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena.
Abstract: State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.
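As an illustration of the generative model described above (not the paper's inference code), the following sketch draws one transition function from a GP prior on a grid and simulates a state trajectory from it; the squared-exponential kernel, its hyperparameters, and the noise variances are all hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Squared-exponential kernel (hypothetical hyperparameters).
def k(a, b, var=1.0, ell=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# Draw one transition function f ~ GP(0, k) on a grid, then evaluate it
# at arbitrary states by linear interpolation (clamped at grid edges).
grid = np.linspace(-3.0, 3.0, 200)
K = k(grid, grid) + 1e-8 * np.eye(grid.size)   # jitter for stability
f_grid = rng.multivariate_normal(np.zeros(grid.size), K)
f = lambda s: np.interp(s, grid, f_grid)

# Simulate x_{t+1} = f(x_t) + process noise, y_t = x_t + obs noise.
T, q, r = 100, 0.1, 0.1
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = f(x[t]) + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)
```

The paper's contribution is the reverse direction: given only `y`, infer both the states and the unknown `f` with tailored particle MCMC.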

127 citations

Journal Article (DOI)
TL;DR: The proposed scheme for generating a random sequence with a specified marginal distribution and autocovariance consists of a white Gaussian noise source input to a linear digital filter followed by a zero-memory nonlinearity (ZMNL).
Abstract: We consider the problem of generating a random sequence with a specified marginal distribution and autocovariance. The proposed scheme for generating such a sequence consists of a white Gaussian noise source input to a linear digital filter followed by a zero-memory nonlinearity (ZMNL). The ZMNL is chosen so that the desired distribution is exactly realized and the digital filter is designed so that the desired autocovariance is closely approximated. Both analytic results and examples are included. The proposed scheme should prove useful in simulations involving non-Gaussian processes.
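The generator described above can be sketched directly. Here the linear filter is a first-order autoregression with a hypothetical coefficient, and the ZMNL is the Gaussian CDF composed with the target inverse CDF (an exponential marginal is used purely as an example); the nonlinearity realizes the desired marginal exactly, while the filter shapes the autocovariance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 1) White Gaussian noise through a linear filter: an AR(1) recursion
#    (hypothetical coefficient rho) giving a correlated Gaussian
#    sequence with unit marginal variance.
n, rho = 10_000, 0.8
w = rng.standard_normal(n)
g = np.empty(n)
g[0] = w[0]
for t in range(1, n):
    g[t] = rho * g[t - 1] + np.sqrt(1 - rho**2) * w[t]

# 2) Zero-memory nonlinearity: the Gaussian CDF maps g to Uniform(0,1),
#    then the target inverse CDF realizes the desired marginal exactly
#    (here exponential with mean 1).
u = stats.norm.cdf(g)
x = stats.expon.ppf(u)
```

Designing the filter so the output autocovariance of `x` matches a target is the approximation step the abstract refers to; the marginal, by contrast, is exact by construction.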

127 citations

Journal Article (DOI)
TL;DR: In this paper, a revealing alternate proof is provided for the Iglehart-Borovkov (1967) heavy-traffic limit theorem for GI/G/s queues.
Abstract: A revealing alternate proof is provided for the Iglehart (1965), (1973)–Borovkov (1967) heavy-traffic limit theorem for GI/G/s queues. This kind of heavy traffic is obtained by considering a sequence of GI/G/s systems with the numbers of servers and the arrival rates going to ∞ while the service-time distributions are held fixed. The theorem establishes convergence to a Gaussian process, which in general is not Markov, for an appropriate normalization of the sequence of stochastic processes representing the number of customers in service at arbitrary times. The key idea in the new proof is to consider service-time distributions that are randomly stopped sums of exponential phases, and then work with the discrete-time vector-valued Markov chain representing the number of customers in each phase of service at arrival epochs. It is then easy to show that this sequence of Markov chains converges to a multivariate O–U (Ornstein–Uhlenbeck) diffusion process by applying simple criteria in Stroock and Varadhan (1979). The Iglehart–Borovkov limit for these special service-time distributions is the sum of the components of this multivariate O–U process. Heavy-traffic convergence is also established for the steady-state distributions of GI/M/s queues under the same conditions by exploiting stochastic-order properties.
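To make the limiting object concrete, here is a minimal Euler–Maruyama simulation of a multivariate O–U diffusion dX = -AX dt + S dW with hypothetical drift and diffusion matrices; this sketches only the limit process, not the queueing construction used in the proof:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical drift matrix A (stable: eigenvalues have positive real
# part) and diffusion matrix S for a 2-d O-U process.
A = np.array([[1.0, 0.0],
              [0.5, 1.5]])
S = np.eye(2)

# Euler-Maruyama discretization of dX = -A X dt + S dW.
dt, T = 0.01, 1000
X = np.zeros((T, 2))
for t in range(T - 1):
    dW = np.sqrt(dt) * rng.standard_normal(2)
    X[t + 1] = X[t] - A @ X[t] * dt + S @ dW

# As in the abstract, the heavy-traffic limit of the (normalized)
# number-in-system process is the sum of the components.
Q = X.sum(axis=1)
```

Note that `Q`, a sum of components of a multivariate O–U process, is Gaussian but in general not itself Markov, which matches the theorem's statement.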

127 citations

Journal Article (DOI)
TL;DR: In this paper, small ball probabilities for locally non-deterministic Gaussian processes with stationary increments, a class of processes that includes the fractional Brownian motions, were used to prove Chung type laws of the iterated logarithm.
Abstract: We estimate small ball probabilities for locally nondeterministic Gaussian processes with stationary increments, a class of processes that includes the fractional Brownian motions. These estimates are used to prove Chung type laws of the iterated logarithm.
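As a numerical illustration (not anything used in the paper's proofs), fractional Brownian motion can be simulated by a Cholesky factorization of its covariance, and a small ball probability estimated by Monte Carlo; the Hurst index, grid size, and ball radius below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fractional Brownian motion on (0, 1] via Cholesky of its covariance
# E[B_H(s) B_H(t)] = (s^{2H} + t^{2H} - |s-t|^{2H}) / 2.
# H is the Hurst index; H = 0.5 recovers standard Brownian motion.
H, n = 0.7, 100
t = np.arange(1, n + 1) / n
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))  # jitter for stability
paths = L @ rng.standard_normal((n, 5000))      # columns are paths

# Monte Carlo estimate of the small ball probability
# P( sup_{t <= 1} |B_H(t)| < eps ) for a hypothetical eps.
eps = 0.5
p_hat = (np.abs(paths).max(axis=0) < eps).mean()
```

The paper's estimates describe how such probabilities decay as eps tends to 0, which is exactly the regime where naive Monte Carlo like this becomes infeasible.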

127 citations

Journal Article (DOI)
TL;DR: This paper proposes a new variational approximation for infinite mixtures of Gaussian processes that uses variational inference and a truncated stick-breaking representation of the Dirichlet process to approximate the posterior of hidden variables involved in the model.
Abstract: This paper proposes a new variational approximation for infinite mixtures of Gaussian processes. As an extension of the single Gaussian process regression model, mixtures of Gaussian processes can characterize varying covariances or multimodal data and mitigate the cubic computational complexity of the single Gaussian process model. The infinite mixture of Gaussian processes further incorporates a Dirichlet process prior, allowing the number of mixture components to be determined automatically from the data. We use variational inference and a truncated stick-breaking representation of the Dirichlet process to approximate the posterior over the hidden variables of the model. To determine the hyperparameters of the model, the variational EM algorithm and a greedy algorithm are employed. In addition to presenting the variational infinite-mixture model, we apply it to the problem of traffic flow prediction. Experiments with comparisons to other approaches show the effectiveness of the proposed model.
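The truncated stick-breaking representation mentioned above can be sketched in a few lines; the truncation level and the Dirichlet process concentration parameter below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Truncated stick-breaking construction of Dirichlet-process mixture
# weights: break off a Beta(1, alpha) fraction of the remaining stick
# at each step, up to truncation level K.
K, alpha = 20, 1.0
v = rng.beta(1.0, alpha, size=K)
v[-1] = 1.0  # close the truncated stick so the weights sum to one

# w_k = v_k * prod_{j < k} (1 - v_j)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
```

In the mixture model each weight `w[k]` governs a component with its own Gaussian process, and the variational posterior is defined over these truncated weights together with the component assignments.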

127 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (87% related)
- Optimization problem: 96.4K papers, 2.1M citations (85% related)
- Artificial neural network: 207K papers, 4.5M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (82% related)
- Deep learning: 79.8K papers, 2.1M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978