Topic

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Journal ArticleDOI
TL;DR: An approach for sparse representations of Gaussian process (GP) models (which are Bayesian kernel machines) is developed to overcome their limitations for large data sets, based on a combination of a Bayesian on-line algorithm and a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model.
Abstract: We develop an approach for sparse representations of Gaussian process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian on-line algorithm and a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model. By using an appealing parameterization and projection techniques in a reproducing kernel Hilbert space, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for both a propagation of predictions and Bayesian error measures. The significance and robustness of our approach are demonstrated on a variety of experiments.

802 citations
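The sparse construction described above can be sketched compactly. The following is a minimal batch stand-in, not the paper's Bayesian on-line algorithm: a deterministic-training-conditional (DTC) sparse approximation in which a fixed set of m inducing inputs plays the role of the paper's selected subsample. The kernel, the inducing grid, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def sparse_gp_predict(X, y, Z, Xs, noise=0.1):
    """DTC-style sparse GP prediction with inducing inputs Z.

    A batch stand-in for an on-line sparse construction: the m rows
    of Z play the role of the selected subsample of the data."""
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))       # jitter for stability
    Kmn = rbf(Z, X)
    Ksm = rbf(Xs, Z)
    Sigma = np.linalg.inv(Kmm + Kmn @ Kmn.T / noise**2)
    mean = Ksm @ Sigma @ Kmn @ y / noise**2
    var = (rbf(Xs, Xs).diagonal()
           - np.einsum("ij,jk,ik->i", Ksm, np.linalg.inv(Kmm), Ksm)
           + np.einsum("ij,jk,ik->i", Ksm, Sigma, Ksm))
    return mean, var

# Toy usage: 1000 noisy sine observations summarized by 15 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (1000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(1000)
Z = np.linspace(0, 10, 15)[:, None]
Xs = np.linspace(0, 10, 50)[:, None]
mu, var = sparse_gp_predict(X, y, Z, Xs)
```

The payoff is the usual one for sparse GPs: prediction costs O(n m^2) rather than the O(n^3) of an exact GP, at the price of an approximation controlled by how well the m inducing points summarize the data.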

Journal ArticleDOI
TL;DR: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented, and the notion of ideal free traffic is introduced.
Abstract: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced.

800 citations
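For readers who want to experiment with the model's driving process, fractional Brownian motion can be sampled on a small grid by factorizing its covariance directly. This is a generic Cholesky sampler, not the paper's traffic model itself; the grid size, Hurst parameter H, and horizon T are illustrative choices.

```python
import numpy as np

def fbm_path(n=256, H=0.8, T=1.0, seed=0):
    """Sample fractional Brownian motion on a regular grid via Cholesky.

    H > 0.5 gives the positively correlated, long-range-dependent
    increments relevant to aggregated-traffic modelling; H = 0.5
    recovers ordinary Brownian motion."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t, indexing="ij")
    # fBm covariance: Cov(B_H(s), B_H(u)) = (s^2H + u^2H - |s-u|^2H) / 2
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    return t, L @ np.random.default_rng(seed).standard_normal(n)

t, B = fbm_path()
```

Since the path is jointly Gaussian, conditional quantities such as the variance of a future value given the past follow from the same covariance matrix by standard Gaussian conditioning.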

Journal ArticleDOI
TL;DR: This paper demonstrates the application of correlated Gaussian-process-based approximations to optimization where multiple levels of analysis are available, using an extension of the geostatistical method of co-kriging.
Abstract: This paper demonstrates the application of correlated Gaussian-process-based approximations to optimization where multiple levels of analysis are available, using an extension of the geostatistical method of co-kriging. An exchange algorithm is used to choose which points of the search space to sample within each level of analysis. The derivation of the co-kriging equations is presented in an intuitive manner, along with a new variance estimator to account for varying degrees of computational ‘noise’ in the multiple levels of analysis. A multi-fidelity wing optimization is used to demonstrate the methodology.

799 citations
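The multi-level idea can be illustrated with a stripped-down two-level autoregressive surrogate in the spirit of co-kriging: the expensive response is modelled as a scaled cheap-model GP plus an independent discrepancy GP. This is a simplification, not the paper's method; the scaling rho is fixed here rather than estimated, the exchange algorithm and the new variance estimator are omitted, and all names and hyperparameters are assumptions.

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_mean(X, y, Xs, noise=1e-6):
    """Posterior mean of an exact GP regression fit to (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

def multifidelity_mean(X_lo, y_lo, X_hi, y_hi, Xs, rho=2.0):
    """Two-level surrogate: expensive ~ rho * cheap GP + discrepancy GP."""
    delta = y_hi - rho * gp_mean(X_lo, y_lo, X_hi)   # discrepancy data
    return rho * gp_mean(X_lo, y_lo, Xs) + gp_mean(X_hi, delta, Xs)

# Toy usage: many cheap evaluations, few expensive ones (Forrester-style pair).
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5)
X_lo = np.linspace(0, 1, 11)[:, None]
X_hi = np.linspace(0, 1, 4)[:, None]
Xs = np.linspace(0, 1, 101)[:, None]
pred = multifidelity_mean(X_lo, f_lo(X_lo[:, 0]), X_hi, f_hi(X_hi[:, 0]), Xs)
```

The design choice is the one that makes multi-fidelity optimization pay off: the cheap level pins down the overall shape of the function, so only a handful of expensive evaluations are needed to correct it.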

Journal ArticleDOI
TL;DR: A well-defined crossover is found between a Lévy and a Gaussian regime, and the crossover carries information about the relevant parameters of the underlying stochastic process.
Abstract: We introduce a class of stochastic processes, the truncated Lévy flight (TLF), in which the arbitrarily large steps of a Lévy flight are eliminated. We find that the convergence of the sum of n independent TLFs to a Gaussian process can require a remarkably large value of n (typically n ≈ 10^4, in contrast to n ≈ 10 for common distributions). We find a well-defined crossover between a Lévy and a Gaussian regime, and that the crossover carries information about the relevant parameters of the underlying stochastic process.

799 citations
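The slow convergence is easy to reproduce numerically. The sketch below draws symmetric alpha-stable steps with the Chambers-Mallows-Stuck sampler, truncates them by rejection to obtain TLF steps, and tracks the excess kurtosis of the sum of n steps, which is 0 for a Gaussian; alpha, the cutoff, and the sample counts are illustrative choices.

```python
import numpy as np

def tlf_steps(size, alpha=1.5, cutoff=10.0, rng=None):
    """Symmetric alpha-stable steps (Chambers-Mallows-Stuck sampler),
    with steps beyond |cutoff| rejected: the truncation defining a TLF."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = np.empty(0)
    while out.size < size:
        V = rng.uniform(-np.pi / 2, np.pi / 2, size)
        W = rng.exponential(1.0, size)
        X = (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
             * (np.cos(V - alpha * V) / W) ** ((1 - alpha) / alpha))
        out = np.concatenate([out, X[np.abs(X) <= cutoff]])
    return out[:size]

def excess_kurtosis(x):
    z = x - x.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

rng = np.random.default_rng(1)
for n in (10, 100, 10_000):  # excess kurtosis decays toward the Gaussian value 0
    sums = tlf_steps(n * 1000, rng=rng).reshape(1000, n).sum(axis=1)
    print(n, excess_kurtosis(sums))
```

Watching the printed excess kurtosis shrink only slowly with n is exactly the Lévy-to-Gaussian crossover the abstract describes.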

01 Jan 1998
TL;DR: This chapter will assess whether, for supervised regression and classification tasks, the feedforward network has been superseded, and will review work on this idea by Williams and Rasmussen (1996), Neal (1997), Barber and Williams (1997), and Gibbs and MacKay (1997).
Abstract: Feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems. From a Bayesian perspective, the choice of a neural network model can be viewed as defining a prior probability distribution over non-linear functions, and the neural network's learning process can be interpreted in terms of the posterior probability distribution over the unknown function. (Some learning algorithms search for the function with maximum posterior probability; other, Monte Carlo, methods draw samples from this posterior distribution.) In the limit of large but otherwise standard networks, Neal (1996) has shown that the prior distribution over non-linear functions implied by the Bayesian neural network falls in a class of probability distributions known as Gaussian processes. The hyperparameters of the neural network model determine the characteristic length scales of the Gaussian process. Neal's observation motivates the idea of discarding parameterized networks and working directly with Gaussian processes. Computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the Gaussian process. In this chapter I will review work on this idea by Williams and Rasmussen (1996), Neal (1997), Barber and Williams (1997) and Gibbs and MacKay (1997), and will assess whether, for supervised regression and classification tasks, the feedforward network has been superseded.

795 citations
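The "simple matrix operations" the abstract refers to amount to exact GP regression. Here is a minimal sketch following the standard Cholesky-based formulation (Algorithm 2.1 of Rasmussen and Williams); the squared-exponential kernel and its hyperparameter values are assumptions, and the hyperparameter learning that plays the role of the network's hyperparameters is omitted.

```python
import numpy as np

def gp_regression(X, y, Xs, ell=1.0, sf=1.0, noise=0.1):
    """Exact GP regression with a squared-exponential kernel:
    predictions come from covariance-matrix algebra, with no
    parameterized network to optimize."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)

    K = k(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                  # O(n^3) factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mean = Ks @ alpha                          # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = k(Xs, Xs).diagonal() - (v**2).sum(axis=0)  # predictive variance
    return mean, var

# Toy usage: noisy sine regression.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
mu, var = gp_regression(X, y, np.linspace(-3, 3, 9)[:, None])
```

Note that, unlike a trained network, the GP returns a predictive variance alongside the mean, which is the Bayesian error measure the chapter emphasizes.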


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (87% related)
Optimization problem: 96.4K papers, 2.1M citations (85% related)
Artificial neural network: 207K papers, 4.5M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (82% related)
Deep learning: 79.8K papers, 2.1M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978