Topic
Gaussian process
About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.
Papers published on a yearly basis
Papers
TL;DR: An approach is developed for sparse representations of Gaussian process (GP) models (which are Bayesian types of kernel machines) to overcome their limitations for large data sets, based on combining a Bayesian on-line algorithm with a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model.
Abstract: We develop an approach for sparse representations of Gaussian process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian on-line algorithm and a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model. By using an appealing parameterization and projection techniques in a reproducing kernel Hilbert space, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for both a propagation of predictions and Bayesian error measures. The significance and robustness of our approach are demonstrated on a variety of experiments.
802 citations
TL;DR: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented, and the notion of ideal free traffic is introduced.
Abstract: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced.
800 citations
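The fractional Brownian motion underlying this traffic model can be sampled exactly via a Cholesky factorisation of its covariance, 0.5(t^{2H} + s^{2H} - |t - s|^{2H}) (a sketch for illustration only, O(n^3) and not a method from the paper; the Hurst parameter H = 0.8 is a typical self-similar-traffic value, not one the abstract specifies):

```python
import numpy as np

def fbm_paths(n, n_paths, hurst=0.8, seed=2):
    """Exact fBm samples at t = 1..n via Cholesky factorisation of the
    fBm covariance 0.5*(t^2H + s^2H - |t-s|^2H). O(n^3); illustrative."""
    t = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal((n, n_paths))
    return L @ z  # shape (n, n_paths)

paths = fbm_paths(64, 4000, hurst=0.8)
# Self-similarity check: Var[B_H(t)] should scale like t^(2H).
print(paths[-1].var(), 64 ** 1.6)
```

The self-similarity that makes such traffic hard to buffer shows up directly: the variance at time t grows like t^{2H}, faster than the linear growth of ordinary Brownian motion.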
TL;DR: This paper demonstrates the application of correlated Gaussian process based approximations to optimization where multiple levels of analysis are available, using an extension to the geostatistical method of co-kriging.
Abstract: This paper demonstrates the application of correlated Gaussian process based approximations to optimization where multiple levels of analysis are available, using an extension to the geostatistical method of co-kriging. An exchange algorithm is used to choose which points of the search space to sample within each level of analysis. The derivation of the co-kriging equations is presented in an intuitive manner, along with a new variance estimator to account for varying degrees of computational ‘noise’ in the multiple levels of analysis. A multi-fidelity wing optimization is used to demonstrate the methodology.
799 citations
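The multi-fidelity idea can be given a minimal flavour without the full co-kriging derivation: treat the expensive analysis as a scaled cheap analysis plus a discrepancy, and fit a GP only to the discrepancy (a toy sketch; the two functions, the known scaling factor rho, and the six-sample design are all assumptions here, whereas real co-kriging estimates the scaling and the covariances jointly):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential covariance for 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_mean(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of GP regression with a small jitter term."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

cheap = lambda x: np.sin(8.0 * x)                      # low-fidelity analysis
expensive = lambda x: 1.8 * np.sin(8.0 * x) + 0.4 * x  # high-fidelity truth

x_hi = np.linspace(0.0, 1.0, 6)              # only six expensive evaluations
rho = 1.8                                    # assumed known scaling between levels
delta = expensive(x_hi) - rho * cheap(x_hi)  # discrepancy data

x_test = np.linspace(0.0, 1.0, 200)
pred = rho * cheap(x_test) + gp_mean(x_hi, delta, x_test)
print(np.max(np.abs(pred - expensive(x_test))))
```

Because the discrepancy is much smoother than the expensive function itself, a handful of high-fidelity samples corrects the cheap model across the whole domain, which is the economy that multi-fidelity optimization exploits.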
TL;DR: A well-defined crossover is found between a Lévy and a Gaussian regime, and the crossover carries information about the relevant parameters of the underlying stochastic process.
Abstract: We introduce a class of stochastic process, the truncated Lévy flight (TLF), in which the arbitrarily large steps of a Lévy flight are eliminated. We find that the convergence of the sum of n independent TLFs to a Gaussian process can require a remarkably large value of n: typically n ≈ 10^4, in contrast to n ≈ 10 for common distributions. We find a well-defined crossover between a Lévy and a Gaussian regime, and that the crossover carries information about the relevant parameters of the underlying stochastic process.
799 citations
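The slow Lévy-to-Gaussian crossover is easy to see numerically (a sketch under assumptions: a symmetric truncated-Pareto step with tail exponent 1.5 and a hard cutoff stands in for the paper's TLF, and excess kurtosis measures distance from Gaussianity):

```python
import numpy as np

rng = np.random.default_rng(1)

def truncated_heavy_steps(shape, alpha=1.5, cutoff=50.0):
    """Symmetric Pareto-like steps (tail ~ x^-(alpha+1)) with a hard
    cutoff, standing in for truncated Levy flight steps."""
    mag = (1.0 - rng.uniform(size=shape)) ** (-1.0 / alpha)
    mag = np.minimum(mag, cutoff)  # eliminate arbitrarily large steps
    return rng.choice([-1.0, 1.0], size=shape) * mag

def excess_kurtosis(samples):
    """Zero for a Gaussian; positive for heavy-tailed distributions."""
    centered = samples - samples.mean()
    return np.mean(centered ** 4) / np.mean(centered ** 2) ** 2 - 3.0

# Sum n truncated steps: the sum looks Gaussian only for large n.
kurt = {}
for n in (10, 1000):
    sums = truncated_heavy_steps((20000, n)).sum(axis=1)
    kurt[n] = excess_kurtosis(sums)
    print(n, round(kurt[n], 2))
```

The excess kurtosis at n = 10 is still far from the Gaussian value of zero, while at n = 1000 it is close to it, mirroring the paper's finding that convergence can be remarkably slow once the largest steps are truncated.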
01 Jan 1998
TL;DR: This chapter will assess whether the feedforward network has been superseded for supervised regression and classification tasks, and will review work on this idea by Williams and Rasmussen (1996), Neal (1997), Barber and Williams (1997), and Gibbs and MacKay (1997).
Abstract: Feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems. From a Bayesian perspective, a choice of a neural network model can be viewed as defining a prior probability distribution over non-linear functions, and the neural network's learning process can be interpreted in terms of the posterior probability distribution over the unknown function. (Some learning algorithms search for the function with maximum posterior probability; Monte Carlo methods instead draw samples from this posterior distribution.) In the limit of large but otherwise standard networks, Neal (1996) has shown that the prior distribution over non-linear functions implied by the Bayesian neural network falls in a class of probability distributions known as Gaussian processes. The hyperparameters of the neural network model determine the characteristic length scales of the Gaussian process. Neal's observation motivates the idea of discarding parameterized networks and working directly with Gaussian processes. Computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the Gaussian process. In this chapter I will review work on this idea by Williams and Rasmussen (1996), Neal (1997), Barber and Williams (1997) and Gibbs and MacKay (1997), and will assess whether, for supervised regression and classification tasks, the feedforward network has been superseded.
795 citations
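The "simple matrix operations" mentioned in the abstract are the standard GP regression posterior formulas: with training inputs X, targets y, covariance matrix K = k(X, X), noise variance σ², and k_* = k(X, x_*), the prediction at a test input x_* is

```latex
\mu_* = k_*^{\top}\,(K + \sigma^2 I)^{-1}\,\mathbf{y},
\qquad
\sigma_*^2 = k(x_*, x_*) - k_*^{\top}\,(K + \sigma^2 I)^{-1}\,k_*
```

so fitting reduces to linear solves against the n × n covariance matrix, in place of iterative optimization of network weights.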