
Does Gaussian Process Regression (GPR) have high computational complexity?


Best insight from top research papers

Gaussian Process Regression (GPR) has a high computational complexity, which has been a long-standing challenge. Standard GPR requires inverting an n x n kernel matrix, so training time grows cubically, O(n^3), in the number of samples n, and memory grows quadratically, O(n^2). This cost hinders the wide application of GPR to large datasets. Several approaches have been proposed to address the issue. One approximates the covariance kernel by its leading eigenvalues and eigenfunctions, significantly reducing training and prediction complexity. Another sequentially partitions the input space and fits a localized Gaussian process to each disjoint region, achieving better time and space complexity than existing methods. Additionally, an online compression scheme has been developed that preserves convergence to the population posterior while reducing the computational burden of streaming data.
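
For concreteness, the following minimal NumPy sketch of exact GPR prediction (an added illustration, not code from the cited papers) shows where the cubic cost arises: the Cholesky factorization of the n x n kernel matrix.

    import numpy as np

    def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
        # Squared-exponential (RBF) kernel between the rows of A and B.
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return variance * np.exp(-0.5 * sq_dists / length_scale**2)

    def gpr_predict(X_train, y_train, X_test, noise_var=1e-2):
        # Exact GP regression: the Cholesky factorization of the n x n
        # kernel matrix costs O(n^3) time; storing it costs O(n^2) memory.
        n = len(X_train)
        K = rbf_kernel(X_train, X_train) + noise_var * np.eye(n)
        L = np.linalg.cholesky(K)                    # the O(n^3) bottleneck
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        K_star = rbf_kernel(X_test, X_train)
        mean = K_star @ alpha                        # posterior mean
        v = np.linalg.solve(L, K_star.T)
        var = rbf_kernel(X_test, X_test).diagonal() - (v**2).sum(axis=0)
        return mean, var                             # predictive mean and variance

The approximation methods in the answer above all attack this same bottleneck, either by shrinking the effective matrix size or by avoiding the full factorization altogether.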

Answers from top 5 papers

Papers (5): Insights
The paper does not explicitly mention the computational complexity of Gaussian Process Regression (GPR).
The paper does not explicitly mention the computational complexity of Gaussian Process Regression (GPR).
Nick Terry, Youngjun Choe (PLOS ONE, 24 Aug 2021, open-access journal article): the paper states that Gaussian process regression suffers from poor scaling in both memory and computational complexity.
The paper states that Gaussian processes have a computational burden that scales cubically with the training sample size.
The paper discusses the computational complexity of Gaussian process regression (GPR) and proposes a method to reduce the complexity.

Related Questions

What is a Gaussian Process in Machine Learning? (5 answers)

A Gaussian Process (GP) in machine learning is a powerful model for learning unknown functions while capturing uncertainty, which aids optimal decision-making systems. GPs rely heavily on kernel functions for regression and classification tasks. They are often used as interpolating data-driven models in various disciplines and can be embedded in optimization problems for global optimization. Deep Learning Gaussian Processes (DL-GP) have been proposed for analyzing computer models with heteroskedastic, high-dimensional output, combining a deterministic transformation of the inputs with traditional GP predictions. Gaussian Processes can also be integrated within discrete choice models to improve representations of unobserved heterogeneity, probabilistically assigning individuals to behaviorally homogeneous clusters. This integration enhances model flexibility and predictive power while maintaining interpretability at different levels of analysis.
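
As an added minimal sketch of the "distribution over functions" view (not drawn from the papers above), the following draws sample functions from a zero-mean GP prior with an RBF kernel:

    import numpy as np

    x = np.linspace(0.0, 5.0, 100)[:, None]       # evaluation grid
    K = np.exp(-0.5 * (x - x.T) ** 2 / 1.0**2)    # RBF kernel, length scale 1.0

    # Under a zero-mean GP prior, the function values on the grid are
    # jointly Gaussian with covariance K; each draw is one sample function.
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(np.zeros(len(x)),
                                      K + 1e-8 * np.eye(len(x)), size=3)
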
Does Gaussian Process Regression (GPR) have issues with data dependency? (5 answers)

Gaussian Process Regression (GPR) can have issues with data dependency. The direct dependency of the prediction error bounds on the training data is generally unclear. However, a Bayesian prediction error bound for GPR has been derived that decays as a kernel-based measure of data density grows. This allows time-varying tracking accuracy guarantees when learned GP models are used as feedback compensation for unknown nonlinearities, achieving vanishing tracking error with increasing data density. Additionally, GPR with the radial basis function (RBF) kernel often fails to find the optimal hyperparameters, leading to over-fitting or under-fitting. To address this problem, the hat kernel has been proposed as an alternative that reduces the risk of under-fitting.
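
To make the hyperparameter point concrete, here is a short scikit-learn sketch (an added illustration, not from the cited papers): the RBF length scale is either optimized by maximizing the log marginal likelihood or pinned to extreme values that under- or over-fit.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 5, size=(30, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(30)

    # Length scale chosen by maximizing the log marginal likelihood.
    opt = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1)).fit(X, y)

    # Length scale pinned to extremes: very large over-smooths (under-fits),
    # very small chases the noise (over-fits).
    under = GaussianProcessRegressor(RBF(10.0, "fixed") + WhiteKernel(0.1)).fit(X, y)
    over = GaussianProcessRegressor(RBF(0.01, "fixed") + WhiteKernel(0.1)).fit(X, y)

    print(opt.kernel_)   # fitted hyperparameters after optimization
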
Does Gaussian Process Regression have the ability to highlight the significance of various input features? (5 answers)

Gaussian Process Regression (GPR) can highlight the significance of various input features. GPR is a nonparametric regression model that learns nonlinear maps from input features to a real-valued output using a kernel function; the kernel constructs the covariance matrix among all pairs of data points, allowing GPR to capture the relationships between the inputs and the output. In GPR with local explanation, the feature contributions to the prediction for each sample are revealed through an easy-to-interpret locally linear model, which provides both the prediction and an explanation for every sample. The weight vector of the locally linear model is generated from multivariate Gaussian process priors, further enhancing interpretability. GPR can therefore provide accurate predictions while also offering insight into the significance of different input features.
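
The local-explanation model above is specific to that paper, but a standard, related way to expose feature significance in GPR is automatic relevance determination (ARD): learn one length scale per input feature and read a large fitted length scale as low relevance. A minimal sketch (an added illustration, not the paper's method):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 3))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)  # only feature 0 matters

    # Anisotropic RBF: one length scale per input feature (ARD).
    kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(0.1)
    gp = GaussianProcessRegressor(kernel).fit(X, y)

    # After fitting, irrelevant features get large length scales (the kernel
    # becomes insensitive to them); relevant ones stay small.
    print(gp.kernel_.k1.length_scale)
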
Does Gaussian Process Regression (GPR) have the ability to highlight the significance of various input features? (5 answers)

Gaussian Process Regression (GPR) can highlight the significance of various input features. GPR is a flexible, nonparametric approach to regression that naturally quantifies uncertainty. In many applications the goal is to select the covariates that are related to the response, and GPR can help here: one can optimize a penalized GP log-likelihood based on the Vecchia GP approximation, which implies a sparse Cholesky factor of the precision matrix. This allows relevant covariates to be selected and irrelevant ones deselected. Theoretical analysis and numerical studies have demonstrated the improved scalability and accuracy of this approach in selecting significant input features.
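
For readers unfamiliar with the Vecchia approximation mentioned above: it replaces the joint GP likelihood with a product of conditional densities, each conditioning on only a small set of previously ordered points, which is what yields a sparse Cholesky factor. A minimal sketch (an added illustration; the neighbor set is simplified here to the m preceding points, whereas practical implementations use nearest neighbors under a suitable ordering and add the covariate-selection penalty):

    import numpy as np

    def rbf_kernel(A, B, length_scale=1.0):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * sq_dists / length_scale**2)

    def vecchia_loglik(X, y, m=5, noise=1e-2):
        # log p(y) ~= sum_i log p(y_i | y_neighbors(i)): each term is a cheap
        # (m+1)-dimensional Gaussian conditional instead of an n-dim joint.
        ll = 0.0
        for i in range(len(y)):
            nb = np.arange(max(0, i - m), i)     # simplified neighbor set
            mu, var = 0.0, 1.0 + noise           # prior: k(x_i, x_i) = 1
            if len(nb) > 0:
                K_nn = rbf_kernel(X[nb], X[nb]) + noise * np.eye(len(nb))
                k_in = rbf_kernel(X[i:i + 1], X[nb])[0]
                w = np.linalg.solve(K_nn, k_in)
                mu = w @ y[nb]                   # conditional mean
                var = 1.0 + noise - w @ k_in     # conditional variance
            ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
        return ll
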
How does Gaussian process regression work? (5 answers)

Gaussian process regression models complex physical phenomena, often involving a large number of variables, by placing a Gaussian process prior over the unknown input-output map. One sequential strategy builds metamodels on a restricted set of influential variables and progressively expands that set to improve knowledge of the output. One such approach, called seqGPR, models the output as the sum of two independent Gaussian processes, one modeling the output from the previous step and the other acting as a correction term, with the parameters of the processes estimated by an EM algorithm. This metamodel has been shown to outperform standard kriging metamodels in three test cases. Gaussian process regression can also be extended to non-Euclidean input spaces, such as spaces of probability measures, by defining a positive definite kernel using the Wasserstein distance; however, numerical issues can arise from a non-stationary relationship between the Wasserstein-based squared exponential kernel and its Euclidean-based counterpart. Finally, to handle outliers in observational training data, a robust process model in the Gaussian process framework has been proposed that uses the Huber probability distribution as the likelihood of the observed data and scales residuals with weights based on projection statistics to bound the influence of outliers.
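
The seqGPR construction above relies on a general fact: the sum of two independent GPs is again a GP whose covariance is the sum of the two covariances, so a "previous step" process and a correction term can be fitted jointly through a summed kernel. A minimal sketch of that property (an added illustration, not the seqGPR algorithm itself, which additionally uses an EM step to estimate the parameters):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(50, 1))
    # Target with a smooth trend plus a finer-scale correction.
    y = np.sin(0.3 * X[:, 0]) + 0.3 * np.sin(3.0 * X[:, 0])

    # f = f_trend + f_correction with independent GPs  <=>  k = k_trend + k_correction.
    kernel = RBF(5.0) + RBF(0.5) + WhiteKernel(1e-2)
    gp = GaussianProcessRegressor(kernel).fit(X, y)
    mean, std = gp.predict(X, return_std=True)
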
Who invented Gaussian processes? (4 answers)

Gaussian processes were not invented by a single researcher: their mathematical foundations go back to early twentieth-century work on stochastic processes (notably by Kolmogorov and Wiener), and their use for spatial regression was developed in geostatistics under the name kriging. In machine learning they were popularized by Radford Neal's observation that a Bayesian neural network with infinitely many hidden units converges to a Gaussian process, which motivated discarding parameterized networks and working directly with Gaussian processes, a direction pursued in work by authors such as Trefor W. Evans and Prasanth B. Nair.