Open Access Proceedings Article
Deep Neural Networks as Gaussian Processes
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein
TL;DR: The exact equivalence between infinitely wide deep networks and GPs is derived, and it is found that test performance increases as finite-width trained networks are made wider and more similar to a GP, so that GP predictions typically outperform those of finite-width networks.
Abstract:
It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network.
In this work, we derive the exact equivalence between infinitely wide deep networks and GPs. We further develop a computationally efficient pipeline to compute the covariance function for these GPs. We then use the resulting GPs to perform Bayesian inference for wide deep neural networks on MNIST and CIFAR-10. We observe that trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and thus that GP predictions typically outperform those of finite-width networks. Finally, we connect the performance of these GPs to the recent theory of signal propagation in random neural networks.
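To make the covariance recursion concrete, here is a minimal NumPy sketch (not the authors' released pipeline) of the NNGP kernel for a fully-connected ReLU network, using the closed-form arc-cosine expectation for the ReLU nonlinearity, followed by the standard GP regression posterior mean. The variance settings sigma_w2 and sigma_b2 and the depth are illustrative assumptions.

```python
import numpy as np

def nngp_kernel(X1, X2, depth, sigma_w2=1.6, sigma_b2=0.1):
    # Layer-wise NNGP recursion for a fully-connected ReLU network.
    # Layer 0 uses the input Gram matrix; each hidden layer applies the
    # closed-form ReLU expectation (the arc-cosine kernel).
    d_in = X1.shape[1]
    K12 = sigma_b2 + sigma_w2 * (X1 @ X2.T) / d_in            # cross-covariances
    K11 = sigma_b2 + sigma_w2 * np.sum(X1**2, axis=1) / d_in  # variances of X1
    K22 = sigma_b2 + sigma_w2 * np.sum(X2**2, axis=1) / d_in  # variances of X2
    for _ in range(depth):
        norm = np.sqrt(np.outer(K11, K22))
        theta = np.arccos(np.clip(K12 / norm, -1.0, 1.0))
        E = norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        K12 = sigma_b2 + sigma_w2 * E
        K11 = sigma_b2 + sigma_w2 * K11 / 2.0  # theta = 0 on the diagonal
        K22 = sigma_b2 + sigma_w2 * K22 / 2.0
    return K12

def nngp_posterior_mean(X_train, y_train, X_test, depth=3, noise=1e-2):
    # Exact GP regression posterior mean with the NNGP covariance.
    K_tt = nngp_kernel(X_train, X_train, depth)
    K_st = nngp_kernel(X_test, X_train, depth)
    return K_st @ np.linalg.solve(K_tt + noise * np.eye(len(X_train)), y_train)
```

Because the kernel is deterministic given the depth and variance hyperparameters, Bayesian prediction for the infinite-width network reduces to this single linear solve.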
Citations
Posted Content
The Principles of Deep Learning Theory
TL;DR: In this paper, the authors developed the notion of representation group flow (RG flow) to characterize the propagation of signals through the network and showed that the depth-to-width aspect ratio of the network can control the deviations from the infinite-width Gaussian description.
Proceedings ArticleDOI
Variational Implicit Processes
TL;DR: In this paper, the authors introduce implicit processes (IPs), stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables, with examples including data simulators, Bayesian neural networks, and non-linear transformations of stochastic processes.
Proceedings Article
The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Ben Adlam, Jeffrey Pennington
TL;DR: In this article, the authors provide a high-dimensional asymptotic analysis of generalization under kernel regression with the Neural Tangent Kernel, which characterizes the behavior of wide neural networks optimized with gradient descent.
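For orientation, the empirical NTK evaluated at a pair of inputs is just the inner product of parameter gradients. Below is a minimal NumPy sketch for a one-hidden-layer ReLU network with scalar output; the gradients are written out by hand, and the architecture and initialization scaling are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def grads(w1, w2, x):
    # Gradient of f(x) = w2 . relu(w1 @ x) with respect to all parameters.
    h = w1 @ x                        # pre-activations, shape (width,)
    g_w2 = np.maximum(h, 0.0)         # df/dw2 = relu(h)
    g_w1 = np.outer(w2 * (h > 0), x)  # df/dw1 via the ReLU mask
    return np.concatenate([g_w1.ravel(), g_w2])

def empirical_ntk(w1, w2, x1, x2):
    # Theta(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>
    return grads(w1, w2, x1) @ grads(w1, w2, x2)

rng = np.random.default_rng(0)
d_in, width = 2, 4096
w1 = rng.normal(size=(width, d_in)) / np.sqrt(d_in)  # illustrative scaling
w2 = rng.normal(size=width) / np.sqrt(width)
x1, x2 = rng.normal(size=d_in), rng.normal(size=d_in)
print(empirical_ntk(w1, w2, x1, x2))
```

As the width grows, this random-initialization kernel concentrates around its deterministic infinite-width limit, which is the object the paper analyzes.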
Posted Content
Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes
TL;DR: This work derives the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, and shows that it exhibits strong dependencies on the lower-layer weights, and extends this approach to deep Gaussian processes, unifying inference in the two model classes.
Posted Content
The Recurrent Neural Tangent Kernel
TL;DR: This paper introduces and studies the Recurrent Neural Tangent Kernel (RNTK), which sheds new light on the behavior of overparametrized RNNs, including how different time steps are weighted by the RNTK to form the output under different initialization parameters and nonlinearity choices, and how inputs of different lengths are treated.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
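The core Adam update is compact enough to sketch in a few lines of NumPy; the hyperparameter defaults below follow the commonly cited values from the paper, and the function signature is our own framing of the algorithm, not library code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its elementwise square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moment estimates.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update scaled per-coordinate by the second-moment estimate.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Here m and v are initialized to zeros and the step counter t starts at 1, so the bias correction matters most in the earliest iterations.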
Book
Bayesian learning for neural networks
TL;DR: Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.
Journal ArticleDOI
A Unifying View of Sparse Approximate Gaussian Process Regression
TL;DR: A new unifying view covering all existing proper probabilistic sparse approximations for Gaussian process regression is presented; it relies on expressing the effective prior that each method uses, and it highlights the relationships between existing methods.
Journal Article
In Defense of One-Vs-All Classification
Ryan Rifkin, Aldebaro Klautau
TL;DR: It is argued that a simple "one-vs-all" scheme is as accurate as any other approach, assuming that the underlying binary classifiers are well-tuned regularized classifiers such as support vector machines.
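The scheme itself is simple to state in code. A minimal sketch, assuming a hypothetical train_binary callback that fits a well-tuned regularized binary classifier (e.g., an SVM) on +/-1 targets and returns a real-valued scoring function:

```python
import numpy as np

def train_one_vs_all(train_binary, X, y, classes):
    # One binary classifier per class: positive for that class, negative
    # for all others. train_binary is supplied by the user (assumption).
    return {c: train_binary(X, np.where(y == c, 1.0, -1.0)) for c in classes}

def predict_one_vs_all(models, X):
    # Predict the class whose binary classifier scores highest.
    classes = list(models)
    scores = np.stack([models[c](X) for c in classes], axis=1)
    return np.array(classes)[np.argmax(scores, axis=1)]
```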
Proceedings Article
Gaussian Processes for Big Data
TL;DR: In this article, the authors introduce stochastic variational inference for Gaussian process (GP) models, enabling their application to data sets containing millions of data points.
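As a sketch of the idea, the NumPy code below evaluates a minibatch evidence lower bound of the kind used in stochastic variational inference for GP regression with M inducing points. The RBF kernel, jitter value, and Gaussian likelihood are assumptions made for illustration; in practice the bound would be maximized with a stochastic optimizer over Z, m_u, L_S, and the hyperparameters.

```python
import numpy as np

def rbf(A, B, length=1.0, amp=1.0):
    # Squared-exponential kernel matrix between row-vector inputs A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return amp * np.exp(-0.5 * d2 / length**2)

def svgp_elbo(X_b, y_b, Z, m_u, L_S, noise, n_total):
    # Minibatch bound for GP regression with inducing inputs Z and
    # variational posterior q(u) = N(m_u, S), where S = L_S @ L_S.T.
    B, M = len(X_b), len(Z)
    Kmm = rbf(Z, Z) + 1e-6 * np.eye(M)     # jitter for numerical stability
    Knm = rbf(X_b, Z)
    Lmm = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(Kmm, Knm.T).T      # K_nm @ K_mm^{-1}
    S = L_S @ L_S.T
    mean_f = A @ m_u                                   # E_q[f_i]
    var_f = (rbf(X_b, X_b).diagonal()
             - np.sum(A * Knm, axis=1)
             + np.sum((A @ S) * A, axis=1))            # Var_q[f_i]
    # Expected Gaussian log-likelihood per point, rescaled from the batch
    # of size B to the full data set of size n_total.
    ell = (-0.5 * np.log(2.0 * np.pi * noise)
           - (y_b - mean_f) ** 2 / (2.0 * noise)
           - var_f / (2.0 * noise))
    # KL(q(u) || p(u)) between the two M-dimensional Gaussians.
    alpha = np.linalg.solve(Lmm, m_u)
    kl = 0.5 * (np.trace(np.linalg.solve(Kmm, S)) + alpha @ alpha - M
                + 2.0 * np.sum(np.log(np.diag(Lmm)))
                - 2.0 * np.sum(np.log(np.diag(L_S))))
    return (n_total / B) * np.sum(ell) - kl
```

Because the bound decomposes as a sum over data points plus a single KL term, unbiased gradient estimates from small minibatches suffice, which is what makes GP training on millions of points feasible.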