Roman Novak
Researcher at Google
Publications - 24
Citations - 3332
Roman Novak is an academic researcher at Google. He has contributed to research on topics including artificial neural networks and Gaussian processes. He has an h-index of 16 and has co-authored 20 publications receiving 2,106 citations.
Papers
Proceedings Article
Neural Tangents: Fast and Easy Infinite Neural Networks in Python
TL;DR: Neural Tangents is a library for working with infinite-width neural networks; it provides a high-level API for specifying complex, hierarchical neural network architectures, along with tools to study the gradient-descent training dynamics of wide but finite networks.
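The kind of closed-form, infinite-width kernel such a library computes can be illustrated with a small numpy sketch. The snippet below evaluates the analytic Neural Tangent Kernel of an infinitely wide one-hidden-layer ReLU network (NTK parameterization, no biases) using the arc-cosine expectations; the function name and input normalization are illustrative, not the library's API.

```python
import numpy as np

def ntk_one_hidden_relu(x, y):
    """Analytic NTK of an infinitely wide one-hidden-layer ReLU network.

    Theta(x, y) = K1(x, y) + K0(x, y) * K1_dot(x, y), where K0 is the input
    kernel, K1 the post-ReLU NNGP kernel, and K1_dot = E[relu'(u) relu'(v)].
    """
    d = len(x)
    k0_xx, k0_xy, k0_yy = x @ x / d, x @ y / d, y @ y / d
    c = np.clip(k0_xy / np.sqrt(k0_xx * k0_yy), -1.0, 1.0)
    theta = np.arccos(c)
    # E[relu(u) relu(v)] for (u, v) jointly Gaussian with covariance K0
    k1 = np.sqrt(k0_xx * k0_yy) * (np.sin(theta)
                                   + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    # E[relu'(u) relu'(v)] = P(u > 0, v > 0)
    k1_dot = (np.pi - theta) / (2 * np.pi)
    return k1 + k0_xy * k1_dot
```

On the diagonal (y = x) the angle is zero, so the kernel reduces to k0/2 + k0/2 = k0, a quick sanity check on the formula.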
Posted Content
Sensitivity and Generalization in Neural Networks: an Empirical Study
TL;DR: It is found that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the network's input-output Jacobian, and that this norm correlates well with generalization.
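The sensitivity measure above can be sketched in a few lines of numpy: estimate the Frobenius norm of a network's input-output Jacobian by finite differences. The two-layer ReLU network and its weights below are illustrative stand-ins, not the networks studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 4))   # hypothetical hidden-layer weights
W2 = rng.normal(size=(1, 16))   # hypothetical output weights

def f(x):
    # a small two-layer ReLU network used as the example function
    return W2 @ np.maximum(W1 @ x, 0.0)

def jacobian_frobenius_norm(x, eps=1e-5):
    # central finite differences: column i approximates df/dx_i at x
    J = np.stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                  for e in np.eye(len(x))], axis=-1)
    return np.linalg.norm(J)
```

Because a ReLU network is piecewise linear, the finite-difference estimate matches the exact Jacobian away from activation boundaries, which makes it easy to validate against the closed-form W2 @ diag(mask) @ W1.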
Posted Content
Deep Neural Networks as Gaussian Processes
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein +5 more
TL;DR: In this article, the authors derive the exact equivalence between infinitely wide deep networks and Gaussian Processes (GP) and develop a computationally efficient pipeline to compute the covariance function for these GPs.
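The layer-wise covariance recursion behind this equivalence can be sketched for ReLU activations, where the Gaussian expectation has a closed form (the order-1 arc-cosine kernel). This is a minimal numpy illustration of the recursion, assuming He-style weight variance σ_w² = 2 and no biases; it is not the paper's full pipeline.

```python
import numpy as np

def relu_arccos(kxx, kxy, kyy):
    # E[relu(u) relu(v)] for (u, v) ~ N(0, [[kxx, kxy], [kxy, kyy]])
    c = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
    t = np.arccos(c)
    return np.sqrt(kxx * kyy) * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

def nngp_relu(x, y, depth=3, sigma_w2=2.0):
    # layer-0 kernel from the inputs, then K^{l+1} = sigma_w2 * E[phi(u) phi(v)]
    d = len(x)
    kxx, kxy, kyy = x @ x / d, x @ y / d, y @ y / d
    for _ in range(depth):
        kxy = sigma_w2 * relu_arccos(kxx, kxy, kyy)
        kxx = sigma_w2 * kxx / 2.0   # E[relu(u)^2] = kxx / 2 on the diagonal
        kyy = sigma_w2 * kyy / 2.0
    return kxy
```

With σ_w² = 2 the diagonal of the kernel is preserved through depth, which is one reason that choice of variance keeps deep ReLU kernels well-behaved.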
Proceedings Article
Finite Versus Infinite Neural Networks: an Empirical Study
Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein +6 more
TL;DR: Improved best practices for using NNGP and NT kernels for prediction are developed, including a novel ensembling technique that achieves state-of-the-art results on CIFAR-10 classification for kernels corresponding to each architecture class the authors consider.
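Prediction with a fixed NNGP or NTK kernel reduces to Gaussian-process / kernel-ridge regression. The sketch below shows that step with a generic kernel function; the RBF gram matrix is a stand-in for illustration (in practice it would be an NNGP or NTK gram matrix), and the paper's ensembling technique is a separate refinement not shown here.

```python
import numpy as np

def rbf_gram(A, B):
    # illustrative stand-in kernel; any PSD kernel (e.g. NNGP/NTK) fits here
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2)

def kernel_predict(kernel_fn, X_train, y_train, X_test, reg=1e-6):
    # posterior-mean / kernel-ridge prediction with a fixed kernel
    K = kernel_fn(X_train, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return kernel_fn(X_test, X_train) @ alpha
```

With a tiny regularizer the predictor interpolates the training targets, which is the behavior expected of exact kernel regression on noiseless data.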
Posted Content
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein +8 more
TL;DR: In this article, the authors extend the known equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs) to CNNs both with and without pooling layers, and achieve state-of-the-art results on CIFAR-10 for GPs without trainable kernels.