Open Access · Posted Content

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks

TL;DR
The authors show that neural tangent kernels (NTKs) perform strongly on low-data tasks and compare their performance with the finite-width nets they are derived from, finding that the NTK's efficacy may trace to lower variance of output.
Abstract
Recent research shows that the following two models are equivalent: (a) infinitely wide neural networks (NNs) trained under ℓ2 loss by gradient descent with infinitesimally small learning rate, and (b) kernel regression with respect to so-called Neural Tangent Kernels (NTKs) (Jacot et al., 2018). An efficient algorithm to compute the NTK, as well as its convolutional counterparts, appears in Arora et al. (2019a), which allowed studying the performance of infinitely wide nets on datasets like CIFAR-10. However, the super-quadratic running time of kernel methods makes them best suited for small-data tasks. We report results suggesting that neural tangent kernels perform strongly on low-data tasks:

1. On a standard testbed of classification/regression tasks from the UCI database, the NTK SVM beats the previous gold standard, Random Forests (RF), and also the corresponding finite nets.
2. On CIFAR-10 with 10–640 training samples, the Convolutional NTK consistently beats ResNet-34 by 1%–3%.
3. On the VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance.
4. Comparing the performance of the NTK with the finite-width net it was derived from, NTK behavior starts at lower net widths than suggested by theoretical analysis (Arora et al., 2019a). The NTK's efficacy may trace to lower variance of output.
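
As a rough illustration of result 1 (a minimal sketch, not the paper's experimental code), the snippet below computes the NTK of a fully connected ReLU network with the Neural Tangents library and feeds the resulting Gram matrices to a standard SVM through scikit-learn's precomputed-kernel interface; the architecture, hyperparameters, and toy data are illustrative assumptions.

# Sketch only: assumes neural_tangents, jax, and scikit-learn are installed;
# the 3-layer ReLU architecture and synthetic data stand in for a UCI-style task.
import numpy as np
from neural_tangents import stax
from sklearn.svm import SVC

# Infinite-width fully connected ReLU net; kernel_fn evaluates its NTK.
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

# Toy stand-in for a small-data classification task.
rng = np.random.RandomState(0)
x_train = rng.randn(90, 20)
y_train = (x_train[:, 0] > 0).astype(int)
x_test = rng.randn(30, 20)

# NTK Gram matrices: train-vs-train for fitting, test-vs-train for prediction.
k_train = np.asarray(kernel_fn(x_train, x_train, 'ntk'))
k_test = np.asarray(kernel_fn(x_test, x_train, 'ntk'))

# "NTK SVM": an ordinary SVM run on the precomputed NTK kernel.
clf = SVC(kernel='precomputed', C=1.0).fit(k_train, y_train)
predictions = clf.predict(k_test)

For regression tasks, the same Gram matrices could instead be used for kernel (ridge) regression, which corresponds to the NTK-regression view of training the infinitely wide net under ℓ2 loss.
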


Citations
Posted Content

Few-Shot Learning via Learning the Representation, Provably

TL;DR: The results demonstrate that representation learning can fully utilize all $n_1 T$ samples from the source tasks, and establish the advantage of representation learning in both high-dimensional linear regression and neural network learning.
Proceedings Article

Neural Tangents: Fast and Easy Infinite Neural Networks in Python

TL;DR: A library for working with infinite-width neural networks, Neural Tangents provides a high-level API for specifying complex and hierarchical neural network architectures, along with tools to study the gradient descent training dynamics of wide but finite networks.
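
A rough usage sketch of that API (based on the library's public stax and predict modules; the architecture and random data below are placeholder assumptions, not taken from the paper):

import jax.numpy as jnp
from jax import random
import neural_tangents as nt
from neural_tangents import stax

# Define the architecture once; kernel_fn gives its infinite-width kernels.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(1024), stax.Erf(),
    stax.Dense(1),
)

key = random.PRNGKey(0)
x_train = random.normal(key, (50, 10))
y_train = jnp.sin(3.0 * x_train[:, :1])
x_test = random.normal(random.PRNGKey(1), (20, 10))

# Closed-form predictions of the infinitely wide net trained to convergence by
# gradient descent on MSE, i.e. kernel regression with the NTK.
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_test_mean = predict_fn(x_test=x_test, get='ntk')
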
Posted Content

How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks.

TL;DR: The success of GNNs in extrapolating algorithmic tasks to new data relies on encoding task-specific non-linearities in the architecture or features; the authors suggest this as a hypothesis and provide theoretical and empirical evidence for it.
Posted Content

What Can Neural Networks Reason About?

TL;DR: In this article, the authors develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its computation structure aligns with the algorithmic structure of the relevant reasoning process.
Posted Content

Differentially Private Learning Needs Better Features (or Much More Data)

TL;DR: This work introduces simple yet strong baselines for differentially private learning that can inform the evaluation of future progress in this area, and shows that private learning requires either much more private data or access to features learned on public data from a similar domain.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously and won 1st place on the ILSVRC 2015 classification task.
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book

Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

TL;DR: Learning with Kernels provides an introduction to SVMs and related kernel methods, covering the concepts needed to enable a reader with basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms.
Posted Content

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

TL;DR: The Exponential Linear Unit (ELU) alleviates the vanishing gradient problem via the identity for positive values and has improved learning characteristics compared to units with other activation functions.
Journal Article

Do we need hundreds of classifiers to solve real world classification problems?

TL;DR: The random forest is clearly the best family of classifiers (3 out of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top-10), neural networks and boosting ensembles (5 and 3 members in the top-20, respectively).