Open Access · Posted Content

Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks

TLDR
The authors develop an infinite width neural network framework for matrix completion that is simple, fast, and flexible; its simplicity and speed come from the connection between the infinite width limit of neural networks and kernels known as neural tangent kernels (NTK).
Abstract
Matrix completion problems arise in many applications including recommendation systems, computer vision, and genomics. Increasingly larger neural networks have been successful in many of these applications, but at considerable computational costs. Remarkably, taking the width of a neural network to infinity allows for improved computational performance. In this work, we develop an infinite width neural network framework for matrix completion that is simple, fast, and flexible. Simplicity and speed come from the connection between the infinite width limit of neural networks and kernels known as neural tangent kernels (NTK). In particular, we derive the NTK for fully connected and convolutional neural networks for matrix completion. The flexibility stems from a feature prior, which allows encoding relationships between coordinates of the target matrix, akin to semi-supervised learning. The effectiveness of our framework is demonstrated through competitive results for virtual drug screening and image inpainting/reconstruction. We also provide an implementation in Python to make our framework accessible on standard hardware to a broad audience.
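To make the approach concrete, below is a minimal sketch (not the authors' released implementation) of NTK-based matrix completion: each entry (i, j) of the target matrix is described by a feature vector, here simply a concatenation of one-hot row and column indicators standing in for a feature prior, and kernel ridge regression with the NTK of an infinitely wide fully connected ReLU network is fit on the observed entries to predict the missing ones. The function names and the depth/ridge parameters are illustrative choices, not taken from the paper.

```python
# Minimal sketch of NTK-based matrix completion (assumptions: one-hot feature
# prior, fully connected ReLU NTK, kernel ridge regression on observed entries).
import numpy as np

def relu_ntk(X1, X2, depth=3):
    """NTK of a depth-`depth` fully connected ReLU network (inputs are row-normalized)."""
    X1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    X2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    S = X1 @ X2.T                      # Sigma^(0): input covariance
    theta = S.copy()                   # Theta^(0)
    for _ in range(depth):
        S = np.clip(S, -1.0, 1.0)
        k0 = (np.pi - np.arccos(S)) / np.pi                            # E[relu'(u) relu'(v)]
        k1 = (S * (np.pi - np.arccos(S)) + np.sqrt(1 - S**2)) / np.pi  # E[relu(u) relu(v)]
        theta = k1 + theta * k0        # Theta^(h) = Sigma^(h) + Theta^(h-1) * Sigma_dot^(h)
        S = k1
    return theta

def complete_matrix(M, observed_mask, depth=3, ridge=1e-4):
    """Fill in the unobserved entries of M via NTK kernel ridge regression."""
    n_rows, n_cols = M.shape
    # Feature prior: one-hot row index concatenated with one-hot column index.
    feats = np.concatenate(
        [np.repeat(np.eye(n_rows), n_cols, axis=0),
         np.tile(np.eye(n_cols), (n_rows, 1))], axis=1)
    obs = observed_mask.astype(bool).ravel()
    y = M.ravel()[obs]
    K_train = relu_ntk(feats[obs], feats[obs], depth)
    K_test = relu_ntk(feats[~obs], feats[obs], depth)
    alpha = np.linalg.solve(K_train + ridge * np.eye(K_train.shape[0]), y)
    M_hat = M.copy().ravel()
    M_hat[~obs] = K_test @ alpha
    return M_hat.reshape(n_rows, n_cols)
```

Calling complete_matrix(M, mask) on a partially observed matrix M with a boolean mask of observed entries returns a filled-in matrix; replacing the one-hot indicators with domain-specific row and column features corresponds to the feature prior described in the abstract.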


Citations
Journal Article

Extrapolating missing antibody-virus measurements across serological studies.

Tal Einav, +1 more · 01 Jul 2022
TL;DR: The authors applied matrix completion to several large-scale influenza and HIV-1 studies, explored how prediction accuracy evolves as the number of measurements changes, and estimated the number of measurements required in several highly incomplete datasets (suggesting that ∼250,000 measurements could be saved).
Journal Article

Wide and Deep Neural Networks Achieve Optimality for Classification

TL;DR: This work identifies and constructs an explicit set of neural network classifiers that achieve optimality, creates a taxonomy of infinitely wide and deep networks, and shows that these models implement one of three well-known classifiers depending on the activation function used.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their model won 1st place in the ILSVRC 2015 classification task.
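The core idea named above, learning a residual F(x) that is added back to the input through an identity shortcut, can be illustrated with a minimal block (a sketch, not the exact architecture from the paper), assuming PyTorch is available:

```python
# Minimal residual block with an identity shortcut (illustrative only).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        # The convolutions learn a residual F(x); the shortcut adds x back,
        # so the block outputs relu(F(x) + x).
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)
```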
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
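The update rule summarized above can be written in a few lines. The sketch below uses common hyperparameter defaults and is illustrative rather than a reproduction of the paper's pseudocode:

```python
# One Adam step: bias-corrected estimates of the first and second moments
# of the gradient scale the learning rate per parameter.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam update; t is the 1-indexed step count."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1**t)               # bias correction for zero initialization
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```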
Book Chapter

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Journal Article

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually since 2010 and has attracted participation from more than fifty institutions.