scispace - formally typeset
Roi Livni

Researcher at Tel Aviv University

Publications - 57
Citations - 873

Roi Livni is an academic researcher at Tel Aviv University. The author has contributed to research in topics: Concept class & Computer science. The author has an h-index of 14 and has co-authored 49 publications receiving 678 citations. Previous affiliations of Roi Livni include Princeton University & the Hebrew University of Jerusalem.

Papers
Proceedings Article

On the Computational Efficiency of Training Neural Networks

TL;DR: In this paper, the authors revisit the computational complexity of training neural networks from a modern perspective and provide both positive and negative results, some of which yield new provably efficient and practical algorithms for training certain types of neural networks.
Proceedings ArticleDOI

Private PAC learning implies finite Littlestone dimension

TL;DR: It is shown that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log*(d)) examples; it follows that the class of thresholds over ℕ cannot be learned in a private manner.
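As background for this summary (and not the paper's own construction): the Littlestone dimension of a finite class can be computed by brute-force recursion over mistake trees. The sketch below, with hypothetical helper names `ldim` and `thresholds`, illustrates that thresholds over a domain of size n form a chain of n+1 hypotheses whose Littlestone dimension grows like log of the class size — the quantity the paper's lower bound is stated in terms of.

```python
def ldim(hyps, n):
    """Brute-force Littlestone dimension of a finite class.

    hyps: frozenset of label tuples of length n (one labeling per hypothesis).
    Ldim(H) = max over points x with both labels realizable of
    1 + min(Ldim(H restricted to x->0), Ldim(H restricted to x->1)).
    """
    if len(hyps) <= 1:
        return 0
    best = 0
    for i in range(n):
        h0 = frozenset(h for h in hyps if h[i] == 0)
        h1 = frozenset(h for h in hyps if h[i] == 1)
        if h0 and h1:  # the adversary can force a mistake at point i
            best = max(best, 1 + min(ldim(h0, n), ldim(h1, n)))
    return best

def thresholds(n):
    """Thresholds over {1, ..., n}: h_t(x) = 1 iff x >= t, for t = 1..n+1."""
    return frozenset(tuple(1 if x >= t else 0 for x in range(1, n + 1))
                     for t in range(1, n + 2))

print(ldim(thresholds(3), 3))  # → 2, i.e. floor(log2 of the 4 hypotheses)
```

A depth-d mistake tree needs 2^d distinct consistent hypotheses, so Ldim(H) ≤ ⌊log₂|H|⌋; thresholds achieve this bound via binary search, which is why the dimension grows unboundedly over ℕ.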
Posted Content

On the Computational Efficiency of Training Neural Networks

TL;DR: In this article, the authors revisit the computational complexity of training neural networks from a modern perspective and provide both positive and negative results, some of which yield new provably efficient and practical algorithms for training certain types of neural networks.
Posted Content

An Algorithm for Training Polynomial Networks

TL;DR: The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training deep neural networks that is a universal learner, in the sense that the training error is guaranteed to decrease at every iteration and can eventually reach zero under mild conditions.
Proceedings Article

Vanishing Component Analysis

TL;DR: An efficient procedure is described and analyzed that constructs a set of generators of a vanishing ideal; these generators capture nonlinear structure in data and can, for example, be used within supervised learning.
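To illustrate the notion of a vanishing polynomial that this summary relies on (this is a minimal sketch, not the paper's actual VCA algorithm, which builds generators degree by degree): for points sampled from the unit circle, an SVD over a fixed degree-2 monomial basis recovers a single polynomial that vanishes on the data — up to scale, the circle equation x² + y² − 1 = 0.

```python
import numpy as np

# Sample points exactly on the unit circle (nonlinear structure in the data).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=50)
x, y = np.cos(theta), np.sin(theta)

# Degree-<=2 monomial features: 1, x, y, x^2, xy, y^2.
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# The right singular vector with the smallest singular value gives the
# coefficient vector of the polynomial that vanishes most closely on the data.
_, _, Vt = np.linalg.svd(A)
coeffs = Vt[-1]

residual = np.abs(A @ coeffs).max()
print(residual)  # near zero: the recovered polynomial vanishes on the circle
```

The one-dimensional null space of `A` corresponds to the single degree-2 generator of the circle's vanishing ideal; VCA's contribution is constructing such generator sets efficiently and approximately for general data.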