scispace - formally typeset

Praneeth Netrapalli

Researcher at Microsoft

Publications -  117
Citations -  6792

Praneeth Netrapalli is an academic researcher from Microsoft. The author has contributed to research on topics including stochastic gradient descent and gradient descent. The author has an h-index of 38 and has co-authored 117 publications receiving 5387 citations. Previous affiliations of Praneeth Netrapalli include the University of Texas at Austin and Google.

Papers
Posted Content

Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization

TL;DR: In this paper, an algorithm was proposed to minimize regret in pseudo-1d bandit convex optimization, an online learning setting with bandit feedback in which the loss function is convex and Lipschitz continuous.
Posted Content

Sample Efficient Linear Meta-Learning by Alternating Minimization.

TL;DR: In this article, a simple alternating minimization method (MLLAM) was proposed to learn the low-dimensional subspace and the per-task regressors simultaneously, and it was shown that for a constant subspace dimension MLLAM obtains nearly-optimal estimation error while requiring only $\Omega(\log d)$ samples per task.
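The alternating structure described in the TL;DR — fix the shared subspace and solve each task's regressor by least squares, then fix the regressors and refit the subspace — can be illustrated with a toy sketch. This is not the paper's MLLAM implementation; all dimensions, noise levels, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n = 10, 2, 30, 50  # ambient dim, subspace dim, tasks, samples/task

# Ground truth: a shared d x k subspace and one k-dim regressor per task.
U_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
tasks = []
for _ in range(T):
    X = rng.standard_normal((n, d))
    w = rng.standard_normal(k)
    y = X @ U_true @ w + 0.01 * rng.standard_normal(n)
    tasks.append((X, y))

# Alternating minimization from a random orthonormal initialization.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
for _ in range(25):
    # Step 1: with U fixed, each task's regressor is an ordinary least squares fit.
    W = [np.linalg.lstsq(X @ U, y, rcond=None)[0] for X, y in tasks]
    # Step 2: with the regressors fixed, refitting U is also least squares.
    # Since X @ U @ w is linear in the entries of U (row-major flattening),
    # the design matrix for task t is kron(X_t, w_t^T).
    A = np.vstack([np.kron(X, w[None, :]) for (X, _), w in zip(tasks, W)])
    b = np.concatenate([y for _, y in tasks])
    U = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, k)
    U, _ = np.linalg.qr(U)  # re-orthonormalize to keep the iterate well-scaled

# Compare subspaces via their projectors (invariant to rotations within the subspace).
subspace_err = np.linalg.norm(U @ U.T - U_true @ U_true.T)
```

Measuring error through the projectors `U @ U.T` avoids penalizing the rotation ambiguity inherent in factored representations.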
Posted Content

Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems

TL;DR: In this paper, the authors considered the problem of learning non-linear dynamical systems without mixing assumptions and showed that SGD with reverse experience replay (SGD-RER) achieves near-optimal sample complexity despite the correlated data.
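The idea behind reverse experience replay — splitting the trajectory into buffers and running SGD over each buffer in reverse time order to decorrelate consecutive updates — can be sketched on a linear toy system. This is a minimal illustration under assumed parameters (a stable linear system, constant step size), not the paper's algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 20000

# A stable ground-truth transition matrix (rescaled if needed so the system doesn't blow up).
A_true = 0.6 * np.eye(d) + 0.1 * rng.standard_normal((d, d))
A_true /= max(1.0, 1.1 * max(abs(np.linalg.eigvals(A_true))))

# Simulate one long correlated trajectory: x_{t+1} = A x_t + noise.
X = np.zeros((T, d))
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)

A_hat = np.zeros((d, d))
lr, B = 0.05, 100  # step size and buffer size (both illustrative choices)
for start in range(0, T - 1, B):
    buffer = range(start, min(start + B, T - 1))
    for t in reversed(buffer):  # replay each buffer in reverse order
        x, x_next = X[t], X[t + 1]
        # Gradient of 0.5 * ||A_hat x - x_next||^2 with respect to A_hat.
        grad = (A_hat @ x - x_next)[:, None] * x[None, :]
        A_hat -= lr * grad

rel_err = np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true)
```

Running the same SGD updates in forward order would couple each gradient to the noise that generated the next sample; the reversed pass is what breaks that dependence.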
Proceedings ArticleDOI

Learning structure of power-law Markov networks

TL;DR: An efficient learning algorithm is developed for accurately reconstructing the graph structure of Ising models on power-law graphs, and it is shown that an order-wise optimal number of samples suffices for exact graph recovery under certain constraints on the Ising model parameters and the scaling of node degrees.

Learnability of Learned Neural Networks

TL;DR: The results herein suggest a strong correlation between small generalization error and high learnability, and show that shallow networks differ qualitatively from popular deep networks in this respect.