Praneeth Netrapalli
Researcher at Microsoft
Publications - 117
Citations - 6792
Praneeth Netrapalli is an academic researcher at Microsoft. His research focuses on topics including stochastic gradient descent and gradient descent. He has an h-index of 38 and has co-authored 117 publications receiving 5,387 citations. His previous affiliations include the University of Texas at Austin and Google.
Papers
Proceedings Article
Non-convex Robust PCA
TL;DR: This paper proposes a simple alternating projection onto the sets of low-rank and sparse matrices, with intermediate de-noising steps; the method has low computational complexity and provably recovers a low-rank matrix corrupted by sparse perturbations.
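The alternating-projection idea can be sketched in a few lines of numpy. This is a simplified illustration, not the paper's exact algorithm (which uses a decaying threshold schedule and stagewise rank increments); here a fixed hard threshold `zeta` separates the sparse part from the low-rank part.

```python
import numpy as np

def hard_threshold(X, zeta):
    # Keep entries whose magnitude exceeds zeta; zero out the rest.
    return X * (np.abs(X) > zeta)

def low_rank_projection(X, r):
    # Project onto the set of rank-r matrices via truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def alt_proj_rpca(M, r, zeta, n_iters=50):
    # Decompose M = L + S by alternating projections:
    # threshold the residual for S, then take a rank-r fit for L.
    L = np.zeros_like(M)
    for _ in range(n_iters):
        S = hard_threshold(M - L, zeta)
        L = low_rank_projection(M - S, r)
    return L, S
```

Each step costs only a truncated SVD and an entrywise threshold, which is where the low computational complexity comes from.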
Proceedings Article
Learning the graph of epidemic cascades
TL;DR: This work analytically establishes sufficient conditions on the number of epidemic cascades under which both the global maximum-likelihood (ML) estimator and a natural greedy algorithm succeed, with high probability, in recovering the graph on which an epidemic spreads, given only the times at which each node becomes infected.
Posted Content
Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization
TL;DR: In this paper, the basin of attraction around the global optimum (the true dictionary and coefficients) is shown to have size O(1/s^2), where s is the sparsity level of each sample and the dictionary satisfies the restricted isometry property (RIP).
Posted Content
Phase Retrieval using Alternating Minimization
TL;DR: In this article, the authors show that a resampling variant of the alternating minimization algorithm converges geometrically to the true signal in the non-convex phase retrieval problem.
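The core alternation can be sketched for the real-valued case (signs standing in for complex phases; the paper's resampling refinement is omitted): initialize with a spectral estimate, then alternate between estimating the missing signs and solving a least-squares problem.

```python
import numpy as np

def spectral_init(A, y):
    # Leading eigenvector of (1/m) * sum_i y_i^2 a_i a_i^T,
    # scaled by the norm estimate sqrt(mean(y^2)).
    M = (A * (y ** 2)[:, None]).T @ A / len(y)
    w, V = np.linalg.eigh(M)
    return np.sqrt(np.mean(y ** 2)) * V[:, -1]

def altmin_phase(A, y, n_iters=100):
    # Recover x from y = |A x| by alternating minimization:
    # fix x and estimate the signs, then fix the signs and solve
    # the resulting (convex) least-squares problem for x.
    x = spectral_init(A, y)
    for _ in range(n_iters):
        signs = np.sign(A @ x)
        x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)
    return x
```

Once every sign is estimated correctly, the least-squares step returns the true signal exactly (up to a global sign), which is the fixed point the geometric convergence heads toward.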
Posted Content
The Pitfalls of Simplicity Bias in Neural Networks
TL;DR: It is demonstrated that common approaches for improving generalization and robustness, namely ensembling and adversarial training, do not mitigate simplicity bias (SB) or its shortcomings. The work also introduces a collection of piecewise-linear and image-based datasets that naturally incorporate a precise notion of simplicity and capture the subtleties of neural networks trained on real data.