
Praneeth Netrapalli

Researcher at Microsoft

Publications -  117
Citations -  6792

Praneeth Netrapalli is an academic researcher at Microsoft. His research focuses on topics including stochastic gradient descent and gradient descent. He has an h-index of 38 and has co-authored 117 publications receiving 5387 citations. His previous affiliations include the University of Texas at Austin and Google.

Papers
Proceedings Article

A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)

TL;DR: This work provides a simplified proof of the statistical minimax optimality of (iterate-averaged) stochastic gradient descent for the special case of least squares, by analyzing SGD as a stochastic process and sharply characterizing the stationary covariance matrix of this process.
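
As a rough illustration of the iterate-averaging scheme the paper analyzes, the sketch below runs constant-step-size SGD on a least-squares objective and averages the iterates; the step size, sampling scheme, and function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def averaged_sgd_least_squares(X, y, step_size=0.01, seed=0):
    """Constant-step-size SGD with iterate averaging for least squares.

    A minimal sketch, assuming uniform single-point sampling and a
    fixed step size; not the paper's exact tuning.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(n):
        i = rng.integers(n)                # sample one data point uniformly
        grad = (X[i] @ w - y[i]) * X[i]    # stochastic gradient of 0.5*(x·w - y)^2
        w = w - step_size * grad           # constant-step-size SGD update
        w_avg += (w - w_avg) / (t + 1)     # running average of the iterates
    return w_avg                           # averaged iterate is the estimator
```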
Posted Content

No quantum speedup over gradient descent for non-smooth convex optimization

TL;DR: It is established that in general even quantum algorithms need $\Omega((GR/\epsilon)^2)$ queries to solve the problem, and there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
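
For context, the classical method whose query complexity this lower bound matches is the projected subgradient method, which reaches $\epsilon$ accuracy on a $G$-Lipschitz convex function over a ball of radius $R$ in $(GR/\epsilon)^2$ first-order queries. A minimal sketch, assuming a black-box subgradient oracle `subgrad` (a hypothetical interface, not from the paper):

```python
import numpy as np

def subgradient_method(subgrad, x0, G, R, epsilon):
    """Projected subgradient method matching the (GR/eps)^2 query count.

    A minimal sketch under standard assumptions: f is G-Lipschitz and
    convex over the origin-centered ball of radius R.
    """
    T = int(np.ceil((G * R / epsilon) ** 2))  # iterations for eps accuracy
    eta = R / (G * np.sqrt(T))                # standard fixed step size
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(T):
        g = subgrad(x)                        # one first-order oracle query
        x = x - eta * g
        norm = np.linalg.norm(x)
        if norm > R:                          # project back onto the ball
            x *= R / norm
        avg += (x - avg) / (t + 1)            # average iterates for the guarantee
    return avg
```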
Proceedings Article

Computing Matrix Squareroot via Non Convex Local Search

TL;DR: In this article, a non-convex formulation of the PSD matrix square root problem is studied, and a natural algorithm performing gradient descent on it is proposed; the method converges at a linear rate, with the iteration count scaling as the 3/2 power of the condition number and each iteration dominated by an $O(n^{\omega})$ matrix multiplication.
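
A minimal sketch of the non-convex local-search idea, assuming the objective $\frac{1}{4}\|UU^\top - M\|_F^2$ and an illustrative initialization and step size (not the paper's exact choices):

```python
import numpy as np

def matrix_squareroot_gd(M, num_iters=500, step_size=None):
    """Gradient descent on the non-convex objective 0.25 * ||U U^T - M||_F^2.

    A minimal sketch; the PSD initialization and the step-size rule
    below are illustrative assumptions.
    """
    n = M.shape[0]
    spectral = np.linalg.norm(M, 2)            # largest singular value of M
    if step_size is None:
        step_size = 0.5 / spectral             # scale the step by ||M||_2 (assumption)
    U = np.sqrt(spectral) * np.eye(n)          # simple PSD initialization (assumption)
    for _ in range(num_iters):
        residual = U @ U.T - M
        grad = residual @ U                    # gradient of 0.25*||UU^T - M||_F^2
        U = U - step_size * grad
    return U                                   # U @ U.T should approximate M
```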
Proceedings Article

Thresholding Based Efficient Outlier Robust PCA

TL;DR: In this article, a thresholding-based iterative algorithm with per-iteration complexity at most linear in the data size is proposed to recover principal directions despite the presence of outlier data points.
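
The sketch below illustrates the general thresholding idea, alternating PCA on a presumed inlier set with removal of the points of largest residual; the fixed outlier budget and the SVD-per-round loop are simplifying assumptions and do not reproduce the paper's linear per-iteration complexity.

```python
import numpy as np

def outlier_robust_pca(X, k, num_outliers, num_iters=20):
    """Thresholding-based iterative scheme for outlier-robust PCA.

    A minimal sketch in the spirit of the paper: estimate the top-k
    subspace from presumed inliers, then threshold out the points
    with the largest projection residuals.
    """
    n, d = X.shape
    inliers = np.arange(n)
    for _ in range(num_iters):
        # Top-k principal directions of the current inlier set.
        center = X[inliers].mean(axis=0)
        _, _, Vt = np.linalg.svd(X[inliers] - center, full_matrices=False)
        V = Vt[:k].T                                       # d x k basis
        # Residual of each point after projecting onto the subspace.
        centered = X - center
        residuals = np.linalg.norm(centered - centered @ V @ V.T, axis=1)
        # Threshold: keep the n - num_outliers points with smallest residuals.
        inliers = np.argsort(residuals)[: n - num_outliers]
    return V  # estimated principal directions
```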
Proceedings Article

Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games

TL;DR: This work studies Follow the Perturbed Leader (FTPL), a popular algorithm for online learning, and its application to solving smooth minimax games, showing that FTPL enjoys the optimal worst-case regret guarantee for both convex and nonconvex losses.
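
A minimal sketch of vanilla FTPL over a finite action set, assuming exponential perturbations and full-information feedback; the paper's optimistic variant and its minimax-game application are not reproduced here.

```python
import numpy as np

def ftpl(loss_vectors, eta=1.0, seed=0):
    """Follow the Perturbed Leader over a finite set of m actions.

    A minimal sketch: each round, play the action minimizing the
    perturbed cumulative loss. The perturbation scale eta and the
    finite action set are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    T, m = loss_vectors.shape
    cumulative = np.zeros(m)
    total_loss = 0.0
    for losses in loss_vectors:
        perturbation = rng.exponential(scale=eta, size=m)  # fresh noise each round
        action = np.argmin(cumulative - perturbation)      # perturbed leader
        total_loss += losses[action]
        cumulative += losses                               # observe full loss vector
    return total_loss
```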