Lam M. Nguyen

Researcher at IBM

Publications: 65
Citations: 1526

Lam M. Nguyen is an academic researcher at IBM whose work focuses on topics including convex functions and stochastic gradient descent. He has an h-index of 15 and has co-authored 58 publications receiving 1063 citations. His previous affiliations include McNeese State University and Lehigh University.

Papers
Proceedings Article

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient

TL;DR: In this paper, the authors propose the StochAstic Recursive grAdient algoritHm (SARAH) for finite-sum minimization, which admits a simple recursive framework for updating stochastic gradient estimates.
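
For intuition, here is a minimal NumPy sketch of the SARAH recursion: each outer step computes a full gradient, and each inner step updates the estimate via v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}. The helper grad_i(w, i), the step size, and the loop lengths are illustrative assumptions, not details taken from the paper.

import numpy as np

def sarah(grad_i, w0, n, eta=0.01, inner_steps=100, outer_epochs=10, rng=None):
    """Hedged SARAH sketch for finite-sum minimization.

    grad_i(w, i) is assumed to return the gradient of the i-th component
    function at w; all hyperparameters here are illustrative defaults.
    """
    rng = np.random.default_rng(rng)
    w = w0.copy()
    for _ in range(outer_epochs):
        # Outer step: full gradient serves as the initial estimate v_0.
        v = np.mean([grad_i(w, i) for i in range(n)], axis=0)
        w_prev = w.copy()
        w = w - eta * v
        for _ in range(inner_steps):
            i = int(rng.integers(n))
            # Recursive update: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev = w.copy()
            w = w - eta * v
    return w

Unlike SVRG, the estimate v_t is updated recursively from v_{t-1} rather than recomputed against a fixed snapshot, which is the "simple recursive framework" the summary refers to.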
Posted Content

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient

TL;DR: The StochAstic Recursive grAdient algoritHm (SARAH) and its practical variant SARAH+ are proposed as a novel approach to finite-sum minimization problems, and a linear convergence rate is proven under a strong convexity assumption.
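
The practical variant SARAH+ replaces the fixed inner-loop length with an adaptive one. The sketch below assumes the commonly cited stopping rule, ending the inner loop once the estimate's squared norm falls below a fraction gamma of its initial value; the function name, gamma default, and safety cap are all illustrative assumptions.

import numpy as np

def sarah_plus_inner(grad_i, w, v0, n, eta=0.01, gamma=0.125, max_steps=1000,
                     rng=None):
    """Hedged sketch of a SARAH+ style inner loop: iterate until
    ||v_t||^2 <= gamma * ||v_0||^2, up to an illustrative safety cap."""
    rng = np.random.default_rng(rng)
    v, w_prev = v0.copy(), w.copy()
    w = w - eta * v
    steps = 0
    while np.dot(v, v) > gamma * np.dot(v0, v0) and steps < max_steps:
        i = int(rng.integers(n))
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev, w = w, w - eta * v
        steps += 1
    return w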
Proceedings Article

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption

TL;DR: In this paper, the authors perform a new analysis of the convergence of SGD under the assumption that stochastic gradients are bounded with respect to the true gradient norm rather than uniformly bounded, and they also propose an alternative convergence analysis for SGD with a diminishing learning rate.
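
As a concrete reference point for the diminishing-learning-rate setting, here is a plain SGD sketch with a step size decaying as eta_0 / (1 + lambda * t). The schedule and all parameter names are illustrative assumptions; the exact conditions on the rate come from the paper, not this code.

import numpy as np

def sgd_diminishing(grad_i, w0, n, eta0=0.1, lam=0.01, epochs=5, rng=None):
    """Plain SGD with a diminishing step size (illustrative schedule)."""
    rng = np.random.default_rng(rng)
    w, t = w0.copy(), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta_t = eta0 / (1.0 + lam * t)  # step size shrinks over time
            w = w - eta_t * grad_i(w, int(i))
            t += 1
    return w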
Posted Content

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

TL;DR: This paper studies and analyzes the mini-batch version of the StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization with nonconvex losses; it provides a sublinear convergence rate in the general nonconvex case and a linear convergence rate for gradient-dominated functions.
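
The mini-batch version keeps the same recursion but replaces the single sampled index with an average over a sampled batch B_t. A short sketch of one such update follows, with hypothetical helper names and an illustrative batch size.

import numpy as np

def minibatch_sarah_step(grad_i, w, w_prev, v_prev, n, batch_size=16,
                         rng=None):
    """One mini-batch SARAH update (hypothetical helper signature):
    v_t = (1/|B_t|) * sum_{i in B_t} [grad_i(w_t) - grad_i(w_{t-1})] + v_{t-1}.
    """
    rng = np.random.default_rng(rng)
    batch = rng.choice(n, size=batch_size, replace=False)
    diff = np.mean([grad_i(w, i) - grad_i(w_prev, i) for i in batch], axis=0)
    return v_prev + diff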
Posted Content

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

TL;DR: A new stochastic first-order algorithmic framework is proposed for solving stochastic composite nonconvex optimization problems, covering both finite-sum and expectation settings, together with new constant and adaptive step sizes that achieve the desired complexity bounds while improving practical performance.
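
The composite setting adds a proximal step to each update, w_{t+1} = prox_{eta * psi}(w_t - eta * v_t). Below is a minimal sketch of such a step for an l1 regularizer psi = lam * ||.||_1, whose proximal operator is soft-thresholding. This illustrates only the proximal mechanism, not ProxSARAH itself, which pairs this step with the SARAH estimator and its own step-size rules.

import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_step(w, v, eta, lam):
    """One proximal gradient step w_{t+1} = prox_{eta*psi}(w_t - eta*v_t)
    for psi = lam * ||.||_1; a sketch of the composite step only."""
    return soft_threshold(w - eta * v, eta * lam)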