SciSpace

Guanghui Lan

Researcher at Georgia Institute of Technology

Publications: 127
Citations: 9431

Guanghui Lan is an academic researcher from Georgia Institute of Technology. The author has contributed to research in topics: Convex optimization & Stochastic optimization. The author has an h-index of 34 and has co-authored 110 publications receiving 7395 citations. Previous affiliations of Guanghui Lan include the University of Florida.

Papers
Journal Article

Robust Stochastic Approximation Approach to Stochastic Programming

TL;DR: It is intended to demonstrate that a properly modified stochastic approximation (SA) approach can be competitive with, and even significantly outperform, the sample average approximation (SAA) method for a certain class of convex stochastic problems.
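To illustrate the robust SA idea (constant "long" steps combined with averaging of the iterates, which damps gradient noise in the returned solution), here is a minimal hypothetical sketch on a toy stochastic quadratic. The objective, stepsize, and oracle below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_sa(grad_oracle, x0, step, n_iters):
    """Robust SA sketch: constant steps plus a running average
    of the iterates, which suppresses the oracle noise."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, n_iters + 1):
        x = x - step * grad_oracle(x)  # noisy gradient step
        avg += (x - avg) / t           # running average of iterates
    return avg

# Toy problem: minimize E[0.5 * ||x - b - noise||^2]; the optimum is b.
b = np.array([1.0, -2.0])
noisy_grad = lambda x: (x - b) + 0.1 * rng.standard_normal(2)
x_hat = robust_sa(noisy_grad, np.zeros(2), step=0.05, n_iters=5000)
```

The averaged iterate `x_hat` lands close to the optimum `b` even though each individual gradient sample is noisy.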
Journal Article

Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming

TL;DR: The randomized stochastic gradient (RSG) algorithm is an approximation algorithm for nonconvex nonlinear programming problems; it attains a nearly optimal rate of convergence when the problem is in fact convex.
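The distinctive feature of RSG-type methods is that they return a randomly selected iterate rather than the last one, which yields guarantees on the expected gradient norm in nonconvex problems. Below is a hypothetical simplification on a toy one-dimensional nonconvex objective; the paper samples the output index from a stepsize-dependent distribution, whereas this sketch samples it uniformly:

```python
import numpy as np

rng = np.random.default_rng(1)

def rsg(grad_oracle, x0, step, n_iters):
    """Randomized stochastic gradient sketch: run SGD, then return
    an iterate chosen at random (here uniformly, for illustration)."""
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(n_iters):
        x = x - step * grad_oracle(x)
        iterates.append(x.copy())
    return iterates[rng.integers(len(iterates))]

# Toy nonconvex objective f(x) = x^4 - x^2 with noisy gradients;
# local minima sit at x = +/- 1/sqrt(2) ~ +/- 0.707.
noisy_grad = lambda x: 4 * x**3 - 2 * x + 0.05 * rng.standard_normal()
x_out = rsg(noisy_grad, np.array(0.6), step=0.01, n_iters=2000)
```

Starting near 0.6, the trajectory settles around the local minimum at about 0.707, so any randomly chosen iterate is an approximately stationary point.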
Journal Article

Accelerated gradient methods for nonconvex nonlinear and stochastic programming

TL;DR: The AG method is generalized to solve nonconvex and possibly stochastic optimization problems. It is demonstrated that, with a properly specified stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using only first-order information, similarly to the gradient descent method.
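The AG template being generalized here is Nesterov's accelerated gradient method. As background, a minimal sketch of the classical deterministic convex version follows; the paper's contribution is extending this template (via the stepsize policy) to nonconvex and stochastic settings, which this sketch does not attempt:

```python
import numpy as np

def accelerated_gradient(grad, x0, lipschitz, n_iters):
    """Classical Nesterov accelerated gradient for smooth convex f:
    a gradient step from an extrapolated point, then a momentum update."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iters):
        x_next = y - grad(y) / lipschitz  # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Quadratic test: f(x) = 0.5 x^T A x, minimized at the origin.
A = np.diag([1.0, 10.0])
x_star = accelerated_gradient(lambda z: A @ z, np.array([5.0, 5.0]),
                              lipschitz=10.0, n_iters=200)
```

On this quadratic the method drives the objective down at the accelerated O(L/k^2) rate, markedly faster than plain gradient descent for ill-conditioned problems.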
Journal Article

An optimal method for stochastic composite optimization

TL;DR: The accelerated stochastic approximation (AC-SA) algorithm based on Nesterov's optimal method for smooth convex programming (CP) is introduced, and it is shown that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for stochastic composite optimization (SCO).
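For context, the optimal rate referred to here is (as established in the stochastic composite optimization literature; symbols below are standard notation, not taken from this abstract) of the form

```latex
\mathbb{E}\left[ f(\bar{x}_N) - f^* \right]
  = O\!\left( \frac{L \|x_0 - x^*\|^2}{N^2}
            + \frac{(M + \sigma)\|x_0 - x^*\|}{\sqrt{N}} \right),
```

where $L$ is the Lipschitz constant of the gradient of the smooth component, $M$ the Lipschitz constant of the nonsmooth component, $\sigma$ the standard deviation of the stochastic gradient noise, and $N$ the iteration count. The first term matches Nesterov's optimal deterministic rate, and the second is unimprovable for stochastic oracles.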
Journal Article

Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization

TL;DR: In this paper, a randomized stochastic projected gradient (RSPG) algorithm is proposed to solve nonconvex stochastic composite optimization problems, in which a proper mini-batch of samples is taken at each iteration depending on the total budget of stochastic samples allowed; a post-optimization phase is also proposed to reduce the variance of the solutions returned by the algorithm.
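The core reason mini-batching helps is that averaging $m$ independent stochastic gradient samples shrinks the estimator's variance by a factor of roughly $1/m$. A minimal sketch, using a hypothetical toy oracle rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def minibatch_grad(grad_sample, x, batch_size):
    """Average several independent stochastic gradient samples;
    the variance shrinks roughly like 1/batch_size."""
    return np.mean([grad_sample(x) for _ in range(batch_size)], axis=0)

# Toy oracle: true gradient is x, corrupted by unit-variance noise.
grad_sample = lambda x: x + rng.standard_normal(x.shape)
x = np.zeros(3)

# Compare the spread of single-sample vs. 64-sample estimators.
g_small = np.array([minibatch_grad(grad_sample, x, 1) for _ in range(500)])
g_large = np.array([minibatch_grad(grad_sample, x, 64) for _ in range(500)])
```

The 64-sample estimator's standard deviation is roughly 1/8 of the single-sample one, which is what lets mini-batch methods take reliable steps within a fixed total sample budget.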