
Gary Cheng

Researcher at Stanford University

Publications - 15
Citations - 29

Gary Cheng is an academic researcher from Stanford University. He has contributed to research on topics including computer science and the Frank–Wolfe algorithm. He has an h-index of 2 and has co-authored 7 publications receiving 12 citations. His previous affiliations include the University of California, Berkeley.

Papers
Proceedings Article

Minibatch Stochastic Approximate Proximal Point Methods

TL;DR: This work proposes two minibatch algorithms and proves non-asymptotic upper bounds on their rates of convergence, revealing a linear speedup in the minibatch size.
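
The truncated-model step at the heart of the approximate proximal point (aProx) family has a simple closed form, and one natural minibatch variant applies it to the averaged minibatch loss. The NumPy sketch below illustrates such a step for least-squares losses; it is an illustration of the general idea under that assumption, not necessarily either of the two algorithms analyzed in the paper, and the function name and interface are hypothetical.

import numpy as np

def truncated_minibatch_step(x, A, b, alpha):
    # Hypothetical sketch: one truncated-model (aProx-style) step on a minibatch
    # of least-squares losses f(x; a_i, b_i) = 0.5 * (a_i @ x - b_i)**2.
    # The loss and gradient are averaged over the minibatch, and the step is
    # clipped so the linear model is never driven below zero, the known lower
    # bound on the loss.
    residuals = A @ x - b                    # shape (m,)
    fval = 0.5 * np.mean(residuals ** 2)     # minibatch-averaged loss
    grad = A.T @ residuals / len(b)          # minibatch-averaged gradient
    gnorm2 = grad @ grad
    if gnorm2 == 0.0:
        return x
    step = min(alpha, fval / gnorm2)         # truncated step size
    return x - step * grad

# Toy usage: an interpolation problem where the optimal loss is zero.
rng = np.random.default_rng(0)
A_full = rng.normal(size=(1000, 20))
b_full = A_full @ rng.normal(size=20)
x = np.zeros(20)
for _ in range(200):
    idx = rng.choice(1000, size=16, replace=False)   # minibatch of size 16
    x = truncated_minibatch_step(x, A_full[idx], b_full[idx], alpha=1.0)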
Journal Article

Does Federated Dropout actually work?

TL;DR: This paper argues that the metrics used to measure the performance of Federated Dropout and its variants are misleading, and it proposes and performs new experiments which suggest that Federated Dropout is actually detrimental to scaling efforts.
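
For context on the mechanism being evaluated: a federated-dropout-style scheme sends each client only a randomly selected sub-model, trading accuracy for reduced communication. The NumPy sketch below illustrates the sub-model extraction and merge step for a single weight matrix; the function names are hypothetical, and this shows only the mechanical idea, not the paper's experimental setup.

import numpy as np

def extract_submodel(W, keep_frac, rng):
    # Hypothetical sketch: sample a random subset of hidden units (rows of W)
    # to send to a client, as in federated-dropout-style schemes.
    k = max(1, int(keep_frac * W.shape[0]))
    idx = rng.choice(W.shape[0], size=k, replace=False)
    return W[idx].copy(), idx

def merge_submodel(W, W_sub, idx):
    # Write the client's updated rows back into the server-side model.
    W_new = W.copy()
    W_new[idx] = W_sub
    return W_new

# Toy round: the server ships 25% of the hidden units to one client.
rng = np.random.default_rng(0)
W_server = rng.normal(size=(128, 64))
W_client, idx = extract_submodel(W_server, keep_frac=0.25, rng=rng)
W_client -= 0.01 * rng.normal(size=W_client.shape)   # stand-in for local training
W_server = merge_submodel(W_server, W_client, idx)

The paper's argument concerns how the accuracy-versus-communication trade-off of such schemes should be measured, rather than the mechanism itself.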
Posted Content

Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization.

TL;DR: In this article, the authors extend the approximate proximal point (aProx) family of model-based methods for solving stochastic convex optimization problems to the minibatch and accelerated settings.
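
As a rough illustration of the two ingredients named in the title, the sketch below combines Nesterov-style extrapolation with a truncated (aProx-style) minibatch step on least-squares losses. It is a generic sketch under those assumptions, not the specific accelerated or parallel schemes developed in the preprint, and the function name and parameters are made up for illustration.

import numpy as np

def accelerated_truncated_sgd(A, b, alpha=1.0, beta=0.9, batch=16, iters=300, seed=0):
    # Hypothetical sketch: extrapolate with momentum, then take a truncated
    # (aProx-style) minibatch step at the extrapolated point.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x, x_prev = np.zeros(d), np.zeros(d)
    for _ in range(iters):
        y = x + beta * (x - x_prev)                  # Nesterov-style extrapolation
        idx = rng.choice(n, size=batch, replace=False)
        r = A[idx] @ y - b[idx]
        fval = 0.5 * np.mean(r ** 2)                 # minibatch-averaged loss at y
        grad = A[idx].T @ r / batch                  # minibatch-averaged gradient at y
        gnorm2 = grad @ grad
        step = alpha if gnorm2 == 0.0 else min(alpha, fval / gnorm2)
        x_prev, x = x, y - step * grad               # truncated model step
    return x

# Toy usage on an interpolation problem (zero optimal loss).
rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 20))
b = A @ rng.normal(size=20)
x_hat = accelerated_truncated_sgd(A, b)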
Proceedings Article

Approximate Function Evaluation via Multi-Armed Bandits

TL;DR: This work designs an instance-adaptive algorithm that learns to sample according to the importance of each coordinate and, with probability at least 1 − δ, returns an ε-accurate estimate of f(μ).
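
One way to read "learns to sample according to the importance of each coordinate": when estimating f(μ) from per-coordinate samples, coordinates with large sensitivity |∂f/∂μ_i| and high remaining uncertainty deserve more samples. The NumPy sketch below is a heuristic illustration of that allocation rule; it is not the paper's instance-adaptive algorithm, carries no (ε, δ) guarantee, and all names are hypothetical.

import numpy as np

def adaptive_estimate(sample_fns, f, grad_f, budget, warmup=20, seed=0):
    # Hypothetical sketch: estimate f(mu) for unknown coordinate means mu by
    # allocating samples roughly proportional to |df/dmu_i| * (std error of
    # coordinate i) at the current estimate.
    rng = np.random.default_rng(seed)
    K = len(sample_fns)
    sums, sumsq = np.zeros(K), np.zeros(K)
    counts = np.zeros(K, dtype=int)
    # Warm-up: a few samples from every coordinate for initial estimates.
    for i in range(K):
        for _ in range(warmup):
            x = sample_fns[i](rng)
            sums[i] += x; sumsq[i] += x * x; counts[i] += 1
    for _ in range(budget - warmup * K):
        mu_hat = sums / counts
        var_hat = np.maximum(sumsq / counts - mu_hat ** 2, 1e-12)
        # Importance of coordinate i: sensitivity of f times its noise level.
        scores = np.abs(grad_f(mu_hat)) * np.sqrt(var_hat / counts)
        i = int(np.argmax(scores))                   # sample where it helps most
        x = sample_fns[i](rng)
        sums[i] += x; sumsq[i] += x * x; counts[i] += 1
    return f(sums / counts)

# Toy usage: f(mu) = mu_0^2 + 0.01 * mu_1, so coordinate 0 matters far more.
f = lambda m: m[0] ** 2 + 0.01 * m[1]
grad_f = lambda m: np.array([2 * m[0], 0.01])
sample_fns = [lambda rng: rng.normal(2.0, 1.0), lambda rng: rng.normal(-1.0, 1.0)]
est = adaptive_estimate(sample_fns, f, grad_f, budget=2000)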
Proceedings Article

Private optimization in the interpolation regime: faster rates and hardness results

TL;DR: In this paper, the authors investigate differentially private stochastic optimization in the interpolation regime and propose an adaptive algorithm that, for any fixed ρ > 0, improves the sample complexity needed to achieve expected error α, while retaining the standard minimax-optimal sample complexity for worst-case instances.
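
For reference, the standard baseline in private stochastic optimization is DP-SGD-style noisy gradient descent; interpolation (a model that can fit every example simultaneously) is the structure adaptive methods exploit to beat its worst-case rates. The sketch below is a generic DP-SGD loop for logistic regression with per-example clipping and Gaussian noise, not the paper's adaptive algorithm; privacy accounting is omitted and all names are hypothetical.

import numpy as np

def dp_sgd(X, y, lr=0.1, clip=1.0, noise_mult=1.0, batch=32, iters=200, seed=0):
    # Generic DP-SGD sketch for logistic regression with 0/1 labels:
    # per-example gradient clipping followed by Gaussian noise on the sum.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))          # predicted probabilities
        grads = (p - y[idx])[:, None] * X[idx]           # per-example gradients, (batch, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)    # clip each gradient to norm <= clip
        noise = rng.normal(scale=noise_mult * clip, size=d)
        w -= lr * (grads.sum(axis=0) + noise) / batch
    return w

# Toy usage with synthetic 0/1 labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
w_priv = dp_sgd(X, y)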