SciSpace (formerly Typeset)

Jun-Kun Wang

Researcher at Georgia Institute of Technology

Publications: 22
Citations: 186

Jun-Kun Wang is an academic researcher from Georgia Institute of Technology. The author has contributed to research in topics: Convex function & Gradient descent. The author has an h-index of 6 and has co-authored 19 publications receiving 127 citations. Previous affiliations of Jun-Kun Wang include National Taiwan University.

Papers
Proceedings Article

On Frank-Wolfe and Equilibrium Computation

TL;DR: This paper considers the Frank-Wolfe method for constrained convex optimization, and shows that this classical technique can be interpreted from a different perspective: FW emerges as the computation of an equilibrium (saddle point) of a special convex-concave zero-sum game.
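For context, the classical Frank-Wolfe (conditional gradient) method that the paper reinterprets can be sketched as follows. This is a minimal illustrative implementation on the probability simplex, not the paper's game-theoretic construction; the quadratic objective and all variable names are hypothetical.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Frank-Wolfe on the probability simplex.

    Each step calls a linear minimization oracle (here: pick the
    simplex vertex with the smallest gradient coordinate) and moves
    the iterate toward it with the classical 2/(t+2) step size.
    """
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        i = int(np.argmin(g))       # linear minimization oracle: best vertex e_i
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)     # step schedule giving the O(1/T) rate
        x = (1 - gamma) * x + gamma * s
    return x

# Hypothetical example: minimize ||x - c||^2 over the simplex;
# since c lies in the simplex, the optimum is x* = c.
c = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2 * (x - c)
x_star = frank_wolfe_simplex(grad, np.array([1.0, 0.0, 0.0]))
```

Note that the iterates stay feasible by construction (each is a convex combination of simplex vertices), which is the property the saddle-point interpretation exploits.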
Proceedings Article

Acceleration through Optimistic No-Regret Dynamics

TL;DR: In this paper, the authors consider the problem of minimizing a smooth convex function by reducing the optimization to computing the Nash equilibrium of a particular zero-sum convex-concave game.
Proceedings Article

Faster Rates for Convex-Concave Games

TL;DR: In this paper, the Frank-Wolfe method was shown to achieve a linear convergence rate, i.e. error O(exp(−T)), for convex-concave games under additional curvature assumptions.
Posted Content

Acceleration through Optimistic No-Regret Dynamics

TL;DR: This paper shows that the rate for minimizing a smooth convex function, by reducing the optimization to computing the Nash equilibrium of a particular zero-sum convex-concave game, can be improved to O(1/T^2) by extending recent work that leverages optimistic learning to speed up equilibrium computation.
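The "optimistic" dynamics referred to above extrapolate each player's update using the previous gradient. A minimal sketch of this idea, applied to a bilinear game rather than the paper's Fenchel game (the matrix, step size, and starting points here are illustrative assumptions):

```python
import numpy as np

def ogda_bilinear(A, x0, y0, eta=0.1, steps=2000):
    """Optimistic gradient descent-ascent on min_x max_y x^T A y.

    The optimistic update x_{t+1} = x_t - 2*eta*g_t + eta*g_{t-1}
    uses the previous gradient as a prediction of the next one.
    Plain simultaneous gradient descent-ascent cycles on this game;
    the optimism term damps the cycle so the iterates converge.
    """
    x, y = x0.copy(), y0.copy()
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x          # simultaneous gradients
        x = x - 2 * eta * gx + eta * gx_prev
        y = y + 2 * eta * gy - eta * gy_prev
        gx_prev, gy_prev = gx, gy
    return x, y

A = np.eye(2)
x_eq, y_eq = ogda_bilinear(A, np.array([1.0, 1.0]), np.array([1.0, -1.0]))
# x_eq and y_eq approach the unique equilibrium (0, 0)
```

The paper's acceleration result comes from running such optimistic no-regret updates on a carefully chosen convex-concave game whose equilibrium encodes the minimizer of the original function.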
Posted Content

A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network

TL;DR: This work proves that Polyak's momentum achieves acceleration for training a one-layer wide ReLU network and a deep linear network, which are perhaps the two most popular canonical models for studying optimization and deep learning in the literature.
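Polyak's momentum (the heavy-ball method) analyzed in this work has a simple update rule. Below is a minimal sketch on a strongly convex quadratic, not the paper's neural-network setting; the quadratic, condition number, and parameter tuning are illustrative assumptions (the step size and momentum shown are the classical optimal choices for quadratics).

```python
import numpy as np

def heavy_ball(grad, x0, lr, beta, steps):
    """Polyak's heavy-ball method:
    x_{t+1} = x_t - lr * grad(x_t) + beta * (x_t - x_{t-1})."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(steps):
        x_next = x - lr * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Hypothetical quadratic f(x) = 0.5 x^T H x with condition number kappa = 100.
H = np.diag([1.0, 100.0])
grad = lambda x: H @ x

# Classical tuning for quadratics: lr = 4/(sqrt(L)+sqrt(mu))^2,
# beta = ((sqrt(kappa)-1)/(sqrt(kappa)+1))^2, giving an accelerated
# per-step rate of roughly (sqrt(kappa)-1)/(sqrt(kappa)+1).
L, mu = 100.0, 1.0
lr = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)) ** 2
x_final = heavy_ball(grad, np.array([1.0, 1.0]), lr, beta, steps=300)
```

The dependence of the rate on sqrt(kappa) rather than kappa is what "acceleration" means here; the paper's contribution is establishing this behavior for the ReLU and deep linear network models.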