Tianbao Yang

Researcher at University of Iowa

Publications: 278
Citations: 7349

Tianbao Yang is an academic researcher at the University of Iowa. The author has contributed to research in topics: Computer science & Convex optimization. The author has an h-index of 38 and has co-authored 247 publications receiving 5848 citations. Previous affiliations of Tianbao Yang include General Electric & Princeton University.

Papers
Posted Content

Improved Dynamic Regret for Non-degenerate Functions

TL;DR: This paper shows that the dynamic regret for strongly convex functions can be improved by allowing the learner to query the gradient of each function multiple times per round, and that the strong convexity assumption can be weakened to other non-degenerate conditions.
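For intuition, below is a minimal sketch of the multiple-gradient-query idea: within each round, the learner commits its play and then takes several gradient steps on the current function so the next play starts near that function's minimizer. The oracle interface grad_fns and the parameters eta and K are hypothetical illustrations, not the paper's exact algorithm.

```python
import numpy as np

def online_multiple_gradient_descent(grad_fns, x0, eta=0.1, K=5):
    """Sketch: online learning with K gradient queries per round.

    grad_fns : list of per-round gradient oracles, grad_fns[t](x) ~ grad f_t(x)
    eta, K   : hypothetical step size and number of queries per round
    """
    x = np.asarray(x0, dtype=float)
    plays = []
    for grad in grad_fns:
        plays.append(x.copy())      # commit the play x_t before observing f_t
        z = x.copy()
        for _ in range(K):          # K gradient queries on the revealed f_t
            z -= eta * grad(z)
        x = z                       # warm-start the next round near argmin f_t
    return plays
```

With a single query (K=1) this reduces to ordinary online gradient descent; the paper's point is that extra queries shrink the dynamic regret for non-degenerate functions.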
Posted Content

Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity

TL;DR: It is shown that the proposed accelerated distributed stochastic variance reduced gradient algorithm matches a lower bound on the number of rounds of communication that holds for a broad class of distributed first-order methods, including the algorithms proposed in this paper.
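As context for the variance-reduction building block, here is a minimal single-machine SVRG sketch. In the distributed setting studied in the paper, the full-gradient snapshot would be formed by one round of communication (each worker averages its local gradients); the names grad_i, eta, epochs, and m below are hypothetical.

```python
import numpy as np

def svrg(grad_i, n, x0, eta=0.1, epochs=10, m=None):
    """Sketch of stochastic variance reduced gradient (SVRG).

    grad_i(x, i) : gradient of the i-th component function at x
    n            : number of component functions
    """
    m = m or n
    x_tilde = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        # full-gradient snapshot; in the distributed variant this is the
        # step that costs one round of communication across workers
        mu = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)
        x = x_tilde.copy()
        for _ in range(m):
            i = np.random.randint(n)
            # variance-reduced stochastic gradient estimate
            x -= eta * (grad_i(x, i) - grad_i(x_tilde, i) + mu)
        x_tilde = x
    return x_tilde
```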
Journal Article

Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning

TL;DR: Min–max problems have broad applications in machine learning, including learning with non-decomposable loss and learning with robustness to data distribution.
Posted Content

Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Weakly-Monotone Variational Inequality

TL;DR: This paper proposes an algorithmic framework motivated by the proximal point method, which solves a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping to the original mapping, with a periodically updated proximal center. It is the first work to establish non-asymptotic convergence to a stationary point of a non-convex, non-concave min-max problem.
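To illustrate the framework's outer loop, the sketch below adds a strongly monotone proximal term centered at the current iterate and approximately solves each regularized subproblem, here with plain gradient descent-ascent as a stand-in inner solver. This is a schematic reading of the approach under those assumptions, not the paper's exact algorithm; gamma, eta, inner, and outer are hypothetical parameters.

```python
import numpy as np

def proximal_point_minmax(grad_x, grad_y, x0, y0, gamma=1.0,
                          outer=20, inner=100, eta=0.05):
    """Sketch: proximal-point outer loop for min_x max_y f(x, y)
    when f is weakly convex in x and weakly concave in y.

    grad_x, grad_y : gradient oracles for f w.r.t. x and y
    gamma          : strength of the added strongly monotone term
    """
    xc, yc = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(outer):
        x, y = xc.copy(), yc.copy()
        for _ in range(inner):
            # gradients of the regularized problem
            # f(x,y) + (gamma/2)||x - xc||^2 - (gamma/2)||y - yc||^2
            gx = grad_x(x, y) + gamma * (x - xc)
            gy = grad_y(x, y) - gamma * (y - yc)
            x -= eta * gx   # descend in the min variable
            y += eta * gy   # ascend in the max variable
        xc, yc = x, y       # periodically move the proximal center
    return xc, yc
```

The design choice mirrored here is that a large enough gamma makes each subproblem strongly monotone, so the inner solver is working on a well-behaved problem even though the original saddle-point problem is not.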
Posted Content

Stochastic AUC Maximization with Deep Neural Networks.

TL;DR: The stochastic AUC maximization problem with a deep neural network as the predictive model is considered, and the Polyak-Łojasiewicz (PL) condition is exploited to develop new stochastic algorithms with a faster convergence rate and a more practical step-size scheme.
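For a concrete sense of the min-max formulation underlying stochastic AUC maximization, below is a primal-dual sketch of the squared pairwise surrogate reformulation (as in Ying et al., 2016), with a linear scorer standing in for the deep network. The learning rate eta and epoch count are hypothetical, and this plain descent-ascent loop is not the paper's faster PL-based algorithm.

```python
import numpy as np

def stochastic_auc_maximization(X, y, eta=0.01, epochs=10):
    """Sketch: primal-dual stochastic AUC maximization, labels y in {0, 1}.

    Primal variables: scorer w and class-mean trackers a, b.
    Dual variable: alpha. Descend on (w, a, b), ascend on alpha.
    """
    n, d = X.shape
    p = float(np.mean(y == 1))          # fraction of positive examples
    w = np.zeros(d)
    a = b = alpha = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            xi, s = X[i], float(X[i] @ w)
            if y[i] == 1:
                gw = 2 * (1 - p) * (s - a) * xi - 2 * (1 + alpha) * (1 - p) * xi
                ga = -2 * (1 - p) * (s - a)
                gb = 0.0
                galpha = -2 * (1 - p) * s - 2 * p * (1 - p) * alpha
            else:
                gw = 2 * p * (s - b) * xi + 2 * (1 + alpha) * p * xi
                ga = 0.0
                gb = -2 * p * (s - b)
                galpha = 2 * p * s - 2 * p * (1 - p) * alpha
            w -= eta * gw               # primal descent
            a -= eta * ga
            b -= eta * gb
            alpha += eta * galpha       # dual ascent
    return w, a, b
```

Replacing the linear score X[i] @ w with a deep network's output (and backpropagating gw through it) recovers the problem setting the paper studies.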