Tianbao Yang
Researcher at University of Iowa
Publications - 278
Citations - 7349
Tianbao Yang is an academic researcher at the University of Iowa. His research focuses on computer science and convex optimization. He has an h-index of 38 and has co-authored 247 publications receiving 5848 citations. Previous affiliations of Tianbao Yang include General Electric and Princeton University.
Papers
Journal ArticleDOI
A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints
TL;DR: In this article, a stochastic feasible level-set method (SFLS) is proposed that solves the problem via root finding, calling a novel first-order oracle that computes an upper bound on the level-set function.
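The level-set idea in the TL;DR can be illustrated with a generic sketch (this is a textbook bisection on the level parameter, not the paper's SFLS algorithm; `upper_bound_oracle` is a hypothetical name for a routine that returns an upper bound on the level-set function at a given level `r`):

```python
def level_set_bisect(upper_bound_oracle, r_lo, r_hi, tol=1e-4):
    """Find (approximately) the smallest level r with H(r) <= 0 by bisection,
    where upper_bound_oracle(r) returns an upper bound on the level-set
    function H at r. A nonpositive upper bound certifies feasibility."""
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if upper_bound_oracle(r_mid) <= 0.0:
            r_hi = r_mid  # feasible at r_mid: shrink from above
        else:
            r_lo = r_mid  # infeasible: raise the lower end
    return r_hi
```

In the stochastic setting the oracle's upper bound would itself be estimated from samples; the bisection shell stays the same.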
Journal Article
Finite-Sum Compositional Stochastic Optimization: Theory and Applications
Bokun Wang, Tianbao Yang, and 1 more
TL;DR: Provides improved oracle complexities, with parallel speed-up, for the moving-average-based stochastic estimator with mini-batching, and offers new insights for improving the practical implementation by sampling batches of equal size for the outer and inner levels.
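The moving-average estimator mentioned in the TL;DR can be sketched generically (parameter names are illustrative, not taken from the paper): in compositional optimization, the inner expectation is tracked with an exponential moving average updated from mini-batch samples.

```python
def moving_average_update(u, batch_samples, beta):
    """One step of a moving-average tracker of an inner expectation:
    u <- (1 - beta) * u + beta * mean(batch_samples),
    where batch_samples is a mini-batch of noisy evaluations of the
    inner function at the current iterate."""
    batch_mean = sum(batch_samples) / len(batch_samples)
    return (1.0 - beta) * u + beta * batch_mean
```

With a fixed target, the tracker converges geometrically at rate (1 - beta) per step; in an algorithm the target drifts with the iterate, which is where the complexity analysis comes in.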
Posted Content
EIGEN: Ecologically-Inspired GENetic Approach for Neural Network Structure Searching.
TL;DR: Proposes an ecologically-inspired GENetic approach (EIGEN) for neural network structure search that uses succession, mimicry, and gene duplication to evolve a community of poorly initialized neural network structures into a more diverse community.
Proceedings ArticleDOI
A Generic Approach for Accelerating Stochastic Zeroth-Order Convex Optimization
TL;DR: A generic approach for accelerating the convergence of existing algorithms for stochastic zeroth-order convex optimization (SZCO). It applies to settings with both one-point and two-point evaluations and yields improved convergence for a broad family of problems.
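The two-point oracle model underlying SZCO can be illustrated with a standard zeroth-order gradient estimator (a textbook construction, not the acceleration scheme from the paper): sample a random unit direction and form a finite difference of two function evaluations along it.

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate:
        d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,
    where u is uniform on the unit sphere and d is the dimension.
    Unbiased for quadratics; O(mu^2)-biased for general smooth f."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    d = x.size
    return d * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
```

A single estimate is noisy; algorithms average it across iterations (or mini-batches of directions) to control variance.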
Posted Content
Stochastic Optimization of Area Under Precision-Recall Curve for Deep Learning with Provable Convergence.
TL;DR: In this article, the authors proposed a principled technical method to optimize AUPRC for deep learning based on maximizing the averaged precision (AP), which is an unbiased point estimator of AUPRC.
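Averaged precision (AP), the point estimator named in the TL;DR, can be computed from prediction scores and binary labels as follows; this is the standard AP definition, not the paper's differentiable surrogate objective.

```python
def average_precision(scores, labels):
    """AP = mean over positives of the precision at each positive's rank,
    with examples ranked by descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    if n_pos == 0:
        return 0.0
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank  # precision at this positive's rank
    return ap / n_pos
```

Because the ranking step is non-differentiable, methods that optimize AP for deep learning replace the rank-based precision terms with smooth surrogates.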