
Tianyi Chen

Researcher at Rensselaer Polytechnic Institute

Publications - 97
Citations - 2672

Tianyi Chen is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research on the topics of stochastic gradient descent and stochastic optimization. The author has an h-index of 23 and has co-authored 95 publications receiving 1729 citations. Previous affiliations of Tianyi Chen include Fudan University and the University of Minnesota.

Papers
Journal Article

Approximations of continuous functionals by neural networks with application to dynamic systems

TL;DR: The paper gives several strong results, in explicit form, on the representation of continuous functionals by neural networks; these results are a significant development beyond earlier work, which gave theorems on approximating continuous functions defined on a finite-dimensional real space by neural networks with one hidden layer.
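As a loose illustration of the classical setting this work extends (approximating a continuous function on a finite-dimensional domain with one hidden layer), the sketch below fits a small tanh network to a 1-D target. It is not the paper's construction for functionals; the architecture, target function, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: a one-hidden-layer tanh network approximating a
# continuous function on [0, 1]; not the paper's functional-approximation result.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)               # target continuous function (assumed)

x = np.linspace(0.0, 1.0, 200)[:, None]           # training grid, shape (200, 1)
y = f(x)

H = 32                                            # hidden units (assumed)
W1 = rng.normal(scale=1.0, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                      # hidden activations
    err = h @ W2 + b2 - y                         # residual of the network output
    # Gradient steps on the mean-squared error.
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)            # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("max abs error:", np.abs(np.tanh(x @ W1 + b1) @ W2 + b2 - y).max())
```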
Journal Article

An Online Convex Optimization Approach to Proactive Network Resource Allocation

TL;DR: In this article, a modified online saddle-point (MOSP) scheme is developed and proved to simultaneously yield sublinear dynamic regret and fit, provided that the accumulated variations of the per-slot minimizers and constraints grow sublinearly with time.
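For concreteness, below is a minimal sketch of a generic online primal-dual (saddle-point) update of the kind this line of work analyzes: a primal descent step on the per-slot Lagrangian followed by a dual ascent step on the per-slot constraint. The function names, step sizes, and toy problem are assumptions, not the paper's MOSP specification.

```python
# Generic online primal-dual (saddle-point) sketch; step sizes, losses, and
# constraints are illustrative assumptions, not the paper's MOSP scheme.
import numpy as np

def online_primal_dual(loss_grads, cons, cons_grads, x0, alpha=0.1, mu=0.1):
    """loss_grads[t](x): gradient of the slot-t loss f_t at x.
    cons[t](x): slot-t constraint value g_t(x) (feasible when <= 0).
    cons_grads[t](x): gradient of g_t at x."""
    x, lam = np.array(x0, dtype=float), 0.0       # primal iterate, dual variable >= 0
    for t in range(len(cons)):
        # Primal descent on the online Lagrangian f_t(x) + lam * g_t(x).
        x = x - alpha * (loss_grads[t](x) + lam * cons_grads[t](x))
        # Dual ascent on the (time-varying) constraint, projected onto lam >= 0.
        lam = max(0.0, lam + mu * cons[t](x))
    return x, lam

# Toy usage: track a drifting target subject to a slowly growing budget sum(x) <= 1 + 0.01 t.
T = 100
targets = [np.array([np.sin(0.1 * t), np.cos(0.1 * t)]) for t in range(T)]
loss_grads = [lambda x, c=c: x - c for c in targets]              # grad of 0.5 * ||x - c||^2
cons = [lambda x, t=t: x.sum() - 1.0 - 0.01 * t for t in range(T)]
cons_grads = [lambda x: np.ones(2) for _ in range(T)]
x_T, lam_T = online_primal_dual(loss_grads, cons, cons_grads, x0=[0.0, 0.0])
```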
Posted Content

LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning.

TL;DR: In this article, a new class of gradient methods for distributed machine learning, called Lazily Aggregated Gradient (LAG), is presented; these methods adaptively skip gradient calculations to learn with reduced communication and computation.
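The lazy-aggregation idea can be sketched as follows: each worker uploads a fresh gradient only when it differs enough from the gradient it last sent, and the server otherwise reuses the stale copy in its aggregate. The triggering rule, data, and constants below are assumptions, not the paper's exact conditions.

```python
# Sketch of lazily aggregated gradients on a least-squares problem; the skipping
# threshold and problem data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, d = 5, 10                                       # workers, model dimension
A = [rng.normal(size=(20, d)) for _ in range(M)]   # per-worker data (assumed)
b = [rng.normal(size=20) for _ in range(M)]

def local_grad(m, x):
    """Least-squares gradient at worker m."""
    return A[m].T @ (A[m] @ x - b[m]) / len(b[m])

x = np.zeros(d)
last_sent = [local_grad(m, x) for m in range(M)]   # last gradient each worker uploaded
agg = sum(last_sent)                               # server-side (possibly stale) aggregate
eta, thresh, comms = 0.01, 1e-3, 0

for k in range(200):
    x = x - eta * agg                              # server step with the current aggregate
    for m in range(M):
        g = local_grad(m, x)
        # Lazy rule (assumed): upload only if the gradient changed enough since last upload.
        if np.linalg.norm(g - last_sent[m]) ** 2 > thresh:
            agg = agg + (g - last_sent[m])         # server corrects its aggregate
            last_sent[m] = g
            comms += 1

print("uploads used:", comms, "out of", 200 * M)
```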
Journal Article

RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets

TL;DR: In this article, the authors propose a robust stochastic subgradient method for distributed learning from heterogeneous datasets in the presence of an unknown number of Byzantine workers, which may send arbitrarily incorrect messages to the master due to data corruption, communication failures, or malicious attacks.
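One heavily hedged way to picture the robust-aggregation idea: tie each worker's local model to the master's model through an l1 penalty, so that any single (possibly Byzantine) message can shift the master's subgradient by at most a bounded amount per coordinate. The losses, step sizes, penalty weight, and Byzantine behaviour below are all assumptions, not the paper's exact RSA formulation.

```python
# Hedged sketch of an l1-penalty-based robust aggregation; losses, constants,
# and the Byzantine behaviour are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, d = 6, 5                                        # workers (the last one is Byzantine), dimension
targets = [rng.normal(size=d) for _ in range(M)]   # worker m's loss: 0.5 * ||x - targets[m]||^2

lam, eta = 0.5, 0.05                               # penalty weight and step size (assumed)
x0 = np.zeros(d)                                   # master model
xs = [np.zeros(d) for _ in range(M)]               # local worker models

for k in range(500):
    msgs = []
    for m in range(M):
        if m == M - 1:
            msgs.append(rng.normal(scale=100.0, size=d))   # Byzantine worker: arbitrary message
        else:
            # Honest worker: subgradient step on f_m(x_m) + lam * ||x_m - x0||_1.
            xs[m] = xs[m] - eta * ((xs[m] - targets[m]) + lam * np.sign(xs[m] - x0))
            msgs.append(xs[m])
    # Master: subgradient step on lam * sum_m ||x0 - msg_m||_1; each message's
    # influence on the update is bounded by lam per coordinate.
    x0 = x0 - eta * lam * sum(np.sign(x0 - msg) for msg in msgs)

print("distance to honest workers' mean:", np.linalg.norm(x0 - np.mean(targets[:-1], axis=0)))
```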
Proceedings Article

LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning

TL;DR: In this article, a new class of gradient methods for distributed machine learning, called Lazily Aggregated Gradient (LAG), is proposed; these methods adaptively skip gradient calculations to learn with reduced communication and computation.