
Tianle Cai

Researcher at Princeton University

Publications: 30
Citations: 522

Tianle Cai is an academic researcher at Princeton University. His work focuses on computer science and artificial neural networks. He has an h-index of 8 and has co-authored 24 publications receiving 296 citations. His previous affiliations include Peking University.

Papers
Posted Content

Adversarially Robust Generalization Just Requires More Unlabeled Data

TL;DR: It is proved that, for a specific Gaussian mixture problem illustrated in [35], adversarially robust generalization can be almost as easy as standard generalization in supervised learning, provided a sufficiently large amount of unlabeled data is available.
Posted Content

Convergence of Adversarial Training in Overparametrized Neural Networks

TL;DR: This paper provides a partial explanation of the success of adversarial training, by showing that it converges to a network whose surrogate loss with respect to the attack algorithm is within $\epsilon$ of the optimal robust loss.
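The result above concerns the standard adversarial training loop: an inner maximization that crafts a worst-case perturbation, and an outer minimization over the network weights. As a hedged illustration of that loop (not the paper's setting), here is a minimal PGD-style adversarial training sketch for a linear logistic model in NumPy; all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad_x(w, x, y):
    """Logistic loss -log(sigmoid(y * <w, x>)) and its gradient w.r.t. x."""
    p = 1.0 / (1.0 + np.exp(-y * (x @ w)))
    return -np.log(p + 1e-12), -(1.0 - p) * y * w

def pgd_attack(w, x, y, epsilon=0.3, steps=5, lr=0.1):
    """Inner maximization: ascend the loss inside an l_inf ball of radius epsilon."""
    x_adv = x.copy()
    for _ in range(steps):
        _, gx = loss_and_grad_x(w, x_adv, y)
        x_adv = x_adv + lr * np.sign(gx)                  # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project back into the ball
    return x_adv

# Toy separable data: labels given by a ground-truth linear classifier.
n, d = 200, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)

# Outer minimization: SGD on the loss at the adversarial examples.
w, eta = np.zeros(d), 0.1
for _ in range(20):
    for i in range(n):
        x_adv = pgd_attack(w, X[i], y[i])
        p = 1.0 / (1.0 + np.exp(-y[i] * (x_adv @ w)))
        w -= eta * (-(1.0 - p) * y[i] * x_adv)            # d loss / d w

clean_acc = np.mean(np.sign(X @ w) == y)
```

Even though every gradient step is taken at a perturbed input, the learned `w` still separates the clean data well, which is the behavior the convergence analysis formalizes for overparameterized networks.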
Posted Content

GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training

TL;DR: A principled normalization method, Graph Normalization (GraphNorm), whose key idea is to normalize the feature values across all nodes of each individual graph with a learnable shift; this improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
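The per-graph normalization with a learnable mean shift can be sketched in a few lines. This is a minimal NumPy rendering of the idea for a single graph's node-feature matrix; the parameter names (`alpha`, `gamma`, `beta`) follow the usual normalization-layer conventions and are assumptions, not the paper's exact API:

```python
import numpy as np

def graph_norm(H, alpha, gamma, beta, eps=1e-5):
    """Normalize node features H (num_nodes x d) of one graph.

    alpha scales the mean shift (learnable, the GraphNorm ingredient);
    gamma and beta are the usual affine parameters.
    """
    mu = H.mean(axis=0, keepdims=True)       # per-feature mean over the graph's nodes
    shifted = H - alpha * mu                 # learnable shift instead of a full centering
    sigma = np.sqrt((shifted ** 2).mean(axis=0, keepdims=True) + eps)
    return gamma * shifted / sigma + beta

rng = np.random.default_rng(0)
H = rng.normal(loc=3.0, size=(7, 4))         # 7 nodes, 4 features
out = graph_norm(H, alpha=1.0, gamma=1.0, beta=0.0)
```

With `alpha = 1` this reduces to standard per-graph centering and scaling; learning `alpha < 1` lets the layer preserve some of the mean signal, which is the degree of freedom GraphNorm argues for.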
Posted Content

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

TL;DR: Experimental results show that these zero-shot random tickets outperform or match existing "initial tickets"; a new method, "hybrid tickets", achieves further improvement.
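A "zero-shot random ticket" is simply a pruning mask drawn at initialization, using no data and no training signal, whose surviving weights are then trained from scratch. A minimal sketch of drawing such a mask (function name and layout are illustrative, not from the paper):

```python
import numpy as np

def random_ticket_mask(shape, sparsity, rng):
    """Zero-shot random ticket: a binary mask drawn uniformly at random,
    keeping a (1 - sparsity) fraction of the weights."""
    total = int(np.prod(shape))
    keep = max(1, int(round(total * (1.0 - sparsity))))
    flat = np.zeros(total)
    flat[rng.choice(total, size=keep, replace=False)] = 1.0
    return flat.reshape(shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))                       # a layer's weights at init
mask = random_ticket_mask(W.shape, sparsity=0.75, rng=rng)
W_pruned = W * mask                               # only these weights are trained
```

The sanity check in the paper is that this data-free baseline is competitive with far more elaborate ways of choosing the mask at initialization.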
Posted Content

Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems

TL;DR: A novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss, together with a convergence guarantee for the mini-batch GGN algorithm, which is, to the authors' knowledge, the first convergence result for a mini-batch second-order method on overparameterized neural networks.
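The Gram trick that gives the method its name: in the overparameterized regime (many more parameters than batch samples), the Gauss-Newton step can be computed through the small n x n Gram matrix $JJ^\top$ of the Jacobian instead of the huge $J^\top J$. A hedged NumPy sketch with a linear model standing in for the network (so the Jacobian is just the data matrix); the damping constant is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized regression: more parameters than samples (d > n).
n, d = 20, 100
X = rng.normal(size=(n, d))       # for a linear model f(w) = Xw, the Jacobian J = X
y = rng.normal(size=n)

w = np.zeros(d)
for _ in range(3):
    r = X @ w - y                                       # residual f(w) - y
    G = X @ X.T                                         # n x n Gram matrix, cheap when n << d
    w = w - X.T @ np.linalg.solve(G + 1e-8 * np.eye(n), r)  # GGN step
```

For a linear model a single GGN step already interpolates the batch exactly; the paper's analysis shows that near-constant Jacobians in overparameterized networks make the mini-batch version behave similarly.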