Qi Lei

Researcher at Princeton University

Publications - 66
Citations - 1713

Qi Lei is an academic researcher at Princeton University. The author has contributed to research in the topics of computer science and artificial neural networks, has an h-index of 20, and has co-authored 55 publications receiving 1235 citations. Previous affiliations of Qi Lei include Zhejiang University and the University of Texas at Austin.

Papers
Proceedings Article

Gradient Coding: Avoiding Stragglers in Distributed Learning

TL;DR: This work proposes a novel coding-theoretic framework for mitigating stragglers in distributed learning and shows how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers in synchronous gradient descent.
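As an illustration of the coding idea (a toy sketch with assumed sizes, not the paper's implementation): with three workers, each holding two of three data blocks and transmitting a single coded combination of its block gradients, the master can decode the full gradient g1 + g2 + g3 from any two workers, so one straggler or failure can be ignored.

```python
# Toy gradient-coding sketch: n = 3 workers, tolerance to s = 1 straggler.
# Each worker stores 2 of the 3 data blocks and sends one coded combination
# of their gradients; any 2 surviving workers suffice to recover g1 + g2 + g3.
import numpy as np

rng = np.random.default_rng(0)
g1, g2, g3 = rng.normal(size=(3, 5))            # per-block gradients (toy values)
full_gradient = g1 + g2 + g3

# Encoding matrix B: row i is the combination that worker i transmits.
B = np.array([[0.5, 1.0,  0.0],                 # worker 1 sends g1/2 + g2
              [0.0, 1.0, -1.0],                 # worker 2 sends g2 - g3
              [0.5, 0.0,  1.0]])                # worker 3 sends g1/2 + g3
sent = B @ np.stack([g1, g2, g3])

# Decoding vectors a_S: for every 2-subset S of workers, a_S @ B[S] = [1, 1, 1].
decoders = {(0, 1): np.array([2.0, -1.0]),
            (0, 2): np.array([1.0,  1.0]),
            (1, 2): np.array([1.0,  2.0])}
for survivors, a in decoders.items():
    recovered = a @ sent[list(survivors), :]
    assert np.allclose(recovered, full_gradient)
print("full gradient recovered from any 2 of 3 workers")
```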
Posted Content

Few-Shot Learning via Learning the Representation, Provably

TL;DR: The results demonstrate that representation learning can fully utilize all $n_1 T$ samples from the source tasks, and establish the advantage of representation learning in both high-dimensional linear regression and neural network learning.
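A minimal sketch of the linear setting (the dimensions and the subspace estimator below are illustrative assumptions, not the paper's exact procedure): the span of per-task least-squares solutions from the source tasks estimates the shared representation, after which only a $k$-dimensional head is fit on the few target samples.

```python
# Few-shot linear regression via a shared representation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n1, n2 = 50, 5, 30, 100, 10             # n1 source / n2 target samples

B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]        # true shared subspace

# Source phase: solve each task in the ambient space, then take the top-k
# singular directions of the stacked solutions as the learned representation.
coefs = []
for _ in range(T):
    w = rng.normal(size=k)
    X = rng.normal(size=(n1, d))
    y = X @ (B_star @ w) + 0.1 * rng.normal(size=n1)
    coefs.append(np.linalg.lstsq(X, y, rcond=None)[0])
B_hat = np.linalg.svd(np.column_stack(coefs), full_matrices=False)[0][:, :k]

# Target phase: n2 << d samples are far too few for d parameters,
# but they suffice for the k-dimensional head on the learned representation.
w_t = rng.normal(size=k)
X_t = rng.normal(size=(n2, d))
y_t = X_t @ (B_star @ w_t) + 0.1 * rng.normal(size=n2)
head = np.linalg.lstsq(X_t @ B_hat, y_t, rcond=None)[0]

X_test = rng.normal(size=(2000, d))
mse = np.mean((X_test @ B_hat @ head - X_test @ (B_star @ w_t)) ** 2)
print(f"few-shot test MSE with the learned representation: {mse:.4f}")
```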
Posted Content

Predicting What You Already Know Helps: Provable Self-Supervised Learning

TL;DR: This paper quantifies how approximate independence between the components of the pretext task (conditional on the label and latent variables) allows learning representations that solve the downstream task with drastically reduced sample complexity, by training only a linear layer on top of the learned representation.
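A toy sketch of that mechanism (the synthetic data model and dimensions are assumptions for illustration, not the paper's experiments): two views x1 and x2 are conditionally independent given the label, the pretext task regresses x2 on x1 without labels, and a linear head trained on a handful of labeled points then solves the downstream classification.

```python
# Provable self-supervision in miniature: pretext regression, then a linear head.
import numpy as np

rng = np.random.default_rng(0)
d, n_pretext, n_downstream = 30, 5000, 20
mu = rng.normal(size=d) / np.sqrt(d)                     # class-conditional mean

def sample(n):
    y = rng.choice([-1.0, 1.0], size=n)
    x1 = np.outer(y, mu) + 0.5 * rng.normal(size=(n, d))
    x2 = np.outer(y, mu) + 0.5 * rng.normal(size=(n, d))  # independent of x1 given y
    return x1, x2, y

# Pretext phase (no labels): predict x2 from x1; the representation is psi(x1) = x1 @ W.
x1, x2, _ = sample(n_pretext)
W = np.linalg.lstsq(x1, x2, rcond=None)[0]

# Downstream phase: only n_downstream labels, ridge-fit a linear head on psi(x1).
x1_tr, _, y_tr = sample(n_downstream)
Z = x1_tr @ W
head = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(d), Z.T @ y_tr)

x1_te, _, y_te = sample(5000)
acc = np.mean(np.sign(x1_te @ W @ head) == y_te)
print(f"downstream accuracy with {n_downstream} labels: {acc:.3f}")
```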
Proceedings Article

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries

TL;DR: This work performs a Hessian-based study to analyze exactly how the landscape of the loss function changes when training with large batch size, and provides empirical and theoretical evidence that the inner loop for robust training is a saddle-free optimization problem.
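A minimal sketch of the kind of Hessian probe such a study relies on (PyTorch is assumed here; this is not the authors' code): the top Hessian eigenvalue of the loss is estimated by power iteration on Hessian-vector products, so curvature can be compared across batch sizes without forming the Hessian explicitly.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Power iteration using Hessian-vector products (Pearlmutter's trick)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(iters):
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eig = sum((h * u).sum() for h, u in zip(hv, v)).item()  # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eig

# Toy usage: curvature of a small model's loss on one batch.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x, y = torch.randn(256, 10), torch.randn(256, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
print("top Hessian eigenvalue ~", top_hessian_eigenvalue(loss, list(model.parameters())))
```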
Posted Content

CAT: Customized Adversarial Training for Improved Robustness

TL;DR: This work proposes Customized Adversarial Training (CAT), a new algorithm that adaptively customizes the perturbation level and the corresponding label for each training sample during adversarial training.
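A schematic sketch of the adaptive idea (the budget-update rule, step sizes, and smoothing constant below are illustrative assumptions, not the paper's exact algorithm): each training sample keeps its own perturbation budget, the budget grows only while the sample is still classified correctly under attack, and the one-hot label is smoothed more for samples trained at a larger budget.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=7):
    """L_inf PGD on image batches, with a per-sample radius eps of shape [B, 1, 1, 1]."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = delta + 0.25 * eps * grad.sign()
        delta = torch.max(torch.min(delta, eps), -eps).detach().requires_grad_(True)
    return (x + delta).detach()

def customized_step(model, optimizer, x, y, eps, num_classes,
                    eps_step=2 / 255, eps_max=8 / 255, smooth=2.0):
    """One training step with per-sample budgets eps of shape [B] (hypothetical rule)."""
    x_adv = pgd_attack(model, x, y, eps.view(-1, 1, 1, 1))
    # Customize the label: soften the one-hot target more at a larger budget.
    conf = (1.0 - smooth * eps).unsqueeze(1)
    target = conf * F.one_hot(y, num_classes).float() + (1 - conf) / num_classes
    logits = model(x_adv)
    loss = -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Customize the budget: grow eps only for samples still classified correctly.
    with torch.no_grad():
        still_correct = logits.argmax(dim=1) == y
        eps = torch.where(still_correct, (eps + eps_step).clamp(max=eps_max), eps)
    return loss.item(), eps
```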