
Qizhe Xie

Researcher at Carnegie Mellon University

Publications: 30
Citations: 5,983

Qizhe Xie is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including Minimax and Language proficiency, has an h-index of 17, and has co-authored 28 publications receiving 3,330 citations. Previous affiliations of Qizhe Xie include Google and Shanghai Jiao Tong University.

Papers
Proceedings Article

Self-Training With Noisy Student Improves ImageNet Classification

TL;DR: A simple self-training method that achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images.
Posted Content

Unsupervised Data Augmentation for Consistency Training

TL;DR: A new perspective on how to effectively noise unlabeled examples is presented, and it is argued that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
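As a rough, hedged sketch of the consistency-training idea summarized above (not the authors' implementation; the model, the `augment` function standing in for an advanced augmentation such as RandAugment or back-translation, and the weighting `lambda_u` are all placeholders), the step below combines a supervised loss on labeled data with a KL-divergence consistency loss between predictions on an unlabeled example and its augmented copy:

```python
import torch
import torch.nn.functional as F

def uda_step(model, sup_x, sup_y, unsup_x, augment, lambda_u=1.0):
    """Sketch of a UDA-style training step under the assumptions above."""
    # Supervised cross-entropy on the labeled batch.
    sup_loss = F.cross_entropy(model(sup_x), sup_y)

    # Predictions on the clean unlabeled batch are treated as a fixed target.
    with torch.no_grad():
        target = F.softmax(model(unsup_x), dim=-1)

    # Consistency loss: predictions on the augmented copy should match the target.
    log_pred_aug = F.log_softmax(model(augment(unsup_x)), dim=-1)
    consistency = F.kl_div(log_pred_aug, target, reduction="batchmean")

    return sup_loss + lambda_u * consistency
```

The quality of `augment` is the point of the paper's argument: stronger, more realistic augmentations make the consistency target more informative than simple noise.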
Proceedings Article

RACE: Large-scale ReAding Comprehension Dataset From Examinations

TL;DR: RACE, as discussed by the authors, is a dataset for benchmark evaluation of methods on the reading comprehension task, collected from English exams for Chinese middle and high school students aged 12 to 18.
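To make the task format concrete, the sketch below scores a model on RACE-style multiple-choice examples; the field names (`article`, `question`, `options`, `answer`) and the `choose_option` callable are hypothetical stand-ins, not the dataset's actual schema:

```python
def evaluate_race(examples, choose_option):
    """Multiple-choice accuracy on RACE-style examples (hedged sketch).

    Each example is assumed to carry a passage (`article`), a `question`,
    four `options`, and a gold `answer` index. `choose_option` is any model
    mapping (article, question, options) to a predicted option index.
    """
    correct = 0
    for ex in examples:
        pred = choose_option(ex["article"], ex["question"], ex["options"])
        correct += int(pred == ex["answer"])
    return correct / max(len(examples), 1)
```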
Posted Content

RACE: Large-scale ReAding Comprehension Dataset From Examinations

TL;DR: The proportion of questions that require reasoning is much larger in RACE than in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of state-of-the-art models and ceiling human performance.
Posted Content

Self-training with Noisy Student improves ImageNet classification

TL;DR: Noisy Student Training, as mentioned in this paper, extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning, achieving 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images.
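A minimal sketch of the training loop described in this TL;DR, under stated assumptions: `make_student`, `train`, and `predict` are hypothetical callables supplied by the caller (they are not the authors' code), the student model is equal to or larger than the teacher, and `noised=True` stands in for input noise (data augmentation) plus model noise (dropout, stochastic depth) applied only when training the student.

```python
def noisy_student(labeled, unlabeled, make_student, train, predict, rounds=3):
    """Hedged sketch of a Noisy Student self-training loop.

    labeled   : list of (example, label) pairs
    unlabeled : list of examples without labels
    make_student(round) -> fresh model, equal to or larger than the teacher
    train(model, data, noised) -> trained model; `noised` toggles augmentation,
                                  dropout, and stochastic depth
    predict(model, example) -> (soft) pseudo-label for one example
    """
    # Initial teacher is trained on labeled data only, without student noise.
    teacher = train(make_student(0), labeled, noised=False)

    for r in range(1, rounds + 1):
        # Teacher pseudo-labels the unlabeled set without noise.
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]
        # Student trains on labeled + pseudo-labeled data *with* noise.
        student = train(make_student(r), labeled + pseudo, noised=True)
        # The student becomes the teacher for the next iteration.
        teacher = student

    return teacher
```

The design choice the TL;DR highlights is that, unlike classic distillation, the student is not smaller than the teacher and is deliberately noised, so each round can improve on the previous one.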