Yin Cui
Researcher at Google
Publications - 24
Citations - 1812
Yin Cui is an academic researcher at Google. His research focuses on object detection and contextual image classification. He has an h-index of 12, and has co-authored 24 publications receiving 564 citations. Previous affiliations of Yin Cui include Columbia University & Cornell University.
Papers
Posted Content
Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation
Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D. Cubuk, Quoc V. Le, Barret Zoph +7 more
TL;DR: A systematic study of Copy-Paste augmentation for instance segmentation, in which objects are randomly pasted onto an image, finds that this simple random-pasting mechanism is good enough and provides solid gains on top of strong baselines.
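The core mechanism described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the function name `copy_paste`, the per-instance `paste_prob`, and the omission of the paper's scale jittering and flipping are all simplifications for clarity.

```python
import numpy as np

def copy_paste(src_img, src_masks, dst_img, dst_masks,
               paste_prob=0.5, rng=None):
    """Minimal copy-paste augmentation sketch: paste object instances
    from a source image onto a destination image at their original
    locations, and occlude destination masks where they are covered."""
    rng = rng or np.random.default_rng()
    out_img = dst_img.copy()
    pasted = []
    for mask in src_masks:          # mask: boolean (H, W) array
        if rng.random() >= paste_prob:
            continue                # skip this instance
        out_img[mask] = src_img[mask]
        pasted.append(mask)
    if pasted:
        # remove destination-mask pixels now hidden by pasted objects
        cover = np.any(np.stack(pasted), axis=0)
        dst_masks = [m & ~cover for m in dst_masks]
    return out_img, dst_masks + pasted
```

In the full method, source instances would first be randomly rescaled and flipped before pasting, but even this bare version conveys why the augmentation is cheap to add to an existing training pipeline.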
Proceedings Article
Rethinking Pre-training and Self-training
TL;DR: Self-training works well in exactly the setting where pre-training does not (using ImageNet to help COCO); on the PASCAL segmentation dataset, although pre-training does help significantly, self-training still improves upon the pre-trained model.
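The self-training recipe compared above follows a standard teacher-student loop: train a teacher on labeled data, pseudo-label the unlabeled pool, then train a student on the union. The sketch below illustrates that loop with a toy nearest-centroid classifier standing in for the detection or segmentation model; the helper names (`fit_centroids`, `self_train`) are illustrative, not from the paper.

```python
import numpy as np

def fit_centroids(X, y):
    """Toy 'model': one centroid per class (stand-in for a real network)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(model, X):
    classes, cents = model
    dists = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

def self_train(X_lab, y_lab, X_unlab):
    """Minimal self-training loop:
    1. train a teacher on the labeled set,
    2. pseudo-label the unlabeled pool with the teacher,
    3. retrain a student on labeled + pseudo-labeled data."""
    teacher = fit_centroids(X_lab, y_lab)
    pseudo = predict(teacher, X_unlab)
    X_all = np.concatenate([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    return fit_centroids(X_all, y_all)
```

In the paper the teacher and student are full detection/segmentation networks and pseudo-labels are filtered and augmented, but the data flow is the same.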
Posted Content
Spatiotemporal Contrastive Video Representation Learning
TL;DR: This work proposes a temporally consistent spatial augmentation method that imposes strong spatial augmentations on each frame of a video while maintaining temporal consistency across frames, and a sampling-based temporal augmentation method that avoids overly enforcing invariance on clips that are distant in time.
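The key idea of the temporally consistent spatial augmentation is to sample the augmentation parameters once per clip and apply them to every frame, rather than resampling per frame. A minimal numpy sketch, assuming a clip of shape `(T, H, W, C)` and using only random crop and horizontal flip as the spatial augmentations:

```python
import numpy as np

def consistent_crop_flip(clip, size, rng=None):
    """Temporally consistent spatial augmentation sketch: sample ONE
    crop offset and ONE flip decision, then apply them to every frame,
    so spatial content changes but temporal dynamics are preserved.
    `clip` has shape (T, H, W, C); `size` is (crop_h, crop_w)."""
    rng = rng or np.random.default_rng()
    _, h, w, _ = clip.shape
    ch, cw = size
    y = rng.integers(0, h - ch + 1)   # same offset for all frames
    x = rng.integers(0, w - cw + 1)
    out = clip[:, y:y + ch, x:x + cw, :]
    if rng.random() < 0.5:            # same flip for all frames
        out = out[:, :, ::-1, :]
    return out
```

Resampling the crop per frame would instead introduce spurious motion between frames, which is exactly what the consistent variant avoids.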
Proceedings ArticleDOI
Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation
Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D. Cubuk, Quoc V. Le, Barret Zoph +7 more
TL;DR: In this paper, the Copy-Paste method is used for instance segmentation, pasting objects randomly onto an image; the authors show that this simple mechanism is good enough and can provide solid gains on top of strong baselines.
Proceedings ArticleDOI
SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization
Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V. Le, Xiaodan Song +7 more
TL;DR: SpineNet is proposed: a backbone with scale-permuted intermediate features and cross-scale connections, learned on an object detection task by Neural Architecture Search. It also transfers to classification tasks, achieving a 5% top-1 accuracy improvement on the challenging fine-grained iNaturalist dataset.