Qianyu Zhou
Researcher at Shanghai Jiao Tong University
Publications - 39
Citations - 237
Qianyu Zhou is an academic researcher at Shanghai Jiao Tong University. The author has contributed to research in the topics of Medicine and Computer science, has an h-index of 5, and has co-authored 16 publications receiving 79 citations. Previous affiliations of Qianyu Zhou include Jilin University.
Papers
Posted Content
Semi-Supervised Semantic Segmentation via Dynamic Self-Training and Class-Balanced Curriculum
TL;DR: The method, Dynamic Self-Training and Class-Balanced Curriculum (DST-CBC), exploits inter-model disagreement by prediction confidence to construct a dynamic loss robust against pseudo label noise, enabling it to extend pseudo labeling to a class-balanced curriculum learning process.
Posted Content
Uncertainty-Aware Consistency Regularization for Cross-Domain Semantic Segmentation.
TL;DR: An uncertainty-aware consistency regularization method to tackle the issue for semantic segmentation by exploiting the latent uncertainty information of the target samples so that more meaningful and reliable knowledge from the teacher model would be transferred to the student model.
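The idea in this TL;DR can be sketched as a small weighting scheme: a minimal NumPy illustration, assuming predictive entropy as the uncertainty measure and a squared-error consistency term. The function names are hypothetical and this is not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_weighted_consistency(teacher_logits, student_logits):
    """Weight the teacher-student consistency loss by the teacher's
    per-pixel certainty, so that only reliable knowledge is transferred.
    Illustrative sketch; not the paper's exact formulation."""
    t_prob = softmax(teacher_logits)
    s_prob = softmax(student_logits)
    # Predictive entropy of the teacher as the uncertainty measure.
    entropy = -(t_prob * np.log(t_prob + 1e-8)).sum(axis=-1)
    max_entropy = np.log(t_prob.shape[-1])
    certainty = 1.0 - entropy / max_entropy          # in [0, 1]
    # Squared-error consistency between teacher and student predictions,
    # down-weighted where the teacher is uncertain.
    per_pixel_mse = ((t_prob - s_prob) ** 2).sum(axis=-1)
    return (certainty * per_pixel_mse).mean()
```

In a mean-teacher setup, the teacher would be an exponential moving average of the student, and this weighted term would be added to the supervised loss on labeled source data.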
Proceedings ArticleDOI
End-to-End Video Object Detection with Spatial-Temporal Transformers
Lu He, Qianyu Zhou, Xiangtai Li, Li Niu, Guangliang Cheng, Xiao Li, Wenxuan Liu, Yunhai Tong, Lizhuang Ma, Liqing Zhang +9 more
TL;DR: TransVOD is an end-to-end video object detection model based on a spatial-temporal Transformer architecture. It consists of three components: a Temporal Deformable Transformer Encoder (TDTE) to encode multi-frame spatial details, a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results.
Posted Content
DMT: Dynamic Mutual Training for Semi-Supervised Learning
Zhengyang Feng, Qianyu Zhou, Qiqi Gu, Xin Tan, Guangliang Cheng, Xuequan Lu, Jianping Shi, Lizhuang Ma +8 more
TL;DR: The authors proposed Dynamic Mutual Training (DMT), which leverages inter-model disagreement between different models by a dynamically re-weighted loss function, where a larger disagreement indicates a possible error and corresponds to a lower loss value.
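The re-weighting described in this TL;DR can be illustrated with a minimal NumPy sketch: one model's prediction supplies the pseudo label, and the other model's cross-entropy on that label is scaled by its own agreement with it, so larger disagreement yields a lower loss value. The function name and the power-law weighting with `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dmt_weighted_ce(pseudo_probs, student_probs, gamma=5.0):
    """Cross-entropy on pseudo labels from another model, dynamically
    re-weighted by the student's agreement with those labels.
    Illustrative sketch; not the paper's exact formulation."""
    # Pseudo label: argmax of the other model's predicted distribution.
    pseudo_label = pseudo_probs.argmax(axis=-1)
    # Student's probability on the pseudo class measures agreement.
    p_on_label = np.take_along_axis(
        student_probs, pseudo_label[..., None], axis=-1
    ).squeeze(-1)
    # Larger disagreement (low p_on_label) -> lower weight -> lower loss,
    # so likely-wrong pseudo labels contribute little to training.
    weight = p_on_label ** gamma
    ce = -np.log(p_on_label + 1e-8)
    return (weight * ce).mean()
```

With this weighting, a sample where the two models agree confidently dominates the loss, while a contested sample is effectively ignored rather than reinforcing a possibly wrong pseudo label.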
Book ChapterDOI
Generative Domain Adaptation for Face Anti-Spoofing
TL;DR: In this paper, a generative domain adaptation (GDA) framework combines two carefully designed consistency constraints: 1) inter-domain neural statistic consistency guides the generator in narrowing the inter-domain gap.