Hong-Yu Zhou
Researcher at University of Hong Kong
Publications - 46
Citations - 945
Hong-Yu Zhou is an academic researcher from the University of Hong Kong. The author has contributed to research in topics including computer science and deep learning, has an h-index of 10, and has co-authored 46 publications receiving 429 citations. Previous affiliations of Hong-Yu Zhou include Chinese PLA General Hospital and Sichuan University.
Papers
Journal Article
ThiNet: Pruning CNN Filters for a Thinner Net
TL;DR: The original VGG-16 model can be compressed into a very small model (ThiNet-Tiny) of only 2.66 MB while still preserving AlexNet-level accuracy; the paper also proposes "gcos" (Group COnvolution with Shuffling), a more accurate group convolution scheme, to further reduce the pruned model size.
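As a rough illustration of the filter-pruning idea behind ThiNet, the minimal PyTorch sketch below drops convolution filters and rebuilds a thinner layer. Note that ThiNet selects channels by how well the next layer's outputs are reconstructed; the simpler L1-norm criterion and all layer sizes here are illustrative assumptions, not the paper's method.

```python
# Minimal filter-pruning sketch. ThiNet selects channels via next-layer
# reconstruction; this sketch substitutes a simpler L1-norm criterion to
# show the general prune-then-rebuild idea.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Keep the filters with the largest L1 norms and rebuild a thinner layer."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output filter: shape (out_channels,)
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.topk(scores, n_keep).indices.sort().values

    thin = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                     stride=conv.stride, padding=conv.padding,
                     bias=conv.bias is not None)
    thin.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        thin.bias.data = conv.bias.data[keep_idx].clone()
    return thin

conv = nn.Conv2d(64, 128, 3, padding=1)
thin = prune_conv_filters(conv, keep_ratio=0.5)  # 128 -> 64 filters
print(thin.weight.shape)  # torch.Size([64, 64, 3, 3])
```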
Proceedings Article
Age Estimation Using Expectation of Label Distribution Learning
TL;DR: A lightweight network architecture is designed, together with a unified framework that jointly learns the age distribution and regresses age; it achieves results comparable to the state of the art even though the model parameters are reduced to 0.9M (3.8 MB disk storage).
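The title's "expectation of label distribution" can be made concrete with a small sketch: the network predicts a distribution over discrete age bins, and the predicted age is that distribution's expectation. The bin range, feature dimension, and head design below are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of age-as-expectation: a softmax over age bins gives the
# label distribution, and the predicted age is its expected value.
import torch
import torch.nn as nn
import torch.nn.functional as F

ages = torch.arange(0, 101, dtype=torch.float32)  # assumed age bins 0..100

class AgeHead(nn.Module):
    def __init__(self, feat_dim: int = 512):  # feat_dim is an assumption
        super().__init__()
        self.fc = nn.Linear(feat_dim, ages.numel())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        p = F.softmax(self.fc(feats), dim=1)   # learned label distribution
        return (p * ages).sum(dim=1)           # expected age (regression)

head = AgeHead()
pred = head(torch.randn(4, 512))  # four predicted ages
print(pred.shape)  # torch.Size([4])
```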
Posted Content
Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs By Comparing Image Representations
TL;DR: The authors propose Comparing to Learn (C2L), a method that bridges the domain gap between natural and medical images by learning robust features through comparing different image representations.
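A minimal sketch of the "learning by comparing representations" objective, written as a generic InfoNCE contrastive loss in PyTorch. C2L itself involves more than this (for example, momentum-updated teacher representations); treat the snippet only as the core comparison idea, with the shapes and temperature chosen for illustration.

```python
# Generic contrastive (InfoNCE) loss over two augmented views of the same
# images: matching pairs sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (N, N) pairwise similarities
    targets = torch.arange(z1.size(0))      # positives are on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```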
Journal ArticleDOI
Efficient and Effective Training of COVID-19 Classification Networks With Self-Supervised Dual-Track Learning to Rank
Yuexiang Li, Dong Wei, Jiawei Chen, Shilei Cao, Hong-Yu Zhou, Yanchun Zhu, Jianrong Wu, Lan Lan, Wenbo Sun, Tianyi Qian, Kai Ma, Haibo Xu, Yefeng Zheng +12 more
TL;DR: A novel self-supervised learning method is proposed to extract features from COVID-19 and negative samples; it achieves superior performance using only about half of the negative samples, substantially reducing model training time.
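To make "learning to rank" concrete, here is a hedged PyTorch sketch of a pairwise margin ranking loss, which trains a scorer to rank one group of samples above another. The paper's dual-track self-supervised design is not reproduced here; the scorer, feature dimension, and margin are all illustrative assumptions.

```python
# Pairwise learning-to-rank sketch using a margin ranking loss: the scorer
# is pushed to assign higher scores to "pos" samples than to "neg" ones.
import torch
import torch.nn as nn

scorer = nn.Linear(256, 1)                  # toy scoring model (assumption)
criterion = nn.MarginRankingLoss(margin=1.0)

pos = torch.randn(16, 256)                  # samples that should rank higher
neg = torch.randn(16, 256)                  # samples that should rank lower
target = torch.ones(16)                     # +1 means "pos should beat neg"
loss = criterion(scorer(pos).squeeze(1), scorer(neg).squeeze(1), target)
loss.backward()
print(loss.item())
```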
Book Chapter
Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs by Comparing Image Representations
TL;DR: The method is called Comparing to Learn (C2L) because it learns robust features by comparing different image representations; experimental results on radiographs show that C2L significantly outperforms ImageNet pretraining and previous state-of-the-art approaches.