Chen Change Loy
Researcher at Nanyang Technological University
Publications - 111
Citations - 2881
Chen Change Loy is an academic researcher from Nanyang Technological University. He has contributed to research on topics including computer science and feature representation in computer vision. He has an h-index of 15 and has co-authored 111 publications receiving 782 citations. Previous affiliations of Chen Change Loy include Harbin Institute of Technology and the University of Sydney.
Papers
Posted Content
RGB-D Salient Object Detection with Cross-Modality Modulation and Selection
TL;DR: This paper proposes a cross-modality feature modulation (cmFM) module that enhances feature representations by taking depth features as a prior, modeling the complementary relations of RGB-D data.
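The modulation idea can be sketched in a FiLM-style form: per-channel scale and shift terms derived from the depth branch are applied to the RGB features. This is a minimal dependency-free sketch, not the paper's actual module; the toy mapping from depth statistics to `gamma`/`beta` is an assumption (the real module learns this mapping).

```python
def cm_feature_modulation(rgb_feat, depth_feat):
    """Sketch of cross-modality feature modulation (assumed FiLM-style form).

    Depth features act as a prior: a per-channel scale (gamma) and shift
    (beta) are derived from the depth branch and applied to the matching
    RGB channel, so complementary depth cues modulate the RGB features.
    Here gamma/beta come from a toy per-channel mean; the real cmFM
    module learns this mapping from data.
    """
    modulated = []
    for rgb_ch, depth_ch in zip(rgb_feat, depth_feat):
        mean = sum(depth_ch) / len(depth_ch)      # toy depth statistic
        gamma, beta = 1.0 + mean, mean            # hypothetical mapping
        modulated.append([gamma * v + beta for v in rgb_ch])
    return modulated
```

With an all-zero depth channel the mapping reduces to the identity, so the RGB features pass through unchanged.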
Proceedings ArticleDOI
Unsupervised 3D Shape Completion through GAN Inversion
Junzhe Zhang,Xinyi Chen,Zhongang Cai,Liang Pan,Haiyu Zhao,Shuai Yi,Chai Kiat Yeo,Bo Dai,Chen Change Loy +8 more
TL;DR: ShapeInversion completes partial 3D shapes using a GAN pre-trained on complete shapes, searching for a latent code whose generated shape best reconstructs the given partial input; this allows it to exploit the rich prior captured in a well-trained generative model.
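The core loop of GAN inversion can be sketched as a latent-code search: propose perturbations to a latent code and keep those that lower a reconstruction loss measured only on the observed (partial) input. This is a gradient-free toy sketch under assumed interfaces (`generator`, `partial_loss` are placeholders); the actual method uses gradient-based optimization on a pre-trained shape GAN.

```python
import random

def invert(generator, partial_loss, dim=8, steps=200, sigma=0.1, seed=0):
    """Gradient-free latent-search sketch of GAN inversion (assumed API).

    Starting from a random latent code z, repeatedly propose a Gaussian
    perturbation and keep it if it lowers the reconstruction loss on the
    observed part of the input. The pre-trained generator's full output
    for the found z is then the completed shape.
    """
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(dim)]     # random initial code
    best = partial_loss(generator(z))
    for _ in range(steps):
        cand = [v + rng.gauss(0, sigma) for v in z]
        loss = partial_loss(generator(cand))
        if loss < best:                           # hill-climb: keep improvements
            z, best = cand, loss
    return z, best
```

Because only improving proposals are accepted, the final loss is never worse than the loss of the initial random code.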
Proceedings ArticleDOI
Seesaw Loss for Long-Tailed Instance Segmentation
Jiaqi Wang,Wenwei Zhang,Yuhang Zang,Yuhang Cao,Jiangmiao Pang,Tao Gong,Kai Chen,Ziwei Liu,Chen Change Loy,Dahua Lin +9 more
TL;DR: Seesaw Loss dynamically re-balances the gradients of positive and negative samples for each category with two complementary factors: a mitigation factor and a compensation factor.
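The two factors can be sketched as per-class-pair weights on the negative-sample gradients: the mitigation factor shrinks the penalty a frequent class puts on a rarer class, while the compensation factor re-raises it when the rarer class is actually being confused. This is a minimal sketch of the weighting idea only, with assumed hyperparameters `p` and `q`, not the full loss implementation.

```python
def seesaw_weights(class_counts, p=0.8, q=2.0, probs=None):
    """Sketch of Seesaw Loss's per-pair negative-gradient weights.

    For a sample of class i, the penalty it puts on class j is scaled by:
      - mitigation factor (N_j / N_i)^p when class j is rarer than class i
        (dampens gradients suppressing tail classes), and
      - compensation factor (p_j / p_i)^q when class j's predicted
        probability exceeds class i's (restores the penalty for classes
        that are genuinely being misclassified).
    p, q and the exact factor forms here are illustrative assumptions.
    """
    n = len(class_counts)
    weights = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if class_counts[j] < class_counts[i]:
                weights[i][j] *= (class_counts[j] / class_counts[i]) ** p
            if probs is not None and probs[j] > probs[i]:
                weights[i][j] *= (probs[j] / probs[i]) ** q
    return weights
```

For example, with a head class of 100 samples and a tail class of 10, the head-to-tail penalty is scaled down by (10/100)^0.8 while the tail-to-head penalty is left untouched.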
Posted Content
Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs
TL;DR: This work presents the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN trained only on RGB images, finding that such a pre-trained GAN contains rich 3D knowledge and can therefore be used to recover 3D shape from a single 2D image in an unsupervised manner.
Proceedings ArticleDOI
Deep Animation Video Interpolation in the Wild
TL;DR: AnimeInterp interpolates in-between animation frames in a coarse-to-fine manner using segment-guided matching and recurrent flow refinement, showing favorable perceptual quality for animation scenarios in the wild.