
Chen Change Loy

Researcher at Nanyang Technological University

Publications -  111
Citations -  2881

Chen Change Loy is an academic researcher at Nanyang Technological University. He has contributed to research in the topics of computer science and feature extraction (computer vision). He has an h-index of 15 and has co-authored 111 publications receiving 782 citations. His previous affiliations include Harbin Institute of Technology and the University of Sydney.

Papers
Posted Content

Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation

TL;DR: This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images: the generator is fine-tuned on the fly in a progressive manner, regularized by a feature distance obtained from the GAN's discriminator.
Posted Content

BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond

TL;DR: A succinct pipeline is shown that achieves appealing improvements in speed and restoration quality over many state-of-the-art algorithms, and can serve as a strong baseline for future VSR approaches.
Proceedings Article (DOI)

Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation

TL;DR: In this article, a pose code is learned within a modulated-convolution-based reconstruction framework to generate pose-controllable talking faces, with the audio-visual representation implicitly modularized.
Proceedings Article (DOI)

BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond

TL;DR: In this article, the authors propose a succinct pipeline, BasicVSR, that achieves appealing improvements in speed and restoration quality over many state-of-the-art algorithms.
Proceedings Article (DOI)

Audio-Driven Emotional Video Portraits

TL;DR: In this paper, a cross-reconstructed emotion disentanglement technique is proposed to decompose speech into two decoupled spaces: a duration-independent emotion space and a duration-dependent content space.