Chang Xu

Researcher at University of Sydney

Publications -  467
Citations -  13012

Chang Xu is an academic researcher from the University of Sydney. The author has contributed to research in the topics of Computer science & Chemistry. The author has an h-index of 42 and has co-authored 260 publications receiving 7189 citations. Previous affiliations of Chang Xu include the University of Melbourne & Information Technology University.

Papers
Journal ArticleDOI

Self-Supervised Pose Adaptation for Cross-Domain Image Animation

TL;DR: A two-stage self-supervised pose adaptation framework is proposed for general image animation tasks, in which a domain-independent pose adaptation generative adversarial network (DIPA-GAN) and a shuffle-patch generative adversarial network (Shuffle-Patch GAN) penalize implausible pose and appearance in the synthesized frame, respectively.
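As a rough illustration of the two-critic setup described above (not the paper's released DIPA-GAN code; all module names and shapes here are assumptions), a generator can be penalized by one discriminator that judges the synthesized frame's appearance and another that judges its pose representation:

```python
# Minimal sketch, assuming PyTorch: two separate discriminators score appearance
# and pose, and the generator is rewarded when both believe its output is real.
import torch
import torch.nn as nn

class SimpleDiscriminator(nn.Module):
    """Tiny convolutional critic that outputs a real/fake logit per sample."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def generator_adversarial_loss(fake_frame, fake_pose_map, d_appearance, d_pose):
    """Sum of adversarial penalties from the appearance and pose critics."""
    bce = nn.BCEWithLogitsLoss()
    logits_app = d_appearance(fake_frame)
    logits_pose = d_pose(fake_pose_map)
    return (bce(logits_app, torch.ones_like(logits_app))
            + bce(logits_pose, torch.ones_like(logits_pose)))
```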
Proceedings Article

Adapting Neural Architectures Between Domains

TL;DR: The theoretical analyses lead to AdaptNAS, a novel and principled approach to adapting neural architectures between domains in NAS, and show that only a small part of ImageNet is sufficient for AdaptNAS to extend its architectures' success to the entire ImageNet and outperform state-of-the-art comparison algorithms.
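A hedged sketch of the general idea stated in this summary (the scoring rule and weighting below are assumptions for illustration, not the AdaptNAS algorithm itself): candidate architectures can be scored on small proxy sets from both domains so the search prefers architectures that transfer.

```python
# Illustrative cross-domain scoring for an architecture search loop.
def domain_aware_score(architecture, eval_on_source, eval_on_target, alpha=0.5):
    """Combine validation accuracy on a source-domain and a target-domain proxy set."""
    acc_source = eval_on_source(architecture)   # e.g. accuracy on a CIFAR-10 subset
    acc_target = eval_on_target(architecture)   # e.g. accuracy on a small ImageNet subset
    return alpha * acc_source + (1.0 - alpha) * acc_target

def select_architecture(candidates, eval_on_source, eval_on_target):
    """Pick the candidate with the best combined cross-domain score."""
    return max(
        candidates,
        key=lambda a: domain_aware_score(a, eval_on_source, eval_on_target),
    )
```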
Proceedings ArticleDOI

Learning Student Networks in the Wild

TL;DR: In this paper, a data-free approach to learning student networks is proposed to address users' privacy concerns about using the original training data, a setting in which the student network otherwise cannot achieve performance comparable to the pre-trained teacher network, especially on large-scale image datasets.
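For context, a minimal distillation step of the kind such data-free methods build on, assuming PyTorch (the surrogate images, temperature, and function names are illustrative assumptions, not the paper's method): the student matches the teacher's softened predictions on substitute data, so the original training set is never touched.

```python
# One knowledge-distillation step on surrogate (e.g. generated or in-the-wild) images.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, images, optimizer, temperature=4.0):
    """Pull the student's predictions toward the teacher's softened outputs."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```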
Posted Content

HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens

TL;DR: An hourglass-inspired approach (HourNAS) for extremely fast NAS is proposed, which identifies the vital blocks and makes them the priority in the architecture search, outperforming state-of-the-art methods.
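A rough sketch of the prioritization idea in the summary above (the importance proxy and budget split are assumptions made for the example, not HourNAS itself): estimate how much each block matters, then spend most of the search budget on the vital ones.

```python
# Rank blocks by an importance proxy and allocate search trials accordingly.
def rank_blocks_by_importance(blocks, evaluate_with_block_bypassed, baseline_accuracy):
    """Blocks whose removal hurts accuracy the most are treated as vital."""
    importance = {
        name: baseline_accuracy - evaluate_with_block_bypassed(name)
        for name in blocks
    }
    return sorted(blocks, key=lambda name: importance[name], reverse=True)

def allocate_search_budget(ranked_blocks, total_trials, vital_fraction=0.8):
    """Give most trials to the top-ranked (vital) blocks, the rest to the others."""
    vital = ranked_blocks[: max(1, len(ranked_blocks) // 3)]
    others = [b for b in ranked_blocks if b not in vital]
    per_vital = int(total_trials * vital_fraction) // len(vital)
    per_other = (total_trials - per_vital * len(vital)) // max(1, len(others))
    return {b: (per_vital if b in vital else per_other) for b in ranked_blocks}
```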
Posted Content

Towards Evolutional Compression

TL;DR: This paper presents an evolutionary method that automatically eliminates redundant convolution filters in convolutional neural networks, representing each compressed network as a binary individual with a specific fitness, which is updated at each evolutionary iteration using genetic operations.
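A small genetic-algorithm sketch of the scheme described in this summary (a minimal sketch under stated assumptions, not the paper's implementation; the fitness function is supplied by the caller, e.g. validation accuracy minus a sparsity penalty): each individual is a binary mask over convolution filters, and crossover and mutation produce the next generation.

```python
# Evolve a binary filter-keep mask toward high fitness.
import random

def random_individual(num_filters):
    return [random.randint(0, 1) for _ in range(num_filters)]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(individual, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def evolve(fitness, num_filters, population_size=20, generations=50):
    """fitness(mask) -> float; higher means a better compressed network."""
    population = [random_individual(num_filters) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(population_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)
```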