
Chunjing Xu

Researcher at Huawei

Publications: 122
Citations: 5,872

Chunjing Xu is an academic researcher at Huawei. The author has contributed to research on topics including convolutional neural networks and computer science, has an h-index of 26, and has co-authored 106 publications receiving 2,193 citations. Previous affiliations include Zhejiang University and the Chinese Academy of Sciences.

Papers
Proceedings Article

GhostNet: More Features From Cheap Operations

Abstract: Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to limited memory and computation resources. Redundancy in feature maps is an important characteristic of successful CNNs, but it has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Starting from a set of intrinsic feature maps, we apply a series of cheap linear transformations to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed Ghost module can be used as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed by stacking Ghost modules, from which the lightweight GhostNet can easily be built. Experiments on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet achieves higher recognition performance (e.g., 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
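The Ghost module lends itself to a compact sketch. The following PyTorch snippet is a minimal illustration, assuming a pointwise convolution produces the intrinsic feature maps and a depthwise convolution serves as the cheap linear transformation; the `ratio` parameter and kernel sizes are illustrative choices, not necessarily the paper's exact configuration (see the official repository linked above for that).

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Minimal Ghost module sketch: a cheap depthwise convolution expands
    a small set of intrinsic feature maps into additional "ghost" maps.
    Hyperparameters here are illustrative assumptions."""

    def __init__(self, in_channels, out_channels, ratio=2, dw_kernel=3):
        super().__init__()
        assert out_channels % ratio == 0, "sketch assumes divisibility"
        init_channels = out_channels // ratio          # intrinsic maps
        ghost_channels = out_channels - init_channels  # cheap "ghost" maps

        # Primary convolution: produces the intrinsic feature maps.
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: a depthwise convolution acting as the series of
        # inexpensive linear transformations over the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, dw_kernel,
                      padding=dw_kernel // 2, groups=init_channels,
                      bias=False),
            nn.BatchNorm2d(ghost_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        # Concatenate intrinsic and ghost maps to form the full output.
        return torch.cat([intrinsic, ghost], dim=1)

# Example: 16 -> 64 channels at 32x32 resolution.
y = GhostModule(16, 64)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

With `ratio=2`, half of the output channels come from the primary convolution and half from the cheap depthwise transformation, which is where the FLOP savings over an ordinary convolution come from.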
Posted Content

GhostNet: More Features from Cheap Operations

TL;DR: A novel Ghost module is proposed that applies a series of cheap linear transformations to a set of intrinsic feature maps, generating many ghost feature maps that fully reveal the information underlying the intrinsic features.
Posted Content

Pre-Trained Image Processing Transformer

TL;DR: To fully exploit the capability of transformers, the IPT model is presented; it uses the well-known ImageNet benchmark to generate a large number of corrupted image pairs, and contrastive learning is introduced so the model adapts well to different image processing tasks.
Proceedings Article

Pre-Trained Image Processing Transformer

TL;DR: The authors propose a pre-trained image processing transformer (IPT) model for denoising, super-resolution, and deraining tasks, trained on corrupted image pairs with multiple heads and multiple tails.
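The multi-head/multi-tail idea is straightforward to sketch: each task gets its own lightweight head and tail while a single transformer body is shared across tasks, and clean images are corrupted on the fly to form supervised pairs. The PyTorch sketch below illustrates this; the layer sizes, per-pixel tokenization, and corruption functions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IPTSketch(nn.Module):
    """Sketch of an IPT-style model: one head and one tail per task,
    all sharing a single transformer body. Sizes are illustrative."""

    def __init__(self, tasks=("denoise", "sr"), dim=64):
        super().__init__()
        self.heads = nn.ModuleDict(
            {t: nn.Conv2d(3, dim, 3, padding=1) for t in tasks})
        body_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                                batch_first=True)
        self.body = nn.TransformerEncoder(body_layer, num_layers=2)
        self.tails = nn.ModuleDict(
            {t: nn.Conv2d(dim, 3, 3, padding=1) for t in tasks})

    def forward(self, x, task):
        feat = self.heads[task](x)                # task-specific head
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C) tokens
        tokens = self.body(tokens)                # shared transformer body
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.tails[task](feat)             # task-specific tail

def corrupt(clean, task):
    """Make (corrupted, clean) training pairs from clean images;
    the corruption models are illustrative stand-ins."""
    if task == "denoise":
        return clean + 0.1 * torch.randn_like(clean)
    if task == "sr":  # downsample, then upsample back to input size
        small = F.interpolate(clean, scale_factor=0.5, mode="bicubic",
                              align_corners=False)
        return F.interpolate(small, size=clean.shape[-2:], mode="bicubic",
                             align_corners=False)
    raise ValueError(task)

# Training would sample a task per batch and regress the restored
# output against the clean image, e.g. with an L1 loss:
model = IPTSketch()
clean = torch.rand(2, 3, 32, 32)
loss = F.l1_loss(model(corrupt(clean, "denoise"), "denoise"), clean)
```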
Proceedings Article

Data-Free Learning of Student Networks

TL;DR: A novel framework is proposed for training efficient deep neural networks by exploiting generative adversarial networks (GANs): the pre-trained teacher network is regarded as a fixed discriminator, and the generator is used to derive training samples that elicit the maximum response from the discriminator.
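This data-free setup also lends itself to a compact sketch: a generator is trained against the frozen teacher, with losses that reward confident, class-balanced teacher responses, and the student is then distilled on the generated samples. The loss terms and weighting below are illustrative assumptions rather than the paper's exact objective, and `generator` is assumed to map noise vectors to images of the teacher's input shape.

```python
import torch
import torch.nn.functional as F

def generator_step(generator, teacher, opt_g, batch_size=64, z_dim=100):
    """One generator update: the frozen teacher acts as a fixed
    discriminator, and the generator seeks samples that maximize the
    teacher's response. Loss terms/weights are illustrative."""
    z = torch.randn(batch_size, z_dim)
    fake = generator(z)           # synthesized training samples
    logits = teacher(fake)        # teacher response (teacher stays fixed)
    probs = F.softmax(logits, dim=1)

    # Encourage confident predictions (low entropy per sample) ...
    sample_entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
    # ... and class balance across the batch (high entropy of the mean).
    mean_probs = probs.mean(0)
    balance = (mean_probs * mean_probs.clamp_min(1e-8).log()).sum()

    loss = sample_entropy + balance   # illustrative equal weighting
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return fake.detach()

def student_step(student, teacher, fake, opt_s, temperature=4.0):
    """Distill the frozen teacher into the student on generated samples."""
    with torch.no_grad():
        t_logits = teacher(fake)
    s_logits = student(fake)
    # Standard temperature-scaled KL distillation loss.
    loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                    F.softmax(t_logits / temperature, dim=1),
                    reduction="batchmean") * temperature ** 2
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
```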